Data & AI Predictions in 2023

How do we know that artificial intelligence (AI) and large language models (LLMs) have reached a tipping point? It was the hot topic at most families’ dinner tables during the 2022 holiday break. As we reveal our data and AI predictions for 2023, join us at 2nd Watch to stay ahead of the curve and propel your business toward innovation and success.

AI has become mainstream and accessible. Most notably, OpenAI’s ChatGPT took the internet by storm, so much so that even our parents (and grandparents!) are talking about it. Since AI is here to stay beyond the Christmas Eve dinner discussion, we put together a list of 2023 predictions we expect to see regarding AI and data.

#1. Proactively handling data privacy regulations will become a top priority.

Regulatory changes can have a significant impact on how organizations handle data privacy: businesses must adapt to new policies to ensure their data is secure. Modifications to regulatory policies require governance and compliance teams to understand data within their company and the ways in which it is being accessed. 

To stay ahead of regulatory changes, organizations will need to prioritize their data governance strategies. This will mitigate the risks surrounding data privacy and potential regulations. As a part of their data governance strategy, data privacy and compliance teams must increase their usage of privacy, security, and compliance analytics to proactively understand how data is being accessed within the company and how it’s being classified. 
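
To make this concrete, here is a minimal sketch of what privacy and compliance analytics can look like in practice, assuming access logs have been exported with user, table, and classification columns (all names here are illustrative; real sources such as a warehouse’s access-history views will differ):

```python
# Hypothetical compliance analytics over an exported access log.
import pandas as pd

log = pd.DataFrame({
    "user":           ["alice", "bob", "alice", "carol"],
    "table_name":     ["customers", "orders", "customers", "customers"],
    "classification": ["PII", "internal", "PII", "PII"],
    "accessed_at":    pd.to_datetime(
        ["2023-01-03", "2023-01-03", "2023-01-04", "2023-01-05"]),
})

# Who is accessing data classified as PII, and how often?
pii_access = (
    log[log["classification"] == "PII"]
    .groupby("user")
    .size()
    .sort_values(ascending=False)
)
print(pii_access)  # alice: 2, carol: 1
```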

#2. AI and LLMs will require organizations to consider their AI strategy.

The rise of AI and LLM technologies will require businesses to adopt a broad AI strategy. AI and LLMs will open opportunities in automation, efficiency, and knowledge distillation. But, as the saying goes, “With great power comes great responsibility.” 

There is disruption and risk that comes with implementing AI and LLMs, and organizations must respond with a people- and process-oriented AI strategy. As more AI tools and start-ups crop up, companies should consider how to thoughtfully approach the disruptions that will be felt in almost every industry. Rather than being reactive to new and foreign territory, businesses should aim to educate, create guidelines, and identify ways to leverage the technology. 

Moreover, without a well-thought-out AI roadmap, enterprises will find themselves technologically plateauing, with teams unable to adapt to the new landscape and investments failing to show a return: they won’t be able to scale or support the initiatives they put in place. Poor road mapping will lead to siloed and fragmented projects that don’t contribute to a cohesive AI ecosystem.

#3. AI technologies, like Document AI (or information extraction), will be crucial to tap into unstructured data.

According to IDC, 80% of the world’s data will be unstructured by 2025, and 90% of this unstructured data is never analyzed. Integrating unstructured and structured data opens up new use cases for organizational insights and knowledge mining.

Massive amounts of unstructured data – such as Word and PDF documents – have historically been a largely untapped data source for data warehouses and downstream analytics. New deep learning technologies, like Document AI, have addressed this issue and are more widely accessible. Document AI can extract previously unused data from PDF and Word documents, ranging from insurance policies to legal contracts to clinical research to financial statements. Additionally, vision and audio AI unlocks real-time video transcription insights and search, image classification, and call center insights.
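
As a rough illustration of the pattern (not any particular vendor’s Document AI product), the sketch below pulls raw text out of a PDF and runs off-the-shelf named-entity recognition over it; it assumes the pypdf and Hugging Face transformers packages, and the input filename is hypothetical:

```python
# Extract text from a PDF, then run NER to pull out structured entities.
from pypdf import PdfReader
from transformers import pipeline

reader = PdfReader("insurance_policy.pdf")  # hypothetical input file
text = "\n".join(page.extract_text() or "" for page in reader.pages)

ner = pipeline("ner", aggregation_strategy="simple")
for entity in ner(text[:2000]):  # truncated: NER models have input limits
    print(entity["entity_group"], entity["word"], round(entity["score"], 2))
```

Production document AI goes further with layout-aware models, but the flow is the same: unstructured document in, structured records out, ready to land in a warehouse.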

Organizations can unlock brand-new use cases by integrating this extracted data with their existing data warehouses. Fine-tuning these models on domain-specific data adapts general-purpose models to a wide variety of specialized use cases.

#4. “Data is the new oil.” Data will become the fuel for turning general-purpose AI models into domain-specific, task-specific engines for automation, information extraction, and information generation.

Snorkel AI coined the term “data-centric AI,” which is an accurate paradigm to describe our current AI lifecycle. The last time AI received this much hype, the focus was on building new models. Now, very few businesses need to develop novel models and algorithms. What will set their AI technologies apart is the data strategy.

Data-centric AI enables us to leverage existing models and calibrate them to an organization’s data. Applying an enterprise’s data to this new paradigm will accelerate its time to market, especially for companies that have modernized their data and analytics platforms and data warehouses.
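
The sketch below shows the data-centric pattern in miniature, assuming the transformers and datasets packages; the base model and the two-example “domain dataset” are purely illustrative:

```python
# Calibrate a general-purpose pretrained model to domain data.
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification,
                          AutoTokenizer, Trainer, TrainingArguments)

domain_data = Dataset.from_dict({
    "text":  ["Claim approved per policy terms.", "Payment is past due."],
    "label": [0, 1],  # e.g., 0 = claims, 1 = collections
})

model_name = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(
    model_name, num_labels=2)

tokenized = domain_data.map(
    lambda batch: tokenizer(batch["text"], truncation=True,
                            padding="max_length", max_length=64),
    batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="domain-model", num_train_epochs=1),
    train_dataset=tokenized,
)
trainer.train()  # the checkpoint is now calibrated to the domain data
```

Notice that no novel model or algorithm is involved; the differentiation lives entirely in the training data.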

#5. The popularity of data-driven apps will increase.

Snowflake recently acquired Streamlit, which makes application development more accessible to data engineers. Additionally, Snowflake introduced Unistore and hybrid tables (OLTP) to allow data science and app teams to work jointly off of a single source of truth in Snowflake, eliminating silos and data replication.

Snowflake’s big moves demonstrate that companies are looking to fill gaps that traditional business intelligence (BI) tools leave behind. With tools like Streamlit, teams can automate data sharing and deployment work that is traditionally manual and Excel-driven. Most importantly, Streamlit can become the conduit that allows business users to work directly with AI-native and data-driven applications across the enterprise.
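
For a flavor of how little code a data-driven app requires, here is a minimal Streamlit sketch; the inline DataFrame stands in for a warehouse query (a live deployment would read from Snowflake instead), and all names are illustrative:

```python
# app.py - run with: streamlit run app.py
import pandas as pd
import streamlit as st

st.title("Revenue by Region")

# Stand-in for a warehouse query; swap in your own data source.
data = pd.DataFrame({
    "region":  ["East", "West", "East", "West"],
    "month":   ["Jan", "Jan", "Feb", "Feb"],
    "revenue": [120, 95, 140, 110],
})

region = st.selectbox("Region", sorted(data["region"].unique()))
filtered = data[data["region"] == region]

st.dataframe(filtered)
st.bar_chart(filtered, x="month", y="revenue")
```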

#6. AI-native and cloud-native applications will win.

Customers will start expecting AI capabilities to be embedded into cloud-native applications. Harnessing domain-specific data, companies should prioritize building on modular, data-driven application building blocks with AI and machine learning. AI-native applications will win over AI-retrofitted applications.

When applications are custom-built for AI, analytics, and data, they are more accessible to data and AI teams, and they let business users interact with models and data warehouses in a new way. Teams can classify and label data in a centralized, data-driven way, rather than manually and repetitively in Excel, and feed the results into a human-in-the-loop system for review, improving the overall accuracy and quality of models. Traditional BI tools like dashboards, on the other hand, often limit business users to consuming and viewing data in a “what happened?” manner rather than in a more interactive, more targeted one.
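
The human-in-the-loop pattern mentioned above can be as simple as a confidence threshold: predictions the model is sure about are accepted automatically, and everything else lands in a review queue. A minimal sketch (all names illustrative, not a specific product’s API):

```python
# Route low-confidence model predictions to a human review queue.
from dataclasses import dataclass, field

@dataclass
class ReviewQueue:
    items: list = field(default_factory=list)

    def submit(self, record, label, confidence):
        self.items.append({"record": record, "suggested": label,
                           "confidence": confidence})

def route(record, label, confidence, queue, threshold=0.85):
    """Auto-accept confident predictions; queue the rest for humans."""
    if confidence >= threshold:
        return label                      # accepted automatically
    queue.submit(record, label, confidence)
    return None                           # pending human review

queue = ReviewQueue()
print(route("invoice-001", "approved", 0.97, queue))  # approved
print(route("invoice-002", "approved", 0.60, queue))  # None -> queued
print(len(queue.items))                               # 1
```

Human decisions on queued items then flow back as labeled training data, which is exactly the feedback loop that dashboards cannot provide.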

#7. There will be technology disruption and market consolidation.

The AI race has begun. Microsoft’s strategic partnership with OpenAI and integration of it into “everything,” Google’s introduction of Bard and investment in foundation model startup Anthropic, AWS’s own native models and partnership with Stability AI, and a wave of new AI-related startups are just a few of the major signals that the market is changing. The emerging AI technologies are driving market consolidation: smaller companies are being acquired by incumbents looking to take advantage of the developing technologies.

Mergers and acquisitions are key growth drivers, with larger enterprises leveraging their existing resources to acquire smaller, nimbler players to expand their reach in the market. This emphasizes the importance of data, AI, and application strategy. Organizations must stay agile and quickly consolidate data across new portfolios of companies. 

Conclusion

The AI ball is rolling. At this point, you’ve probably dabbled with AI or engaged in high-level conversations about its implications. The next step in the AI adoption process is to actually integrate AI into your work and understand the changes (and challenges) it will bring. We hope that our data and AI predictions for 2023 prime you for the ways it can have an impact on your processes and people.

Think you’re ready to get started? Find out with 2nd Watch’s data science readiness assessment.


FinOps Driven Modernization: An Approach for Large Enterprises

Congratulations! You made it to the cloud.

You made a decision and a plan. You selected a migration partner. And you exited your traditional datacenter successfully by migrating thousands of virtual machines to the public cloud. You breathed a huge sigh of relief because it wasn’t easy, but you and your team pulled through.

While you and your teams were preparing to return to focusing on business value-driven tasks and features, the newly minted cloud estate was ticking away like a taxi meter 24 hours a day, seven days a week. The first invoice came, and it seemed a little higher than your forecast. The second monthly invoice was even higher than the first! Your business units (BUs) are now all-in on the cloud, just like you asked, and deploying resources and new environments at will like kids in a candy store. The invoices keep coming, and eventually, Finance takes notice. “What happened here? I thought moving to the cloud would reduce costs?”  If this sounds familiar, you’re not alone.

As with many new technologies and strategies, moving to the public cloud comes with risks and rewards. The cloud value proposition is multi-faceted and, according to AWS, includes:

  • Total Cost of Ownership (TCO) Reduction
  • Staff Productivity
  • Operational Resilience
  • Business Agility

For many enterprises, the last three pillars of productivity, resilience, and agility have gotten overshadowed by the promise of a lower TCO. It’s not hard to understand why. Measuring cloud usage costs is easy. The cloud service provider (CSP) does this for you every month. The idea that migrating to the cloud is a cost-driven exercise excludes three-fourths of the potential business value – especially when migrating with a lift-and-shift approach. 

The Lift-and-Shift Approach

When you consider workloads like black boxes, you start your journey without complete visibility into the public cloud’s opportunities. Maybe you had an expiring datacenter contract and had to evacuate under time pressure. That’s understandable. But were you educated and prepared for the tradeoffs of that approach? Or were you shocked by the first invoice and the speed at which the invoices are growing? Did you prepare the CFO in advance and share the next steps? So, what did you miss?  

When you took a black-box lift-and-shift (BBLAS) approach, the focus was on moving virtual machines in groups based on dependency mapping. Your teams or your cloud partner, usually with the help of automation, defined the groups and then worked with you to plan the movement of those groups – typically referred to as “wave planning.” What you ended up with is a mirror image of your datacenter in the public cloud. 

You have now migrated to someone else’s datacenter.

The old datacenter was predictable: fixed hardware investments dictated capacity, and efficiency efforts only happened when available resources started to dwindle as new and existing services and applications were deployed or scaled. This new datacenter charges by the millisecond, has unlimited capacity, and puts investment in additional capacity in the hands of engineers, bypassing procurement entirely. Controlling costs in this new datacenter is a whole new world for most enterprises. Enter the FinOps movement.

“FinOps is an evolving cloud financial management discipline and cultural practice that enables organizations to get maximum business value by helping engineering, finance, technology and business teams to collaborate on data-driven spending decisions.” – finops.org

Related: How to Choose the Best Cloud Service Provider for Your Application Modernization Strategy

What is FinOps?

FinOps is to finance and engineering as DevOps is to development and operations. The FinOps philosophy and approach are how you regain cost control in a BBLAS environment. Before diving into how FinOps can help, let’s look at the Cloud Cost Optimization Cycle (CCOC). The CCOC is a precursor to the FinOps framework and another black-box approach to cost efficiency in the cloud.

A black-box approach is when virtual machines are viewed as a fixed infrastructure without regard to the applications and services running on them. Seasoned professionals have lived through this traditional IT view for years, and it is what separates operations and development concerns. DevOps philosophy is making inroads, of course, but many enterprises have only begun to introduce this philosophy at scale.

The Cloud Cost Optimization Cycle goes like this: every month, your CSP makes cost and usage data available. An in-house resource or a consultant analyzes that mass of data and prepares recommendations for potential savings, then presents them to the operations team, which reconfigures the deployed infrastructure to realize the savings.
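
The analysis step often starts as simply as scanning usage data for oversized instances. Here is a toy sketch using pandas, with column names mimicking a generic cost-and-usage export (the 10% threshold and 50% savings estimate are illustrative):

```python
# Flag low-utilization virtual machines as right-sizing candidates.
import pandas as pd

usage = pd.DataFrame({
    "instance_id":   ["i-01", "i-02", "i-03"],
    "instance_type": ["m5.4xlarge", "m5.xlarge", "r5.2xlarge"],
    "avg_cpu_pct":   [6.0, 55.0, 9.5],
    "monthly_cost":  [1120.0, 280.0, 730.0],
})

# Under 10% average CPU? Estimate savings from dropping one size tier.
candidates = usage[usage["avg_cpu_pct"] < 10].assign(
    est_monthly_savings=lambda df: df["monthly_cost"] * 0.5)
print(candidates[["instance_id", "instance_type", "est_monthly_savings"]])
```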

This cycle can produce significant savings at scale and is the traditional starting point for gaining visibility and control over runaway cloud costs. The process follows the FinOps-recommended progression of crawl, walk, run toward a mature practice. This approach has both benefits and limitations:

Benefits:

  • Infrastructure-focused cost savings
  • Brings financial accountability and cloud spend awareness to the enterprise
  • Sets a trajectory towards FinOps best practices (crawl, walk, run)
  • Accomplished primarily by the Operations team

Limitations:

  • Cost savings are limited to infrastructure and cloud configuration changes
  • Application architecture remains unchanged
  • Can create friction between Operations and AppDev teams
  • App refactoring focused on patching instead of business value or modernization

The Black-Box Dilemma

In an ideal state, operations teams can iteratively reconfigure public cloud infrastructure based on cost and usage data until the fleet of virtual machines and associated storage are fully optimized. In this ideal state, interactions with the application teams are minimal and driven by the operations team’s needs. The approach ignores the side effects that right-sizing infrastructure can have on somewhat brittle monolithic legacy applications. What usually happens in a BBLAS environment is that the lift-and-shift migration and the subsequent CCOC reveal unforeseen shortcomings in the application architecture, and runtime defects surface.

CCOC – Mixed Results

A lack of necessary cloud skills and experience on the operations side can exacerbate the issues. For example, if the operations team chooses the incorrect cloud instance type for the workload, applications can become bound by resource constraints. When cloud skills and experience are missing from the application development side, defects become difficult for the team to triage and patch, causing long delays. So now, instead of cost optimization efforts straightforwardly producing savings, they produce a mixture of saving money and addressing issues.

This combination creates an environment where engineering and operations teams begin to collide. The applications were stable in the old datacenter due to factors like:

  • Extremely low network latency between services
  • Applications and databases tuned for the hardware they were running on
  • Debugging and quality processes tuned over the years for efficiency

Now application teams have a new stream of issues entering their backlogs driven by fundamental changes in infrastructure introduced by the noble pursuit of tuning for cost savings. Business value, architectural improvements, and elimination of technical debt are slowed to the point that Application Development leaders start to push back on the CCOC. Operations teams don’t understand why the application is falling apart because the metrics and the cloud cost data they collect support the reduction or reconfiguration of cloud resources. Additional factors are now in play from an application development perspective with a black-box cloud cost optimization strategy:

  • Users are constantly communicating new feature requests to the business
  • Enterprise and Application Architects are pushing teams for modernization
  • Software Team leads are insisting on dedicating capacity to technical debt reduction

Enterprises are struggling to retain developers and are more resource-constrained than ever; when flaws in legacy applications need patching, time to market for features and architectural improvements slows across the board.

Going Beyond the Lift-and-Shift

You need a different strategy to overcome these challenges. You must look inside the black boxes to move forward. The CCOC, at its best, will produce a finely tuned version of a legacy application running in the public cloud. You can address the cost pillar of the cloud value framework from an operations perspective, but additional opportunities abound in the form of Application Modernization.

Enterprises in situations like the one described here need to do two things to move forward on their cloud journeys:

  1. Mature cloud cost optimization towards FinOps
  2. Invest in Application Modernization

These two strategies are complementary and, combined, are what 2nd Watch has dubbed “FinOps Driven Modernization.”

The amount of cost and usage data available to enterprises operating in the cloud reveals an opportunity to use that data to drive application modernization strategy at scale across all business units. The biggest challenges in approaching application modernization at scale are:

  • Resource constraints
  • Cloud skills and experience
  • Analysis paralysis – where do we start?
  • Calculating Return on Investment

Without sufficient resource capacity and the necessary cloud architecture and operations skills, modernization efforts will be slow and costly, and they won’t generate the success stories whose socialization produces further buy-in. Getting started seems impossible when an enterprise consists of multiple business units and thousands of virtual machines spread across hundreds of accounts and development teams. And modernization costs rise dramatically when cloud cost optimization requires changes to the software running on the virtual machines, which carries significantly higher risk than changing instance families and reconfiguring storage tiers.

How Can FinOps Help Drive Modernization?

Let’s look at how maturing FinOps drives modernization opportunities and capacity. We discussed how an infrastructure-focused CCOC could slow down features, business value, and modernization efforts. 

A potentially significant percentage of the savings realized from this approach will be diverted to triaging and patching application issues, which raises two questions:

  • Do the additional development efforts overshadow the infrastructure savings?
  • Is the time to market for new features slowed to the point that the enterprise’s competitive advantage suffers?  

Most enterprises don’t have the processes in place to answer these questions. FinOps Driven Modernization is the answer. With the data from the CSP, the FinOps team can work with the operations and development teams to determine if an optimization recommendation is feasible and valuable to the business.

How does this work at scale among all business units? When you combine cost and usage data with information like:

  • Process information from inside each virtual machine
  • SLA metrics
  • Service ticket and bug metrics
  • Nature of the cloud service
    • IaaS, PaaS, FaaS
    • More on this in the next installment of this series
  • Revenue – unit economics

you begin to see a more holistic view of the cloud estate and can derive insights that combine cost with much deeper business intelligence, as the sketch below illustrates.
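
As a sketch of that holistic view, the snippet below joins cost data with ticket and SLA signals and ranks applications by a combined priority score; the weights and columns are illustrative, with real inputs coming from the CSP, ticketing, and monitoring systems:

```python
# Rank applications for modernization attention across signals.
import pandas as pd

cost    = pd.DataFrame({"app": ["billing", "search", "reports"],
                        "monthly_cost": [42000, 18000, 9000]})
tickets = pd.DataFrame({"app": ["billing", "search", "reports"],
                        "bugs_per_month": [31, 4, 12]})
sla     = pd.DataFrame({"app": ["billing", "search", "reports"],
                        "sla_breaches": [5, 0, 2]})

view = cost.merge(tickets, on="app").merge(sla, on="app")

# Normalize each signal to 0-1, then blend into one priority score.
for col in ["monthly_cost", "bugs_per_month", "sla_breaches"]:
    view[col + "_n"] = view[col] / view[col].max()
view["priority"] = (0.5 * view["monthly_cost_n"]
                    + 0.3 * view["bugs_per_month_n"]
                    + 0.2 * view["sla_breaches_n"])
print(view.sort_values("priority", ascending=False)[["app", "priority"]])
```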

Sample FinOps Output from 2nd Watch

Consider being able to visualize where to focus cost optimization and modernization efforts across multiple business units, thousands of virtual machines, and hundreds of applications in a single-pane-of-glass dashboard. The least innovative, noisiest, and most costly areas of your enterprise will begin glowing like hot coals. You can then focus resources, time, and money on high-impact optimization and modernization investments. This reallocation of spending is the power of FinOps Driven Modernization. Finance, operations, engineering, product, and executives all work together to ensure that the enterprise realizes the actual value of the cloud.

Related: How FinOps Can Optimize Cloud Costs and Drive Innovation

A Business Case for FinOps

Let’s dig into a hypothetical business unit struggling with cloud costs. The FinOps team has identified that the BU’s per-unit costs exceed the recommended range for its cloud cost-to-revenue ratio. FinOps Driven Modernization has revealed that the BBLAS approach resulted in a fleet of virtual machines running commonly modernized workloads: web servers, database servers, file or image servers, etc. In addition to this IaaS-heavy approach, the BU heavily leverages licensed software and operating systems. This revelation triggers a series of interviews with the BU leadership and application owners to investigate the potential and level of effort of introducing application modernization approaches. The teams within the BU know there is room for improvement but lack the skills and available resources to act.
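
As a toy illustration of the ratio check that flagged this BU (the figures and the target are invented for the example):

```python
# Compare a BU's cloud cost-to-revenue ratio against a target.
monthly_cloud_spend = 250_000     # USD
monthly_revenue     = 2_000_000   # USD

cost_to_revenue = monthly_cloud_spend / monthly_revenue  # 0.125
target_ratio = 0.08                                      # hypothetical target

print(f"cost-to-revenue: {cost_to_revenue:.1%} (target {target_ratio:.0%})")
if cost_to_revenue > target_ratio:
    print("BU exceeds target - flag for modernization review")
```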

Through the interview process, they learn that they can move licensed databases from virtual machines to a managed cloud platform and could migrate most of the databases to open-source alternatives. Further, they can decommission the cluster of file servers and migrate the data to cloud-native storage with minimal application refactoring. With the CSP and operational data in hand, a business case for investing in helping the BU make these improvements writes itself.

Without leveraging the FinOps philosophy and extending it with a focus on application modernization, this business unit would have operated for years in a BBLAS state, costing the enterprise orders of magnitude more in cloud spend than the investment in modernization. Extending this approach across the enterprise takes cloud cost management to the next level, resulting in purpose-driven, high-impact progress towards realizing the value of the public cloud.

FinOps is a practice every enterprise should adopt to drive financial awareness throughout the organization. When leveraged as a driver for application modernization, FinOps enables an inclusive, virtuous cycle of continuous improvement.

Schedule a whiteboard session with our FinOps and Application Modernization experts to discover how 2nd Watch’s approach can help you and your team meet your transformation objectives.

Jesse Samm, Application Modernization Practice Director at 2nd Watch

Gartner Critical Capabilities for Public Cloud IaaS

Learn from Gartner what the Critical Capabilities for Public Cloud Infrastructure as a Service were for 2014. Gartner evaluates 15 public cloud IaaS service providers, listed in the 2014 Magic Quadrant, against eight critical capabilities across four common use cases your enterprise manages today.

Gartner takes an in-depth look at the critical capabilities for:

  • Application Development – for the needs of large teams of developers building new applications
  • Batch Computing – including high-performance computing (HPC), data analytics and other one-time (but potentially recurring), short-term, large-scale, scale-out workloads
  • Cloud Native Applications – for applications at any scale, which have been written with the strengths and weaknesses of public cloud IaaS in mind
  • General Business Applications – for applications not designed with the cloud in mind, but that can run comfortably in virtualized environments

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.