As we reveal our data and AI predictions for 2023, join us at 2nd Watch to stay ahead of the curve and propel your business towards innovation and success. How do we know that artificial intelligence (AI) and large language models (LLMs) have reached a tipping point? It was the hot topic at most families’ dinner tables during the 2022 holiday break.
AI has become mainstream and accessible. Most notably, OpenAI’s ChatGPT took the internet by storm, so much so that even our parents (and grandparents!) are talking about it. Since AI is here to stay beyond the Christmas Eve dinner discussion, we put together a list of 2023 predictions we expect to see regarding AI and data.
#1. Proactively handling data privacy regulations will become a top priority.
Regulatory changes can have a significant impact on how organizations handle data privacy: businesses must adapt to new policies to ensure their data is secure. Modifications to regulatory policies require governance and compliance teams to understand data within their company and the ways in which it is being accessed.
To stay ahead of regulatory changes, organizations will need to prioritize their data governance strategies. This will mitigate the risks surrounding data privacy and potential regulations. As a part of their data governance strategy, data privacy and compliance teams must increase their usage of privacy, security, and compliance analytics to proactively understand how data is being accessed within the company and how it’s being classified.
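As a concrete illustration of the kind of access analytics described above, here is a minimal sketch that summarizes which users touched data classified as restricted. The log schema (`user`, `classification` keys) is a hypothetical example, not any specific tool's format:

```python
from collections import Counter

def summarize_sensitive_access(access_log):
    """Count accesses to records classified as 'restricted', grouped by user.

    access_log: iterable of dicts with 'user' and 'classification' keys
    (an illustrative schema for this sketch).
    """
    counts = Counter(
        event["user"]
        for event in access_log
        if event["classification"] == "restricted"
    )
    return dict(counts)

log = [
    {"user": "alice", "classification": "restricted"},
    {"user": "bob", "classification": "public"},
    {"user": "alice", "classification": "restricted"},
]
print(summarize_sensitive_access(log))  # {'alice': 2}
```

A real compliance pipeline would feed this kind of aggregation from warehouse audit logs rather than an in-memory list, but the shape of the question, who accessed what, and at which sensitivity level, is the same.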
#2. AI and LLMs will require organizations to consider their AI strategy.
The rise of AI and LLM technologies will require businesses to adopt a broad AI strategy. AI and LLMs will open opportunities in automation, efficiency, and knowledge distillation. But, as the saying goes, “With great power comes great responsibility.”
There is disruption and risk that comes with implementing AI and LLMs, and organizations must respond with a people- and process-oriented AI strategy. As more AI tools and start-ups crop up, companies should consider how to thoughtfully approach the disruptions that will be felt in almost every industry. Rather than being reactive to new and foreign territory, businesses should aim to educate, create guidelines, and identify ways to leverage the technology.
Moreover, without a well-thought-out AI roadmap, enterprises will find themselves technologically plateauing, with teams unable to adapt to the new landscape and no return on investment: they won't be able to scale or support the initiatives they put in place. Poor road mapping will lead to siloed and fragmented projects that don't contribute to a cohesive AI ecosystem.
#3. AI technologies, like Document AI (or information extraction), will be crucial to tap into unstructured data.
Massive amounts of unstructured data – such as Word and PDF documents – have historically been a largely untapped data source for data warehouses and downstream analytics. New deep learning technologies, like Document AI, have addressed this issue and are more widely accessible. Document AI can extract previously unused data from PDF and Word documents, ranging from insurance policies to legal contracts to clinical research to financial statements. Additionally, vision and audio AI unlocks real-time video transcription insights and search, image classification, and call center insights.
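To make the idea of information extraction concrete, here is a toy sketch that pulls structured fields out of raw document text. A production Document AI model would use learned extraction rather than hand-written patterns; the field names and patterns below are purely illustrative:

```python
import re

def extract_fields(document_text):
    """Toy information-extraction pass over raw document text.

    A real Document AI model replaces these regex patterns with learned
    extraction; the field names here are hypothetical examples.
    """
    patterns = {
        "policy_number": r"Policy Number:\s*(\S+)",
        "effective_date": r"Effective Date:\s*([\d/-]+)",
    }
    fields = {}
    for name, pattern in patterns.items():
        match = re.search(pattern, document_text)
        if match:
            fields[name] = match.group(1)
    return fields

text = "Policy Number: ABC-123\nEffective Date: 2023-01-01"
print(extract_fields(text))
```

The output of a step like this, one structured row per document, is what lands in the data warehouse for downstream analytics.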
Organizations can unlock brand-new use cases by integrating these extracted insights with existing data warehouses. Fine-tuning general-purpose models on domain-specific data adapts them to a wide variety of enterprise use cases.
#4. “Data is the new oil.” Data will become the fuel for turning general-purpose AI models into domain-specific, task-specific engines for automation, information extraction, and information generation.
Snorkel AI coined the term “data-centric AI,” which is an accurate paradigm to describe our current AI lifecycle. The last time AI received this much hype, the focus was on building new models. Now, very few businesses need to develop novel models and algorithms. What will set their AI technologies apart is the data strategy.
Data-centric AI enables us to leverage existing models and calibrate them to an organization's data. Applying an enterprise's data to this new paradigm will accelerate a company's time to market, especially for those who have modernized their data and analytics platforms and data warehouses.
#5. The popularity of data-driven apps will increase.
Snowflake recently acquired Streamlit, which makes application development more accessible to data engineers. Additionally, Snowflake introduced Unistore and hybrid tables (OLTP) to allow data science and app teams to work jointly off a single source of truth in Snowflake, eliminating silos and data replication.
Snowflake's big moves demonstrate that companies are looking to fill gaps that traditional business intelligence (BI) tools leave behind. With tools like Streamlit, teams can automate data sharing and deployment processes that are traditionally manual and Excel-driven. Most importantly, Streamlit can become the conduit that allows business users to work directly with AI-native and data-driven applications across the enterprise.
#6. AI-native and cloud-native applications will win.
Customers will start expecting AI capabilities to be embedded into cloud-native applications. Harnessing domain-specific data, companies should prioritize building upon modular, data-driven application blocks with AI and machine learning. AI-native applications will win over AI-retrofitted applications.
When applications are custom-built for AI, analytics, and data, they are more accessible to data and AI teams, enabling business users to interact with models and data warehouses in a new way. Teams can classify and label data in a centralized, data-driven way, rather than manually and repeatedly in Excel, and feed the results into a human-in-the-loop system for review, improving the overall accuracy and quality of models. Traditional BI tools like dashboards, on the other hand, often limit business users to viewing data in a "what happened?" manner, rather than interacting with it in a more targeted way.
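The human-in-the-loop pattern mentioned above can be sketched simply: auto-accept confident model predictions and queue the rest for human review. The tuple schema and threshold here are illustrative assumptions, not any particular product's API:

```python
def route_for_review(predictions, threshold=0.8):
    """Split model predictions into auto-accepted labels and a human review queue.

    predictions: list of (record_id, label, confidence) tuples -- an
    illustrative schema for this sketch.
    """
    accepted, review_queue = [], []
    for record_id, label, confidence in predictions:
        if confidence >= threshold:
            accepted.append((record_id, label))
        else:
            # Low-confidence predictions go to a human reviewer; their
            # corrections can later be used to retrain the model.
            review_queue.append((record_id, label, confidence))
    return accepted, review_queue

preds = [("r1", "invoice", 0.95), ("r2", "contract", 0.55)]
accepted, queue = route_for_review(preds)
```

Reviewed corrections flowing back into training data is what closes the loop and steadily improves model quality.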
#7. There will be technology disruption and market consolidation.
The AI race has begun. Microsoft's strategic partnership with OpenAI and integration into "everything," Google's introduction of Bard and funding of foundation model startup Anthropic, AWS with its own native models and partnership with Stability AI, and new AI-related startups are just a few of the major signals that the market is changing. The emerging AI technologies are driving market consolidation: smaller companies are being acquired by incumbents looking to take advantage of the developing technologies.
Mergers and acquisitions are key growth drivers, with larger enterprises leveraging their existing resources to acquire smaller, nimbler players to expand their reach in the market. This emphasizes the importance of data, AI, and application strategy. Organizations must stay agile and quickly consolidate data across new portfolios of companies.
Conclusion
The AI ball is rolling. At this point, you’ve probably dabbled with AI or engaged in high-level conversations about its implications. The next step in the AI adoption process is to actually integrate AI into your work and understand the changes (and challenges) it will bring. We hope that our data and AI predictions for 2023 prime you for the ways it can have an impact on your processes and people.
As a Practice Director of Managed Cloud Services, my team and I see well-intentioned organizations fall victim to this very common scenario… Despite the business migrating from its data center to Amazon Web Services (AWS), its system operations team doesn’t make adjustments for the new environment. The team attempts to continue performing the same activities they did when their physical hardware resided in a data center or at another hosting provider.
The truth is that modernizing your monolithic applications and infrastructure requires new skill sets, knowledge, expertise, and understanding to get the desired results. Unless you're a sophisticated, well-funded start-up, you likely won't know where to begin after the migration is complete. The transition from deploying legacy software in your own data center to utilizing Elastic Kubernetes Service (EKS) and microservices, while deploying code through an automated continuous integration and continuous delivery (CI/CD) pipeline, is a whole new ballgame. Not to mention keeping it all functioning after it is deployed.
In this article, I’m providing some insight on how to overcome the stagnation that hits post-migration. With forethought, AWS understanding, and a reality check on your internal capabilities, organizations can thrive with cloud-native services. At the same time, kicking issues downstream, maintaining inefficiencies, and failing to address new system requirements will compromise the ROI and assumed payoffs of modernization.
Is Your Team Prepared?
Sure, going serverless with Lambda might be all the buzz right now, but it’s not something you can effectively accomplish overnight. Running workloads on cloud-native services and platforms requires a different way of operating. New operational demands require that your internal teams are equipped with these new skill sets. Unfortunately, a team that may have mastered the old data center or dedicated hosting provider environment, may not be able to jump in on AWS.
The appeal of AWS is the great flexibility to drive your business and solve unique challenges. However, the ability to provision and decommission on demand also introduces new complexities. If these challenges are not addressed early on, friction between teams will damage collaboration and adoption, the potential for system sprawl will increase, and cost overruns can compromise the legitimacy and longevity of modernization.
Due to the high cost and small talent pool of technically proficient cloud professionals, many organizations struggle to nab the attention of these highly desired employees. Luckily, modern cloud-managed service providers can help you wade through the multitude of services AWS introduces. With a trusted and experienced partner by your side, businesses are able to gain the knowledge necessary to drive business efficiencies and solve unique challenges. Depending on the level of interaction, existing team members may be able to level up to better manage AWS growth going forward. In the meantime, involving a third-party cloud expert is a quick and efficient way to make sure post-migration change management evolves with your goals, design, timeline, and promised outcomes.
Are You Implementing DevOps?
Modern cloud operations and optimizations address the day two necessities that go into the long-term management of AWS. DevOps principles and automation need to be heavily incorporated into how the AWS environment operates. With hundreds of thousands of distinct prices and technical combinations, even the most experienced IT organizations can get overwhelmed.
Consider traditional operations management versus cloud-based DevOps. The former is a physical hardware deployment that requires logging into each system to perform configurations and then deploying software on top. It's slow, tedious, and causes a lag for developers as they wait for feature delivery, which negatively impacts productivity. Instead of system administrators performing monthly security patching and logging into each instance separately, a modern cloud operation can use a pipeline with infrastructure as code: update your configuration files to reference a new image, then use infrastructure automation to redeploy. This treats each server as an ephemeral instance, minimizing friction and delay for developer teams.
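The "update the config, redeploy from a fresh image" step can be sketched as a pure function over an instance definition. The config keys and AMI IDs below are placeholders; in a real pipeline this change would land in version-controlled infrastructure code (Terraform, CloudFormation, etc.) and automation would roll out new instances from the new image:

```python
def bump_image(config, new_image_id):
    """Return a copy of an instance config pointing at a new machine image.

    Rather than patching a running server in place, the pipeline redeploys
    fresh instances from the updated image -- treating servers as ephemeral.
    """
    updated = dict(config)  # leave the original definition untouched
    updated["image_id"] = new_image_id
    return updated

old = {"image_id": "ami-0ld", "instance_type": "t3.micro"}
new = bump_image(old, "ami-n3w")
```

The important property is that the old definition is never mutated: every change is a new, reviewable artifact that automation can apply.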
This is just one example of how DevOps can and should be used to achieve strong availability, agility, and profitability. Measuring DevOps with the CALMS model provides a guideline for addressing the five fundamental elements of DevOps: Culture, Automation, Lean, Measurement, and Sharing. Learn more about DevOps in our eBook, 7 Major Roadblocks in DevOps Adoption and How to Address Them.
Do You Continue With The Same Behavior?
Monitoring CPU, memory, and disk at the traditional thresholds used on legacy hardware is not necessarily appropriate when utilizing AWS EC2. To achieve the financial and performance benefits of the cloud, you purposely design systems and applications to use, and pay for, only the resources required. Cloud-native technologies such as Kubernetes and serverless require that you monitor in different ways, reducing the abundance of unactionable alerts that eventually become noise.
For example, when running a Kubernetes cluster, you should implement monitoring that alerts on desired pods. If there’s a big difference between the number of desired pods and currently running pods, this might point to resource problems where your nodes lack the capacity to launch new pods. With a modern managed cloud service provider, cloud operations engineers receive the alert and begin investigating the cause to ensure uptime and continuity for application users. With fewer unnecessary alerts and an escalation protocol for the appropriate parties, triage of the issue can be done more quickly. In many cases remediation efforts can be automated, allowing for more efficient resource allocation.
How Are You Cutting Costs?
Many organizations initiate cloud migration and modernization to gain cost-efficiency. Of course, these financial benefits are only accessible when modern cloud operations are fully in place.
Anyone can create an AWS account, but not everyone has visibility into or concern for budgetary costs, so costs can quickly exceed expectations. This is where establishing a strong governance model and expanding automation can help you achieve your cost-cutting goals. You can limit instance size deployment using IAM policies to ensure larger, more expensive instances are not unnecessarily utilized. Another cost that can quickly grow without the proper controls is S3 storage: enabling lifecycle policies that expire and automatically delete objects can help curb an explosion in storage costs. Enacting policies like these requires that your organization take the time to think through its governance approach and implement it.
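As one example of the S3 lifecycle controls mentioned above, here is a sketch that builds an expiration rule in the dict shape boto3's `put_bucket_lifecycle_configuration` expects. The bucket name, prefix, and retention period are placeholder assumptions:

```python
def expiry_rule(prefix, days):
    """Build an S3 lifecycle rule that expires objects under a prefix.

    The dict matches the structure accepted by boto3's
    put_bucket_lifecycle_configuration; values here are illustrative.
    """
    return {
        "ID": f"expire-{prefix.strip('/') or 'all'}-after-{days}d",
        "Filter": {"Prefix": prefix},
        "Status": "Enabled",
        "Expiration": {"Days": days},
    }

lifecycle = {"Rules": [expiry_rule("logs/", 30)]}
# With a boto3 client this would be applied as (not run here):
# s3.put_bucket_lifecycle_configuration(
#     Bucket="my-bucket", LifecycleConfiguration=lifecycle)
```

Codifying rules like this, rather than deleting objects by hand, is exactly the governance-plus-automation approach that keeps storage costs from creeping.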
Evolving in the cloud can reduce computing costs by 40-60% while increasing efficiency and performance. However, those results are not guaranteed. Download our eBook, A Holistic Approach to Cloud Cost Optimization, to ensure a cost-effective cloud experience.
How Will You Start Evolving Now?
Time is of the essence when it comes to post-migration outcomes – and the board and business leaders around you will be expecting results. As your organization looks to leverage AWS cloud-native services, your development practices will become more agile and require a more modern approach to managing the environment. To keep up with these business drivers, you need a team to serve as your foundation for evolution.
2nd Watch works alongside organizations to help start or accelerate your cloud journey and become fully cloud native on AWS. With more than 10 years of migrating, operating, and effectively managing workloads on AWS, 2nd Watch can help your operations staff evolve to operate in a modern way and achieve your goals. Are you ready for the next step in your cloud journey? Contact us and let's get started.