How do we know that artificial intelligence (AI) and large language models (LLMs) have reached a tipping point? It was the hot topic at most families’ dinner tables during the 2022 holiday break. As we reveal our data and AI predictions for 2023, join us at 2nd Watch to stay ahead of the curve and propel your business toward innovation and success.
AI has become mainstream and accessible. Most notably, OpenAI’s ChatGPT took the internet by storm, so much so that even our parents (and grandparents!) are talking about it. Since AI is here to stay beyond the Christmas Eve dinner discussion, we put together a list of 2023 predictions we expect to see regarding AI and data.
#1. Proactively handling data privacy regulations will become a top priority.
Regulatory changes can have a significant impact on how organizations handle data privacy: businesses must adapt to new policies to ensure their data is secure. Modifications to regulatory policies require governance and compliance teams to understand data within their company and the ways in which it is being accessed.
To stay ahead of regulatory changes, organizations will need to prioritize their data governance strategies. Doing so will mitigate data privacy risks and prepare them for potential new regulations. As part of that strategy, data privacy and compliance teams must increase their usage of privacy, security, and compliance analytics to proactively understand how data is being accessed within the company and how it is being classified.
#2. AI and LLMs will require organizations to consider their AI strategy.
The rise of AI and LLM technologies will require businesses to adopt a broad AI strategy. AI and LLMs will open opportunities in automation, efficiency, and knowledge distillation. But, as the saying goes, “With great power comes great responsibility.”
There is disruption and risk that comes with implementing AI and LLMs, and organizations must respond with a people- and process-oriented AI strategy. As more AI tools and start-ups crop up, companies should consider how to thoughtfully approach the disruptions that will be felt in almost every industry. Rather than being reactive to new and foreign territory, businesses should aim to educate, create guidelines, and identify ways to leverage the technology.
Moreover, without a well-thought-out AI roadmap, enterprises will find themselves plateauing technologically, with teams unable to adapt to the new landscape and initiatives lacking a return on investment: they won’t be able to scale or support what they put in place. Poor road mapping will lead to siloed and fragmented projects that don’t contribute to a cohesive AI ecosystem.
#3. AI technologies, like Document AI (or information extraction), will be crucial to tap into unstructured data.
Massive amounts of unstructured data – such as Word and PDF documents – have historically been a largely untapped data source for data warehouses and downstream analytics. New deep learning technologies, like Document AI, have addressed this issue and are more widely accessible. Document AI can extract previously unused data from PDF and Word documents, ranging from insurance policies to legal contracts to clinical research to financial statements. Additionally, vision and audio AI unlocks real-time video transcription insights and search, image classification, and call center insights.
Organizations can unlock brand-new use cases by integrating these technologies with their existing data warehouses. Fine-tuning these models on domain-specific data adapts general-purpose models to a wide variety of specialized use cases.
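To make the idea concrete, here is a minimal sketch of extractive document AI using the open-source pypdf and Hugging Face transformers libraries. The file name and question are illustrative placeholders, not part of any specific product.

```python
# Minimal sketch of extractive document AI: pull raw text from a PDF,
# then ask a pretrained question-answering model for a specific field.
# The file name and question are illustrative placeholders.
from pypdf import PdfReader
from transformers import pipeline

reader = PdfReader("policy_document.pdf")  # e.g., an insurance policy
text = " ".join(page.extract_text() or "" for page in reader.pages)

qa = pipeline("question-answering")  # general-purpose extractive QA model
answer = qa(question="What is the policy effective date?", context=text)
print(answer["answer"], answer["score"])
```

The extracted answers can then be landed in a warehouse table alongside structured data for downstream analytics.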
#4. “Data is the new oil.” Data will become the fuel for turning general-purpose AI models into domain-specific, task-specific engines for automation, information extraction, and information generation.
Snorkel AI coined the term “data-centric AI,” which is an accurate paradigm to describe our current AI lifecycle. The last time AI received this much hype, the focus was on building new models. Now, very few businesses need to develop novel models and algorithms. What will set their AI technologies apart is the data strategy.
Data-centric AI enables us to take existing models and calibrate them to an organization’s own data. Applying an enterprise’s data to this new paradigm will accelerate its time to market, especially for companies that have modernized their data and analytics platforms and data warehouses.
#5. The popularity of data-driven apps will increase.
Snowflake recently acquired Streamlit, which makes application development more accessible to data engineers. Additionally, Snowflake introduced Unistore and hybrid tables (OLTP) to allow data science and app teams to work jointly off a single source of truth in Snowflake, eliminating silos and data replication.
Snowflake’s big moves demonstrate that companies are looking to fill gaps that traditional business intelligence (BI) tools leave behind. With tools like Streamlit, teams can automate data sharing and deployment, which is traditionally manual and Excel-driven. Most importantly, Streamlit can become the conduit that allows business users to work directly with AI-native and data-driven applications across the enterprise.
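As a hedged illustration of how little code a data-driven app requires, here is a minimal Streamlit sketch; the table and values are invented, and in practice the DataFrame would come from a warehouse query.

```python
# Minimal Streamlit data app sketch: business users get an interactive
# view instead of a static dashboard or an Excel export.
import pandas as pd
import streamlit as st

st.title("Customer Orders Explorer")

# Illustrative data; in practice this might come from a warehouse query
orders = pd.DataFrame(
    {"region": ["East", "West", "East", "West"], "revenue": [120, 95, 143, 88]}
)

region = st.selectbox("Region", sorted(orders["region"].unique()))
filtered = orders[orders["region"] == region]

st.metric("Total revenue", int(filtered["revenue"].sum()))
st.dataframe(filtered)
```

Saved as `app.py`, this runs with `streamlit run app.py` and serves an interactive page in the browser.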
#6. AI-native and cloud-native applications will win.
Customers will start expecting AI capabilities to be embedded into cloud-native applications. Harnessing domain-specific data, companies should prioritize building upon modular, data-driven application blocks with AI and machine learning. AI-native applications will win over AI-retrofitted applications.
When applications are custom-built for AI, analytics, and data, they are more accessible to data and AI teams, enabling business users to interact with models and data warehouses in a new way. Teams can classify and label data in a centralized, data-driven way, rather than manually and repeatedly in Excel, and feed it into a human-in-the-loop system for review, improving the overall accuracy and quality of models. Traditional BI tools like dashboards, on the other hand, often limit business users to consuming and viewing data in a “what happened?” manner, rather than in a more interactive, targeted one.
#7. There will be technology disruption and market consolidation.
The AI race has begun. Microsoft’s strategic partnership with OpenAI and integration of it into “everything,” Google’s introduction of Bard and investment in foundation model startup Anthropic, AWS’s own native models and partnership with Stability AI, and a wave of new AI-related startups are just a few of the major signals that the market is changing. Emerging AI technologies are driving market consolidation: smaller companies are being acquired by incumbents looking to take advantage of the developing technologies.
Mergers and acquisitions are key growth drivers, with larger enterprises leveraging their existing resources to acquire smaller, nimbler players to expand their reach in the market. This emphasizes the importance of data, AI, and application strategy. Organizations must stay agile and quickly consolidate data across new portfolios of companies.
Conclusion
The AI ball is rolling. At this point, you’ve probably dabbled with AI or engaged in high-level conversations about its implications. The next step in the AI adoption process is to actually integrate AI into your work and understand the changes (and challenges) it will bring. We hope that our data and AI predictions for 2023 prime you for the ways it can have an impact on your processes and people.
Software & Solutions for Marketers is the final installment in our Marketers’ Guide to Data Management and Analytics series. Throughout this series, we’ve covered major terms, acronyms, and technologies you might encounter as you seek to take control of your data, improve your analytics, and get more value from your MarTech investments.
In this last section, we will cover various aspects of software and solutions for marketing, including:
The differences between the cloud and on-premise (on-prem) solutions
Customer data platforms (CDP)
Custom development (custom dev)
Cloud vs. On-Prem
Cloud
Also known as “cloud computing,” the cloud is a global network of software and services that run over the internet on someone else’s server, as opposed to running locally on your computer or server.
Why It Matters for Marketers:
Get the flexibility your business needs. Today’s marketing teams are mobile, require a variety of working schedules, and are often spread across geographies and time zones. Cloud-based software and services are accessible by any device with an internet connection, quick to set up, and reliable to access, regardless of the user’s location or device.
Deliver the level of service your customers expect. Hosting your website or e-commerce business on the cloud means your site won’t get bogged down with high traffic or large data files. Additionally, hosting your data in the cloud reduces the amount of siloed information, empowering teams to work more seamlessly and deliver a higher quality, more personalized experience to customers.
Spend your money on campaigns, not infrastructure. While much software is sold with on-premise or cloud options, cloud-native tools (such as Snowflake, Azure, AWS, and Looker) enable marketers to use these technologies with little to no reliance on IT resources to maintain the back-end infrastructure.
Real-World Examples:
Most marketing organizations use cloud-based applications such as Salesforce, HubSpot, or Sprout Social. These cloud-based applications allow marketing users to quickly and reliably create, collaborate on, and manage their marketing initiatives without being tied to a single location or computer.
On-Prem
On-premise or on-prem refers to any software, storage, or service run from on-site computers or servers.
Why It Matters for Marketers:
Most marketing software is run on the cloud these days. Cloud solutions are faster, more dynamic, and more reliable.
So why would a business choose on-prem? Today, there are two main reasons a business might still have on-prem software:
The company is in a highly regulated industry where data ownership or security are big concerns.
The company has legacy on-prem solutions with massive amounts of data, making the switch to cloud more challenging.
However, many of these companies still recognize the need to update their infrastructures. On-prem is harder to maintain and has reduced uptime, as glitches or breaks are fixed only at the speed of IT teams. What’s more, on-prem solutions can bottleneck your ability to deliver insights at scale.
With this in mind, even companies with more complicated situations can use a hybrid of cloud and on-prem solutions. By doing this, they migrate less sensitive information to the cloud while keeping more regulated files on their own servers.
Real-World Examples:
In marketing, it’s likely that most data will be in the cloud, but if you’re working with a client in a highly regulated industry, like government or healthcare, you might have some on-premise data sources.
Healthcare companies are subject to patient privacy regulations like HIPAA, which govern how patient data can be used, including in marketing campaigns. In this case, an on-prem solution might be a better alternative to protect patients’ rights.
Customer Data Platform (CDP)
A customer data platform is a software solution that synthesizes customer data from various sources and keeps those sources in sync with each other. CDPs often also offer the ability to send this data to a database of your choice for analytics.
Why It Matters for Marketers:
CDPs allow your various tools (such as your CRM, Google Analytics, and e-commerce systems) to stay in sync with each other around customer data. This means if you change a detail about a customer in one system, every other connected system sees the update come through automatically, without any manual updating.
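As a purely hypothetical sketch of that sync pattern (the endpoint URLs and payload fields below are invented for illustration and are not any specific CDP’s API), the flow often looks like a change event fanning out to every connected tool:

```python
# Hypothetical sketch of the CDP sync pattern: one system reports a
# customer change, and the update fans out to every connected tool.
# The endpoint URLs and payload fields are invented for illustration.
import requests

DOWNSTREAM_TOOLS = [
    "https://crm.example.com/api/customers",   # hypothetical CRM endpoint
    "https://shop.example.com/api/customers",  # hypothetical e-commerce endpoint
]

def on_customer_updated(event: dict) -> None:
    """Fan a customer-change event out to all connected systems."""
    payload = {"customer_id": event["customer_id"], "email": event["new_email"]}
    for url in DOWNSTREAM_TOOLS:
        requests.patch(f"{url}/{payload['customer_id']}", json=payload, timeout=10)

on_customer_updated({"customer_id": "c-123", "new_email": "new@example.com"})
```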
Real-World Examples:
CDPs make it really easy to create quality account-based marketing (ABM) campaigns. CDPs deliver a persistent, accurate, and unified customer base, making it easy to use data throughout the ABM campaign.
For example, selecting and validating target accounts uses data from across your entire organization. Once pulled into the CDP, you can perform analytics on that data to identify the best accounts to go after. You will have thousands of attributes to better understand which customers are more likely to purchase.
One note: CDPs do not usually tie these customers and their information to other subject areas like products, orders, loyalty, etc., and they are not meant for deep analytic use cases. If you are doing deeper, company-wide analysis, you might want a data warehouse.
Custom Dev
Custom development, or custom dev, is a term that refers to any application or solution developed to satisfy the requirements of a specific user or business rather than for general use.
Why It Matters for Marketers:
Even the best out-of-the-box software or solutions are designed to overcome the challenges of a broad user base, providing functionality that only satisfies generalized needs. Custom dev solutions address your specific business needs in a way that gives you a competitive advantage or reduces the amount of time spent trying to make a generic software match your unique needs.
Real-World Examples:
One retail company was receiving a monthly vendor report as flat files that were hard to integrate with the rest of their reporting. This made it challenging to get the deeper insights their marketing team needed to make informed omni-channel decisions.
As there were no tools available in the market with a connector to their system, a custom dev solution was needed. An application was created to automatically take in these flat files from the vendor so the marketing team could receive new data without the lengthy request and ingest process that relied heavily on IT resources. This enabled the marketing team to easily target the same customer across channels by using personalized campaigns that aligned with purchasing habits and history.
Another example of custom dev is the implementation of automated customer touchpoints. Adding features that trigger events based on business rules is a great way to personalize your customers’ experience. For example, you could create a rule that emails customers a coupon for their most frequently purchased product when they haven’t made a purchase in the past six months.
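Here is a hedged sketch of such a rule in Python; the customer records and the email helper are stand-ins for whatever commerce and messaging systems a business actually uses.

```python
# Sketch of an automated touchpoint rule: email a coupon to customers
# whose last purchase is more than six months old. The customer list
# and send_email helper are illustrative stand-ins.
from datetime import datetime, timedelta

SIX_MONTHS = timedelta(days=182)

customers = [
    {"email": "a@example.com", "last_purchase": datetime(2022, 1, 5), "favorite_product": "espresso beans"},
    {"email": "b@example.com", "last_purchase": datetime(2022, 11, 20), "favorite_product": "green tea"},
]

def send_email(to: str, subject: str, body: str) -> None:
    print(f"To: {to}\nSubject: {subject}\n{body}\n")  # stand-in for a real email service

now = datetime(2022, 12, 1)  # fixed "today" so the example is reproducible
for customer in customers:
    if now - customer["last_purchase"] > SIX_MONTHS:
        send_email(
            customer["email"],
            "We miss you! Here's a coupon",
            f"Enjoy 15% off {customer['favorite_product']} on your next order.",
        )
```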
Throughout this Marketers’ Guide to Data Management and Analytics series, we hope you’ve learned about the different tools to manage, integrate, analyze, and use your data more strategically to get the most out of your investments. Please contact us to learn how we can help build and implement these various solutions, so you can better understand your customer base and target your customers accurately.
Does this sound familiar? “You will move to the cloud, for right or wrong, because of a business imperative to get out of your data center, not tomorrow, but yesterday.” Or, “You’re sold on the idea that by migrating to the cloud, you’d be able to reduce your total cost of ownership (TCO), increase flexibility, and accelerate innovation projects.” The cloud practically sells itself, and as a result, you plan to ditch your legacy, on-premises technology and begin your cloud migration journey.
However, if you hop into the cloud without a defined strategy and approach, you’ll experience cloud sprawl, and spiraling cloud costs will negate the touted benefits of the cloud. This sort of “blind faith” in everything the cloud offers is a common mistake many business leaders make, and it prevents them from considering cloud management and economics as part of their cloud migration strategy.
Without cloud cost governance, your organization will suffer O2: Overprovisioning and Overspending. You’re left confused because this is the exact opposite of the result you thought cloud migration would have. Additionally, if you find yourself in this predicament, you’ll have difficulty pinpointing areas for improvement to initiate corrective action.
Enter Innovation Scoring by 2nd Watch. Our data-driven scoring system will help you assess your applications running in the cloud environment and identify where you are overprovisioning and overspending. Innovation Scoring is the first step to establishing cloud economics and maximizing the value of cloud computing to your business in the long run.
The Importance of Cloud Economics
If O2 is how you define your cloud environment, you’ve learned the hard way about the need for cloud economics. While cost savings is a component of cloud economics, the ultimate goal of the practice is to maximize the value of cloud computing for your organization. Implementing cloud economics will give your business insight into which departments are utilizing the cloud, what applications and workloads are using it, and how these moving parts contribute to more impactful and cost-effective business goals.
Without cloud economics, your business will deal with overrun cloud budgets, which are usually due to one or more of the following:
Ungoverned costs: your organization has no idea what it is spending on.
Unforecasted usage: you see more cloud projects than you had anticipated.
Uncommitted mindset: you don’t want to commit to a cloud contract (because you can’t predict its usage), so you miss out on contractual discounts.
Wasted dev/test resources: your dev team is overprovisioning their infrastructure.
Overestimated production headroom: you are not autoscaling, or you have not set proper autoscaling parameters for your applications (see the sketch after this list).
Wrong-sized production: your production environment is overprovisioned, and you pay for the excess resources monthly.
Poor design and implementation: your architects make suboptimal design choices for cloud solutions because they are unaware of the costs to the business.
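As one concrete example of correcting overestimated production headroom, here is a hedged boto3 sketch that attaches a target-tracking scaling policy to an EC2 Auto Scaling group; the group name and the 60% CPU target are illustrative values, not recommendations.

```python
# Hedged sketch: replace static, overprovisioned headroom with a
# target-tracking autoscaling policy via boto3. The group name and
# the 60% CPU target are illustrative, not recommendations.
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="prod-web-asg",  # illustrative group name
    PolicyName="cpu-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 60.0,  # keep average CPU near 60%
    },
)
```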
For cloud economics to work, there must be a company-wide commitment to the practice beyond simply calculating cloud costs. Just like implementing a DevOps practice, impactful cloud economics requires promoting a cross-functional and collaborative culture. Business leaders must encourage transparency and trackability to enable teams to work together harmoniously to manage their cloud infrastructure and prove the true business benefits of the cloud.
2nd Watch’s Innovation Scoring
Cloud economics is critical for your business to reap the maximum benefits of cloud computing. However, cloud economics is a pervasive cultural practice, so it won’t happen at the snap of your fingers. It will require time and effort for your business to establish cloud economics.
The first step in controlling your cloud budget and governing your cloud platform is to identify areas of improvement. 2nd Watch created the Innovation Scoring system, our proprietary scoring methodology, to help you identify opportunities for optimization and modernization in a data-driven way.
Our Innovation Scoring methodology will reveal the underlying problem with your cloud management. We’ll be able to identify the application needing improvement and determine why it is suboptimal. Did you set it up incorrectly and need to move to PaaS with autoscaling capabilities? Or did someone write your application in 2005, and you are in dire need of application modernization? Or is it a combination of both? 2nd Watch designed its Innovation Scoring to pinpoint areas for improvement in your database, infrastructure, and/or application. When we ascertain the source of inefficiency, we can address issues contributing to cloud sprawl and skyrocketing cloud costs.
To calculate your Innovation Score, we analyze several different dynamics related to your cloud applications. The ratings from each category are then cross-tabulated to generate a total view of your entire cloud environment. Your Innovation Score will not only reveal inefficiencies but also allow us to compare your efforts against other similarly sized companies and make sure you are up to industry standards.
2nd Watch understands that cloud economics is a cultural undertaking; therefore, when we assign Innovation Scores to our clients, we do so in a way that encourages company-wide participation. To promote engagement and commitment, we’ve gamified our Innovation Scoring: we split our clients’ technical leadership into teams and calculate each team’s score. When we check in with our clients, we reveal each team’s score to showcase which ones are being innovative and taking advantage of the cloud and which ones have room for improvement.
Sample Innovation Scoring Output
Our approach to Innovation Scoring promotes friendly competition, which fosters collaboration between teams and a transparent high-level overview of how each team is leveraging the cloud. When our clients are a part of our Innovation Scoring system, it jumpstarts a culture of innovation, transparency, and accountability within their business.
Conclusion
Consider the importance of cloud economics when planning to run your applications in a cloud environment. It is easy to overspend, get overwhelmed, and have no sense of direction. Therefore, cloud economics is beneficial whether you implement it proactively or reactively.
2nd Watch’s Innovation Scoring is a practical first step to getting your cloud budget in order and establishing cloud economics as a standard cultural practice in your organization. Through data and analysis, our Innovation Scoring will help you identify how you can optimize your cloud instance so that you are receiving maximum cloud value for your business. Moreover, Innovation Scoring trains your teams to be communicative and cross-collaborative, which are the traits your company culture needs to succeed in cloud economics.
In 2014, Snowflake disrupted analytics by introducing the Snowflake Elastic Data Warehouse, the first data warehouse built from the ground up for the cloud, with patented architecture that would revolutionize the data platform landscape. Four years later, Snowflake continued to disrupt the data warehouse industry with Snowflake Data Sharing, an innovation for data collaboration that eliminated the barriers of traditional data sharing methods in favor of enabling enterprises to easily share live data in real time without moving the shared data. This year, in 2022, under the bright Sin City lights, Snowflake intends to disrupt application development by unveiling a unified platform for developing data applications, from coding to monetization.
Currently, data teams looking to add value, whether by improving their team’s analytical efficiency or reducing the cost of their enterprise’s processes, develop internal data products, such as ML-powered product categorization models or C-suite dashboards, in whichever flavor their team is savvy in. However, for an external data product to bring value to the enterprise, there is only one metric executives truly care about: revenue.
To bridge the value gap between internal and external data products comes the promise of the Snowflake Native Application Framework. This framework will now enable developers to build, distribute, and deploy applications natively in the Data Cloud landscape through Snowflake. Moreover, these applications can be monetized on the Snowflake Marketplace, where consumers can securely purchase, install, and run these applications natively in their Snowflake environments, with no data movement required. It’s important to note that Snowflake’s goal is not to compete with OLTP Oracle DB workloads, but rather to disrupt how cloud applications are built by seamlessly blending the transactional and analytical capabilities Snowflake has to offer.
To round out the Snowflake Native Application Framework, a series of product announcements were made at the Summit:
Unistore (Powered by Hybrid Tables): To bridge transactional and analytical data in a single platform, Snowflake developed a new workload called Unistore. At its core, the new workload enables customers to unify their datasets across multiple solutions and streamline application development by incorporating all the same simplicity, performance, security, and governance customers expect from the Snowflake Data Cloud platform. To power the core, Snowflake developed Hybrid Tables. This new table type supports fast single-row operations driven by a ground-breaking row-based storage engine that will allow transactional applications to be built entirely in Snowflake. Hybrid Tables will also support primary key enforcement to protect against duplicate record inserts.
Snowpark for Python: Snowpark is a development framework designed to bridge the skill sets of engineers, data scientists, and developers. Previously, Snowpark only supported Java and Scala, but Snowflake knew what the people wanted: Python, the language of choice across data engineers, data scientists, and application developers. Bringing Python workloads into Snowflake also removes a security burden that developers often face within their enterprises. (A short sketch of the Snowpark API follows this list.)
Snowflake Worksheets for Python: Though currently in private preview, Snowflake will support Python development natively within Snowsight Worksheets for developing pipelines, ML models, and applications. This will streamline development with features like auto-complete and the ability to code custom Python logic in seconds.
Streamlit Acquisition: To democratize access to data, a vision both Snowflake and Streamlit share, Snowflake acquired Streamlit, an open-source Python project for building data-based applications. Streamlit helps fill the void of bringing a simplified app-building experience for data scientists who want to quickly translate an ML model into a visual application that anyone within their enterprise can access.
Large Memory Warehouses: Still in development (out for preview in AWS in EU-Ireland), Snowflake will soon allow consumers to access 5x and 6x larger warehouses. These larger warehouses will enable developers to execute memory-intensive operations such as training ML models on large datasets through open-source Python libraries that will be natively available through Anaconda integration.
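To ground the Snowpark for Python announcement referenced above, here is a minimal sketch of a Snowpark session; the connection parameters and the table and column names are placeholders.

```python
# Minimal Snowpark for Python sketch: push a filter and aggregation
# down into Snowflake without moving data. Connection values and the
# table/column names are placeholders.
from snowflake.snowpark import Session
from snowflake.snowpark.functions import col, sum as sum_

connection_parameters = {
    "account": "<account>",
    "user": "<user>",
    "password": "<password>",
    "warehouse": "<warehouse>",
    "database": "<database>",
    "schema": "<schema>",
}

session = Session.builder.configs(connection_parameters).create()

# The DataFrame API compiles to SQL and executes inside Snowflake
orders = session.table("ORDERS")
revenue_by_region = (
    orders.filter(col("ORDER_DATE") >= "2022-01-01")
    .group_by("REGION")
    .agg(sum_(col("AMOUNT")).alias("TOTAL_AMOUNT"))
)
revenue_by_region.show()
```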
On top of all those features released for application development, Snowflake also released key innovations to improve data accessibility, such as:
Snowpipe Streaming: To eliminate the boundaries between batch and streaming pipelines, Snowflake introduced Snowpipe Streaming. This new feature will simplify stitching together real-time and batch data in one single system. Users can now ingest aggregated log data from IoT devices via a client API endpoint without adding event hubs, and can even ingest CDC streams at lower latency.
External Apache Iceberg Tables: Developed by Netflix, Apache Iceberg tables are open-source tables that can support a variety of file formats (e.g., Parquet, ORC, Avro). Snowflake will now allow consumers to query Iceberg tables in place, without moving the table data or existing metadata. This translates to being able to access customer-supplied storage buckets with Iceberg tables without compromising on security, while taking advantage of the consistent governance of the Snowflake platform.
External On-Prem Storage Tables: For many enterprises, moving data into the Data Cloud is not a reality due to a variety of reasons, including size, security concerns, cost, etc. To overcome this setback, Snowflake has released in private preview the ability to create External Stages and External Tables on storage systems such as Dell or Pure Storage that can expose a highly compliant S3 API. This will allow customers to access a variety of storage devices using Snowflake without worrying about concurrency issues or the effort of maintaining compute platforms.
Between the Native Application Framework and the new additions for data accessibility, Snowflake has taken a forward-thinking approach on how to effectively disrupt the application framework. Developers should be keen to take advantage of all the new features this year while understanding that some key features such as Unistore and Snowpipe Streaming will have bumps along the road as they are still under public/private preview.
Most businesses today have evaluated their options for application modernization. Planned movement to the cloud happened ahead of schedule, driven by the need for rapid scalability and agility in the wake of COVID-19.
Legacy applications already rehosted or replatformed in the cloud saw increased load, highlighting painful inefficiencies in scalability and sometimes even causing outages. Your business has likely already taken some first steps in app modernization and updating legacy systems.
Of the seven options for modernizing legacy systems outlined by Gartner, 2nd Watch commonly works with clients who have already successfully rehosted and replatformed applications. To a lesser extent, we see mainframe applications encapsulated in a modern RESTful API or replaced altogether. Businesses frequently take those first steps in their digital transformation but find themselves stuck crossing the gap to a fully modern application.
What are common issues and solutions businesses face as they move away from outdated technologies and progress towards fully modern applications?
Keeping the Goal in Mind
Overcoming the inertia to begin a modernization project is often a lengthy process, requiring several months or as much as a year or more to complete the first phases. Development teams require training, thorough and careful planning must occur, and unforeseen challenges are encountered and overcome. Through it all, the needs of the business never slow down, and the temptation to halt or dramatically slow legacy modernization efforts after the initial phases of modernization can be substantial.
No matter what the end state of the modernization journey looks like, it helps to keep that end state at the forefront of the development team’s minds. In today’s remote and hybrid working environment, that’s not as easy as keeping a whiteboard or poster in a room. Sprint ceremonies should include a brief reminder of long-term business goals, especially during backlog or sprint reviews. Keep the team invested in the business and technical reasons for the effort, with the question “why modernize legacy applications?” top of mind. Most importantly, solicit their feedback on the process required to accomplish the long-term strategic goals of the business.
With the goal firmly in your development team’s minds, it’s time to tackle tactics in migrating from legacy apps to newer systems. What are some of these common stumbling blocks on the road to refactoring and rearchitecting legacy software?
Refactoring
Refactoring an application can encompass a broad set of areas. It is sometimes as straightforward as reducing technical debt, or as complex as breaking apart a monolithic application into smaller services. In 2nd Watch’s experience, common issues when refactoring running applications include:
Limited knowledge of cloud-based architectural patterns. Even common architectures like 2- and 3-tier applications require some legacy code changes when an application moves from a data center to a cloud service provider, or between cloud service providers. Where an older application may have hardcoded IP addresses or DNS names, a modern approach to accessing application tiers would use environment variables configured at runtime, pointing at load balancers (see the sketch after this list).
Lack of telemetry and observability. Development teams are frequently hesitant to make changes quickly because there are too many unknowns in their application. Proper monitoring of known unknowns (metrics) and unknown unknowns (observability) can demystify the impact of refactoring. For more context around the types of unknowns and how to work with them in an application, Charity Majors frequently writes on the topic.
Lack of thorough automated tests. A lack of automated tests also slows the ability to make changes because developers cannot anticipate what their changes might break. Improved telemetry and observability can help, but automated testing is the other side of the equation. Tools like Codecov can initially help improve test coverage, but unless carefully attended, incentivizing a percentage of test coverage across the codebase can lead to tests that do not thoroughly cover common use cases. Good unit tests and integration testing can halt problems before they even start.
No blueprint for optimal refactoring. Without a clear blueprint for understanding what an optimally refactored app looks like, development and information technology teams can become frustrated or unclear about their end goals. Heroku’s Twelve-Factor App methodology is one commonly used framework for crafting or refactoring modern applications. It has the added benefit of being applicable to many deployment models – single- or multiple-server, containers, or serverless.
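As referenced in the first item above, here is a small sketch of the twelve-factor approach to configuration; the variable and endpoint names are illustrative.

```python
# Sketch of twelve-factor configuration: resolve the database tier's
# load-balancer endpoint from the environment at runtime instead of
# hardcoding an IP address. The variable names are illustrative.
import os

DB_HOST = os.environ["DATABASE_LB_HOST"]  # fails fast if unconfigured
DB_PORT = int(os.environ.get("DATABASE_PORT", "5432"))

def connection_string(user: str, database: str) -> str:
    """Build a connection string against whatever endpoint this
    environment (dev, staging, prod) was configured with."""
    return f"postgresql://{user}@{DB_HOST}:{DB_PORT}/{database}"

print(connection_string("app_user", "orders"))
```

Because the endpoint comes from the environment, the same build runs unmodified in development, staging, and production.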
Rearchitecting
Rearchitecting an application to leverage better capabilities, such as those found in a cloud service provider’s Platform-as-a-Service (PaaS) or Software-as-a-Service (SaaS) options, may present some challenges. The most common challenge 2nd Watch encounters with clients is not fully understanding the options available in modern environments. Older applications are the product of their time and typically were built optimally for the available technology and needs. However, when rearchitecting those applications, sometimes development teams either don’t know or don’t have details about better options that may be available.
Running a MySQL database on the same machine as the rest of the monolithic application may have made sense when initially writing the application. Today, many applications can run more cheaply, more securely, and with the same or better performance using a combination of cloud storage buckets, managed caches like Redis or Memcached, and secrets managers. These consumption-based cloud options tend to be significantly cheaper than managed databases or databases running on cloud virtual machines. Scaling automatically with end-user demand and reduced management overhead are additional benefits of software modernization.
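Here is a hedged sketch of that rearchitected pattern using boto3 and the redis client; the secret name, keys, and fallback query are illustrative stand-ins, not a definitive implementation.

```python
# Hedged sketch of the rearchitected pattern: pull credentials from a
# secrets manager and serve hot reads from a managed cache instead of
# a co-located MySQL instance. Secret and key names are illustrative.
import json

import boto3
import redis

# Fetch cache credentials from AWS Secrets Manager at startup
secrets = boto3.client("secretsmanager")
creds = json.loads(
    secrets.get_secret_value(SecretId="prod/cache-credentials")["SecretString"]
)
cache = redis.Redis(host=creds["host"], port=creds["port"], password=creds["password"])

def load_product_from_database(product_id: str) -> str:
    return f"product-{product_id}"  # stand-in for the real database query

def get_product(product_id: str) -> str:
    """Serve from the managed cache; fall back to the database on a miss."""
    cached = cache.get(f"product:{product_id}")
    if cached is not None:
        return cached.decode()
    value = load_product_from_database(product_id)
    cache.setex(f"product:{product_id}", 300, value)  # cache for five minutes
    return value
```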
Rearchitecting an application can also be frustrating for experienced systems administrators tasked with maintaining and troubleshooting production applications. For example, moving from VMs to containers introduces an entirely different way of dealing with logs: sysadmins must forward them to a log aggregator instead of storing them on disk. Autoscaling a service can mean identifying which instances (of potentially dozens or hundreds) had an issue, instead of a small handful. Application modernization impacts every person involved with the long-term success of that application, not just developers and end-users.
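For developers adapting to that shift, here is a small sketch of the container-friendly logging pattern using Python’s standard logging module; the service name is illustrative.

```python
# Sketch of container-friendly logging: write structured lines to
# stdout instead of a local file, so the container runtime or a
# sidecar can forward them to the log aggregator of choice.
import logging
import sys

logging.basicConfig(
    stream=sys.stdout,
    level=logging.INFO,
    format='{"time": "%(asctime)s", "level": "%(levelname)s", "msg": "%(message)s"}',
)

log = logging.getLogger("checkout-service")  # illustrative service name
log.info("order processed")
```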
Conclusion
Application modernization is a long-term strategic activity, not a short-term tactical one. Over time, you will realize the benefits of lower total cost of ownership (TCO), increased agility, and faster time to market. Recognizing and committing to the future of your business will help you overcome the short- and mid-term challenges of app modernization.
Engaging a trusted partner to accelerate your app modernization journey and lead the charge across that gap is a powerful strategy to overcome some of the highlighted problems. It can be difficult to overcome a challenge with the same mindset that led to creating that challenge. An influx of different ideas and experiences can be the push development teams need to reach the next level for a business.
If you’re wondering how to modernize legacy applications and are ready to work with a trusted advisor that can help you cross that gap, 2nd Watch will meet you wherever you are in your journey. Contact us to schedule a discussion of your goals, challenges, and how we can help you reach the end game of modern business applications.
If the global pandemic taught us anything, it’s that digital transformation is a must-have for businesses to keep up with customer demands and remain competitive. To do this, organizations are moving their workloads to and modernizing their applications for the cloud faster than ever.
In fact, according to a recent survey, 91% of respondents agree or strongly agree that application modernization plays a critical role in their organization’s adaptability to rapidly changing business conditions. But there are so many cloud service providers to choose from! How do you know which one is best for your application modernization objectives? Keep reading to find out!
What is a Cloud Services Provider (CSP)?
A cloud services provider is a cloud computing company that provides public clouds, managed private clouds, or on-demand cloud infrastructure, platforms, and services. Many CSPs are available worldwide, including Alibaba Cloud, Amazon Web Services (AWS), Google Cloud Platform (GCP), IBM Cloud, Oracle Cloud, and Microsoft Azure. However, three industry giants are noteworthy because of their services and global footprint: AWS, GCP, and Azure.
What is Application Modernization?
Application modernization is the process of revamping an application to take advantage of breakthrough technical innovations and markedly improve its overall efficiency. This typically involves high availability, increased fault tolerance, high scalability, improved security, elimination of single points of failure, disaster recovery, contemporary and simplified tooling, new coding languages, and reduced resource requirements, among other benefits. Many companies running legacy applications are now looking at how best to modernize their monolithic applications.
Application Rationalization: The First Step to Modernization
The best way to start any application modernization journey is with application rationalization. In this process, you identify company-wide business applications and strategically determine which ones you should keep, replace, retire, or consolidate. Once you identify those applications, you can list each one’s ease or difficulty level, total cost of ownership (TCO), and business value, enabling you to decide and prioritize which action to take. (Hint: Start with high value and minimal effort apps!) Doing this will also help you eliminate redundancies, lower costs, and maximize efficiency.
The high-value apps that are difficult to move to the cloud will likely cause the most grief in your decision-making process. But, like Rome, your modernization strategy doesn’t need to be built in a day.
You can develop an approach to application modernization over time and still reduce costs and risks while moving your portfolio forward.
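To make the prioritization step tangible, here is a toy sketch of ranking a portfolio; the application names and scores are invented for illustration.

```python
# Toy sketch of the prioritization step in application rationalization:
# rank apps so high-value, low-effort candidates surface first.
# All names and scores are invented for illustration.
apps = [
    {"name": "order-portal", "business_value": 9, "migration_effort": 3},
    {"name": "legacy-erp", "business_value": 8, "migration_effort": 9},
    {"name": "intranet-wiki", "business_value": 2, "migration_effort": 2},
]

# Highest business value first; break ties in favor of lower effort
priority = sorted(apps, key=lambda a: (-a["business_value"], a["migration_effort"]))

for rank, app in enumerate(priority, start=1):
    print(rank, app["name"], "value:", app["business_value"], "effort:", app["migration_effort"])
```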
When it comes to application modernization in the cloud, it is crucial to evaluate your current application stack and determine the most suitable strategy for migrating it. Many on-premises applications are legacy monoliths that may benefit more from refactoring than from a rehosting (“lift and shift”) approach. (Check out Rehost, Refactor, Replatform – What, When, & Why? | AppMod Essentials.)
Refactoring may require overhauling your application code, which takes some high-level effort but offers the most benefits. However, not all applications are ideal candidates for refactoring. Some obsolete applications are incompatible with the cloud because of architectural decisions made while building the app; for these, rearchitecting becomes necessary. In this scenario, the application is divided into several functional components that can be individually adapted and further developed. These small, independent pieces (or “microservices”) can then be migrated to the cloud quickly and efficiently.
Determining the Best Cloud Services Provider for Your Application Modernization
Each application modernization journey is unique, as is the process of choosing the best cloud service provider that meets your demands. What works for one business’ application may not be the best for yours, even if they are in the same industry. And just because a competitor has chosen one CSP over another does not mean you should.
When evaluating the CSP that is best for you, consider the following:
Service Level Agreements (SLAs): Determine whether the CSP’s service level agreements suit your production workloads, whether the cloud service is generally available yet, and whether the provider retains satisfactory levels of support knowledge. Managing workloads in the cloud can sometimes be tedious, and a managed services department may not have the required expertise to efficiently manage and monitor the cloud environment. It is critical to your business to do your due diligence and ensure your preferred CSP can administer their managed offerings with as close to zero downtime as possible.
Vendor Lock-in: It is important to have alternatives to any single CSP and to retain the flexibility to switch when another offers a better value proposition.
Enterprise Adoption: Consider how likely your use of the CSP is to scale across your organization.
Economic Impact: Consider the positive business or financial impacts that result from the service usage at the individual, department, and company-wide levels.
Automation and Deployment: Verify the CSP’s integration capabilities with your organization’s preferred automation tooling and the availability of automated and local testing frameworks.
When modernizing existing applications to take the best advantage of the cloud, cloud technologies like serverless and containers are good options to consider. Serverless computing and containers are cloud-native tools that automate code deployment into isolated environments. Developers can build highly scalable applications with fewer resources within a short time. They both also reduce overhead for cloud-hosted web applications but differ in many ways. Private cloud, hybrid cloud, and multi-cloud approaches to application modernization are worth considering too.
Serverless Computing and Containers
Serverless computing is an execution model in which the CSP runs a piece of code by dynamically allocating resources, charging only for the resources consumed while the code runs. Code is typically run in stateless containers. Various events, such as HTTP requests, monitoring alerts, database events, queuing services, file uploads, and scheduled events (cron jobs), can trigger these functions.
The cloud service provider receives the code as a function to execute, which is why serverless computing is sometimes referred to as a Function-as-a-Service (FaaS) platform. Add that to your list of as-a-Service acronyms: IaaS, PaaS, SaaS, FaaS!
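As a hedged illustration of the FaaS model, here is a minimal Python handler for AWS Lambda invoked through an HTTP request; the event fields follow the API Gateway proxy convention, and the greeting logic is purely illustrative.

```python
# Minimal FaaS sketch: an AWS Lambda handler triggered by an HTTP
# request via API Gateway. The provider allocates resources and bills
# per invocation; no server is provisioned or managed.
import json

def lambda_handler(event, context):
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```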
Containers provide a discrete environment set up within an operating system. They can run one or more applications, typically assigned only those resources necessary for the applications to function correctly. Because containers are smaller and faster than virtual machines, they allow applications to run quickly and reliably across various computing environments. Container images become containers at runtime and include everything needed to run an application: code, runtime, system tools, system libraries, and settings.
Private, Hybrid, and Multi-Cloud
The public cloud is a vital part of any modernization strategy. However, some organizations may not be ready to go directly from the data center to the public cloud. Cloud architects should consider private, hybrid, and multi-cloud strategies in those cases. These models can help resolve architectural, security, or latency concerns, and they reduce the complexity of applying policies to specific workloads based on their unique characteristics.
Conclusion
Migrating to the cloud is an ideal occasion to invest in application modernization, as it can lower your overall operational costs and increase your application’s resiliency. But not all use cases, nor cloud service providers, are the same. You need to do your homework before choosing the one best suited to your business.
2nd Watch offers a comprehensive consulting methodology and proven tools to accelerate your cloud-native and app modernization objectives. Our modernization process begins with a complete assessment of your existing application portfolio to identify which you should keep, replace, retire, or consolidate. We then develop and implement a modernization strategy that best meets your business needs.
From application rationalization to application modernization and beyond, 2nd Watch is your go-to trusted advisor throughout your entire modernization journey.
Contact us to schedule a brief meeting with our specialists to discuss your current modernization objectives.