Data and AI Predictions in 2023

As we reveal our data and AI predictions for 2023, join us at 2nd Watch to stay ahead of the curve and propel your business towards innovation and success. How do we know that artificial intelligence (AI) and large language models (LLMs) have reached a tipping point? They were the hot topic at most families’ dinner tables during the 2022 holiday break.

AI has become mainstream and accessible. Most notably, OpenAI’s ChatGPT took the internet by storm, so much so that even our parents (and grandparents!) are talking about it. Since AI is here to stay beyond the Christmas Eve dinner discussion, we put together a list of 2023 predictions we expect to see regarding AI and data.

1. Proactively handling data privacy regulations will become a top priority.

Regulatory changes can have a significant impact on how organizations handle data privacy: businesses must adapt to new policies to ensure their data is secure. Modifications to regulatory policies require governance and compliance teams to understand data within their company and the ways in which it is being accessed. 

To stay ahead of regulatory changes, organizations will need to prioritize their data governance strategies. This will mitigate the risks surrounding data privacy and prepare them for potential new regulations. As part of that strategy, data privacy and compliance teams must increase their use of privacy, security, and compliance analytics to proactively understand how data is being accessed within the company and how it’s being classified.
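To make that concrete, here is a minimal sketch of what a first pass at access-pattern analytics could look like, assuming a CSV export of query logs and a hand-maintained classification map (the file name, field names, and classifications are all hypothetical placeholders):

```python
# A minimal sketch of access auditing: flag queries that touched columns
# classified as sensitive. The log file, field names, and classification
# map below are hypothetical placeholders for illustration.
import csv
from collections import Counter

SENSITIVE_COLUMNS = {"ssn", "date_of_birth", "email"}  # assumed classification

def audit_access_log(path: str) -> Counter:
    """Count accesses to sensitive columns per user."""
    hits = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):  # expects columns: user, column_accessed
            if row["column_accessed"].lower() in SENSITIVE_COLUMNS:
                hits[row["user"]] += 1
    return hits

if __name__ == "__main__":
    for user, count in audit_access_log("access_log.csv").most_common():
        print(f"{user}: {count} sensitive-column accesses")
```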

2. AI and LLMs will require organizations to consider their AI strategy.

The rise of AI and LLM technologies will require businesses to adopt a broad AI strategy. AI and LLMs will open opportunities in automation, efficiency, and knowledge distillation. But, as the saying goes, “With great power comes great responsibility.” 

There is disruption and risk that comes with implementing AI and LLMs, and organizations must respond with a people- and process-oriented AI strategy. As more AI tools and start-ups crop up, companies should consider how to thoughtfully approach the disruptions that will be felt in almost every industry. Rather than being reactive to new and foreign territory, businesses should aim to educate, create guidelines, and identify ways to leverage the technology. 

Moreover, without a well-thought-out AI roadmap, enterprises will find themselves technologically plateauing, with teams unable to adapt to the new landscape and no return on investment: they won’t be able to scale or support the initiatives they put in place. Poor road mapping will lead to siloed, fragmented projects that don’t contribute to a cohesive AI ecosystem.

3. AI technologies, like Document AI (or information extraction), will be crucial to tap into unstructured data.

According to IDC, 80% of the world’s data will be unstructured by 2025, and 90% of this unstructured data is never analyzed. Integrating unstructured and structured data opens up new use cases for organizational insights and knowledge mining.

Massive amounts of unstructured data – such as Word and PDF documents – have historically been a largely untapped data source for data warehouses and downstream analytics. New deep learning technologies, like Document AI, have addressed this issue and are more widely accessible. Document AI can extract previously unused data from PDF and Word documents, ranging from insurance policies to legal contracts to clinical research to financial statements. Additionally, vision and audio AI unlock real-time video transcription insights and search, image classification, and call center insights.
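As a sketch of how accessible this tooling has become, the following pulls raw text from a local PDF with the pypdf library and asks a generic pre-trained extractive question-answering model from Hugging Face for a field; the file name and question are hypothetical, and production Document AI services apply the same principle at larger scale:

```python
# A minimal sketch of document information extraction:
# pip install pypdf transformers
from pypdf import PdfReader
from transformers import pipeline

def extract_field(pdf_path: str, question: str) -> str:
    # Concatenate the text of every page in the PDF.
    text = " ".join(page.extract_text() or "" for page in PdfReader(pdf_path).pages)
    qa = pipeline("question-answering")  # downloads a default extractive QA model
    return qa(question=question, context=text)["answer"]

print(extract_field("policy.pdf", "What is the policy effective date?"))
```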

Organizations can unlock brand-new use cases by integrating this extracted data with their existing data warehouses. Fine-tuning these general-purpose models on domain-specific data then adapts them to a wide variety of use cases.

4. Data is the new oil.

Data will become the fuel for turning general-purpose AI models into domain-specific, task-specific engines for automation, information extraction, and information generation. Snorkel AI coined the term “data-centric AI,” which is an accurate paradigm for describing our current AI lifecycle. The last time AI received this much hype, the focus was on building new models. Now, very few businesses need to develop novel models and algorithms. What will set their AI technologies apart is their data strategy.

Data-centric AI enables us to leverage existing models and calibrate them to an organization’s data. Applying an enterprise’s data to this new paradigm will accelerate a company’s time to market, especially for those who have modernized their data and analytics platforms and data warehouses.
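Here is a minimal sketch of the data-centric pattern: the pre-trained model stays fixed while the investment goes into curated, labeled domain data. The texts and labels below are toy placeholders:

```python
# pip install sentence-transformers scikit-learn
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression

texts = ["Claim denied due to lapsed coverage", "Quarterly revenue grew 12%"]
labels = ["insurance", "finance"]  # the domain data you curate and grow

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # existing general-purpose model
clf = LogisticRegression().fit(encoder.encode(texts), labels)  # calibrate to domain

print(clf.predict(encoder.encode(["Policyholder filed a new claim"])))
```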

5. The popularity of data-driven apps will increase.

Snowflake recently acquired Streamlit, which makes application development more accessible to data engineers. Additionally, Snowflake introduced Unistore and hybrid tables (OLTP) to allow data science and app teams to work jointly off a single source of truth in Snowflake, eliminating silos and data replication.

Snowflake’s big moves demonstrate that companies are looking to fill gaps that traditional business intelligence (BI) tools leave behind. With tools like Streamlit, teams can automate data sharing and deployment, which has traditionally been manual and Excel-driven. Most importantly, Streamlit can become the conduit that allows business users to work directly with AI-native and data-driven applications across the enterprise.
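For illustration, a minimal Streamlit data app might look like the following; the warehouse query is stubbed with a toy DataFrame, where in practice it would be a Snowflake query:

```python
# A minimal sketch of a Streamlit data app (run with: streamlit run app.py).
import pandas as pd
import streamlit as st

st.title("Daily Sales Explorer")

@st.cache_data  # cache the (stubbed) warehouse pull between interactions
def load_data() -> pd.DataFrame:
    return pd.DataFrame({"region": ["East", "West"], "sales": [1200, 950]})

df = load_data()
region = st.selectbox("Region", df["region"].unique())
st.dataframe(df[df["region"] == region])
st.bar_chart(df.set_index("region")["sales"])
```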

6. AI-native and cloud-native applications will win.

Customers will start expecting AI capabilities to be embedded into cloud-native applications. Harnessing domain-specific data, companies should prioritize building upon modular, data-driven application blocks with AI and machine learning. AI-native applications will win over AI-retrofitted applications.

When applications are custom-built for AI, analytics, and data, they are more accessible to data and AI teams and enable business users to interact with models and data warehouses in new ways. Teams can classify and label data in a centralized, data-driven way, rather than manually and repetitively in Excel, and feed the results into a human-in-the-loop system for review, improving the overall accuracy and quality of models. Traditional BI tools like dashboards, on the other hand, often limit business users to viewing data in a “what happened?” manner rather than interacting with it in a more targeted way.
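A minimal sketch of that human-in-the-loop routing, with a hypothetical stand-in for the model and an assumed confidence threshold:

```python
# Auto-accept confident predictions; queue low-confidence ones for human review.
from typing import Callable, Tuple

def route(predict: Callable[[str], Tuple[str, float]], items: list[str],
          threshold: float = 0.85):
    accepted, review_queue = [], []
    for item in items:
        label, confidence = predict(item)
        (accepted if confidence >= threshold else review_queue).append((item, label))
    return accepted, review_queue  # reviewed labels feed back into training data

# Toy "model" for illustration only.
fake_model = lambda text: ("invoice", 0.95) if "invoice" in text else ("other", 0.40)
auto, manual = route(fake_model, ["invoice #42", "misc note"])
print("auto-labeled:", auto, "\nneeds review:", manual)
```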

7. There will be technology disruption and market consolidation.

The AI race has begun. Microsoft’s strategic partnership with OpenAI and integration of AI into “everything,” Google’s introduction of Bard and investment in foundation-model startup Anthropic, AWS’s own native models and partnership with Stability AI, and a wave of new AI-related startups are just a few of the major signals that the market is changing. Emerging AI technologies are driving market consolidation: smaller companies are being acquired by incumbents looking to take advantage of the developing technology.

Mergers and acquisitions are key growth drivers, with larger enterprises leveraging their existing resources to acquire smaller, nimbler players to expand their reach in the market. This emphasizes the importance of data, AI, and application strategy. Organizations must stay agile and quickly consolidate data across new portfolios of companies. 

Conclusion

The AI ball is rolling. At this point, you’ve probably dabbled with AI or engaged in high-level conversations about its implications. The next step in the AI adoption process is to actually integrate AI into your work and understand the changes (and challenges) it will bring. We hope that our data and AI predictions for 2023 prime you for the ways it can have an impact on your processes and people.

Why choose 2nd Watch?

Choose 2nd Watch as your partner and let us empower you to harness the power of AI and data to propel your business forward.

  • Expertise: With years of experience in cloud optimization and data analytics, we have the expertise to guide you through the complexities of AI implementation and maximize the value of your data.
  • Comprehensive Solutions: Our range of services covers every aspect of your AI and data journey, from cost analysis and optimization to AI strategy development and implementation. We offer end-to-end solutions tailored to your specific needs.
  • Proven Track Record: Our track record speaks for itself. We have helped numerous organizations across various industries achieve significant cost savings, improve efficiency, and drive innovation through AI and data-driven strategies.
  • Thoughtful Approach: We understand that implementing AI and data solutions requires a thoughtful and strategic approach. We work closely with you to understand your unique business challenges and goals, ensuring that our solutions align with your vision.
  • Continuous Support: Our commitment to your success doesn’t end with the implementation. We provide ongoing support and monitoring to ensure that your AI and data initiatives continue to deliver results and stay ahead of the curve.

Contact us now to get started on your journey towards transformation and success.


Evolving Operations to Maximize AWS Cloud Native Services

As a Practice Director of Managed Cloud Services, my team and I see well-intentioned organizations fall victim to a very common scenario: the business migrates from its data center to Amazon Web Services (AWS), but its system operations team doesn’t make adjustments for the new environment. The team attempts to continue performing the same activities it did when its physical hardware resided in a data center or at another hosting provider.

The truth is, modernizing your monolithic applications and infrastructure requires new skill sets, knowledge, expertise, and understanding to get the desired results. Unless you’re a sophisticated, well-funded start-up, chances are your organization doesn’t know where to begin after the migration is complete. The transition from deploying legacy software in your own data center to utilizing Elastic Kubernetes Service (EKS) and microservices, while deploying code through an automated continuous integration and continuous delivery (CI/CD) pipeline, is a whole new ballgame, to say nothing of keeping it all functioning after it is deployed.

In this article, I’m providing some insight on how to overcome the stagnation that hits post-migration. With forethought, AWS understanding, and a reality check on your internal capabilities, organizations can thrive with cloud-native services. At the same time, kicking issues downstream, maintaining inefficiencies, and failing to address new system requirements will compromise the ROI and assumed payoffs of modernization.

Is Your Team Prepared?

Sure, going serverless with Lambda might be all the buzz right now, but it’s not something you can effectively accomplish overnight. Running workloads on cloud-native services and platforms requires a different way of operating, and these new operational demands require that your internal teams be equipped with new skill sets. Unfortunately, a team that mastered the old data center or dedicated hosting environment may not be able to jump right in on AWS.

The appeal of AWS is the flexibility it offers to drive your business and solve unique challenges. However, the ability to provision and decommission resources on demand also introduces new complexities. If these challenges are not addressed early on, friction between teams can damage collaboration and adoption, the potential for system sprawl increases, and cost overruns can compromise the legitimacy and longevity of modernization.

Due to the high cost and small talent pool of technically proficient cloud professionals, many organizations struggle to attract these highly desired employees. Luckily, modern cloud managed service providers can help you wade through the multitude of services AWS introduces. With a trusted and experienced partner by your side, your business can gain the knowledge necessary to drive efficiencies and solve unique challenges. Depending on the level of interaction, existing team members may be able to level up and better manage AWS growth going forward. In the meantime, involving a third-party cloud expert is a quick and efficient way to make sure post-migration change management evolves with your goals, design, timeline, and promised outcomes.

Are You Implementing DevOps?

Modern cloud operations and optimizations address the day two necessities that go into the long-term management of AWS. DevOps principles and automation need to be heavily incorporated into how the AWS environment operates. With hundreds of thousands of distinct prices and technical combinations, even the most experienced IT organizations can get overwhelmed.

Consider traditional operations management versus cloud-based DevOps. The former is a physical hardware deployment that requires logging into the system to perform configurations and then deploying software on top. It’s slow, tedious, and causes a lag for developers as they wait for feature delivery, which negatively impacts productivity. Instead of system administrators performing monthly security patching by logging into each instance separately, a modern cloud operation can utilize a pipeline with infrastructure as code: update your configuration files to reference a new image, then use infrastructure automation to redeploy. This treats each instance as ephemeral, minimizing friction and delay for developer teams.
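As a sketch of that pattern using boto3, the following points an EC2 launch template at a freshly patched image and rolls the Auto Scaling group with an instance refresh; the template ID, group name, and AMI ID are hypothetical placeholders:

```python
# A minimal sketch of immutable patching: swap the image, replace the instances.
import boto3

ec2 = boto3.client("ec2")
asg = boto3.client("autoscaling")

# Publish a new launch template version that references the patched AMI.
ec2.create_launch_template_version(
    LaunchTemplateId="lt-0123456789abcdef0",
    SourceVersion="$Latest",
    LaunchTemplateData={"ImageId": "ami-0newpatchedimage"},
)

# Replace running instances instead of patching them in place.
asg.start_instance_refresh(
    AutoScalingGroupName="web-tier-asg",
    Preferences={"MinHealthyPercentage": 90},
)
```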

This is just one example of how DevOps can and should be used to achieve strong availability, agility, and profitability. Measuring DevOps with the CALMS model provides a guideline for addressing the five fundamental elements of DevOps: Culture, Automation, Lean, Measurement, and Sharing. Learn more about DevOps in our eBook, 7 Major Roadblocks in DevOps Adoption and How to Address Them.

Do You Continue With The Same Behavior?

Monitoring CPU, memory, and disk at the traditional thresholds used on legacy hardware is not necessarily appropriate when utilizing AWS EC2. To achieve the financial and performance benefits of the cloud, you purposely design systems and applications to use, and pay for, only the resources they require. New cloud-native technologies, such as Kubernetes and serverless, require that you monitor in different ways to reduce the abundance of unactionable alerts that eventually become noise.

For example, when running a Kubernetes cluster, you should implement monitoring that compares the number of desired pods with the number currently running. A big difference between the two might point to resource problems, where your nodes lack the capacity to launch new pods. With a modern managed cloud service provider, cloud operations engineers receive the alert and begin investigating the cause to ensure uptime and continuity for application users. With fewer unnecessary alerts and an escalation protocol for the appropriate parties, triage can be done more quickly, and in many cases remediation can be automated, allowing for more efficient resource allocation.
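A minimal sketch of that desired-versus-running check, using the official Kubernetes Python client with a print statement standing in for real alert wiring:

```python
# pip install kubernetes
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside the cluster
apps = client.AppsV1Api()

for dep in apps.list_deployment_for_all_namespaces().items:
    desired = dep.spec.replicas or 0
    ready = dep.status.ready_replicas or 0
    if ready < desired:  # a persistent gap may mean nodes lack capacity
        print(f"ALERT {dep.metadata.namespace}/{dep.metadata.name}: "
              f"{ready}/{desired} pods ready")
```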

How Are You Cutting Costs?

Many organizations initiate cloud migration and modernization to gain cost-efficiency. Of course, these financial benefits are only accessible when modern cloud operations are fully in place.

Anyone can create an AWS account, but not everyone has visibility into, or concern for, budgetary costs, so costs can quickly exceed expectations. This is where establishing a strong governance model and expanding automation can help you achieve your cost-cutting goals. You can limit instance-size deployment using IAM policies to ensure larger, more expensive instances are not unnecessarily utilized. Another cost that can quickly grow without proper controls is S3 storage. Enabling policies that expire and automatically delete objects can help curb an explosion in storage costs. Enacting policies like these requires that your organization take the time to think through its governance approach and implement it.
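As a sketch of both controls using boto3, the following creates an IAM policy that denies launches of instance types outside an approved list and adds an S3 lifecycle rule that expires objects under a temporary prefix; the policy name, bucket, prefix, and instance types are hypothetical choices:

```python
import json
import boto3

# Deny RunInstances for anything outside the approved instance types.
iam_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": "ec2:RunInstances",
        "Resource": "arn:aws:ec2:*:*:instance/*",
        "Condition": {"StringNotEquals": {"ec2:InstanceType": ["t3.micro", "t3.small"]}},
    }],
}
boto3.client("iam").create_policy(
    PolicyName="LimitInstanceSizes", PolicyDocument=json.dumps(iam_policy))

# Automatically expire temporary objects after 30 days.
boto3.client("s3").put_bucket_lifecycle_configuration(
    Bucket="example-analytics-bucket",
    LifecycleConfiguration={"Rules": [{
        "ID": "expire-temp-objects",
        "Filter": {"Prefix": "tmp/"},
        "Status": "Enabled",
        "Expiration": {"Days": 30},
    }]},
)
```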

Evolving in the cloud can reduce computing costs by 40-60% while increasing efficiency and performance. However, those results are not guaranteed. Download our eBook, A Holistic Approach to Cloud Cost Optimization, to ensure a cost-effective cloud experience.

How Will You Start Evolving Now?

Time is of the essence when it comes to post-migration outcomes – and the board and business leaders around you will be expecting results. As your organization looks to leverage AWS cloud-native services, your development practices will become more agile and require a more modern approach to managing the environment. To keep up with these business drivers, you need a team to serve as your foundation for evolution.

2nd Watch works alongside organizations to help start or accelerate your cloud journey to become fully cloud native on AWS. With more than 10 years of experience migrating, operating, and effectively managing workloads on AWS, 2nd Watch can help your operations staff evolve to operate in a modern way and achieve significant goals. Are you ready for the next step in your cloud journey? Contact us and let’s get started.

 


App Modernization in the Cloud

The cloud market is maturing, and organizations worldwide are well into implementing their cloud strategies. In fact, a recent McKinsey survey estimates that, by 2022, 75% of all workloads will be running in either public or private clouds. Additionally, according to VMware, 72% of businesses are looking for a path forward for their existing applications, and it is important to consider an app modernization strategy as part of these migration efforts.

Whether it be a desire to containerize, utilize cloud-native services, increase agility, or realize cost savings, the overall goal should be to deliver business value faster in the rapidly changing cloud environment.

Modern Application

Application modernization focuses on legacy or “incumbent” line-of-business applications, and approaches range anywhere from re-hosting from the data center to the cloud, to full cloud-native application rewrites. We prefer to take a pragmatic approach: address the issues with legacy applications that hinder organizations from realizing the benefits of modern software and cloud-native approaches, while retaining as much as possible of the intellectual property built into incumbent applications over the years. Additionally, we find ways of augmenting existing code bases to make use of modern paradigms.

Application Modernization Strategies

When approaching legacy software architecture, people often discuss breaking monolithic applications apart into microservices. However, the most important architectural decisions should center on how to best allow the application to function well in the cloud, with scalability, fault-tolerance, and observability all being important aspects. A popular approach is to consider the tenets of the 12-Factor App to help guide these decisions.
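As one concrete example of a 12-Factor tenet (configuration lives in the environment, not in the code), a minimal sketch with illustrative variable names:

```python
# The same build runs in any environment because settings are injected.
import os

class Config:
    DATABASE_URL = os.environ["DATABASE_URL"]          # required; fails fast if unset
    CACHE_URL = os.environ.get("CACHE_URL", "redis://localhost:6379")
    LOG_LEVEL = os.environ.get("LOG_LEVEL", "INFO")

if __name__ == "__main__":
    print(f"Connecting to {Config.DATABASE_URL} (log level {Config.LOG_LEVEL})")
```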

Architecture discussions go hand in hand with platform considerations. Containerization and serverless functions are popular approaches, but traditional VM clustering or even self-hosting are equally valid. Additionally, we start to think about utilizing cloud services to offload some application complexity, such as AWS S3 for document storage or AWS KMS for key management. This leads us to consider which cloud provider best fits the organization and its applications overall, whether AWS, Azure, Google Cloud Platform (GCP), or a hybrid-cloud solution.
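A minimal sketch of the KMS case with boto3, encrypting a small secret under a managed key rather than rolling your own crypto; the key alias is a hypothetical placeholder:

```python
import boto3

kms = boto3.client("kms")

# Encrypt a small secret under a key that KMS creates, stores, and rotates.
ciphertext = kms.encrypt(
    KeyId="alias/app-secrets",
    Plaintext=b"database-password",
)["CiphertextBlob"]

plaintext = kms.decrypt(CiphertextBlob=ciphertext)["Plaintext"]
assert plaintext == b"database-password"
```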

Another very important aspect of application modernization, especially in the cloud, is ensuring that applications have proper automation.

Strong continuous integration and continuous deployment (CI/CD) pipelines should be implemented or enhanced for legacy applications. Additionally, we apply CI/CD automation for deploying database migrations and performing infrastructure-as-code (IaC) updates, and we ensure paradigms like immutable infrastructure (i.e., pre-packaging machine images or utilizing containerization) are followed.

Last, there is an important cultural aspect to modernization from an organizational to team level. Organizations must consider modernization a part of their overall cloud strategy and support their development teams in this area. Development teams must adapt to new paradigms to understand and best utilize the cloud – adopting strong DevOps practices and reorganizing teams along business objectives instead of technology objectives is key.

By implementing a solid modernization strategy, businesses can realize the benefits the cloud provides, deliver value to their customers more rapidly, and compete in a rapidly changing cloud environment. If you’re ready to implement a modernization strategy in your organization, contact us for guidance on how to get started. Learn more about application modernization here.

– James Connell, Sr Cloud Consultant


What is App Modernization and Why is it Important?

Modernizing software architecture is often described as splitting a monolithic codebase, but it can imply any improvement to the software itself, such as decoupling components or addressing tech debt in the codebase. Other examples might be finding new design patterns that allow for scale, addressing resiliency within an application, or improving observability through logs and tracing.

What is Meant by App Modernization?

Application modernization is the process of migrating an incumbent or legacy software application to modern development patterns, paradigms, and platforms with the explicit purpose of improving business value. It’s part of your overall cloud strategy and implies improving the software architecture, application infrastructure, development techniques, and business strategy using a cloud-native approach. Essentially, it allows you to derive increased business value from your existing application code.

 

We often think of application modernization in the context of the cloud. When planning a migration to the cloud, or when modernizing an application already there, we look at which services and platforms would benefit the effort. Utilizing a service such as Amazon S3 for serving documents instead of a network share, or Elasticsearch instead of the database for search, are examples of infrastructure improvements. Containerization and serverless platforms are also considered.
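A minimal sketch of the S3 case with boto3: upload a document once, then hand out short-lived presigned URLs instead of exposing a network share (bucket and key names are hypothetical):

```python
import boto3

s3 = boto3.client("s3")
s3.upload_file("contract.pdf", "example-docs-bucket", "contracts/contract.pdf")

# Generate a link that grants temporary read access to just this object.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "example-docs-bucket", "Key": "contracts/contract.pdf"},
    ExpiresIn=3600,  # link expires after one hour
)
print(url)
```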

Development techniques also need to be addressed in the context of modernization. Developers should focus on the parts of the application that deliver value to customers and provide competitive advantage.

If developers are focused on maintenance, long manual deployments, bugs, and log investigation, they are unable to deliver value quickly.

When working with modern distributed cloud applications, teams need to follow strong DevOps practices in order to be successful. CI/CD, unit testing, diagnostics and alerting are all areas that development teams can focus on modernizing.

Legacy Application and Legacy Systems

In this context, legacy software refers to an incumbent application or system that blocks or slows an organization’s ability to accomplish its business goals. These systems still provide value and are great candidates for modernization.

Legacy can imply many things, but some common characteristics of legacy apps are:

  • Applications that run older libraries, outdated frameworks, or development platforms or operating systems that are no longer supported.
  • Architectural issues – monolithic or tightly coupled systems can lead to difficulties in deployment, long release cycles and high defect rates.
  • Large amounts of technical debt, dead or unused code, teams who no longer understand how older parts of the application work, etc.
  • Security issues caused by technical debt, outdated security paradigms, unpatched operating systems, and improper secret management.
  • Lack of instrumentation, with no way to observe the application.
  • Session state held in server memory (requiring sticky sessions, etc.).
  • Manual deployment, or deployment that must happen in specific ways due to tight coupling.

Pillars of Application Modernization

When approaching a modernization project, we specifically look to ensure the following:

Flexible Architecture

The modernization initiative should follow a distributed computing approach, meaning it should take advantage of concepts such as elasticity, resiliency, and containerization. Converting applications to adhere to the principles of the 12-Factor App in order to take advantage of containerization is a prime example.

Automation

The application must be built, tested, and deployed using modern CI/CD processes. Older source control paradigms such as RCS or SVN should be replaced with a distributed version control system such as Git. Infrastructure as code should be included as part of the CI/CD system.

Observability

Holistically integrate logs, metrics, and events, enabling “the power to ask new questions of your system, without having to ship new code or gather new data in order to ask those new questions” (Charity Majors, https://www.honeycomb.io/blog/observability-a-manifesto). Observability is key to understanding performance, error rates, and communication patterns, and it enables you to measure your system and establish baselines.
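As one small building block, here is a minimal sketch of structured JSON logging with the Python standard library, so events can be queried later without shipping new code; the field names are illustrative:

```python
import json
import logging
import time

class JsonFormatter(logging.Formatter):
    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "ts": time.time(),
            "level": record.levelname,
            "msg": record.getMessage(),
            **getattr(record, "ctx", {}),  # arbitrary per-event context
        })

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
log = logging.getLogger("app")
log.addHandler(handler)
log.setLevel(logging.INFO)

log.info("order placed", extra={"ctx": {"order_id": 42, "latency_ms": 118}})
```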

Culture

Application teams should be aligned along business function, not technology, meaning multi-disciplinary teams that can handle operations (DevOps), database, testing (QA) and development. A culture of ownership is important in a cloud-native application.

App Modernization Examples

Application Modernization is not:

  • Just containerization – To take full advantage of containerization, applications must be properly architected (12-factor), instrumented for observability and deployed using CI/CD.
  • Just technical solutions adopting the latest framework or technology – The technology might be “modern” in a sense but doesn’t necessarily address cultural or legacy architectural issues.
  • Just addressing TCO – Addressing cost savings without addressing legacy issues does not constitute modernization.
  • Just running a workload in the cloud.
  • Just changing database platforms – Licensing issues or the desire to move to open-source clustered cloud databases does not equate to modernization.
  • Limited to specific programming languages or cloud providers – even a hybrid-cloud approach can be deployed.

Application modernization includes, among others, combinations of:

  • Moving a SaaS application from a single-tenant to a multi-tenant environment.
  • Breaking up a monolithic application into microservices.
  • Applying event-driven architecture to decouple and separate concerns.
  • Utilizing cloud services such as S3 to replace in-house solutions.
  • Refactoring to use NoSQL technologies such as MongoDB, Elasticsearch, or Redis.
  • Containerization and utilization of platforms such as Kubernetes or Nomad.
  • Utilization of serverless (FaaS) technologies such as AWS Lambda, Azure Functions, OpenFaaS, or Kubeless.
  • Creating strong API abstractions like REST or gRPC and utilizing API gateways.
  • Transitioning to client-side rendering frameworks (React, Vue.js, etc.) and serverless edge deployment of UI assets, removing the web server.
  • Moving long-running synchronous tasks to asynchronous batch processes (see the sketch after this list).
  • Utilizing saga patterns or business process workflows.

Ultimately, these efforts focus on enhancing business applications, improving customer experience, and enabling rapid digital transformation for the organization.
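Here is the sketch referenced above: moving a long-running synchronous task to an asynchronous batch process with Amazon SQS via boto3. The queue URL is a hypothetical placeholder, and the “slow work” is stubbed with a print statement:

```python
import json
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/report-jobs"

def enqueue_report(customer_id: str) -> None:
    """Called by the request path; returns immediately instead of blocking."""
    sqs.send_message(QueueUrl=QUEUE_URL,
                     MessageBody=json.dumps({"customer_id": customer_id}))

def worker_loop() -> None:
    """Runs in a separate batch process or container, draining the queue."""
    while True:
        resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=10,
                                   WaitTimeSeconds=20)  # long polling
        for msg in resp.get("Messages", []):
            job = json.loads(msg["Body"])
            print("generating report for", job["customer_id"])  # the slow work
            sqs.delete_message(QueueUrl=QUEUE_URL,
                               ReceiptHandle=msg["ReceiptHandle"])
```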

If you’re ready to start considering application modernization in your organization, contact us for guidance on how to get started.

-James Connell, Sr Cloud Consultant


Ransomware Attack Leaves Some Companies WannaCrying Over Technical Debt

The outbreak of a virulent strain of ransomware, alternately known as WannaCry or WannaCrypt, is finally winding down. A form of malware, the WannaCry attack exploited certain vulnerabilities in Microsoft Windows and infected hundreds of thousands of Windows computers worldwide.  As the dust begins to settle, the conversation inevitably turns to what could have been done to prevent it.

The first observation is that most organizations could have been protected simply by following best practices: most notably, the regular installation of known security and critical patches that help to minimize vulnerabilities. WannaCry was not an exotic “zero day” incident. The patch for the underlying vulnerabilities (MS17-010) has been available since March. Companies like 2nd Watch maintain a regular patch schedule to protect their systems from these and similar attacks. It should be noted that due to the prolific nature of this malware and the active attack vectors, 2nd Watch is requiring that all Windows systems be patched by 5/31/2017.
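As a sketch of what automating that schedule can look like, the following uses boto3 to run the managed AWS-RunPatchBaseline document through AWS Systems Manager; the instance ID is a hypothetical placeholder:

```python
import boto3

ssm = boto3.client("ssm")

# Scan for and install missing patches on the targeted instances.
ssm.send_command(
    InstanceIds=["i-0123456789abcdef0"],
    DocumentName="AWS-RunPatchBaseline",  # AWS-managed patching document
    Parameters={"Operation": ["Install"]},
)
```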

Other best practices include:

  • Maintaining support contracts for out-of-date operating systems
  • Enabling firewalls, in addition to intrusion detection and prevention systems
  • Proactively monitoring and validating traffic going in and out of the network
  • Implementing security mechanisms for other points of entry attackers can use, such as email and websites
  • Deploying application control to prevent suspicious files from executing in addition to behavior monitoring that can thwart unwanted modifications to the system
  • Employing data categorization and network segmentation to mitigate further exposure and damage to data
  • Backing up important data. This is the single most effective way of combating ransomware infection. However, organizations should ensure that backups are appropriately protected or stored offline so that attackers can’t delete them (see the sketch below).
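Here is the sketch referenced above: a minimal example of attacker-resistant backups using S3 Object Lock via boto3, so backup objects cannot be deleted before a retention window ends. The bucket (which must be created with Object Lock enabled) and file names are hypothetical:

```python
import boto3

s3 = boto3.client("s3")

# Default retention: nothing in this bucket can be deleted for 30 days.
s3.put_object_lock_configuration(
    Bucket="example-backup-bucket",
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 30}},
    },
)
s3.upload_file("db-backup.tar.gz", "example-backup-bucket",
               "backups/db-backup.tar.gz")
```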

 

The importance of regularly scheduled patching and keeping systems up-to-date cannot be overemphasized. It may not be sexy, but it is highly effective.

All of these recommendations seem simple enough, but why did the outbreak spread so quickly if the vulnerabilities were known and patches were readily available? It spread because the patches were released for currently supported systems, but the vulnerability has been present in all versions of Windows dating back to Windows XP. For these older systems – no longer supported by Microsoft but still widely used – the patches weren’t there in the first place. One of the highest profile victims, Britain’s National Health Service, discovered that 90 percent of NHS trusts run at least one Windows XP device, an operating system Microsoft first introduced in 2001 and hasn’t supported since 2014. In fact, it was only because of the high-profile nature of this malware that Microsoft took the rare step this week of publishing a patch for Windows XP, Windows Server 2003 and Windows 8.

This brings us to the challenging topic of “technical debt”—the extra cost and effort to continue using older technology. The WannaCry/WannaCrypt outbreak is simply the most recent teachable moment about those costs.

A big benefit of moving to cloud computing is its ability to help rid one’s organization of technical debt. By migrating workloads into the cloud, and even better, by evolving those workloads into modern, cloud-native architectures, the issue of supporting older servers and operating systems is minimized. As Gartner pointed out in the 2017 Gartner Magic Quadrant for Public Cloud Infrastructure Managed Service Providers, Worldwide, “through 2018, the cloud managed service market will remain relatively immature, and more than 75% of fully successful implementations will be delivered by highly skilled, forward-looking, boutique managed service providers with a cloud-native, DevOps-centric service delivery approach” – providers like 2nd Watch. A free download of the report can be found here.

Partners like 2nd Watch can also help reduce your overall management cost by tailoring solutions to manage your infrastructure in the cloud. The best practices mentioned above – regular patching, resource isolation, traffic monitoring, etc. – can be automated in many environments and are done for you so you can focus on your business.

Even more important, companies like 2nd Watch help ensure the ongoing optimization of your workloads, from both a cost and a performance point of view. The life cycle of optimization and modernization of your cloud environments is perhaps the single greatest mechanism to ensure that you never take on and retain high levels of technical debt.

 

-John Lawler, Sr Product Manager


2nd Watch Meets Customer Demands and Prepares for Continued Growth and Acceleration with Amazon Aurora

The Product Development team at 2nd Watch is responsible for many technology environments that support our software and solutions and, ultimately, our customers. These environments need to be easily built, maintained, and kept in sync. In 2016, 2nd Watch performed an analysis of the amount of AWS billing data we had collected and the number of payer accounts we had processed over the course of the previous year. Our analysis showed that these measurements had more than tripled from 2015, and projections showed that we would continue to grow at the same rapid pace, with AWS usage and client onboarding increasing daily. Knowing that data storage is critical for many systems, our Product Development team evaluated the database architecture used to house our company’s billing data: a single instance running the Web edition of SQL Server with the maximum number of EBS volumes attached.

During the evaluation, performance, scaling, availability, maintenance, and cost were considered and deemed most important for future success. The evaluation revealed that our billing database architecture could not meet the criteria laid out to keep pace with growth. We considered scaling the VM up to the largest size in its family or potentially upgrading to MS SQL Enterprise; in either scenario, the cost of the MS SQL instance doubled, and continuing to scale vertically would bring diminishing performance gains. Maintenance of the database had also become a full-time job that was increasingly difficult to manage.

Ultimately, we chose the cloud-native solution, Amazon Aurora, for its scalable, low-risk, easy-to-use technology. Amazon Aurora is a MySQL-compatible relational database that provides speed and reliability at a lower cost. It offers greater than 99.99% availability and can store up to 64TB of data. Aurora is self-healing and fully managed, which, along with its other key features, made it an easy choice as we continue to meet the AWS billing usage demands of our customers and prepare for future growth.
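For illustration, a minimal boto3 sketch of provisioning an Aurora MySQL cluster with automated backups; all identifiers and credentials are hypothetical placeholders (in practice our environments are built with Terraform, as described below):

```python
import boto3

rds = boto3.client("rds")

# The cluster owns the storage layer and automated backups.
rds.create_db_cluster(
    DBClusterIdentifier="billing-aurora",
    Engine="aurora-mysql",
    MasterUsername="admin",
    MasterUserPassword="change-me-please",
    BackupRetentionPeriod=7,  # automated backups; no disks to manage
)

# Add a writer instance to the cluster.
rds.create_db_instance(
    DBInstanceIdentifier="billing-aurora-writer",
    DBClusterIdentifier="billing-aurora",
    DBInstanceClass="db.r5.large",
    Engine="aurora-mysql",
)
```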

The conversion from MS SQL to Amazon Aurora was successfully completed in early 2017, and with the benefits and features Amazon Aurora offers, we made gains in multiple areas. Product Development can now reduce the complexity of database schemas because of the way Aurora stores data. For example, a database with one hundred tables and hundreds of stored procedures was reduced to one table with 10 stored procedures. We gained performance as well: the billing system produces thousands of queries per minute, and Amazon Aurora handles the load, scaling to accommodate the increasing number of queries. Maintenance of the Amazon Aurora system is now largely automated; tasks such as database backups happen automatically, without the complicated work of managing disks. Additionally, data is copied across six replicas in three Availability Zones, which ensures availability and durability.

With Amazon Aurora, every environment is now easily built and set up using Terraform. All infrastructure is provisioned automatically, from the web tier to the database tier, with Amazon CloudWatch Logs alerts to notify the company when issues occur. Data can easily be imported using automated processes and even anonymized if there is sensitive data or the environment is used for customer demos. With the conversion of our database architecture from a single MS SQL Server instance to Amazon Aurora, our Product Development team can now focus on accelerating development instead of maintaining its data storage system.