McDonald’s France Gains Business-Changing Insights from New Data Lake

McDonald’s is famous for cheeseburgers and fries, but with 1.5 million customers a day, and each transaction producing 20 to 30 data points, it has also become a technology organization. With the overarching goal of improving customer experience, and as a byproduct increasing conversion and brand loyalty, McDonald’s France partnered with 2nd Watch to build a data lake on AWS.

Customer Priorities Require Industry Shifts

As is common in many industries today, the fast-food industry has shifted from a transaction-centric view to a customer-centric view. The emphasis is no longer on customer satisfaction alone, but on customer experience. It’s this variable that impacts conversion rate and instills loyalty. Consequently, McDonald’s wanted to build a complete picture of a customer’s lifetime value, with visibility into each step of their journey. Understanding likes and dislikes based on data would give McDonald’s the opportunity to improve the experience at a variety of touchpoints across global locations.

McDonald’s is a behemoth in its size, multi-national reach, and the abundance of data it collects. Making sense of that data required a new way of storing and manipulating it, with flexibility and scalability. The technology necessary to accomplish McDonald’s data goals has dropped significantly in cost while increasing in efficiency – key catalysts for initiating the project within McDonald’s groups, gaining buy-in from key stakeholders, and engaging quickly.

From Datacenter to Data Lake

To meet its data collection and analysis needs, McDonald’s France needed a fault-tolerant data platform built on a scalable data processing architecture and a loosely coupled distributed system. But the McDonald’s team needed to focus on data insights rather than data infrastructure, so they partnered with 2nd Watch to move from a traditional data warehouse to a data lake, allowing them to reduce the effort required to analyze or process data sets for different properties and applications.

During the process, McDonald’s emphasized the importance of ongoing data collection from anywhere and everywhere across their many data sources. From revenue numbers and operational statistics to social media streams, kitchen management systems, commercial, regional, and structural data – they wanted everything stored for potential future use. Historical data will help to establish benchmarks, forecast sales projections, and understand customer behavior over time.

The Data Challenges We Expect…And More

With so much data available, and the goal of improving customer experience as motivation, McDonald’s France wanted to prioritize three types of data – sales, speed of service, and customer experience. Targeting specific sets of data helps to reduce the data inconsistencies every organization faces in a data project. While collecting, aggregating, and cleaning data is a huge feat on its own, McDonald’s France also had to navigate a high level of complexity.

As an omnichannel restaurant, McDonald’s juggles information from point-of-sale systems with sales happening online, offline, and across dozens of different locations. Data sources include multiple data vendors, mobile apps, loyalty programs, customer relationship management (CRM) tools, and other digital interfaces. Combined in one digital ecosystem, this data is the force that drives the entire customer journey. Once it’s all there, the challenge is to find the link for any given customer that transforms the puzzle into a holistic picture.

Endless Opportunities for the Future

McDonald’s France now has visibility into speed of service with a dedicated dashboard and can analyze and synthesize that data. National teams can make accurate, data-based decisions using the dashboard and implement logistical changes in operations. They’re able to improve operational efficiency by using knowledge of prep times to influence fulfillment.

The data lake was successful in showing the organization where it was losing opportunities by not taking advantage of the data it had. McDonald’s also proved it was possible, affordable, and advantageous to invest in data. While their data journey has only begun, these initial steps opened the door to new data usage possibilities. The models established by McDonald’s France will be used as an example to expand data investments throughout the McDonald’s corporation.

If your organization is facing a similar issue of too much data and not enough insight, 2nd Watch can help. Our data and analytics solutions help businesses make better decisions, faster, with a modern data stack in the cloud. Contact Us to start talking about the tools and strategies necessary to reach your goals.

-Ian Willoughby, Chief Architect and Vice President

Listen to the McDonald’s team talk about this project on the 2nd Watch Cloud Crunch podcast.


The Most Popular and Fastest-Growing AWS Products of 2021

Enterprise IT departments are increasing cloud usage at an exponential rate. These tools and technologies enable greater innovation, cost savings, flexibility, productivity, and faster time to market, ultimately facilitating business modernization and transformation.

Amazon Web Services (AWS) is a leader among IaaS vendors, and every year around this time, we look back at the most popular AWS products of the past year, based on the percentage of 2nd Watch clients using them. We also evaluate the fastest-growing AWS products, based on how much spend our clients are putting towards various AWS products compared to the year before.

We’ve categorized the lists into the “100%s” and the “Up-and-Comers.” The 100%s are products that were used by all of our clients in 2020 – those products and services that are nearly universal and necessary in a basic cloud environment. The Up-and-Comers are the five fastest-growing products of the past year. We also highlight a few products that didn’t make either list but are noteworthy and worth watching.

12 Essential AWS Products

In 2020, there were 12 AWS products that were used by 100% of our client base:

  • AWS CloudTrail
  • AWS Key Management Service
  • AWS Lambda
  • AWS Secrets Manager
  • Amazon DynamoDB
  • Amazon Elastic Compute Cloud
  • Amazon Relational Database Service
  • Amazon Route 53
  • Amazon Simple Notification Service
  • Amazon Simple Queue Service
  • Amazon Simple Storage Service
  • Amazon CloudWatch

Why were these products so popular in 2020? For the most part, products that are universally adopted reflect the infrastructure that is required to run a modern AWS cloud footprint today.

Products in the 100%s club also demonstrate how AWS has made a strong commitment to the integration and extension of the cloud-native management tools stack, so external customers can have access to many of the same features and capabilities used in their own internal services and infrastructure.

AWS Trending Products and Services

The following AWS products were the fastest growing in 2020:

  • AWS Systems Manager
  • Amazon Transcribe
  • Amazon Comprehend
  • AWS Support BJS (Business)
  • AWS Security Hub

The fastest-growing products in 2020 seem to be squarely focused on digital applications in some form, whether converting and analyzing speech and text with machine learning (Transcribe and Comprehend) or protecting those applications and ensuring better security management overall (Security Hub). This is a bit of a change from 2019, when the fastest-growing products were focused on application orchestration (AWS Step Functions) or infrastructure topics, with products like Cost Explorer, Key Management Service, and Container Service.

With a huge demand for data analytics and machine learning across enterprise organizations, utilizing services such as Comprehend and Transcribe allows you to gather insights into customer sentiment when examining customer reviews, support tickets, social media, etc. Businesses can use the services to extract key phrases, places, people, brands, or events, and, with the help of machine learning, gain an understanding of how positive or negative conversations were conducted. This provides a company with a lot of power to modify practices, offerings, and marketing messaging to enhance customer relationships and improve sentiment.
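As a rough illustration of that workflow, the sketch below uses boto3 to run a single customer review through Amazon Comprehend for sentiment, key phrases, and entities. The review text, region, and printed fields are illustrative assumptions, not values from any client data.

```python
# A minimal sketch, assuming a customer review stored as plain text.
import boto3

comprehend = boto3.client("comprehend", region_name="us-east-1")

review = "The drive-thru was fast and the staff were friendly, but my order was missing fries."

# Overall sentiment (POSITIVE, NEGATIVE, NEUTRAL, or MIXED) with confidence scores.
sentiment = comprehend.detect_sentiment(Text=review, LanguageCode="en")
print(sentiment["Sentiment"], sentiment["SentimentScore"])

# Key phrases give a quick view of what the customer is actually talking about.
phrases = comprehend.detect_key_phrases(Text=review, LanguageCode="en")
print([p["Text"] for p in phrases["KeyPhrases"]])

# Entities surface brands, places, people, and events mentioned in the text.
entities = comprehend.detect_entities(Text=review, LanguageCode="en")
print([(e["Type"], e["Text"]) for e in entities["Entities"]])
```

For audio sources such as call-center recordings, the same pattern applies after a transcription job in Amazon Transcribe produces the text.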

Emerging Technology

The following products were new to our Most Popular list in 2020 and therefore are worth watching:

AWS X-Ray allows users to understand how their application and its underlying services are performing so they can identify and troubleshoot the root cause of performance issues and errors. One factor contributing to its rising popularity is the growth of distributed systems, such as microservices, where traceability becomes increasingly important.
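For context, here is a minimal sketch of what X-Ray instrumentation can look like in a Python service, using the AWS X-Ray SDK for Python. The service name, segment names, and DynamoDB table are hypothetical, and the sketch assumes an X-Ray daemon (or equivalent collector) is available to receive traces.

```python
# A minimal sketch, assuming the aws-xray-sdk package is installed and a
# local X-Ray daemon is listening; names below are placeholders.
import boto3
from aws_xray_sdk.core import xray_recorder, patch_all

# Patch supported libraries (boto3, requests, etc.) so downstream AWS calls
# show up as subsegments in the service map.
patch_all()

xray_recorder.configure(service="order-service")  # hypothetical service name

@xray_recorder.capture("lookup_order")
def lookup_order(order_id: str) -> dict:
    # The DynamoDB call below is traced automatically thanks to patch_all().
    table = boto3.resource("dynamodb").Table("orders")  # hypothetical table
    return table.get_item(Key={"order_id": order_id}).get("Item", {})

# Open a segment for the incoming request, do the traced work, then close it.
xray_recorder.begin_segment("handle_request")
try:
    lookup_order("12345")
finally:
    xray_recorder.end_segment()
```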

Amazon Athena is an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL. Athena is serverless, so there is no infrastructure to manage, and you pay only for the queries that you run. Increased use of Athena indicates more analysis is happening using a greater number of data sources, which signifies companies are becoming more data driven in their decision making.
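As a simple illustration of that serverless model, the sketch below submits a SQL query to Athena with boto3 and polls for the result. The database, table, query, and results bucket are hypothetical placeholders.

```python
# A minimal sketch, assuming a Glue/Athena database "analytics_db" with a
# table "usage_data" and an S3 bucket for query results.
import time
import boto3

athena = boto3.client("athena", region_name="us-east-1")

execution = athena.start_query_execution(
    QueryString="SELECT product, SUM(cost) AS total FROM usage_data GROUP BY product",
    QueryExecutionContext={"Database": "analytics_db"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
query_id = execution["QueryExecutionId"]

# Poll until the query finishes; production code would add a timeout/backoff.
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    for row in rows:
        print([col.get("VarCharValue") for col in row["Data"]])
```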

A surge in the number of companies using EC2 Container Service and EC2 Container Registry demonstrates growing interest in containers and greater cloud maturity across the board. Companies are realizing the benefits of consistent/isolated environments, flexibility, better resource utilization, better automation and DevOps practices, and greater control of deployments and scaling.

Looking Ahead

For 2021, we expect a continued focus on the adoption of existing and new products for security, data, application modernization, and cloud management. In our own client interactions, these are the constant topics of discussion and the focus of the services engagements we are executing as part of cloud modernization across industries.

-Joey Yore, Principal Consultant


2nd Watch Uses Redshift to Improve Client Optimization

Improving our use of Redshift: Then and now

Historically, and as is common among enterprise IT processes, the 2nd Watch optimization team pulled cost and usage reports from Amazon and stored them in S3 buckets. The data was then loaded into Redshift, Amazon’s cloud data warehouse, where it could be manipulated and analyzed for client optimization. Unfortunately, the Redshift cluster filled up quickly and regularly, forcing us to spend unnecessary time and resources on maintenance and cleanup. Additionally, holding all of that data required a large Redshift cluster, so the process for accessing and using the data became slow and inefficient.

Of course, to solve this we could have doubled the size, and therefore the cost, of our Redshift usage, but that went against our commitment to providing cost-effective options for our clients. We also could have considered moving to a different node type that is storage optimized rather than compute optimized.

Lakehouse Architecture for speed improvements and cost savings

The better solution we uncovered, however, was to follow the Lakehouse Architecture pattern, improving our use of Redshift so we could move faster and with more visibility, without additional storage fees. The Lakehouse Architecture is a way to strike a balance between cost and agility by selectively moving data in and out of Redshift depending on the processing speed needed for the data. Now, after a data dump to S3, we use AWS Glue crawlers and tables to create external tables in the Glue Data Catalog. The external tables or schemas are linked to the Redshift cluster, allowing our optimization team to read the S3 data from Redshift using Redshift Spectrum.
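The sketch below illustrates that pattern under some assumptions: it uses the boto3 Redshift Data API to create an external schema pointing at a Glue Data Catalog database, then queries an S3-backed table through Redshift Spectrum. The cluster, database, IAM role, schema, and table names are hypothetical, not our actual environment.

```python
# A minimal sketch, assuming a Glue database of cost-and-usage data and an
# IAM role that grants Redshift access to the Glue Data Catalog and S3.
import boto3

redshift_data = boto3.client("redshift-data", region_name="us-east-1")

create_external_schema = """
CREATE EXTERNAL SCHEMA IF NOT EXISTS spectrum_cur
FROM DATA CATALOG
DATABASE 'cost_usage_reports'
IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftSpectrumRole'
CREATE EXTERNAL DATABASE IF NOT EXISTS;
"""

# Link the Glue Data Catalog database to the cluster as an external schema.
redshift_data.execute_statement(
    ClusterIdentifier="analytics-cluster",
    Database="dev",
    DbUser="analyst",
    Sql=create_external_schema,
)

# Once the external schema exists, S3-backed tables can be queried (and joined
# with local Redshift tables) without loading the data into the cluster.
redshift_data.execute_statement(
    ClusterIdentifier="analytics-cluster",
    Database="dev",
    DbUser="analyst",
    Sql="SELECT product_code, SUM(unblended_cost) FROM spectrum_cur.cur_table GROUP BY 1;",
)
```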

Our cloud data warehouse remains tidy without dedicated clean-up resources, and we can query the data in S3 via Redshift without having to move anything. Even though we’re using the same warehouse, we’ve optimized its use for the benefit of both our clients and 2nd Watch best practices. In fact, our estimated savings are $15,000 per month, or 100% of our previous Redshift cost.

How we’re using Redshift today

With our new model and the benefits afforded to clients, 2nd Watch is applying Redshift for a variety of optimization opportunities.

Discover new opportunities for optimization. By storing and organizing data related to our clients’ AWS, Azure, and/or Google Cloud usage alongside spend data, the 2nd Watch optimization team can see where further optimization is possible. Improved data access and visibility enables a deeper examination of cost history, resource usage, and any known Reserved Instances (RIs) or Savings Plans.

Increase automation and reduce human error. The new model allows us to use DBT (data build tool) to complete SQL transforms on all data models used to feed reporting. These reports go into our dashboards and are then presented to clients for optimization. DBT empowers analysts to transform warehouse data more efficiently, and with less risk, by relying on automation instead of spreadsheets.

Improve efficiency from raw data to client reporting. Raw data that lives in a data lake in S3 is transformed and organized into a structured data lake that is then defined in AWS Glue Data Catalog tables. This gives analysts the ability to query the data from Redshift and use DBT to format the data into useful tables. From there, the optimization team can make data-based recommendations and generate complete reports for clients.
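As a hypothetical sketch of that cataloging step, the snippet below creates and runs an AWS Glue crawler over a structured S3 prefix so the resulting Data Catalog tables can be queried from Redshift. All names, paths, and the IAM role are placeholders.

```python
# A minimal sketch, assuming a structured data lake prefix in S3 and an IAM
# role the crawler can assume to read it.
import boto3

glue = boto3.client("glue", region_name="us-east-1")

glue.create_crawler(
    Name="structured-lake-crawler",
    Role="arn:aws:iam::123456789012:role/GlueCrawlerRole",
    DatabaseName="structured_lake",
    Targets={"S3Targets": [{"Path": "s3://example-structured-lake/optimization/"}]},
)

# Running the crawler (re)creates Glue Data Catalog tables that Redshift
# Spectrum queries and downstream DBT models can reference directly.
glue.start_crawler(Name="structured-lake-crawler")
```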

In the future, we plan on feeding a Power BI dashboard directly from Redshift, further increasing efficiency for both our optimization team and our clients.

Client benefits with Redshift optimization

  • Cost savings: Only pay for the S3 storage you use, without any storage fees from Redshift.
  • Unlimited data access: Large amounts of old data are available in the data lake, which can be joined across tables and brought into Redshift as needed.
  • Increased data visibility: Greater insight into data enables us to provide more optimization opportunities and supports decision making.
  • Improved flexibility and productivity: Analysts can get historical data within one hour, rather than waiting 1-2 weeks for requests to be fulfilled.
  • Reduced compute cost: The compute cost of loading data has been shifted to Amazon EKS.

-Spencer Dorway, Data Engineer


The Most Popular AWS Products of 2018

Big Data and Machine Learning Services Lead the Way

If you’ve been reading this blog, or otherwise following the enterprise tech market, you know that the worldwide cloud services market is strong. According to Gartner, the market is projected to grow by 17% in 2019, to over $206 billion.

Within that market, enterprise IT departments are embracing cloud infrastructure and related services like never before. They’re attracted to tools and technologies that enable innovation, cost savings, faster time-to-market for new digital products and services, flexibility, and productivity. They want to be able to scale their infrastructure up and down as the situation warrants, and they’re enamored with the idea of “digital transformation.”

In its short history, cloud infrastructure has never been more exciting. At 2nd Watch, we are fortunate to have a front-row seat to the show, with more than 400 enterprise workloads under management and over 200,000 instances in our managed public cloud. With 2018 now in our rearview mirror, we thought this a good time for a quick peek back at the most popular Amazon Web Services (AWS) products of the past year. We aggregated and anonymized our AWS customer data from 2018, and here’s what we found:

The top five AWS products of 2018 were: Amazon Virtual Private Cloud (used by 100% of 2nd Watch customers); AWS Data Transfer (100%); Amazon Simple Storage Service (100%); Amazon DynamoDB (100%) and Amazon Elastic Compute Cloud (100%). Frankly, the top five list isn’t surprising. It is, however, indicative of legacy workloads and architectures being run by the enterprise.

Meanwhile, the fastest-growing AWS products of 2018 were: Amazon Athena (68% CAGR, as measured by dollars spent on this service with 2nd Watch in 2018 v. 2017); Amazon Elastic Container Service for Kubernetes (53%); Amazon MQ (37%); AWS OpsWorks (23%); Amazon EC2 Container Service (21%); Amazon SageMaker (21%); AWS Certificate Manager (20%); and AWS Glue (16%).

The growth in data services like Athena and Glue, correlated with SageMaker, is interesting. Typically, the hype isn’t supported by the data, but clearly, customers are moving forward with big data and machine learning strategies. These three services were also the fastest-growing services in Q4 2018.

Looking ahead, I expect EKS to be huge this year, along with SageMaker and serverless. Based on job postings and demand in the market, Kubernetes is the most requested skill set in the enterprise. For a look at the other AWS products and services that rounded out our list for 2018, download our infographic.

– Chris Garvey, EVP Product


So You Think You Can DevOps?

We recently took a DevOps poll of 1,000 IT professionals to get a pulse for where the industry sits regarding the adoption and completeness of vision around DevOps.  The results were pretty interesting, and overall we are able to deduce that a large majority of the organizations who answered the survey are not truly practicing DevOps.  Part of this may be due to the lack of clarity on what DevOps really is.  I’ll take a second to summarize it as succinctly as possible here.

DevOps is the practice of operations and development engineers participating together in the entire service lifecycle, from design through the development process to production support. This includes, but is not limited to, the culture, tools, organization, and practices required to accomplish this amalgamated methodology of delivering IT services.

credit: https://theagileadmin.com/what-is-devops/

In order to practice DevOps you must be in a DevOps state of mind and embrace its values and mantras unwaveringly.

The first thing that jumped out at me from our survey was the responses to the question “Within your organization, do separate teams manage infrastructure/operations and application development?”  78.2% of respondents answered “Yes” to that question.  Truly practicing DevOps requires that the infrastructure and applications are managed within the context of the same team, so we can deduce that at least 78.2% of the respondents’ companies are not truly practicing DevOps.  Perhaps they are using some infrastructure-as-code tools, some forms of automation, or even have CI/CD pipelines in place, but those things alone do not define DevOps.

Speaking of infrastructure-as-code… Another question, “How is your infrastructure deployed and managed?” had nearly 60% of respondents answering that they utilize infrastructure-as-code tools (e.g. Terraform, Configuration Management, Kubernetes) to manage their infrastructure, which is positive, but it shows the disconnect between the use of DevOps tools and actually practicing DevOps (as noted in the previous paragraph). On the other hand, just over 38% of respondents indicated that they manage infrastructure manually (e.g. through the console), which means not only are they not practicing DevOps, they aren’t even managing their infrastructure in a way that will ever be compatible with DevOps… yikes. The good news is that tools like Terraform allow you to import existing manually deployed infrastructure so it can then be managed as code and handled as “immutable infrastructure.” Manually deploying anything is a DevOps anti-pattern and must be avoided at all costs.

Aside from infrastructure, we had several questions around application development and deployment as they pertain to DevOps. Testing code appears to be an area where a majority of respondents are staying proactive in a way that would be beneficial to a DevOps practice. The question “What is your approach to writing tests?” had the following breakdown of answers:

  • We don’t really test:  10.90%
  • We get to it if/when we have time:  15.20%
  • We require some percentage of code to be covered by tests before it is ready for production:  32.10%
  • We require comprehensive unit and integration testing before code is pushed to production:  31.10%
  • Rigid TDD/BDD/ATDD/STDD approach – write tests first & develop code to meet those test requirements:  10.70%

We can see that around 75% of respondents are doing some form of consistent testing, which will go a long way in helping build out a DevOps practice, but a staggering 25% of respondents have little or no testing of code in place today (ouch!).  Another question “How is application code deployed and managed?” shows that around 30% of respondents are using a completely manual process for application deployment and the remaining 70% are using some form of an automated pipeline.  Again, the 70% is a positive sign for those wanting to embrace DevOps, but there is still a massive chunk at 30% who will have to build out automation around testing, building, and deploying code.

Another important factor in managing services the DevOps way is to have all your environments mirror each other.  In response to the question “How well do your environments (e.g. dev, test, prod) mirror one another?” around 28% of respondents indicated that their environments are managed completely independently of each other.  Another 47% indicated that “they share some portion of code but are not managed through identical code bases and processes,” and the remaining 25% are doing it properly by “managed identically using same code & processes employing variables to differentiate environments.”  Lots of room for improvement in this area when organizations decide they are ready to embrace the DevOps way.

Our last question in the survey was “How are you notified when an application/process/system fails?” and I found the answers a bit staggering. Over 21% of respondents indicated that they are notified of outages by the end user. It’s pretty surprising to see that large a percentage relying on such a reactive method of service monitoring. Another 32% responded that “someone in operations is watching a dashboard,” which isn’t as surprising but will definitely be something that needs to be addressed when shifting to a DevOps approach. Another 23% are using third-party tools like New Relic and Pingdom to monitor their apps. Once again, we have that savvy ~25% group who are currently operating in a way that bodes well for DevOps adoption by answering “Monitoring is built into the pipeline, apps and infrastructure. Notifications are sent immediately.” The twenty-five-percenters are definitely on the right path if they aren’t already practicing DevOps today.

In summary, we have been able to deduce from our survey that, at best, around 25% of the respondents are actually engaging in a DevOps practice today. For more details on the results of our survey, download our infographic.

-Ryan Kennedy, Principal Cloud Automation Architect


AIG Moves towards DevOps and Cloud

In a blog post this morning, the Head of Enterprise Strategy at AWS, Stephen Orban, shares a personal note he received from Salvatore Saieva, CTO Lead for Public Cloud Projects and Initiatives at American International Group (AIG), about why the company moved away from traditional infrastructure methods and toward DevOps and the cloud.

In his note to Mr. Orban, Saieva detailed how his infrastructure support team was managing production applications on VCE Vblock equipment and how working with converged technology led them to automation, agile methods, continuous approaches, and, ultimately, DevOps. By using DevOps to collaborate with other teams throughout the company, Salvatore’s IT team led the charge in becoming cloud-ready.

Read more about AIG’s move to DevOps and the cloud on Stephen Orban’s blog.

-Nicole Maus, Marketing Manager
