3 Takeaways from a New Looker Developer

In this blog post, read about this consultant’s experience with Looker, in their own words.

As a data management and analytics consultant, I have developed dashboards in most of the popular BI tools, such as Tableau and Power BI, as well as the backend data structures that support them. The opportunity to develop dashboards in Looker arose when a new client, Byrider, needed 2nd Watch to help them model their data and develop Looker dashboards for their sales team (more details here).

Based on my limited experience with Looker, I knew that it makes creating quality visuals simple and that coding in LookML is unavoidable. I worried that LookML would be extremely nuanced and I would lose time troubleshooting simple tasks. I could not have been more wrong on that front. Along with this realization, below are my top takeaways from my first Looker project.

Takeaway 1: LookML is easy to learn and ensures consistent metrics across reports.

Given the vast amount of documentation provided by Looker and the straightforward format of LookML code, I quickly caught on. This learning curve may be slightly different for report developers who have minimal experience with SQL. LookML adds transparency into what happens with data presented in visuals by directly showing how the code translates into the SQL queries that run against the source data. This makes it much easier to trust the results of dashboards and QA as you develop.

More importantly, LookML allows users to ensure their metric definitions are consistent across dashboards and reports. Establishing this single source of truth is key for the success of any reporting efforts. Within the semantic layer of the reporting tool, users can create SQL queries or harness LookML functions to develop custom measures and include descriptions to define them. Transforming the source data into predefined measures in the back end of the reporting tool ensures that report developers access the same metrics for every dashboard business users will see. This is a clear contrast to tools like Power BI and Tableau, where custom measures are created in each workbook and can vary. Furthermore, by using roles, administrators can limit who has access to change this code.

Takeaway 2: Creating dashboards and visuals is super intuitive for about 95% of use cases.

After setting up your data connections and LookML, developing a visual (“Look”) in Looker only requires a simple point and click process. Once you select the filters, measures, and dimensions to include in a visual, you can click through the visualization options to determine the best possible way to present the data. From there, you can easily adjust colors and stylistic options in settings using drop-down menus. Compared to other BI tools, these visuals are fairly standard across the board. That being said, Looker greatly stands out when it comes to table visualizations. It allows for conditional formatting similar to that in Excel and a wide range of visual options in comparison to other BI tools. This makes Looker a great selection for companies that often require tables to meet reporting requirements.

Although detailed documentation and the simple interface meet most reporting needs, there are limitations when it comes to easy customization in visuals. This includes the inability to set drill-ins by a visual rather than a field. In Looker, any demographic used across reports has to drill into the same fields (unlike those set per visual in a Tableau Tool Tip, for example). Additionally, you cannot format visuals based on customized metrics (e.g., color bands, conditional formatting for Field A based on the value of Field B, etc.). The caveat here is that you can unlock many customized visuals by writing custom code, a skill not always handy for report developers.

Takeaway 3: Looker is extremely collaborative, something not often seen in BI tools.

With most BI tools, developers are forced to work independently because two people cannot easily contribute to a single workbook at the same time. Looker’s web-based format seems to have been built with collaborative development in mind, making this tool stand out when it comes to teamwork. Business users can also easily contribute because the web-based tool makes sharing dashboards and embedding them within websites easy. While this may seem minor to some, it significantly enhances productivity and yields a better result.

The following features ensure that your team can iterate on each other’s work, edit the same dashboards, and develop LookML without accidentally overwriting work or creating multiple versions of the same report:

  • Version control and deployment processes built into the “Development” window where users can modify and add LookML code
  • Ability to duplicate Looks developed by others, iterate on them, and then add them to dashboards
  • Shared folders where Looks and Dashboards used by multiple people can be stored and reused (if needed)
  • Ability to “Explore” a Look created by someone else to investigate underlying data
  • Ability to edit a dashboard at the same time others can make changes
  • Sharing dashboards using a link and the ease of embedding dashboards, which allows for seamless collaboration with business users as well

With a properly modeled data source, Looker impressed in terms of its performance and ability to provide highly drillable dashboards. This enabled us to dramatically reduce the number of reports needed to address the wide range of detail that business users within a department required. While the visuals were not as flashy as other BI tools, Looker’s highly customizable table visualizations, row-level security, and drill-in options were a perfect fit for Byrider’s use cases.

2nd Watch specializes in advising companies on how to gain the most business value possible from their analytics tools. We assist organizations with everything from selecting which tool best suits your needs to developing dashboards for various departments or structuring data to enable quick reporting results. Contact us if you need help determining if Looker is the tool you need or if you want guidance on how to get started.


3 Data Integration Best Practices Every Successful Business Adopts

Here’s a hypothetical situation: Your leadership team is on a conference call, and the topic of conversation turns to operational reports. The head of each line of business (LOB) presents a conflicting set of insights, but each one is convinced that the findings from their analytics platform are the gospel truth. With data segregated across the LOBs, there’s no clear way to determine which insights are correct or make an informed, unbiased decision.

What Do You Do?

In our experience, the best course of action is to create a single source of truth for all enterprise analytics. Organizations that do so achieve greater data consistency and quality data sources, increasing the accuracy of their insights – no matter who is conducting analysis. Since the average organization draws from 400 different data sources (and one in five needs to integrate more than 1,000 disparate data sources), it’s no surprise that many organizations struggle to integrate their data. Yet with these data integration best practices, you’ll find fewer challenges as you create a golden source of insight.

Take a Holistic Approach

The complexity of different data sources and niche analytical needs within the average organization makes it difficult for many to home in on their master plan for data integration. As a result, there are plenty of instances in which the tail ends up wagging the dog.

Maybe it’s an LOB with greater data maturity pushing for an analytics layer that aligns with their existing analytics platform to the detriment of others. Or maybe the organization is familiar with a particular stack or solution and is trying to force the resulting data warehouse to match those source schema. Whatever the reason, a non-comprehensive approach to data integration will hamstring your reporting.

In our experience, organizations see the best results when they design their reporting capabilities around their desired insight – not a specific technology. Take our collaboration with a higher education business. They knew from the outset that they wanted to use their data to convert qualified prospects into more enrollees. They trusted us with the logistics of consolidating their more than 90 disparate data sources (from a variety of business units across more than 10 managed institutions) into reports that helped them analyze the student journey and improve their enrollment rate as a whole.

With their vision in mind, we used an Alooma data pipeline to move the data to the target cloud data warehouse, where we transformed the data into a unified format. From there, we created dashboards that allowed users to obtain clear and actionable insight from queries capable of impacting the larger business. By working toward an analytical goal rather than conforming to their patchwork of source systems, we helped our client lay the groundwork to increase qualified student applications, reduce the time from inquiry to enrollment, and even increase student satisfaction.

Win Quickly with a Manageable Scope

When people hear the phrase “single source of truth” in relation to their data, they imagine their data repository needs to enter the world fully formed with an enterprise-wide scope. For mid-to-large organizations, that end-to-end data integration process can take months (if not years) before they receive any direct ROI from their actions.

One particular client of ours entered the engagement with that boil-the-ocean mentality. A previous vendor had proposed a three-year timeline, suggesting a data integration strategy that would:

  • Map their data ecosystem
  • Integrate disparate data sources into a centralized hub
  • Create dashboards for essential reporting
  • Implement advanced analytics and data science capabilities

Though we didn’t necessarily disagree with the projected capability, the waiting period before they experienced any ROI undercut the potential value. Instead, we’re planning out a quick win for their business, focusing on a mission-critical component that can provide a rapid ROI. From there, we will scale up the breadth of their target data system and the depth of their analytics.

This approach has two added benefits. One, you can test the functionality and accessibility of your data system in real time, making enhancements and adjustments before you expand to the enterprise level. Two, you can develop a strong and clear use case early in the process, lowering the difficulty bar as you try to obtain buy-in from the rest of the leadership team.

Identify Your Data Champion

The shift from dispersed data silos to a centralized data system is not a turnkey process. Your organization is undergoing a monumental change. As a result, you need a champion within the organization to foster the type of data-driven culture that ensures your single source of truth lives up to the comprehensiveness and accuracy you expect.

What does a data champion do? They act as an advocate for your new data-driven paradigm. They communicate the value of your centralized data system to different stakeholders and end users, encouraging them to transition from older systems to more efficient dashboards. Plus, they motivate users across departments and LOBs to follow data quality best practices that maintain the accuracy of insights enterprise wide.

It’s not essential that this person be a technical expert. This person needs to be passionate and build trust with members of the team, showcasing the new possibilities your data integration solution makes possible. All of the technical elements of data integration or navigating your ELT/ETL tool can be handled by a trusted partner like 2nd Watch.

Schedule a whiteboard session with our team to discuss your goals, source systems, and data integration solutions.


McDonald’s France Gains Business-Changing Insights from New Data Lake

McDonald’s is famous for cheeseburgers and fries, but with 1.5 million customers a day, and each transaction producing 20 to 30 data points, it has also become a technology organization. With the overarching goal to improve customer experience, and as a byproduct increase conversion and brand loyalty, McDonald’s France partnered with 2nd Watch to build a data lake on AWS.

Customer Priorities Require Industry Shifts

As is common in many industries today, the fast-food industry has shifted from a transaction-centric view to a customer-centric view. The emphasis is no longer on customer satisfaction, but on customer experience. It’s this variable that impacts conversion rate and instills loyalty. Consequently, McDonald’s wanted to build a complete perspective of a customer’s lifetime value, with visibility into each step of their journey. Understanding likes and dislikes based on data would give McDonald’s the opportunity to improve experience at a variety of intersections across global locations.

McDonald’s is a behemoth in its size, multi-national reach, and the abundance of data it collects. Making sense of that data required a new way of storing and manipulating it, with flexibility and scalability. The technology necessary to accomplish McDonald’s data goals has significantly reduced in cost, while increasing in efficiency – key catalysts for initiating the project within McDonald’s groups, gaining buy-in from key stakeholders, and engaging quickly.

From Datacenter to Data Lake

To meet its data collection and analysis needs, McDonald’s France needed a fault-tolerant data platform equipped with data processing architecture and a loosely coupled distribution system. But, the McDonald’s team needed to focus on data insights rather than data infrastructure, so they partnered with 2nd Watch to move from a traditional data warehouse to a data lake, allowing them to reduce the effort required to analyze or process data sets for different properties and applications.

During the process, McDonald’s emphasized the importance of ongoing data collection from anywhere and everywhere across their many data sources. From revenue numbers and operational statistics to social media streams, kitchen management systems, commercial, regional, and structural data – they wanted everything stored for potential future use. Historical data will help to establish benchmarks, forecast sales projections, and understand customer behavior over time.

The Data Challenges We Expect…And More

With so much data available, and the goal of improving customer experience as motivation, McDonald’s France wanted to prioritize three types of data – sales, speed of service, and customer experience. Targeting specific sets of data helps to reduce the data inconsistencies every organization faces in a data project. While collecting, aggregating, and cleaning data is a huge feat on its own, McDonald’s France also had to navigate a high level of complexity.

As an omnichannel restaurant, McDonald’s juggles information from point-of-sale systems with sales happening online, offline, and across dozens of different locations. Data sources include multiple data vendors, mobile apps, loyalty programs, customer relationship management (CRM) tools, and other digital interfaces. Combined in one digital ecosystem, this data is the force that drives the entire customer journey. Once it’s all there, the challenge is to find the link for any given customer that transforms the puzzle into a holistic picture.

Endless Opportunities for the Future

McDonald’s France now has visibility into speed of service with a dedicated dashboard and can analyze and provide syntheses of that data. National teams can make data-based, accurate decisions using the dashboard and implement logistical changes in operations. They’re able to impact operational efficiency using knowledge around prep time to influence fulfillment.

The data lake was successful in showing the organization where it was losing opportunities by not taking advantage of the data it had. McDonald’s also proved it was possible, affordable, and advantageous to invest in data. While their data journey has only begun, these initial steps opened the door to new data usage possibilities. The models established by McDonald’s France will be used as an example to expand data investments throughout the McDonald’s corporation.

If your organization is facing a similar issue of too much data and not enough insight, 2nd Watch can help. Our data and analytics solutions help businesses make better decisions, faster, with a modern data stack in the cloud. Contact Us to start talking about the tools and strategies necessary to reach your goals.

-Ian Willoughby, Chief Architect and Vice President

Listen to the McDonald’s team talk about this project on the 2nd Watch Cloud Crunch podcast.


The Most Popular and Fastest-Growing AWS Products of 2021

Enterprise IT departments are increasing cloud usage at an exponential rate. These tools and technologies enable greater innovation, cost savings, flexibility, productivity and faster-time-to-market, ultimately facilitating business modernization and transformation.

Amazon Web Services (AWS) is a leader among IaaS vendors, and every year around this time, we look back at the most popular AWS products of the past year, based on the percentage of 2nd Watch clients using them. We also evaluate the fastest-growing AWS products, based on how much spend our clients are putting towards various AWS products compared to the year before.

We’ve categorized the lists into the “100%s” and the “Up-and-Comers.” The 100%s are products that were used by all of our clients in 2020 – those products and services that are nearly universal and necessary in a basic cloud environment. The Up-and-Comers are the five fastest-growing products of the past year. We also highlight a few products that didn’t make either list but are noteworthy and worth watching.

12 Essential AWS Products

In 2020, there were 12 AWS products that were used by 100% of our client base:

  • AWS CloudTrail
  • AWS Key Management Service
  • AWS Lambda
  • AWS Secrets Manager
  • Amazon DynamoDB
  • Amazon Elastic Compute Cloud
  • Amazon Relational Database Service
  • Amazon Route 53
  • Amazon Simple Notification Service
  • Amazon Simple Queue Service
  • Amazon Simple Storage Service
  • Amazon CloudWatch

Why were these products so popular in 2020? For the most part, products that are universally adopted reflect the infrastructure that is required to run a modern AWS cloud footprint today.

Products in the 100%s club also demonstrate how AWS has made a strong commitment to the integration and extension of the cloud-native management tools stack, so external customers can have access to many of the same features and capabilities used in their own internal services and infrastructure.

AWS Trending Products and Services

The following AWS products were the fastest growing in 2020:

  • AWS Systems Manager
  • Amazon Transcribe
  • Amazon Comprehend
  • AWS Support BJS (Business)
  • AWS Security Hub

The fastest-growing products in 2020 seem to be squarely focused on digital applications in some form, whether text and speech analysis using machine learning (Comprehend for natural language processing and Transcribe for speech-to-text) or protection of those applications and ensuring better security management overall (Security Hub). This is a bit of a change from 2019, when the fastest-growing products were focused on application orchestration (AWS Step Functions) or infrastructure topics with products like Cost Explorer, Key Management Service or Container Service.

With a huge demand for data analytics and machine learning across enterprise organizations, utilizing services such as Comprehend and Transcribe allows you to gather insights into customer sentiment when examining customer reviews, support tickets, social media, etc. Businesses can use the services to extract key phrases, places, people, brands, or events, and, with the help of machine learning, gain an understanding of how positive or negative conversations were conducted. This provides a company with a lot of power to modify practices, offerings, and marketing messaging to enhance customer relationships and improve sentiment.
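
As a rough illustration of that workflow, here is a minimal Python sketch using boto3 to call Comprehend on a single piece of text. The region, sample review, and printed fields are illustrative assumptions rather than a reference implementation:

    import boto3

    # Assumes AWS credentials and permissions for Comprehend are already configured.
    comprehend = boto3.client("comprehend", region_name="us-east-1")  # region is an assumption

    review = "The support team resolved my issue quickly. Great experience overall!"

    # Sentiment: returns POSITIVE / NEGATIVE / NEUTRAL / MIXED plus confidence scores
    sentiment = comprehend.detect_sentiment(Text=review, LanguageCode="en")
    print(sentiment["Sentiment"], sentiment["SentimentScore"])

    # Key phrases: noun phrases such as "support team" or "great experience"
    phrases = comprehend.detect_key_phrases(Text=review, LanguageCode="en")
    print([p["Text"] for p in phrases["KeyPhrases"]])

    # Entities: people, places, brands, events, and similar items mentioned in the text
    entities = comprehend.detect_entities(Text=review, LanguageCode="en")
    print([(e["Text"], e["Type"]) for e in entities["Entities"]])

In practice, results like these would be aggregated across thousands of reviews, tickets, or social posts before feeding the kind of sentiment reporting described above.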

Emerging Technology

The following products were new to our Most Popular list in 2020 and therefore are worth watching:

AWS X-Ray allows users to understand how their application and its underlying services are performing so they can identify and troubleshoot the root cause of performance issues and errors. One factor contributing to its rising popularity is the growth of distributed systems such as microservices, which make traceability increasingly important.
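
To give a feel for how this works, below is a minimal sketch of instrumenting a small Flask service with the AWS X-Ray SDK for Python. The service name, route, table, and downstream dependency are hypothetical, and the sketch assumes an X-Ray daemon (or an equivalent collector) is running alongside the application:

    import boto3
    import requests
    from flask import Flask
    from aws_xray_sdk.core import xray_recorder, patch_all
    from aws_xray_sdk.ext.flask.middleware import XRayMiddleware

    app = Flask(__name__)
    xray_recorder.configure(service="orders-api")  # hypothetical service name
    XRayMiddleware(app, xray_recorder)             # traces incoming HTTP requests
    patch_all()                                    # traces boto3, requests, and other supported libraries

    @app.route("/orders/<order_id>")
    def get_order(order_id):
        # Both downstream calls appear as subsegments in the trace, which is what
        # makes it possible to pinpoint where latency or errors are introduced.
        table = boto3.resource("dynamodb").Table("orders")              # hypothetical table
        item = table.get_item(Key={"id": order_id}).get("Item")
        requests.get("https://inventory.example.internal/healthcheck")  # hypothetical dependency
        return {"order_id": order_id, "found": item is not None}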

Amazon Athena is an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL. Athena is serverless, so there is no infrastructure to manage, and you pay only for the queries that you run. Increased use of Athena indicates more analysis is happening using a greater number of data sources, which signifies companies are becoming more data driven in their decision making.
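
For context, here is a minimal boto3 sketch of running an Athena query and reading back the results; the database, table, and S3 output location are hypothetical placeholders:

    import time
    import boto3

    athena = boto3.client("athena", region_name="us-east-1")  # region is an assumption

    execution = athena.start_query_execution(
        QueryString="SELECT region, SUM(revenue) AS revenue FROM sales.orders GROUP BY region",
        QueryExecutionContext={"Database": "sales"},
        ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
    )
    query_id = execution["QueryExecutionId"]

    # Poll until the query finishes; Athena charges only for the data scanned by the query.
    while True:
        state = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            break
        time.sleep(1)

    if state == "SUCCEEDED":
        rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
        for row in rows[1:]:  # the first row contains column headers
            print([col.get("VarCharValue") for col in row["Data"]])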

A surge in the number of companies using EC2 Container Service and EC2 Container Registry demonstrates growing interest in containers and greater cloud maturity across the board. Companies are realizing the benefits of consistent/isolated environments, flexibility, better resource utilization, better automation and DevOps practices, and greater control of deployments and scaling.

Looking Ahead

For 2021, we expect there to be a continued focus on adoption of existing and new products focused on security, data, application modernization and cloud management. In our own client interactions, these are the constant topics of discussion and services engagements we are executing as part of cloud modernization across industries.

-Joey Yore, Principal Consultant


2nd Watch Uses Redshift to Improve Client Optimization

Improving our use of Redshift: Then and now

Historically, and common among enterprise IT processes, the 2nd Watch optimization team was pulling in cost and usage reports from Amazon and storing them in S3 buckets. The data was then loaded into Redshift, Amazon’s cloud data warehouse, where it could be manipulated and analyzed for client optimization. Unfortunately, the Redshift cluster filled up quickly and regularly, forcing us to spend unnecessary time and resources on maintenance and cleanup. Additionally, Redshift requires a large cluster to work with, so the process for accessing and using data became slow and inefficient.

Of course, to solve this we could have doubled the size, and therefore the cost, of our Redshift usage, but that went against our commitment to provide cost-effective options for our clients. We also could have considered moving to a different type of node that is storage optimized, instead of compute optimized.

Lakehouse Architecture for speed improvements and cost savings

The better solution we uncovered, however, was to follow the Lakehouse Architecture pattern to improve our use of Redshift so we could move faster and with more visibility, without additional storage fees. The Lakehouse Architecture is a way to strike a balance between cost and agility by selectively moving data in and out of Redshift depending on the processing speed needed for the data. Now, after a data dump to S3, we use AWS Glue crawlers and tables to create external tables in the Glue Data Catalog. The external tables or schemas are linked to the Redshift cluster, allowing our optimization team to query the S3 data from Redshift using Redshift Spectrum.
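
The sketch below outlines that pattern with boto3 and Redshift SQL; the crawler name, IAM roles, bucket path, and table names are hypothetical placeholders rather than our production configuration:

    import boto3

    glue = boto3.client("glue", region_name="us-east-1")  # region is an assumption

    # 1) Crawl the raw cost-and-usage dump in S3 so Glue builds external table definitions.
    glue.create_crawler(
        Name="cur-crawler",
        Role="arn:aws:iam::123456789012:role/glue-crawler-role",
        DatabaseName="cost_usage",
        Targets={"S3Targets": [{"Path": "s3://example-cur-bucket/reports/"}]},
    )
    glue.start_crawler(Name="cur-crawler")

    # 2) Expose the Glue database to Redshift as an external schema, then query the
    #    files in place with Redshift Spectrum (run from a Redshift SQL client):
    #
    #    CREATE EXTERNAL SCHEMA IF NOT EXISTS spectrum
    #    FROM DATA CATALOG DATABASE 'cost_usage'
    #    IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-spectrum-role';
    #
    #    SELECT line_item_usage_account_id, SUM(line_item_unblended_cost) AS cost
    #    FROM spectrum.cur_report
    #    GROUP BY 1;

Because Spectrum reads the files where they sit in S3, the cluster itself only needs to hold the data that genuinely benefits from fast, repeated access.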

Our cloud data warehouse remains tidy without dedicated clean-up resources, and we can query the data in S3 via Redshift without having to move anything. Even though we’re using the same warehouse, we’ve optimized its use for the benefit of both our clients and 2nd Watch best practices. In fact, our estimated savings are $15,000 per month, or 100% of our previous Redshift cost.

How we’re using Redshift today

With our new model and the benefits afforded to clients, 2nd Watch is applying Redshift for a variety of optimization opportunities.

Discover new opportunities for optimization. By storing and organizing data related to our clients’ AWS, Azure, and/or Google Cloud usage versus spend data, the 2nd Watch optimization team can see where further optimization is possible. Improved data access and visibility enables a deeper examination of cost history, resource usage, and any known RIs or savings plans.

Increase automation and reduce human error. The new model allows us to use DBT (data build tool) to complete SQL transforms on all data models used to feed reporting. These reports go into our dashboards and are then presented to clients for optimization. DBT empowers analysts to transform warehouse data more efficiently, and with less risk, by relying on automation instead of spreadsheets.

Improve efficiency from raw data to client reporting. Raw data that lives in a data lake in S3 is transformed and organized into a structured data lake whose tables are defined in the AWS Glue Data Catalog. This gives analysts the ability to query the data from Redshift and use DBT to format it into useful tables. From there, the optimization team can make data-based recommendations and generate complete reports for clients.

In the future, we plan on feeding a Power BI dashboard directly from Redshift, further increasing efficiency for both our optimization team and our clients.

Client benefits with Redshift optimization

  • Cost savings: Only pay for the S3 storage you use, without any storage fees from Redshift.
  • Unlimited data access: Large amounts of old data are available in the data lake, which can be joined across tables and brought into Redshift as needed.
  • Increased data visibility: Greater insight into data enables us to provide more optimization opportunities and supports decision making.
  • Improved flexibility and productivity: Analysts can get historical data within one hour, rather than waiting 1-2 weeks for requests to be fulfilled.
  • Reduced compute cost: By shifting the compute cost of loading data to Amazon EKS.

-Spencer Dorway, Data Engineer


The Most Popular AWS Products of 2018

Big Data and Machine Learning Services Lead the Way

If you’ve been reading this blog, or otherwise following the enterprise tech market, you know that the worldwide cloud services market is strong. According to Gartner, the market is projected to grow by 17% in 2019, to over $206 billion.

Within that market, enterprise IT departments are embracing cloud infrastructure and related services like never before. They’re attracted to tools and technologies that enable innovation, cost savings, faster-time-to-market for new digital products and services, flexibility and productivity. They want to be able to scale their infrastructure up and down as the situation warrants, and they’re enamored with the idea of “digital transformation.”

In its short history, cloud infrastructure has never been more exciting. At 2nd Watch, we are fortunate to have a front-row seat to the show, with more than 400 enterprise workloads under management and over 200,000 instances in our managed public cloud. With 2018 now in our rearview mirror, we thought this a good time for a quick peek back at the most popular Amazon Web Services (AWS) products of the past year. We aggregated and anonymized our AWS customer data from 2018, and here’s what we found:

The top five AWS products of 2018 were: Amazon Virtual Private Cloud (used by 100% of 2nd Watch customers); AWS Data Transfer (100%); Amazon Simple Storage Service (100%); Amazon DynamoDB (100%) and Amazon Elastic Compute Cloud (100%). Frankly, the top five list isn’t surprising. It is, however, indicative of legacy workloads and architectures being run by the enterprise.

Meanwhile, the fastest-growing AWS products of 2018 were: Amazon Athena (68% CAGR, as measured by dollars spent on this service with 2nd Watch in 2018 v. 2017); Amazon Elastic Container Service for Kubernetes (53%); Amazon MQ (37%); AWS OpsWorks (23%); Amazon EC2 Container Service (21%); Amazon SageMaker (21%); AWS Certificate Manager (20%); and AWS Glue (16%).

The growth in data services like Athena and Glue, correlated with SageMaker, is interesting. Typically, the hype isn’t supported by the data, but clearly, customers are moving forward with big data and machine learning strategies. These three services were also the fastest-growing services in Q4 2018.

Looking ahead, I expect EKS to be huge this year, along with SageMaker and serverless. Based on job postings and demand in the market, Kubernetes is the most requested skill set in the enterprise. For a look at the other AWS products and services that rounded out our list for 2018, download our infographic.

– Chris Garvey, EVP Product


So You Think You Can DevOps?

We recently took a DevOps poll of 1,000 IT professionals to get a pulse for where the industry sits regarding the adoption and completeness of vision around DevOps.  The results were pretty interesting, and overall we are able to deduce that a large majority of the organizations who answered the survey are not truly practicing DevOps.  Part of this may be due to the lack of clarity on what DevOps really is.  I’ll take a second to summarize it as succinctly as possible here.

DevOps is the practice of operations and development engineers participating together in the entire service lifecycle, from design through the development process to production support. This includes, but is not limited to, the culture, tools, organization, and practices required to accomplish this amalgamated methodology of delivering IT services.

credit: https://theagileadmin.com/what-is-devops/

In order to practice DevOps you must be in a DevOps state of mind and embrace its values and mantras unwaveringly.

The first thing that jumped out at me from our survey was the responses to the question “Within your organization, do separate teams manage infrastructure/operations and application development?”  78.2% of respondents answered “Yes” to that question.  Truly practicing DevOps requires that the infrastructure and applications are managed within the context of the same team, so we can deduce that at least 78.2% of the respondents’ companies are not truly practicing DevOps.  Perhaps they are using some infrastructure-as-code tools, some forms of automation, or even have CI/CD pipelines in place, but those things alone do not define DevOps.

Speaking of infrastructure-as-code… Another question, “How is your infrastructure deployed and managed?” had nearly 60% of respondents answering that they were utilizing infrastructure-as-code tools (e.g. Terraform, Configuration Management, Kubernetes) to manage their infrastructure, which is positive, but shows the disconnect between the use of DevOps tools and actually practicing DevOps (as noted in the previous paragraph).

On the other hand, just over 38% of respondents indicated that they are managing infrastructure manually (e.g. through the console), which means not only are they not practicing DevOps, but they aren’t even managing their infrastructure in a way that will ever be compatible with DevOps… yikes.  The good news is that tools like Terraform allow you to import existing manually deployed infrastructure where it can then be managed as code and handled as “immutable infrastructure.”  Manually deploying anything is a DevOps anti-pattern and must be avoided at all costs.

Aside from infrastructure, we had several questions around application development and deployment as it pertains to DevOps.  Testing code appears to be an area where a majority of respondents are staying proactive in a way that would be beneficial to a DevOps practice.  The question “What is your approach to writing tests?” had the following breakdown of answers:

  • We don’t really test:  10.90%
  • We get to it if/when we have time:  15.20%
  • We require some percentage of code to be covered by tests before it is ready for production:  32.10%
  • We require comprehensive unit and integration testing before code is pushed to production:  31.10%
  • Rigid TDD/BDD/ATDD/STDD approach – write tests first & develop code to meet those test requirements:  10.70%

We can see that around 75% of respondents are doing some form of consistent testing, which will go a long way in helping build out a DevOps practice, but a staggering 25% of respondents have little or no testing of code in place today (ouch!).  Another question “How is application code deployed and managed?” shows that around 30% of respondents are using a completely manual process for application deployment and the remaining 70% are using some form of an automated pipeline.  Again, the 70% is a positive sign for those wanting to embrace DevOps, but there is still a massive chunk at 30% who will have to build out automation around testing, building, and deploying code.

Another important factor in managing services the DevOps way is to have all your environments mirror each other.  In response to the question “How well do your environments (e.g. dev, test, prod) mirror one another?” around 28% of respondents indicated that their environments are managed completely independently of each other.  Another 47% indicated that “they share some portion of code but are not managed through identical code bases and processes,” and the remaining 25% are doing it properly by “managed identically using same code & processes employing variables to differentiate environments.”  Lots of room for improvement in this area when organizations decide they are ready to embrace the DevOps way.

Our last question in the survey was “How are you notified when an application/process/system fails?” and I found the answers a bit staggering.  Over 21% of respondents indicated that they are notified of outages by the end user.  It’s pretty surprising to see that large of a percentage utilizing such a reactionary method of service monitoring.

Another 32% responded that “someone in operation is watching a dashboard,” which isn’t as surprising but will definitely be something that needs to be addressed when shifting to a DevOps approach.  Another 23% are using third-party tools like NewRelic and Pingdom to monitor their apps.  Once again, we have that savvy ~25% group who are currently operating in a way that bodes well for DevOps adoption by answering “Monitoring is built into the pipeline, apps and infrastructure. Notifications are sent immediately.”  The twenty-five-percenters are definitely on the right path if they aren’t already practicing DevOps today.

In summary, we have been able to deduce from our survey that, at best, around 25% of the respondents are actually engaging in a DevOps practice today. For more details on the results of our survey, download our infographic.

-Ryan Kennedy, Principal Cloud Automation Architect


AIG Moves towards DevOps and Cloud

In a blog post this morning, the Head of Enterprise Strategy at AWS, Stephen Orban, shares a personal note he received from Salvatore Saieva, CTO Lead for Public Cloud Projects and Initiatives at American International Group (AIG), about why the company moved away from traditional infrastructure methods and towards DevOps and the Cloud.

In his note to Mr. Orban, Saieva detailed how his infrastructure support team was managing production applications on VCE Vblock equipment and how working with converged technology led them to automation, agile methods, continuous approaches, and, ultimately, DevOps. By using DevOps to collaborate with other teams throughout the company, Salvatore’s IT team led the charge in becoming cloud-ready.

Read more about AIG’s move to DevOps and the cloud on Stephen Orban’s blog.

-Nicole Maus, Marketing Manager


The Infrastructure Side of Digital Marketing – Interview

Pam Scheideler is Partner and Chief Digital Officer with Deutsch, an advertising and digital marketing agency with offices in New York and Los Angeles. The agency’s clients include Volkswagen, Taco Bell, Target, Snapple and many other global brands.

2nd Watch: When working with clients, are IT infrastructure issues overlooked or misunderstood?

Pam Scheideler: One of the trends we have seen as website and ecommerce projects have transitioned from a Waterfall to an Agile software development methodology is that we need more participation from IT and infrastructure providers during the requirements definition and architecture phases. Because the UI and features are evolving based on iterative user testing and business feedback, our infrastructure partners are not working with a static set of specifications. Instead, at the beginning and end of each sprint, we continually validate our infrastructure assumptions with our partners. 2nd Watch understands our iterative design and development process and is able to provide guidance throughout development.

2nd Watch: Recently, your agency helped Taco Bell launch online ordering. How did you choose the technology partners to pull it off? 

Scheideler: Dynamic auto scaling was a big reason we selected AWS and 2nd Watch to be our partners in the solution for Taco Bell. When @katyperry tweets, her 91 million followers are listening, and we have seen huge bursts of unanticipated traffic come from social media mentions for our brands. With large media investments like Super Bowl placements and multiple product launches that can garner billions of media impressions, Taco Bell’s infrastructure is put to the test on a daily basis. So we knew we needed a very flexible and reliable cloud platform and an expert partner like 2nd Watch to design the optimal environment on AWS for these demands.

2nd Watch: How have these challenges become greater in recent years, as customer experience demands become more complex?

Scheideler: Customer expectations are at an all-time high. If you asked the average person if they expect four 9’s in uptime, they probably wouldn’t understand the question. But if you asked them if they expect to be able to order a taco or shop through a messenger bot 24/7, they would say “Of course.”

2nd Watch: What role does the cloud play in digital marketing now?

Scheideler: Cloud-based hosting has absolutely changed our clients’ expectations and put a lot of pressure on IT organizations to deliver. Marketers are expecting systems to scale. It’s the job of marketing to acquire customers and generate demand and it’s the role of IT to help meet the demand and ensure business continuity. Simultaneously, digital business innovation has been exploding, which is great for consumers and the brands we serve. It’s putting IT infrastructure in the middle of emerging products and services.

2nd Watch: What other key technologies are pivotal to help marketing organizations be nimble and also efficient?

Scheideler: System monitoring has really changed the game, especially in companies with complex architectures. Finding the right people is equally important. The 2nd Watch team is always one step ahead and can bring diverse stakeholders together to troubleshoot system performance issues.


New Survey: Enterprise IT Procurement Patterns Favor Cloud Technologies

We’re back with more survey results! Our latest survey of more than 400 IT executives shows that enterprise IT procurement patterns favor cloud technologies, although most execs polled still see themselves as operating “Mode 1” type IT organizations – we’ll get into an explanation of this below. Our Public Cloud Procurement: Packaging, Consumption and Management survey sought to understand the organizational emphasis and strategic focus of modern enterprise IT departments based on the tech services they’re consuming and how much they’re spending.

Gartner refers to Mode 1 organizations as traditional and sequential, emphasizing safety and accuracy and preferring entire solutions over self-service, while Mode 2 organizations are exploratory and nonlinear, emphasize agility and speed, prefer self-service and have a higher tolerance for risk. Going into the survey, we expected most enterprise IT organizations to be bimodal, with their focus split between stability and agility. The results confirmed our expectations – bimodal IT is common for modern IT organizations.

Here are some of our findings:

  • 71% of respondents reported being Mode 1 IT organizations.
  • 72% of respondents emphasize sequential processes and linear relationships (Mode 1) over short feedback loops and clustered relationships (Mode 2) for IT delivery.
  • 65% said plan-driven / top-down decision making best represented their planning processes – a Mode 1 viewpoint.

However, respondents also showed considerable interest in public cloud technologies and outsourced management for those services:

  • 89% of respondents use AWS, Google Compute Engine or Microsoft Azure.
  • 39% have dedicated up to 25% of total IT spend to public cloud.
  • 43% spend at least half of their cloud service budget on AWS.

Many respondents found the process of buying, consuming and managing public cloud services difficult. A large majority would pay a premium if that process of buying public cloud was easier, and 40% went so far as to say they’d be willing to pay 15% over cost for the benefit of an easier process.

Read the full survey results or download the infographic for a visual representation.