Big Data and Machine Learning Services Lead the Way
If you’ve been reading this blog, or otherwise following the enterprise tech market, you know that the worldwide cloud services market is strong. According to Gartner, the market is projected to grow by 17% in 2019, to over $206 billion.
Within that market, enterprise IT departments are embracing cloud infrastructure and related services like never before. They’re attracted to tools and technologies that enable innovation, cost savings, faster time to market for new digital products and services, flexibility and productivity. They want to be able to scale their infrastructure up and down as the situation warrants, and they’re enamored with the idea of “digital transformation.”
In its short history, cloud infrastructure has never been more exciting. At 2nd Watch, we are fortunate to have a front-row seat to the show, with more than 400 enterprise workloads under management and over 200,000 instances in our managed public cloud. With 2018 now in our rearview mirror, we thought this a good time for a quick peek back at the most popular Amazon Web Services (AWS) products of the past year. We aggregated and anonymized our AWS customer data from 2018, and here’s what we found:
The top five AWS products of 2018 were: Amazon Virtual Private Cloud (used by 100% of 2nd Watch customers); AWS Data Transfer (100%); Amazon Simple Storage Service (100%); Amazon DynamoDB (100%); and Amazon Elastic Compute Cloud (100%). Frankly, the top five list isn’t surprising. It is, however, indicative of legacy workloads and architectures being run by the enterprise.
Meanwhile, the fastest-growing AWS products of 2018 were: Amazon Athena (68% CAGR, as measured by dollars spent on this service with 2nd Watch in 2018 v. 2017); Amazon Elastic Container Service for Kubernetes (53%); Amazon MQ (37%); AWS OpsWorks (23%); Amazon EC2 Container Service (21%); Amazon SageMaker (21%); AWS Certificate Manager (20%); and AWS Glue (16%).
The growth in data services like Athena and Glue, correlated with SageMaker, is interesting. Often the hype isn’t supported by the data, but here customers clearly are moving forward with big data and machine learning strategies. These three services were also the fastest-growing services in Q4 2018.
Looking ahead, I expect EKS to be huge this year, along with SageMaker and serverless. Based on job postings and demand in the market, Kubernetes is the most requested skill set in the enterprise. For a look at the other AWS products and services that rounded out our list for 2018, download our infographic.
– Chris Garvey, EVP Product
We recently conducted a DevOps poll of 1,000 IT professionals to get a pulse for where the industry sits regarding the adoption and completeness of vision around DevOps. The results were pretty interesting, and overall we can deduce that a large majority of the organizations surveyed are not truly practicing DevOps. Part of this may be due to a lack of clarity on what DevOps really is. I’ll take a second to summarize it as succinctly as possible here.
DevOps is the practice of operations and development engineers participating together in the entire service lifecycle, from design through the development process to production support. This includes, but is not limited to, the culture, tools, organization, and practices required to accomplish this amalgamated methodology of delivering IT services.
In order to practice DevOps you must be in a DevOps state of mind and embrace its values and mantras unwaveringly.
The first thing that jumped out at me from our survey was the responses to the question “Within your organization, do separate teams manage infrastructure/operations and application development?” 78.2% of respondents answered “Yes” to that question. Truly practicing DevOps requires that the infrastructure and applications are managed within the context of the same team, so we can deduce that at least 78.2% of the respondents’ companies are not truly practicing DevOps. Perhaps they are using some infrastructure-as-code tools, some forms of automation, or even have CI/CD pipelines in place, but those things alone do not define DevOps.
Speaking of infrastructure-as-code… Another question, “How is your infrastructure deployed and managed?” had nearly 60% of respondents answering that they utilize infrastructure-as-code tools (e.g., Terraform, configuration management, Kubernetes) to manage their infrastructure. That’s positive, but it underscores the disconnect between using DevOps tools and actually practicing DevOps (as noted in the previous paragraph). On the other hand, just over 38% of respondents indicated that they manage infrastructure manually (e.g., through the console), which means not only are they not practicing DevOps, they aren’t even managing their infrastructure in a way that will ever be compatible with DevOps… yikes. The good news is that tools like Terraform allow you to import existing, manually deployed infrastructure so it can then be managed as code and treated as “immutable infrastructure.” Manually deploying anything is a DevOps anti-pattern and should be avoided at all costs.
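As a rough sketch of what that Terraform import workflow looks like in practice (the resource name `aws_instance.web` and the instance ID here are hypothetical placeholders, not real infrastructure):

```shell
# Hypothetical example: bring a manually created EC2 instance under
# Terraform management. Resource address and instance ID are placeholders.

# 1. Declare a matching (initially minimal) resource in your configuration:
cat >> main.tf <<'EOF'
resource "aws_instance" "web" {
  # attributes back-filled after import, based on `terraform show`
}
EOF

# 2. Import the existing instance into Terraform state:
terraform import aws_instance.web i-0abcd1234efgh5678

# 3. Inspect the imported state and fill in the configuration to match:
terraform show
```

From that point on, changes are made through code and `terraform plan`/`terraform apply`, never the console.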
Aside from infrastructure we had several questions around application development and deployment as it pertains to DevOps. Testing code appears to be an area where a majority of respondents are staying proactive in a way that would be beneficial to a DevOps practice. The question “What is your approach to writing tests?” had the following breakdown on its answers:
- We don’t really test: 10.90%
- We get to it if/when we have time: 15.20%
- We require some percentage of code to be covered by tests before it is ready for production: 32.10%
- We require comprehensive unit and integration testing before code is pushed to production: 31.10%
- Rigid TDD/BDD/ATDD/STDD approach – write tests first & develop code to meet those test requirements: 10.70%
We can see that around 75% of respondents are doing some form of consistent testing, which will go a long way in helping build out a DevOps practice, but a staggering 25% of respondents have little or no testing of code in place today (ouch!). Another question “How is application code deployed and managed?” shows that around 30% of respondents are using a completely manual process for application deployment and the remaining 70% are using some form of an automated pipeline. Again, the 70% is a positive sign for those wanting to embrace DevOps, but there is still a massive chunk at 30% who will have to build out automation around testing, building, and deploying code.
Another important factor in managing services the DevOps way is to have all your environments mirror each other. In response to the question “How well do your environments (e.g. dev, test, prod) mirror one another?” around 28% of respondents indicated that their environments are managed completely independently of each other. Another 47% indicated that their environments “share some portion of code but are not managed through identical code bases and processes,” and the remaining 25% are doing it properly: their environments are “managed identically using the same code & processes, employing variables to differentiate environments.” Lots of room for improvement in this area when organizations decide they are ready to embrace the DevOps way.
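A minimal sketch of that “same code, variables to differentiate” pattern, using Terraform as one possible tool (the file names and variable values are hypothetical):

```shell
# Hypothetical layout: a single Terraform codebase shared by every
# environment, with a per-environment variable file.
#
#   main.tf        shared infrastructure code for all environments
#   dev.tfvars     e.g. instance_type = "t3.small",  env = "dev"
#   prod.tfvars    e.g. instance_type = "m5.large",  env = "prod"

terraform plan  -var-file=dev.tfvars    # preview dev with dev-sized values
terraform apply -var-file=prod.tfvars   # deploy prod from the same code
```

Because every environment runs through identical code and processes, dev and test genuinely mirror prod, differing only in the variable values.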
Our last question in the survey was “How are you notified when an application/process/system fails?” and I found the answers a bit alarming. Over 21% of respondents indicated that they are notified of outages by the end user. It’s pretty surprising to see that large a percentage relying on such a reactionary method of service monitoring. Another 32% responded that “someone in operations is watching a dashboard,” which isn’t as surprising but will definitely need to be addressed when shifting to a DevOps approach. Another 23% are using third-party tools like New Relic and Pingdom to monitor their apps. Once again, we have that savvy ~25% group who are already operating in a way that bodes well for DevOps adoption by answering “Monitoring is built into the pipeline, apps and infrastructure. Notifications are sent immediately.” The twenty-five-percenters are definitely on the right path if they aren’t already practicing DevOps today.
In summary, we have been able to deduce from our survey that, at best, around 25% of the respondents are actually engaging in a DevOps practice today. For more details on the results of our survey, download our infographic.
-Ryan Kennedy, Principal Cloud Automation Architect
In a blog post this morning, the Head of Enterprise Strategy at AWS, Stephen Orban, shares a personal note he received from Salvatore Saieva, CTO Lead for Public Cloud Projects and Initiatives at American International Group (AIG), about why the company moved away from traditional infrastructure methods and toward DevOps and the cloud.
In his note to Mr. Orban, Saieva detailed how his infrastructure support team was managing production applications on VCE Vblock equipment and how working with converged technology led them to automation, agile methods, continuous approaches, and, ultimately, DevOps. By using DevOps to collaborate with other teams throughout the company, Saieva’s IT team led the charge in becoming cloud-ready.
Read more about AIG’s move to DevOps and the cloud on Stephen Orban’s blog.
-Nicole Maus, Marketing Manager
Pam Scheideler is Partner and Chief Digital Officer with Deutsch, an advertising and digital marketing agency with offices in New York and Los Angeles. The agency’s clients include Volkswagen, Taco Bell, Target, Snapple and many other global brands.
2nd Watch: When working with clients, are IT infrastructure issues overlooked or misunderstood?
Pam Scheideler: One of the trends we have seen as website and ecommerce projects have transitioned from a Waterfall to an Agile software development methodology is that we need more participation from IT and infrastructure providers during the requirements definition and architecture phases. Because the UI and features are evolving based on iterative user testing and business feedback, our infrastructure partners are not working with a static set of specifications. Instead, at the beginning and end of each sprint, we continually validate our infrastructure assumptions with our partners. 2nd Watch understands our iterative design and development process and is able to provide guidance throughout development.
2nd Watch: Recently, your agency helped Taco Bell launch online ordering. How did you choose the technology partners to pull it off?
Scheideler: Dynamic auto scaling was a big reason we selected AWS and 2nd Watch to be our partners in the solution for Taco Bell. When @katyperry tweets, her 91 million followers are listening, and we have seen huge bursts of unanticipated traffic come from social media mentions for our brands. With large media investments like Super Bowl placements and multiple product launches that can garner billions of media impressions, Taco Bell’s infrastructure is put to the test on a daily basis. So we knew we needed a very flexible and reliable cloud platform and an expert partner like 2nd Watch to design the optimal environment on AWS for these demands.
2nd Watch: How have these challenges become greater in recent years, as customer experience demands become more complex?
Scheideler: Customer expectations are at an all-time high. If you asked the average person whether they expect four nines of uptime, they probably wouldn’t understand the question. But if you asked them whether they expect to be able to order a taco or shop through a messenger bot 24/7, they would say “Of course.”
2nd Watch: What role does the cloud play in digital marketing now?
Scheideler: Cloud-based hosting has absolutely changed our clients’ expectations and put a lot of pressure on IT organizations to deliver. Marketers are expecting systems to scale. It’s the job of marketing to acquire customers and generate demand and it’s the role of IT to help meet the demand and ensure business continuity. Simultaneously, digital business innovation has been exploding, which is great for consumers and the brands we serve. It’s putting IT infrastructure in the middle of emerging products and services.
2nd Watch: What other key technologies are pivotal to help marketing organizations be nimble and also efficient?
Scheideler: System monitoring has really changed the game, especially in companies with complex architectures. Finding the right people is equally important. The 2nd Watch team is always one step ahead and can bring diverse stakeholders together to troubleshoot system performance issues.
We’re back with more survey results! Our latest survey of more than 400 IT executives shows that enterprise IT procurement patterns favor cloud technologies, although most execs polled still see themselves as operating “Mode 1” type IT organizations – we’ll get into an explanation of this below. Our Public Cloud Procurement: Packaging, Consumption and Management survey sought to understand the organizational emphasis and strategic focus of modern enterprise IT departments based on the tech services they’re consuming and how much they’re spending.
Gartner refers to Mode 1 organizations as traditional and sequential, emphasizing safety and accuracy and preferring entire solutions over self-service, while Mode 2 organizations are exploratory and nonlinear, emphasize agility and speed, prefer self-service and have a higher tolerance for risk. Going into the survey, we expected most enterprise IT organizations to be bimodal, with their focus split between stability and agility. The results confirmed our expectations – bimodal IT is common for modern IT organizations.
Here are some of our findings:
- 71% of respondents reported being a Mode 1 IT organization.
- 72% of respondents emphasize sequential processes and linear relationships (Mode 1) over short feedback loops and clustered relationships (Mode 2) for IT delivery.
- 65% said plan-driven / top-down decision making best represented their planning processes – a Mode 1 viewpoint.
However, respondents also showed considerable interest in public cloud technologies and outsourced management for those services:
- 89% of respondents use AWS, Google Compute Engine or Microsoft Azure.
- 39% have dedicated up to 25% of total IT spend to public cloud.
- 43% spend at least half of their cloud service budget on AWS.
Many respondents found the process of buying, consuming and managing public cloud services difficult. A large majority would pay a premium if that process were easier, and 40% went so far as to say they’d be willing to pay 15% over cost for the benefit of an easier process.
Read the full survey results or download the infographic for a visual representation.
If you’re an old hand in IT (that is, someone with at least 5 years under your belt), making a switch to the cloud could be the best decision you’ll ever make. The question is, do you have what it takes?
Job openings in the cloud computing industry are everywhere – Amazon alone lists 17 different categories of positions related to AWS on its website, and by one estimate there are nearly four million cloud computing jobs just in the United States. But cloud experts, especially architects, are a rare breed. Cloud is a relatively new platform, which limits the number of qualified personnel. Finding people with the right skillsets can be hard.
Cloud architects are experts in designing, running and managing applications in the cloud, leading DevOps teams and working with multiple cloud vendors. A senior cloud architect oversees the broad strategy of a company’s investment in the cloud, and is responsible for managing and delivering ROI from cloud investments and continually aligning with business objectives.
Yet being a cloud architect is not simply about understanding new technologies and features as they come off the conveyor belt. Beyond dealing with rapid technological change, you’ve got to have some creativity and business acumen. If you are fiercely independent and don’t enjoy a little schmoozing with business colleagues and chatting up vendors, this probably is not a good career choice for you. If you don’t like things changing frequently and problem-solving, you may suffer from recurring anxiety attacks.
In talking to customers, we’ve come up with a list of the top non-techie skills that every cloud architect should have. Here are the top 10:
- Strategic problem-solving skills
- Security & compliance experience
- Ability to balance trade-offs with agility
- Business and accounting knowledge
- Customer experience focus
- Deploy & destroy mentality
- Adept negotiation and communications skills
- Ability to solve problems with an eye for the future
- Understanding of platform integrations
- Ability to evolve with the business
In short, cloud architects are like great companions: once you have one, hold on and never let them go. Check out the infographic for a complete mapping of the perfect cloud architect!
-Jeff Aden, EVP Business Development & Marketing