Data is the lifeblood of business. Helping companies visualize their data, guide business decisions, and enhance their operations often requires machine learning services. But where to begin? Today, tremendous amounts of data are created by companies worldwide, often in disparate systems.
These large amounts of data, while helpful, don’t necessarily need to be processed immediately, yet need to be consolidated into a single source of truth to enable business value. Companies are faced with the issue of finding the best way to securely store their raw data for later use. One popular type of data store is referred to as a “data lake,” and it is very different from the traditional data warehouse.
Use Case: Data Lakes and McDonald’s
McDonald’s brings in about 1.5 million customers each day, creating 20-30 new data points with each of their transactions. The restaurant’s data comes from multiple data sources including a variety of data vendors, mobile apps, loyalty programs, CRM systems, etc. With all this data from various sources, the company wanted to build a complete perspective of customer lifetime value (CLV) and other useful analytics. To meet their needs for data collection and analytics, McDonald’s France partnered with 2nd Watch to build a data lake. The data lake allowed McDonald’s to ingest data into one source, reducing the effort required to manage and analyze their large amounts of data.
Due to their transition from a data warehouse to a data lake, McDonald’s France has greater visibility into the speed of service, customer lifetime value, and conversion rates. With an enhanced view of their data, the company can make better business decisions to improve their customers’ experience. So, what exactly is a data lake, how does it differ from a data warehouse, and how do they store data for companies like McDonald’s France?
What is a Data Lake?
A data lake is a centralized storage repository that holds a vast amount of raw data in its native format until it is needed for use. A data lake can include any combination of:
- Structured data: highly organized data from relational databases
- Semi-structured data: data with some organizational properties, such as HTML
- Unstructured data: data without a predefined data model, such as email
Data Lakes are often mistaken for Data Warehouses, but the two data stores cannot be used interchangeably. Data Warehouses, the more traditional data store, process and store your data for analytical purposes. Filtering data through data warehouses occurs automatically, and the data can arrive from multiple locations. Data lakes, on the other hand, store and centralize data that comes in without processing it. Thus, there is no need to identify a specific purpose for the data as with a data warehouse environment. Your data, whether in its original form or curated form, can be stored in a data lake. Companies often choose a data lake for its flexibility in supporting any type of data, its scalability, its analytics and machine learning capabilities, and its low cost.
While Data Warehouses are appealing for their element of automatically curated data and fast results, data lakes can lead to several areas of improvement for your data and business including:
- Improved customer interactions
- Improved R&D innovation choices
- Increased operational efficiencies
Essentially, a piece of information stored in a data lake will seem like a small drop in a big lake. Due to the lack of organization and security that tends to occur when storing large quantities of data in data lakes, this storage method has received some criticism. Additionally, setting up a data lake can be time- and labor-intensive, often taking months to complete. This is because, when built the traditional way, a series of steps must be completed and then repeated for different data sets.
Even once fully architected, there can be errors in the setup due to your data lakes being manually configured over an extended period. An important piece to your data lake is a data catalog, which uses machine learning capabilities to recognize data and create a universal schema when new datasets come into your data lake. Without defined mechanisms and proper governance, your data lake can quickly become a “data swamp”, where your data becomes hard to manage, analyze, and ultimately becomes unusable. Fortunately, there is a solution to all these problems. You can build a well-architected data lake in a short amount of time with AWS Lake Formation.
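To make “universal schema” more concrete: a data catalog inspects incoming records and infers column names and types so downstream tools can query the data consistently. The toy Python sketch below shows only the bare idea; real catalogs such as the AWS Glue Data Catalog used by Lake Formation do this at scale with crawlers and machine learning, and the records and type names here are invented purely for illustration.

```python
# Toy illustration of schema inference, the core idea behind a data catalog.
# A real catalog (e.g., the AWS Glue Data Catalog) does this at scale with
# crawlers; this sketch only demonstrates the concept on invented records.

def infer_schema(records):
    """Infer a simple column -> type mapping from a list of dicts."""
    schema = {}
    for record in records:
        for column, value in record.items():
            if isinstance(value, bool):  # check bool before int (bool is a subclass)
                inferred = "boolean"
            elif isinstance(value, int):
                inferred = "integer"
            elif isinstance(value, float):
                inferred = "float"
            else:
                inferred = "string"
            # Widen to "string" when two records disagree about a column's type.
            if schema.get(column, inferred) != inferred:
                inferred = "string"
            schema[column] = inferred
    return schema

rows = [
    {"store_id": 101, "city": "Paris", "open": True},
    {"store_id": 102, "city": "Lyon", "open": False},
]
print(infer_schema(rows))
# → {'store_id': 'integer', 'city': 'string', 'open': 'boolean'}
```

A catalog built this way gives every new dataset a queryable shape on arrival, which is exactly the governance step that keeps a lake from becoming a swamp.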
AWS Lake Formation & its Benefits
Traditionally, data lakes were set up as on-premises deployments before people realized the value and security provided by the cloud. These on-premises environments required continual adjustments for things like optimization and capacity planning—which is now easier due to cloud services like AWS Lake Formation. Deploying data lakes in the cloud provides scalability, availability, security, and faster time to build and deploy your data lake.
AWS Lake Formation is a service that makes it easy to set up a secure data lake in days, saving your business time and effort to focus on other priorities. While AWS Lake Formation significantly cuts down the time it takes to set up your data lake, the lake is still built and deployed securely. Additionally, AWS Lake Formation enables you to break down data silos and combine a variety of analytics to gain data insights and ultimately guide better business decisions. The benefits delivered by this AWS service are:
- Build data lakes quickly: To build a data lake in Lake Formation, you simply need to import data from databases already in AWS, other AWS sources, or external sources. Data stored in Amazon S3, for example, can be moved into your data lake, where you crawl, catalog, and prepare your data for analytics. Lake Formation also helps transform data with AWS Glue to prepare it for quality analytics. Additionally, with AWS’s FindMatches, data can be cleaned and deduplicated to simplify your data.
- Simplify security management: Security management is simpler with Lake Formation because it provides automatic server-side encryption, a secure foundation for your data. Security settings and access controls can also be configured to ensure high-level security. Once configured with rules, Lake Formation enforces your access controls. With Lake Formation, your security and governance standards will be met.
- Provide self-service access to data: With large amounts of data in your data lake, finding the data you need for a specific purpose can be difficult. Through Lake Formation, your users can search for relevant data using custom fields such as name, contents, and sensitivity to make discovering data easier. Lake Formation can also be paired with AWS analytics services, such as Amazon Athena, Amazon Redshift, and Amazon EMR. For example, queries can be run through Amazon Athena using data that is registered with Lake Formation.
Building a data lake is one hurdle but building a well-architected and secure data lake is another. With Lake Formation, building and managing data lakes is much easier. On a secure cloud environment, your data will be safe and easy to access.
2nd Watch has been recognized as a Premier Consulting Partner by AWS for nearly a decade and our engineers are 100% certified on AWS. Contact us to learn more about AWS Lake Formation or to get assistance building your data lake.
-Tessa Foley, Marketing
Post 2020, how are you approaching the cloud? The rapid and unexpected digital transformation of 2020 forced enterprises worldwide to quickly mobilize workers using cloud resources. Now, as the world returns to an altered normal, it’s time for organizations to revisit their cloud infrastructure components with a fresh perspective. Hybrid work environments, industry transformations, changing consumer behavior, and growing cyber threats have all affected the way we do business. Now might be the time to change your cloud.
Risk mitigation at scale
Avoiding potential missteps in your strategy requires both wide and narrow insights. With the right cloud computing infrastructure, network equipment, and operating systems, organizations can achieve better risk mitigation and management with cloud scalability. As you continue to pursue business outcomes, you have to solve existing problems, as well as plan for the future. Some of these problems include:
- Scaling your cloud platform and infrastructure services quickly to keep up with increasing and/or unexpected demand.
- Maximizing cloud computing services and computing power to accommodate storage, speed, and resource demands.
- Prioritizing new and necessary investments and delivery models within a fixed budget.
- Innovating faster to remain, or gain, competitive advantage.
Overall, to avoid risk, you need to gain efficiency, and that’s what the cloud can do. Cloud infrastructure, applications, and Software as a Service (SaaS) solutions are designed to decrease input, and increase output and effectiveness. The scalability of cloud services allows enterprises to continue growing and innovating, without requiring heavy investments. With continuous cloud optimization, you’re positioned to adapt, innovate, and succeed regardless of the unknown future.
Application modernization for data leverage
Much of the digital transformation started with infrastructure modernization and the development of IaaS as a base line. Now, application modernization is accelerating alongside a changing migration pattern. What used to be simply ‘lift and shift’ is now ‘lift and evolve.’ Enterprises want to collaborate with cloud experts to gain a deeper understanding of applications as they become more cloud native. With a constant pipeline of new applications and services, organizations need guidance to avoid cloud cost sprawl and streamline environment integration.
As application modernization continues, organizations are gaining access to massive amounts of data that are enabling brand new opportunities. This requires a new look at database architectures to make sure you’re unlocking value internally and potentially, externally. While application modernization and database architecture are interconnected, they can also transform separately. We’re starting to see people recognize the importance of strategic cloud transformations that include the entire data footprint – whether it’s the underlying architecture, or the top level analytics.
Organizations are getting out of long-term licensing agreements, monetizing their data, gaining flexibility, cutting costs, and driving innovation, customer value, and revenue. Data is pulled from, and fed into, a lot of different applications within constantly changing cloud environments, which brings both challenges and opportunities. Enterprises must transform from this to that, but the end goal is constantly changing as well. Therefore continuous motion is necessary within the digital transformation.
Changing core business strategies
One thing is for sure about the digital transformation – it’s not slowing down. Most experts agree that even after pandemic safety precautions are eliminated, the digital transformation will continue to accelerate. After seeing the speed of adoption and opportunities in the cloud, many enterprises are reevaluating the future with new eyes. Budgets for IT are expanding, but so are the IT skills gap and cybersecurity incidents. These transitions present questions in a new light, and enterprises should revisit their answers.
- Why do you still have your own physical data center?
- What is the value in outsourcing? And insourcing?
- How has your risk profile changed?
- How does data allow you to focus on your core business strategy?
Answering these questions has more enterprises looking to partner with, and learn from, cloud experts – as opposed to just receiving services. Organizations want and need to work alongside cloud partners to close the skills gap within their enterprise, gain skills for internal expansion in the future, and better understand how virtualized resources can improve their business. It’s also a way to invest in your employees to reduce turnover and encourage long-term loyalty.
Security and compliance
At this point with security, compliance, and ensuring business continuity, enterprises must have solutions in place. There is no other way. Ransomware and phishing attacks have been rising in sophistication and frequency year-over-year, with a noticeable spike since remote work became mainstream. Not only does your internal team need constant training and regular enforcement of governance policies, but there’s a larger emphasis on how your network protections are set up.
Regardless of automation and controls, people will make mistakes, and there is inherent risk in any human activity. In fact, human error is the leading cause of data loss, with approximately 88% of all data breaches caused by an employee mistake. Unfortunately, a breach is often made possible by your internal team. Typically, it’s the manner in which the cloud is configured or architected that creates a loophole for bad actors. It’s not that the public cloud isn’t secure or compliant; it’s that it isn’t set up properly. This is why many enterprises are outsourcing data protection to avoid damaging compliance penalties, guarantee uninterrupted business continuity, and maintain the security of sensitive data after malicious or accidental deletion, a natural disaster, or the loss, theft, or damage of a device.
Next steps: Think about day two
Enterprises who think of cloud migration as a one-and-done project – we were there, and now we’re here – aren’t ready to make the move. The cloud is not the answer. The cloud is an enabler to help organizations get the answers necessary to move in the direction they desire. There are risks associated with moving to the cloud – tools can distract from goals, system platforms need support, load balancers have to be implemented, and the cloud has to be leveraged and optimized to be beneficial long-term. Without strategizing past the migration, you won’t get the anticipated results.
It can seem overwhelming to take on the constantly changing cloud (and it certainly can be), but you don’t have to do it alone! Keep up with the pace and innovation of the digital transformation, while focusing on what you do best – growing your enterprise – by letting the experts help. 2nd Watch has a team of trusted cloud advisors to help you navigate cloud complexities for successful and ongoing cloud modernization. As an Amazon Web Services (AWS) Premier Partner, a Microsoft Azure Gold Partner, and a Google Cloud Partner with over 10 years’ experience, 2nd Watch provides ongoing advisory services to some of the largest companies in the world. Contact Us to take the next step in your cloud journey!
-Michael Elliott, Director of Marketing
Cloud adoption is becoming more popular across all industries, as the cloud has proven to be a reliable, efficient, and more secure way to deliver software services. As cloud adoption increases, companies are faced with the challenge of managing these new environments and their operations, which ultimately impacts day-to-day business. Not only are IT professionals juggling their everyday work activities with managing their company’s cloud platforms, but they must do so in a timely, cost-efficient manner. Often, this requires hiring and training additional IT people—resources that are getting more and more difficult to find.
This is where a managed cloud service provider, like 2nd Watch, comes in.
What is a Managed Cloud Service Provider?
Managing your cloud operations on your own can seem like a daunting, tedious task that distracts from strategic business goals. A cloud managed service provider (MSP) monitors and maintains your cloud environments relieving IT from the day-to-day cloud operations, ensuring your business operates efficiently. This is not to say IT professionals are incapable of performing these responsibilities, but rather, outsourcing allows the IT professionals within your company to concentrate on the strategic operations of the business. In other words, you do what you do best, and the service provider takes care of the rest.
The alternative to an MSP is hiring and developing within your company the expertise necessary to keep up with the rapidly evolving cloud environment and cloud native technologies. Doing it yourself means factoring in a hiring process, training, and payroll costs. While possible, maintaining your cloud environments internally might not be the most feasible option in the long run. Additionally, a private cloud environment can be costly and requires that your applications be handled internally. Migrating to the public cloud or adopting a hybrid cloud model gives companies flexibility, as either allows a service provider partial or full control of their network infrastructure.
What are Managed Cloud Services?
Managed cloud services are the IT functions you give your service provider to handle, while still allowing you to handle the functions you want. Some examples of the management that service providers offer include:
- Managed cloud database: A managed database puts some of your company’s most valuable assets and information into the hands of a complete team of experienced Database Administrators (DBAs). DBAs are available 24/7/365 to perform tasks such as database health monitoring, database user management, capacity planning and management, etc.
- Managed cloud security services: The public cloud has many benefits, but with it also comes security risks. Security management is another important MSP service to consider for your business. A cloud managed service provider can prevent and detect security threats before they occur, while fully optimizing the benefits provided by a cloud environment.
- Managed cloud optimization: The cloud can be costly, but only as costly as you allow it to be. An MSP can optimize cloud spend through consulting, implementation, tools, reporting services, and remediation.
- Managed governance & compliance: Without proper governance, your organization can be exposed to security vulnerabilities. Should a disaster occur within your business, such as a cyberattack on a data center, MSPs offer disaster recovery services to minimize recovery downtime and data loss. A managed governance and compliance service with 2nd Watch helps your Chief Security and Compliance Officers maintain visibility and control over your public cloud environment to help achieve on-going, continuous compliance.
At 2nd Watch, our foundational services include a fully managed cloud environment with 24/7/365 support and industry-leading SLAs. Our foundational services address the key needs to better manage spend, utilization, and operations.
What are the Benefits of a Cloud Managed Service Provider?
Using a Cloud Managed Service Provider comes with many benefits if you choose the right one.
Some of these benefits include, but are not limited to:
- Cost savings: MSPs have experts that know how to efficiently utilize the cloud, so you get the most out of your resources while reducing cloud computing costs.
- Increased data security: MSPs ensure proper safeguards are utilized while proactively monitoring and preventing potential threats to your security.
- Increased employee production: With less time spent managing the cloud, your IT managers can focus on the strategic business operations.
- 24/7/365 management: Not only do MSPs take care of cloud management for you, but they do so 100% of the time.
- Overall business improvement: When your cloud infrastructure is managed by a trusted cloud advisor, they can optimize your environments while simultaneously allowing time for you to focus on core business operations. They can also recommend cloud native solutions to further support the business agility required to compete.
Why Our Cloud Management Platform?
With cloud adoption increasing in popularity, choosing a managed cloud service provider to help with this process can be overwhelming. While there are many options, choosing one you can trust is important to the success of your business. 2nd Watch provides multi-cloud management across AWS, Azure, and GCP, and places special emphasis on putting our customers before the cloud. Additionally, we use industry-standard, cloud native tooling to prevent platform lock-in.
The solutions we create at 2nd Watch are tailored to your business needs, creating a large and lasting impact on our clients. For example:
- On average, 2nd Watch saves customers 41% more than if they managed the cloud themselves (based on customer data)
- Customers experience increased efficiency in launching applications, adding an average of 240 hours of productivity per year for your business
- On average, we save customers 21% more than our competitors
2nd Watch helps customers at every step in their cloud journey, whether that’s cloud adoption or optimizing your current cloud environment to reduce costs. We can effectively manage your cloud, so you don’t have to. Contact us to get the most out of your cloud environment with a managed cloud service provider you can trust.
-Tessa Foley, Marketing
A lot of enterprises migrate to the public cloud because they see everyone else doing it. And while you should stay up on the latest and greatest innovations – which often happen in the cloud – you need to be aware of the realities of the cloud. You need to know why you’re moving to the cloud. What’s your goal? And what outcomes are you seeking? Make sure you know what you’re getting your enterprise into before moving forward in your cloud journey.
1. Cloud technology is not a project, it’s a constant
Be aware that while there is a starting point to becoming more cloud native – the migration – there is no stopping point. The migration occurs, but the transformation, development, innovation, and optimization is never over.
There are endless applications and tools to consider, your organization will evolve over time, technology changes regularly, and user preferences change even faster. Fueled by your new operating system, cloud computing puts you into continuous motion. While continuous motion is positive for outcomes, you need to be ready to ride the wave regardless of where it goes. Once you get on, success requires that you stay there.
2. Flex-agility is necessary for survival
Flexibility + agility = flex-agility, and you need it in the cloud. Flex-agility enables enterprises to adapt to the risks and unknowns occurring in the world. The pandemic continues to highlight the need for flex-agility in business. Organizations further along in their cloud journeys were able to quickly establish remote workforces, adjust customer interactions, communicate completely and effectively, and ultimately, continue running. While the pandemic was unprecedented, more commonly, flex-agility is necessary in natural disasters like floods, hurricanes, and tornadoes; after a ransomware or phishing attack; or when an employee’s device is lost, stolen, or destroyed.
3. You still have to move faster than the competition
Gaining or maintaining your competitive edge in the cloud has a lot to do with speed. Whether it’s the dog-eat-dog nature of your industry, macroeconomics, or a political environment, these are the things that speed up innovation. You might not have any control over these things, but they’re shaping the way consumers interact with brands. Again, when you think about how the digital transformation evolved during the pandemic, you saw winning businesses move the fastest. The cloud is an amazing opportunity to meet all the demands of your environment, but if you’re not looking forward, forecasting trends, and moving faster than the competition, you could fall behind.
4. People are riskier than technology
In many ways, the technology is the easiest part of an enterprise cloud strategy. It’s the people where a lot of risk comes into play. You may have a great strategy with clean processes and tactics, but if the execution is poor, the business can’t succeed. A recent survey revealed that 85% of organizations report deficits in cloud expertise, with the top three areas being cloud platforms, cloud native engineering, and security. While business owners acknowledge the importance of these skills, they’re still struggling to attract the caliber of talent necessary.
In addition to partnering with cloud service experts to ensure a capable team, organizations are also reinventing their technical culture to work more like a startup. This can incentivize the cloud-capable with hybrid work environments, an emphasis on collaboration, use of the agile framework, and fostering innovation.
5. Cost-savings is not the best reason to migrate to the cloud
Buy-in from executives is key for any enterprise transitioning to the cloud. Budget and resources are necessary to continue moving forward, but the business value of a cloud transformation isn’t cost savings. Really, it’s about repurposing dollars to achieve other things. At the end of the day, companies are focused on getting customers, keeping customers, and growing customers, and that’s what the cloud helps to support.
By innovating products and services in a cloud environment, an organization is able to give customers new experiences, sell them new things, and delight them with helpful customer service and a solid user experience. The cloud isn’t a cost center, it’s a business enabler, and that’s what leadership needs to hear.
6. Cloud migration isn’t always the right answer
Many enterprises believe that the process of moving to the cloud will solve all of their problems. Unfortunately, the cloud is just the most popular technology platform today. Sure, it can help you reach your goals with easy-to-use functionality, automated tools, and modern business solutions, but it takes effort to utilize and apply those resources for success.
For most organizations, moving to the cloud is the right answer, but it could be the wrong time. The organization might not know how it wants to utilize cloud functionality. Maybe outcomes haven’t been identified yet, the business strategy doesn’t have buy-in from leadership, or technicians aren’t aware of the potential opportunities. Another issue stalling cloud migration is internal cloud-based expertise. If your technicians aren’t cloud savvy enough to handle all the moving parts, bring on a collaborative cloud advisor to ensure success.
Ready for the next step in your cloud journey?
Cloud Advisory Services at 2nd Watch provide you with the cloud solution experts necessary to reduce complexity and provide impartial guidance throughout migration, implementation, and adoption. Whether you’re just curious about the cloud, or you’re already there, our advanced capabilities support everything from platform selection and cost modeling, to app classification, and migrating workloads from your on-premises data center. Contact us to learn more!
Lisa Culbert, Marketing
If the pandemic and our business applications have one thing in common, it’s the difficulty in preparing for the future. Just as we could not foresee the onset of the virus, we cannot always precisely determine the capacity required to run our applications effectively, no matter how much we plan.
When demand exceeds your application’s ability and capacity to run efficiently, it’s time to scale.
What is scalability?
Scalability is an application’s ability to increase or decrease overall support and performance in response to the changes in demand. For example, how your company’s website might respond to an increase in visitors is dependent on your application’s scalability. When met with this demand, you want to make sure your application can handle the increase so that it functions properly. Scalability has its limits, and scaling is increasing the capacity of those limits.
The question is: is scaling up or scaling out the right choice for your business?
What is vertical vs. horizontal scaling?
There are two different ways to scale: vertical scaling and horizontal scaling. Vertical scaling, also known as scaling up, is adding more power, or increasing the capacity of a single machine or server for better performance.
For example, you can scale up by adding resources such as CPU, RAM, or disk capacity to give your existing machine more processing power. In cloud terms, this translates into increasing the instance type for your application. In the short term, vertical scaling creates a bigger, better machine for an application to run on. Additionally, vertical scaling keeps data consistent, as your data is stored on a single node/instance. One caveat to scaling up, however, is that there is a limit to the amount of hardware that can be added to a single machine. Vertical scaling also introduces the potential for hardware failures. Vertical scaling is easy in the sense that additions are only made to the existing machine, but is easier better? Not necessarily.
Horizontal scaling, or scaling out, is when you add more machines or servers to your existing pool of resources. In cloud terms, this is referred to as auto scaling, where the cloud platform adjusts capacity to match demand. Rather than adding to a single machine as in scaling up, scaling out duplicates a current setup and breaks it into separate resources.
Instead of changing the capacity of your existing server, you decrease its load through additional, duplicate servers. More resources might appear more complex for your business, but scaling out pays off in the long run, especially for larger enterprises. Instead of worrying about upgrading hardware as with vertical scaling, horizontal scaling provides a more continuous and seamless upgrading process.
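As a rough, provider-agnostic sketch, the two approaches can be contrasted in a few lines of Python. The `Server`, `scale_up`, `scale_out`, and `round_robin` names are invented for illustration; real auto scaling and load balancing are handled by cloud services, not application code.

```python
# Illustrative sketch: vertical scaling grows one server's capacity,
# while horizontal scaling spreads load across more servers behind a balancer.

class Server:
    def __init__(self, capacity):
        self.capacity = capacity  # requests it can handle per tick

def scale_up(server, extra_capacity):
    """Vertical scaling: add resources (CPU/RAM) to the same machine."""
    server.capacity += extra_capacity
    return server

def scale_out(pool, count, capacity):
    """Horizontal scaling: add more identical machines to the pool."""
    pool.extend(Server(capacity) for _ in range(count))
    return pool

def round_robin(pool, requests):
    """A toy load balancer: distribute requests evenly across the pool."""
    loads = [0] * len(pool)
    for i in range(requests):
        loads[i % len(pool)] += 1
    return loads

single = scale_up(Server(capacity=100), extra_capacity=50)
print(single.capacity)        # 150 — a bigger machine, still one point of failure

pool = scale_out([Server(100)], count=2, capacity=100)
print(round_robin(pool, 90))  # [30, 30, 30] — load shared by three servers
```

The sketch makes the trade-off visible: scaling up concentrates everything on one machine, while scaling out lets any surviving server absorb traffic if another fails.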
Which type of scaling is right for your business?
There are pros and cons to both horizontal and vertical scaling; however, horizontal scaling is currently trending due to its reliability and efficiency. Vertical scaling is simpler, while horizontal scaling may prove to optimize your business operations in the long run. Most commonly, businesses choose to scale out. Regardless of the environment a business operates in, scaling up requires downtime, which can be inefficient for a business’s operations.
There are several factors to consider when determining the scaling method that is right for you:
Flexibility: Horizontal scaling allows for flexibility because you can determine the configuration for your setup that optimizes cost and performance for your business needs. Costs are not optimized when scaling up, as you pay for the set price of the hardware.
Upgrades: With vertical scaling, hardware additions can only be upgraded to a limited extent. Horizontal scaling allows for continuous upgrades since you are not dependent on a single piece of equipment.
Redundancy: Another benefit of horizontal scaling is that there is no single point of failure, since the load is distributed across a cloud environment. If one of your servers fails, the load balancer redirects the request to a different server. Vertical scaling, on the other hand, has a single point of failure, meaning if the machine goes down, the application goes down with it. Transitioning to the cloud through horizontal scaling eliminates this problem.
Cost: While vertical scaling may come with a lower upfront cost compared to horizontal scaling, horizontal scaling optimizes cost over time.
Choosing a scaling method that meets your business needs may seem like a complicated choice, but it does not have to be. 2nd Watch is an AWS Premier Partner, a Microsoft Azure Gold Partner, and a Google Cloud Partner providing professional and managed cloud services to enterprises. Contact Us to take the next step in your cloud journey.
-Tessa Foley, Marketing
As a cloud consulting company, we see enterprise clients with a lot of data, and for most of these clients that data is siloed, without easy, universal access to the information. Client libraries are essentially islands of misfit toys.
During an internal hackathon, Nick Centola and I decided to take up the challenge of creating an enterprise-class solution that would extract, transform, and load (ETL) data from multiple sources to a data warehouse, with the capability of performing advanced forecasting, and be 100% serverless by design, which inherently keeps running costs to a minimum. We decided to keep the scope relatively simple and used the publicly available Citi Bike NYC dataset. The Citi Bike NYC dataset has monthly trip data exported as public CSV files as well as a near-real-time API, which in our experience is a pattern we often see in enterprises. The diagram below represents what we were trying to achieve.
At 2nd Watch, we love Functions-as-a-Service (FaaS) and Cloud Functions as we can create very scalable solutions, have no infrastructure to manage, and in most instances, we will not have to worry about the cost associated with the Cloud Functions.
There were two ETL jobs to write. One was to take the zipped CSV data from the public S3 trip data bucket and land it in our Google Cloud Storage bucket for an automated daily import into BigQuery. The other was to grab data from the stations’ near-real-time RESTful API endpoint and insert it into our BigQuery table.
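The CSV half of that pipeline can be sketched as a small transform step: unzip a trip export and turn each row into a record ready for a BigQuery load job. This is a minimal sketch; the column names below are assumptions based on the public Citi Bike export, not the verified schema we used, and the real function would read from S3 and write to Cloud Storage rather than work in memory.

```python
import csv
import io
import zipfile

def extract_trip_rows(zip_bytes):
    """Unzip a trip export and yield dict rows for a BigQuery load job.
    Column names are illustrative assumptions, not a verified schema."""
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as archive:
        for name in archive.namelist():
            with archive.open(name) as f:
                reader = csv.DictReader(io.TextIOWrapper(f, encoding="utf-8"))
                for row in reader:
                    yield {
                        "started_at": row["starttime"],
                        "start_station": row["start station name"],
                        "trip_seconds": int(row["tripduration"]),
                    }

# Build a tiny in-memory zip to stand in for the public S3 export.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr(
        "trips.csv",
        "tripduration,starttime,start station name\n"
        "320,2021-05-01 08:00:00,Broadway & W 58 St\n",
    )

rows = list(extract_trip_rows(buf.getvalue()))
assert rows[0]["trip_seconds"] == 320
```

A generator like this maps naturally onto a Cloud Function: the function body stays small, holds no state, and scales to zero between daily imports.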
Nick is most efficient with Python; I am most efficient with NodeJS. As both languages are acceptable production code languages for most organizations we work with, we each decided to write a function in our respective preferred language.
The data that we pulled into BigQuery was already clean. We did not need to enrich or transform the data for our purpose – this is not always the case, and cleaning and enriching data are areas where we usually spend most of our time when building similar solutions for our customers.
We wanted to enable a relatively simple forecast of bike demand at individual stations across New York City. BigQuery ML is incredibly powerful and has more than 30 built-in machine learning models. The model of choice for our use case was the ARIMA model, which takes time series data as input. I won’t go into too much detail on why ARIMA is a good fit compared to the multitude of other available models; the full form of the acronym describes why: Auto Regressive (AR) Integrated (I) Moving Average (MA).
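Defining such a model in BigQuery ML is a single SQL statement. The sketch below builds that statement in Python; the project, dataset, table, and column names are illustrative assumptions, not our actual schema, and `ARIMA_PLUS` is BigQuery ML's current name for its ARIMA-based time-series model.

```python
def build_arima_query(project, dataset):
    """Sketch of a BigQuery ML time-series model definition.
    Table/column names (trips, started_at, station_id) are assumptions."""
    return f"""
    CREATE OR REPLACE MODEL `{project}.{dataset}.station_demand_model`
    OPTIONS(
      model_type = 'ARIMA_PLUS',
      time_series_timestamp_col = 'trip_hour',
      time_series_data_col = 'trip_count',
      time_series_id_col = 'station_id'
    ) AS
    SELECT
      TIMESTAMP_TRUNC(started_at, HOUR) AS trip_hour,
      station_id,
      COUNT(*) AS trip_count
    FROM `{project}.{dataset}.trips`
    GROUP BY trip_hour, station_id
    """

query = build_arima_query("my-project", "citibike")
assert "ARIMA_PLUS" in query
```

The `time_series_id_col` option is what makes this a per-station forecast: BigQuery ML fits one time series per distinct `station_id` within a single model, rather than requiring a separate model per station.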
Bringing it all together, we created our LookML models in Looker and interacted with the data exceptionally easily. We made a couple of heat map-based visualizations of New York City to easily visualize the popular routes and stations and a station dashboard to monitor expected supply and demand over the next hour. With the bike stations API data flowing into BQ every 5 seconds, we get a close-to-real-time dashboard that we can use for the basis of alerting staff of an inadequate number of bikes at any station across NYC.
The station forecast shows the upper and lower bound forecast for each hour over the next month. We use the upper bound forecast for our predicted “number of bikes needed in the next hour” and pull in available bikes from the real-time API. If you use your imagination, you can think of other use cases where a similar prediction could be relevant: franchise restaurant ingredient forecasting, or retailer forecasting for inventory or staffing needs to service customers – the possibilities are endless.
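The alerting rule that falls out of this is simple: flag any station where the upper-bound demand forecast for the next hour exceeds the bikes the real-time API says are currently docked. The station IDs and numbers below are made up for illustration.

```python
def stations_to_restock(forecasts, available):
    """Flag stations where upper-bound forecast demand for the next hour
    exceeds the bikes currently available (per the real-time API).
    Inputs are illustrative, not the production data model."""
    return sorted(
        station
        for station, upper_bound in forecasts.items()
        if available.get(station, 0) < upper_bound
    )

forecasts = {"st-101": 12.4, "st-102": 3.1, "st-103": 8.0}
available = {"st-101": 15, "st-102": 2, "st-103": 5}
assert stations_to_restock(forecasts, available) == ["st-102", "st-103"]
```

Using the upper bound rather than the point forecast is a deliberately conservative choice: a false alarm costs a redundant rebalancing trip, while a missed shortage costs customers an empty dock.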
One of the coolest things we did, from Nick’s and my perspective, was to drive model training and forecasting straight from Looker and LookML, allowing us to essentially kick off model training every time we receive new data in BigQuery – all from the convenient interface of Looker.
As this was a quick prototyping effort, we took a few shortcuts compared to our delivery standards at 2nd Watch. First, we did not use infrastructure as code, a best practice we implement for all production-ready customer engagements. Second, we decided not to worry about data quality, which we would normally clean, enrich, and transform based on documented business requirements. Third, we did not set up telemetry that would allow us to respond to things like slow queries and broken ETL jobs or visualizations.
Is this hard?
Yes and no. For us it was not – Nick’s and my combined experience amounts to thousands of hours building and documenting data pipelines and distributed systems. If you are new to this and your data footprint includes more than a few data sources, we highly recommend that you bring in enterprise expertise to build out your pipeline. You’ll need a team with in-depth experience to help you set up LookML, as this will be the foundation for self-service within your organization. Ultimately, though, experiments like this can serve to create business intelligence and allow your organization to proactively respond to events to meet your corporate and digital transformation initiatives.
If you want to see a demo of our solution, check out our webinars below:
Aleksander Hansson, 2nd Watch Google Cloud Specialist
The COVID-19 pandemic has driven change throughout industry. For healthcare and healthcare professionals in general, attention on digital transformation and use of cloud technology has come to the forefront in a convergence of regulatory requirements, changing modes of care, data security threats, and evolving patient expectations.
Accelerating digitalization and identifying best practices for carrying data through to actionable insight has become a necessity. Many organizations have been forced to adjust their IT strategies and priorities. So, what is digital transformation in healthcare, and what will it look like? According to a 2021 Stoltenburg/CHIME survey, the top rearranged priorities among CIOs focused on digital transformation include:
- Using digital health to improve patient engagement
- Updating or modernizing EHRs for better data flow
- Strengthening cybersecurity
- Improving data analytics
Of course, healthcare is inherently data driven. “Data drives nearly every aspect of our healthcare industry. It helps identify longitudinal treatment trends, socioeconomic risks, missed payment opportunities and more. The broad scope of all of these inputs illustrates how mission critical it is to use the right information sources, tools and expertise,” says James Bohnsack, Sr. VP and Chief Strategy Officer of global information company TransUnion Healthcare.
Hospitals and health systems collect and store volumes of medical records and patient health data estimated to be increasing 48% annually. Data collected across admissions, diagnostics, treatment and discharge as well as patient-provider communications represent a rich resource to those looking to improve care and reduce cost. The opportunities for organizational improvement owing to use of this “big data” have never been better. But how many healthcare organizations have designed future-ready systems to effectively move beyond the collection, storage and simple exchanges of data to leverage all the value of that collective data in the form of actionable insights?
In the past, organizations developed reporting capabilities (operational, regulatory, quality, financial) with disparate paths and processing of data. Although we’ve seen improved data collection, processing and access to data analytics among providers, the pandemic forced healthcare providers to quickly leverage new technologies and see a brighter light. Cloud-enabled supply chain tracking and patient scheduling for vaccines provided a clear example of the value of technology in improving workflow efficiency and management.
Digital Healthcare Systems and Software Development
Moving forward, the good news is that healthcare providers and payers today have access to numerous solutions for efficient and effective data sharing, managing, mining, and integrating to guide and improve their operational and clinical decision making. As digital transformation in healthcare accelerates in 2021, cloud computing and data lakes that enable machine learning and artificial intelligence will enhance the way organizations gain value from their vast amounts of data and health information.
Data standardization through use of HL7 FHIR is helping to build secure interoperability. Transforming from disparate sources and types of data to real-time actionable intelligence and insight has the potential for increased cost efficiency.
Data heavy trends such as telehealth and increased patient interaction through mobile applications are poised to push the need for more efficient technology and data flow further. As healthcare organizations continue to advance along the digitalization journey, we’re seeing a number of consistently mentioned priorities and needs in 2021. These include:
- Increased digital transformation initiatives with emphasis on flexibility and future readiness
- Greater investment in digital technologies that are transformative
- Rising attention to compliance and security
- Identification of strategies to best manage multi-cloud environments
- Application modernization to improve patient engagement and boost productivity
- Post-pandemic revisits of business continuity plans
- Continued increases in use of telehealth
- Continued interest in leveraging Artificial Intelligence (AI) and Machine Learning (ML)
- Increased resource needs for technology initiatives
- Desire to work with expert partners, not generic suppliers of services
While “89% of healthcare leaders surveyed are accelerating their digital transformation” (MIT Technology Review Study, 2020), most provider organizations do not have the internal resources and expertise to support their initiatives. It becomes critical to success to find expert partners that can provide a premium level of service and engagement.
2nd Watch is ready for your next step in digital transformation
As a partner for AWS, Google Cloud, and Microsoft Azure, 2nd Watch is a trusted cloud advisor. With digital transformation in healthcare accelerating in 2021, we can enable you to achieve your digital transformation objectives and fuel performance improvement while reducing cloud complexity. Whether you’re embracing cloud data for the first time, strengthening compliance and security, or seeking improvements through advanced analytics, our team of data scientists can help. Contact us to discuss your current priorities and explore our full suite of advanced cloud-native capabilities.
-Tom James, Sr. Marketing Manager, Healthcare
Finding the perfect cloud platform for your business isn’t black and white. Nothing is 100% accurate or can guarantee a right fit, and no two organizations are the same. However, there are practical ways to think about the structure as your enterprise evolves. Introducing a hybrid cloud solution into your overall computing environment offers enterprises a number of benefits, from innovation and enablement to cybersecurity and applications.
Choice and Flexibility
Different departments and employees are going to view cloud platforms through the perspective of their responsibilities, tasks, and goals. This typically results in a variety of input as to which type of cloud infrastructure is best. For example, the marketing team might be drawn to Salesforce because of its 360-degree customer view. Some techs might favor Azure for consistency and mobility between on-prem and public cloud environments, while others like the resources and apps available within Amazon Web Services (AWS).
More than ever before, companies are taking advantage of the seemingly endless opportunities with a hybrid cloud strategy. And that is something to embrace. You don’t want to get stuck on a single cloud vendor and miss out on the competitive drive of the market. Competition moves technology forward with new applications, customer-based cost structure, service delivery, and so on. With a hybrid approach, you can take advantage of those innovations to build the best system for your business.
Since the digital transformation fast-tracked and remote work became the ‘new norm,’ bad actors have been having a field day. Ransomware attacks continue to spike, and human error remains the number one cause of data loss. Hybrid cloud environments offer enterprises the backup and recovery tools necessary to keep business moving.
If you’re using the cloud for the bulk of your operations, you can backup and restore from an on-premises environment. If you’re focusing on-premises, you can use the cloud as your backup and restore. With both systems able to work interchangeably as a hybrid cloud architecture, you get an ideal model for data protection and disaster recovery.
Technology requires enterprises to always look ahead in order to remain competitive. Data science, AI, and machine learning are the latest developments for business enablement using data-based decision making. Key to implementing AI is both having the capacity necessary to collect incoming and historical data, as well as the tools to make it operational. AWS provides a huge amount of storage, while Google Cloud Platform (GCP) maximizes data with a variety of services and AI access.
A hybrid infrastructure lets you leverage the best resources and innovation available in the dynamic cloud marketplace. You’re better equipped to meet targeted AI functionalities and goals with more opportunities. Aware of the benefits and customer preference for hybrid environments, cloud providers are making it easier to ingest data from platform to platform. While interoperability can induce analysis paralysis, the hybrid environment removes a lot of the risks of a single cloud environment. If something doesn’t work as expected, you can easily consume data in a different cloud, using different services and tools. With hybrid cloud, it’s OK to use 100 applications and 100 different cloud-based sources to achieve your desired functionality.
A service-oriented architecture (SOA) calls on enterprises to build IT granularly and responsively. According to the SOA manifesto, a set of guiding principles similar to the agile manifesto, IT should not be a monolith. Instead, let business needs be the focus and stay close to those as you evolve. SOA is really the foundation of a hybrid cloud environment that allows you to ebb and flow as necessary. It’s common to get distracted by shiny new features – especially in a hybrid cloud environment – but the business needs to drive strategy, direction, and implementation. If you stay focused, you can both leverage hybrid cloud opportunities, and follow SOA to accomplish enterprise goals.
Next Step Toward a Hybrid Cloud Infrastructure Environment
If you agree with the title of this article, then it’s time to see what a hybrid cloud could look like in your enterprise. 2nd Watch is an AWS Premier Partner, a Microsoft Azure Gold Partner, and a Google Cloud Partner with 10 years of experience in cloud. Our experts and industry veterans are here to help you build your environment for lasting success.
Contact Us to discuss picking your public cloud provider, or providers; utilizing on-prem resources; ensuring financial transparency and efficiency; and to get impartial advice on how best to approach your cloud modernization strategy.
Multi-cloud strategies suggest that enterprises run their applications and workloads in whatever cloud environment makes the most sense from a cost, performance, and functionality perspective. That’s the theory anyway. In practice, however, a multi-cloud environment requires a fair amount of tooling. Many enterprises grapple with integrating technologies created by competing suppliers in the hopes of achieving the elusive single pane of glass.
Regardless of the challenges, a multi-cloud strategy has inherent benefits. Understanding problems upfront, and mitigating the consequences, is step one to realizing those benefits.
Problem: Cloud cost sprawl
Typically, the first migration in a company is initiated as a cost saving measure. Maybe you’re moving from CAPEX to OPEX, or joining a monthly subscription plan, so there’s a clear strategy and goal. As you move deeper into the cloud, developers see the potential of new applications, and opportunities for innovation. The shiny new tools and enhanced flexibility of the cloud can lead to unexpected and expensive surprises.
All too often, an organization moves to a monthly subscription model and slowly but surely, everybody increases and expands their use of services. Next thing you know, you’re getting huge bills from your cloud provider that nearly equal or exceed the cost of buying equipment. Cloud cost sprawl is the expensive result of unrestricted and unregulated use of cloud resources. It’s so rampant that an estimated 35% of enterprise cloud spend is wasted via cloud cost sprawl.
Solution: Cloud cost optimization
There’s more than one way to wrangle cloud use, achieve your goals, and maintain your budget while cutting out waste. Cloud cost optimization is a complex organizational process that runs parallel to cloud migration and cloud-based service use.
With simplified cloud billing, an understanding of how cloud cost sprawl happens, why cost optimization is important, and an ongoing optimization effort – large enterprises can save up to 70%. Through a combination of software, services and strategy, cost optimization helps businesses immediately achieve significant cost savings.
Learn more in the links below and schedule your free, 2-hour, Cloud Cost Optimization Discovery Session with 2nd Watch cloud experts to discover how best to get started.
Problem: Environment integration
With various types of environments coming together in multi-cloud, it can be hard to integrate, interoperate, and move data across the infrastructure for performance and use. Each environment has its own managing and monitoring systems that require certain expertise.
Infrastructure as a service (IaaS), including cloud providers like AWS, GCP, and Azure, is one layer of the environment – and higher level services, or platform as a service (PaaS), is another layer. Platforms like Salesforce and NetSuite offer additional tools to build within specific domains, but the challenge is bringing everything together.
Solution: Expertise and tools
If you don’t have the in-house knowledge for a multi-cloud environment, outsourcing cloud management to an expert who also provides guidance and direction for cloud growth is a cost-effective solution. Regardless of who is in charge of integration, cloud providers offer tools and services to help monitor and manage the infrastructure as a whole. Recently, services have been introduced to directly address integration with other environments in a multi-cloud infrastructure.
For example, Google’s latest service, BigQuery Omni, lets you connect, combine, and query data from outside GCP without having to learn a new language. AWS is taking a more multi-cloud approach with ECS and EKS Anywhere. There’s also Anthos on GCP, and Arc on Azure – all services that allow organizations to run containers in the environments of their choosing.
See how cloud providers are embracing the movement toward multi-cloud to make multi-cloud integration easier.
Managed Cloud Services
Accelerate the delivery of innovative solutions and gain competitive advantage, without impacting current operations, using 2nd Watch Managed Cloud Services. Your dedicated cloud experts provide a simple, flexible set of Day 2 operational services to support cost optimization, management, monitoring, and integration within a multi-cloud infrastructure. Contact Us to make multi-cloud a success at your organization.
At the end of summer 2020, 2nd Watch surveyed over 100 cloud-focused IT directors about their cloud use. Now in 2021, we’re looking back at the 2020 Enterprise Cloud Trends Report to highlight six situations to anticipate going forward. As you would expect, COVID-19 vaccination availability, loosening of restrictions, and personal comfort levels continue to be an influential focus of cloud growth, and a significant factor in the acceleration of digital transformation.
1. Remote work adoption
The forced experiment of work from home, work from anywhere, and remote work in general has proven effective for many organizations. Employees are happy with the flexibility, and many businesses are enjoying increased productivity, a larger talent pool, and significant cost savings. While just about half (46%) of survey respondents said more than 50% of their employees were working remotely in summer 2020, that number is expected to grow by 14%. Rather than pulling back on remote work enablement, 60% of companies say almost 60% of employees will work away from the office.
2. Remote work challenges
It’s anticipated that as the number of remote workers grows, so will the challenges of managing the distributed work environment. Remote access, specifically into a corporate system, is the highest-ranked challenge by survey respondents. Other issues include the capacity of conferencing and collaboration tools, and user competence. The complexities of both managing remote workers and being a remote worker continue to evolve.
Business and IT leaders are constantly having to revisit the cloud infrastructure established in 2020, to provide flexibility, access, and business continuity during 2021 and beyond.
3. Cloud services and business collaboration
The cloud services market is maintaining its COVID-19 growth spurt, but the relationship between provider and client is shifting. The digital transformation and increasing reliance on cloud-based services is creating a new level of engagement and desire for collaboration from businesses. Organizations want to work alongside cloud providers and service providers so they can upskill along the way, and for the future. Businesses are using providers to build their cloud foundation and establish some pipelines – particularly around data migration – and in effect, learning on the job. As the business gets more involved in each project, and continues to build skills and evolve its DevOps culture, it can ultimately reduce its dependence, and associated costs, on cloud partners.
4. Growing cloud budgets
Surviving and thriving organizations have positioned, and continue to position, themselves for the long haul. Just over 64% of survey respondents said their cloud budgets had either remained the same or increased. And almost 60% say their cloud budget will grow over the next 12 months.
Many are utilizing this time to gain competitive advantage by improving mobile app development, customer experience, and operations. The expectation of a payback period has businesses focused on boosting ROI using cloud-based services. 2020 forced business leaders to re-adjust how they see IT within their organization. It’s no longer a cost center, but something that can propel and enable the company forward.
5. Cloud security and data governance
As everyone moved out of the office in 2020, hackers took notice. Since then, ransomware attacks have been steadily increasing, and there are no signs of slowing down. The majority of survey respondents (75%) agreed that cloud security and data governance are their number one concern and priority.
High profile breaches and remote work risks are grabbing headlines, causing organizations to question their security posture, and the tools necessary to prevent incidents. The role of proactive AI in cloud security is enabling faster response times and higher visibility into recovery and prevention. While tools are getting better, threats are also getting bigger.
Overall, the majority of respondents are leaning in to today’s circumstances and have a positive perspective on the future. With many organizations responding by accelerating cloud use to support the current environment, the most successful are also thinking ahead. Increasing cloud budgets, fostering external cloud collaboration for skill growth, and relying on the cloud to support remote employees showcases how the business landscape has changed. Data is more critical than ever as organizations accelerate toward the digital transformation – and the future looks bright.
Stay tuned for the 2021 Enterprise Cloud Trends Report for a focus on IT departments, the security improvements that have been made, and how organizations are continuing to use and support remote environments. Until then, if you’re moving applications to the cloud, modernizing your licensed database systems, or optimizing cloud use, let 2nd Watch help! With premier services across cloud platforms, and as a trusted cloud advisor, we embrace your unique journey. Contact Us to take the next step.
Nicole Maus – Director of Marketing