AWS Media2Cloud: Efficient Digital Archive Transformation and Migration

In our previous post, we introduced the concept of Digital Archive Transformation and Migration and its significance in managing digital assets. In this follow-up post, we’re taking a technical deep dive into Media2Cloud on AWS, which plays a crucial role in streamlining Old Tape Library (OTL) migrations by leveraging artificial intelligence. Keep reading to understand the four key technical components of Media2Cloud on AWS, see seven valuable reasons for migrating your media archive to AWS, and discover five ways that 2nd Watch can help make it a success.

What is Media2Cloud on AWS?

Media2Cloud on AWS is a serverless solution built on the AWS platform that automates digital media content ingestion, analysis, and organization within the media supply chain. By leveraging AWS services like AWS Step Functions, AWS Lambda, Amazon S3, Amazon Rekognition, Amazon Transcribe, and Amazon Comprehend, Media2Cloud enables efficient cloud migration and management of digital assets.

4 Key Technical Components of Media2Cloud on AWS

  1. Ingestion Workflow: The ingestion workflow is initiated by uploading media files to an Amazon Simple Storage Service (S3) bucket. AWS Lambda triggers the processing pipeline, which includes Amazon Rekognition and Amazon Transcribe for metadata extraction using machine learning. AWS Step Functions coordinates the overall workflow, ensuring the extracted metadata is indexed in Amazon OpenSearch Service for easy search and retrieval (see the sketch following this list).
  2. Analysis Workflow: The analysis workflow is powered by AWS Lambda and Amazon AI services. Some examples include using Amazon Rekognition to verify identity with facial analysis, applying Amazon Transcribe to convert speech to text, and implementing Amazon Comprehend, a natural-language processing (NLP) service for sentiment analysis. The metadata is indexed in Amazon OpenSearch Service, allowing you to build custom applications for advanced media search and retrieval.
  3. Media Asset Management: Media2Cloud on AWS relies on Amazon S3 for storing media assets and their associated metadata. This provides a scalable, durable, and cost-effective storage solution, ensuring your digital assets are always accessible and protected as part of an effective media management system.
  4. AWS CloudFormation: Media2Cloud on AWS utilizes AWS CloudFormation to automate the deployment of the solution and its components. This makes deploying and configuring the solution easy, enabling you to focus on migrating and managing your digital assets.
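
To make the ingestion trigger in component 1 concrete, here is a minimal, hypothetical Terraform sketch. Media2Cloud itself ships as CloudFormation, so this illustrates the pattern rather than the solution’s actual template: an S3 upload event invokes a Lambda function that would start the ingest state machine. All names are placeholders:

variable "lambda_role_arn" {
  type        = string
  description = "Execution role for the trigger function (needs states:StartExecution)"
}

resource "aws_s3_bucket" "ingest" {
  bucket = "example-media-ingest" # hypothetical bucket name
}

# Allow S3 to invoke the trigger function
resource "aws_lambda_permission" "allow_s3" {
  statement_id  = "AllowS3Invoke"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.start_ingest.function_name
  principal     = "s3.amazonaws.com"
  source_arn    = aws_s3_bucket.ingest.arn
}

# Fire the function whenever a new media file lands in the bucket
resource "aws_s3_bucket_notification" "ingest" {
  bucket = aws_s3_bucket.ingest.id

  lambda_function {
    lambda_function_arn = aws_lambda_function.start_ingest.arn
    events              = ["s3:ObjectCreated:*"]
    filter_suffix       = ".mp4"
  }

  depends_on = [aws_lambda_permission.allow_s3]
}

# The handler code (not shown) would call states:StartExecution to kick off
# the Step Functions ingest workflow for the uploaded object.
resource "aws_lambda_function" "start_ingest" {
  function_name = "start-media-ingest" # hypothetical
  role          = var.lambda_role_arn
  runtime       = "python3.12"
  handler       = "index.handler"
  filename      = "start_ingest.zip" # hypothetical deployment package
}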

7 Reasons to Migrate Your Media Archive to AWS

  1. Enhanced Data Protection: Storing valuable media assets on hard drives or old tapes poses significant risks, such as data loss due to hardware failure, damage, or obsolescence. By migrating to Media2Cloud on AWS, you safeguard your assets in a dependable and secure infrastructure with built-in redundancies, encryption, and access control mechanisms.
  2. Improved Accessibility and Collaboration: Migrating your media archive to AWS allows you to centralize your assets, making them easily accessible to teams across the globe. Without the limitations imposed by physical storage, you enhance collaboration, ensuring that the right content is available for the right people at the right time.
  3. Scalability and Flexibility: Media2Cloud on AWS offers virtually unlimited storage, allowing your media library to grow without constraints. Moreover, its flexible infrastructure accommodates changing business needs and technology advancements, so assets remain relevant and accessible long-term.
  4. Cost Efficiency: By migrating to Media2Cloud on AWS, you eliminate the overhead costs associated with maintaining physical storage infrastructure. With the pay-as-you-go model, you only pay for the storage and services you use, delivering significant cost savings.
  5. Advanced AI-driven Workflows: Media2Cloud on AWS integrates with powerful AI-driven AWS services, such as Amazon Rekognition, Amazon Transcribe, and Amazon Comprehend. These services enable advanced workflows for image recognition and video analysis, speech-to-text conversion, and sentiment extraction, thereby unlocking new levels of value and insights from your media assets.
  6. Revenue Generation through Monetization of Assets: By enabling your team to search and find relevant assets easily, you maximize the utilization of your media library and monetize more of your content. This can include licensing, repurposing, or distributing previously underutilized assets, ultimately contributing to your organization’s bottom line.
  7. AWS Migration Acceleration Program (MAP): AWS’s MAP is designed to help organizations offset the costs associated with cloud migration. By participating in the program, you gain access to AWS credits, specialized training, and expert support to accelerate the migration process. This makes moving to Media2Cloud on AWS even more cost-effective and feasible for organizations of all sizes.

5 Ways 2nd Watch Can Help

As an experienced AWS Premier Partner, 2nd Watch can manage the entire Digital Archive Transformation and Migration process for you. Our team of experts handles all the heavy lifting, including leveraging specialized equipment to transfer your media assets from tapes or hard drives to the AWS Cloud. Our end-to-end migration services include the following:

  1. Assessment and Strategy Development: We analyze your existing media archive and develop a comprehensive migration strategy tailored to your organization’s needs and goals.
  2. AWS Landing Zone Deployment: Managing the intricacies of AWS can be daunting. To adhere to best practices, we deploy an architectural blueprint called the AWS Landing Zone. This framework creates a secure, multi-account AWS environment that lays a robust foundation for media asset migration. As part of the Landing Zone, we can deploy a tailored version of the Media2Cloud on AWS solution with additional features to meet your specific requirements.
  3. Physical-to-Digital Migration: Our team handles the physical-to-digital conversion of your media assets using specialized equipment and best practices for a smooth and efficient transfer.
  4. Cloud Migration and Integration: Once your assets are digitized, we migrate them to Media2Cloud on AWS and integrate them with your existing systems and workflows.
  5. Optimization and Ongoing Support: After the migration is complete, we help optimize media workflows and provide ongoing support to ensure media assets remain secure, accessible, and valuable.

Digital Archive Transformation and Migration is a complex and technical process, but the benefits of adopting a solution like Media2Cloud on AWS are worth the effort. Leveraging the expertise of a trusted cloud partner like 2nd Watch guarantees a smooth transition to AWS, so your organization can fully harness the power of your digital assets. Contact us today to learn more about how we can help you embrace the future of media management.

Streamlining AWS Cloud Spend for Innovation Investments

Cloud Spend 101: What is it, and why does it matter?

Cloud spend is the amount of money an organization spends in AWS and across all cloud platforms. A common belief is that moving to the cloud will significantly decrease your total cost of ownership (TCO) quickly, easily, and almost by default. Unfortunately, reaping the infrastructure cost savings of AWS is not that simple, but it is certainly attainable. To achieve a lower TCO while simultaneously boosting productivity and gaining operational resilience, business agility, and sustainability, you must strategize your migration and growth within AWS.

The most common mistake made when migrating from on-prem environments to the cloud is going “like-for-like,” creating a cookie-cutter image of what existed on-prem in the new cloud environment. Because on-prem and cloud are two completely different types of infrastructure, organizations end up significantly over-provisioned, paying for unnecessary and expensive On-Demand Instances.

Ideally, you want a well-developed game plan before migration starts to avoid losing money in the cloud. With the advice and support of a trusted cloud partner, a comprehensive strategy takes your organization from design to implementation to optimization. That puts you in the best position to stay on top of costs during every step of migration and once you’re established in AWS. Cost savings realized in the cloud can be reinvested in innovation that expands business value and grows your bottom line.

The 6 pillars of cloud spend optimization.

While it’s best to have a comprehensive strategy before migrating to the cloud, cloud spend optimization is an ongoing necessity in any cloud environment. With hundreds of thousands of different options for cloud services today, choosing the right tools is overwhelming and leaves much room for missteps. At the same time, there are also a lot of opportunities available. Regardless of where you are in your cloud journey, the six pillars of cloud spend optimization provide a framework for targeted interventions.

#1: Reserved Instances (RIs)

RIs deliver meaningful savings on Amazon EC2 costs compared to On-Demand Instance pricing (AWS cites discounts of up to 72%). RIs aren’t physical instances but a billing discount applied to the use of On-Demand Instances in your account. Pricing is based on the instance type, region, tenancy, and platform; term commitment; payment cadence; and offering class.

#2: Auto-Parking

A significant benefit of the cloud is scalability, but the flip side of that is individual control. Often, an organization’s team members forget, or are not prompted or incentivized, to terminate resources when they aren’t being used. Auto-Parking schedules and automates the spin-up/spin-down process based on hours of use to prevent paying for idle resources. This is an especially helpful tool for development and test environments.
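
As a sketch of what auto-parking can look like in practice, the following hypothetical Terraform uses EventBridge Scheduler’s universal targets to stop development instances in the evening and start them in the morning. The names, schedules, role, and instance IDs are all assumptions:

variable "scheduler_role_arn" {
  type        = string
  description = "Role EventBridge Scheduler assumes (needs ec2:StopInstances/StartInstances)"
}

variable "dev_instance_ids" {
  type = list(string) # the development instances to park
}

# Stop development instances every weekday evening
resource "aws_scheduler_schedule" "stop_dev" {
  name                         = "stop-dev-instances-nightly" # hypothetical
  schedule_expression          = "cron(0 19 ? * MON-FRI *)"
  schedule_expression_timezone = "America/Chicago"

  flexible_time_window {
    mode = "OFF"
  }

  target {
    # Universal target that calls the EC2 StopInstances API directly
    arn      = "arn:aws:scheduler:::aws-sdk:ec2:stopInstances"
    role_arn = var.scheduler_role_arn
    input    = jsonencode({ InstanceIds = var.dev_instance_ids })
  }
}

# Start them again before the workday begins
resource "aws_scheduler_schedule" "start_dev" {
  name                         = "start-dev-instances-morning" # hypothetical
  schedule_expression          = "cron(0 7 ? * MON-FRI *)"
  schedule_expression_timezone = "America/Chicago"

  flexible_time_window {
    mode = "OFF"
  }

  target {
    arn      = "arn:aws:scheduler:::aws-sdk:ec2:startInstances"
    role_arn = var.scheduler_role_arn
    input    = jsonencode({ InstanceIds = var.dev_instance_ids })
  }
}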

#3: Right-Sizing

Making sure you have exactly what you need and nothing you don’t requires an analysis of resource consumption, chargebacks, auto-parked resources, and available RIs. Using those insights, organizations can implement policies and guardrails to reduce overprovisioning by tagging resources for department-level chargebacks and properly monitoring CPU, memory, and I/O (input/output).
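
A simple starting point is flagging underutilized instances automatically. The hypothetical Terraform sketch below defines a CloudWatch alarm that fires when an instance averages under 10% CPU for 24 consecutive hours, marking it as a right-sizing candidate; the names, threshold, and SNS topic are illustrative:

variable "instance_id" {
  type = string # the instance to watch
}

variable "ops_sns_topic_arn" {
  type = string # where right-sizing candidates get reported
}

resource "aws_cloudwatch_metric_alarm" "low_cpu" {
  alarm_name          = "ec2-right-size-candidate" # hypothetical
  namespace           = "AWS/EC2"
  metric_name         = "CPUUtilization"
  dimensions          = { InstanceId = var.instance_id }
  statistic           = "Average"
  comparison_operator = "LessThanThreshold"
  threshold           = 10   # average CPU below 10%...
  period              = 3600 # ...per one-hour period...
  evaluation_periods  = 24   # ...for 24 consecutive hours
  alarm_description   = "Instance is likely oversized; review for right-sizing"
  alarm_actions       = [var.ops_sns_topic_arn]
}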

#4: Family Refresh

Instance types, VM series, and Instance Families all describe the methods cloud providers use to package instances depending on the hardware. When instance types are retired and replaced with new technology, cloud pricing changes based on compute, memory, and storage parameters – this process is referred to as “Family Refresh.” Organizations must closely monitor instances and expected costs to manage these price fluctuations and prevent redundancies.

#5: Waste

Inherent in optimization is waste reduction. You need the checks and balances we’ve discussed to prevent unnecessary costs and reap the financial benefits of a cloud environment. Identifying waste and stopping the leaks takes time and regular, accurate reporting within each business unit. For example, when developers are testing, make sure they’re only spinning up new environments for a specific purpose. Once those environments are no longer used, they should be decommissioned to avoid waste.

#6: Storage

Storage is a catalyst for many organizations’ move to the cloud because it’s a valuable way to reduce on-prem hardware spend. Again, to realize those savings, businesses must keep a watchful eye on what is being stored, why it’s being stored, and how much it will cost. There are typically four components impacting storage costs:

  1. Size – How much storage do you need?
  2. Data transfer (bandwidth) – How often does data move from one location to another?
  3. Retrieval time – How quickly do you need to access the data?
  4. Retrieval requests – How often do you need access to the data?

Depending on your answers to these questions, there are different ways to manage your environment using file storage, databases, data backup, and data archives. With a solid data lifecycle policy, organizations can estimate storage costs while right-sizing storage capacity and bandwidth.
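
A data lifecycle policy encodes the answers to those questions directly. The hypothetical Terraform sketch below tiers objects to cheaper storage classes as access frequency drops and expires them at the end of their useful life; the bucket name and day counts are illustrative:

resource "aws_s3_bucket" "media" {
  bucket = "example-media-archive" # hypothetical bucket name
}

resource "aws_s3_bucket_lifecycle_configuration" "archive" {
  bucket = aws_s3_bucket.media.id

  rule {
    id     = "tier-and-expire"
    status = "Enabled"

    filter {} # apply to every object in the bucket

    transition {
      days          = 30
      storage_class = "STANDARD_IA" # infrequently accessed after a month
    }

    transition {
      days          = 90
      storage_class = "GLACIER" # rarely retrieved after a quarter
    }

    expiration {
      days = 365 # delete once the data has outlived its usefulness
    }
  }
}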

Private Pricing Agreements

Another way to control your AWS spend is with a PPA, or Private Pricing Agreement – formerly known as an EDP, or Enterprise Discount Program. A PPA is a business-led pricing agreement with AWS based on a specific term and commit amount. Organizations that are already in the cloud and love the service can use their expected growth over the next three or five years to get a discount for committing to that amount of time with AWS. In addition to expected compute services, reservations for reserved instances, and existing savings plans, the business can also include software purchases from the marketplace in the agreement to get further discounts.

Choosing a cloud optimization partner.

It’s easy to know what to do to control spend, but it’s a whole other beast to integrate cloud optimization into business initiatives and the culture of both IT teams and finance teams. Of course, you can go it alone if you have the internal cloud expertise required for optimization, but most businesses partner with an external cloud expert to avoid the expenses, risk, and time needed to see results. Attempting these strategies without an experienced partner can cost you more in the long run without achieving the ROI you expected.

In fact, when going it alone, businesses gain about 18% savings on average. While that may sound satisfying, companies that partner with the cloud experts at 2nd Watch average 40% savings on their compute expenses alone. How? We aim high, and so should you. Regardless of how you or your cloud optimization partner tackles cloud spend, target 90% or greater coverage in reserved instances and savings plans. In addition to the six pillars of optimization and PPAs, you or your partner also need to…

  • Know how to pick the right services and products for your business from the hundreds of thousands of options available.
  • Develop a comprehensive cloud strategy that goes beyond just optimizing cost.
  • Assess the overall infrastructure footprint to determine the effectiveness of serverless or containerization for higher efficiency.
  • Evaluate applications running on EC2 instances to identify opportunities for application modernization.

Take the next step in your cloud journey.

2nd Watch is a great choice for cloud spend optimization with AWS because we specialize in this area. With our extensive experience and in-depth knowledge of AWS services and pricing models, we can help you maximize your AWS investments. Our comprehensive solutions include cost analysis, budgeting, forecasting, and ongoing monitoring. We have a proven track record of delivering significant cost savings for our clients across various industries.

We leverage automation and advanced tools to identify cost-saving opportunities, eliminate waste, and optimize your AWS resources. This ensures efficiency and allows you to focus on innovation and growth. We provide continuous optimization and support, proactively identifying potential cost-saving measures and recommending adjustments based on your changing business needs.

With us, you’ll gain transparency into your AWS spend through detailed reports and analytics. This visibility empowers you to make informed decisions and manage your budgets effectively. Choose 2nd Watch for cloud spend optimization with AWS and experience the expertise, solutions, and track record that will help you achieve cost savings while driving innovation and growth.

2nd Watch saves organizations hundreds of thousands of dollars in the cloud every year, and we’d love to help you reallocate your cloud spend toward business innovation. Our experienced cloud experts work with your team to teach cloud optimization strategies that can be carried out independently in the future. As an AWS Premier Partner with 10 years of experience, 2nd Watch advisors know how to maximize your environment within budget so you can grow your business. Contact Us to learn more and get started!

Delivering Live Broadcasts and Playout with AWS

Live broadcasts and playout are critical capabilities for media companies looking to satisfy their audiences. More customers are cutting the cord and watching live channels on the web, mobile devices, tablets, and smart TVs. As a result, media companies are under pressure to bring additional channels to market and scale up their delivery capabilities.

Amazon Web Services (AWS) offers a solution for media companies to deliver live broadcasts and playout. AWS provides various services to help media companies design, build, and manage a scalable, reliable, cost-effective live broadcast and playout solution. These services include AWS Elemental MediaLive, a real-time video encoding service; Amazon CloudFront, a content delivery network (CDN); and AWS Elemental MediaConnect, a transport service for live video streams. In addition, partnerships with leading media and entertainment technology companies – such as Amagi, BCNEXXT, Grass Valley, and Imagine Communications – can provide expertise and support in implementing and managing a live broadcast and playout solution on AWS.

Here is a simplified sketch of a Terraform template that creates a MediaLive channel and serves its HLS output through CloudFront. Resource names and settings are illustrative placeholders; because SRT ingest is typically brought in through AWS Elemental MediaConnect, the sketch uses a generic push input that you would swap for your SRT source:
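
variable "medialive_role_arn" {
  type        = string
  description = "IAM role MediaLive assumes to write output to S3"
}

resource "aws_s3_bucket" "hls" {
  bucket = "example-live-hls-output" # placeholder name
}

# Push inputs require a security group whitelisting the contribution source
resource "aws_medialive_input_security_group" "live" {
  whitelist_rules {
    cidr = "203.0.113.0/24" # placeholder: your encoder's address range
  }
}

# Placeholder push input; swap in your SRT/MediaConnect source here
resource "aws_medialive_input" "live" {
  name                  = "example-live-input"
  type                  = "RTP_PUSH"
  input_security_groups = [aws_medialive_input_security_group.live.id]
}

resource "aws_medialive_channel" "live" {
  name          = "example-live-channel"
  channel_class = "SINGLE_PIPELINE"
  role_arn      = var.medialive_role_arn

  input_specification {
    codec            = "AVC"
    input_resolution = "HD"
    maximum_bitrate  = "MAX_20_MBPS"
  }

  input_attachments {
    input_attachment_name = "live-input"
    input_id              = aws_medialive_input.live.id
  }

  destinations {
    id = "hls-destination"
    settings {
      url = "s3://${aws_s3_bucket.hls.id}/live/index"
    }
  }

  encoder_settings {
    timecode_config { source = "EMBEDDED" }
    video_descriptions { name = "video-hd" } # encoder details trimmed for brevity
    audio_descriptions {
      name                = "audio-stereo"
      audio_selector_name = "default"
    }

    output_groups {
      output_group_settings {
        hls_group_settings {
          destination {
            destination_ref_id = "hls-destination"
          }
        }
      }

      outputs {
        output_name             = "hls-output"
        video_description_name  = "video-hd"
        audio_description_names = ["audio-stereo"]

        output_settings {
          hls_output_settings {
            name_modifier = "_hls"
            hls_settings {
              standard_hls_settings {
                m3u8_settings {}
              }
            }
          }
        }
      }
    }
  }
}

# Serve the HLS output to viewers through CloudFront
resource "aws_cloudfront_distribution" "hls" {
  enabled = true

  origin {
    domain_name = aws_s3_bucket.hls.bucket_regional_domain_name
    origin_id   = "hls-s3-origin"
  }

  default_cache_behavior {
    allowed_methods        = ["GET", "HEAD"]
    cached_methods         = ["GET", "HEAD"]
    target_origin_id       = "hls-s3-origin"
    viewer_protocol_policy = "redirect-to-https"

    forwarded_values {
      query_string = false
      cookies { forward = "none" }
    }
  }

  restrictions {
    geo_restriction { restriction_type = "none" }
  }

  viewer_certificate { cloudfront_default_certificate = true }
}
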
This template outlines a MediaLive channel with a live input and an HLS output. The HLS output is written to an S3 bucket, and a CloudFront distribution is created to serve the HLS output to viewers.

This is only an example; you will need to customize it for your specific use case. For instance, you must configure the SRT input settings, such as the IP address and port of the SRT source. You must also specify the details of the S3 bucket and CloudFront distribution.

2nd Watch is an experienced AWS partner with the knowledge and resources to support media companies in designing, building, and managing an effective live broadcast and playout solution on AWS. Our team is familiar with a range of AWS services, including AWS Elemental MediaLive, Amazon CloudFront, and AWS Elemental MediaConnect, and maintains partnerships with leading media and entertainment technology companies. Contact us today to learn more about our consulting services for live broadcast and playout on AWS.

Aleksander Hansson | Specialist Solutions Architect | 2nd Watch


Snowflake vs Amazon Redshift: What Is the Difference Between Snowflake and Amazon Redshift?

The modern business world is data-centric. As more businesses turn to cloud computing, they must evaluate and choose the right data warehouse to support their digital modernization efforts and business outcomes. Data warehouses can increase the bottom line, improve analytics, enhance the customer experience, and optimize decision-making. 

A data warehouse is a large repository of data businesses utilize for deep analytical insights and business intelligence. This data is collected from multiple data sources. A high-performing data warehouse can collect data from different operational databases and apply a uniform format for better analysis and quicker insights.

Two of the most popular data warehouse solutions are Snowflake and Amazon Web Services (AWS) Redshift. Let’s look at how these two data warehouses stack up against one another. 

What is Snowflake?

Snowflake is a cloud-based data warehousing solution that runs on third-party cloud-compute resources, such as Azure, Google Cloud Platform, or Amazon Web Services (AWS). It is designed to provide users with a fully managed, cloud-native database solution that can scale up or down as needed for different workloads. Snowflake separates compute from storage: a non-traditional approach to data warehousing. With this method, data remains in a central repository while compute instances are managed, sized, and scaled independently.

Snowflake is a good choice for companies that are conscious of their operational overhead and need to quickly deploy applications into production without worrying about managing hardware or software. It is also the ideal platform to use when query loads are lighter and the workload requires frequent scaling.

The benefits of Snowflake include:

  • Easy integration with most components of data ecosystems
  • Minimal operational overhead: companies are not responsible for installing, configuring, or managing the underlying warehouse platform
  • Simple setup and use
  • Abstracted configuration for storage and compute instances
  • Robust and intuitive SQL interface

What is Amazon Redshift?

Amazon Redshift is an enterprise data warehouse built on Amazon Web Services (AWS). It provides organizations with a scalable, secure, and cost-effective way to store and analyze large amounts of data in the cloud. Its cloud-based compute nodes enable businesses to perform large-scale data analysis and storage. 

Amazon Redshift is ideal for enterprises that require quick query outputs on large data sets. Additionally, Redshift has several options for efficiently managing its clusters using AWS CLI/Amazon Redshift Console, Amazon Redshift Query API, and AWS Software Development Kit. Redshift is a great solution for companies already using AWS services and running applications with a high query load. 

The benefits of Amazon Redshift include:

  • Seamless integration with the AWS ecosystem
  • Multiple data output formatting support
  • Easy console to extract analytics and run queries
  • Customizable data and security models

Comparing Data Warehouse Solutions

Snowflake and Amazon Redshift both offer impressive performance capabilities, like scalability across multiple servers and high availability with minimal downtime. There are some differences between the two that will determine which one is the best fit for your business.

Performance

Both data warehouse solutions harness massively parallel processing (MPP) and columnar storage, which enables advanced analytics and efficiency on massive jobs. Snowflake boasts a unique architecture that supports structured and semi-structured data. Storage, computing, and cloud services are abstracted to optimize independent performance. Redshift recently unveiled concurrency scaling coupled with machine learning to compete with Snowflake’s concurrency scaling. 

Maintenance

Snowflake is a pure SaaS platform that doesn’t require any maintenance. All software and hardware maintenance is handled by Snowflake. Amazon Redshift’s clusters require manual maintenance from the user.

Data and Security Customization

Snowflake supports fewer customization choices in data and security. Snowflake’s security utilizes always-on encryption enforcing strict security checks. Redshift supports data flexibility via partitioning and distribution. Additionally, Redshift allows you to tailor its end-to-end encryption and set up your own identity management system to manage user authentication and authorization.

Pricing

Both platforms offer on-demand pricing but are packaged differently. Snowflake doesn’t bundle usage and storage in its pricing structure and treats them as separate entities. Redshift bundles the two in its pricing. Snowflake tiers its pricing based on what features you need. Your company can select a tier that best fits your feature needs. Redshift rewards businesses with discounts when they commit to longer-term contracts. 

Which data warehouse is best for my business?

To determine the best fit for your business, ask yourself the following questions in these specific areas:

  • Do I want to bundle my features? Snowflake splits compute and storage, and its tiered pricing provides more flexibility to your business to purchase only the features you require. Redshift bundles compute and storage to unlock the immediate potential to scale for enterprise data warehouses. 
  • Do I want a customizable security model? Snowflake grants security and compliance options geared toward each tier, so your company’s level of protection is relevant to your data strategy. Redshift provides fully customizable encryption solutions, so you can build a highly tailored security model. 
  • Do I need JSON storage? Snowflake’s JSON storage support wins over Redshift’s support. With Snowflake, you can store and query JSON with native functions. With Redshift, JSON is split into strings, making it difficult to query and work with. 
  • Do I need more automation? Snowflake automates issues like data vacuuming and compression. Redshift requires hands-on maintenance for these sorts of tasks. 

Conclusion

A data warehouse is necessary to stay competitive in the modern business world. The two major data warehouse players – Snowflake and Amazon Redshift – are both best-in-class solutions. One product is not superior to the other, so choosing the right one for your business means identifying the one best for your data strategy.

2nd Watch is an AWS Certified Partner and an Elite Snowflake Consulting Partner. We can help you choose the right data warehouse solution and support your business regardless of which data warehouse you choose.

We have been recognized by AWS as a Premier Partner since 2012, as well as an audited and approved Managed Service Provider and Data and Analytics Competency partner for our outstanding customer experiences, depth and breadth of our products and services, and our ability to scale to meet customer demand. Our engineers and architects are 100% certified on AWS, holding more than 200 AWS certifications.

Our full team of certified SnowPros has proven expertise to help businesses implement modern data solutions using Snowflake. From creating a simple proof of concept to developing an enterprise data warehouse to customized Snowflake training programs, 2nd Watch will help you to utilize Snowflake’s powerful cloud-based data warehouse for all of your data needs.

Contact 2nd Watch today to help you choose the right data warehouse for your business!


Why You Need AWS Consulting Services for Your Digital Transformation 

Organizations are looking to leverage the power of cloud computing to create new digital business models and drive operational efficiency. Amazon Web Services (AWS) is one of the leading cloud computing platforms that help companies become more agile and cost-efficient to deliver on business objectives and priorities. As companies embrace cloud adoption for more reliable, cost-effective, and secure storage solutions, investing in AWS consultants has become a crucial part of the process.

AWS cloud services provide a tremendous opportunity for business development and growth, especially when combined with a reputable cloud consultant. AWS cloud computing allows organizations to safely and securely store vast amounts of data while ingesting and processing that data to make it actionable. It also enables IT teams to increase scalability and capability quickly and efficiently, so businesses can stay ahead of their competition. 

A cloud consultant experienced in working with AWS cloud technology can ensure that cloud strategies align with organizational needs, providing tailored solutions for maximum innovation and profit potential. By utilizing cloud computing consulting services from AWS, businesses can leverage cutting-edge cloud solutions to drive competitive advantage.

Working with AWS consultants like 2nd Watch can help your organization harness the technology that AWS offers, modernizing your business operations and transforming your IT infrastructure to solve your business challenges and hit critical business goals. 

What is AWS Consulting?

AWS consulting services offer businesses a range of solutions and expertise to facilitate your organization’s success with cloud migration and in-cloud processes. These services include advising on best practices, migrating existing applications, building new applications, and optimizing overall performance. AWS consultants are highly experienced professionals who have a deep understanding of the AWS platform and provide invaluable insights into how it can be used within an organization to support digital transformation. 

The value of working with these experts is that they understand the complexities of managing AWS’s infrastructure, cloud-native solutions, database services, and developer tools. AWS consultants analyze an organization’s existing setup and recommend practices for improved efficiency, scalability, security, cost optimization, reliability, and other aspects of their cloud environment. For example, they can help to ensure that an organization’s environment is properly configured and securely managed, so data remains safe and accessible to users. They can also provide expert guidance on which additional resources may be needed for more robust performance. 

The Benefits of Working With an AWS Consultant 

The main goal of any AWS consultant is to help businesses make the most of their investments in AWS products and services. This includes helping them choose appropriate services (such as Amazon EC2, Amazon S3, or Amazon RDS), providing advice on configuring their environment correctly, ensuring their applications run efficiently in the cloud, and optimizing costs by scaling resources up or down as needed. Moreover, AWS consultants can help set up automated processes, such as deploying new versions of software or monitoring performance metrics. Most importantly, they can also offer advice on security best practices when working with sensitive information in the cloud.

By taking advantage of an experienced AWS consulting team, businesses can benefit from numerous advantages, including reduced costs, improved performance, scalability, and flexibility. With a tailored approach, organizations can positively impact their bottom line by leveraging the right combination of services and resources offered by AWS, such as compute power, storage capacity, database solutions, content delivery networks (CDNs), analytics tools, and machine learning capabilities, among others.

Working with an AWS consultant will improve performance by quickly spinning up new resources and scaling their existing infrastructure as needed without worrying about additional hardware or software investments. Additionally, AWS consulting teams will assist with migrations and optimization processes so that businesses can ensure their operations are running smoothly with minimal disruption or downtime. 

Investing in an Experienced Team 

When it comes to leveraging cloud computing platforms like AWS, organizations need to invest in an experienced team that understands both the technology as well as the business objectives of the organization. Good consultants should provide best practices while understanding the strategies that will work best for an organization’s specific needs. Organizations will increase their chances of success and ROI with the cloud when they invest in an experienced team. 

Partnering with the right experts will maximize the opportunities and services offered by AWS.

Companies can learn how to utilize AWS database services better and develop tools that meet their IT and business objectives. This can help businesses optimize their IT infrastructure while reducing capital expenditures, which will maximize profit potential. Through AWS consulting, organizations have access to cloud-based solutions that allow them to run their applications faster, scale easily and cost-effectively, increase security, provide insights into customer behavior and create innovative products. Ultimately, beyond the technical aspects, AWS consulting can help organizations achieve peak performance both operationally and financially.

Conclusion

Harnessing the power of AWS provides businesses with many benefits, including cost savings, improved performance, scalability, and flexibility. However, these benefits can only be realized if you have an experienced team guiding your transformation efforts. Investing in a qualified AWS consulting team allows organizations to take full advantage of this technology while ensuring that their operations remain secure and compliant with industry standards. Having access to experts who understand both your business objectives as well as how best to leverage this technology will give you peace of mind knowing that your transformation project is being handled correctly from start to finish.

2nd Watch employs a cloud transformation framework and methodology for every engagement, guaranteeing quality, consistency, and completeness. We start by listening to identify and strike a balance between innovation, self-sufficiency, risk, and cost. We then work with you to determine where you are in your cloud journey and assemble a tailored bundle of services to meet your IT business objectives.

We have been recognized by AWS as a Premier Partner since 2012 and as an audited and approved Managed Service Provider for our outstanding customer experiences, the depth and breadth of our products and services, and our ability to scale to meet customer demand. Our engineers and architects are 100% certified on AWS, holding over 200 AWS certifications.

Contact us today to learn how 2nd Watch takes a phased approach to modernization with AWS!


A High-Level Overview of Looker: An Excerpt from Our BI Tool Comparison Guide

Looker is one of several leading business intelligence (BI) tools that can help your organization harness the power of your data and glean impactful insights that allow you to make the best decisions for your business.

Keep reading for a high-level overview of Looker’s key features, pros and cons of Looker versus competitors, and a list of tools and technologies that easily integrate with Looker to augment your reporting.

Overview of Looker

Looker is a powerful BI tool that can help a business develop insightful visualizations. Among other benefits, users can create interactive and dynamic dashboards, schedule and automate the distribution of reports, set custom parameters to receive alerts, and utilize embedded analytics.

Why Use Looker

If you’re looking for a single source of truth, customized visuals, collaborative dashboards, and top-of-the-line customer support, Looker might be the best BI platform for you. Being fully browser-based cuts down on confusion as your team gets going, and customized pricing means you get exactly what you need.

Pros of Looker

  • Looker offers performant and scalable analytics on a near-real-time basis.
  • Because you need to define logic before creating visuals, it enforces a single-source-of-truth semantic layer.
  • Looker is completely browser-based, eliminating the need for desktop software.
  • It facilitates dashboard collaboration, allowing parallel development and publishing with out-of-the-box git integration.

Cons of Looker

  • Looker can be more expensive than competitors like Microsoft Power BI, so while adding Looker to an existing BI ecosystem can be beneficial, you will need to take costs into consideration.
  • Compared to Tableau, visuals aren’t as elegant and the platform isn’t as intuitive.
  • Coding in LookML is unavoidable, which may present a roadblock for report developers who have minimal experience with SQL.

Select Complementary Tools and Technologies for Looker

  • Any SQL database
  • Amazon Redshift
  • AWS
  • Azure
  • Fivetran
  • Google Cloud
  • Snowflake

Was this high-level overview of Looker helpful? If you’d like to learn more about Looker reporting or discuss how other leading BI tools, like Tableau and Power BI, may best fit your organization, contact us to learn more.

The content of this blog is an excerpt of our Business Intelligence Tool Comparison Guide. Click here to download a copy of the guide.


Evolving Operations to Maximize AWS Cloud Native Services

As a Practice Director of Managed Cloud Services, my team and I see well-intentioned organizations fall victim to a very common scenario: despite the business migrating from its data center to Amazon Web Services (AWS), its system operations team doesn’t make adjustments for the new environment. The team attempts to continue performing the same activities it did when the company’s physical hardware resided in a data center or at another hosting provider.

The truth is that modernizing your monolithic applications and infrastructure requires new skill sets, knowledge, expertise, and understanding to get the desired results. Unless they’re sophisticated, well-funded start-ups, most established organizations don’t know where to begin after the migration is complete. The transition from deploying legacy software in your own data center to utilizing Amazon Elastic Kubernetes Service (EKS) and microservices, with code deployed through an automated continuous integration and continuous delivery (CI/CD) pipeline, is a whole new ballgame – not to mention keeping it all functioning after it is deployed.

In this article, I’m providing some insight on how to overcome the stagnation that hits post-migration. With forethought, AWS understanding, and a reality check on your internal capabilities, organizations can thrive with cloud-native services. At the same time, kicking issues downstream, maintaining inefficiencies, and failing to address new system requirements will compromise the ROI and assumed payoffs of modernization.

Is Your Team Prepared?

Sure, going serverless with Lambda might be all the buzz right now, but it’s not something you can effectively accomplish overnight. Running workloads on cloud-native services and platforms requires a different way of operating, and these new operational demands require that your internal teams be equipped with new skill sets. Unfortunately, a team that mastered the old data center or dedicated hosting provider environment may not be able to jump right in on AWS.

The appeal of AWS is the great flexibility to drive your business and solve unique challenges. However, the ability to provision and decommission on demand also introduces new complexities. If these new challenges are not addressed early on, friction between teams can damage collaboration and adoption, the potential for system sprawl increases, and cost overruns can compromise the legitimacy and longevity of modernization.

Due to the high cost and small talent pool of technically efficient cloud professionals, many organizations struggle to nab the attention of these highly desired employees. Luckily, modern cloud-managed service providers can help you wade through the multitude of services AWS introduces. With a trusted and experienced partner by your side, businesses are able to gain the knowledge necessary to drive business efficiencies and solve unique challenges. Depending on the level of interaction, existing team members may be able to level up to better manage AWS growth going forward. In the meantime, involving a third-party cloud expert is a quick and efficient way to make sure post-migration change management evolves with your goals, design, timeline, and promised outcomes.

Are You Implementing DevOps?

Modern cloud operations and optimizations address the day-two necessities that go into the long-term management of AWS. DevOps principles and automation need to be heavily incorporated into how the AWS environment operates. With hundreds of thousands of distinct prices and technical combinations, even the most experienced IT organizations can get overwhelmed.

Consider traditional operations management versus cloud-based DevOps. One is a physical hardware deployment that requires logging into the system to perform configurations and then deploying software on top. It’s slow, tedious, and causes a lag for developers as they wait for feature delivery, which negatively impacts productivity. Instead of system administrators performing monthly security patching and having to log into each instance separately, a modern cloud operation can efficiently utilize a pipeline with infrastructure as code. Now, you can update your configuration files to use a new image and then use infrastructure automation to redeploy. This treats each instance as ephemeral, minimizing any friction or delay for the developer teams.
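
As a minimal sketch of that pattern (names and sizes are hypothetical): a launch template holds the AMI ID, and the Auto Scaling group’s instance refresh rolls instances onto the new image whenever the template changes, so patching becomes a variable bump and a redeploy:

variable "ami_id" {
  type = string # bump this to the newly patched image and re-apply
}

variable "subnet_ids" {
  type = list(string)
}

resource "aws_launch_template" "app" {
  name_prefix   = "app-"
  image_id      = var.ami_id
  instance_type = "t3.medium" # hypothetical size
}

resource "aws_autoscaling_group" "app" {
  name                = "app-asg" # hypothetical
  min_size            = 2
  max_size            = 4
  desired_capacity    = 2
  vpc_zone_identifier = var.subnet_ids

  launch_template {
    id      = aws_launch_template.app.id
    version = aws_launch_template.app.latest_version
  }

  # Rolling replacement whenever the launch template changes:
  # instances are treated as ephemeral instead of patched in place.
  instance_refresh {
    strategy = "Rolling"
    preferences {
      min_healthy_percentage = 50
    }
  }
}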

This is just one example of how DevOps can and should be used to achieve strong availability, agility, and profitability. Measuring DevOps with the CALMS model provides a guideline for addressing the five fundamental elements of DevOps: Culture, Automation, Lean, Measurement, and Sharing. Learn more about DevOps in our eBook, 7 Major Roadblocks in DevOps Adoption and How to Address Them.

Do You Continue With The Same Behavior?

Monitoring CPU, memory, and disk at the traditional thresholds used on legacy hardware is not necessarily appropriate when utilizing AWS EC2. To achieve the financial and performance benefits of the cloud, you purposely design systems and applications to use, and pay for, only the resources required. Newer cloud-native technologies, such as Kubernetes and serverless, require that you monitor in different ways to reduce the abundance of unactionable alerts that eventually become noise.

For example, when running a Kubernetes cluster, you should implement monitoring that alerts on the gap between desired and running pods. If there’s a big difference between the number of desired pods and currently running pods, this might point to resource problems where your nodes lack the capacity to launch new pods. With a modern managed cloud service provider, cloud operations engineers receive the alert and begin investigating the cause to ensure uptime and continuity for application users. With fewer unnecessary alerts and an escalation protocol for the appropriate parties, triage of the issue can be done more quickly. In many cases, remediation efforts can be automated, allowing for more efficient resource allocation.
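
As a sketch of that kind of alert, assuming the Prometheus Operator and kube-state-metrics are installed and the cluster is managed through Terraform’s kubernetes provider, a rule comparing desired and available replicas might look like this (names and the 15-minute window are illustrative):

resource "kubernetes_manifest" "replicas_mismatch_alert" {
  manifest = {
    apiVersion = "monitoring.coreos.com/v1"
    kind       = "PrometheusRule"
    metadata = {
      name      = "deployment-replicas-mismatch" # hypothetical
      namespace = "monitoring"
    }
    spec = {
      groups = [{
        name = "capacity"
        rules = [{
          alert = "DesiredPodsNotRunning"
          # Fires when a deployment has fewer available pods than desired
          expr   = "kube_deployment_spec_replicas != kube_deployment_status_replicas_available"
          for    = "15m" # ignore brief rollouts; sustained gaps suggest capacity problems
          labels = { severity = "warning" }
        }]
      }]
    }
  }
}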

How Are You Cutting Costs?

Many organizations initiate cloud migration and modernization to gain cost-efficiency. Of course, these financial benefits are only accessible when modern cloud operations are fully in place.

Because anyone can create an AWS account, but not everyone has visibility into or concern for budgetary costs, spending can quickly exceed expectations. This is where establishing a strong governance model and expanding automation can help you achieve your cost-cutting goals. You can limit instance size deployment using IAM policies to ensure larger, more expensive instances are not unnecessarily utilized. Another cost that can quickly grow without the proper controls is your S3 storage. Enabling policies to have objects expire and automatically be deleted can help to curb an explosion in storage costs. Enacting policies like these to control costs requires that your organization take the time to think through the governance approach and implement it.
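
For instance, a guardrail like the hypothetical Terraform sketch below denies launching any EC2 instance type outside an approved list; the policy name and allowed sizes are illustrative:

resource "aws_iam_policy" "limit_instance_sizes" {
  name = "limit-ec2-instance-sizes" # hypothetical
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Sid      = "DenyUnapprovedInstanceTypes"
      Effect   = "Deny"
      Action   = "ec2:RunInstances"
      Resource = "arn:aws:ec2:*:*:instance/*"
      Condition = {
        # Launch is denied unless the requested type matches the allow list
        StringNotLike = {
          "ec2:InstanceType" = ["t3.*", "m5.large", "m5.xlarge"]
        }
      }
    }]
  })
}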

Evolving in the cloud can reduce computing costs by 40-60% while increasing efficiency and performance. However, those results are not guaranteed. Download our eBook, A Holistic Approach to Cloud Cost Optimization, to ensure a cost-effective cloud experience.

How Will You Start Evolving Now?

Time is of the essence when it comes to post-migration outcomes – and the board and business leaders around you will be expecting results. As your organization looks to leverage AWS cloud-native services, your development practices will become more agile and require a more modern approach to managing the environment. To keep up with these business drivers, you need a team to serve as your foundation for evolution.

2nd Watch works alongside organizations to help start or accelerate your cloud journey to become fully cloud native on AWS. With more than 10 years of migrating, operating, and effectively managing workloads on AWS, 2nd Watch can help your operations staff evolve to a modern way of operating and deliver on your goals. Are you ready for the next step in your cloud journey? Contact us and let’s get started.

A High-Level Overview of Amazon Redshift

Modern data warehouses, like Amazon Redshift, can improve the way you access your organization’s data and dramatically improve your analytics. Paired with a BI tool, like Tableau, or a data science platform, like Dataiku, your organization can increase speed-to-insight, fuel innovation, and drive business decisions throughout your organization.

In this post, we’ll provide a high-level overview of Amazon Redshift, including a description of the tool, why you should use it, pros and cons, and complementary tools and technologies.

Overview of Amazon Redshift

Amazon’s flagship data warehouse service, built on technology originally licensed from ParAccel, is a columnar database forked from PostgreSQL. Similar to Amazon RDS databases, pricing for Amazon Redshift is based on the size of the instance and how long it’s up and running.

Value Prop:

  • Increased performance of queries and reports with sort keys and automatic table optimization
  • Easy integration with other AWS products
  • Most established data warehouse

Scalability:

  • Flexibility to pay for compute independently of storage by specifying the number of instances needed
  • With Amazon Redshift Serverless, automatic and intelligent scaling of data warehouse capacity

Performance:

  • Instances maximize speed for performance-intensive workloads that require large amounts of compute capacity.
  • Distribution and sort keys are more intuitive than traditional RDBMS indexes, allowing for more user-friendly performance tuning of queries.

Features:

  • Easy to spin up and integrate with other AWS services for a seamless cloud experience
  • Native integration with the AWS analytics ecosystem makes it easier to handle end-to-end analytics workflows with minimal issues

Security:

  • Can be set up to use SSL to secure data in transit and hardware-accelerated AES-256 encryption for data at rest

Why Use Amazon Redshift

It’s easy to spin up as an AWS customer, without needing to sign any additional contracts, which makes it ideal for predictable pricing and for getting started. Amazon Redshift Serverless automatically scales data warehouse capacity while only charging for what you use. This enables any user to run analytics without having to manage the data warehouse infrastructure.

Pros of Amazon Redshift

  • It easily spins up and integrates with other AWS services for a seamless cloud experience.
  • The distribution and sort keys are more intuitive than traditional RDBMS indexes, allowing for more user-friendly performance tuning of queries.
  • Materialized views support functionality and options not yet available in other cloud data warehouses, helping improve reporting performance.

Cons of Amazon Redshift

  • It lacks some of the modern features and data types available in other cloud-based data warehouses, such as full separation of compute and storage and automatic partitioning and distribution of data.
  • It requires traditional database administration overhead, such as vacuuming and managing distribution and sort keys, to maintain performance and data storage.
  • As data needs grow, it can be difficult to manage costs and scale.

Select Complementary Tools and Technologies for Amazon Redshift

  • AWS Glue
  • Amazon QuickSight
  • Amazon SageMaker
  • Tableau
  • Dataiku

We hope you found this high-level overview of Amazon Redshift helpful. If you’re interested in learning more about Amazon Redshift or other modern data warehouse tools like Google BigQuery, Azure Synapse, and Snowflake, contact us to learn more.

The content of this blog is an excerpt of our Modern Data Warehouse Comparison Guide. Click here to download a copy of that guide.


Comparing Modern Data Warehouse Options

To remain competitive, organizations are increasingly moving towards modern data warehouses, also known as cloud-based data warehouses or modern data platforms, instead of traditional on-premise systems. Modern data warehouses differ from traditional warehouses in the following ways:

    • There is no need to purchase physical hardware.
    • They are less complex to set up.
    • It is much easier to prototype and provide business value without having to build out the ETL processes right away.
    • There is no capital expenditure and a low operational expenditure.
    • It is quicker and less expensive to scale a modern data warehouse.
    • Modern cloud-based data warehouse architectures can typically perform complex analytical queries much faster because of how the data is stored and their use of massively parallel processing (MPP).

Modern data warehousing is a cost-effective way for companies to take advantage of the latest technology and architectures without the upfront cost to purchase, install, and configure the required hardware, software, and infrastructure.

Comparing Modern Data Warehousing Options

  • Traditional data warehouse deployed on infrastructure as a service (IaaS): Requires customers to install traditional data warehouse software on computers provided by a cloud provider (e.g., Azure, AWS, Google).
  • Platform as a service (PaaS): The cloud provider manages the hardware deployment, software installation, and software configuration. However, the customer is responsible for managing the environment, tuning queries, and optimizing the data warehouse software.
  • A true software as a service (SaaS) data warehouse: In a SaaS approach, software and hardware upgrades, security, availability, data protection, and optimization are all handled for you. The cloud provider supplies all hardware and software as part of its service and manages significant aspects of both.

With all of the above scenarios, the tasks of purchasing, deploying, and configuring the hardware to support the data warehouse environment fall on the cloud provider instead of the customer.

IaaS, PaaS, and SaaS – What Is the Best Option for My Organization?

Infrastructure as a service (IaaS) is an instant computing infrastructure, provisioned and managed over the internet. It helps you avoid the expense and complexity of buying and managing your own physical servers and other data center infrastructure. In other words, if you’re prepared to buy the engine and build the car around it, the IaaS model may be for you.

In the scenario of platform as a service (PaaS), a cloud provider merely supplies the hardware and its traditional software via the cloud; the solution is likely to resemble its original, on-premise architecture and functionality. Many vendors offer a modern data warehouse that was originally designed and deployed for on-premises environments. One such technology is Amazon Redshift. Amazon acquired rights to ParAccel, named it Redshift, and hosted it in the AWS cloud environment. Redshift is a highly successful modern data warehouse service. It is easy in AWS to instantiate a Redshift cluster, but then you need to complete all of the administrative tasks.

You have to reclaim space after rows are deleted or updated (the process of vacuuming in Redshift), manage capacity planning, provision compute and storage nodes, determine your distribution keys, and so on. All of the things you had to do with ParAccel (or with any traditional architecture), you have to do with Redshift.

Alternatively, a data warehouse built for the cloud using a true software as a service (SaaS) architecture allows the cloud provider to include all hardware and software as part of its service, as well as the work of managing them. One such technology, which requires no management and features separate compute, storage, and cloud services that can scale and change independently, is Snowflake. It differentiates itself from IaaS and PaaS cloud data warehouses because it was built from the ground up on cloud architecture.

All administrative tasks, tuning, patching, and management of the environment fall on the vendor. In contrast to the IaaS and PaaS architectures we have seen in the market today, Snowflake has a new architecture called multi-cluster, shared data that essentially makes the administrative headache of maintaining these solutions go away. However, that doesn’t mean it’s the absolute right choice for your organization – that’s where an experienced consulting partner like 2nd Watch comes in.

If you depend on your data to better serve your customers, streamline your operations, and lead (or disrupt) your industry, a modern data platform built on the cloud is a must-have for your organization. Contact us to learn what a modern data warehouse would look like for your organization.