Innovation is Key to Surviving the COVID-19 Pandemic

The COVID-19 pandemic has affected all of us in one way or another, but possibly the hardest hit are small and medium businesses (SMBs) that continue to close shop across the country. Yelp estimates that 60% of the closures due to the pandemic are now permanent.

With jobs lost, businesses shut down, and the economic hardship many now face, many consumers are understandably cautious in terms of spending. Moreover, the lockdown and its impact will continue to pose operational challenges to businesses for the next few months. Many businesses are going into 2021 facing lower consumer spending and continued restrictions on the movement of people.

This brings us to the question: what can really make a difference and help businesses survive the pandemic? The short answer: innovation!

The very entrepreneurial spirit of Americans – adapting to change, overcoming challenges in creative ways, and utilizing available technologies to reduce business disruption – has helped businesses stay afloat at a time when many are sinking fast.

Here are some tips, organized around six focus areas, to help businesses overcome the potentially debilitating effects of the pandemic and not only survive, but create an advantage that lets them surpass the competition:

  • Revolutionize the Delivery of Services
  • Leverage Remote Workforce
  • Cloud Transformation
  • Automation
  • Service Innovation
  • Upskill and Leap to the Next Level

1.      Revolutionize the Delivery of Services

It’s difficult to predict when the lockdown will end and normalcy will return to everyday business operations. That doesn’t mean you have to close shop and go home. It just means that you must find new ways of serving your clients. And a variety of establishments are already doing it. Consider these:

  • Restaurants are focusing on take-out and food ordering apps to serve their customers.
  • Schools are conducting their classes via online video apps. Even Ivy League institutions have been doing this for some time now.
  • Fitness trainers are hosting out-of-home workouts for their clients.

The list goes on. The good thing about these options is that these businesses are not inventing new technologies; they are just changing how they operate using available technologies.

2.      Leverage Remote Workforce

Since the beginning of the lockdown, you’ve likely been bombarded with news and social media updates on how other businesses are turning to a remote workforce to ensure business continuity. Perhaps you’ve given it some thought too, but are wary of the unknowns and uncertainties that come with it. So, here’s some motivation to help you take the plunge:

  • The availability of technologies like Zoom, Google Meet, and other team collaboration apps has made remote working seamless for employees in roles such as accounting, HR, IT support, and so on.
  • As employees spend more time at home with their loved ones, many report being more productive. That’s right, the remote workforce is more productive and happier. Clearly, the fears of lost productivity are proving to be unfounded.
  • Working from home reduces employees’ commuting expenses. Employees who see these savings may deliver better results for the company, driven by higher satisfaction.

3.      Cloud Transformation

Cloud transformation was already a hot trend across industries and economies worldwide before the pandemic. COVID-19 has only accelerated the adoption of the cloud. Here are some ways the cloud is helping businesses during the pandemic:

  • Improved Resilience

With everyone locked in their homes, internet usage has skyrocketed. However, the people who maintain and support digital technologies in organizations are stuck at home too. So, the digital surge combined with limited maintenance has resulted in increased outages.

Cloud systems offer superior resilience against outages – built-in data backups, recovery tools, and robust security features – that limit disruptions significantly.

  • Improved Scalability

The elastic nature of the cloud makes it possible for organizations to manage unexpected changes in consumer demand seamlessly and effortlessly. They can scale their digital resources up or down to match the consumer demand and pay only for the resources they utilize.

  • Improved Data-driven Innovation

The cloud is designed for data. It offers organizations powerful data manipulation tools they can utilize to extract highly useful insights from their consumer data. They can improve their existing products, develop new ones, or target new markets or audiences based on these insights. The potential is unlimited.

There are several other benefits that made the cloud a powerful enabler of business even pre-pandemic.

4.      Automation

Automation is enabling businesses to streamline their operations and simultaneously improve customer satisfaction. Chatbots, for instance, allow businesses to engage their audiences in natural, human-like conversations. They offer quick, accurate, and timely responses to your potential customers, and even guide them through the purchase journey.

Likewise, robotic process automation (RPA) can be used to automate a sizeable number of repetitive, low-level tasks, freeing up your precious resources to focus on what really matters.

Another more common example of automation is self-service checkouts. Solutions like chatbots and self-service checkouts offer a win-win solution for both consumers and businesses. The increased customer satisfaction, lower costs of operations, and enhanced performance all add up to fuel your revenues.

5.      Service Innovation

A bit of creativity can go a long way in attracting previously ignored customers. For instance, meal buckets like Family Specials, Friends Specials, Weekend Combos, etc. can fetch high-value, low-volume orders from both current and new customers for restaurants.

McDonald’s, for instance, has gone all out on its drive-thru and digital experience push. As part of that push, it has installed new point of sale (POS) systems that allow customers to browse the menu, select items, make payments, and place orders with zero in-store contact. This minimizes the risk of infection while demonstrating to customers that the company cares about their health.

Likewise, fitness centers can use fitness apps to track their customers’ exercise regimen daily and offer them personalized advice. Such a personalized experience can go a long way in winning the loyalty of customers.

6.      Upskill and Leap to the Next Level

For many businesses that see frenzied activity during non-pandemic times, the lockdown can be a terrific opportunity to undertake the business improvement projects they’ve been putting off due to a time crunch.

Free and premium online learning resources like Coursera, DataCamp, Codecademy, edX, FutureLearn, and several others offer basic to specialized courses on almost everything under the sun. Upskilling your staff allows you to target new customers, new markets, and new consumer needs. You may emerge from this lockdown stronger.

Conclusion

These are not normal times. Businesses cannot continue to operate as they did in normal times and expect to come out unscathed. They must take bold measures, exercise their creativity, and transform their businesses to generate new avenues of revenue. They must reinvent how they do business.

To examine more cloud-driven opportunities for your business and identify new sources of revenue, get in touch.

-Mir Ali, Field CTO

The Democratization of IT – Madden NFL for the Technology Industry

A colleague of mine postulated that the IT department would eventually go the way of the dinosaur. He put forward that as the Everything-as-a-Service model becomes the norm, IT would no longer provide meaningful value to the business. My flippant response was to point out that people have been saying mainframes are dead for decades.

This, of course, doesn’t get to the heart of the conversation. What is the future role of IT as we move towards Everything-as-a-Service? Will marketing, customer service, finance, and other departments continue to look to IT for their application deployment? Will developers and engineers move to containerization to build and release code, turning to a DevOps model where the Ops are simply a cloud provider?

We’ve already proven that consumers can adapt to very complex applications. Every day when you deploy and use an application on your phone, you are operating at a level of complexity that once required IT assistance. And yes, the development of intuitive UXs has enabled this trend; however, the same principle is at work at the enterprise level. Cloud, in many ways, has already brought this simplification forward. It has democratized IT.

So, what is the future of IT? What significant disruptions to operations processes will occur through democratization? I liken it to the evolution of eSports (Madden NFL). You don’t manage each player on the field. You choose the skill players for the team, then run the plays. The only true decision you make is which offensive play to run, or which defensive scheme to set. In IT terms, you review the field (operations), orchestrate the movement of resources, and ensure the continuity of the applications, looking for potential issues and resolving them before they become problems. This is the future of IT.

What are the implications? I believe IT evolves into a higher-order (read: more business value) function. IT enables digital transformation not from a resource perspective, but from a strategic business empowerment perspective. It gets out of the job that keeps it from being strategic – the tactical day-to-day of managing resources – and moves to enabling and implementing business strategy. However, that takes a willingness to specifically allocate how IT contributes to business value at some very granular levels. To achieve this, it might require reengineering teams, architectures, and budgets to tightly link specific IT contributions to specific business outputs. The movement to modern cloud technology supports this fundamental shift and, over time, will start to solve chronic problems of underfunding and lack of support for ongoing improvement. IT is not going the way of the dinosaur. It is becoming the fuel that enables business to grow strategically.

Want more tips on how to empower IT to contribute to growing your business strategy? Contact us.

-Michael Elliott, Sr Director of Product Marketing

5 Questions You Need to Answer to Maximize Your Data Use

Businesses have been collecting data for decades, but we’re only just starting to understand how best to apply new technologies, like machine learning and AI, for analysis. Fortunately, the cloud offers tools to maximize data use. When starting any data project, the best place to begin is by exploring common data problems to gain valuable insights that will help create a strategy for accomplishing your overall business goal.

Why do businesses need data?

The number one reason enterprise organizations need data is decision support. Business moves faster today than it ever has, and to keep up, leaders need more than a ‘gut feeling’ on which to base decisions. Data doesn’t make decisions for us; rather, it augments and informs our judgment about which path forward will yield the results we desire.

Another reason we all need data is to align strategic initiatives from the top down. When C-level leaders decide to pursue company-wide change, managers need data-based goals and incentives that run parallel with the overall objectives. For change to be successful, there need to be metrics in place to chart progress. Benchmarks, monthly or quarterly goals, department-specific stats, and so on are all used to facilitate achievement and identify intervention points.

We’ve never had more data available to us than we do today. Deciding to use your data for insights is the first step, but finding data, cleaning it, understanding why you want it, and analyzing its value and application can be intensive. Ask yourself these five questions before diving into a data project to gain clarity and avoid productivity-killing data issues.

1. Is your data relevant?

  • What kind of value are you getting from your data?
  • How will you apply the data to influence your decision?

2. Can you see your data?

  • Are you aware of all the data you have access to?
  • What data do you need that you can’t see?

3. Can you trust your data?

  • Do you feel confident making decisions based on the data you have?
  • If you’re hesitant to use your data, why do you doubt its authenticity?

4. Do you know the recency of your data?

  • When was the data collected? How does that influence relevancy?
  • Are you getting the data you need, when you need it?

5. Where is your data siloed?

  • What SaaS applications do different departments use? (For example: Workday for HR, HubSpot for marketing, Salesforce for Sales, MailChimp, Trello, Atlassian, and so on.)
  • Do you know where all of your data is being collected and stored?

Cloud to the rescue! But only with accurate data

The cloud is the most conducive environment for data analysis because of the plethora of analysis tools available there. More and more tools, like plug-and-play machine learning algorithms, are developed every day, and they are widely and easily available in the cloud.

But tools can’t do all the work for you. Tools cannot unearth the value of data on their own. It’s up to you to know why you’re doing what you’re doing. What is the business objective you’re trying to achieve? Why do you care about the data you’re seeking? What do you need to get out of it?

A clearly defined business objective is incredibly important to any cloud initiative involving data. Once that’s been identified, it’s important for that goal to serve as the guiding force behind the tools you use in the cloud. Because tools are really for developers and engineers, you want to pair them with someone engaged with the business value of the effort as well. Maybe it’s a business analyst or a project manager, but the team should include someone who is in touch with the business objective.

However, you can’t completely rely on cloud tools to solve data problems because you probably have dirty data, or data that isn’t correct or in the specified format. If your data isn’t accurate, all the tools in the world won’t help you accomplish your objectives. Dirty data interferes with analysis and creates a barrier to your data providing any value.

To cleanse your data, you need to validate the data coming in with quality checks. Typically, there are issues with dates and timestamps, spelling errors from form fields, and other human errors in data entry. Formatting date-entry fields and using calendar pickers can help users complete date information uniformly. Drop-down menus on form fields will reduce spelling errors and allow you to filter more easily. Small design changes like these can significantly improve the cleanliness of your data and your ability to maximize the impact of cloud tools.
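As a rough illustration of the kind of quality checks described above, here is a minimal Python sketch (the column names, allowed values, and cleaning rules are hypothetical) that normalizes dates, standardizes a free-text field, and routes records that fail validation back for correction:

```python
import pandas as pd

# Hypothetical raw export from a form-driven SaaS application
raw = pd.DataFrame({
    "signup_date": ["2020-07-01", "2020-07-13", "not a date"],
    "department":  ["Marketing", "marketing", "Marketng"],
})

ALLOWED_DEPARTMENTS = {"Marketing", "Sales", "HR"}

# Normalize dates; anything unparseable becomes NaT and gets flagged below
raw["signup_date"] = pd.to_datetime(raw["signup_date"], errors="coerce")

# Standardize free-text values and flag anything outside the allowed list
raw["department"] = raw["department"].str.strip().str.title()
raw["valid"] = raw["signup_date"].notna() & raw["department"].isin(ALLOWED_DEPARTMENTS)

clean = raw[raw["valid"]].drop(columns="valid")
rejected = raw[~raw["valid"]]  # route these back to the source for correction
print(f"{len(clean)} clean rows, {len(rejected)} rejected rows")
```

The same checks can run as a scheduled job in the cloud before the data ever lands in your analytics store.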

Are you ready for data-driven decision making? Access and act on trustworthy data with the Data and Analytics services provided by 2nd Watch to enable smart, fast, and effective decisions that support your business goals. Contact Us to learn more about how to maximize your data use.

-Robert Whelan, Data Engineering & Analytics Practice Manager

The 'Agile Digital Transformation Roadmap' Poster

Intellyx’s new Agile Digital Transformation Roadmap poster is here! The poster lays out the steps necessary for enterprises to align with customer preferences by implementing change as a core competency and features five main focus areas: customer experience, enterprise IT, agile architecture, DevOps, and big data.

“While digital transformation begins with a customer-focused technology transformation, in reality, it represents end-to-end business transformation as organizations establish change as a core competency,” says Jason Bloomberg, president of Intellyx and contributor to Forbes. “The Agile Digital Transformation Roadmap poster illustrates the complex, intertwined steps enterprises must take to achieve the benefits of digital transformation.”

The poster is the companion to Jason Bloomberg’s forthcoming book, Agile Digital Transformation, due in 2017. This book will lay out a practical approach for digitally transforming organizations to be more agile and innovative.

As an official sponsor of the poster, we’re giving you the download for free – enjoy!

Download the Poster

-Nicole Maus, Marketing Manager

AWS Device Farm Simplifies Mobile App Testing

Easier Customized Mobile Device Digital Marketing

When marketers think digital, they think mobile, but the best way to reach people on their smartphones is an app, not a website. Still, mobile apps are a double-edged sword for companies. They deliver more users with higher engagement but are also harder and more costly to develop and test. Given that mobile devices are inherently connected, the first cloud services emerged to simplify app development.

Mobile backends and SDKs like Facebook Parse, Kumulos, or AWS Mobile Services tackled the backend services: data management, synchronization, notifications, and analytics. Real-world testing is the latest service, courtesy of AWS Device Farm, which provides virtual access to myriad mobile devices and operating environments. Device Farm, released in July, allows developers to easily test apps on hundreds of combinations of hardware and OS (with a constantly growing list) using either custom test scripts or a standard AWS compatibility test suite. Although the service launched targeting the most acute testing problem, on fragmented Android, it now supports iOS as well.

But the cloud service isn’t just able to provide instant access to a multitude of devices for hardware-specific tests – it also allows testing on multiple devices in parallel, which greatly cuts test time.

Growth of Digital Content Consumption

Bootstrapping mobile development with cloud services can yield huge dividends for organizations wanting to better connect with customers, employees, and partners. Not only are there more mobile than desktop users, but their usage is heavier. The average adult in the US spends almost three hours per day consuming digital content on a mobile device, 11% more than just last year. This means that businesses without a mobile strategy don’t have a digital strategy at all.

The problem is that providing a richer, customized, differentiated experience requires building a custom mobile app – a task that’s made more daunting by the cornucopia of devices in use. It means supporting multiple versions of two operating systems and countless hardware variations. Although Apple users generally upgrade to the latest iOS release within months, the latest Android development stats show four versions with at least 13% usage. Worse yet, a 2015 OpenSignal survey of hundreds of thousands of Android devices found more than 24,000 distinct device types.

Such diversity makes developing and thoroughly testing mobile apps vastly more complex than a website or PC application. One mobile app developer does QA testing on 400 different Android devices for every app – a testing nightmare that’s even worse when you consider that the mobile app release cycle is measured in weeks, not months. If ever a problem was in need of a virtualized cloud service, this is it; and AWS has delivered.

Device Farm

Device Farm takes an app archive (.apk file for Android or .ipa for iOS) and measures it against either custom test scripts or an AWS compatibility suite using a fuzz test of random events. Test projects consist of the actual test suite (Device Farm supports five scripting languages), a device pool (specific hardware and OS versions), and any predefined device state such as other installed apps, required local data, and device location. Aggregate results are presented on a summary screen, with details, including any screenshots, performance data, and log file output, available for each device.

Device Farm doesn’t replace the need for in-field beta testing and mobile app instrumentation to measure real-world usage, performance, and failures. However, with thorough, well-crafted test suites and a diverse mix of device types, it promises to dramatically improve the end-user experience by eliminating problems that only manifest when running on actual hardware instead of an IDE simulator.

Developers can automate and schedule tests using the Device Farm API or via Jenkins using the AWS plugin. Like every AWS service, pricing is usage based, where the metric is the total test time for each device at $0.17 per device minute; however, by judiciously selecting the device pool, it’s much cheaper than buying and configuring the actual hardware.
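For a sense of what that automation can look like, here is a minimal boto3 sketch of scheduling a Device Farm run against the built-in fuzz test. The project name, app file, and device pool selection are illustrative, and in a real pipeline you would upload the .apk to the returned pre-signed URL and wait for the upload to succeed before scheduling the run:

```python
import boto3

# Device Farm is a regional service (us-west-2 at the time of writing)
df = boto3.client("devicefarm", region_name="us-west-2")

project = df.create_project(name="my-android-app")["project"]

# Pick one of the curated device pools for the project
pool = df.list_device_pools(arn=project["arn"], type="CURATED")["devicePools"][0]

# Register the .apk; the response includes a pre-signed URL to PUT the file to
upload = df.create_upload(projectArn=project["arn"],
                          name="app-release.apk",
                          type="ANDROID_APP")["upload"]
# ... HTTP PUT the .apk to upload["url"], then poll until its status is SUCCEEDED ...

run = df.schedule_run(projectArn=project["arn"],
                      appArn=upload["arn"],
                      devicePoolArn=pool["arn"],
                      name="nightly-compatibility-run",
                      test={"type": "BUILTIN_FUZZ"})  # built-in random-event fuzz test
print(run["run"]["arn"])
```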

Along with Mobile Services for backend infrastructure, Device Farm makes a compelling mobile app development platform, particularly for organizations already using AWS for website and app development.

To learn more about AWS Device Farm or to get started on your Digital Marketing initiatives, contact us.

-2nd Watch blog by Kurt Marko

Application Development in the Cloud

The Amazon Web Services Cloud platform offers many unique advantages that can improve and expedite application development in ways traditional solutions cannot. Cloud computing eliminates the need for hardware procurement and makes resources available even to teams with limited budgets. What once may have taken months to prepare can now be ready in weeks, days, or even hours. A huge advantage of developing in the Cloud is speed. You no longer have to worry about the infrastructure, storage, or computing capacity needed to build and deploy applications. Development teams can focus on what they do best – creating applications.

Server and networking infrastructure

Developing a new application platform from start to finish can be a lengthy process fraught with numerous hurdles from an operations and infrastructure perspective that cause unanticipated delays of all types. Issues such as budget restrictions, hardware procurement, datacenter capacity and network connectivity are some of the usual suspects when it comes to delays in the development process. Developers cannot develop a platform without the requisite server and networking hardware in place, and deployment of those resources traditionally can require a significant investment of money, time and people.

Items that may need to be taken into consideration for preparing an environment include:

  • Servers
  • Networking hardware (e.g. switches and load balancers)
  • Datacenter cabinets
  • Power Distribution Units (PDU)
  • Power circuits
  • Cabling (e.g. power and network)


Delays related to any item on the above list can easily set back timeframes anywhere from a day to a few weeks. A short list of problems that can throw a wrench into plans include:

  • Potentially having to negotiate new agreements to lease additional datacenter space
  • Hardware vendor inventory shortages (servers, switches, memory, disks, etc.)
  • Bad network cross-connects, ports, and transceivers
  • Lack of hosting provider/datacenter cabinet space
  • Over-provisioned power capacity requiring additional circuits and/or PDUs
  • Defective hardware requiring RMA processing
  • Long wait times for installation of hardware by remote-hands


In a perfect world, the ideal development and staging environments will exactly mirror production, providing zero variability across the various stacks, with the exception of perhaps endpoint connectivity. Maintenance of multiple environments can be very time consuming. Performing development and testing in the AWS Cloud can help to completely eliminate many of the above headaches.

AWS handles all hardware provisioning, allowing you to select the hardware you need when you want it and pay by the hour as you go. Eliminating up-front costs allows for testing on or near the exact same type of hardware as in production and with the desired capacity. No need to worry about datacenter cabinet capacity and power. No need to have a single server running multiple functions (e.g. database and caching) because there simply isn’t enough hardware to go around. For applications that are anticipated to handle significant amounts of processing, this can be extremely advantageous for testing at scale. This is a key area where compromises are usually made with regard to hardware platforms and capacity. It’s commonplace to have older hardware re-used for development purposes because that’s simply the only hardware available, or to have multiple stacks being developed on a single platform because of a lack of resources, whether from a budget or hardware perspective. In a nutshell, provisioning hardware can be expensive and time consuming.

Utilizing a Cloud provider such as AWS eliminates the above headaches and allows for quickly deploying an infrastructure at scale with the hardware resources you want, when you want them. There are no up-front hardware costs for servers or networking equipment, and you can ‘turn off’ the instances if and when they are not being used for additional savings. Along with the elimination of up-front hardware costs comes the shift from capital expenses to operating expenses. Viewing hardware and resources from this perspective allows greater insight into expenses on a month-to-month basis, which can increase accountability and help to minimize and control spending. Beyond the issues of hardware and financing, there are numerous other benefits.

Ensuring software uniformity across stacks

This can be achieved by creating custom AMIs (Amazon Machine Images), allowing for the same OS and software packages to be included on deployed instances. Alternatively, User Data scripts, which execute commands (e.g. software installation and configuration), can also be used for this purpose when provisioning instances. Being able to deploy multiple instances in minutes with just a few mouse clicks is now possible. Need to load test and find out if your platform can handle 50,000 transactions per second? Simply deploy additional instances that have been created using an AMI built from an existing instance and add them to a load balancer configuration. AWS also features the AWS Marketplace, which helps customers find, buy, and immediately start using the software and services they need to build products and run their businesses. Customers can select software from well-known vendors already packaged into AMIs that are ready to use upon instance launch.
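As a simple sketch of both approaches, the boto3 snippet below bakes an AMI from an already-configured instance and then launches several identical instances from it with a short User Data script. The instance ID, AMI name, and packages are hypothetical:

```python
import boto3

ec2 = boto3.client("ec2")

# Bake an AMI from an instance that already has the OS and software configured
image = ec2.create_image(InstanceId="i-0123456789abcdef0", Name="web-tier-v1")
ec2.get_waiter("image_available").wait(ImageIds=[image["ImageId"]])

# User Data script that runs on first boot to install and start the web server
user_data = """#!/bin/bash
yum install -y nginx
systemctl enable --now nginx
"""

# Launch four identical instances from the AMI in a single call
ec2.run_instances(
    ImageId=image["ImageId"],
    InstanceType="t3.medium",
    MinCount=4,
    MaxCount=4,
    UserData=user_data,
)
```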

Data consistency

Trying to duplicate a database store usually involves dumping a database and then re-inserting that data into another server, which can be very time consuming. First you have to dump the data, and then it has to be imported to the destination. A much faster method that can be utilized on AWS is to:

  1. Snapshot the datastore volume
  2. Create an EBS volume from the snapshot in the desired availability zone
  3. Attach the EBS volume to another instance
  4. Mount the volume and then start the database engine.

*Note that snapshots are region-specific and will need to be copied if they are to be used in a region that differs from where they were originally created.
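Here is what those four steps might look like with boto3; the volume, instance, and device names are hypothetical, and the final mount happens inside the target instance rather than through the API:

```python
import boto3

ec2 = boto3.client("ec2")

# 1. Snapshot the datastore volume
snap = ec2.create_snapshot(VolumeId="vol-0123456789abcdef0",
                           Description="dev copy of prod datastore")
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snap["SnapshotId"]])

# 2. Create an EBS volume from the snapshot in the desired availability zone
vol = ec2.create_volume(SnapshotId=snap["SnapshotId"], AvailabilityZone="us-east-1b")
ec2.get_waiter("volume_available").wait(VolumeIds=[vol["VolumeId"]])

# 3. Attach the new volume to the other instance
ec2.attach_volume(VolumeId=vol["VolumeId"],
                  InstanceId="i-0123456789abcdef0",
                  Device="/dev/sdf")

# 4. From inside the instance: mount the device and start the database engine,
#    e.g. `mount /dev/xvdf /data && systemctl start mysqld`
```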

If utilizing the AWS RDS (Relational Database Service), the process is even simpler. All that’s needed is to create a snapshot of the RDS instance and deploy another instance from the snapshot. Again, if deploying in a different region, the snapshot will need to be copied between regions.

Infrastructure consistency

Because AWS is API-driven, it allows for the easy deployment of infrastructure as code utilizing the CloudFormation service. CloudFormation uses JSON-formatted templates that allow for the provisioning of various resources such as S3 buckets, EC2 instances, auto scaling groups, security groups, load balancers, and EBS volumes. VPCs (Virtual Private Clouds) can also be created using this same service, allowing for duplication of network environments between development, staging, and production. Utilizing CloudFormation to deploy AWS infrastructure can greatly expedite the process of security validation. Once an environment has passed testing, the same CFTs (CloudFormation Templates) can be used to deploy an exact copy of your stack in another VPC or region. Properly investing the time during the development and testing phases to refine CloudFormation code can reduce deployment of additional environments to a few clicks of a mouse button – try accomplishing that with a physical datacenter.
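A minimal boto3 sketch of that workflow might look like the following, where the template file, stack name, and parameters are hypothetical and the same template is reused to stamp out another environment:

```python
import boto3

cfn = boto3.client("cloudformation")

# Reuse the template refined during development to deploy another environment
with open("webapp-stack.json") as f:
    template_body = f.read()

cfn.create_stack(
    StackName="webapp-staging",
    TemplateBody=template_body,
    Parameters=[
        {"ParameterKey": "Environment", "ParameterValue": "staging"},
        {"ParameterKey": "InstanceType", "ParameterValue": "t3.large"},
    ],
    Capabilities=["CAPABILITY_IAM"],  # only needed if the template creates IAM resources
)
cfn.get_waiter("stack_create_complete").wait(StackName="webapp-staging")
```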

Higher-level services

For those wishing to utilize higher-level services to further simplify the deployment of environments, AWS offers the Elastic Beanstalk and OpsWorks services. Both are designed to reduce the complexity of deploying and managing the hardware and network layers on which applications run. Elastic Beanstalk is an easy-to-use and highly simplified application management service for building web apps and web services with popular application containers such as Java, PHP, Python, Ruby and .NET. Customers upload their code and Elastic Beanstalk automatically does the rest. OpsWorks features an integrated management experience for the entire application lifecycle including resource provisioning (e.g. instances, databases, load balancers), configuration management (Chef), application deployment, monitoring, and access control. It will work with applications of any level of complexity and is independent of any particular architectural pattern. Compared to Elastic Beanstalk, it also provides more control over the various layers comprising application stacks such as:

  • Application Layers: Ruby on Rails, PHP, Node.js, Java, and Nginx
  • Data Layers: MySQL and Memcached
  • Utility Layers: Ganglia and HAProxy


Summary

The examples above highlight some of the notable features and advantages AWS offers that can be utilized to expedite and assist application development. Public Cloud computing is changing the way organizations build, deploy and manage solutions. Notably, operating expenses now replace capital expenses, costs are lowered due to no longer having to guess capacity, and resources are made available on-demand. All of this adds up to reduced costs and shorter development times, which enables products and services to reach end-users much faster.

-Ryan Manikowski, Cloud Engineer

Running your Business Applications on AWS

An oft-held misconception by many individuals and organizations is that AWS is great for Web services, big data processing, DR, and all of the other “Internet facing” applications but not for running your internal business applications.  While AWS is absolutely an excellent fit for the aforementioned purposes, it is also an excellent choice for running the vast majority of business applications.  Everything from email services, to BI applications, to ERP, and even your own internally built applications can be run in AWS with ease while virtually eliminating future IT capex spending.

Laying the foundation
One of the most foundational pieces of architecture for most businesses is the network that applications and services ride upon.  In a traditional model, this will generally look like a varying number of switches in the datacenter that are interconnected with a core switch (e.g. a pair of Cisco Nexus 7000s). Then you have a number of routers and VPN devices (e.g. Cisco ASA 55XX) that interconnect the core datacenter with secondary datacenters and office sites.  This is a gross oversimplification of what really happens on the business’s underlying network (and neglects to mention technologies like Fibre Channel and InfiniBand).  But that further drives the point that migrating to AWS can greatly reduce the complexity and cost of a business in managing a traditional RYO (run your own) datacenter.

Anyone familiar with IT budgeting is more than aware of the massive capex costs associated with continually purchasing new hardware as well as the operational costs associated with managing it – maintenance agreements, salaries of highly skilled engineers, power, leased datacenter and network space, and so forth.  Some of these costs can be mitigated by going to a “hosted” model where you are leasing rack space in someone else’s datacenter, but you are still going to be forking out a wad of cash on a regular basis to support the hosted model.

The AWS VPC (Virtual Private Cloud) is a completely virtual network that allows businesses to create private network spaces within AWS to run all of their applications on, including internal business applications.  Through the VGW (Virtual Private Gateway), the VPC inherently provides a pathway for businesses to interconnect their off-cloud networks with AWS.  This can be done through traditional VPNs or by using Direct Connect.  Direct Connect provides a dedicated private connection from AWS to your off-cloud locations (e.g. on-prem, remote offices, colocation).  The VPC is also flexible enough that it will allow you to run your own VPN gateways on EC2 instances if that is a desired approach.  In addition, interconnecting with most MPLS providers is supported, as long as the MPLS provider hands off VLAN IDs.
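To make the moving parts concrete, here is a minimal boto3 sketch of creating a VPC, attaching a Virtual Private Gateway, and standing up a site-to-site VPN connection to an on-prem device. The CIDR block, public IP, and ASN are hypothetical, and route propagation and tunnel configuration on the customer side are omitted:

```python
import boto3

ec2 = boto3.client("ec2")

# Private network space for internal business applications
vpc = ec2.create_vpc(CidrBlock="10.20.0.0/16")["Vpc"]

# Virtual Private Gateway: the AWS side of the site-to-site VPN
vgw = ec2.create_vpn_gateway(Type="ipsec.1")["VpnGateway"]
ec2.attach_vpn_gateway(VpnGatewayId=vgw["VpnGatewayId"], VpcId=vpc["VpcId"])

# Customer Gateway: your off-cloud VPN device
cgw = ec2.create_customer_gateway(Type="ipsec.1",
                                  PublicIp="203.0.113.10",
                                  BgpAsn=65000)["CustomerGateway"]

# The VPN connection that ties the two networks together
vpn = ec2.create_vpn_connection(Type="ipsec.1",
                                CustomerGatewayId=cgw["CustomerGatewayId"],
                                VpnGatewayId=vgw["VpnGatewayId"])
print(vpn["VpnConnection"]["VpnConnectionId"])
```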

Moving up the stack
The prior section showed how the VPC is a low-cost and simplified approach to managing network infrastructure. We can proceed up the stack to the server, storage, and application layers.  Another piece of the network layer that is generally heavily intertwined with the application architecture and the server’s hosting is load balancing.  At a minimum, load balancing enables the application to run in a highly available and scalable manner while providing a single namespace/endpoint for the application client to connect to.  Amazon’s ELB (Elastic Load Balancer) is a very cost-effective, powerful, and easy-to-use solution for load balancing in AWS.  A lot of businesses have existing load balancing appliances, like F5 BIG-IP, Citrix NetScaler, or A10, that they use to manage their applications.  Many have also written a plethora of custom rules and configs, like F5 iRules, to do some layer 7 processing and logic on the application.  All of the previously mentioned load balancing solution providers, and quite a few more, have AWS hosted options available, so there is an easy migration path if they decide the ELB is not a good fit for their needs.  However, I have personally written migration tools for our customers to convert well over a thousand F5 Virtual IPs and pools (dumped to a CSV) into ELBs.  It allowed for a quick and scripted migration of the entire infrastructure with enormous cost savings to the customer.  In addition to off-the-shelf appliances for load balancing, you can also roll your own with tools like HAProxy and Nginx, but we find that for most people the ELB is an excellent solution for meeting their load balancing needs.
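The migration tooling mentioned above was customer-specific, but a stripped-down sketch of the general idea – read a CSV dump of F5 virtual servers and recreate each one as a Classic ELB with its pool members registered – might look like this. The CSV layout, subnet, and security group IDs are hypothetical:

```python
import csv
import boto3

elb = boto3.client("elb")  # Classic Elastic Load Balancer API

# Hypothetical CSV columns: name, port, members (semicolon-separated instance IDs)
with open("f5_virtual_servers.csv") as f:
    for row in csv.DictReader(f):
        elb.create_load_balancer(
            LoadBalancerName=row["name"],
            Listeners=[{
                "Protocol": "HTTP",
                "LoadBalancerPort": int(row["port"]),
                "InstanceProtocol": "HTTP",
                "InstancePort": int(row["port"]),
            }],
            Subnets=["subnet-0123456789abcdef0"],
            SecurityGroups=["sg-0123456789abcdef0"],
        )
        # Register the pool members that sat behind the F5 virtual IP
        elb.register_instances_with_load_balancer(
            LoadBalancerName=row["name"],
            Instances=[{"InstanceId": i} for i in row["members"].split(";")],
        )
```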

Now we have laid the network foundation to run our servers and applications on.  AWS provides several services for this.  If you need, or desire, to manage your own servers and underlying operating system, EC2 (Elastic Compute Cloud) provides the foundational building blocks for spinning up virtual servers you can tailor to suit whatever need you have.  A multitude of Linux and Windows-based Operating Systems are supported.  If your application supports it, there are services like ElasticBeanstalk, OpsWorks, or Lambda, to name a few, that will manage the underlying compute resources for you and simply allow you to “deploy code” on completely managed compute resources in the VPC.

What about my databases?
There are countless examples of people running internal business application databases in AWS.  The RDS (Relational Database Service) provides a comprehensive, robust, and HA capable hosted solution for MySQL, PostgreSQL, Microsoft SQL server, and Oracle.  If your database platform isn’t supported by RDS, you can always run your own DB servers on EC2 instances.

NAS would be nice

AWS has always recommended a very ephemeral approach to application architectures and not storing data directly on an instance.  Sometimes there is no getting away from needing shared storage across multiple instances, though.  Amazon S3 is a potential solution but is not intended to be used as attached storage, so the application must be capable of addressing and utilizing S3’s endpoints if that is to be a solution.  There are a great many applications that aren’t compatible with that model.

Until recently your options were pretty limited for providing a NAS type of shared storage to Amazon EC2 instances.  You could create a GlusterFS (AKA Red Hat Storage Server) or Ceph cluster out of EC2 instances spanned across multiple availability zones, but that is fairly expensive and has several client mounting issues. The Gluster client, for example, is a FUSE (filesystem in user space) client and has sub-optimal performance.  Linus Torvalds has a famous and slightly amusing – depending upon the audience – rant about userspace filesystems (see: https://lkml.org/lkml/2011/6/9/462).  To get around the FUSE problem you could always enable NFS server mode, but that breaks the ability of the client to dynamically connect to another GlusterFS server node if one fails, thus introducing a single point of failure.  You could conceivably set up some sort of NFS server HA cluster using Linux Heartbeat, but that is tedious, error prone, and places the burden of the storage ecosystem support on the IT organization, which is not desirable for most IT organizations.  Not to mention that Heartbeat requires a shared static IP address, which could be jury-rigged in VPC, but you absolutely cannot share the same IP address across multiple Availability Zones, so you would lose multi-AZ protection.

Yes, there were “solutions,” but nothing that was as easy and slick as most everything else in AWS, nor anything ready for primetime.  Then on April 9th, 2015, Amazon introduced us to EFS (Elastic File System).  The majority of corporate IT AWS users have been clamoring for a shared file system solution in AWS for quite some time, and EFS is set to fill that need.  EFS is a low-latency, shared storage solution available to multiple EC2 instances simultaneously via NFSv4.  It is currently in preview mode but should be released to GA in the near future.  See more at https://aws.amazon.com/efs/.

Thinking outside the box
In addition to the AWS tools that are analogs of traditional IT infrastructure (e.g. VPC ≈ Network Layer, EC2 ≈ Physical server or VM), there are a large number of tools and SaaS offerings that add value above and beyond.  Tools like SQS, SWF, SES, RDS – for hosted/managed RDBMS platforms – CloudTrail, CloudWatch, DynamoDB, Directory Service, WorkDocs, WorkSpaces, and many more make transitioning traditional business applications into the cloud easy, all the while eliminating capex costs, reducing operating costs, and increasing stability and reliability.

A word on architectural best practices
If it is at all possible, there are some guiding principles and best practices that should be followed when designing and implementing solutions in AWS.  First and foremost, design for failure.  The new paradigm in virtualized and cloud computing is that no individual system is sacred and nothing is impervious to potential failure.  Having worked in a wide variety of high tech and IT organizations over the past 20 years, this should really come as no surprise because even when everything is running on highly redundant hardware and networks, equipment and software failures have ALWAYS been prevalent.  IT and software design as a culture would have been much better off adopting this mantra years and years ago.  However, overcoming some of the hurdles designing for failure creates wasn’t a full reality until virtualization and the Cloud were available.

AWS is by far the forerunner in providing services and technologies that allow organizations to decouple the application architecture from the underlying infrastructure.  Tools like Route53, AutoScaling, CloudWatch, SNS, EC2, and configuration management allow you to design a high level of redundancy and automatic recovery into your infrastructure and application architecture.  In addition to designing for failure, decoupling the application state from the architecture as a whole should be strived for.  The application state should not be stored on any individual component in the stack, nor should it be passed around between the layers.  This way the loss of a single component in the chain will not destroy the state of the application.  Having the state of the application stored in its own autonomous location, like a distributed NoSQL DB cluster, will allow the application to function without skipping a beat in the event of a component failure.

Finally, a DevOps, Continuous Integration, or Continuous Delivery methodology should be adopted for application development.  This allows changes to be tested automatically before being pushed into production and also provides a high level of business agility – the same kind of business agility that running in the Cloud is meant to provide.

-Ryan Kennedy, Senior Cloud Architect

High Performance Computing in the Public Cloud

The exponential growth of big data is pushing companies to process massive amounts of information as quickly as possible, which is oftentimes not realistic, not practical, or downright unachievable on standard CPUs. In a nutshell, High Performance Computing (HPC) allows you to scale performance to process and report on the data quicker and can be the solution to many of your big data problems.

However, this still relies on your cluster capabilities. By using AWS for your HPC needs, you no longer have to worry about designing and adjusting your job to meet the capabilities of your cluster. Instead, you can quickly design and change your cluster to meet the needs of your jobs.  There are several tools and services available to help you do this, like the AWS Marketplace, AWS APIs, or AWS CloudFormation templates.

Today, I’d like to focus on one aspect of running an HPC cluster in AWS that people tend to forget about – placement groups.

Placement groups are a logical grouping of instances in a single availability zone.  This allows you to take full advantage of a low-latency 10 Gigabit network, which in turn allows you to transfer up to 4TB of data per hour between nodes.  However, because of the low-latency 10 Gigabit network, placement groups cannot span multiple availability zones.  This may scare some people away from using them, but it shouldn’t. You can create multiple placement groups in different availability zones as a workaround, and with enhanced networking you can still connect between the different HPC clusters.
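Creating and using a placement group is straightforward with boto3; the sketch below (group name, AMI, and instance type are hypothetical) launches a set of nodes into a cluster placement group so they share the low-latency network:

```python
import boto3

ec2 = boto3.client("ec2")

# A cluster placement group keeps instances on the low-latency network
# within a single availability zone
ec2.create_placement_group(GroupName="hpc-cluster-a", Strategy="cluster")

# Launch the compute nodes into the placement group; pick an instance type
# that supports enhanced networking
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="c5n.18xlarge",
    MinCount=8,
    MaxCount=8,
    Placement={"GroupName": "hpc-cluster-a"},
)
```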

One of the greatest benefits of AWS HPC is that you can run your High Performance Computing clusters with no up-front costs and scale out to hundreds of thousands of cores within minutes to meet your computing needs. Learn more about Big Data and HPC solutions on AWS or Contact Us to get started with a workload workshop.

-Shawn Bliesner, Cloud Architect

Business Intelligence and Analytics in the Public Cloud

Business intelligence (BI) is an umbrella term that refers to a variety of software applications used to analyze an organization’s raw data. BI as a discipline is made up of several related activities, including data mining, online analytical processing, querying, and reporting.  Analytics is the discovery and communication of meaningful patterns in data. This blog will look at a few areas of BI, including data mining and reporting, and talk about using analytics to find the answers you need to make better business decisions.

Data Mining

Data mining is an analytic process designed to explore data.  Companies of all sizes continuously collect data, oftentimes in very large amounts, in order to solve complex business problems.  Data collection can range in purpose from finding out the types of soda your customers like to drink to tracking genome patterns. Processing these large amounts of data quickly takes a lot of processing power, and therefore a system such as Amazon Elastic MapReduce (EMR) is often needed to accomplish this.  AWS EMR can handle most use cases, from log analysis to bioinformatics, which are key when collecting data. But AWS EMR can only report on data that is collected, so make sure the collected data is accurate and complete.
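As a rough sketch of what kicking off such a job looks like, the boto3 snippet below spins up a small EMR cluster, runs a single Hive script against data in S3, and then terminates. The release label, bucket, script path, and instance sizes are hypothetical:

```python
import boto3

emr = boto3.client("emr")

emr.run_job_flow(
    Name="log-analysis",
    ReleaseLabel="emr-5.30.0",
    Applications=[{"Name": "Hive"}],
    Instances={
        "MasterInstanceType": "m5.xlarge",
        "SlaveInstanceType": "m5.xlarge",
        "InstanceCount": 4,
        "KeepJobFlowAliveWhenNoSteps": False,  # terminate when the step finishes
    },
    Steps=[{
        "Name": "daily-log-rollup",
        "ActionOnFailure": "TERMINATE_CLUSTER",
        "HadoopJarStep": {
            "Jar": "command-runner.jar",
            # A typical pattern for running a Hive script stored in S3
            "Args": ["hive-script", "--run-hive-script",
                     "--args", "-f", "s3://my-bucket/rollup.q"],
        },
    }],
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
    LogUri="s3://my-bucket/emr-logs/",
)
```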

Reporting

Reporting accurate and complete data is essential for good BI.  Tools like Splunk’s Hunk and Hive work very well with AWS EMR for modeling, reporting, and analyzing data.  Hive is business intelligence software used for reporting meaningful patterns in the data, while Hunk helps interactively review logs with real-time alerts. Using the correct tools is the difference between data no one can use and data that provides meaningful BI.

Why do we collect all this data? To find answers, of course! Finding answers in your data, from marketing data to application debugging, is why we collect the data in the first place.  AWS EMR is great for processing all that data with the right tools for reporting on it.  But more than knowing just what happened, we need to find out how it happened.  Interactive queries on the data are required to drill down and find the root causes or customer trends.  Tools like Impala and Tableau work great with AWS EMR for these needs.

Business Intelligence and Analytics boils down to collecting accurate and complete data.  That includes having a system that can process that data, having the ability to report on that data in a meaningful way, and using that data to find answers.  By provisioning the storage, computation, and database services you need to collect big data in the cloud, we can help you manage big data, BI, and analytics while reducing costs, increasing speed of innovation, and providing high availability and durability so you can focus on making sense of your data and using it to make better business decisions.  Learn more about our BI and Analytics Solutions here.

-Brent Anderson, Senior Cloud Engineer

Batch Computing in the Cloud with Amazon SQS & SWF

Batch computing isn’t necessarily the most difficult thing to design a solution around, but there are a lot of moving parts to manage, and building in elasticity to handle fluctuations in demand certainly cranks up the complexity.  It might not be particularly exciting, but it is one of those things that almost every business has to deal with in some form or another.

The on-demand and ephemeral nature of the Cloud makes batch computing a pretty logical use of the technology, but how do you best architect a solution that will take care of this?  Thankfully, AWS has a number of services geared towards just that.  Amazon SQS (Simple Queue Services) and SWF (Simple Workflow Service) are both very good tools to assist in managing batch processing jobs in the Cloud.  Elastic Transcoder is another tool that is geared specifically around transcoding media files.  If your workload is geared more towards analytics and processing petabyte scale big data, then tools like EMR (Elastic Map Reduce) and Kinesis could be right up your alley (we’ll cover that in another blog).  In addition to not having to manage any of the infrastructure these services ride on, you also benefit from the streamlined integration with other AWS services like IAM for access control, S3, SNS, DynamoDB, etc.

For this article, we’re going to take a closer look at using SQS and SWF to handle typical batch computing demands.

Simple Queue Services (SQS), as the name suggests, is relatively simple.  It provides a queuing system that allows you to reliably populate and consume queues of data.  Queued items in SQS are called messages and are either a string, number, or binary value.  Messages are variable in size but can be no larger than 256KB (at the time of this writing).  If you need to queue data/messages larger than 256KB in size the best practice is to store the data elsewhere (e.g. S3, DynamoDB, Redis, MySQL) and use the message data field as a linker to the actual data.  Messages are stored redundantly by the SQS service, providing fault tolerance and guaranteed delivery.  SQS doesn’t guarantee delivery order or that a message will be delivered only once, which seems like something that could be problematic except that it provides something called Visibility Timeout that ensures once a message has been retrieved it will not be resent for a given period of time.  You (well, your application really) have to tell SQS when you have consumed a message and issue a delete on that message.  The important thing is to make sure you are doing this within the Visibility Timeout, otherwise you may end up processing single messages multiple times.  The reasoning behind not just deleting a message once it has been read from the queue is that SQS has no visibility into your application and whether the message was actually processed completely, or even just successfully read for that matter.
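A minimal boto3 sketch of that produce/consume loop is below. The queue name, visibility timeout, and the pointer-style message body are illustrative; the important part is that the consumer deletes the message only after it has finished processing, and does so within the visibility timeout:

```python
import boto3

sqs = boto3.client("sqs")

# Queue with a 2-minute visibility timeout
queue_url = sqs.create_queue(QueueName="batch-jobs",
                             Attributes={"VisibilityTimeout": "120"})["QueueUrl"]

# Producer: keep the body small and point to the real payload stored elsewhere
sqs.send_message(QueueUrl=queue_url, MessageBody="s3://my-bucket/jobs/job-42.json")

def process_job(body):
    print("processing", body)  # stand-in for the real batch work

# Consumer: receive, process, then delete within the visibility timeout
resp = sqs.receive_message(QueueUrl=queue_url,
                           MaxNumberOfMessages=1,
                           WaitTimeSeconds=20)  # long polling
for msg in resp.get("Messages", []):
    process_job(msg["Body"])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```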

Where SQS is designed to be data-centric and remove the burden of managing a queuing application and infrastructure, Simple Workflow Service (SWF) takes it a step further and allows you to better manage the entire workflow around the data.  While SWF implies simplicity in its name, it is a bit more complex than SQS (though that added complexity buys you a lot).  With SQS you are responsible for managing the state of your workflow and processing of the messages in the queue, but with SWF, the workflow state and much of its management is abstracted away from the infrastructure and application you have to manage.  The initiators, workers, and deciders have to interface with the SWF API to trigger state changes, but the state and logical flow are all stored and managed on the backend by SWF.  SWF is quite flexible too in that you can use it to work with AWS infrastructure, other public and private cloud providers, or even traditional on-premise infrastructure.  SWF supports both sequential and parallel processing of workflow tasks.

Note: if you are familiar with or are already using JMS, you may be interested to know SQS provides a JMS interface through its java messaging library.

One major thing SWF buys you over using SQS is that the execution state of the entire workflow is stored by SWF, abstracted from the initiators, workers, and deciders.  So not only do you not have to concern yourself with maintaining the workflow execution state, it is completely abstracted away from your infrastructure.  This makes the SWF architecture highly scalable in nature and inherently very fault-tolerant.
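To give a flavor of the SWF API, here is a heavily condensed boto3 sketch of an initiator starting a workflow execution and an activity worker polling for a task and reporting its result. The domain, workflow type, and task list are hypothetical and must be registered in SWF beforehand, and the decider loop that actually schedules activities is omitted for brevity:

```python
import boto3

swf = boto3.client("swf")

DOMAIN = "batch-processing"        # hypothetical, registered ahead of time
TASK_LIST = {"name": "default"}

# Initiator: kick off a workflow execution; SWF stores all execution state
swf.start_workflow_execution(
    domain=DOMAIN,
    workflowId="order-1234",
    workflowType={"name": "ProcessOrder", "version": "1.0"},
    taskList=TASK_LIST,
    input='{"orderId": 1234}',
    executionStartToCloseTimeout="3600",
    taskStartToCloseTimeout="300",
)

def do_the_work(payload):
    return "done: " + payload      # stand-in for the real activity logic

# Activity worker: long-poll for a task, do the work, report the result
task = swf.poll_for_activity_task(domain=DOMAIN, taskList=TASK_LIST)
if task.get("taskToken"):
    result = do_the_work(task.get("input", ""))
    swf.respond_activity_task_completed(taskToken=task["taskToken"], result=result)
```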

There are a number of good SWF examples and use cases available on the web.  The SWF Developer Guide uses a classic e-commerce customer order workflow (i.e. place order, process payment, ship order, record completed order).  The SWF console also has a built-in demo workflow that processes an image and converts it to either grayscale or sepia (requires AWS account login).  Either of these is a good example to walk through to gain a better understanding of how SWF is designed to work.

Contact 2nd Watch today to get started with your batch computing workloads in the cloud.

-Ryan Kennedy, Sr. Cloud Architect