A common misconception among individuals and organizations is that AWS is great for web services, big data processing, DR, and other "Internet-facing" applications, but not for running internal business applications. While AWS is absolutely an excellent fit for the aforementioned purposes, it is also an excellent choice for running the vast majority of business applications. Everything from email services, to BI applications, to ERP, and even your own internally built applications can run in AWS with ease, while virtually eliminating future IT capex spending.
Laying the foundation
One of the most foundational pieces of architecture for most businesses is the network that applications and services ride upon. In a traditional model, this generally looks like a varying number of switches in the datacenter interconnected with a core switch (e.g. a pair of Cisco Nexus 7000s). Then there are a number of routers and VPN devices (e.g. Cisco ASA 55XX) that interconnect the core datacenter with secondary datacenters and office sites. This is a gross oversimplification of what really happens on the business's underlying network (and neglects technologies like Fibre Channel and InfiniBand), but that only reinforces the point: migrating to AWS can greatly reduce the complexity and cost of managing a traditional RYO (run your own) datacenter.
Anyone familiar with IT budgeting is more than aware of the massive capex costs associated with continually purchasing new hardware, as well as the operational costs of managing it: maintenance agreements, salaries of highly skilled engineers, power, leased datacenter and network space, and so forth. Some of these costs can be mitigated by going to a "hosted" model, where you lease rack space in someone else's datacenter, but you are still going to be forking out a wad of cash on a regular basis to support that model.
The AWS VPC (Virtual Private Cloud) is a completely virtual network that allows businesses to create private network spaces within AWS to run all of their applications on, including internal business applications. Through the VGW (Virtual Private Gateway), the VPC inherently provides a pathway for businesses to interconnect their off-cloud networks with AWS. This can be done through traditional VPNs or through Direct Connect, which provides a dedicated private connection from AWS to your off-cloud locations (e.g. on-prem, remote offices, colocation). The VPC is also flexible enough to let you run your own VPN gateways on EC2 instances if that is the desired approach. In addition, interconnecting with most MPLS providers is supported, as long as the MPLS provider hands off VLAN IDs.
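To make that foundation concrete, here is a minimal sketch using boto3 (the AWS SDK for Python) that carves out a VPC with a couple of private subnets and attaches a VGW. The region, CIDR ranges, and resource IDs are placeholder assumptions for illustration, not recommendations.

# Minimal VPC-plus-VGW sketch; all values are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create the VPC that internal applications will live in.
vpc = ec2.create_vpc(CidrBlock="10.20.0.0/16")
vpc_id = vpc["Vpc"]["VpcId"]

# Carve out a private subnet per Availability Zone for redundancy.
for az, cidr in [("us-east-1a", "10.20.1.0/24"), ("us-east-1b", "10.20.2.0/24")]:
    ec2.create_subnet(VpcId=vpc_id, CidrBlock=cidr, AvailabilityZone=az)

# Attach a Virtual Private Gateway, the on-ramp for VPN or
# Direct Connect connectivity back to the off-cloud network.
vgw = ec2.create_vpn_gateway(Type="ipsec.1")
ec2.attach_vpn_gateway(VpnGatewayId=vgw["VpnGateway"]["VpnGatewayId"], VpcId=vpc_id)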
Moving up the stack
The prior section showed how the VPC is a low-cost and simplified approach to managing network infrastructure, so we can proceed up the stack to the server, storage, and application layers. Another piece of the network layer that is generally heavily intertwined with the application architecture and the servers hosting it is load balancing. At a minimum, load balancing enables the application to run in a highly available and scalable manner while providing a single namespace/endpoint for the application clients to connect to. Amazon's ELB (Elastic Load Balancer) is a very cost-effective, powerful, and easy-to-use load balancing solution in AWS. A lot of businesses have existing load balancing appliances, like F5 BIG-IP, Citrix NetScaler, or A10, that they use to manage their applications. Many have also written a plethora of custom rules and configs, like F5 iRules, to do layer 7 processing and logic for the application. All of the previously mentioned load balancing vendors, and quite a few more, have AWS-hosted options available, so there is an easy migration path if a business decides the ELB is not a good fit for its needs. That said, I have personally written migration tools for our customers that converted well over a thousand F5 Virtual IPs and pools (dumped to a CSV) into ELBs, allowing a quick, scripted migration of the entire load balancing infrastructure with enormous cost savings to the customer. In addition to off-the-shelf appliances, you can also roll your own load balancing with tools like HAProxy and Nginx, but we find that for most people the ELB is an excellent solution for meeting their load balancing needs.
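The CSV-to-ELB conversion can be sketched roughly as follows with boto3. The CSV layout (VIP name, port, semicolon-separated pool members) and all names here are hypothetical assumptions; the real migration tooling will depend on how the F5 configuration was exported.

# Rough sketch: turn exported F5 VIPs/pools into Classic ELBs.
import csv
import boto3

elb = boto3.client("elb", region_name="us-east-1")

with open("f5_vips.csv") as f:
    for row in csv.DictReader(f):
        # One Classic ELB per former F5 Virtual IP.
        elb.create_load_balancer(
            LoadBalancerName=row["vip_name"],
            Listeners=[{
                "Protocol": "HTTP",
                "LoadBalancerPort": int(row["port"]),
                "InstanceProtocol": "HTTP",
                "InstancePort": int(row["port"]),
            }],
            Subnets=["subnet-aaaa1111", "subnet-bbbb2222"],  # placeholder subnets
        )
        # The former pool members become registered back-end instances.
        members = [{"InstanceId": i} for i in row["pool_members"].split(";")]
        elb.register_instances_with_load_balancer(
            LoadBalancerName=row["vip_name"], Instances=members)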
Now we have laid the network foundation to run our servers and applications on, and AWS provides several services for this. If you need, or desire, to manage your own servers and underlying operating system, EC2 (Elastic Compute Cloud) provides the foundational building blocks for spinning up virtual servers you can tailor to suit whatever need you have, and a multitude of Linux and Windows-based operating systems are supported. If your application supports it, there are services like Elastic Beanstalk, OpsWorks, or Lambda, to name a few, that will manage the underlying compute resources for you, letting you simply "deploy code" onto completely managed resources in the VPC.
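For the EC2 route, launching a server is a single API call. The sketch below uses boto3 with a placeholder AMI ID, instance type, and subnet; in practice these would come from your own AMI catalog and VPC layout.

# Launch one instance into a private VPC subnet; values are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # e.g. an Amazon Linux or Windows AMI
    InstanceType="t2.medium",
    MinCount=1,
    MaxCount=1,
    SubnetId="subnet-aaaa1111",        # one of the private VPC subnets
)
print(response["Instances"][0]["InstanceId"])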
What about my databases?
There are countless examples of people running internal business application databases in AWS. RDS (Relational Database Service) provides a comprehensive, robust, and HA-capable hosted solution for MySQL, PostgreSQL, Microsoft SQL Server, and Oracle. If your database platform isn't supported by RDS, you can always run your own DB servers on EC2 instances.
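As a hedged sketch of the hosted route, the boto3 call below creates a Multi-AZ PostgreSQL instance. The identifier, instance class, storage size, and credentials are all placeholders.

# Create a Multi-AZ PostgreSQL instance in RDS; values are placeholders.
import boto3

rds = boto3.client("rds", region_name="us-east-1")

rds.create_db_instance(
    DBInstanceIdentifier="erp-db",
    Engine="postgres",
    DBInstanceClass="db.m3.large",
    AllocatedStorage=100,          # GiB
    MasterUsername="dbadmin",
    MasterUserPassword="change-me-please",
    MultiAZ=True,                  # synchronous standby in a second AZ
)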
NAS would be nice
AWS has always recommended an ephemeral approach to application architecture, with data not stored directly on an instance. Sometimes, though, there is no getting away from the need for shared storage across multiple instances. Amazon S3 is a potential solution, but it is not intended to be used as attached storage, so the application must be capable of addressing and utilizing S3's endpoints for it to work, and a great many applications aren't compatible with that model.
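For applications that can speak to S3 directly, the object-store model looks roughly like this boto3 sketch; the bucket and key names are hypothetical.

# Read and write objects through the S3 API rather than a mounted path.
import boto3

s3 = boto3.client("s3")

s3.put_object(Bucket="acme-internal-reports", Key="2015/q1/summary.pdf",
              Body=open("summary.pdf", "rb"))
obj = s3.get_object(Bucket="acme-internal-reports", Key="2015/q1/summary.pdf")
data = obj["Body"].read()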
Until recently your options were pretty limited for providing a NAS type of shared storage to Amazon EC2 instances. You could create a GlusterFS (a.k.a. Red Hat Storage Server) or Ceph cluster out of EC2 instances spanned across multiple Availability Zones, but that is fairly expensive and has several client mounting issues. The Gluster client, for example, is a FUSE (filesystem in userspace) client and has sub-optimal performance. Linus Torvalds has a famous and slightly amusing (depending upon the audience) rant about userspace filesystems (see: https://lkml.org/lkml/2011/6/9/462). To get around the FUSE problem you could always enable NFS server mode, but that breaks the ability of the client to dynamically connect to another GlusterFS server node if one fails, thus introducing a single point of failure. You could conceivably set up some sort of NFS server HA cluster using Linux Heartbeat, but that is tedious and error prone, and it places the burden of supporting the storage ecosystem on the IT organization, which most IT organizations do not want. Not to mention that Heartbeat requires a shared static IP address, which could be jury-rigged in a VPC, but you absolutely cannot share the same IP address across multiple Availability Zones, so you would lose multi-AZ protection.
Yes, there were "solutions," but nothing as easy and slick as most everything else in AWS, and nothing that was ready for primetime. Then, on April 9th, 2015, Amazon introduced EFS (Elastic File System). The majority of corporate IT AWS users have been clamoring for a shared file system solution for quite some time, and EFS is set to fill that need. EFS is a low-latency shared storage solution available to multiple EC2 instances simultaneously via NFSv4. It is currently in preview but should be released to GA in the near future. See more at https://aws.amazon.com/efs/.
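Because EFS is still in preview, the following is only a sketch of what provisioning might look like once the service and the SDKs fully support it; the boto3 "efs" client usage, subnet IDs, and security group are assumptions and may change.

# Speculative sketch: create a file system and per-AZ mount targets.
import boto3

efs = boto3.client("efs", region_name="us-west-2")

fs = efs.create_file_system(CreationToken="shared-app-data")
fs_id = fs["FileSystemId"]

# One mount target per subnet/AZ so instances in each zone mount locally.
for subnet in ["subnet-aaaa1111", "subnet-bbbb2222"]:
    efs.create_mount_target(FileSystemId=fs_id, SubnetId=subnet,
                            SecurityGroups=["sg-0123abcd"])

# Instances then mount the share over NFSv4, e.g.:
#   mount -t nfs4 <mount-target-dns>:/ /mnt/shared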
Thinking outside the box
In addition to the AWS tools that are analogs of traditional IT infrastructure (e.g. VPC ≈ network layer, EC2 ≈ physical server or VM), there are a large number of tools and SaaS offerings that add value above and beyond. Tools like SQS, SWF, SES, RDS (for hosted/managed RDBMS platforms), CloudTrail, CloudWatch, DynamoDB, Directory Service, WorkDocs, WorkSpaces, and many more make transitioning traditional business applications into the cloud easy, all the while eliminating capex costs, reducing operating costs, and increasing stability and reliability.
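As one small, hypothetical example of these building blocks replacing homegrown plumbing, an internal application could hand work off to a background worker through SQS; the queue name and message body below are made up for illustration.

# Hand work to a decoupled worker via SQS instead of a homegrown job table.
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")

queue = sqs.create_queue(QueueName="invoice-processing")
sqs.send_message(QueueUrl=queue["QueueUrl"],
                 MessageBody='{"invoice_id": 42, "action": "generate_pdf"}')

# A worker elsewhere polls the queue and processes messages independently.
messages = sqs.receive_message(QueueUrl=queue["QueueUrl"], MaxNumberOfMessages=1)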
A word on architectural best practices
Wherever possible, some guiding principles and best practices should be followed when designing and implementing solutions in AWS. First and foremost: design for failure. The new paradigm in virtualized and cloud computing is that no individual system is sacred and nothing is impervious to potential failure. Having worked in a wide variety of high-tech and IT organizations over the past 20 years, I can say this should come as no surprise, because even when everything runs on highly redundant hardware and networks, equipment and software failures have ALWAYS been prevalent. IT and software design as a culture would have been much better off adopting this mantra years and years ago; however, overcoming some of the hurdles that designing for failure creates wasn't fully practical until virtualization and the cloud arrived.
AWS is by far the forerunner in providing services and technologies that allow organizations to decouple the application architecture from the underlying infrastructure. Tools like Route 53, Auto Scaling, CloudWatch, SNS, EC2, and configuration management allow you to design a high level of redundancy and automatic recovery into your infrastructure and application architecture. In addition to designing for failure, you should strive to decouple the application state from the architecture as a whole. The application state should not be stored on any individual component in the stack, nor should it be passed around between the layers; that way, the loss of a single component in the chain will not destroy the state of the application. Storing the application state in its own autonomous location, like a distributed NoSQL DB cluster, allows the application to function without skipping a beat in the event of a component failure.
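As a brief illustration of decoupled state, the boto3 sketch below writes session data to a DynamoDB table rather than keeping it on a web or app server; the table name and item schema are assumptions, and the table is presumed to already exist with session_id as its key.

# Keep session state in DynamoDB so any instance can serve any request.
import boto3

dynamodb = boto3.resource("dynamodb", region_name="us-east-1")
sessions = dynamodb.Table("app-sessions")

# Any instance behind the load balancer can read or write a session,
# so losing one instance does not lose the user's state.
sessions.put_item(Item={"session_id": "abc123", "user": "jdoe", "cart_items": 3})
state = sessions.get_item(Key={"session_id": "abc123"}).get("Item")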
Finally, a DevOps, Continuous Integration, or Continuous Delivery methodology should be adopted for application development. This allows changes to be tested automatically before being pushed into production and also provides a high level of business agility, the same kind of business agility that running in the cloud is meant to provide.
-Ryan Kennedy, Senior Cloud Architect




