Yes, I know, everyone is tired of hearing about the Cloud. It seems like talk about the cloud happens all day, every day, and you know it’s hit the mainstream when your mom asks you about it. The reality is that we’re still so early in “The Journey” (yes, we call it that because it truly is one) that it can be impossible to cut through the tremendous amount of noise that exists around the topic. Let’s spend a few precious moments identifying the cloud myths that are swirling about and try to do a bit of myth busting.
Legacy – I have too much invested in my legacy systems, tools, and processes; moving to the cloud is too hard or just not worth it.
That’s partly true. Many companies have a lot of legacy systems and infrastructure out there. So much so that it clouds (no pun intended) their view on what’s possible. It’s like quicksand; the more time and money invested in legacy systems and architectures, the deeper and deeper you get, and it just seems impossible to get out. There is a way out though, and the first step forward is actually to take a step back and understand where you are today. From there, we’d suggest taking stock of what’s in your environment and seeing what’s ready to move to the cloud.
Security – It’s not secure. I’ll be sharing my data with everyone else.
That’s absolutely not true. The public cloud is extremely secure. These environments have been built to adhere to the most stringent security standards on the planet. Cloud providers take a defense-in-depth approach, going above and beyond to ensure that security permeates the entire environment.
Agility – What am I really gaining? There can’t really be as much benefit as people are saying.
When we talk to any business person, lack of agility is typically their number one challenge. Traditional legacy, or even co-location, infrastructure is designed and built in a way that doesn’t allow for the flexibility companies need in a constantly changing world. The need to continually evolve and the ability to “fail fast” are so important to businesses today, and the cloud enables you to do just that. You can literally create a global infrastructure in a matter of minutes that runs only when you need it. The benefits are dizzying.
Cost – I hear that it will actually cost me more to run in the cloud.
The existing public cloud providers have gained tremendous economies of scale by building out their massive footprints. It’s given them such a head start that it’s downright unbelievable what you can do today at a fraction of the cost of doing it in a traditional IT world. There are a number of TCO calculators out there that will show you the cost of running infrastructure on-prem vs. in the cloud. Take a look at the calculator we built for AWS and see for yourself by plugging in your own numbers.
Best of Breed – I can use any cloud provider. They’re all the same.
There is an entire body of knowledge dedicated to the cloud landscape, how mature each company’s offerings are and where they fit in the overall landscape. I am a firm believer that you build your company to be as agile as possible, trying to eliminate brittle and hard linkages. Please check out the following link for an independent analyst’s view of today’s cloud landscape.
See what Gartner is Saying about the Cloud
Org Structure – I can use cloud as I see fit and keep things the way they’ve normally been internally.
True innovation is happening here. The industry is attracting the absolute best and brightest talent, and the pace of innovation will only accelerate. I’m not saying you need to stay ahead of it; the goal is to keep pace and not fall behind. We can help you do that!
-Mike Triolo, General Manager – Eastern US
The jump to the cloud can be a scary proposition. For an enterprise with systems deeply embedded in traditional infrastructure like back-office computer rooms and datacenters, the move to the cloud can be daunting. The thought of having all of your data in someone else’s hands can make some IT admins cringe. However, once you start looking into cloud technologies, you start seeing some of the great benefits, especially with providers like Amazon Web Services (AWS). The cloud can be cost-effective, elastic and scalable, flexible, and secure. That same IT admin cringing at the thought of their data in someone else’s hands may finally realize that AWS is a bit more secure than a computer rack sitting under an employee’s desk in a remote office. Once the decision is finally made to “try out” the cloud, the planning phase can begin.
Most of the time the biggest question is, “How do we start with the cloud?” The answer is to use a phased approach. By picking applications and workloads that are less mission critical, you can try the newest cloud technologies with less risk. When deciding which workloads to move, ask yourself the following questions: Is there a business need for moving this workload to the cloud? Is the technology a natural fit for the cloud? What impact will this have on the business? If all those questions are suitably answered, your workloads will be successful in the cloud.
One great place to start is with archiving and backups. These types of workloads are important, but the data you’re dealing with is likely just a copy of data you already have, so it is considerably less risky. The easiest way to start with archives and backups is to try out S3 and Glacier. Many of the backup utilities you may already be using, like Symantec NetBackup and Veeam Backup & Replication, have cloud versions that can back up directly to AWS. This allows you to start using the cloud without changing much of your embedded backup processes. By moving less critical workloads you are taking the first steps in increasing your cloud footprint.
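To make the archiving step concrete, here is a minimal sketch of an S3 lifecycle configuration that ages backup objects into Glacier automatically. The bucket name and prefix are hypothetical placeholders; the day counts are illustrative, not a recommendation.

```python
import json

# Hypothetical bucket name -- substitute your own.
BACKUP_BUCKET = "example-corp-backups"

# S3 lifecycle configuration that transitions objects under "backups/"
# to Glacier after 30 days and expires them after one year. This same
# structure is what boto3's put_bucket_lifecycle_configuration() accepts.
lifecycle_config = {
    "Rules": [
        {
            "ID": "archive-backups-to-glacier",
            "Filter": {"Prefix": "backups/"},
            "Status": "Enabled",
            "Transitions": [
                {"Days": 30, "StorageClass": "GLACIER"},
            ],
            "Expiration": {"Days": 365},
        }
    ]
}

# To apply it (requires AWS credentials and boto3):
# import boto3
# boto3.client("s3").put_bucket_lifecycle_configuration(
#     Bucket=BACKUP_BUCKET, LifecycleConfiguration=lifecycle_config)

print(json.dumps(lifecycle_config, indent=2))
```

With a rule like this in place, backup software only needs to write to S3; the transition to Glacier happens on its own.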
Now that you have moved your backups to AWS using S3 and Glacier, what’s next? The next logical step is to try some of the other services AWS offers. Another workload that can often be moved to the cloud is Disaster Recovery. DR is an area that will let you use more AWS services, like VPC, EC2, EBS, RDS, Route53 and ELBs. DR is a perfect way to increase your cloud footprint because it allows you to reconstruct your current environment, which you should already be very familiar with, in the cloud. A Pilot Light DR solution is one type of DR solution commonly seen in AWS. In the Pilot Light scenario, the DR site has minimal systems and resources, with the core elements already configured to enable rapid recovery once a disaster happens. To build a Pilot Light DR solution, you would create the AWS network infrastructure (VPC), deploy the core AWS building blocks needed for the minimal Pilot Light configuration (EC2, EBS, RDS, and ELBs), and determine the process for recovery (Route53). When it is time for recovery, all the other components can be quickly provisioned to give you a fully working environment. By moving DR to the cloud you’ve increased your cloud footprint even more and are on your way to cloud domination!
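The Route53 piece of that recovery process can be sketched as a DNS change batch that repoints the application at the DR site’s load balancer. The hosted zone ID, record name, and ELB DNS name below are all hypothetical placeholders.

```python
# Hypothetical identifiers -- substitute your own zone and ELB.
HOSTED_ZONE_ID = "Z1EXAMPLE"
DR_ELB_DNS = "dr-web-123456.us-west-2.elb.amazonaws.com"

# Route53 change batch that fails the application's DNS record over to
# the Pilot Light DR environment's load balancer. This structure is what
# boto3's change_resource_record_sets() accepts.
change_batch = {
    "Comment": "Fail over to Pilot Light DR environment",
    "Changes": [
        {
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "www.example.com.",
                "Type": "CNAME",
                "TTL": 60,  # short TTL so the failover propagates quickly
                "ResourceRecords": [{"Value": DR_ELB_DNS}],
            },
        }
    ],
}

# To execute the failover (requires AWS credentials and boto3):
# import boto3
# boto3.client("route53").change_resource_record_sets(
#     HostedZoneId=HOSTED_ZONE_ID, ChangeBatch=change_batch)
```

Keeping the TTL short on the production record is what lets a Pilot Light recovery take effect in minutes rather than hours.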
The next logical step is to move Test and Dev environments into the cloud. Here you can get creative with the way you use the AWS technologies. When building systems on AWS, make sure to follow the Architecting Best Practices: design for failure and nothing will fail, decouple your components, take advantage of elasticity, build security into every layer, think parallel, and don’t fear constraints! Start with a proof of concept (POC) in the development environment, and use AWS reference architectures to aid in the learning and planning process. Next, deploy your legacy application in the new environment and migrate your data. The POC is not complete until you validate that it works and that performance meets your expectations. Once you get to this point, you can reevaluate the build and optimize it to the exact specifications needed. Finally, you’re one step closer to deploying actual production workloads to the cloud!
Production workloads are obviously the most important, but with the phased approach you’ve taken to increase your cloud footprint, it’s not that far of a jump from the other workloads you now have running in AWS. Some of the important things to remember to be successful with AWS include being aware of the rapid pace of the technology (this includes improved services and price drops), that security is your responsibility as well as Amazon’s, and that there isn’t a one-size-fits-all solution. Lastly, all workloads you implement in the cloud should still have stringent security and comprehensive monitoring as you would on any of your on-premises systems.
Overall, a phased approach is a great way to start using AWS. Start with simple services and traditional workloads that are a natural fit for AWS (e.g. backups and archiving). Next, start to explore other AWS services by building out environments that are familiar to you (e.g. DR). Finally, experiment with POCs and the entire gamut of AWS to benefit from more efficient production operations. Like many new technologies, adoption takes time. By increasing your cloud footprint over time you can set expectations for cloud technologies in your enterprise and make it a more comfortable proposition for all.
-Derek Baltazar, Senior Cloud Engineer
To solve this problem, AWS offers a service called Identity and Access Management (IAM). IAM is an AWS feature that allows you to regulate use of and access to AWS resources. With IAM you can create and manage users and groups for access to your AWS environment, and assign permissions to those users and groups to allow or deny access. You can give users access keys, passwords, and even Multi-Factor Authentication devices to access your AWS environment. IAM also allows you to manage access with federated users, a way to configure access using credentials that expire and are manageable through traditional corporate directories like Microsoft Active Directory.
With IAM you can set permissions based on AWS-provided policy templates like “Administrator Access”, which allows full access to all AWS resources and services; “Power User Access”, which provides full access to all AWS resources and services but does not allow management of users and groups; or even “Read Only Access”. These policies can be applied to users and groups. Some policy templates limit users to certain services, like “Amazon EC2 Full Access” and “Amazon EC2 Read Only Access”, which give a user full access and read-only access to EC2 via the AWS Management Console, respectively.
IAM also allows you to write your own policies to manage permissions. Say you want an employee to be able to just start and stop instances; you can use the IAM Policy Generator to create a custom policy that does exactly that. You would select the effect (Allow or Deny), the specific service, and the action. IAM also gives you the ability to layer permissions by adding additional statements to the policy.
Once you create a policy you can apply it to any user or group and it automatically takes effect. When something changes in the organization, like an employee leaving, AWS IAM simplifies management of access and identity by allowing you to just delete the user or policy attached to that user. If an employee moves from one group to another it is easy to reassign the user to a different group with the appropriate access level. As you can see the variety of policy rules is extensive, allowing you to create very fine grained permissions around your AWS resources and services.
Another great thing about IAM is that it’s a free service that comes with every AWS account, yet it is surprising how many people overlook this powerful tool. It is highly recommended to always use IAM with any AWS account. It gives you an organized way to manage users and access to your AWS account and simplifies the management nightmare of maintaining access controls as the environment grows.
Senior Cloud Engineer
There are four main reasons why companies are moving to the cloud: agility, availability, cost and security. When I met with the CIO of a prominent movie studio in LA earlier this week, he said, “The primary area that we need to understand is security. Our CEO does not want any critical information leaving or being stored offsite.” While the CEO’s concern is valid, cloud providers like Amazon Web Services (AWS) are taking extraordinary measures to ensure both privacy and security on their platform. Below is an overview of the measures taken by AWS.
- Accreditations and Certifications – AWS has created a compliance program to help customers understand the substantial practices in place for both data protection and security to meet government and industry requirements. Examples include PCI DSS Level 1, ITAR, HIPAA, and MPAA.
- Data Protection and Privacy – AWS adheres to strict data protection and privacy standards and regulations, including FISMA, Sarbanes-Oxley, etc. AWS datacenter employees are given limited access to the location of customer systems on an as-needed basis. Disks are also shredded and never re-used by another customer.
- Physical Security – Infrastructure is located in nondescript AWS-controlled datacenters. The location of and access into each datacenter is limited to employees with legitimate business reasons (access is revoked when the business reason ends). Physical access is strictly controlled at both the perimeter and building ingress points.
- Secure Services – AWS infrastructure services are designed and managed in accordance with security best practices, as well as multiple security compliance standards. Infrastructure services contain multiple capabilities that restrict unauthorized access or usage without sacrificing the flexibility that customers demand.
- Shared Responsibility – A shared responsibility exists for compliance and security on the AWS cloud. AWS owns facilities, infrastructure (compute, network and storage), physical security and the virtualization layer. The customer owns applications, firewalls, network configuration, operating system and security groups.
The AWS cloud provides customers with end-to-end privacy and security via its collaboration with validated experts like NASA, industry best practices and its own experience building and managing global datacenters. AWS documents how to leverage these capabilities for customers. To illustrate: I recently met with a VP of Infrastructure for a $1B+ SaaS company in San Francisco. He said, “We are moving more workloads to AWS because it is so secure.” The people, process and technology are in place to achieve the highest level of physical and virtual privacy and security.
-Josh Lowry, General Manager-West
There have been numerous articles, blogs, and whitepapers about the security of the Cloud as a business solution. Amazon Web Services has a site devoted to extolling its security virtues, and several sites devote themselves entirely to the ins and outs of AWS security. So rather than try to tell you about each and every security feature of AWS and convince you how secure the environment can be, my goal is to share a real-world example of how security can be improved by moving from on-premises datacenters to AWS.
Many AWS implementations are used for hosting web applications, most of which are Internet accessible. Obviously, if your environment is for internal use only, you can lock down security even further, but for the purposes of this exercise we’re assuming Internet-facing web applications. The inherent risk with any Internet-accessible application, of course, is that the Internet gives hackers and malicious users access to your environment as well, along with honest users whose machines are infected with malware, viruses, or Trojans.
As with on-premises and colocation-based web farms, AWS follows the standard security practice of isolating customers from one another, so that if one customer experiences a security breach, all other customers remain secure. And of course, AWS Security Groups function like traditional firewalls, allowing traffic only through permitted ports to/from specific destinations and sources. AWS moves ahead of traditional datacenters, starting with Security Groups and Network ACLs, by offering more flexibility to respond to attacks. Consider the case of a web farm with components suspected of being compromised: AWS Security Groups can be created in seconds to isolate the suspected components from the rest of the network. In a traditional datacenter environment, isolating those components may require complex network changes to move them onto separate networks and prevent the infection from spreading, something AWS blocks by default.
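The isolation move described above can be sketched with boto3: create an empty “quarantine” Security Group and swap it onto the suspect instance. The VPC and instance IDs are hypothetical, and the API calls are commented out since they require AWS credentials.

```python
# Parameters for a "quarantine" Security Group. A new VPC security group
# has no inbound rules by default, so attaching it alone cuts the
# instance off from inbound traffic. IDs below are hypothetical.
quarantine_sg_params = {
    "GroupName": "quarantine",
    "Description": "No inbound rules; isolates suspect instances",
    "VpcId": "vpc-12345678",  # hypothetical VPC ID
}

# To perform the isolation (requires AWS credentials and boto3):
# import boto3
# ec2 = boto3.client("ec2")
# sg = ec2.create_security_group(**quarantine_sg_params)
# ec2.modify_instance_attribute(
#     InstanceId="i-0abc1234",        # hypothetical suspect instance
#     Groups=[sg["GroupId"]])          # replace its groups with quarantine

print(quarantine_sg_params["GroupName"])
```

The whole operation is two API calls, versus re-cabling or re-VLANing hardware in a traditional datacenter.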
AWS often talks about scalability – the ability to grow and shrink the environment to meet demand. That capability extends to the security features as well! Need another firewall? Just add another Security Group; there’s no need to install another device. Adding another subnet, VPN, or firewall – all of these things can be done in minutes with no action from on-premises staff required. No more waiting while network cables are moved, hardware is installed, or devices are physically reconfigured when you need security updates.
Finally, no matter how secure an environment is, no security plan is complete without a remediation plan. AWS has tools that provide remediation with little to no downtime. Part of standard practice for AWS environments is to take regular snapshots of EC2 instances (servers). These snapshots can be used to re-create a compromised or non-functional component in minutes, rather than going through the lengthy restore process for a traditional server. Additionally, 2nd Watch recommends taking an initial image of each component so that in the event of a failure, there is a fallback point to a known good configuration.
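A small sketch of that snapshot practice: label each scheduled EBS snapshot with its source volume and a timestamp, so a known-good restore point is easy to find during remediation. The volume ID is a hypothetical placeholder, and the actual API call is commented out since it requires AWS credentials.

```python
from datetime import datetime, timezone

# Hypothetical EBS volume backing the component being protected.
VOLUME_ID = "vol-0abc1234"

def snapshot_description(volume_id, when=None):
    """Build a descriptive label for a scheduled EBS snapshot."""
    when = when or datetime.now(timezone.utc)
    return f"backup {volume_id} {when:%Y-%m-%d %H:%M}"

# To actually take the snapshot (requires AWS credentials and boto3):
# import boto3
# boto3.client("ec2").create_snapshot(
#     VolumeId=VOLUME_ID,
#     Description=snapshot_description(VOLUME_ID))

print(snapshot_description(VOLUME_ID))
```

Restoring is then a matter of creating a new volume from the chosen snapshot and attaching it to a fresh instance, minutes instead of a tape restore.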
So how secure is secure? With the ability to respond faster, scale as necessary, and recover in minutes, the Amazon Cloud is pretty darn secure! And of course, this is only the tip of the iceberg for AWS Cloud Security; more to follow for the rest of December here on our blog, and please check out Amazon’s Security Center and whitepapers.
-Keith Homewood, Cloud Architect
Amazon Web Services™ (AWS) released a new service at re:Invent a few weeks ago that will have operations and security managers smiling. CloudTrail is a web service that records AWS API calls and stores the logs in S3. This gives organizations the visibility into their AWS infrastructure that they need to maintain proper governance of changes to their environment.
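To show what that visibility looks like, here is a minimal sketch that parses the kind of record CloudTrail delivers to S3. The record below is a trimmed, fabricated example for illustration; real log files contain a list of such records under a "Records" key.

```python
import json

# A trimmed, illustrative example of a CloudTrail log file's contents.
sample_log = json.dumps({
    "Records": [
        {
            "eventTime": "2013-12-01T12:00:00Z",
            "eventName": "StopInstances",
            "eventSource": "ec2.amazonaws.com",
            "userIdentity": {"type": "IAMUser", "userName": "alice"},
            "awsRegion": "us-east-1",
        }
    ]
})

# Summarize who did what, and when -- the raw material for governance
# and change auditing.
for record in json.loads(sample_log)["Records"]:
    who = record["userIdentity"].get("userName", "unknown")
    print(f'{record["eventTime"]} {who} {record["eventName"]} '
          f'({record["eventSource"]})')
```

In practice these files land in an S3 bucket you designate, so the same loop can run over objects fetched from that bucket.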
2nd Watch was pleased to announce support for CloudTrail in the launch of our 2W Atlas product. 2W Atlas organizes and visualizes AWS resources and output data. Enterprise organizations need tools and services built for the cloud to properly manage these new architectures, and 2W Atlas gives their divisions and business units a way to organize and manage CloudTrail data for each individual group.
2nd Watch is committed to providing enterprise organizations with the expertise and tools to make the cloud work for them. The tight integration we have developed between CloudTrail and Atlas is further proof of our expertise in delivering the enterprise solutions our customers demand.
To learn more about 2W Atlas or CloudTrail, Contact Us and let us know how we can help.
-Matt Whitney, Sales Executive