
Understanding the AWS Security Model and Services

Protecting and monitoring networks, applications and data is straightforward once you know and use the right tools

Security is a stifling fear for organizations considering public clouds, one frequently stoked by IT vendors with a vested interest in selling enterprise hardware and software, who use security as a catalyst for broader FUD about cloud services. The fears and misconceptions about cloud security are rooted in unfamiliarity and conjecture. A survey of IT pros with actual cloud experience found the level of security incidents in the public cloud to be quite similar to on-premise results: when asked to compare public cloud and on-premise security, the gap between those saying the risks are significantly lower and those saying they are significantly higher was a mere one percent. Cloud infrastructure is probably more secure than the typical enterprise data center, but cloud users can easily create application vulnerabilities if they don't understand the available security services and adapt existing processes to the cloud environment.

[Figure: Cloud security survey - internal vs. public cloud]

Whatever the cause, the data shows that cloud security remains an issue with IT executives. For example, a survey of security professionals found that almost half are very concerned about public cloud security, while a 2014 KPMG survey of global business executives found that security and data privacy are the most important capabilities when evaluating a cloud service and that the most significant cloud implementation challenges center on the risks of data loss, privacy intrusions and intellectual property theft.

[Figure: KPMG 2014 cloud survey - most important capabilities when evaluating a cloud service]

[Figure: KPMG 2014 cloud survey - cloud implementation challenges]

Unfortunately, such surveys are fraught with problems since they ask for a subjective, comparative evaluation of two very different security models: one (on-premise) that IT pros have years of experience implementing, managing and refining, and another (public cloud) that is relatively new to enterprise IT, particularly as a production platform, and thus often not well implemented. The 'problem' with public cloud security isn't that it's worse; it's arguably better. Rather, the problem is that cloud security is different. Public cloud services necessarily use an unfamiliar and more granular security design that accommodates multi-tenant services with many users, from various organizations, mixing and matching services tailored to each one's specific needs.

AWS Security Model

AWS designs cloud security using a shared security model that splits security responsibilities, processes and technical implementation between the service provider, i.e. AWS, and the customer, namely enterprise IT. In the cloud, IT relinquishes control over low-level infrastructure, such as data center networks, compute, storage and database implementation, and infrastructure management, to the cloud provider. The customer retains control over the abstracted services provided by AWS, along with the operating systems, virtual networks, storage containers (object buckets, block stores), applications, data and transactions built upon those services, as well as user and administrator access to those services.

[Figure: AWS shared security model]

The first step to cloud security is mentally relinquishing control: internalizing the fact that AWS (or your IaaS of choice) owns the low-level infrastructure and is responsible for securing it, and that, given its scale and resources, it is most likely doing a better job of that than most enterprise IT organizations. Next, AWS users must understand the various security control points they do have. AWS breaks these down into five categories:

  • Network security: virtual firewalls, network link encryption and VPNs used to build a virtual private cloud (VPC).
  • Inventory and configuration: comprehensive view of AWS resources under use, a catalog of standard configuration templates and machine images (AMIs) and tools for workload deployment and decommissioning.
  • Data encryption: security for stored objects and databases and associated encryption key management.
  • Access control: user identity management (IAM), groups and policies for service access and authentication options including multifactor using one-time passwords.
  • Monitoring and logging: tools like CloudWatch and CloudTrail for tracking service access and use, with the ability to aggregate data from all available services into a single pool that feeds comprehensive usage reports, facilitates post-incident forensic analysis and provides real-time application performance alerts (SNS).
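To make the first of these control points concrete, here is a minimal sketch in Python with boto3 that creates the kind of virtual firewall rule described above: a security group in an existing VPC that admits only inbound HTTPS. The region, group name and VPC ID are placeholders for illustration, not values from the original post.

import boto3

# Minimal network-security sketch: a security group that only allows inbound HTTPS.
# The region, group name and VPC ID below are illustrative placeholders.
ec2 = boto3.client("ec2", region_name="us-east-1")

sg = ec2.create_security_group(
    GroupName="web-tier-https-only",            # hypothetical group name
    Description="Allow inbound HTTPS only",
    VpcId="vpc-0123456789abcdef0",              # placeholder VPC ID
)

ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "HTTPS from anywhere"}],
    }],
)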

Using CloudTrail Activity Logs

Organizations should apply existing IT security policies in each area by focusing first on the objectives, the policy goals and requirements, then mapping these to the available AWS services to create control points in the cloud. For example, comprehensive records of user access and service usage are critical to ensuring policy adherence, identifying security gaps and performing post hoc incident analysis. CloudTrail fills this need, acting as something of a stenographer that records all AWS API calls, for every major service, whether accessed programmatically or via the CLI, along with use of the management console. CloudTrail records are written in JSON format to facilitate extraction, filtering and post-processing, including with third-party log analysis tools like Alert Logic, Loggly and Splunk.
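As a rough illustration of how accessible those JSON records are, the following Python sketch pulls a single CloudTrail log file from its S3 delivery bucket and prints the console logins it contains. The bucket name and object key are assumptions for the example; real keys follow the AWSLogs/<account-id>/CloudTrail/<region>/... layout.

import gzip
import json

import boto3

# Fetch one gzipped CloudTrail log file and filter its records.
# Bucket and key are placeholders, not values from the post.
s3 = boto3.client("s3")
obj = s3.get_object(
    Bucket="example-cloudtrail-bucket",
    Key="AWSLogs/111122223333/CloudTrail/us-east-1/2015/03/01/example.json.gz",
)

records = json.loads(gzip.decompress(obj["Body"].read()).decode("utf-8"))["Records"]

# Print who logged into the management console, when, and from where.
for record in records:
    if record.get("eventName") == "ConsoleLogin":
        print(record["eventTime"],
              record["userIdentity"].get("userName", "unknown"),
              record.get("sourceIPAddress"))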

CloudTrail so thoroughly monitors AWS usage that it not only logs changes to other services, but to itself. It records access to logs themselves and can trigger alerts when logs are created or don’t follow established configuration guidelines. For security pros, CloudTrail data is invaluable when used to build reports about abnormal user or application behavior and to detail activity around the time of a particular suspicious event.

The key to AWS security is understanding the division of responsibilities, the cloud control points and available tools. Mastering these can allow cloud-savvy organizations to build security processes that exceed those in many on-site data centers.

-2nd Watch Blog by Kurt Marko


Managing Your Amazon Cloud Deployment to Save Money

Wired Innovation Insights published a blog article written by our own Chris Nolan yesterday. Chris discusses ways you can save money on your AWS cloud deployment in "How to Manage Your Amazon Cloud Deployment to Save Money." Chris' top tips include:

  1. Use CloudFormation or another configuration and orchestration tool.
  2. Watch out for cloud sprawl.
  3. Use AWS auto scaling.
  4. Turn the lights off when you leave the room (see the sketch after this list).
  5. Use tools to monitor spend.
  6. Build in redundancy.
  7. Planning saves money.
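As one concrete, hedged take on tip 4, the sketch below uses Python and boto3 to stop any running instances tagged env=dev. The tag key/value and region are assumptions for the example, and you would typically run something like this from a nightly scheduled job.

import boto3

# "Turn the lights off": stop running instances tagged env=dev after hours.
# The tag convention and region are illustrative assumptions.
ec2 = boto3.client("ec2", region_name="us-east-1")

reservations = ec2.describe_instances(
    Filters=[
        {"Name": "tag:env", "Values": ["dev"]},
        {"Name": "instance-state-name", "Values": ["running"]},
    ]
)["Reservations"]

instance_ids = [i["InstanceId"] for r in reservations for i in r["Instances"]]
if instance_ids:
    ec2.stop_instances(InstanceIds=instance_ids)
    print("Stopped:", instance_ids)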

Read the Full Article


Cloud Myth Busters

Yes, I know, everyone is tired of hearing about the Cloud. It seems like talk about the cloud happens all day, every day, and you know it's hit the mainstream when your mom asks you about it. The reality is that we're still so early in "The Journey" (yes, we call it that because it truly is one) that it can be impossible to cut through the tremendous amount of noise that exists around the topic. Let's spend a few precious moments identifying the cloud myths that are swirling about and try to bust a few of them.

Legacy – I have too much invested in my legacy systems, tools and processes, which makes moving to the cloud too hard or just not worth it.

That’s partly true. Many companies have a lot of legacy systems and infrastructure out there. So much so that it clouds (no pun intended) their view on what’s possible. It’s like quicksand; the more time and money invested in legacy systems and architectures, the deeper and deeper you get, and it just seems impossible to get out. There is a way out though, and the first step forward is actually to take a step back and understand where you are today. From there, we’d suggest taking stock of what’s in your environment and seeing what’s ready to move to the cloud.

Security – It’s not secure. I’ll be sharing my data with everyone else.

That's absolutely not true. The public cloud is extremely secure. These environments have been built to adhere to the most stringent security standards on the planet. Cloud providers take an in-depth approach, going above and beyond to ensure that security permeates the entire environment.

http://aws.amazon.com/security/

Agility – What am I really gaining? There can’t really be as much benefit as people are saying.

When we talk to any business person, lack of agility is typically their number one challenge. Traditional legacy, or even co-location, infrastructure is designed and built in a way that doesn't allow for the flexibility companies need in a constantly changing world. The need to continually evolve and the ability to "fail fast" are so important to businesses today, and the cloud enables you to do just that. You can literally create a global infrastructure in a matter of minutes that runs only when you need it. The benefits are dizzying.

Cost – I hear that it will actually cost me more to run in the cloud.

There are tremendous economies of scale to be gained by building out the massive footprint that the existing public cloud providers have built. It has given them such a head start that it's downright unbelievable what you can do today at a fraction of the cost of doing it in a traditional IT world. There are a number of TCO calculators out there that will show you the cost of running infrastructure on-prem vs. in the cloud. Take a look at the calculator we built for AWS and see for yourself by plugging in your own numbers.

Best of Breed – I can use any cloud provider. They’re all the same.

There is an entire body of knowledge dedicated to the cloud landscape, how mature each company's offerings are and where they fit overall. I am a firm believer that you should build your company to be as agile as possible, trying to eliminate brittle and hard linkages. Please check out the following link for an independent analyst's view of today's cloud landscape.

See what Gartner is Saying about the Cloud

Org Structure – I can use cloud as I see fit and keep things the way they’ve normally been internally.

True innovation is happening here. The industry is attracting the absolute best and brightest talent, and the pace of innovation will only accelerate. I'm not saying you need to stay ahead of it. The goal is to keep pace and not fall behind. We can help you do that!

-Mike Triolo, General Manager – Eastern US

 


Increasing Your Cloud Footprint

The jump to the cloud can be a scary proposition. For an enterprise with systems deeply embedded in traditional infrastructure like back-office computer rooms and datacenters, the move to the cloud can be daunting. The thought of having all of your data in someone else's hands can make some IT admins cringe. However, once you start looking into cloud technologies you start seeing some of the great benefits, especially with providers like Amazon Web Services (AWS). The cloud can be cost-effective, elastic, scalable, flexible and secure. That same IT admin cringing at the thought of their data in someone else's hands may finally realize that AWS is a bit more secure than a computer rack sitting under an employee's desk in a remote office. Once the decision is finally made to "try out" the cloud, the planning phase can begin.

Most of the time the biggest question is, "How do we start with the cloud?" The answer is to use a phased approach. By picking applications and workloads that are less mission critical, you can try the newest cloud technologies with less risk. When deciding which workloads to move, ask yourself the following questions: Is there a business need for moving this workload to the cloud? Is the technology a natural fit for the cloud? What impact will this have on the business? If all of those questions have satisfactory answers, the workload is a good candidate to succeed in the cloud.

One great place to start is with archiving and backups. These types of workloads are important, but the data you're dealing with is likely just a copy of data you already have, so it is considerably less risky. The easiest way to start with archives and backups is to try out S3 and Glacier. Many of the backup utilities you may already be using, like Symantec NetBackup and Veeam Backup & Replication, have cloud editions that can back up directly to AWS. This allows you to start using the cloud without changing much of your established backup process. By moving less critical workloads you are taking the first steps in increasing your cloud footprint.
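To give a feel for how little code that first step can take, here is a minimal sketch, assuming a bucket name and prefix of your own choosing, that copies a backup archive to S3 and adds a lifecycle rule transitioning it to Glacier after 30 days. The names and the 30-day window are illustrative, not recommendations.

import boto3

# Upload a backup archive to S3, then archive it to Glacier via a lifecycle rule.
# Bucket name, key prefix and the 30-day transition are placeholder choices.
s3 = boto3.client("s3")

s3.upload_file("nightly-backup.tar.gz",
               "example-backup-bucket",
               "backups/nightly-backup.tar.gz")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-backup-bucket",
    LifecycleConfiguration={"Rules": [{
        "ID": "archive-backups-to-glacier",
        "Filter": {"Prefix": "backups/"},
        "Status": "Enabled",
        "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
    }]},
)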

Now that you have moved your backups to AWS using S3 and Glacier, what's next? The next logical step is to try some of the other services AWS offers. Another workload that can often be moved to the cloud is Disaster Recovery. DR is an area that will let you use more AWS services, like VPC, EC2, EBS, RDS, Route53 and ELBs. DR is a perfect way to increase your cloud footprint because it allows you to reconstruct your current environment, which you should already be very familiar with, in the cloud. A Pilot Light DR solution is one type of DR solution commonly seen in AWS. In the Pilot Light scenario the DR site has minimal systems and resources, with the core elements already configured to enable rapid recovery once a disaster happens. To build a Pilot Light DR solution you would create the AWS network infrastructure (VPC), deploy the core AWS building blocks needed for the minimal Pilot Light configuration (EC2, EBS, RDS and ELBs), and determine the process for recovery (Route53). When it is time for recovery, all the other components can be quickly provisioned to give you a fully working environment. By moving DR to the cloud you've increased your cloud footprint even more and are on your way to cloud domination!
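To give a rough sense of what "quickly provisioned" can look like, here is a hedged Pilot Light recovery sketch in Python: it starts a pre-built standby instance and repoints a Route53 record at the DR endpoint. The instance ID, hosted zone ID, record name and DR hostname are all placeholders for illustration.

import boto3

# Pilot Light recovery sketch: start the standby core server, then repoint DNS.
# All IDs and hostnames below are illustrative placeholders.
ec2 = boto3.client("ec2", region_name="us-west-2")
route53 = boto3.client("route53")

standby = ["i-0abc1234def567890"]                 # pre-configured core instance
ec2.start_instances(InstanceIds=standby)
ec2.get_waiter("instance_running").wait(InstanceIds=standby)

route53.change_resource_record_sets(
    HostedZoneId="Z1EXAMPLE12345",                # placeholder hosted zone
    ChangeBatch={"Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "app.example.com.",
            "Type": "CNAME",
            "TTL": 60,
            "ResourceRecords": [{"Value": "dr-elb.us-west-2.elb.amazonaws.com"}],
        },
    }]},
)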

The next logical step is to move Test and Dev environments into the cloud. Here you can get creative with the way you use AWS technologies. When building systems on AWS, make sure to follow the Architecting Best Practices: designing for failure means nothing will fail, decouple your components, take advantage of elasticity, build security into every layer, think parallel, and don't fear constraints! Start with a proof of concept (POC) in the development environment, and use AWS reference architectures to aid in the learning and planning process. Next, deploy your legacy application in the new environment and migrate the data. The POC is not complete until you validate that it works and that performance meets your expectations. Once you get to this point, you can reevaluate the build and optimize it to the exact specifications needed. Finally, you're one step closer to deploying actual production workloads to the cloud!

Production workloads are obviously the most important, but with the phased approach you've taken to increase your cloud footprint, it's not that far of a jump from the other workloads you now have running in AWS. Some of the important things to remember in order to be successful with AWS include staying aware of the rapid pace of the technology (including improved services and price drops), remembering that security is your responsibility as well as Amazon's, and recognizing that there isn't a one-size-fits-all solution. Lastly, all workloads you implement in the cloud should still have the same stringent security and comprehensive monitoring you would apply to any of your on-premises systems.

Overall, a phased approach is a great way to start using AWS. Start with simple services and traditional workloads that are a natural fit for AWS (e.g. backups and archiving). Next, start to explore other AWS services by building out environments that are familiar to you (e.g. DR). Finally, experiment with POCs across the whole gamut of AWS to enable more efficient production operations. Like any new technology, adoption takes time. By increasing your cloud footprint over time you can set expectations for cloud technologies in your enterprise and make the cloud a more comfortable proposition for all.

-Derek Baltazar, Senior Cloud Engineer


AWS Identity and Access Management (IAM)

Dealing with organizational change is a challenge in today's fast-paced business environment. Long gone are the days when employees stayed with companies until retirement. The mindset of many employees is to move between companies for a promotion, a better salary or new, challenging opportunities. Managing organizational change in terms of user access is becoming more and more complex due to the changing technology landscape. With systems accessible over the network, IT shops can't just deny ex-employees physical access to the building; they have to cut off their network credentials as well. With the proliferation of cloud technologies this becomes even more of a challenge because your digital assets are accessible over the internet from anywhere in the world. In many technology-centric companies, managing login credentials and access is paramount to securing the assets of the business and coping with organizational change.

To solve this problem AWS has a service called Identity and Access Management (IAM). IAM is an AWS feature that allows you to regulate use of, and access to, AWS resources. With IAM you can create and manage users and groups for access to your AWS environment. IAM also gives you the ability to assign permissions to those users and groups to allow or deny access. With IAM you can assign users access keys, passwords and even Multi-Factor Authentication devices to access your AWS environment. IAM even allows you to manage access for federated users, a way to grant access using credentials that expire and that are managed through traditional corporate directories like Microsoft Active Directory.

With IAM you can set permissions based on AWS-provided policy templates like "Administrator Access", which allows full access to all AWS resources and services; "Power User Access", which provides full access to all AWS resources and services but does not allow management of users and groups; or even "Read Only Access". These policies can be applied to users and groups. Some of the provided policy templates limit users to certain services, like "Amazon EC2 Full Access" or "Amazon EC2 Read Only Access", which give a user full access or read-only access to EC2 via the AWS Management Console, respectively.
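A minimal sketch of that template-driven approach, assuming made-up user and group names, might look like the following in Python with boto3: create a group, attach the AWS-managed EC2 read-only policy, and drop a new user into the group.

import boto3

# Group-based, template-style permissions: the user and group names are invented
# for illustration; the policy ARN is the AWS-managed EC2 read-only policy.
iam = boto3.client("iam")

iam.create_group(GroupName="ec2-read-only")
iam.attach_group_policy(
    GroupName="ec2-read-only",
    PolicyArn="arn:aws:iam::aws:policy/AmazonEC2ReadOnlyAccess",
)

iam.create_user(UserName="jdoe")
iam.add_user_to_group(GroupName="ec2-read-only", UserName="jdoe")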

[Screenshot: User Permissions]

IAM also allows you to write your own policies to manage permissions. Say you want an employee to be able to just start and stop instances; you can use the IAM Policy Generator to create a custom policy that does exactly that. You would select the effect (Allow or Deny), the specific service, and the action. IAM also gives you the ability to layer permissions on top of each other by adding additional statements to the policy.
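The same start/stop-only permission can also be expressed as a customer-managed policy. The sketch below, with an arbitrary policy name, shows roughly what the resulting policy document looks like when created with boto3 rather than the console wizard.

import json

import boto3

# Custom policy allowing only starting and stopping EC2 instances.
# The policy name is an arbitrary example.
iam = boto3.client("iam")

policy_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["ec2:StartInstances", "ec2:StopInstances"],
        "Resource": "*",
    }],
}

iam.create_policy(
    PolicyName="ec2-start-stop-only",
    PolicyDocument=json.dumps(policy_document),
)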

[Screenshot: Edit Permissions]

Once you create a policy you can apply it to any user or group, and it takes effect immediately. When something changes in the organization, like an employee leaving, AWS IAM simplifies management of access and identity by allowing you to simply delete the user or the policies attached to that user. If an employee moves from one group to another, it is easy to reassign the user to a different group with the appropriate access level. As you can see, the variety of policy rules is extensive, allowing you to create very fine-grained permissions around your AWS resources and services.
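For the departure scenario just described, a hedged offboarding sketch might look like this. It reuses the illustrative "jdoe" user name from the earlier example and assumes the user has no console login profile or MFA device left to remove.

import boto3

# Offboarding sketch: remove access keys, group memberships and attached
# policies, then delete the IAM user. "jdoe" is an illustrative user name.
iam = boto3.client("iam")
user = "jdoe"

for key in iam.list_access_keys(UserName=user)["AccessKeyMetadata"]:
    iam.delete_access_key(UserName=user, AccessKeyId=key["AccessKeyId"])

for group in iam.list_groups_for_user(UserName=user)["Groups"]:
    iam.remove_user_from_group(GroupName=group["GroupName"], UserName=user)

for policy in iam.list_attached_user_policies(UserName=user)["AttachedPolicies"]:
    iam.detach_user_policy(UserName=user, PolicyArn=policy["PolicyArn"])

iam.delete_user(UserName=user)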

Another great thing about IAM is that it's a free service that comes with every AWS account, yet it is surprising how many people overlook this powerful tool. It is highly recommended to always use IAM with any AWS account. It gives you an organized way to manage users and access to your AWS account and simplifies the management nightmare of maintaining access controls as the environment grows.

-Derek Baltazar, Senior Cloud Engineer


Cloud Security: AWS

There are four main reasons why companies are moving to the cloud: agility, availability, cost and security. When I met with the CIO of a prominent movie studio in LA earlier this week, he said, "The primary area that we need to understand is security. Our CEO does not want any critical information leaving or being stored offsite." While the CEO's concern is valid, cloud providers like Amazon Web Services (AWS) are taking extraordinary measures to ensure both privacy and security on their platform. Below is an overview of the measures taken by AWS.

  • Accreditations and Certifications – AWS has created a compliance program to help customers understand the substantial practices in place for both data protection and security to meet government and industry requirements, such as PCI DSS Level 1, ITAR, HIPAA and MPAA.
  • Data Protection and Privacy – AWS adheres to strict data protection and privacy standards and regulations, including FISMA, Sarbanes-Oxley, etc. AWS datacenter employees are given limited access to the location of customer systems on an as-needed basis. Disks are also shredded and never re-used by another customer.
  • Physical Security – Infrastructure is located in nondescript AWS-controlled datacenters. The location of and access into each datacenter is limited to employees with legitimate business reasons (access is revoked when the business reason ends). Physical access is strictly controlled at both the perimeter and building ingress points.
  • Secure Services – AWS infrastructure services are designed and managed in accordance with security best practices, as well as multiple security compliance standards. Infrastructure services contain multiple capabilities that restrict unauthorized access or usage without sacrificing the flexibility that customers demand.
  • Shared Responsibility – A shared responsibility exists for compliance and security on the AWS cloud. AWS owns facilities, infrastructure (compute, network and storage), physical security and the virtualization layer. The customer owns applications, firewalls, network configuration, operating system and security groups.

The AWS cloud provides customers with end-to-end privacy and security via its collaboration with validated experts like NASA, industry best practices and its own experience building and managing global datacenters. AWS also documents for customers how to leverage these capabilities. To illustrate: I recently met with a VP of Infrastructure for a $1B+ SaaS company in San Francisco who told me, "We are moving more workloads to AWS because it is so secure." The people, process and technology are in place to achieve the highest level of physical and virtual privacy and security.

-Josh Lowry, General Manager-West
