Compliance is a constant challenge today. Keeping our system images in a healthy and trusted state of compliance requires time and effort. There are countless tools and technologies on the market to help customers maintain compliance and state, so where do I start?
Amazon has built a rich set of core technologies into the AWS console. AWS Systems Manager is an operations management platform that can help you set up and maintain configuration and state management.
One of the first things we must focus on when we build out our core images in the cloud is the configuration of those images. What is the role of the image, what operating system am I going to utilize and what applications and/or core services do I need to enable, configure and maintain? In the datacenter, we call these Gold Images. The same applies in the cloud.
We define these roles for our images and we place them in different functional areas – Infrastructure, Web Services, Applications. We may have many core image templates for our enterprise workloads, by building these base images and maintaining them continuously – we set in motion a solid foundation for core security and core compliance of our cloud environment.
AWS Systems Manager looks across my cloud environment and allows me to bring together the key information about all my operating resources in one place. In the past, I would have had to look at my Amazon CloudWatch information in one area, my AWS CloudTrail information in another area and my configuration information in yet another area. Centralizing this information lets you see the holistic state of your cloud environment baselines in one console.
AWS Systems Manager provides built-in Insights and dashboards that allow you to look across your entire cloud environment and see into, and act upon, your cloud resources. It shows the configuration compliance of all your resources as well as the state management and associations across them. It provides a rich ability to customize configuration and state management for your workloads, applications and resource types, and to scan and analyze continuously to ensure those configurations and states are maintained. With AWS Systems Manager you can even create your own compliance types that map to your organization's business requirements. With that in place, I can constantly scan and analyze against these compliance baselines to maintain the desired operational configuration and state at all times.
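As a sketch of how this looks in practice, the AWS CLI exposes the compliance APIs described above. The instance ID and the `Custom:CorporateBaseline` compliance-type name below are placeholders for illustration:

```shell
# Roll up compliance status by compliance type (Patch, Association, Custom:*)
aws ssm list-compliance-summaries

# Report a custom compliance item against a managed instance
# (instance ID and compliance-type name are placeholders)
aws ssm put-compliance-items \
    --resource-id i-0123456789abcdef0 \
    --resource-type ManagedInstance \
    --compliance-type "Custom:CorporateBaseline" \
    --execution-summary ExecutionTime=2018-01-01T00:00:00Z \
    --items Id=baseline-001,Title=SELinuxEnforcing,Severity=CRITICAL,Status=COMPLIANT
```

A follow-up `aws ssm list-compliance-items --resource-ids i-0123456789abcdef0` would then show the custom item alongside the built-in patch and association compliance data.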
We can analyze and report on the current state and quickly determine, centrally, whether our cloud services and resources are in or out of compliance. We can create reports on our compliance position at any time and, with this knowledge, set remediation in motion to return our services and resources to a compliant state and configuration.
With AWS Systems Manager we can scan all resources for patch state, determine which patches are missing, and remediate manually, on a schedule, or automatically to maintain patch management compliance.
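A hedged example of that workflow with the AWS CLI (the tag value and instance ID are placeholders): run the built-in `AWS-RunPatchBaseline` document in `Scan` mode, then review each instance's patch state:

```shell
# Scan (but don't install) against the approved patch baseline
aws ssm send-command \
    --document-name "AWS-RunPatchBaseline" \
    --targets "Key=tag:PatchGroup,Values=WebServers" \
    --parameters "Operation=Scan"

# Review missing/installed patch counts per instance
aws ssm describe-instance-patch-states \
    --instance-ids i-0123456789abcdef0
```

Switching `Operation=Scan` to `Operation=Install` turns the same command into scheduled or automated remediation.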
AWS Systems Manager also integrates with Chef InSpec, allowing you to leverage InSpec profiles to operate in a continuous compliance framework for your cloud resources.
On the road to compliance it is important to leverage the tools and capabilities of your cloud provider. Amazon gives us a rich set of systems management capabilities across configuration, state management, patch management and remediation, as well as reporting. AWS Systems Manager is provided at no additional cost to AWS customers and will help you along your journey to realizing continuous compliance of your cloud environment across the AWS Cloud and the hybrid cloud. To learn more about using AWS Systems Manager or your systems’ compliance, contact us.
-Peter Meister, Sr Director of Product Management
There have been countless articles, blogs and whitepapers written on the subject of security in the cloud, and an even greater number of opinions as to the number of risks associated with such a move. Five, seven, ten, twenty-seven? How many risks are associated with you or your company’s move to the cloud? Well, in the best consultant-speak, it depends.
One could say that it depends on how far “up the stack” you’re moving. If, for instance, you are moving from an essentially stand-alone, self-administrated environment to a cloud-based presence, you most likely will be in for the security-based shock of your life. On the other hand, if you, in the corporate sense, are moving a large, multi-national corporation to the cloud, chances are you’ve already encountered many of the challenges, such as regional compliance and legal issues, which will also be present in your move to the cloud.
The differentiator? There are three: scale, complexity and speed. In the hundreds of clients we have helped migrate to the cloud, not once have we come across a security issue that was unique to the cloud. This is why the title of this article is “What are the Greater Risks of Cloud Computing?” and not “What are the Unique Risks of Cloud Computing?” There simply aren’t any. Let’s be clear – this isn’t to say any of these risks aren’t real. They simply aren’t unique, nor are they new. It is just a case of a new bandwagon (the cloud) with a new crew of sensationalists ready to jump on that bandwagon.
Let’s take a few of the most popularly-stated “risks of cloud computing” and see how this plays out.
Shared Infrastructure
This often makes the list as though it is a unique problem to the cloud. What about companies utilizing colos? And before that, what about companies using time-shared systems – can you say payroll systems? Didn’t they pre-date the cloud by some decades? While there might not have been hypervisors or shared applications back in the day, there just as surely could have been shared components at some level, possibly network components or monitoring.
Loss of Data/Data Breaches
In looking at some of the most widely touted data breaches – Target, Ashley Madison, Office of Personnel Management and Anthem to name just a few – the compromises were listed as “result of access to its network via an HVAC contractor monitoring store climate systems,” “unknown,” “contractor’s stolen credentials to plant a malware backdoor in the network,” and “possible watering hole attack that yielded a compromised administrator password.” Your first thought might be, “Do these hacks even involve the cloud?” It’s not clear where the data was stored in these instances, but that doesn’t stop articles from being written about the dangers of the cloud and including references to the instances. Conversely, there is an excellent article in Business Insurance on the very opposite viewpoint. Perhaps the cloud can be a bit safer than traditional environments for one very good reason – reputation. We have seen customers move to the cloud in order to modernize their security paradigm. The end result is a more secure environment in the cloud than they ever had on premise.
Account or Service Traffic Hijacking
Now we have a security issue that really makes use of the cloud in terms of scale and speed. Let’s clarify what we’re talking about here. This is the hacking of a cloud provider and actually taking over instances for the use of command and control for the purpose of using them as botnets. The hijacking of compute resources, whether they be personal computers, corporate or cloud resources, continues to this day.
Hacking a cloud provider follows the simple logic of robbing a bank vs. a taco stand in more ways than one. Where there’s increased reward, there’s increased risk, to turn an old saying around a bit. If you’re going to hit a lot of resources and make it worth your while, the cloud is the place to go. However, know that it’s going to be a lot harder and that a lot more eyes are going to be on you and looking for you. Interestingly, the most recent sightings of this type of activity seem to be from the 2009-2010 timeframe, as Amazon, Microsoft, Google and the other providers learned quickly from their mistakes.
If you were to continue down the list of other cloud security issues – malicious insiders, inadequate security controls, DDoS attacks, compromised credentials, and the list goes on – it becomes pretty evident that there simply aren’t any out there that are unique. We’ve seen them before in one context or another, but they just haven’t been as big an issue in our environment.
The next time you see an article on the dangers of the cloud, stop for a moment and think, “Is this truly a problem that has never been seen before or just one that I’ve never encountered or had to deal with before?”
-Scott Turvey, Solutions Architect
While some large enterprises avoid moving to the cloud because of rigid security and compliance requirements, SCOR opted for the cloud for a key block of its business precisely because of the cloud’s rigid security and compliance offerings.
SCOR is a leader in the life reinsurance market in the Americas, offering broad capabilities in risk management, capital management and value-added services and solutions. A number of primary insurers use SCOR’s automated life underwriting system, Velogica, to market life insurance policies that can be delivered at the point of sale. Other companies use Velogica as a triage tool for their fully underwritten business.
“Through the Velogica system, we get thousands of life insurance applications a day from multiple clients,” explains Dave Dorans, Senior Vice President. “Velogica is a significant part of our value proposition and is important to the future of our business.”
Data security has always been a priority for SCOR but the issue became even more critical as data breaches at some of the largest and most respected companies made headline news. SCOR decided to invest in a state of the art data security framework for Velogica. “We wanted clients to have full confidence in the way Velogica stores and handles the sensitive personal data of individuals,” Dorans said.
SCOR’s goal was to have Velogica accredited as a Service Organization Control (SOC) 2 organization – a competitive advantage in the marketplace – by aligning with one of the more respected information security standards in the industry. Determining what it would take to achieve that goal became the responsibility of Clarke Rodgers, Chief Information Security Officer with SCOR Velogica. “We quickly determined that SOC2 accreditation for SCOR’s traditional, on premise data center environment would be a monumental task, could cost millions of dollars and perhaps take years to complete. Moreover, while SOC2 made sense for Velogica, it wasn’t necessary for other SCOR businesses.”
Once it was determined that SOC2 was business critical for the company, Rodgers analyzed the different ways of obtaining the security and compliance measure and determined that moving to the cloud was the most efficient path. SCOR Velogica turned to 2nd Watch to help it achieve SOC2 accreditation with AWS, figuring it would be easier than making the journey on its own.
On working with 2nd Watch, Rodgers commented, “They came in and quickly understood our technical infrastructure and how to replicate it in AWS, which is a huge feat.” SCOR realized significant benefits thanks to the migration, including:
Adherence to specific security needs: In addition to its SOC2 accreditation, 2nd Watch also implemented several security elements in the new AWS environment including: encryption at rest in Amazon Elastic Block Store (EBS) volumes leveraging the AWS Key Management Service (KMS), Amazon Virtual Private Cloud (VPC) to establish a private network within AWS, security groups tuned for least privilege access, Security-Enhanced Linux, and AWS Identity and Access Management (IAM) Multi-Factor Authentication (MFA).
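As an illustrative sketch (not SCOR's actual configuration; the key alias, security group ID and CIDR range below are hypothetical), two of those elements map to simple AWS CLI calls:

```shell
# Encryption at rest: create a KMS-encrypted EBS volume
aws ec2 create-volume \
    --availability-zone us-east-1a \
    --size 100 \
    --volume-type gp2 \
    --encrypted \
    --kms-key-id alias/app-data-key

# Least-privilege security group rule: allow only HTTPS from a known range
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 443 \
    --cidr 203.0.113.0/24
```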
AWS optimization: 2nd Watch has helped SCOR identify opportunities for optimization and efficiencies on AWS, which will help down the road if the company wishes to expand the AWS-hosted application to regions outside of North America. “With our SOC2 Type 1 behind us, we are now focused on optimizing our resources in the AWS Cloud so we can fully exploit AWS’s capabilities to our security and business benefit.” Rodgers explains. “We will rely on 2nd Watch for guidance and assistance during this optimization phase.”
Cost savings on AWS: Rodgers hasn’t done a full analysis yet of cost savings from running the infrastructure on AWS, but he’s confident the migration will eventually cut up to 30% off the price of hosting and supporting Velogica internally.
Hear from SCOR how it achieved better security with AWS on our live webinar April 7. Register Now
There are several open source (aka free) tools that you can use to test the security of your applications and servers like a hacker would. One of the best is Kali Linux, a free tool that tests almost every layer of your environment (Application, Network, Host, Foundation).
About Kali Linux
Kali Linux was a creation of Offensive Security in an effort to achieve effective defensive security through an offensive mindset. Kali is supported not only by Offensive Security, but also a very impressive community of people who contribute content and software to the project. Kali is preinstalled with over 600 penetration testing scripts and programs (http://tools.kali.org/tools-listing). Formerly known as BackTrack, it’s been used by security professionals and hackers alike for years. This is one of the best tools that you can use to test your security.
Kali has just recently released version 2.0 of its open source penetration testing kit. It can be downloaded here.
Steps for testing your security with Kali Linux
Step 1: First you want to do some information gathering on your servers:
- Run a Python script called theHarvester to query Google, Bing, LinkedIn, and PGP key servers to find information related to your domain. It will include email addresses, hostnames, and IP addresses.
- OS fingerprinting will give you the versions of operating systems you may be running, which will allow you to look up any outstanding vulnerabilities.
- Run fragroute, which has a simple rule-set language to delay, duplicate, and fragment traffic so you can test any intrusion detection that you might have in place.
- Finally, run Nmap, which will simply scan your IP address to find what TCP/UDP ports are open. You want to make sure that the only ports open are what you need to conduct business—nothing more and nothing less.
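The port-audit step above can be approximated in a few lines of Python. This is a minimal TCP connect check, not a replacement for Nmap, and the host and expected-port allowlist below are assumptions for illustration:

```python
import socket

def check_open_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` accepting TCP connections on `host`.

    A minimal stand-in for an Nmap TCP connect scan (`nmap -sT`),
    useful for verifying that only business-required ports are open.
    """
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                open_ports.append(port)
    return open_ports

if __name__ == "__main__":
    # Example: audit a host against an allowlist of expected ports
    expected = {22, 443}
    found = set(check_open_ports("127.0.0.1", range(1, 1024)))
    print("Open ports:", sorted(found))
    print("Unexpected:", sorted(found - expected))
```

Anything reported as unexpected is a candidate for closing at the firewall or security-group level.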
Step 2: Nessus is a tool used by auditors and analysts to assess vulnerabilities in systems, networks, and applications. While this doesn’t replace the auditors who certify you for compliance, it does make you more secure by giving you a better understanding of the risks within your environment. It has configuration and vulnerability scanning capabilities, as well as malware detection and sensitive data searches. You can also utilize particular cloud services that will conduct the same scans and auditing in a way that is built for the cloud.
Step 3: WPScan is a great tool if you are utilizing WordPress in your infrastructure. WPScan looks for vulnerabilities that might have been installed in your environment through vulnerable plugins and themes. The capabilities of this tool include brute forcing your passwords, finding vulnerable themes/plugins, and enumerating user lists to focus a password dictionary brute force. This is a very efficient tool and is maintained by the community and the WPScan team.
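A hedged sketch of typical WPScan invocations (flags as in recent WPScan releases; the URL and wordlist are placeholders, and you should only scan sites you are authorized to test):

```shell
# Enumerate users (u), vulnerable plugins (vp) and vulnerable themes (vt)
wpscan --url https://example.com --enumerate u,vp,vt

# Dictionary attack against enumerated users (authorized testing only)
wpscan --url https://example.com --usernames admin --passwords wordlist.txt
```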
Step 4: Automater is a script that will scan various blacklists to verify whether your IP addresses have ever been involved in any botnet activity—if the previous or current users of that IP address were compromised and used to attack others, they would appear on one of those lists. This will ensure your public IP address won’t be blocked when you launch your live site. Automater checks IPvoid.com, Robtex.com, Fortiguard.com, unshorten.me, Urlvoid.com, Labs.alienvault.com, ThreatExpert, VxVault, and VirusTotal.
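Under the hood, many such blacklists are DNS-based: you reverse the IPv4 octets, append the blacklist zone, and an A-record answer means "listed" (NXDOMAIN means clean). A small sketch of building that query name; the Spamhaus zone used as the default is just one example of such a list:

```python
def dnsbl_query_name(ip, zone="zen.spamhaus.org"):
    """Build the DNS name used to check an IPv4 address against a blacklist.

    DNS-based blacklists are queried by reversing the IP's octets and
    appending the blacklist zone; resolving the result (for example with
    socket.gethostbyname) tells you whether the address is listed.
    """
    octets = ip.split(".")
    if len(octets) != 4:
        raise ValueError("expected a dotted-quad IPv4 address")
    return ".".join(reversed(octets)) + "." + zone

print(dnsbl_query_name("203.0.113.7"))  # 7.113.0.203.zen.spamhaus.org
```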
These are just a few of the tools that are offered in Kali Linux, but they will get you started down the right path, by exploring the Kali distribution and testing your environment to see how secure you really are.
Learn more about 2W Managed Cloud Security and how our partnership with Alert Logic can ensure your environment’s security.
Article contributed by Alert Logic
If you missed the last article in our four-part blog series with our strategic partner, Alert Logic, check out the guide to help digital businesses prepare for—and respond to—cyber incidents here.
Security should be baked into the DevOps process, from tools to skills to collaboration. DevOps and security are not mutually exclusive.
The problem with digital innovation is that considerations for compliance come later, after the product or service is on the market. From public cloud infrastructure to Internet of Things to mobile apps and even to DevOps, tough requirements like security aren’t built into innovators’ plans. Entrepreneurs are thinking primarily about shiny, new, fast and disruptive. Yet for the CIO and other chief executives accountable to customers, laws and financial markets, managing risk around sensitive data is top priority.
DevOps processes are at the heart of business innovation: think Netflix, Facebook, Etsy and Nordstrom, all leaders in their sectors. Yet many of the popular DevOps tools and methodologies, whether commercial or open source, haven’t been optimized for the needs of enterprise security. An application running in a container, for instance, will still require attention around configuration to ensure application security.
As well, many security professionals haven’t yet made the leap to understanding the changing best practices for security in this new world of cloud/agile/mobile IT. Some security experts have imposed barriers to DevOps, by resisting the switch to faster, more iterative development along with the public cloud.
On the surface, the speed at which DevOps teams are approving and releasing code would suggest an increase in security risks to end users by eliminating rigorous security review phases. Yet managing security, as with testing, is in fact optimal when performed side-by-side with developers as code is being written. By integrating security people and processes tightly within the continuous delivery cycle, DevOps can do a better job of eliminating loopholes and gaps in the code before production. DevOps tools emphasize the use of frequent and automated processes to improve software quality: also an ideal model for handling security testing and fixes. Determining the best way to merge security with DevOps is a work in progress. The following concepts can provide a framework for getting started:
- Use the best of DevOps for security: DevOps, with its focus on automation and continuous integration, provides a more holistic framework for security management. Start by considering security through every step of the development and production cycle. Security professionals can help developers root out design problems in the beginning – such as ensuring all data transport is encrypted. Integrate automated security checks into development, testing and deployment phases, and educate all team members about the importance of incorporating security thinking in their specific job roles. Security should no longer be the last process before committing the code to production.
- Investigate new DevOps and cloud security tools: Fortunately, the security technology industry is ramping up quickly to the needs of DevOps security. Static Application Security Testing (SAST) tools check for security issues while code is being written, while Dynamic Application Security Testing (DAST) tools test running applications for interface risks. A few of the reputable systems include Checkmarx, Veracode and Parasoft. A third area of security automation covers penetration and vulnerability testing tools, such as Nessus, developed by Tenable. Other contenders in this area include Qualys and OpenVAS. These tools can integrate smoothly into the software development lifecycle, such as by plugging into Jenkins. By adding automation, security is not only built in but also doesn’t slow down the DevOps process.
- Getting buy-in from security teams: This might just be the hardest part. While developers are incentivized to go faster and do more, security professionals are incentivized to control, monitor and reduce risk. Meeting in the middle is definitely possible – but it will require some opinion shifting on both sides. Developers and product managers will need to understand the importance of working collaboratively with the security team, and in an accountable way. Security people can benefit from a more comprehensive understanding of security in the cloud. This should include continuous education on the new tools and services available today to manage risk and to deliver even higher levels of security than in the past – from better reporting, to API-based security and easier encryption at rest.
- Manage tool sprawl: The concept of self-organization is an important one in DevOps, because it fosters a spirit of flexibility and rapid collaboration. Yet this same principle can also lead to environments of dozens or even hundreds of different tools in use to manage deployment, configuration, QA and orchestration. That creates risks for visibility and monitoring as well as standardizing around security controls and access. Engineering leads should help strike a balance between too much and too little governance when it comes to tools and workflows by providing guidelines for tool selection. The DevOps automation infrastructure itself can introduce risks. If a hacker gains access to a tool like Puppet or Chef, he can modify any number of configurations and add new user accounts. Configuration and change management tools must be adequately secured and governed, lest they become a new attack surface.
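As one concrete instance of the "automated security checks" idea from the list above, a CI stage can scan committed code for credential patterns before it ships. This is a minimal sketch; the two patterns shown are assumptions and only a starting point for a real policy:

```python
import re

# Patterns for a couple of common credential formats; extend per policy.
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key_header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_for_secrets(text):
    """Return a list of (pattern_name, match) findings in `text`."""
    findings = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.findall(text):
            findings.append((name, match))
    return findings
```

Wired into a Jenkins (or similar) pipeline stage, a non-empty result would fail the build before the code reaches production.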
With the advent of DevOps, there’s an opportunity at last for security to become an integral and seamless aspect of innovation. We think it’s not only possible but critical to give security the attention it demands in the world of fast IT.
-Kris Bliesner, CTO
This article was first published on DevOps.com on 12/3/15.
Implementing security in a cloud environment may seem like a difficult task, and it slows down, or even prevents, some organizations from migrating to the cloud. Some cloud security models have similarities to traditional data center or on-premises security; however, there are opportunities to implement new security measures as well as tweak your existing security plan. Here are five tips for getting started with cloud security.
- Secure your application code
Knowing and understanding account usage, the coding languages in use, and each application’s inputs, outputs, and resource requests is essential.
- Implement a solid patch management and configuration management strategy
These strategies are usually more people and process driven, but they are important components of the care and feeding of the technology solution. Organizations should take inventory of all the data they maintain and understand what type of data it is, where it is stored, which accounts have access to it, and how it is being secured.
- Dedicate time and resources to the design and maintenance of identity and access management solutions
Attackers continue to use brute force attacks against accounts to crack passwords and gain authenticated privileges in your environment. Accounts should follow the least privilege concept and account activity should be logged. A robust logging and log review system should be a standard implementation for all systems, accounts, and configuration modifications to ensure accountability of legitimate activity.
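The logging idea above can be sketched with Python's standard library; the field names and the example service account are illustrative, not a prescribed schema:

```python
import logging

# A minimal account-activity audit logger: every privileged action is
# recorded with who performed it and what was touched, for later review.
audit = logging.getLogger("audit")
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter(
    "%(asctime)s account=%(account)s action=%(action)s target=%(target)s"))
audit.addHandler(handler)
audit.setLevel(logging.INFO)

def log_activity(account, action, target):
    """Record one account action in the audit trail."""
    audit.info("account activity",
               extra={"account": account, "action": action, "target": target})

log_activity("svc-deploy", "modify-config", "/etc/nginx/nginx.conf")
```

In practice the handler would ship records to a central, tamper-resistant log store where reviews and alerting happen.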
- Understand the shared responsibility of security
Generally, cloud providers will have security implemented throughout their core infrastructure, which is primarily designed to safeguard their systems and the basic foundational services for each of their customers. Cloud providers will maintain and secure their infrastructure; however, they won’t necessarily provide customers reports or notifications from this layer unless it impacts a significant number of customers. Therefore, it is highly recommended that you implement a customized security plan within your own cloud environment.
From the moment a cloud provider drops a network packet onto your systems, you should employ security monitoring and network threat detection. The customer responsibility for security increases when moving from the network level to the host level and further to the application level. Once you have access to your operating system, you are given root/administrator access and, therefore, that system is yours to secure and manage.
At this point, the customer is responsible for the security of the applications and the application code that is used on the host systems. Cloud customers need to pay particular attention to the application code that is used in their environment since web application attacks are the most prevalent type of attacks used by adversaries.
- Stay informed about the latest threats and vulnerabilities
Organizations should also stay informed about the latest threats and vulnerabilities to their cloud systems. Adversaries, hacking groups and security researchers are constantly working to discover new vulnerabilities within systems, and keeping up with these threats is imperative. Organizations that have dedicated resources to monitoring and responding to the latest threat activities are able to anticipate cyber activity and minimize the impact of an attack.
Implementing effective security within a cloud environment may seem to be a challenging task; however, a strategic plan and the proper integration of people, process, and technology enable organizations to overcome this challenge.
Learn more about 2W Managed Cloud Security and how our partnership with Alert Logic can ensure your environment’s security.
Blog contributed by Alert Logic