Compliance is a constant challenge today. Keeping our system images in a healthy, trusted and compliant state requires time and effort. There are countless tools and technologies on the market to help customers maintain compliance and state, so where do I start?
Amazon has built a rich set of core technologies within the AWS console. AWS Systems Manager is a fantastic operations management platform that can assist you with setting up and maintaining configuration and state management.
One of the first things we must focus on when we build out our core images in the cloud is the configuration of those images. What is the role of the image, what operating system will it run, and what applications and/or core services do I need to enable, configure and maintain? In the datacenter, we call these Gold Images. The same applies in the cloud.
We define roles for our images and place them in different functional areas – infrastructure, web services, applications. We may have many core image templates for our enterprise workloads. By building these base images and maintaining them continuously, we set in motion a solid foundation for the core security and compliance of our cloud environment.
AWS Systems Manager looks across my cloud environment and allows me to bring together the key information about all my operating resources in the cloud. It lets me centralize the gathering of core baseline information for my resources in one place. In the past, I would have had to look at my Amazon CloudWatch information in one area, my AWS CloudTrail information in another and my configuration information in yet another. Centralizing this information lets me see the holistic state of my cloud environment baselines in a single console.
AWS Systems Manager provides built-in insights and dashboards that allow you to look across your entire cloud environment and see into and act upon your cloud resources. It shows the configuration compliance of all your resources, as well as the state management associations across them. It also provides a rich ability to customize configuration and state management for your workloads, applications and resource types, and to scan and analyze continuously to ensure those configurations and states are maintained. With AWS Systems Manager you can even create your own custom compliance types that map to your organization’s business requirements. With that in place, I can constantly scan and analyze against these compliance baselines to maintain the desired operational configuration and state.
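As a small sketch of what a custom compliance type might look like in practice – the `Custom:CorpBaseline` type name, the check and the instance ID below are all illustrative, not a real baseline – you could report a check result against a managed instance via the Systems Manager PutComplianceItems API:

```python
from datetime import datetime, timezone

def build_compliance_report(instance_id: str, passed: bool) -> dict:
    """Build a payload for AWS Systems Manager's PutComplianceItems API.

    Custom compliance types only need to start with the "Custom:" prefix;
    everything else here is an illustrative example of one baseline check.
    """
    return {
        "ResourceId": instance_id,
        "ResourceType": "ManagedInstance",
        "ComplianceType": "Custom:CorpBaseline",
        "ExecutionSummary": {
            "ExecutionTime": datetime.now(timezone.utc).isoformat()
        },
        "Items": [{
            "Id": "corp-baseline-001",
            "Title": "Password policy enforced",
            "Severity": "CRITICAL",
            "Status": "COMPLIANT" if passed else "NON_COMPLIANT",
        }],
    }

params = build_compliance_report("i-0123456789abcdef0", passed=False)
# With boto3 and valid credentials you would then submit it:
#   boto3.client("ssm").put_compliance_items(**params)
print(params["Items"][0]["Status"])  # -> NON_COMPLIANT
```

Once custom items like this are reported, they show up alongside the built-in patch and association compliance data in the same console views.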
We can analyze and report on the current state and quickly determine, centrally, whether our cloud services and resources are in or out of compliance. We can generate reports on our compliance position at any time, and with that knowledge we can set remediation in motion to return our services and resources to a compliant state and configuration.
With AWS Systems Manager we can scan all resources for patch state, determine which patches are missing, and remediate those patches manually, on a schedule or automatically to maintain patch management compliance.
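As a rough sketch of that scan-then-remediate flow – `AWS-RunPatchBaseline` and its `Operation` parameter are real Systems Manager constructs, but the tag key and group names here are illustrative – you might build the SendCommand requests like this:

```python
def patch_command(operation: str, patch_group: str) -> dict:
    """Build a SendCommand request for the AWS-RunPatchBaseline document.

    operation: "Scan" only reports missing patches; "Install" remediates.
    """
    if operation not in ("Scan", "Install"):
        raise ValueError("operation must be 'Scan' or 'Install'")
    return {
        "DocumentName": "AWS-RunPatchBaseline",
        "Parameters": {"Operation": [operation]},
        # Target every instance tagged with the given patch group.
        "Targets": [{"Key": "tag:Patch Group", "Values": [patch_group]}],
    }

# Scan first to report compliance; install later, e.g. in a maintenance window.
scan = patch_command("Scan", "web-servers")
# With boto3: boto3.client("ssm").send_command(**scan)
print(scan["Parameters"]["Operation"])  # -> ['Scan']
```

The same request with `Operation=Install` is what a scheduled maintenance window or automation document would run to apply the missing patches.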
AWS Systems Manager also integrates with Chef InSpec, allowing you to leverage InSpec profiles to operate a continuous compliance framework for your cloud resources.
On the road to compliance, it is important to leverage the tools and capabilities of your cloud provider. AWS gives us a rich set of systems management capabilities across configuration, state management, patch management and remediation, as well as reporting. AWS Systems Manager is provided at no additional cost to AWS customers and will help you along your journey to continuous compliance across the AWS Cloud and hybrid cloud environments. To learn more about using AWS Systems Manager or your systems’ compliance, contact us.
-Peter Meister, Sr Director of Product Management
By Paul Fletcher, Alert Logic
The “Internet of Things” (IoT) is a broadly accepted term that describes any Internet-connected device (usually via Wi-Fi) that isn’t a traditional computer system. These connected IoT devices offer many conveniences for everyday life. Indeed, it’s difficult to remember how life was before you could check email, weather and streaming video on a smart TV. It’s now considered commonplace for a smart refrigerator to send you a text every morning with an updated shopping list. We can monitor and manage the lights, thermostat, doors, locks and web cameras from wherever we may roam, thanks to smartphone apps and the proliferation of our connected devices.
With this added convenience comes a larger digital footprint, which makes for a larger target for attackers to discover other systems on your network, steal data or seize control of your DVR. The hacker community is just getting warmed up when it comes to attacking IoT devices. There are a lot of fun things hackers can do with vulnerable connected devices and/or “smart” homes. The early attacks were just about exploring: hackers would simulate ghosts by having all the lights in the house go on and off in a pattern, turn the heat on during the summer and the air conditioning on in the winter, or make the food in the fridge go bad by changing a few temperature settings.
The IoT security threat landscape has grown more sophisticated recently, and we’ve seen some significant attacks. The most impactful IoT-based cyber attack happened on Oct. 21, 2016, when a hacker group activated 10% of its IoT botnet, built with malware called “Mirai.” Approximately 50,000 web cameras and DVR systems launched a massive DDoS attack on the Dyn DNS service, disrupting Internet services for companies like Spotify, Twitter, GitHub and others for more than 8 hours. The attackers used only 10% of the 500,000 DVRs and web cameras infected by the malware, but still caused monetary damage to customers of the Dyn DNS service. A few months later, attackers launched a new IoT-specific malware called “Persirai” that infected over 100,000 web cameras. This new malware comes complete with a sleek detection-avoidance feature: once it executes on the web cam, it runs only in RAM and deletes the original infection file, making it extremely difficult to detect.
The plain, cold truth is that most IoT manufacturers use stripped-down versions of the Linux (or Android) operating system, because those OSes require minimal system resources to run. ALL IoT devices have some version of an operating system and are therefore “lightweight” computers. And since most IoT devices run some form of Linux or Android, they have vulnerabilities that are researched and discovered on an ongoing basis. So, yes, it’s possible that you may have to install a security patch for your refrigerator or coffee maker.
Special-purpose computer systems with customized versions of operating systems have been around for decades. The best example of this is old-school arcade games or early gaming consoles. The difference today is that these devices now come with fast, easy connectivity to your internal network and the Internet. Most IoT manufacturers don’t protect the underlying operating system on their “smart” devices, and consumers shouldn’t assume it’s safe to connect a new device to their network. Both Mirai and Persirai compromised IoT devices using simple methods like default usernames and passwords. Some manufacturers feel that their devices are so “lightweight” that their limited computing resources (hard drive, RAM, etc.) wouldn’t be worth hacking, because they wouldn’t provide much firepower for an attacker. The hacking community repeatedly proves that it is interested in ANY resource (regardless of capacity) it can leverage.
When an IoT device is first connected to your network (either home or office), it will usually try to “call home” for software updates and/or security patches. It’s highly recommended that all IoT devices be placed on an isolated network segment, blocked from enterprise or high-value home computer systems. It’s also recommended to monitor all outbound Internet traffic from your IoT network segment to discern a baseline of “normal” behavior. This helps you better understand the network traffic your IoT devices generate, and any “abnormal” behavior could help you discover a potential attack.
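The baselining idea above can be sketched very simply: record where each device normally talks during a learning period, then flag anything new. This toy example assumes you already have per-device connection records (device name, destination host) exported from your router or firewall logs:

```python
from collections import defaultdict

class OutboundBaseline:
    """Learn each IoT device's normal outbound destinations, then flag new ones."""

    def __init__(self):
        self.known = defaultdict(set)  # device -> set of destinations seen

    def learn(self, device: str, destination: str) -> None:
        """Record a destination observed during the learning period."""
        self.known[device].add(destination)

    def check(self, device: str, destination: str) -> bool:
        """Return True if this destination falls outside the device's baseline."""
        return destination not in self.known[device]

baseline = OutboundBaseline()
baseline.learn("webcam-1", "firmware.vendor.example")  # normal update check
baseline.learn("webcam-1", "time.vendor.example")      # normal NTP-style call

# A connection to an unknown host is worth investigating.
print(baseline.check("webcam-1", "198.51.100.7"))        # True  -> abnormal
print(baseline.check("webcam-1", "time.vendor.example")) # False -> normal
```

A real deployment would obviously need log collection, a long enough learning window and some tolerance for legitimate new destinations, but the core idea is just this set difference.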
Remember, “hackers gonna hack,” meaning the threat is 24/7. IoT devices need good computer security hygiene, just like your laptop, smartphone and tablet. Use unique, easily remembered passwords, and rotate all passwords regularly. Confirm that all of your systems are running the latest patches and upgrades for better functionality and security. After patches are applied, validate that your security settings haven’t been changed back to the defaults.
IoT devices are very convenient, and manufacturers are getting better at security, but with the ever-changing IoT threat landscape we can expect to see more impactful and sophisticated attacks in the near future. The daily burden of relevant operational security for an organization or household is no easy task, and IoT devices are just one of the many threats that require ongoing monitoring. It’s highly recommended that IoT cyber threats be incorporated into a defense-in-depth strategy as part of a holistic approach to cyber security.
Learn more about 2nd Watch Managed Cloud Security and how our partnership with Alert Logic can ensure your environment’s security.
Blog Contributed by 2nd Watch Cloud Security Partner, Alert Logic
There have been countless articles, blogs and whitepapers written on the subject of security in the cloud, and an even greater number of opinions as to the number of risks associated with such a move. Five, seven, ten, twenty-seven? How many risks are associated with you or your company’s move to the cloud? Well, in the best consultant-speak, it depends.
One could say that it depends on how far “up the stack” you’re moving. If, for instance, you are moving from an essentially stand-alone, self-administered environment to a cloud-based presence, you are most likely in for the security-based shock of your life. On the other hand, if you, in the corporate sense, are moving a large, multi-national corporation to the cloud, chances are you’ve already encountered many of the challenges, such as regional compliance and legal issues, that will also be present in your move to the cloud.
The differentiators? There are three: scale, complexity and speed. In the hundreds of clients we have helped migrate to the cloud, not once have we come across a security issue that was unique to the cloud. This is why the title of this article is “What are the Greater Risks of Cloud Computing?” and not “What are the Unique Risks of Cloud Computing?” There simply aren’t any. Let’s be clear – this isn’t to say any of these risks aren’t real. They simply aren’t unique, nor are they new. It is just a case of a new bandwagon (the cloud) with a new crew of sensationalists ready to jump on that bandwagon.
Let’s take a few of the most popularly-stated “risks of cloud computing” and see how this plays out.
Shared Infrastructure
This often makes the list as though it were a problem unique to the cloud. What about companies utilizing colocation facilities? And before that, what about companies using time-shared systems – payroll systems, anyone? Didn’t they pre-date the cloud by some decades? While there might not have been hypervisors or shared applications back in the day, there just as surely were shared components at some level, possibly network components or monitoring.
Loss of Data/Data Breaches
In looking at some of the most widely touted data breaches – Target, Ashley Madison, the Office of Personnel Management and Anthem, to name just a few – the compromises were listed as “result of access to its network via an HVAC contractor monitoring store climate systems,” “unknown,” “contractor’s stolen credentials to plant a malware backdoor in the network,” and “possible watering hole attack that yielded a compromised administrator password.” Your first thought might be, “Do these hacks even involve the cloud?” It’s not clear where the data was stored in these instances, but that doesn’t stop articles from being written about the dangers of the cloud that reference them. Conversely, there is an excellent article in Business Insurance arguing the very opposite viewpoint. Perhaps the cloud can be a bit safer than traditional environments for one very good reason – reputation. We have seen customers move to the cloud in order to modernize their security paradigm. The end result is a more secure environment in the cloud than they ever had on premise.
Account or Service Traffic Hijacking
Now we have a security issue that really makes use of the cloud in terms of scale and speed. Let’s clarify what we’re talking about here. This is the hacking of a cloud provider to take over instances for command and control, using them as botnets. The hijacking of compute resources, whether they be personal, corporate or cloud resources, continues to this day.
Hacking a cloud provider follows the simple logic of robbing a bank vs. a taco stand in more ways than one. Where there’s increased reward, there’s increased risk, to turn an old saying around a bit. If you’re going to hit a lot of resources and make it worth your while, the cloud is the place to go. However, know that it’s going to be a lot harder and that a lot more eyes are going to be on you and looking for you. Interestingly, the most recent sightings of this type of activity seem to date to the 2009-2010 timeframe, as Amazon, Microsoft, Google and the other providers learned quickly from their mistakes.
If you were to continue down the list of other cloud security issues – malicious insiders, inadequate security controls, DDoS attacks, compromised credentials, and so on – it becomes pretty evident that there simply aren’t any that are unique to the cloud. We’ve seen them all before in one context or another; they just may not have been as big an issue in your own environment.
The next time you see an article on the dangers of the cloud, stop for a moment and think, “Is this truly a problem that has never been seen before or just one that I’ve never encountered or had to deal with before?”
-Scott Turvey, Solutions Architect
While some large enterprises avoid moving to the cloud because of rigid security and compliance requirements, SCOR opted for the cloud for a key block of its business precisely because of the cloud’s rigid security and compliance offerings.
SCOR is a leader in the life reinsurance market in the Americas, offering broad capabilities in risk management, capital management and value-added services and solutions. A number of primary insurers use SCOR’s automated life underwriting system, Velogica, to market life insurance policies that can be delivered at the point of sale. Other companies use Velogica as a triage tool for their fully underwritten business.
“Through the Velogica system, we get thousands of life insurance applications a day from multiple clients,” explains Dave Dorans, Senior Vice President. “Velogica is a significant part of our value proposition and is important to the future of our business.”
Data security has always been a priority for SCOR, but the issue became even more critical as data breaches at some of the largest and most respected companies made headline news. SCOR decided to invest in a state-of-the-art data security framework for Velogica. “We wanted clients to have full confidence in the way Velogica stores and handles the sensitive personal data of individuals,” Dorans said.
SCOR’s goal was to have Velogica accredited as a Service Organization Control (SOC) 2 organization – a competitive advantage in the marketplace – by aligning with one of the more respected information security standards in the industry. Determining what it would take to achieve that goal became the responsibility of Clarke Rodgers, Chief Information Security Officer with SCOR Velogica. “We quickly determined that SOC2 accreditation for SCOR’s traditional, on-premises data center environment would be a monumental task, could cost millions of dollars and perhaps take years to complete. Moreover, while SOC2 made sense for Velogica, it wasn’t necessary for other SCOR businesses.”
Once it was determined that SOC2 was business critical for the company, Rodgers analyzed the different ways of obtaining the accreditation and determined that moving to the cloud was the most efficient path. SCOR Velogica turned to 2nd Watch to help it achieve SOC2 accreditation with AWS, figuring it would be easier than making the journey on its own.
On working with 2nd Watch, Rodgers commented, “They came in and quickly understood our technical infrastructure and how to replicate it in AWS, which is a huge feat.” SCOR realized significant benefits from the migration, including:
Adherence to specific security needs: In addition to its SOC2 accreditation, 2nd Watch also implemented several security elements in the new AWS environment, including encryption at rest on Amazon Elastic Block Store (EBS) volumes leveraging the AWS Key Management Service (KMS), Amazon Virtual Private Cloud (VPC) to establish a private network within AWS, security groups tuned for least-privilege access, Security-Enhanced Linux, and AWS Identity and Access Management (IAM) Multi-Factor Authentication (MFA).
AWS optimization: 2nd Watch has helped SCOR identify opportunities for optimization and efficiencies on AWS, which will help down the road if the company wishes to expand the AWS-hosted application to regions outside of North America. “With our SOC2 Type 1 behind us, we are now focused on optimizing our resources in the AWS Cloud so we can fully exploit AWS’s capabilities to our security and business benefit,” Rodgers explains. “We will rely on 2nd Watch for guidance and assistance during this optimization phase.”
Cost savings on AWS: Rodgers hasn’t done a full analysis yet of cost savings from running the infrastructure on AWS, but he’s confident the migration will eventually cut up to 30% off the price of hosting and supporting Velogica internally.
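As one small illustration of the “least privilege” tuning mentioned above – a hypothetical audit helper, not part of SCOR’s actual implementation – you could flag security group rules that are open to the entire Internet. The rule shape below is a simplified version of what EC2’s describe_security_groups call returns:

```python
def open_to_world(rules):
    """Return (FromPort, ToPort) pairs for rules whose source is 0.0.0.0/0.

    Each rule dict mirrors (in simplified form) an entry from the
    IpPermissions list returned by EC2's DescribeSecurityGroups API.
    """
    flagged = []
    for rule in rules:
        for ip_range in rule.get("IpRanges", []):
            if ip_range.get("CidrIp") == "0.0.0.0/0":
                flagged.append((rule.get("FromPort"), rule.get("ToPort")))
    return flagged

rules = [
    {"FromPort": 443, "ToPort": 443, "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
    {"FromPort": 22, "ToPort": 22, "IpRanges": [{"CidrIp": "10.0.0.0/8"}]},
]
print(open_to_world(rules))  # [(443, 443)] -- SSH is properly restricted
```

Public HTTPS may well be intentional; the point of a check like this is to surface every world-open rule so each one is a deliberate decision rather than an accident.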
Hear from SCOR how it achieved better security with AWS on our live webinar April 7. Register Now
In the last of our four-part blog series with our strategic partner, Alert Logic, we explore business resumption for cloud environments. Check out last week’s article on Free Tools and Tips for Testing the Security of Your Environment Against Attacks first.
Business resumption, also known as disaster recovery, has always been a challenge for organizations. Aside from those in the banking and investment industry, many businesses don’t take business resumption as seriously as they should.
I formerly worked at a financial institution that would send its teams to another city in another state, where production data was backed up and could be restored in the event of a disaster. Employees would go to this location and use the production systems to complete their daily workloads. This provided the redundancy of a single backup site, but what if you could have many redundant sites? What if you could have a global backup option and have redundancy not only when you need it, but as a daily part of your business strategy?
To achieve true redundancy, I recommend understanding your service provider’s offerings. Each service provider has different facilities located in different regions that are spread between different telecom service providers.
From a customer’s perspective, this creates a good opportunity to build out an infrastructure with fully redundant load balancers, giving your business a regional presence in almost every part of the world. In addition, you can deliver application speed and efficiency to your regional consumers.
Look closely at your provider’s services like hardware health monitoring, log management, security monitoring and all the management services that accompany those solutions. If you need to conform to certain compliance regulations, you also need to make sure the services and technologies meet each regulation.
Organize your vendors and managed service providers so that your data is centralized by service across all providers and all layers of the stack. This is when you need to make sure that your partners share data, have the ability to ingest logs, and exchange APIs with each other to effectively secure your environment.
Additionally, centralize the notification process so you are getting one call per incident versus multiple calls across providers. This means that API connectivity or log collection needs to happen between technologies that correlate triggered events across multiple platforms. This will centralize your notifications, increase efficiency and decrease detection time, mitigating risks introduced into your environment by outside and inside influences.
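The “one call per incident” idea can be sketched as a simple correlation step: normalize events from each provider’s API or log feed, group them by a shared incident key, and notify once per key. Everything below, field names included, is an illustrative shape rather than any particular vendor’s schema:

```python
def correlate(events):
    """Group normalized events by incident key; emit one notification per key.

    Each event is a dict with 'provider', 'incident_id' and 'message' fields
    (an assumed normalized format, not a real vendor schema).
    """
    incidents = {}
    for event in events:
        incidents.setdefault(event["incident_id"], []).append(event)
    # One notification per incident, no matter how many providers saw it.
    return [
        f"Incident {key}: {len(evts)} event(s) from "
        f"{sorted({e['provider'] for e in evts})}"
        for key, evts in incidents.items()
    ]

events = [
    {"provider": "alert-logic", "incident_id": "INC-1", "message": "port scan"},
    {"provider": "cloudtrail", "incident_id": "INC-1", "message": "API probe"},
    {"provider": "alert-logic", "incident_id": "INC-2", "message": "brute force"},
]
for line in correlate(events):
    print(line)
```

In practice the hard part is agreeing on that shared incident key across providers, which is exactly why the data-sharing and API exchange between partners matters.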
Lastly, to find incidents as quickly as possible, you need to find a managed services provider that will be able to ingest and correlate all events and logs across all infrastructures. There are also cloud migration services that will help you with all these decisions as they help move you to the cloud.
Learn more about 2nd Watch Managed Cloud Security and how our partnership with Alert Logic can ensure your environment’s security.
Article contributed by Alert Logic