Cloud Automation for I.T. Governance, Risk, and Compliance (GRC) in Healthcare

It has been said that the “hero of a successful digital transformation is GRC.” The ISACA website states, “to successfully manage the risk in digital transformation you need a modern approach to governance, risk and regulatory compliance.” For GRC program development, it is important to understand the health information technology resources and tools available to enable long-term success.

What is GRC and why is it important?

According to the HIPAA Journal, the average cost of a healthcare data breach is now $9.42 million. In the first half of 2021, 351 significant data breaches were reported, affecting nearly 28 million individuals. The need for effective information security and controls has never been more acute among healthcare providers, insurers, and biotechnology and health research companies. Protecting sensitive data and establishing a firm security posture is essential. Improving health care and reducing cost rely on structured approaches and thoughtful implementation of available technologies to help govern data and mitigate risk across the enterprise.

Effective and efficient management of governance, risk, and compliance, or GRC, is fast becoming a business priority across industries. Leaders at hospitals and health systems of all sizes are looking for ways to build operating strategies that harmonize and enhance efforts for GRC. Essential to that mission are effective data governance, risk management, regulatory compliance, business continuity management, project governance, and security. But rather than stand-alone or siloed security or compliance efforts, a cohesive program coupled with GRC solutions allows organizational leaders to address the multitude of challenges more effectively and efficiently.

What are the goals for I.T. GRC?

For GRC efforts, leaders are looking to:

  • Safeguard Protected Healthcare Data
  • Meet and Maintain Compliance to Evolving Regulatory Mandates and Standards
  • Identify, Mitigate and Prevent Risk
  • Reduce operational friction
  • Build in and utilize best practices

Managing governance, risk, and compliance in healthcare enterprises is a daunting task. GRC implementation for healthcare risk managers can be difficult, especially during this time of rapid digital and cloud transformation. But relying on internal legacy methods and tools leads to the same issues that have been seen on-premises, stifling innovation and improvement. As organizations adapt to cloud environments as a key element of digital transformation and integrated health care, leaders are realizing that now is the time to leverage the technology to implement GRC frameworks that accelerate their progress toward positive outcomes. What’s needed is expertise and a clear roadmap to success.

Cloud Automation of GRC

The road to success starts with a framework, aligned to business objectives, that provides cloud automation of Governance, Risk, and Compliance. Broken into three distinct phases, this would ideally involve:

  1. Building a Solid Foundation – within the cloud environment, ensuring infrastructure and applications are secured before they are deployed.
  • Image/Operation System hardening automation pipelines.
  • Infrastructure Deployment Automation Pipelines including Policy as Code to meet governance requirements.
  • CI/CD Pipelines including Code Quality and Code Security.
  • Disaster Recovery as a Service (DRaaS) meeting the organization’s Business Continuity Planning requirements.
  • Configuration Management to allow automatic remediation of your applications and operating systems.
  • Cost Management strategies with showback and chargeback implementation.
  • Automatic deployment and enforcement of standard security tools including FIM, IDS/IPS, AV and Malware tooling.
  • IAM integration for authorization and authentication with platforms such as Active Directory, Okta, and PingFederate, allowing for more granular control over users and elevated privileges in the clouds.
  • Reference Architectures created for the majority of the organization’s needs that are pre-approved, security baked-in to be used in the infrastructure pipelines.
  • Self-service CMDB integration with tools such as ServiceNow, Remedy, and Jira Service Desk, allowing business units to provision their own infrastructure while providing the proper governance guardrails.
  • Resilient Architecture designs
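The Policy-as-Code guardrail in the foundation phase can be illustrated with a minimal sketch: validate a proposed infrastructure template against governance rules before anything is deployed, failing the pipeline on any violation. The rule set, template fields, and region names below are illustrative assumptions, not tied to any specific tool.

```python
# Minimal policy-as-code sketch: a pipeline step that checks a proposed
# infrastructure template against governance rules before deployment.
APPROVED_REGIONS = {"us-east-1", "us-west-2"}  # hypothetical policy

def validate_template(template: dict) -> list:
    """Return a list of policy violations; an empty list means the template passes."""
    violations = []
    if template.get("region") not in APPROVED_REGIONS:
        violations.append(f"region '{template.get('region')}' is not approved")
    if not template.get("encryption_at_rest", False):
        violations.append("encryption at rest must be enabled")
    if template.get("public_access", False):
        violations.append("public access is not permitted by default")
    return violations

# A CI/CD pipeline would fail the build when violations are non-empty.
bad = {"region": "eu-central-1", "encryption_at_rest": False, "public_access": True}
good = {"region": "us-east-1", "encryption_at_rest": True, "public_access": False}
```

In practice, tools in this space express the same idea declaratively, but the principle is identical: governance requirements become executable checks rather than documents.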
  2. Proper Configuration and Maintenance – Infrastructure misconfiguration is the leading cause of data breaches in the cloud, and a big reason misconfiguration happens is infrastructure configuration “drift,” or change that occurs in a cloud environment post-provisioning. Using automation to monitor and self-remediate the environment will ensure the cloud environment stays in the proper configuration, eliminating the largest cause of incidents. Since workloads will live most of their life in this phase, it is important to ensure there isn’t any drift from the original secure deployment. An effective program will need:
  • Cloud Integrity Monitoring using cloud native tooling.
  • Log Management and Monitoring with centralized logging, critical in a well-designed environment.
  • Application Monitoring
  • Infrastructure Monitoring
  • Managed Services including patching to resolve issues.
  • SLAs to address incidents and quickly get them resolved.
  • Cost Management to ensure that budgets are met and there are no runaway costs.
  • Perimeter security utilizing cloud native and third-party security appliances and services.
  • Data Classification
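The drift monitoring and self-remediation described in this phase can be sketched as a compare-and-reset loop: diff each resource's live configuration against its approved baseline, then apply the approved values back. The resource settings below are hypothetical examples.

```python
# Illustrative drift detection and self-remediation: return a live
# configuration to its approved baseline whenever it diverges.

def detect_drift(baseline: dict, live: dict) -> dict:
    """Return {setting: approved_value} for every setting that has drifted."""
    return {k: v for k, v in baseline.items() if live.get(k) != v}

def remediate(live: dict, drift: dict) -> dict:
    """Apply the approved values back onto the live configuration."""
    fixed = dict(live)
    fixed.update(drift)
    return fixed

baseline = {"port_22_open": False, "encryption": True, "log_forwarding": True}
live = {"port_22_open": True, "encryption": True}  # drift: SSH opened, logging lost

drift = detect_drift(baseline, live)
live = remediate(live, drift)
```

Cloud-native configuration services implement this pattern continuously; the sketch only shows the decision logic, not the API calls a real remediation job would make.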
  3. Use of Industry Leading Tools – for risk assessment, reporting, verification and remediation. Thwart future problems and provide evidence to stakeholders that the cloud environment is rock solid. Tools and verification components would include:
  • Compliance reporting
  • Risk Registry integration into tools
  • Future attestations (BAAs)
  • Audit evidence generation
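Audit evidence generation in this third phase is well suited to automation: run each control check on a schedule, timestamp the result, and emit a machine-readable evidence bundle an auditor can review. The control names below are examples only.

```python
# Sketch of automated audit-evidence generation: execute control checks
# and serialize the results as a timestamped JSON evidence record.
import json
from datetime import datetime, timezone

def gather_evidence(controls: dict) -> str:
    """Run each named control check and return a JSON evidence bundle."""
    results = []
    for name, check in controls.items():
        results.append({
            "control": name,
            "passed": bool(check()),
            "checked_at": datetime.now(timezone.utc).isoformat(),
        })
    return json.dumps(results, indent=2)

# Hypothetical controls; real checks would query cloud APIs.
controls = {
    "encryption-at-rest": lambda: True,
    "mfa-enforced": lambda: True,
}
evidence = gather_evidence(controls)
```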

Where do you go from here?

Your organization needs to innovate faster and drive value with the confidence of remaining in compliance. You need to get to a proactive state instead of being reactive. Consider an assessment to help you evaluate your organization’s place in the cloud journey and how the disparate forms of data in the organization are collected, controlled, processed, stored, and protected.

Start with an assessment that includes:

  • Identification of security gaps
  • Identification of foundational gaps
  • Remediation plans
  • Managed service provider onboarding plan
  • A Phase Two (Foundational/Remediation) proposal and Statement of Work

About 2nd Watch

2nd Watch is a trusted and proven partner, providing deep skills and advisory to leading organizations for over a decade. We earned a client Net Promoter Score of 85, a good way of telling you that our customers nearly always recommend us to others. We can help your organization with cloud native solutions. We offer skills in the following areas:

  • Developing cloud first strategies
  • Migration of workloads to the cloud
  • Implementing automation for governance and security guardrails
  • Implementing compliance controls and processes
  • Pipelines for data, infrastructure and application deployment
  • Subject matter expertise for FHIR implementations
  • Managed cloud services

Schedule time with an expert now, contact us.

-Tom James, Sr. Marketing Manager, Healthcare


Cloud Crunch Podcast: You’re on the Cloud. Now What? 5 Strategies to Maximize Your Cloud’s Value

You migrated your applications to the cloud for a reason. Now that you’re there, what’s next? How do you take advantage of your applications and data that reside in the cloud? What should you be thinking about in terms of security and compliance? In this first episode of a 5-part series, we discuss 5 strategies you should consider to maximize the value of being on the cloud. Listen now on Spotify, iTunes, iHeart Radio, Stitcher, or wherever you get your podcasts.

We’d love to hear from you! Email us at CloudCrunch@2ndwatch.com with comments, questions and ideas.


You’re on AWS. Now What? 5 Strategies to Increase Your Cloud’s Value

Now that you’ve migrated your applications to AWS, how can you take the value of being on the cloud to the next level? To provide guidance on next steps, here are 5 things you should consider to amplify the value of being on AWS.


3 Security and Compliance Must-Haves to Meet Any Regulation

The security processes and controls you put in place must meet the compliance standards required for your industry. Whether it’s GDPR, CCPA, or any other state, federal, or industry specific regulation, there are at least three things you need to do to meet the minimum requirements. Of course, each regulation comes with its unique conditions, but these are the first steps to take when moving toward a more secure and compliant environment.

1. Data discovery and data mapping

Most of the compliance standards today surround the data an organization collects from consumers. The first step to understanding your data, for both compliance and to inform decision making, is to collect and analyze all of your data from the various sources it originates. In addition to data discovery, you need to have a process for data mapping as well. Data mapping matches the data fields of data from one database to another.

It’s important to have data flows so you know how your data gets to you, how it’s entered into various systems, what resources it hits, and where it finally ends up. Knowing, at every point, where your data lives is the key first step, regardless of which law you need to comply with.

A strict tagging strategy that is uniform and specific across departments aids in ongoing data mapping. Review your strategy regularly to make sure your teams are following it and it still works as expected with any advancements made in the data you’re collecting. You can also use the tools available through cloud providers to help with these governance tasks. Some recommended tools include Amazon Macie and AWS Config, as well as Azure Security Center.
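A uniform tagging strategy is easy to enforce mechanically: flag any resource missing a required tag so the data map stays current. A minimal sketch, with hypothetical tag keys and resource records:

```python
# Sketch of tag-compliance checking: report resources that are missing
# any tag required by the organization's tagging strategy.
REQUIRED_TAGS = {"owner", "department", "data_classification"}  # example policy

def missing_tags(resource: dict) -> set:
    """Return the set of required tag keys absent from a resource."""
    return REQUIRED_TAGS - set(resource.get("tags", {}))

resources = [
    {"id": "db-1", "tags": {"owner": "ops", "department": "billing",
                            "data_classification": "pii"}},
    {"id": "bucket-7", "tags": {"owner": "web"}},
]
noncompliant = {r["id"]: missing_tags(r) for r in resources if missing_tags(r)}
```

A scheduled job running a check like this can feed its findings into the same ticketing workflow used for other governance exceptions.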

2. Notification and purge mechanisms with identity validation and an audit trail

A person’s data belongs to them, regardless of which company holds it. In order to field data requests from consumers, you need to have some sort of notification mechanism that allows you to understand and deliver what the consumer wants. They may want to know what data you have and how it is being used. People may want to update inaccurate data or, depending on the regulations in your area, they may request that it be deleted.

In order to fulfill a consumer’s request to delete or update data, you need a purge mechanism. A purge mechanism clears data once such action has been approved. In order to complete any data request, you must first validate the identity of the requesting consumer.

While this step is necessary, there is not yet an industry gold standard on how best to verify data without threatening the personal information provided. Additionally, data requests need to be checked against any compliance exceptions that may complicate your ability to do what the consumer wants. You may need the data because the person is still using your services, or, depending on the compliance standards you adhere to, you may be required to maintain certain pieces of data for a set period of time.

Meeting consumer requests for data management can be tricky depending on the compliance standards in your location and industry. A proper audit trail that proves your best attempt at compliance is critical, should a lawsuit or formal complaint ever come your way. Hopefully all of these processes will be automated through machine learning one day, but for now, notification and purge mechanisms, identity validation, and a comprehensive audit trail are the most important factors in proving compliance.
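The request flow described above can be sketched end to end: validate identity, check for compliance exceptions, purge when allowed, and record every decision in an audit trail. All function internals below are placeholders for real identity-verification and data-store operations.

```python
# Hedged sketch of a data-subject deletion request flow with an audit trail.
audit_trail = []

def handle_deletion_request(user_id: str, identity_verified: bool,
                            retention_hold: bool) -> str:
    """Decide the outcome of a deletion request and log it for auditors."""
    if not identity_verified:
        outcome = "rejected: identity not verified"
    elif retention_hold:
        outcome = "deferred: retention exception applies"
    else:
        outcome = "purged"  # real code would delete across all data stores
    audit_trail.append({"user": user_id, "outcome": outcome})
    return outcome

handle_deletion_request("u-42", identity_verified=False, retention_hold=False)
handle_deletion_request("u-42", identity_verified=True, retention_hold=True)
result = handle_deletion_request("u-42", identity_verified=True, retention_hold=False)
```

The key point is that every branch, including rejections and deferrals, produces an audit record: that trail is what proves a good-faith compliance attempt.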

3. Encryption

Data is the lifeblood of most organizations, and it needs to be protected. Simply put, encrypt everything! Additionally, make sure your identity and access management (IAM) policies are up to date. The most common vulnerabilities are problems with an organization’s IAM. There might be an abundance of keys spread across the business or keys might not have been rotated regularly. Unauthorized employees might have admin credentials, or there’s no incident response policy in place.
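Key rotation, one of the IAM gaps mentioned above, is simple to audit automatically: flag any access key older than the rotation window. The 90-day limit and key IDs below are illustrative policy choices, not requirements of any specific standard.

```python
# Illustrative key-rotation audit: flag access keys that have exceeded
# the organization's rotation window (ages expressed in days).
ROTATION_LIMIT_DAYS = 90  # common policy choice, assumed here

def stale_keys(key_ages: dict) -> list:
    """Return key IDs whose age in days exceeds the rotation limit."""
    return sorted(k for k, age in key_ages.items() if age > ROTATION_LIMIT_DAYS)

ages = {"AKIA-alpha": 200, "AKIA-beta": 30, "AKIA-gamma": 91}
overdue = stale_keys(ages)
```

A real audit would pull key metadata from the cloud provider's IAM API rather than a static dictionary, but the pass/fail logic is the same.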

If your data is breached, either by cyber attack or human error, you need a process to get servers back up and running again as soon as possible. You also need to preserve the evidence of the attack, or accidental deletion, in order to prevent a recurrence. Don’t assume your data is safe, instead, be ready to quickly recover from data loss.

While these three necessities are required for most compliance standards, there are certainly more you need to be following. Let 2nd Watch provide a prescriptive security roadmap to ensure compliance no matter where your business is going. You can also take advantage of our four-phased security assessment that runs an automated scan of your environment to identify vulnerabilities. Contact Us to make sure the next step you take in your cloud journey is a compliant one.

-Chris Garvey, EVP of Product


You’re on AWS, now what? Five things you should consider now.

You migrated your applications to AWS for a reason. Maybe it was for the unlimited scalability, powerful computing capability, ease and flexibility of deployment, movement from CapEx to OpEx model, or maybe it was simply because the boss told you to. However you got there, you’re there. So, what’s next? How do you take advantage of your applications and data that reside in AWS? What should you be thinking about in terms of security and compliance? Here are 5 things you should consider in order to amplify the value of being on AWS:

  1. Create competitive advantage from your AWS data
  2. Accelerate application development
  3. Increase the security of your AWS environment
  4. Ensure cloud compliance
  5. Reduce cloud spend without reducing application deployment

Create competitive advantage from your data

You have a wealth of information in the form of your AWS datasets. Finding patterns and insights not just within these datasets, but across all datasets is key to using data analysis to your advantage. You need a modern, cloud-native data lake.

Data lakes, though, can be difficult to implement and require specialized, focused knowledge of data architecture. Utilizing a cloud expert can help you architect and deploy a data lake geared toward your specific business needs, whether it’s making better-informed decisions, speeding up a process, reducing costs or something else altogether.

Download this datasheet to learn more about transforming your data analytics processes into a flexible, scalable data lake.

Accelerate application development

If you arrived at AWS to take advantage of the rapid deployment of infrastructure to support development, you understand the power of bringing applications to market faster. Now may be the time to fully immerse your company in a DevOps transformation.

A DevOps Transformation involves adopting a set of cultural values and organizational practices that improve business outcomes by increasing collaboration and feedback between business stakeholders, Development, QA, IT Operations, and Security. This includes an evolution of your company culture, automation and tooling, processes, collaboration, measurement systems, and organizational structure—in short, things that cannot be accomplished through automation alone.

To learn more about DevOps transformation, download this free eBook about the Misconceptions and Challenges of DevOps Transformation.

Increase the security of your AWS environment

How do you know if your AWS environment is truly secure? You don’t, unless you deploy a comprehensive security assessment of your AWS environment that measures your environment against the latest industry standards and best practices. This type of review provides a list of vulnerabilities and actionable remediations, an evaluation of your Incident Response Policy, and a comprehensive consultation of the system issues that are causing these vulnerabilities.

To learn more, review this Cloud Security Rapid Review document and learn how to gain protection from immediate threats.

Ensure cloud compliance

Deploying and managing cloud infrastructure requires new skills, software and management to maintain regulatory compliances within your organization. Without the proper governance in place, organizations can be exposed to security vulnerabilities and potentially compromise confidential information.

A partner like 2nd Watch can be a great resource in this area. The 2nd Watch Compliance Assessment and Remediation service is designed to evaluate, monitor, auto-remediate, and report on compliance of your cloud infrastructure, assessing industry standard policies including CIS, GDPR, HIPAA, NIST, PCI-DSS, and SOC2.

Download this datasheet to learn more about our Compliance Assessment & Remediation service.

Reduce cloud spend without reducing application deployment

Need to get control of your cloud spend without reducing the value that cloud brings to your business? This is a common discussion we have with clients. To reduce your cloud spend without decreasing the benefits of your cloud environment, we recommend examining the Pillars of Cloud Cost Optimization to prevent over-expenditure and wasted investment. The pillars include:

  • Auto-parking and on-demand services
  • Cost models
  • Rightsizing
  • Instance family / VM type refresh
  • Addressing waste
  • Shadow IT

For organizations that incorporate cloud cost optimization into their cloud infrastructure management, significant savings can be found, especially in larger organizations with considerable cloud spend.
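The auto-parking pillar above lends itself to straightforward automation: a schedule decides whether a non-production instance should be running at a given time. A minimal sketch, with hypothetical business hours and environment names:

```python
# Sketch of an auto-parking decision: keep production running always,
# and run non-production workloads only during business hours on workdays.
BUSINESS_HOURS = range(8, 18)  # 08:00-17:59 local time (assumed policy)
WORKDAYS = {0, 1, 2, 3, 4}     # Monday-Friday

def should_run(env: str, weekday: int, hour: int) -> bool:
    """Return True if an instance in the given environment should be running."""
    if env == "prod":
        return True  # production is never parked
    return weekday in WORKDAYS and hour in BUSINESS_HOURS
```

A scheduler evaluating this decision hourly and calling the provider's stop/start APIs can cut non-production compute spend substantially, since a Monday-to-Friday, 10-hour schedule runs roughly 50 of a week's 168 hours.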

Download our A Holistic Approach to Cloud Cost Optimization eBook to learn more.

After you’ve migrated to AWS, the next logical step in ensuring IT satisfies corporate business objectives is knowing what’s next for your organization in the cloud. Moving to the cloud was the right decision then and can remain the right decision going forward. Implement any of the five recommendations and accelerate your organization forward.

-Michael Elliott, Sr Director of Product Marketing


Cloud Crunch Podcast: Unraveling Cloud Security, Compliance and Regulations

Cloud compliance, cloud security…NOT the same thing. Victoria Geronimo, Security & Compliance Product Manager at 2nd Watch who also happens to have an internet law and internet policy background, joins us today as we look at how security, compliance, and state regulations affect architecting your cloud environment and the farther-reaching effects they have on business. We’d love to hear from you! Email us at CloudCrunch@2ndwatch.com with comments, questions and ideas. Listen now on Spotify, iTunes, iHeart Radio, Stitcher, or wherever you get your podcasts.


CCPA and the cloud

Since the EU introduced the General Data Protection Regulation (GDPR) in 2018, all eyes have been on the U.S. to see if it will follow suit. While a number of states have enacted data privacy statutes, the California Consumer Privacy Act (CCPA) is the most comprehensive U.S. state law to date. Entities were expected to be in compliance with CCPA as of January 1, 2020.

CCPA compliance requires entities to think about how the regulation will impact their cloud infrastructures and development of cloud-native applications. Specifically, companies must understand where personally identifiable information (PII) and other private data lives, and how to process, validate, complete, and communicate consumer information and consent requests.

What is CCPA and how to ensure compliance

CCPA gives California residents greater privacy rights over the data that companies collect about them. It applies to any business that has customers in California and that either has gross revenues over $25 million or acquires personal information from more than 50,000 consumers per year. It also applies to companies that earn more than half their annual revenue selling consumers’ personal information.

In order to ensure compliance, the first thing firms should look at is whether they are collecting PII, and if they are, ensuring they know exactly where it is going. CCPA not only mandates that California consumers have the right to know what PII is being collected, it also states that customers can dictate whether it’s sold or deleted. Further, if a company suffers a security breach, California consumers have the right to sue that company under the state’s data notification law. This increases the potential liability for companies whose security is breached, especially if their security practices do not conform to industry standards.

Regulations regarding data privacy are proliferating, and it is imperative that companies set up an infrastructure foundation that helps them evolve fluidly with these changes to the legal landscape, as opposed to “frankensteining” their environments to play catch-up. The first step is data mapping, in order to know where all consumer PII lives and, importantly, where California consumer PII lives. This requires geographic segmentation of the data. There are multiple tools, including cloud native ones, that empower companies with PII discovery and mapping. Secondly, organizations will need to have a data deletion mechanism in place and an audit trail for data requests, so that they can prove they have investigated, validated, and adequately responded to requests made under CCPA. The validation piece is also crucial – companies must make sure the individual requesting the data is who they say they are.

Thirdly, having an opt-in or opt-out system in place that allows consumers to consent to their data being collected in the first place is essential for any company doing business in California. If the website is targeted at children, there must be a specific opt-in request for any collection of California consumer data. These three steps must be backed by an audit trail that can validate each of them.
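The consent mechanism can be sketched as a small store that records every opt-in and opt-out event, so the audit trail captures the full history rather than only the current state. The in-memory model and user IDs below are illustrative.

```python
# Sketch of an opt-in/opt-out consent store with an append-only audit log.
consent = {}
audit_log = []

def set_consent(user_id: str, opted_in: bool) -> None:
    """Record a consent decision and keep the event in the audit log."""
    consent[user_id] = opted_in
    audit_log.append({"user": user_id, "opted_in": opted_in})

def may_collect(user_id: str) -> bool:
    """Treat the absence of any consent record as 'do not collect',
    a conservative default suitable where explicit opt-in is required."""
    return consent.get(user_id, False)

set_consent("u-1", True)
set_consent("u-1", False)  # user later opts out; both events remain logged
```

Because the log is append-only, a later opt-out never erases the earlier opt-in record, which is exactly what an auditor needs to reconstruct the consent history.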

The cloud

It’s here that we start to consider the impact on cloud journeys and cloud-native apps, as this is where firms can start to leverage tools that Amazon or Azure, for example, currently have, but that haven’t been integral for most businesses in a day-to-day context, until now. This includes AI-powered tools for data discovery, which will help companies know exactly where PII lives, so that they may efficiently comply with data subject requests.

Likewise, cloud infrastructures should be set up so that firms aren’t playing catch up later on when data privacy and security legislation is enacted elsewhere. For example, encrypt everything, as well as making sure access control permissions are up to date. Organizations must also prevent configuration drift with tools that will automate closing up a security gap or port if one gets opened during development.

For application development teams, it’s vital to follow security best practices, such as CIS benchmarks, NIST standards and the OWASP Top Ten. These teams will be getting the brunt of the workload in terms of developing website opt-out mechanisms, for example, so they must follow best practices and be organized, prepared, and efficient.

The channel and the cloud

For channel partners, there are a number of considerations when it comes to CCPA and the cloud. For one, partners who are in the business of infrastructure consulting should know how the legislation affects their infrastructure and what tools are available to set up a client with an infrastructure that can handle the requests CCPA mandates.

This means having data discovery tools in place, which can be accomplished with both cloud native versions and third party software. Also, making sure notification mechanisms are in place, such as email, or if you’re on Amazon, SNS (Simple Notification Service). Notification mechanisms will help automate responding to data subject requests. Additionally, logging must be enabled to establish an audit trail. Consistent resource tagging and establishing global tagging policies is integral to data mapping and quickly finding data. There’s a lot from an infrastructure perspective that can be done, so firms should familiarize themselves with tools that can facilitate CCPA compliance that may have never been used in this fashion, or indeed at all.

Ultimately, when it comes to CCPA, don’t sleep on it. GDPR went into effect less than two years ago, and already we have seen huge fines doled out to the likes of British Airways and Google for compliance failures. The EU has been aggressive about ensuring compliance, and California is likely to follow suit. They know that in order to give CCPA any teeth, they have to make sure they enforce it.

If you’re interested in learning more about how privacy laws might affect cloud development, watch our “CCPA: State Privacy Law Effects on Cloud Development” webinar on-demand, at your convenience.

– Victoria Geronimo, Product Manager – Security & Compliance


Leveraging the cloud for SOC 2 compliance

In a world of high profile attacks, breaches, and information compromises, companies that rely on third parties to manage and/or store their data sets are wise to consider a roadmap for their security, risk and compliance strategy. Failure to detect or mitigate the loss of data or other security breaches, including breaches of their suppliers’ information systems, could seriously expose a cloud user and their customers to a loss or misuse of information in such a harmful way that it becomes difficult to recover from. In 2018 alone, there were nearly 500 million records exposed from data breaches, according to the Identity Theft Resource Center’s findings, https://www.idtheftcenter.org/2018-end-of-year-data-breach-report/. While absolute security can never be attained while running your business, there are frameworks, tools, and strategies that can be applied to minimize the risks to acceptable levels while maintaining continuous compliance.

SOC 2 is one of those frameworks that is particularly beneficial in the Managed Services Provider space. It is a framework built on the AICPA’s Trust Services Principles (TSP) for service security, availability, confidentiality, processing integrity, and privacy. SOC 2 is well suited for a wide range of applications, especially in the cloud services space. Companies have realized that their security and compliance frameworks must stay aligned with the inherent changes that come along with cloud evolution. This includes making sure to stay abreast of developing capabilities and feature enhancements. For example, AWS announced a flurry of new services and features at its annual re:Invent conference in 2018 alone. When embedded into their cloud strategy, companies can use the common controls that SOC 2 offers to build the foundation for a robust Information Systems security program.

CISOs, CSOs, and company stakeholders must not take on the process of forming the company security strategy in a vacuum. Engaging core leaders in the organization, both at the management level and at the individual contributor level, should be part of the overall security development strategy, just as it is with successful innovation strategies. In fact, the security strategy should be integrated within the company innovation strategy. One of the best approaches to ensure this happens is to develop a steering committee with participation from all major divisions and/or groups. This is more effective with smaller organizations, where information can quickly flow vertically and horizontally; larger organizations simply need to ensure that the vehicles are in place to allow for a quick flow of information to all stakeholders.

Organizations with strong security programs have good controls in place to address each of the major domain categories under the Trust Service Principles. Each of the Trust Service Principles can be described through the controls that the company has established. Below are some ways that Managed Cloud Service providers like 2nd Watch meet the requirements for security, availability, and confidentiality while simultaneously lowering the overall risk to their business and their customers business:

Security

  • Change Management – Implement both internal and external system change management using effective ITSM tools to track, at a minimum, the change subject, descriptions, requester, urgency, change agent, service impact, change steps, evidence of testing, back-out plan, and appropriate stakeholder approvals.
  • End-User Security – Implement full-disk encryption for end-user devices, deploy centrally managed Directory Services for authorization, use multi-factor authentication, follow password/key management best-practices, use role based access controls, segregate permission using a least-user-privilege approach, and document the policies and procedures. These are all great ways towards securing environments fairly quickly.
  • Facilities – While “security of the cloud” falls under the responsibility of your cloud infrastructure provider, your Managed Services Provider should work to adequately protect their own, albeit not in scope, physical spaces. Door access badges, logs, and monitoring of entry/exit points are positive ways to prevent unauthorized physical entry.
  • AV Scans – Ensure that your cloud environments are built with AV scanning solutions.
  • Vulnerability Scans and Remediation – Ensure that your Managed Services Provider or third party provider is running regular vulnerability scans and performing prompt risk remediation. Independent testing of the provider’s environment will help to identify any unexpected risks so implementing an annual penetration test is important.
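The change-management control above can be enforced mechanically by rejecting tickets that are missing any required field. A sketch, with field names taken from the list and the ticket contents purely hypothetical:

```python
# Sketch of a change-ticket completeness check: an ITSM workflow would
# block approval until every required field is populated.
REQUIRED_FIELDS = {
    "subject", "description", "requester", "urgency", "change_agent",
    "service_impact", "change_steps", "testing_evidence", "back_out_plan",
    "approvals",
}

def incomplete_fields(ticket: dict) -> set:
    """Return the required fields that are missing or empty on a ticket."""
    return {f for f in REQUIRED_FIELDS if not ticket.get(f)}

ticket = {"subject": "Rotate TLS certs", "requester": "ops",
          "back_out_plan": "", "approvals": ["ciso"]}
missing = incomplete_fields(ticket)
```

Checks like this turn a written policy into a gate: a change simply cannot proceed without the evidence an auditor will later ask for.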

Availability

  • DR and Incident Escalations – Ensure that your MSP provider maintains current documented disaster recovery plans with at least annual exercises. Well thought-out plans include testing of upstream and downstream elements of the supply chain, including a plan for notifications to all stakeholders.
  • Risk Mitigation – Implement an annual formal risk assessment with a risk mitigation plan for the most likely situations.

Confidentiality

  • DLP – Implement ways and techniques to prevent data from being lost by unsuspecting employees or customers. Examples may include limiting use of external media ports to authorized devices, deprecating old cypher protocols, and blocking unsafe or malicious downloads.
  • HTTPS – Use secure protocols and connections for the safe transmission of confidential information.
  • Classification of Data – Make sure to identify elements of your cloud environment so that your Managed Service Providers or 3rd Parties can properly secure and protect those elements with a tagging strategy.
  • Emails – Use email encryption when sending any confidential information. Also, check with your own Legal department for proper use of your Confidentiality Statement at end of emails that are appropriate to your business.

By implementing these SOC 2 controls, companies can expect to have a solid security framework to build on. Regardless of their stage in the cloud adoption lifecycle, businesses must continue to demonstrate to their stakeholders (customers, board members, employees, shareholders) that they have a secure and compliant business. As with any successful customer-service provider relationship, the use of properly formed contracts and agreements comes into play. Without these elements in place and in constant use, it is difficult to evaluate how well a company is measuring up. This is where controls and a compliance framework like SOC 2 play a critical role.

Have questions on becoming SOC 2 compliant? Contact us!

– By Eddie Borjas, Director of Risk & Compliance


Continuous Compliance: Continuous Iteration

For most students, one of the most stressful experiences of their educational career is exam day.  Exams are a semi-public declaration of your ability to learn, absorb, and regurgitate the curriculum, and while the rewards for passing are rather mundane, the ramifications of failure are tremendous.  My anecdotal educational experience indicates that exam success is primarily due to preparation, with a fair bit of luck thrown in.  If you were like me in school, your exam preparation plan consisted mostly of cramming, with a heavy reliance on luck that the hours spent jamming material into your brain would cover at least 70% of the exam contents.

After I left my educational career behind and started down a new path in business technology, I was rather dismayed to find that the anxiety of testing and exams continued, but in the form of audits!  So much for the “we will never use this stuff in real life” refrain we students expressed in Calculus 3 class – exams and tests continue even when you’re all grown up.  Oddly enough, the recipe for audit success was remarkably similar: a heavy dose of preparation with a fair bit of luck thrown in.  Additionally, it seemed that many businesses also adhered to my cram-for-the-exam pattern.

Despite full knowledge and disclosure of the due dates and subject material, audit preparation largely consists of ignoring the audit until the last minute, followed by a flurry of activity, stress, anxiety, and panic, with a fair bit of hoping and wishing-upon-a-star that the auditors won’t dig too deeply. There must be a better way to prepare and execute (hint: there is)!

There are some key differences between school exams and business audits:

  • Audits are open-book: the subject matter details and success criteria are well-defined and well-known to everyone
  • Audits have subject matter and success criteria that remain largely unchanged from one audit to the next

Given these differences, it would seem logical that preparing for audits should be easy. We know exactly what the audit will cover, we know when it will happen, and we know what is required to pass.  If only it were that easy.  Why, then, do we still cram for the exam and wait until the last minute?  I think it comes down to these things:

  • Audits are important, just like everything else
  • The scope of the material seems too large
  • Our business memory is short

Let’s look at that last one first.  Audits tend to be infrequent, often with months or years going by before they come around again.  Like exam cramming, the main goal seems to be just getting over the finish line.  Once we are over it, we tend to forget what we learned and did, and our focus turns to other things.  Additionally, given the first two points above, the last-minute cram seems like the only way to deal with the task at hand.  Just get it done, and hope.

What if audits came more frequently – say, once a week?  Cramming would not be sustainable or realistic.  How could we possibly keep up?

Iteration.

Iteration is, by definition, a repetitive process that intends to produce a series of outcomes.  Both simple and complex problems can often be attacked and solved by iteration:

  • Painting a dark-colored room in a lighter color
  • Digging a hole with a shovel
  • Building a suspension bridge
  • Attempting to crack an encrypted string
  • Achieving a defined compliance level in complex IT systems

Note that last one: achieving audit compliance within your IT ecosystem can be an iterative process, and it doesn’t have to be compressed into the 5 days before the audit is due.

The iteration (repetitive process) is simple: define a goal, identify and recognize, notify and remediate, then analyze and report – and repeat.

The scope and execution of the iteration is where things tend to break down.  Successful iteration starts with defining and setting realistic goals. When in doubt, keep the goals small!  The idea is to achieve the goal repeatedly and quickly, while refining the process to improve the results.

Define

We need to clearly define what we are trying to achieve.  Start big-picture and then drill down into something much smaller and achievable.  This accomplishes two things: 1) it builds confidence that we can do this, and 2) we can later “drill up” and tackle a similar problem using the same pattern.   A basic example of starting big-picture and drilling down to an achievable goal: big picture – “maintain access-control compliance”; drilled down – “monitor failed network user logons.”

Identify and Recognize

Given that we are going to monitor failed user logons, we need a way to do it.  There are manual ways to achieve this, but since we will be doing this over and over, it clearly needs to be automated.  Here is where tooling comes into play.  Spend some time identifying tools that can help with log aggregation and management, and then find a way to automate the monitoring of failed network user authentication logs.
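As a sketch of what such automation might look like, the snippet below aggregates failed logons from sshd-style auth log lines. The log format is an assumption used for illustration; a real log-management tool would normalize many different sources:

```python
import re
from collections import Counter

# Illustrative sketch: count failed network logons per user from
# syslog-style sshd lines. The log format here is an assumption.
FAILED = re.compile(r"Failed password for (?:invalid user )?(\S+)")

def failed_logons_by_user(lines):
    """Aggregate failed-logon counts per username from raw log lines."""
    counts = Counter()
    for line in lines:
        m = FAILED.search(line)
        if m:
            counts[m.group(1)] += 1
    return counts

sample = [
    "Oct 3 10:01:02 host sshd[811]: Failed password for jdoe from 10.0.0.5",
    "Oct 3 10:01:09 host sshd[811]: Failed password for jdoe from 10.0.0.5",
    "Oct 3 10:02:14 host sshd[815]: Failed password for invalid user guest from 10.0.0.9",
    "Oct 3 10:03:00 host sshd[820]: Accepted password for asmith from 10.0.0.7",
]

print(failed_logons_by_user(sample))  # Counter({'jdoe': 2, 'guest': 1})
```

The point is not the regex but the repeatability: once this runs on a schedule against aggregated logs, the requirement is monitored continuously rather than reconstructed at audit time.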

Notify and Remediate

Now that we have an automated way to aggregate and manage failed network user authentication logs, we need to look at our (small and manageable) defined goal and perform the necessary notifications and remediations to meet the requirement.  Again, this will need to be repeated over and over, so spend some time identifying automated tools that can help with this process.
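A minimal sketch of the notify step, assuming an invented per-user failure threshold and message format (neither comes from a specific audit requirement):

```python
# Toy sketch of automated notification for the notify/remediate step.
# The threshold and message format are assumptions for illustration.

def build_notifications(failed_counts, threshold=5):
    """Return alert messages for users exceeding the failed-logon threshold."""
    return [
        f"ALERT: {user} had {count} failed logons (threshold {threshold})"
        for user, count in failed_counts.items()
        if count >= threshold
    ]

msgs = build_notifications({"jdoe": 7, "asmith": 1}, threshold=5)
print(msgs)  # ['ALERT: jdoe had 7 failed logons (threshold 5)']
```

In practice these messages would feed a ticketing or paging system so the remediation itself is tracked and repeatable.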

Analyze and Report

Now that we are meeting the notification and remediation requirements in a repeatable, automated fashion, we need to analyze and report on the effectiveness of our remedy, make necessary improvements to the process based on that analysis, and then repeat!
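The define/identify/notify/analyze cycle described above can be sketched as one automated loop. Every function body here is a placeholder; a real pipeline would call your log-aggregation, ticketing, and reporting tools instead:

```python
# Hedged sketch of the define -> identify -> notify/remediate -> analyze loop.
# All step bodies are invented placeholders standing in for real tooling.

def define_goal():
    return "alert on excessive failed network logons per user"

def identify(goal):
    # stand-in for automated log aggregation and detection
    return [{"user": "jdoe", "failed_logons": 7}]

def notify_and_remediate(findings):
    # stand-in for automated ticketing/paging
    return [f"ticket opened for {f['user']}" for f in findings]

def analyze(actions):
    # stand-in for reporting; results feed the next iteration
    return {"actions_taken": len(actions)}

def run_iteration():
    goal = define_goal()
    findings = identify(goal)
    actions = notify_and_remediate(findings)
    return analyze(actions)

print(run_iteration())  # {'actions_taken': 1}
```

Scheduling `run_iteration` weekly (or nightly) is what turns an annual cram into continuous compliance: each run produces the evidence the audit will ask for.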

Now that we have one iterative and automated process in place that meets and remedies an audit requirement, there is one less thing that needs to be addressed and handled when the audit comes around.  We know that this one requirement is satisfied, and we have the process, analysis, and reports to prove it.  No more cramming for this particular compliance requirement; we are now handling it continuously.

Now, what about the other 1,000 audit requirements?   As the saying goes, “How do you eat an elephant (or a Buick)?  One bite at a time.”  You need the courage to start, and from there every bite gets you one step closer to the goal.

Keys to achieving Continuous Compliance include:

  • You must start somewhere. Pick something!
  • Start big-picture, then drill down to something small and achievable.
  • Automation is a must!

For help getting started on the road to continuous compliance, contact us.

-Jonathan Eropkin, Cloud Consultant


Cloud Autonomics and Automated Management and Optimization

Autonomic systems are an exciting arena within cloud computing, though the underlying technology is by no means new. Automation, orchestration and optimization have been alive and well in the datacenter for almost a decade. Microsoft System Center, IBM Tivoli and ServiceNow are just a few examples of platforms that collect, analyze and act on sensor data derived from physical and virtual infrastructure and appliances.

Autonomic Cloud Capabilities

Autonomic cloud capabilities are lighting up quickly across the cloud ecosystem. These systems can monitor infrastructure, services and applications and make decisions to support remediation and healing, failover and failback, and snapshot and recovery. The capabilities come from workflow creation and runbook and playbook development, which support a broad range of insight, action and corrective policy enforcement.

In the compliance world, we are seeing many great companies come into the mix to bring autonomic-type functionality to the world of security and compliance.

Evident is a great example of a technology that functions with autonomic-type capabilities. The product can do some amazing things in terms of automation and action. It provides visibility across the entire cloud platform and identifies and manages risk associated with the operation of core cloud infrastructure and applications within an organization.

Using signatures and control insight, as well as custom-defined controls, it can determine exploitable and vulnerable systems at scale and report the current state of risk within an organization. On face value that is not autonomic; however, the next phase it performs is why it is a great example of autonomics in action.

After analyzing the current vulnerability and risk landscape, it reports the current state of risk and derives a set of guided remediations. These can be performed manually against the infrastructure in question, or automated so that remediation happens proactively and hands-off, ensuring vulnerabilities and security compliance can always be maintained.
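The guided-versus-automated remediation split can be illustrated with a small triage sketch. The finding types, remediation actions, and approval policy below are invented for illustration and are not Evident's actual API:

```python
# Hedged sketch of triaging findings into automated vs. manual remediation.
# Finding types, actions, and the approval policy are invented examples.

REMEDIATIONS = {
    "open_security_group": "restrict inbound rule to known CIDRs",
    "unencrypted_bucket": "enable default encryption",
}
AUTO_APPROVED = {"unencrypted_bucket"}  # policy: safe to fix hands-off

def triage(findings):
    """Split findings into hands-off (automated) and human-reviewed (manual) queues."""
    automated, manual = [], []
    for f in findings:
        action = REMEDIATIONS.get(f["type"], "escalate to security team")
        queue = automated if f["type"] in AUTO_APPROVED else manual
        queue.append((f["resource"], action))
    return automated, manual

auto, manual = triage([
    {"type": "unencrypted_bucket", "resource": "s3://logs"},
    {"type": "open_security_group", "resource": "sg-123"},
])
print("auto:", auto)
print("manual:", manual)
```

The policy set (`AUTO_APPROVED`) is the key design choice: it encodes which classes of fixes an organization trusts the system to apply without a human in the loop.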

Enhance Autonomic Cloud Operations

Moving away from Evident, the focus going forward is a marriage of many capabilities to enhance autonomic cloud operations. Operations management systems in the cloud will light up advanced artificial intelligence (AI) and machine learning (ML) capabilities that take in large amounts of sensor data across many cloud-based technologies and services and derive analysis, insight and proactive remediation – not just for security compliance, but across the board for cloud stabilization, core operations and optimization.

CloudHealth Technologies and many others in the cloud management platform space are looking deeply into how to turn that sensor data into core cloud optimization via automation.

AIOps is a term growing year over year, and it fits well to describe how autonomic systems have evolved from the datacenter to the cloud. Gartner is looking deeply into this space, and we at 2nd Watch see promising advancement coming from companies like Palo Alto Networks with their native security platform capabilities along with Evident for continuous compliance and security.

MoogSoft is bringing a next-generation IT incident management platform to the cloud. Its artificial intelligence capabilities for IT operations help DevOps teams operate smarter, faster and more effectively by automating traditional IT operations tasks, freeing IT engineers to work on the important business-level needs of the organization rather than day-to-day operations. By bringing intelligence to the response to systems issues and challenges, IT operations teams become more agile and better able to solve mission-critical problems and maintain a proactive, highly optimized enterprise cloud.

As we move forward, expect to see more AI- and ML-based functionality move into the core cloud management platforms. Cloud ISVs will leverage ever more sensor data to determine response, action and resolution, and this will become tightly coupled to the virtual machine topology and the cloud-native services underlying all cloud providers.

It is an exciting time for autonomic systems capabilities in the cloud, and we are excited to help customers realize the many potential capabilities and benefits which can help automate, orchestrate and proactively maintain and optimize your core cloud infrastructure.

2nd Watch Autonomic Systems

To learn more about autonomic systems and capabilities, check out Gartner’s AIOps research and reach out to 2nd Watch. We would love to help you realize the potential of these technologies in your cloud environment today!

-Peter Meister, Sr Director of Product Management