
Leveraging the cloud for SOC 2 compliance

In a world of high-profile attacks, breaches, and information compromises, companies that rely on third parties to manage and/or store their data are wise to build a roadmap for their security, risk, and compliance strategy. Failure to detect or mitigate the loss of data or other security breaches, including breaches of a supplier’s information systems, can expose a cloud user and its customers to a loss or misuse of information so harmful that it becomes difficult to recover. In 2018 alone, nearly 500 million records were exposed in data breaches, according to the Identity Theft Resource Center’s findings (https://www.idtheftcenter.org/2018-end-of-year-data-breach-report/). Absolute security can never be attained while running a business, but there are frameworks, tools, and strategies that can minimize risk to acceptable levels while maintaining continuous compliance.

SOC 2 is one framework that is particularly beneficial in the Managed Service Provider space. It is built on the AICPA’s Trust Services Principles (TSP) for service security, availability, confidentiality, processing integrity, and privacy. SOC 2 is well suited to a wide range of applications, especially cloud services. Companies have realized that their security and compliance frameworks must stay aligned with the constant change that comes with cloud evolution, which means staying abreast of developing capabilities and feature enhancements; AWS, for example, announced a flurry of new services and features at its 2018 re:Invent conference alone. When SOC 2 is embedded into the cloud strategy, companies can use its common controls to build the foundation for a robust information systems security program.

CISOs, CSOs, and company stakeholders must not form the company security strategy in a vacuum. Engaging core leaders across the organization, both at the management level and at the individual contributor level, should be part of the overall security development strategy, just as it is with successful innovation strategies. In fact, the security strategy should be integrated with the company innovation strategy. One of the best ways to ensure this happens is to form a steering committee with participation from all major divisions and/or groups. This is easier in smaller organizations, where information can quickly flow vertically and horizontally; larger organizations simply need to ensure that vehicles are in place to move information quickly to all stakeholders.

Organizations with strong security programs have good controls in place to address each of the major domain categories under the Trust Services Principles, and each principle can be described through the controls the company has established. Below are some ways that managed cloud service providers like 2nd Watch meet the requirements for security, availability, and confidentiality while simultaneously lowering the overall risk to their business and their customers’ businesses:

Security

  • Change Management – Implement both internal and external system change management using effective ITSM tools to track, at a minimum, the change subject, descriptions, requester, urgency, change agent, service impact, change steps, evidence of testing, back-out plan, and appropriate stakeholder approvals.
  • End-User Security – Implement full-disk encryption for end-user devices, deploy centrally managed Directory Services for authorization, use multi-factor authentication, follow password/key management best practices, use role-based access controls, segregate permissions using a least-privilege approach, and document the policies and procedures. These measures go a long way toward securing environments fairly quickly.
  • Facilities – While security of the cloud environment falls to your cloud infrastructure provider, your Managed Services Provider should still adequately protect its own, albeit out-of-scope, physical spaces. Door access badges, logs, and monitoring of entry/exit points are effective ways to prevent unauthorized physical entry.
  • AV Scans – Ensure that your cloud environments are built with AV scanning solutions.
  • Vulnerability Scans and Remediation – Ensure that your Managed Services Provider or third-party provider runs regular vulnerability scans and performs prompt risk remediation. Independent testing of the provider’s environment helps identify unexpected risks, so an annual penetration test is important.

Availability

  • DR and Incident Escalations – Ensure that your MSP maintains current, documented disaster recovery plans with at least annual exercises. Well-thought-out plans include testing of upstream and downstream elements of the supply chain, including a plan for notifying all stakeholders.
  • Risk Mitigation – Implement an annual formal risk assessment with a risk mitigation plan for the most likely situations.

Confidentiality

  • DLP – Implement techniques to prevent data from being lost by unsuspecting employees or customers. Examples include limiting the use of external media ports to authorized devices, deprecating old cipher protocols, and blocking unsafe or malicious downloads.
  • HTTPS – Use secure protocols and connections for the safe transmission of confidential information.
  • Classification of Data – Identify the elements of your cloud environment so that your Managed Service Providers or third parties can properly secure and protect them with a tagging strategy (see the sketch after this list).
  • Emails – Use email encryption when sending any confidential information. Also, check with your Legal department for proper use of a confidentiality statement at the end of emails, as appropriate to your business.
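
For instance, a minimal sketch of classification tagging in Terraform (assuming AWS; the tag keys and values here are illustrative, not a standard):

    resource "aws_s3_bucket" "customer_data" {
      bucket = "example-customer-data"   # hypothetical bucket holding confidential data

      tags = {
        DataClassification = "Confidential"      # tells the provider how the data must be handled
        DataOwner          = "data-engineering"  # who to ask about handling questions
      }
    }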

By implementing these SOC 2 controls, companies can expect to have a solid security framework to build on. Regardless of their stage in the cloud adoption lifecycle, businesses must continue to demonstrate to their stakeholders (customers, board members, employees, shareholders) that they run a secure and compliant business. As with any successful customer-service provider relationship, properly formed contracts and agreements come into play; without these elements in place and in constant use, it is difficult to evaluate how well a company is measuring up. This is where controls and a compliance framework like SOC 2 play a critical role.

Have questions on becoming SOC 2 compliant? Contact us!

– By Eddie Borjas, Director of Risk & Compliance


Continuous Compliance: Continuous Iteration

For most students, one of the most stressful experiences of their educational career is exam day.  Exams are a semi-public declaration of your ability to learn, absorb, and regurgitate the curriculum, and while the rewards for passing are rather mundane, the ramifications of failure are tremendous.  My anecdotal educational experience indicates that exam success is primarily due to preparation, with a fair bit of luck thrown in.  If you were like me in school, your exam preparation plan consisted mostly of cramming, with a heavy reliance on luck that the hours spent jamming material into your brain would cover at least 70% of the exam contents.

After I left my education career behind me and started down a new path in business technology, I was rather dismayed to find out that the anxiety of testing and exams continued, but in the form of audits!  So much for the “we will never use this stuff in real life” refrain that we students expressed in Calculus 3 class – exams and tests continue even when you’re all grown up.  Oddly enough, the recipe for audit success was remarkably similar: a heavy dose of preparation with a fair bit of luck thrown in.  Additionally, it seemed that many businesses adhered to my cram-for-the-exam pattern.  Despite full knowledge and disclosure of the due dates and subject material, audit preparation largely consisted of ignoring it until the last minute, followed by a flurry of activity, stress, anxiety, and panic, with a fair bit of hoping and wishing-upon-a-star that the auditors wouldn’t dig too deeply. There must be a better way to prepare and execute (hint: there is)!

There are some key differences between school exams and business audits:

  • Audits are open-book: the subject matter details and success criteria are well-defined and well-known to everyone
  • Audits have subject matter and success criteria that remain largely unchanged from one audit to the next

Given these differences, it would seem logical that preparing for audits should be easy. We know exactly what the audit will cover, we know when it will happen, and we know what is required to pass.  If only it were that easy.  Why, then, do we still cram for the exam and wait until the last minute?  I think it comes down to these things:

  • Audits are important, just like everything else
  • The scope of the material seems too large
  • Our business memory is short

Let’s look at that last one first.  Audits tend to be infrequent, often with months or years going by before they come around again.  Like exam cramming, it seems that our main goal is to get over the finish line.  Once we are over that finish line, we tend to forget all about what we learned and did, and our focus turns to other things.  Additionally, the last-minute cram seems to be the only way to deal with the task at hand, given the first two points above.  Just get it done, and hope.

What if our annual audits were more frequent – say, once a week?  Cramming would be neither sustainable nor realistic.  How could we possibly achieve this?

Iteration.

Iteration is, by definition, a repetitive process that intends to produce a series of outcomes.  Both simple and complex problems can often be attacked and solved by iteration:

  • Painting a dark-colored room in a lighter color
  • Digging a hole with a shovel
  • Building a suspension bridge
  • Attempting to crack an encrypted string
  • Achieving a defined compliance level in complex IT systems

Note that last one: achieving audit compliance within your IT ecosystem can be an iterative process, and it doesn’t have to be compressed into the 5 days before the audit is due.

The iteration (repetitive process) is simple: define a small goal, identify and recognize the relevant activity, notify and remediate, analyze and report, and then repeat.

The scope and execution of the iteration is where things tend to break down.  The key to successful iterations is defining and setting realistic goals; when in doubt, keep the goals small!  The idea is to achieve the goal repeatedly and quickly, with the ability to refine the process and improve the results.

Define

We need to clearly define what we are trying to achieve.  Start big-picture and then drill down into something much smaller and achievable.  This accomplishes two things: 1) it builds some confidence that we can do this, and 2) once done, we can “drill up” and tackle a similar problem using the same pattern.   Here is a basic example of starting big-picture and drilling down to an achievable goal: “demonstrate compliant access controls” drills down to “monitor failed network user logons, and notify and remediate when they occur.”

Identify and Recognize

Given that we are going to monitor failed user logons, we need a way to do this.  There are manual ways to achieve it but, given that we will be doing this over and over, it obviously needs to be automated.  Here is where tooling comes into play.  Spend some time identifying tools that can help with log aggregation and management, and then find a way to automate the monitoring of failed network user authentication logs.
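
As a sketch of what that automation might look like in Terraform – assuming AWS, with authentication logs already shipped to a hypothetical CloudWatch Logs group named /auth/logons, and a filter pattern that depends entirely on your log format:

    resource "aws_cloudwatch_log_metric_filter" "failed_logons" {
      name           = "failed-network-user-logons"
      log_group_name = "/auth/logons"          # hypothetical group fed by your log aggregator
      pattern        = "\"Failed password\""   # adjust to match your authentication log format

      # Each matching log event increments a custom metric we can alarm on.
      metric_transformation {
        name      = "FailedLogonCount"
        namespace = "Compliance"
        value     = "1"
      }
    }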

Notify and Remediate

Now that we have an automated way to aggregate and manage failed network user authentication logs, we need to look at our (small and manageable) defined goal and perform the necessary notifications and remediations to meet the requirement.  Again, this will need to be repeated over and over, so spend some time identifying automated tools that can help with this process.
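
Continuing the sketch above, notification can be automated by alarming on the custom metric and publishing to an SNS topic that reaches whoever performs the remediation (the threshold and names are illustrative):

    resource "aws_sns_topic" "compliance_alerts" {
      name = "compliance-alerts"   # subscribe email/pager endpoints to this topic
    }

    resource "aws_cloudwatch_metric_alarm" "failed_logons" {
      alarm_name          = "failed-network-user-logons"
      namespace           = "Compliance"
      metric_name         = "FailedLogonCount"
      statistic           = "Sum"
      period              = 300   # evaluate in 5-minute windows
      evaluation_periods  = 1
      threshold           = 5     # notify on 5 or more failures in a window
      comparison_operator = "GreaterThanOrEqualToThreshold"
      alarm_actions       = [aws_sns_topic.compliance_alerts.arn]
    }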

Analyze and Report

Now that we are meeting the notification and remediation requirements in a repeatable and automated fashion, we need to analyze and report on the effectiveness of our remedy and, based on the analysis, make necessary improvements to the process, and then repeat!

Now that we have one iterative and automated process in place that meets and remedies an audit requirement, there is one less thing that needs to be addressed and handled when the audit comes around.  We know that this one requirement is satisfied, and we have the process, analysis, and reports to prove it.  No more cramming for this particular compliance requirement, we are now handling it continuously.

Now, what about the other 1,000 audit requirements?   As the saying goes, “How do you eat an elephant (or a Buick)?  One bite at a time.”  You need the courage to start, and from there every bite gets you one step closer to the goal.

Keys to achieving Continuous Compliance include:

  • You must start somewhere. Pick something!
  • Start big-picture, then drill down to something small and achievable.
  • Automation is a must!

For help getting started on the road to continuous compliance, contact us.

-Jonathan Eropkin, Cloud Consultant


Continuous Compliance – Automatically Detect and Report Vulnerabilities in your Cloud Enterprise

Customers are wrestling with many challenges in managing security at scale across the enterprise. As they embrace more cloud capabilities across more providers, managing compliance becomes daunting.

The landscape of tools and providers is endless, and customers are combining traditional enterprise tools with cloud-native tools to try to achieve security baselines within their enterprise.

At 2nd Watch we have a strong partnership with Palo Alto Networks, which provides truly enterprise-grade security to our customers across a very diverse enterprise landscape – datacenter, private cloud, public cloud and hybrid – across AWS, Azure and Google Cloud Platform.

Palo Alto Networks recently acquired a brilliant company – Evident.io. Evident.io is well known for providing monitoring, compliance and security posture management to organizations across the globe. Evident.io provides continuous compliance across AWS and Azure and brings strong compliance coverage for HIPAA, ISO 27001, NIST 800-53, NIST 800-171, PCI and SOC 2.

The key to continuous compliance lies in centralizing monitoring, reporting and insight into one console dashboard where you can see, in real time, the core health and state of your cloud enterprise.

This starts with gaining core knowledge of your environment’s current health state. You must audit, assess and report on where you currently stand. Knowing your current state reveals the areas you need to correct and opens insight into compliance challenges. Evident.io automates this process, providing continuous visibility and control of infrastructure security along with customizable workflow and orchestration, so clients can tune the solution to fit specific organizational needs and requirements easily and effectively.

After gaining insight into your current state of compliance, you must work on ways to remediate and efficiently maintain compliance moving forward. Evident.io provides a rich set of real-time alerting and workflow functionality, enabling automated alerting, remediation and enforcement. It continuously monitors security and stores the data it collects in the Evident Security Platform, eliminating manual review and enabling rich reporting and insight into current and future state. Out of the box, Evident.io reports across a broad range of compliance areas, helping organizations demonstrate compliance quickly, address existing gaps, and mitigate risk moving forward.

Evident.io works through APIs on AWS and Azure in a read-only posture. This provides a non-intrusive and effective approach to core system and resource insight without the burden of heavy agent deployment and configuration. The Evident Security Platform acquires this data securely through APIs and analyzes it against core compliance baselines and security best practices to ensure gaps in enterprise security are corrected and risk is reduced.

Continuous compliance requires continuous delivery. As clients embrace the cloud and its capabilities, it becomes more important than ever to institute solutions that keep pace with continuous software delivery. The speed of the cloud requires a new approach to core security and compliance – one that provides automation, orchestration and rich reporting to reduce the day-to-day burden of managing compliance at scale in your cloud enterprise.

If you are not familiar with Evident.io, check them out at http://evident.io, and reach out to us at 2nd Watch for help realizing your potential for continuous compliance in your organization.

-Peter Meister, Sr Director of Product Management


Assessing your cloud environment and maintaining a baseline security posture

The cloud is an exciting place, where enterprise applications and workloads can be implemented and executed instantly and at scale. As we move to the cloud, we tend to use the existing applications and operating system images that are baked into the cloud service provider’s catalog. Baseline images and cloud virtual machines are readily available, and we quickly use them to put our infrastructure in motion. This presents some security challenges, as we tend to deploy first and secure second – not ideal from a proactive security stance.

Continuous deployment is becoming the norm. In the cloud we now update and change our systems many times a day. New features become available and new ISV applications and services roll out continuously, and we leverage them rapidly.

To maintain a healthy cloud state for our infrastructure and applications, we need to focus on managing toward a baseline configuration and desired state for our systems. We need to monitor, maintain and continuously be aware of our systems’ state of health and configuration at any given time.

Clients are evolving so quickly in the cloud that they end up playing catch-up on the security side of their environment. One way to support a proactive level of security is to perform continuous vulnerability assessment and remediation on a regular basis to ensure all areas of your cloud environment meet a security baseline.

The cloud service providers do a lot of heavy lifting to help you achieve a proactive security architecture, and they provide many layers of tools and technologies toward that end.

The first area to focus on is your machine image library. Maintaining a secure baseline image across the catalog of your infrastructure and applications ensures you can track and maintain those image states per operational role. When a cataloged healthy-state image undergoes an unapproved change, you become instantly aware of it and can roll back to a healthy image state, undo any compromise to image integrity, and return to a secure and compliant operating state.

AWS has a great tool that helps us manage configuration state: AWS Config allows you to audit, evaluate and assess configuration baselines of your cloud resources continuously. It provides strong monitoring of your AWS resource configurations, supports evaluating that state against your secure baseline, and helps ensure your desired state is maintained and recovered. It lets you quickly review changes to configuration state and how that state changes across the broad set of resources that make up your infrastructure or application.

It provides deep insight and detail on resources, their historical records and configuration, and it supports strong compliance capabilities tailored to your organization’s resource guidelines.
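
As a small example of putting this to work, a Terraform-managed AWS Config rule like the following (a sketch that assumes a configuration recorder is already enabled in the account) continuously flags S3 buckets that drift from an encrypted baseline:

    resource "aws_config_config_rule" "s3_encryption" {
      name = "s3-bucket-server-side-encryption-enabled"

      # AWS-managed rule; evaluated continuously against recorded configuration state.
      source {
        owner             = "AWS"
        source_identifier = "S3_BUCKET_SERVER_SIDE_ENCRYPTION_ENABLED"
      }
    }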

Utilizing tools such as AWS Config is a strong first step on the path to continuous security baseline protection and managing against vulnerabilities and unwarranted changes to your production resources. Beyond AWS Config, AWS provides a strong suite of tools and technologies to help achieve a more comprehensive security baseline. Check out AWS Systems Manager, whose common SSM agent across Windows and Linux supports continuously scanning and maintaining resource health and information – not to mention it’s a great overall agent for core IT operations management in AWS. And we can maintain strong operational health of our cloud resources with Amazon CloudWatch.

One recent addition for security is Amazon GuardDuty, which provides intelligent threat detection and continuous monitoring to protect your AWS accounts and workloads. Amazon GuardDuty is directly accessible through the AWS console and, with a few clicks, can be fully implemented to help protect your workloads proactively.

If you need assistance in achieving strong continuous security for your AWS cloud environment, 2nd Watch offers services to help clients assess and strengthen the security posture of their cloud environment today. We can also help create and customize a vulnerability assessment and remediation strategy with long-standing benefits in continuously achieving strong defense in depth in the cloud.

-Peter Meister, Sr Director of Product Management


Governance, Risk and Compliance – Drive Change Across the Organization

Governance, Risk and Compliance (GRC) is a standard framework that helps drive organizations toward a common set of goals and principles. The overarching theme is strategically focused on how technology utilization and operations tie directly back to an organization’s business goals and, in many cases, aspirations.

There are many facets to GRC. In the cloud it means the same thing it did in the datacenter: we need to ensure IT organizes around the business, and we need to make sure risk is minimized and compliance is maintained.

At 2nd Watch we work with clients across all areas of GRC. Clients focus on each area to varying degrees, and some areas matter more depending on the vertical the client operates in.

The cloud extends beyond the physical bounds of an organization, which introduces new challenges and requires a shared responsibility model: the CSP is responsible for the setup and physical maintenance of the underlying cloud infrastructure. We work with our cloud ISVs’ and providers’ tools, technologies and best practices to help maintain strong governance and lower risk while meeting compliance.

The landscape of software, tools and solutions for governance, risk and compliance in the cloud marketplace is daunting. 2nd Watch focuses on providing holistic GRC support to our clients. We believe there are fantastic capabilities directly inside the cloud management portals to help customers along the journey to a strong GRC framework.

In Microsoft Azure we can utilize Compliance Manager, a workflow-based assessment tool that enables organizations to track, assign and verify regulatory compliance procedures and activities for Microsoft Cloud technologies – including Office 365 and Dynamics. It supports ISO 27001, ISO 27018 and NIST, along with regulatory compliance around HIPAA and GDPR.  It is a foundational tool within Microsoft Azure for achieving strong governance, risk and compliance around Microsoft Cloud technologies.

With Amazon Web Services we have a complete set of core cloud operations management tools in the AWS console to help bolster governance and security and reduce risk. Amazon provides a full, prescriptive set of compliance quick-reference guides with an overview of how to maintain a compliant cloud environment through strong security and controls validation, plus insight and monitoring for activity and security assurance.

Amazon has a complete Cloud Compliance Center where clients can tap into an abundant set of resources to help along the way.

Beyond the tools, both Microsoft Azure and AWS provide strategic support with partners around compliance. There are many accelerators and programs that organizations can request from Amazon and Microsoft to help them achieve and maintain GRC specifically tuned to the cloud.

GRC is unique to each organization. Cloud providers bring a substantial set of resources and technologies, along with great prescriptive guidance and best practices to help and guide you in achieving a strategic GRC framework and set of processes and procedures in your organization.

Take advantage of these built-in capabilities as you start to look at other tools and technologies to complete your holistic approach to governance, risk and compliance, and please reach out to 2nd Watch to find out how we can support you along the way.

-Peter Meister, Sr Director of Product Management


How to Use AWS IAM with STS for access to AWS resources

With increased focus on security and governance in today’s digital economy, I want to highlight a simple but important use case that demonstrates how to use AWS Identity and Access Management (IAM) with Security Token Service (STS) to give trusted AWS accounts access to resources that you control and manage.

Security Token Service is an extension of IAM and one of several AWS web services that incur no cost to use.  But, unlike IAM, there is no user interface on the AWS console to manage and interact with STS; rather, all interaction is done entirely through one of several extensive SDKs or directly using common HTTP protocol.  I will be using Terraform to create some simple resources in my sandbox account and the .NET Core SDK to demonstrate how to interact with STS.

The main purpose and function of STS is to issue temporary security credentials for AWS resources to trusted and authenticated entities.  These credentials operate identically to the long-term keys that typical IAM users have, with a couple of special characteristics:

  • They automatically expire and become unusable after a short and defined period of time elapses
  • They are issued dynamically

These characteristics offer several advantages in terms of application security and development and are useful for cross-account delegation and access.  STS solves two problems for owners of AWS resources:

  • Meets the IAM best-practices requirement to regularly rotate access keys
  • Eliminates the need to distribute access keys to external entities or store them within an application

One common scenario where STS is useful involves sharing resources between AWS accounts.  Let’s say, for example, that your organization captures and processes data in S3, and one of your clients would like to push large amounts of data from resources in their AWS account to an S3 bucket in your account in an automated and secure fashion.


While you could create an IAM user for your client, your corporate data policy requires that you rotate access keys on a regular basis, and this introduces challenges for automated processes.  Additionally, you would like to avoid distributing access keys for your resources to external entities.  Let’s use STS to solve this!

To get started, let’s create some resources in your AWS cloud.  Do you even Terraform, bro?

Let’s create a new S3 bucket and set the bucket ACL to be private, meaning nobody but the bucket owner (that’s you!) has access.  Remember that bucket names must be unique across all existing buckets, and they should comply with DNS naming conventions.  Here is the Terraform HCL syntax to do this:

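A minimal version, using the bucket name and region that appear later in this post (and an AWS provider version that still accepts the inline acl argument), might look like this:

    provider "aws" {
      region = "us-west-2"
    }

    resource "aws_s3_bucket" "xaccount_bucket" {
      bucket = "d4h2123b9-xaccount-bucket"   # must be globally unique and DNS-compliant
      acl    = "private"                     # only the bucket owner has access
    }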

Great! We now have a bucket… but for now, only the owner can access it.  This is a good start from a security perspective (i.e. “least permissive” access).

[Screenshot: what an empty bucket may look like]

Let’s create an IAM role that, once assumed, will allow IAM users with access to this role to have permissions to put objects into our bucket.  Roles are a secure way to grant trusted entities access to your resources.  You can think about roles in terms of a jacket that an IAM user can wear for a short period of time, and while wearing this jacket, the user has privileges that they wouldn’t normally have when they aren’t wearing it.  Kind of like a bright yellow Event Staff windbreaker!

For this role, we will specify that users from our client’s AWS account are the only ones that can wear the jacket. This is done by including the client’s AWS account ID in the Principal statement.  AWS Account IDs are not considered to be secret, so your client can share this with you without compromising their security.  If you don’t have a client but still want to try this stuff out, put your own AWS account ID here instead.

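A sketch of the role, with a placeholder account ID (111122223333) standing in for your client’s:

    resource "aws_iam_role" "sts_delegate" {
      name = "sts-delegate-role"

      # Trust policy: only principals in the client's AWS account may assume (wear) this role.
      assume_role_policy = <<-EOF
      {
        "Version": "2012-10-17",
        "Statement": [
          {
            "Effect": "Allow",
            "Principal": { "AWS": "arn:aws:iam::111122223333:root" },
            "Action": "sts:AssumeRole"
          }
        ]
      }
      EOF
    }
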
Great, now we have a role that our trusted client can wear.  But right now our client can’t do anything except wear the jacket.  Let’s give the jacket some special powers, such that anyone wearing it can put objects into our S3 bucket.  We will do this by creating a security policy that specifies exactly what can be done to the bucket it references – the bucket we want our client to use – and attaching it to the role.  Here is the Terraform syntax to accomplish this:
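A sketch of that policy, using the resource names from the snippets above (note that the IAM condition key s3:x-amz-acl corresponds to the x-amz-acl request header):

    resource "aws_iam_role_policy" "put_objects" {
      name = "xaccount-put-objects"
      role = aws_iam_role.sts_delegate.id

      # Allow PutObject into our bucket only when the uploader grants us full control of the object.
      policy = <<-EOF
      {
        "Version": "2012-10-17",
        "Statement": [
          {
            "Effect": "Allow",
            "Action": "s3:PutObject",
            "Resource": "${aws_s3_bucket.xaccount_bucket.arn}/*",
            "Condition": {
              "StringEquals": { "s3:x-amz-acl": "bucket-owner-full-control" }
            }
          }
        ]
      }
      EOF
    }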

A couple of things to note about this snippet: first, we are using Terraform interpolation to inject values from previous Terraform statements into the policy – specifically, references to the role and bucket we created previously.  Second, we are specifying a condition for the S3 policy – one that requires a specific object ACL for the action s3:PutObject, which is accomplished by requiring the HTTP request header x-amz-acl to have a value of bucket-owner-full-control on the PUT object request.  By default, objects PUT in S3 are owned by the account that created them, even when stored in someone else’s bucket.  For our scenario, this condition requires your client to explicitly grant ownership of objects placed in your bucket to you; otherwise the PUT request will fail.

So, now we have a bucket, a policy scoped to that bucket, and a role carrying the policy.  Now your client needs to get to work writing some code that will allow them to assume the role (wear the jacket) and start putting objects into your bucket.  Your client will need to know a couple of things from you before they get started:

  1. The bucket name and the region it was created in (the example above created a bucket named d4h2123b9-xaccount-bucket in us-west-2)
  2. The ARN for the role (Terraform can output this for you). It will look something like this but will have your actual AWS Account ID: arn:aws:iam::123456789012:role/sts-delegate-role

They will also need to create an IAM User in their account and attach a policy allowing the user to assume roles via STS.  The policy will look similar to this:

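Something along these lines, with your actual account ID in the role ARN:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": "sts:AssumeRole",
          "Resource": "arn:aws:iam::123456789012:role/sts-delegate-role"
        }
      ]
    }
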
Let’s help your client out a bit and provide some C# code snippets for .NET Core 2.0 (available for Windows, macOS and Linux). To get started, install the .NET SDK for your OS, then fire up a command prompt in a favorite directory and run these commands:
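Assuming the current AWSSDK package names on NuGet, the commands look like this:

    dotnet new console -o s3cli
    cd s3cli
    dotnet add package AWSSDK.Core
    dotnet add package AWSSDK.SecurityToken
    dotnet add package AWSSDK.S3
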
The first command will create a new console app in the subdirectory s3cli.  Then switch context to that directory and import the AWS SDK for .NET Core, and then add packages for SecurityToken and S3 services.
Once you have the libraries in place, fire up your favorite IDE or text editor (I use Visual Studio Code), then open Program.cs and add some code:
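A sketch of the STS call – this belongs inside an async method, and roleArn holds the role ARN you shared with the client:

    using Amazon.SecurityToken;
    using Amazon.SecurityToken.Model;

    // Ask STS to assume the delegate role. The SDK's default credential
    // chain supplies the IAM user's long-term keys for this call.
    var stsClient = new AmazonSecurityTokenServiceClient();
    var assumeRoleResponse = await stsClient.AssumeRoleAsync(new AssumeRoleRequest
    {
        RoleArn = roleArn,                        // e.g. arn:aws:iam::123456789012:role/sts-delegate-role
        RoleSessionName = "xaccount-put-session"  // shows up in the resource owner's CloudTrail logs
    });
    Credentials tempCredentials = assumeRoleResponse.Credentials;  // short-lived keys plus a session token
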
This snippet sends a request to STS for temporary credentials using the specified ARN.  Note that the client must provide IAM user credentials to call STS, and that IAM user must have a policy applied that allows it to assume a role from STS.

This next snippet takes the STS credentials, bucket name, and region name, and then uploads the Program.cs file that you’re editing and assigns it a random key/name.  Also note that it explicitly applies the Canned ACL that is required by the sts-delegate-role:
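A sketch of the upload; tempCredentials comes from the previous snippet, and bucketName holds the bucket name you shared with the client:

    using System;
    using Amazon;
    using Amazon.S3;
    using Amazon.S3.Model;

    // Build an S3 client from the temporary STS credentials and upload the file.
    var s3Client = new AmazonS3Client(tempCredentials, RegionEndpoint.USWest2);
    await s3Client.PutObjectAsync(new PutObjectRequest
    {
        BucketName = bucketName,                         // e.g. d4h2123b9-xaccount-bucket
        Key = Guid.NewGuid().ToString(),                 // random key/name
        FilePath = "Program.cs",                         // the file you're editing
        CannedACL = S3CannedACL.BucketOwnerFullControl   // required by the role's policy condition
    });
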
So, to put this all together, run this code block and make the magic happen!  Of course, you will have to define and provide proper variable values for your environment, including securely storing your credentials.
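Stitched together, a minimal Program.cs might look like the following, with environment variables standing in for the values you would define and store securely:

    using System;
    using System.Threading.Tasks;
    using Amazon;
    using Amazon.S3;
    using Amazon.S3.Model;
    using Amazon.SecurityToken;
    using Amazon.SecurityToken.Model;

    namespace s3cli
    {
        class Program
        {
            // .NET Core 2.0 console apps default to a synchronous Main.
            static void Main(string[] args) => RunAsync().GetAwaiter().GetResult();

            static async Task RunAsync()
            {
                // Environment-specific values, kept out of source control.
                var roleArn    = Environment.GetEnvironmentVariable("ROLE_ARN");
                var bucketName = Environment.GetEnvironmentVariable("BUCKET_NAME");

                // 1. Assume the role and obtain temporary credentials.
                var sts = new AmazonSecurityTokenServiceClient();
                var assumed = await sts.AssumeRoleAsync(new AssumeRoleRequest
                {
                    RoleArn = roleArn,
                    RoleSessionName = "xaccount-put-session"
                });

                // 2. Upload with the temporary credentials and the required canned ACL.
                var s3 = new AmazonS3Client(assumed.Credentials, RegionEndpoint.USWest2);
                await s3.PutObjectAsync(new PutObjectRequest
                {
                    BucketName = bucketName,
                    Key = Guid.NewGuid().ToString(),
                    FilePath = "Program.cs",
                    CannedACL = S3CannedACL.BucketOwnerFullControl
                });

                Console.WriteLine("Upload complete.");
            }
        }
    }
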
Try it out from the command prompt:
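From the s3cli directory:

    dotnet run
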
If all goes well, you will have a copy of Program.cs in the bucket. Not very useful in itself, but it illustrates how to accomplish the task.
[Screenshot: what a bucket with something in it may look like]

Here is a high-level diagram of what we put together:

[Diagram: putting it all together]

Steps:

  1. Your client uses their IAM user to call AWS STS, requesting to assume the role whose ARN you gave them
  2. STS authenticates the client’s IAM user, verifies the trust policy on the role, and issues temporary credentials to the client.
  3. The client uses the temporary credentials (they will expire soon) to access your S3 bucket – and since they are now wearing the Event Staff jacket, they can successfully PUT stuff in your bucket!

There are many other use-cases for STS. This is just one very simplistic example. However, with this brief introduction to the concepts, you should now have a decent idea of how STS works with IAM roles and policies, and how you can use STS to give access to your AWS resources for trusted entities. For more tips like this, contact us.

-Jonathan Eropkin, Cloud Consultant
