Tuesday, June 6, 2023

What is AWS Security Token Service (STS)?

 https://www.hava.io/blog/what-is-aws-security-token-service-sts


AWS STS is an AWS service that lets you request temporary security credentials for your AWS resources, both for IAM-authenticated users and for externally authenticated (federated) users, such as those signing in via OpenID Connect or SAML 2.0.

You use STS to provide trusted users with temporary access to resources via API calls, the AWS console or the AWS command line interface (CLI).

The temporary security credentials work exactly like the regular long-term access key credentials allocated to IAM users, except that their lifecycle is shorter.

Typically, an application makes an API request to the AWS STS endpoint for credentials. These access keys are not stored with the user; they are dynamically generated by STS when the request is made. The STS-generated credentials eventually expire, at which point the user can request new ones as long as they still have permission to do so.

Once the generated credentials expire they cannot be reused, which reduces the risk of having your resource access compromised and removes the need to embed security tokens within your code.

The STS token lifetime is determined by you and can be anywhere from 15 minutes to 36 hours, depending on the STS API used.

AWS STS security tokens are typically used for identity federation, providing cross-account access and for resources related to EC2 instances that require access by other applications.

Identity Federation Use-Case

Using AWS STS you can grant access to AWS resources for users that have already been authenticated on your enterprise network. This enterprise identity federation removes the need to create new AWS identities, and no new login credentials are required.

External web identities can be authenticated by a third-party online identity provider like Amazon, Google, Facebook or any other OpenID Connect-compatible service. This web identity federation also removes the need to distribute long-term security credentials to facilitate access to your AWS resources.

Enterprise federation can use a number of authentication approaches, such as single sign-on (SSO), and supports open standards like Security Assertion Markup Language (SAML 2.0). With SAML you can use Microsoft Active Directory Federation Services (AD FS) if you are running Microsoft AD, or build your own authentication service.

Cross-Account Access using AWS STS

Many organisations maintain multiple AWS accounts and can use IAM identities and cross-account roles to allow users from one account to access resources in another. Once the permissions are delegated to an IAM user, this trust relationship can be used to request temporary access via AWS STS credentials.

EC2 Instance STS Credentials

If you have applications running on an EC2 instance that require access to AWS resources, you can create temporary access credentials using AWS STS when the EC2 instance is launched. To do that, the EC2 instance will need to be associated with an IAM role to allow the application to request credentials. Once the security credentials are granted, they are available to all applications hosted on the EC2 instance so you do not need to store any long-term security credentials in the instance. 

AWS STS EXAMPLE

In this example we’ll set up a new AWS user with no specific permissions, then create a role that can be assumed via STS and carries read-only S3 bucket permissions.

We’ll then try to access an S3 bucket from the AWS CLI before and after connecting to the profile with STS enabled.

First of all we need to set up a new AWS user by going into Console > Services > IAM then hitting the Add users button:

AWS_STS_1

Then name the user and set the access type to Programmatic access. In this example we'll name our user sts-user.

AWS_STS_2

On the next screen, which covers permissions and adding the user to a group, you can skip ahead without setting any permissions.

AWS_STS_3

You can also skip adding tags and advance to the review page:

AWS_STS_4

Click Create user. When the success page is displayed, copy the Access key ID and the Secret access key into a text document.

AWS_STS_5v2

Then return to the IAM users page and open the new user just created.

AWS_STS_6

Copy the user ARN (Amazon Resource Name) and add that to your text document.

AWS_STS_7

Now we need to create a new role. Navigate to the Roles dashboard from the navigation bar on the left-hand side of the IAM console, then click “Create role”.

AWS_STS_8

The next step is to select the type of trusted entity that you want to grant permissions to - ultimately, the service or user that will be making the API calls to STS for temporary access credentials.

These options are:

  • AWS service - like EC2 or Lambda, where an application will be requesting credentials
  • Another AWS account - this can be another AWS account ID, or the account you are currently using to set up the new role
  • Web identity - where Amazon Cognito or another OpenID Connect provider handles authentication
  • SAML 2.0 federation - like your corporate Active Directory

For the purposes of this example, we’ll use ‘Another AWS account’ and enter in the account ID that we’re using to set up the role.

AWS_STS_9

The next step is to attach the permission policies you wish to allow this role to perform.

In this example, we want to grant access to read S3 buckets, so we'll attach the AmazonS3ReadOnlyAccess policy.

AWS_STS_10

Next you can add tags if you wish, or skip them.

Then advance to the Review page and name the role and add a description.

AWS_STS_11

Once you are happy with the review page, create the role.

AWS_STS_12

Open up the role and copy the role ARN to the text notepad for future reference.

AWS_STS_13

Now open up the Trust relationships tab and edit the trust relationship.

By default the trust relationship is set to trust the root account. You need to change this so the role trusts the user we set up (sts-user), by replacing the ARN in the policy with the user ARN we copied to the text document.

AWS_STS_14
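
For reference, the edited trust policy should end up looking something like this (the account ID below is a placeholder - substitute the user ARN you copied to your text document):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::123456789012:user/sts-user"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```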

Then update the trust policy. Now return to the Users console, open up the new user and select the “Add Inline Policy” option.

AWS_STS_15

Now select “Policy Generator” - this is where we define the connection to the AWS Security Token Service (STS).

Here we set:

Effect - set to Allow

AWS Service - set to AWS Security Token Service (STS)

Actions - Assume Role

ARN - the ARN of the role we created and noted down in the text document.

AWS_STS_16
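
The generated inline policy should come out roughly like the following (the account ID and role name are placeholders - use the role ARN you noted down):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Resource": "arn:aws:iam::123456789012:role/sts_role"
    }
  ]
}
```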

Then create the policy and we are ready to go.

In this example we’ll use the AWS CLI to access an S3 bucket called “425magfiles” using sts-user, which will generate access tokens on the fly to gain the read-only permissions required.

From a terminal window we then:

aws configure --profile sts-user

Enter the Access Key ID (paste from your text document)

Enter the Secret Access Key (paste from your text document)

Set the default region - in this example us-east-1

Set the output format - to json

AWS_STS_17
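
Behind the scenes, aws configure simply writes these values to two plain-text files under ~/.aws/ - roughly as follows, with placeholder key values:

```ini
# ~/.aws/credentials
[sts-user]
aws_access_key_id = AKIAEXAMPLEKEYID
aws_secret_access_key = examplesecretkey

# ~/.aws/config
[profile sts-user]
region = us-east-1
output = json
```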

And then we’ll set the profile variable using:

export AWS_PROFILE=sts-user

AWS_STS_18

Now if we attempt to access an S3 bucket, because there are no explicit permissions granted to sts-user, the request fails.

AWS_STS_19

This is because we haven’t assumed the role required to grant the read permissions using STS.

To do this we use the command : 

aws sts assume-role --role-arn arn:aws:iam::xxxxxxxxxxxx:role/sts_role --role-session-name "Session1"

AWS_STS_20

Now, to make the current CLI session use the credentials returned by assume-role, we need to set three environment variables using the export command:

export AWS_SECRET_ACCESS_KEY=<SecretAccessKey>

export AWS_SESSION_TOKEN=<SessionToken>

export AWS_ACCESS_KEY_ID=<AccessKeyId>

The values can be extracted from the JSON returned by the sts assume-role command shown above.

AWS_STS_21
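
Copying the three values by hand is fiddly, so here is a small sketch that automates the extraction. The JSON below is a stand-in for a real assume-role response (in practice you would redirect the assume-role output to the file instead), and python3 is used for parsing since it ships with most systems. Note that AWS_SESSION_TOKEN is the variable name the current AWS CLI reads for the session token.

```shell
# Stand-in for the JSON returned by assume-role; in a real session capture it with:
#   aws sts assume-role --role-arn <role-arn> --role-session-name "Session1" > creds.json
cat > creds.json <<'EOF'
{"Credentials": {"AccessKeyId": "ASIAEXAMPLEKEYID",
                 "SecretAccessKey": "examplesecretkey",
                 "SessionToken": "exampletoken",
                 "Expiration": "2023-06-06T12:00:00Z"}}
EOF

# Pull a single field out of the Credentials object using only the Python stdlib
get_cred() {
  python3 -c "import json; print(json.load(open('creds.json'))['Credentials']['$1'])"
}

export AWS_ACCESS_KEY_ID="$(get_cred AccessKeyId)"
export AWS_SECRET_ACCESS_KEY="$(get_cred SecretAccessKey)"
export AWS_SESSION_TOKEN="$(get_cred SessionToken)"

echo "Using key: $AWS_ACCESS_KEY_ID"
```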

Now the CLI session has the permissions granted by STS in play and we can interrogate the contents of our S3 bucket.

AWS_STS_22

Success!

So there we have it, a quick run through AWS Security Token Service and an example use case using the AWS CLI.

If you are building on AWS and are still drawing AWS VPC diagrams manually we would like to invite you to experience a better way. Hava fully automates the generation and updating of your AWS, Azure and GCP network infrastructure diagrams. Using Hava you can free yourself from drag and drop forever while enjoying clear and accurate cloud network topology diagrams whenever you need them.

You can take Hava for a test drive using the button below. No credit card required

How To Upgrade Nginx In-Place Without Dropping Client Connections


Introduction

Nginx is a powerful web server and reverse proxy that is used to serve many of the most popular sites in the world. In this guide, we’ll demonstrate how to upgrade the Nginx executable in place, without losing client connections.

Prerequisites

Before beginning this guide, you should have a non-root user on your server, configured with sudo privileges. You will also need to have Nginx installed.

You can follow our initial server setup guide for Ubuntu 22.04 and then install Nginx on that server.

How the Upgrade Works

Nginx works by spawning a master process when the service starts. The master process, in turn, spawns one or more worker processes that handle the actual client connections. Nginx is designed to perform certain actions when it receives specific low-level signals from the system. Using these signals provides you with the opportunity to upgrade Nginx or its configuration in-place, without losing client connections.

Nginx’s provided installation and upgrade scripts are designed to send these signals when starting, stopping, and restarting Nginx. However, sending these signals manually allows you to audit the upgrade and revert quickly if there are problems. This will also provide an option to upgrade gracefully if you have installed Nginx from source or are not relying on your package manager to configure the service.

The following signals will be used:

  • USR2: This spawns a new set of master/worker processes without affecting the old set.
  • WINCH: This tells the Nginx master process to gracefully stop its associated worker instances.
  • HUP: This tells an Nginx master process to re-read its configuration files and replace worker processes with those adhering to the new configuration. If an old and new master are running, sending this to the old master will spawn workers using their original configuration.
  • QUIT: This shuts down a master and its workers gracefully.
  • TERM: This initiates a fast shutdown of the master and its workers.
  • KILL: This immediately kills a master and its workers without any cleanup.
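
Putting the signals together, the whole upgrade flow covered in this guide can be sketched as a dry-run script. This is a review aid, not something to run blindly: the helper only prints each kill command rather than executing it, and the PIDs are the example values used later in this guide (16653 for the old master, 16699 for the new one).

```shell
# Dry-run sketch of the signal sequence used in this guide.
# run() only prints the command; drop the echo to execute for real.
run() { echo "would run: $*"; }

OLD=16653   # example PID of the old master (ends up in /run/nginx.pid.oldbin)
NEW=16699   # example PID of the new master (ends up in /run/nginx.pid)

run sudo kill -s USR2 "$OLD"    # 1. spawn a second master/worker set with the new binary
run sudo kill -s WINCH "$OLD"   # 2. gracefully stop the old master's workers
                                # 3. ...test the new workers...
run sudo kill -s QUIT "$OLD"    # 4a. success: retire the old master

# Rollback path, instead of step 4a:
# run sudo kill -s HUP "$OLD"   # old master respawns workers with the old config
# run sudo kill -s QUIT "$NEW"  # shut down the new master and its workers
```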

Finding Nginx Process PIDs

In order to send signals to the various processes, we need to know the PID for the target process. There are two ways to find this.

First, you can use the ps utility and then grep for Nginx among the results. This allows you to see the master and worker processes:

ps aux | grep nginx

Output
root     16653  0.0  0.2 119160  2172 ?      Ss   21:48   0:00 nginx: master process /usr/sbin/nginx
nginx    16654  0.0  0.9 151820  8156 ?      S    21:48   0:00 nginx: worker process
sammy    16688  0.0  0.1 221928  1164 pts/0  S+   21:48   0:00 grep --color=auto nginx

The second column contains the PIDs for the matched processes. The last column clarifies that the first result is an Nginx master process.

Another way to find the PID for the master Nginx process is to print out the contents of the /run/nginx.pid file:

cat /run/nginx.pid

Output
16653

If there are two Nginx master processes running, the old master's PID will be stored in /run/nginx.pid.oldbin.

Spawn a New Nginx Master/Workers Set

The first step to gracefully updating is to actually update your Nginx package and/or binaries. Do this using whatever method is appropriate for your Nginx installation, whether through a package manager or a source installation.

After the new binary is in place, you can spawn a second set of master/worker processes that use the new executable.

You can do this by sending the USR2 signal directly to the PID number you queried (make sure to substitute the PID of your own Nginx master process here):

sudo kill -s USR2 16653

Or, you can read and substitute the value stored in your PID file directly into the command, like this:

sudo kill -s USR2 `cat /run/nginx.pid`

If you check your running processes, you will see that you now have two sets of Nginx masters and workers:

ps aux | grep nginx

Output
root     16653  0.0  0.2 119160  2172 ?      Ss   21:48   0:00 nginx: master process /usr/sbin/nginx
nginx    16654  0.0  0.9 151820  8156 ?      S    21:48   0:00 nginx: worker process
root     16699  0.0  1.5 119164 12732 ?      S    21:54   0:00 nginx: master process /usr/sbin/nginx
nginx    16700  0.0  0.9 151804  8008 ?      S    21:54   0:00 nginx: worker process
sammy    16726  0.0  0.1 221928  1148 pts/0  R+   21:55   0:00 grep --color=auto nginx

You can also see that the original /run/nginx.pid file has been moved to /run/nginx.pid.oldbin and the newer master process’s PID has been written to /run/nginx.pid:

tail -n +1 /run/nginx.pid*

Output
==> /run/nginx.pid <==
16699

==> /run/nginx.pid.oldbin <==
16653

You can now send signals to either of the master processes using the PIDs contained in these files.

At this point, both master/worker sets are operational and capable of serving client requests. The first set is using the original Nginx executable and configuration and the second set is using the newer versions. They can continue to operate side-by-side, but for consistency, we should start to transition to the new set.

Shut Down the First Master’s Workers

In order to begin the transition to the new set, the first thing to do is stop the original master’s worker processes. The original workers will finish up handling all of their current connections and then exit.

Stop the original set’s workers by issuing the WINCH signal to their master process:

sudo kill -s WINCH `cat /run/nginx.pid.oldbin`

This will let the new master’s workers handle new client connections alone. The old master process will still be running, but with no workers:

ps aux | grep nginx

Output
root     16653  0.0  0.2 119160  2172 ?      Ss   21:48   0:00 nginx: master process /usr/sbin/nginx
root     16699  0.0  1.5 119164 12732 ?      S    21:54   0:00 nginx: master process /usr/sbin/nginx
nginx    16700  0.0  0.9 151804  8008 ?      S    21:54   0:00 nginx: worker process
sammy    16755  0.0  0.1 221928  1196 pts/0  R+   21:56   0:00 grep --color=auto nginx

This lets you audit the new workers as they accept connections in isolation.

Evaluate the Outcome and Take the Next Steps

You should test and audit your system at this point to make sure that there are no signs of problems. You can leave your configuration in this state for as long as you wish to ensure that the new Nginx executable is bug-free and able to handle your traffic.

Your next step will depend entirely on whether you encounter problems.

If Your Upgrade was Successful

If you experienced no issues with your new set’s workers, you can safely shut down the old master process. To do this, send the old master the QUIT signal:

sudo kill -s QUIT `cat /run/nginx.pid.oldbin`

The old master process will exit gracefully, leaving only your new set of Nginx master/workers. At this point, you’ve successfully performed an in-place binary update of Nginx without interrupting client connections.

If Your Upgrade was Unsuccessful

If your new set of workers seems to be having problems, you can transition back to the old configuration and binary. This is possible as long as you have not sent the QUIT signal to the older master process.

The best way to do this is to restart your old master’s workers by sending it the HUP signal. Usually, when you send an Nginx master the HUP signal, it will re-read its configuration files and start new workers. However, when the target is an older master, it will spawn new workers using its original, working configuration:

sudo kill -s HUP `cat /run/nginx.pid.oldbin`

You now should be back to having two sets of master/worker processes:

ps aux | grep nginx

Output
root     16653  0.0  0.2 119160  2172 ?      Ss   21:48   0:00 nginx: master process /usr/sbin/nginx
nginx    16654  0.0  0.9 151820  8156 ?      S    21:48   0:00 nginx: worker process
root     16699  0.0  1.5 119164 12732 ?      S    21:54   0:00 nginx: master process /usr/sbin/nginx
nginx    16700  0.0  0.9 151804  8008 ?      S    21:54   0:00 nginx: worker process
sammy    16726  0.0  0.1 221928  1148 pts/0  R+   21:55   0:00 grep --color=auto nginx

The newest workers are associated with the old master. Both worker sets will be accepting client connections at this point. Now, stop the newer, buggy master process and its workers by sending the QUIT signal:

sudo kill -s QUIT `cat /run/nginx.pid`

You should be back to your old master and workers:

ps aux | grep nginx

Output
root     16653  0.0  0.2 119160  2172 ?      Ss   21:48   0:00 nginx: master process /usr/sbin/nginx
nginx    16654  0.0  0.9 151820  8156 ?      S    21:48   0:00 nginx: worker process
sammy    16688  0.0  0.1 221928  1164 pts/0  S+   21:48   0:00 grep --color=auto nginx

The original master will regain the /run/nginx.pid file for its PID.

If this does not work for any reason, you can try sending the new master the TERM signal, which should initiate a fast shutdown. This should stop the new master and any workers while prompting the old master to start its worker processes again. If there are serious problems and the buggy workers are not exiting, you can send each of them a KILL signal to clean up. This should be a last resort, however, as it will cut off active connections.

After transitioning back to the old binary, remember that you still have the new version installed on your system. You should remove the buggy version and roll back to your previous version so that Nginx will run without issues on reboot.

Conclusion

By now, you should be able to seamlessly transition your machines from one Nginx binary to another. Nginx’s ability to handle two master/workers sets while maintaining information about their relationships provides us with the ability to upgrade server software without taking the server machines offline.
