
Introduction
Not surprisingly, Amazon AWS has a number of security layers surrounding the resources it offers. It’s sometimes tricky to figure out how to open just enough privileges and ports to allow transferring files and other objects.
This tutorial steps you through getting set up to transfer files to/from AWS EC2 instances and S3 buckets:
- grant an IAM Role with an S3 Policy attached (for example, AmazonS3FullAccess or AmazonS3ReadOnlyAccess)
- assign a Security Group with at least one port opened for S3 traffic. The S3-supported protocols are:
- HTTP (port 80)
- HTTPS (port 443)
* S3 buckets in this post refer to secure S3 buckets … those that are not wide open to the public. If your EC2 instance only needs to access S3 buckets that are open to the public, the IAM Role is not necessary.
Prerequisites
You’ll need an AWS Account (here for instructions) and AWS CLI installed on your machine (here for installation instructions). The Tutorial is free tier eligible.
Remember to delete the Resources when done with this Tutorial.
Steps
This Tutorial demonstrates two methods for creating AWS Resources: AWS Console and AWS CLI (command line interface). Choose one, or try both to determine your preference:
AWS Console
With the Amazon EC2 console’s launch wizard, we can create EC2-dependent objects such as roles, security groups and key pairs ‘on the fly’. I prefer creating these objects before starting an EC2 launch.
1. Key Pair
In order to connect to the EC2 instance, where we’ll test copying files back/forth to an S3 bucket, you need a Key Pair. If you don’t have one yet, create one:
- In the AWS EC2 Management Console > Key Pairs > Create key pair > enter a Name that you’ll remember > Create key pair
- For File format, choose the format in which to save the private key. To save the private key in a format that can be used with OpenSSH, choose pem. To save the private key in a format that can be used with PuTTY, choose ppk.
- A file will be downloaded, with the name of your key pair.
2. S3 Bucket
Using the AWS Console, create an S3 Bucket:
- Go to the AWS S3 Management Console > Create bucket > enter a unique Bucket name > select a Region > accept all defaults (to Block all public access) > Create bucket
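If you prefer the command line, the same bucket setup can be sketched with the AWS CLI (the bucket name below is an example; S3 bucket names are globally unique, so choose your own):

```shell
# Create the bucket (name is a placeholder; pick your own unique name/region)
aws s3 mb s3://my-unique-tutorial-bucket --region us-west-2

# Block all public access, matching the console defaults
aws s3api put-public-access-block \
  --bucket my-unique-tutorial-bucket \
  --public-access-block-configuration \
  BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true
```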
3. IAM Role
In order for your EC2 instance to access files (or other objects) on S3, the instance needs to be granted privileges to do so. Give it privileges through a Role that has the AmazonS3FullAccess Policy assigned to it.
Create an IAM Role with full S3 access:
- In the AWS IAM Management Console > Roles > Create role > AWS service > EC2 > Next: Permissions > Filter for ‘AmazonS3FullAccess’ the click box next to it > Next: Tags > Next: Review > enter Role name = ‘S3FullAccess’ > Create role
4. Security Group
A Security Group acts as a firewall, controlling traffic to/from an EC2 instance.
Create a Security Group to authorize specific traffic to/from an EC2 instance. Do this by opening ports and applying rule(s) to the port to allow traffic in/out. You’ll also open a port to the Security Group for SSH (connecting) to the EC2 instance from your local machine.
- In the AWS EC2 Management Console > Security Groups > Create security group
- Set both Security group name and Description = ‘EC2-S3-SecurityGroup’
- In the Inbound rules section, add 2 rules:
- Type = HTTP ; Source = Custom 0.0.0.0/0
- Type = SSH ; Source = My IP
- Click Create security group
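The equivalent CLI commands look roughly like this (the group name matches the console steps above; the SSH source IP is an example you should replace with your own):

```shell
# Create the security group
aws ec2 create-security-group \
  --group-name EC2-S3-SecurityGroup \
  --description EC2-S3-SecurityGroup

# Rule 1: HTTP from anywhere
aws ec2 authorize-security-group-ingress \
  --group-name EC2-S3-SecurityGroup \
  --protocol tcp --port 80 --cidr 0.0.0.0/0

# Rule 2: SSH from your IP only (123.45.67.89 is a placeholder)
aws ec2 authorize-security-group-ingress \
  --group-name EC2-S3-SecurityGroup \
  --protocol tcp --port 22 --cidr 123.45.67.89/32
```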
5. EC2 Instance
Launch an EC2 instance (or use an existing, running instance.) Assign to it the IAM Role, Security Group and Key Pair you’ve prepared for it.
- In the AWS EC2 Management Console > Instances > Launch Instance > click Free tier only > Select any AMI, the first one listed is fine > verify the Type is marked Free tier eligible > Next: Configure Instance Details > Select IAM role = ‘S3FullAccess’ (the role you just created)
- Next: Add Storage > Next: Add Tags > Next: Configure Security Group
- Click Select an existing security group > Select ‘EC2-S3-SecurityGroup’, the one you just created
- Review and Launch > Launch > Select the key pair you just created, and “I acknowledge …” > Launch Instances > View Instances
- Now wait until the Instance State = ‘running‘
- Click Connect for instructions on how to connect to the new EC2 instance via SSH (which we’ll be doing for our Test).
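The connect command from that dialog will look something like the sketch below (the key file name and public DNS name are placeholders; ec2-user is the default user on Amazon Linux AMIs and may differ on other AMIs):

```shell
# Copy the real host name from the instance's Connect dialog
ssh -i mytestkey.pem ec2-user@ec2-12-34-56-78.us-west-2.compute.amazonaws.com
```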
Now you’re ready to Test!
AWS CLI
I prefer using the AWS CLI (command line interface) with CloudFormation for managing AWS resources. For more information about CloudFormation: Getting Started with AWS CloudFormation.
1. Key Pair
In order to connect to the EC2 instance, where we’ll test copying files back/forth to an S3 bucket, you need a Key Pair. If you don’t have one yet, create one. This command will create a key pair and put the key file in your current directory (you may choose a different key-name and/or location):
aws ec2 create-key-pair --key-name mytestkey \
  --query 'KeyMaterial' --output text > mytestkey.pem
Change the key file’s permissions:
chmod 400 mytestkey.pem
2. Template Parameters
On your local machine using your favorite editor, create a CloudFormation template called myteststack.yml.
Add Parameters to the template (copy example code below.) For simplicity, let’s create default values so we don’t have to create a separate parameter file for this tutorial. (In ‘real life’, we’d use a parameter file so our template would not have to change each time we reuse it.)
You may need to change mytestkey if you are using a different key pair name.
---
AWSTemplateFormatVersion: '2010-09-09'
Parameters:
  KeyName:
    Description: Keyname to use for EC2 instance(s)
    Type: AWS::EC2::KeyPair::KeyName
    Default: mytestkey
  SSHLocation:
    Description: Who can SSH to EC2 instances
    Type: String
    Default: 0.0.0.0/0 # everyone
3. S3 Bucket
Add an S3 bucket Resource to the template file:
Resources:
  # Create S3 bucket, closed to the public
  S3Bucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Sub "${AWS::StackName}-bucket-${AWS::AccountId}" # this should be a unique name
      PublicAccessBlockConfiguration: # Block public access
        BlockPublicAcls: true
        IgnorePublicAcls: true
        BlockPublicPolicy: true
        RestrictPublicBuckets: true
4. IAM Role
In order for your EC2 instance to access files (or other objects) on S3, the instance needs to be granted privileges to do so.
Define an IAM Role with full S3 privileges, then create a Profile with the Role attached so we can add that Profile to the EC2 instance (I know, it seems like a lot of steps to just grant simple privileges, but this is how it’s done):
  # Create a role for EC2 instances, to which we can attach policies
  OPSRole:
    Type: AWS::IAM::Role
    Properties:
      Description: "An EC2 instance role"
      AssumeRolePolicyDocument:
        Version: 2012-10-17
        Statement:
          - Effect: Allow
            Principal:
              Service: ec2.amazonaws.com
            Action: sts:AssumeRole
  # Give full S3 access to OPSRole
  OPSRolePolicies:
    Type: AWS::IAM::Policy
    Properties:
      PolicyName: OPSRolePolicy
      PolicyDocument:
        Version: 2012-10-17
        Statement:
          - Effect: Allow
            Action: 's3:*' # AmazonS3FullAccess
            Resource: '*'
      Roles: [ !Ref OPSRole ]
  # Create an Instance Profile and attach the OPSRole
  OPSInstanceProfile:
    Type: AWS::IAM::InstanceProfile
    Properties:
      InstanceProfileName: "OPSInstanceProfile"
      Roles: [ !Ref OPSRole ]
5. Security Group
A Security Group acts as a firewall, controlling traffic to/from an EC2 instance.
Create a Security Group to authorize specific traffic to/from an EC2 instance. Do this by opening ports and applying rule(s) to the port to allow traffic in/out. You’ll also open a port for SSH (connecting) to the EC2 instance.
You may choose to update the example (below) to change the SSHLocation‘s Default to your local IP address, so it’s not open to everyone to SSH into. Append “/32” to your IP address. For example: Default: 123.45.67.89/32 # My IP
Discover your IP address from your local terminal using the command curl -s ifconfig.co.
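A quick way to build that value in one go (ifconfig.co is one of several such services; any public-IP echo service works the same way):

```shell
# Look up your public IP, then append /32 to make a single-host CIDR
MYIP=$(curl -s ifconfig.co)
echo "Default: ${MYIP}/32 # My IP"
```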
  OPSSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupName: !Sub "${AWS::StackName}-SecurityGroup"
      GroupDescription: "Enable HTTP access via port 80, and SSH port 22"
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 80
          ToPort: 80
          CidrIp: 0.0.0.0/0
        - IpProtocol: tcp
          FromPort: 22
          ToPort: 22
          CidrIp: !Ref SSHLocation
6. EC2 Instance
Define an EC2 instance. Assign to it the IAM Role, Security Group and Key Pair you’ve defined for it. Also, to make life easier, create Outputs so we can retrieve our S3 bucket name, and the SSH command needed to connect to the EC2 instance.
This example template assumes you’ll be using the us-west-2 region. If you are not, you’ll want to use a different instance AMI than the one I use here (ami-a0cfeed8). You may also need to change the ec2-user value in Outputs if you use a different AMI.
  # Create EC2 with our new Role (via Instance Profile) and Security Group
  OPSInstance:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: ami-a0cfeed8
      InstanceType: t2.micro
      SecurityGroups: [ !Ref OPSSecurityGroup ]
      KeyName: !Ref KeyName
      IamInstanceProfile: !Ref OPSInstanceProfile
      Tags:
        - Key: Name
          Value: !Sub "${AWS::StackName}-Instance"
Outputs:
  ConnectString:
    Description: SSH Connect
    Value:
      Fn::Join:
        - ''
        - - 'ssh -i "'
          - !Ref KeyName
          - '.pem" ec2-user@'
          - Fn::GetAtt:
              - OPSInstance
              - PublicDnsName
  BucketURL:
    Description: S3 Bucket name
    Value:
      Fn::Join:
        - ''
        - - "s3://"
          - !Sub "${AWS::StackName}-bucket-${AWS::AccountId}"
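Once the template is assembled, you can optionally have CloudFormation check it for syntax errors before creating anything:

```shell
# Validate the template file; reports an error if the YAML or intrinsic
# functions are malformed (creates no resources)
aws cloudformation validate-template --template-body file://myteststack.yml
```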
7. Stack
Once your template is ready, create the CloudFormation stack. If you chose to use a different region than us-west-2, you’ll need to update the command accordingly:
aws cloudformation create-stack \
  --stack-name myteststack \
  --template-body file://myteststack.yml \
  --capabilities "CAPABILITY_NAMED_IAM" "CAPABILITY_IAM" \
  --region us-west-2
aws cloudformation wait stack-create-complete --stack-name myteststack
Once it’s completed, print out the S3 bucket name and the SSH command you’ll need to connect to the EC2 instance:
aws cloudformation describe-stacks --stack-name myteststack \
  --query 'Stacks[].Outputs[].[OutputValue]' --output text
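If you’d rather capture each output into a shell variable, something like this sketch should work (the OutputKey names assume the ConnectString and BucketURL outputs defined in the template above):

```shell
# Pull individual stack outputs by key
BUCKET=$(aws cloudformation describe-stacks --stack-name myteststack \
  --query "Stacks[0].Outputs[?OutputKey=='BucketURL'].OutputValue" --output text)
CONNECT=$(aws cloudformation describe-stacks --stack-name myteststack \
  --query "Stacks[0].Outputs[?OutputKey=='ConnectString'].OutputValue" --output text)
echo "$BUCKET"
echo "$CONNECT"
```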
Now you’re ready to Test!
Test
First, connect to your EC2 instance using the SSH command from the previous step.
At the EC2 instance’s prompt, create a text file:
echo "Hello World" > hello.txt
Copy the file to S3, then verify:
aws s3 cp hello.txt s3://<your bucket name>
aws s3 ls s3://<your bucket name>
Copy files from S3 to S3, then verify:
aws s3 cp s3://<your bucket name>/hello.txt s3://<your bucket name>/hello-again.txt
aws s3 ls s3://<your bucket name>
Now let’s get the new hello-again.txt file from S3 back to your EC2 instance, then verify. Use aws s3 sync to sync the entire directory. By default, sync copies a whole directory but only transfers new/modified files, so this should copy hello-again.txt, which does not yet exist on the EC2 instance:
aws s3 sync s3://<your bucket name>/ .
ls
Test that the sync command works both ways. Create another .txt file on EC2, sync, then see if the new file is now on S3:
echo "Goodbye" > bye.txt
aws s3 sync . s3://<your bucket name>/
aws s3 ls s3://<your bucket name>
Notice that the sync command copied all the files in the current EC2 directory to the S3 bucket. Refer to the aws s3 sync documentation to see how to avoid copying some files/directories.
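For example, both of these flags come from the aws s3 sync documentation:

```shell
# Sync only .txt files: exclude everything, then re-include *.txt
aws s3 sync . s3://<your bucket name>/ --exclude "*" --include "*.txt"

# Preview what sync would copy or delete, without actually doing it
aws s3 sync . s3://<your bucket name>/ --dryrun
```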
The cp and sync commands have many parameters. I encourage you to look through the documentation for these useful commands.
When you’re done testing, exit out of the EC2 instance.
Cleanup
Well, that was fun!
Remember to delete everything you created in this tutorial if you no longer have use for it. If you used the CloudFormation option, first remove the S3 bucket and its contents, then delete the stack:
aws s3 rb s3://<your bucket name> --force
aws cloudformation delete-stack --stack-name myteststack
aws cloudformation wait stack-delete-complete --stack-name myteststack
Otherwise, use the AWS Console to manually delete everything.