SaltyCrane: aws
<h3>Example Next.js GitLab CI/CD Amazon ECR and ECS deploy pipeline</h3>
<p><em>2021-03-25</em> · <a href="https://www.saltycrane.com/blog/2021/03/example-nextjs-gitlab-cicd-amazon-ecr-and-ecs-deploy-pipeline/">permalink</a></p>
<p>
I've created an example <a href="https://nextjs.org/">Next.js</a> project
with a <a href="https://docs.gitlab.com/ee/ci/">GitLab CI/CD</a> pipeline that
builds a Docker image, pushes it to
<a href="https://aws.amazon.com/ecr/">Amazon ECR</a>, deploys it to an
<a href="https://aws.amazon.com/ecs/">Amazon ECS</a>
<a href="https://aws.amazon.com/fargate/">Fargate</a> cluster, and uploads
static assets (JS, CSS, etc.) to
<a href="https://aws.amazon.com/s3/">Amazon S3</a>. The example GitLab repo is
here:
<a href="https://gitlab.com/saltycrane/next-aws-ecr-ecs-gitlab-ci-cd-example"
>https://gitlab.com/saltycrane/next-aws-ecr-ecs-gitlab-ci-cd-example</a
>
</p>
<h4 id="interesting-files">Interesting files</h4>
<p>
Here are the interesting parts of some of the files. See the full source code
in the
<a href="https://gitlab.com/saltycrane/next-aws-ecr-ecs-gitlab-ci-cd-example"
>GitLab repo</a
>.
</p>
<ul>
<li>
<p>
<code>.gitlab-ci.yml</code> (<a
href="https://gitlab.com/saltycrane/next-aws-ecr-ecs-gitlab-ci-cd-example/-/blob/main/.gitlab-ci.yml"
>view at gitlab</a
>)
</p>
<ul>
<li>
the variables <code>AWS_ACCESS_KEY_ID</code>,
<code>AWS_SECRET_ACCESS_KEY</code>, and <code>ECR_HOST</code> are set in
the GitLab UI under "Settings" > "CI/CD" >
"Variables"
</li>
<li>
this uses the
<a href="https://hub.docker.com/r/saltycrane/aws-cli-and-docker"
>saltycrane/aws-cli-and-docker</a
>
Docker image which provides the <code>aws</code> v2 command line tools
and <code>docker</code> in a single image. It is based on
<a href="https://hub.docker.com/r/amazon/aws-cli">amazon/aws-cli</a> and
installs bc, curl, docker, jq, and tar. This idea is from
<a href="https://www.youtube.com/watch?v=jg9sUceyGaQ"
>Valentin's tutorial</a
>.
</li>
</ul>
<pre><code>variables:
DOCKER_HOST: tcp://docker:2375
DOCKER_TLS_CERTDIR: ""
AWS_DEFAULT_REGION: "us-east-1"
CI_APPLICATION_REPOSITORY: "$ECR_HOST/next-aws-ecr-ecs-gitlab-ci-cd-example"
CI_APPLICATION_TAG: "$CI_PIPELINE_IID"
CI_AWS_S3_BUCKET: "next-aws-ecr-ecs-gitlab-ci-cd-example"
CI_AWS_ECS_CLUSTER: "next-aws-ecr-ecs-gitlab-ci-cd-example"
CI_AWS_ECS_SERVICE: "next-aws-ecr-ecs-gitlab-ci-cd-example"
CI_AWS_ECS_TASK_DEFINITION: "next-aws-ecr-ecs-gitlab-ci-cd-example"
NEXT_JS_ASSET_URL: "https://$CI_AWS_S3_BUCKET.s3.amazonaws.com"
stages:
- build
- deploy
build:
stage: build
image: saltycrane/aws-cli-and-docker
services:
- docker:dind
script:
- ./bin/build-and-push-image-to-ecr
- ./bin/upload-assets-to-s3
deploy:
stage: deploy
image: saltycrane/aws-cli-and-docker
services:
- docker:dind
script:
- ./bin/ecs update-task-definition
</code></pre>
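<p>
For clarity, <code>NEXT_JS_ASSET_URL</code> above expands to a
virtual-hosted-style S3 URL built from the bucket name. A minimal Python
sketch of that expansion (the function name is mine, not part of the repo):
</p>

```python
def next_js_asset_url(bucket: str) -> str:
    # Mirrors NEXT_JS_ASSET_URL in .gitlab-ci.yml:
    #   "https://$CI_AWS_S3_BUCKET.s3.amazonaws.com"
    return f"https://{bucket}.s3.amazonaws.com"
```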
</li>
<li>
<p>
<code>Dockerfile</code> (<a
href="https://gitlab.com/saltycrane/next-aws-ecr-ecs-gitlab-ci-cd-example/-/blob/main/Dockerfile"
>view at gitlab</a
>)
</p>
<p>
The value of <code>NEXT_JS_ASSET_URL</code> is passed in using the
<code>--build-arg</code> option of the <code>docker build</code> command
run in <code>bin/build-and-push-image-to-ecr</code>. It is used like an
environment variable in the <code>RUN npm run build</code> command below.
In this project it is assigned to <code>assetPrefix</code> in
<code>next.config.js</code>.
</p>
<pre><code>FROM node:14.16-alpine
ARG NEXT_JS_ASSET_URL
ENV NODE_ENV=production
WORKDIR /app
COPY ./package.json ./
COPY ./package-lock.json ./
RUN npm ci
COPY . ./
RUN npm run build
EXPOSE 3000
CMD ["npm", "start"]
</code></pre>
</li>
<li>
<p>
<code>bin/build-and-push-image-to-ecr</code> (<a
href="https://gitlab.com/saltycrane/next-aws-ecr-ecs-gitlab-ci-cd-example/-/blob/main/bin/build-and-push-image-to-ecr"
>view at gitlab</a
>)
</p>
<pre><code># log in to the amazon ecr docker registry
aws ecr get-login-password | docker login --username AWS --password-stdin "$ECR_HOST"
# build docker image
docker pull "$CI_APPLICATION_REPOSITORY:latest" || true
docker build --build-arg "NEXT_JS_ASSET_URL=$NEXT_JS_ASSET_URL" --cache-from "$CI_APPLICATION_REPOSITORY:latest" -t "$CI_APPLICATION_REPOSITORY:$CI_APPLICATION_TAG" -t "$CI_APPLICATION_REPOSITORY:latest" .
# push image to amazon ecr
docker push "$CI_APPLICATION_REPOSITORY:$CI_APPLICATION_TAG"
docker push "$CI_APPLICATION_REPOSITORY:latest"
</code></pre>
</li>
<li>
<p>
<code>bin/upload-assets-to-s3</code> (<a
href="https://gitlab.com/saltycrane/next-aws-ecr-ecs-gitlab-ci-cd-example/-/blob/main/bin/upload-assets-to-s3"
>view at gitlab</a
>)
</p>
<pre><code>LOCAL_ASSET_PATH=/tmp/upload-assets
mkdir $LOCAL_ASSET_PATH
# copy the generated assets out of the docker image
docker run --rm --entrypoint tar "$CI_APPLICATION_REPOSITORY:$CI_APPLICATION_TAG" cf - .next | tar xf - -C $LOCAL_ASSET_PATH
# rename .next to _next
mv "$LOCAL_ASSET_PATH/.next" "$LOCAL_ASSET_PATH/_next"
# remove directories that should not be uploaded to S3
rm -rf "$LOCAL_ASSET_PATH/_next/cache"
rm -rf "$LOCAL_ASSET_PATH/_next/server"
# gzip files
find $LOCAL_ASSET_PATH -regex ".*\.\(css\|svg\|js\)$" -exec gzip {} \;
# strip .gz extension off of gzipped files
find $LOCAL_ASSET_PATH -name "*.gz" -exec sh -c 'mv "$1" "${1%.gz}"' - {} \;
# upload gzipped js, css, and svg assets
aws s3 sync --no-progress $LOCAL_ASSET_PATH "s3://$CI_AWS_S3_BUCKET" --cache-control max-age=31536000 --content-encoding gzip --exclude "*" --include "*.js" --include "*.css" --include "*.svg"
# upload non-gzipped assets
aws s3 sync --no-progress $LOCAL_ASSET_PATH "s3://$CI_AWS_S3_BUCKET" --cache-control max-age=31536000 --exclude "*.js" --exclude "*.css" --exclude "*.svg" --exclude "*.map"
</code></pre>
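<p>
The gzip-then-strip-extension trick above keeps the original filenames, so
the files can be uploaded with <code>--content-encoding gzip</code> and
served compressed under their normal URLs. A hedged Python sketch of the
same idea (my helper for illustration, not part of the repo):
</p>

```python
import gzip

def gzip_in_place(path: str) -> None:
    # Compress a file but keep its original name (no .gz suffix),
    # mirroring the gzip + rename steps in bin/upload-assets-to-s3
    with open(path, "rb") as f:
        data = f.read()
    with open(path, "wb") as f:
        f.write(gzip.compress(data))
```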
</li>
<li>
<p>
<code>bin/ecs</code> (<a
href="https://gitlab.com/saltycrane/next-aws-ecr-ecs-gitlab-ci-cd-example/-/blob/main/bin/ecs"
>view full file</a
>) (This file was copied from the
<a href=""><code>gitlab-org</code> repo</a>)
</p>
<pre><code>#!/bin/bash -e
update_task_definition() {
local -A register_task_def_args=( \
['task-role-arn']='taskRoleArn' \
['execution-role-arn']='executionRoleArn' \
['network-mode']='networkMode' \
['cpu']='cpu' \
['memory']='memory' \
['pid-mode']='pidMode' \
['ipc-mode']='ipcMode' \
['proxy-configuration']='proxyConfiguration' \
['volumes']='volumes' \
['placement-constraints']='placementConstraints' \
['requires-compatibilities']='requiresCompatibilities' \
['inference-accelerators']='inferenceAccelerators' \
)
image_repository=$CI_APPLICATION_REPOSITORY
image_tag=$CI_APPLICATION_TAG
new_image_name="${image_repository}:${image_tag}"
register_task_definition_from_remote
new_task_definition=$(aws ecs register-task-definition "${args[@]}")
new_task_revision=$(read_task "$new_task_definition" 'revision')
new_task_definition_family=$(read_task "$new_task_definition" 'family')
# Making sure that we at least have one running task (even if desiredCount gets updated again with new task definition below)
service_task_count=$(aws ecs describe-services --cluster "$CI_AWS_ECS_CLUSTER" --services "$CI_AWS_ECS_SERVICE" --query "services[0].desiredCount")
if [[ $service_task_count == 0 ]]; then
aws ecs update-service --cluster "$CI_AWS_ECS_CLUSTER" --service "$CI_AWS_ECS_SERVICE" --desired-count 1
fi
# Update ECS service with newly created task definition revision.
aws ecs update-service \
--cluster "$CI_AWS_ECS_CLUSTER" \
--service "$CI_AWS_ECS_SERVICE" \
--task-definition "$new_task_definition_family":"$new_task_revision"
return 0
}
read_task() {
val=$(echo "$1" | jq -r ".taskDefinition.$2")
if [ "$val" == "null" ];then
val=$(echo "$1" | jq -r ".$2")
fi
if [ "$val" != "null" ];then
echo -n "${val}"
fi
}
register_task_definition_from_remote() {
task=$(aws ecs describe-task-definition --task-definition "$CI_AWS_ECS_TASK_DEFINITION")
current_container_definitions=$(read_task "$task" 'containerDefinitions')
new_container_definitions=$(echo "$current_container_definitions" | jq --arg val "$new_image_name" '.[0].image = $val')
args+=("--family" "${CI_AWS_ECS_TASK_DEFINITION}")
args+=("--container-definitions" "${new_container_definitions}")
for option in "${!register_task_def_args[@]}"; do
value=$(read_task "$task" "${register_task_def_args[$option]}")
if [ -n "$value" ];then
args+=("--${option}" "${value}")
fi
done
}
update_task_definition
</code></pre>
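<p>
The key step in <code>register_task_definition_from_remote</code> is the jq
expression <code>'.[0].image = $val'</code>, which swaps the image of the
first container definition while leaving everything else intact. The same
transformation in Python (a sketch for illustration only):
</p>

```python
import json

def set_first_container_image(container_defs: str, new_image: str) -> str:
    # Equivalent of: echo "$defs" | jq --arg val "$new_image" '.[0].image = $val'
    defs = json.loads(container_defs)
    defs[0]["image"] = new_image
    return json.dumps(defs)
```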
</li>
</ul>
<h4 id="usage---set-up-aws-resources">Usage - set up AWS resources</h4>
<p>
Below are the minimum steps I needed to create the required AWS services for
my example. I use the AWS region <code>"us-east-1"</code>. For info
about creating some of these services via the command line, see my
<a href="/blog/2021/03/amazon-ecs-notes/">Amazon ECS notes</a>.
</p>
<p><strong>Create an ECR repository</strong></p>
<ul>
<li>
create a private ECR repository here:
<a href="https://console.aws.amazon.com/ecr/repositories?region=us-east-1"
>https://console.aws.amazon.com/ecr/repositories?region=us-east-1</a
>
</li>
<li>name the repository "next-aws-ecr-ecs-gitlab-ci-cd-example"</li>
<li>
leave "Tag immutability" disabled to allow the "latest"
tag to be overwritten
</li>
</ul>
<p><strong>Create an ECS Fargate cluster</strong></p>
<ul>
<li>
click "Create Cluster" here:
<a
href="https://us-east-1.console.aws.amazon.com/ecs/home?region=us-east-1#/clusters"
>https://us-east-1.console.aws.amazon.com/ecs/home?region=us-east-1#/clusters</a
>
</li>
<li>choose "Networking only" (Fargate)</li>
<li>name the cluster "next-aws-ecr-ecs-gitlab-ci-cd-example"</li>
<li>check the "Create VPC" checkbox</li>
<li>click "Create"</li>
</ul>
<p><strong>Create an ECS task definition</strong></p>
<ul>
<li>
click "Create new Task Definition" here:
<a
href="https://console.aws.amazon.com/ecs/home?region=us-east-1#/taskDefinitions"
>https://console.aws.amazon.com/ecs/home?region=us-east-1#/taskDefinitions</a
>
</li>
<li>select "FARGATE" and click "Next step"</li>
<li>
configure task
<ul>
<li>
for "Task Definition Name" enter
"next-aws-ecr-ecs-gitlab-ci-cd-example"
</li>
<li>for "Task Role" select "None"</li>
<li>
for "Task execution role" select "Create new role"
</li>
<li>for "Task memory" select "0.5GB"</li>
<li>for "Task CPU" select "0.25 vCPU"</li>
<li>
click "Add container"
<ul>
<li>
for "Container Name" enter
"next-aws-ecr-ecs-gitlab-ci-cd-example"
</li>
<li>
for "Image" enter "asdf" (this will be updated
by the gitlab ci/cd job)
</li>
<li>leave "Private repository authentication" unchecked</li>
<li>for "Port mappings" enter "3000"</li>
<li>click "Add"</li>
</ul>
</li>
<li>click "Create"</li>
</ul>
</li>
</ul>
<p><strong>Create an ECS service</strong></p>
<ul>
<li>
click "Create" here:
<a
href="https://us-east-1.console.aws.amazon.com/ecs/home?region=us-east-1#/clusters/next-aws-ecr-ecs-gitlab-ci-cd-example/services"
>https://us-east-1.console.aws.amazon.com/ecs/home?region=us-east-1#/clusters/next-aws-ecr-ecs-gitlab-ci-cd-example/services</a
>
</li>
<li>
configure service
<ul>
<li>for "Launch type" select "FARGATE"</li>
<li>
for "Task Definition" enter
"next-aws-ecr-ecs-gitlab-ci-cd-example"
</li>
<li>
for "Cluster" select
"next-aws-ecr-ecs-gitlab-ci-cd-example"
</li>
<li>
for "Service name" enter
"next-aws-ecr-ecs-gitlab-ci-cd-example"
</li>
<li>for "Number of tasks" enter 1</li>
<li>for "Deployment type" select "Rolling update"</li>
<li>click "Next step"</li>
</ul>
</li>
<li>
configure network
<ul>
<li>
select the appropriate "Cluster VPC" and two
"Subnets"
</li>
<li>click "Next step"</li>
</ul>
</li>
<li>
set Auto Scaling
<ul>
<li>click "Next step"</li>
</ul>
</li>
<li>
review
<ul>
<li>click "Create Service"</li>
</ul>
</li>
</ul>
<p><strong>Open port 3000</strong></p>
<ul>
<li>
on the ECS service page
<a
href="https://us-east-1.console.aws.amazon.com/ecs/home?region=us-east-1#/clusters/next-aws-ecr-ecs-gitlab-ci-cd-example/services/next-aws-ecr-ecs-gitlab-ci-cd-example/details"
>https://us-east-1.console.aws.amazon.com/ecs/home?region=us-east-1#/clusters/next-aws-ecr-ecs-gitlab-ci-cd-example/services/next-aws-ecr-ecs-gitlab-ci-cd-example/details</a
>
under "Network Access", next to "Security groups", click
the link to the security group
</li>
<li>click "Actions" then click "Edit inbound rules"</li>
<li>click "Add rule"</li>
<li>for "Port range" enter "3000"</li>
<li>for "Source" select "0.0.0.0/0"</li>
<li>click "Save rules"</li>
</ul>
<p><strong>Create an S3 bucket</strong></p>
<ul>
<li>
click "Create bucket" here:
<a href="https://s3.console.aws.amazon.com/s3/home?region=us-east-1"
>https://s3.console.aws.amazon.com/s3/home?region=us-east-1</a
>
</li>
<li>
for "Bucket name" enter
"next-aws-ecr-ecs-gitlab-ci-cd-example"
</li>
<li>uncheck "Block all public access"</li>
<li>
check the "I acknowledge that the current settings might result in this
bucket and the objects within becoming public" checkbox
</li>
<li>click "Create bucket"</li>
</ul>
<p><strong>Update permissions for S3 bucket</strong></p>
<ul>
<li>
go to Permissions (<a
href="https://s3.console.aws.amazon.com/s3/buckets/next-aws-ecr-ecs-gitlab-ci-cd-example?region=us-east-1&tab=permissions"
>https://s3.console.aws.amazon.com/s3/buckets/next-aws-ecr-ecs-gitlab-ci-cd-example?region=us-east-1&tab=permissions</a
>) and under "Bucket policy", click "Edit"
</li>
<li>
enter:
<pre><code>{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": "*",
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::next-aws-ecr-ecs-gitlab-ci-cd-example/*"
}
]
}
</code></pre>
</li>
<li>click "Save changes"</li>
</ul>
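<p>
If you prefer to generate the policy rather than paste it, the JSON above
can be built programmatically (a small sketch; the function name is mine):
</p>

```python
import json

def public_read_policy(bucket: str) -> str:
    # Public-read bucket policy matching the JSON entered in the console above
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": "*",
                "Action": "s3:GetObject",
                "Resource": f"arn:aws:s3:::{bucket}/*",
            }
        ],
    }
    return json.dumps(policy, indent=2)
```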
<p><strong>Create an IAM user</strong></p>
<ul>
<li>
<a
href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/get-set-up-for-amazon-ecs.html#create-an-iam-user"
>create an IAM user</a
>. The user must have at least ECR, ECS, and S3 permissions.
</li>
<li>
take note of the <code>AWS_ACCESS_KEY_ID</code> and
<code>AWS_SECRET_ACCESS_KEY</code>
</li>
</ul>
<h4 id="usage---run-the-cicd-pipeline">Usage - run the CI/CD pipeline</h4>
<p>
<strong>Fork the example gitlab repo and configure CI/CD variables</strong>
</p>
<ul>
<li>
fork
<a
href="https://gitlab.com/saltycrane/next-aws-ecr-ecs-gitlab-ci-cd-example"
>https://gitlab.com/saltycrane/next-aws-ecr-ecs-gitlab-ci-cd-example</a
>
</li>
<li>
go to "Settings" > "CI/CD" > "Variables"
and add the following variables. You can choose to "protect" and
"mask" all of them.
<ul>
<li><code>AWS_ACCESS_KEY_ID</code></li>
<li><code>AWS_SECRET_ACCESS_KEY</code></li>
<li>
<code>ECR_HOST</code> (This is the part of the ECR repository URI before
the <code>/</code>. It looks something like
<code>XXXXXXXXXXXX.dkr.ecr.us-east-1.amazonaws.com</code>)
</li>
</ul>
</li>
</ul>
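<p>
In other words, <code>ECR_HOST</code> is everything before the first slash
of the repository URI. A tiny sketch of that split (illustrative helper
only):
</p>

```python
def split_ecr_uri(repository_uri: str) -> tuple[str, str]:
    # "XXXX.dkr.ecr.us-east-1.amazonaws.com/my-repo" -> (ECR_HOST, repo name)
    host, _, name = repository_uri.partition("/")
    return host, name
```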
<p>
<strong>Edit variables in <code>.gitlab-ci.yml</code></strong>
</p>
<p>
If you used names other than
"next-aws-ecr-ecs-gitlab-ci-cd-example", edit the variables in
<code>.gitlab-ci.yml</code>.
</p>
<p><strong>Test it</strong></p>
<ul>
<li>clone the repo and push a commit</li>
<li>
see the pipeline running under "CI/CD" > "Pipelines"
</li>
<li>
go to the cluster tasks page:
<a
href="https://us-east-1.console.aws.amazon.com/ecs/home?region=us-east-1#/clusters/next-aws-ecr-ecs-gitlab-ci-cd-example/tasks"
>https://us-east-1.console.aws.amazon.com/ecs/home?region=us-east-1#/clusters/next-aws-ecr-ecs-gitlab-ci-cd-example/tasks</a
>
</li>
<li>click on the task and copy the "Public IP"</li>
<li>
enter the public IP followed by <code>:3000</code> in the browser (Note: the
IP address changes for every <code>git push</code>. A
<a href="https://aws.amazon.com/elasticloadbalancing/">load balancer</a>
should probably be used, but I didn't do that.)
</li>
</ul>
<h4 id="references-buildpush">References (build/push)</h4>
<ul>
<li>
<a href="https://www.youtube.com/watch?v=jg9sUceyGaQ"
>https://www.youtube.com/watch?v=jg9sUceyGaQ</a
>
</li>
<li>
<a
href="https://docs.gitlab.com/ee/ci/docker/using_docker_build.html#docker"
>https://docs.gitlab.com/ee/ci/docker/using_docker_build.html#docker</a
>
</li>
<li>
<a
href="https://docs.gitlab.com/ee/ci/docker/using_docker_build.html#using-docker-caching"
>https://docs.gitlab.com/ee/ci/docker/using_docker_build.html#using-docker-caching</a
>
</li>
</ul>
<h4 id="references-deploy">References (deploy)</h4>
<ul>
<li>
<a
href="https://docs.gitlab.com/ee/ci/cloud_deployment/#deploy-your-application-to-the-aws-elastic-container-service-ecs"
>https://docs.gitlab.com/ee/ci/cloud_deployment/#deploy-your-application-to-the-aws-elastic-container-service-ecs</a
>
</li>
<li>
<a
href="https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/AWS/Deploy-ECS.gitlab-ci.yml"
>https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/AWS/Deploy-ECS.gitlab-ci.yml</a
>
</li>
<li>
<a
href="https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Jobs/Deploy/ECS.gitlab-ci.yml"
>https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/ci/templates/Jobs/Deploy/ECS.gitlab-ci.yml</a
>
</li>
<li>
<a
href="https://gitlab.com/gitlab-org/cloud-deploy/-/blob/master/aws/ecs/Dockerfile"
>https://gitlab.com/gitlab-org/cloud-deploy/-/blob/master/aws/ecs/Dockerfile</a
>
</li>
<li>
<a
href="https://gitlab.com/gitlab-org/cloud-deploy/-/blob/master/aws/src/bin/ecs"
>https://gitlab.com/gitlab-org/cloud-deploy/-/blob/master/aws/src/bin/ecs</a
>
</li>
</ul>
<h3>Amazon ECS notes</h3>
<p><em>2021-03-19</em> · <a href="https://www.saltycrane.com/blog/2021/03/amazon-ecs-notes/">permalink</a></p>
<p>
These are my notes for creating a Docker image, pushing it to
<a href="https://aws.amazon.com/ecr/">Amazon ECR</a> (Elastic Container
Registry), and deploying it to
<a href="https://aws.amazon.com/ecs/">Amazon ECS</a> (Elastic Container
Service) using
<a href="https://aws.amazon.com/fargate/">AWS Fargate</a> (serverless for
containers), all from the command line.
</p>
<h4 id="create-docker-image-on-local-machine">
Create docker image on local machine
</h4>
<ul>
<li>
<p>install docker (macOS)</p>
<pre><code>brew install homebrew/cask/docker
</code></pre>
</li>
<li>
<p>create directory</p>
<pre><code>mkdir /tmp/my-project
cd /tmp/my-project
</code></pre>
</li>
<li>
<p>create <code>/tmp/my-project/Dockerfile</code>:</p>
<pre><code>FROM python:3.9-alpine3.13
WORKDIR /app
RUN echo 'Hello' > ./index.html
EXPOSE 80
CMD ["python", "-m", "http.server", "80"]
</code></pre>
</li>
<li>
<p>create Docker image</p>
<pre><code>docker build -t my-image .
</code></pre>
</li>
<li>
<p>test running the Docker image locally</p>
<pre><code>docker run -p 80:80 my-image
</code></pre>
</li>
<li>
<p>
go to <a href="http://localhost">http://localhost</a> in the browser and
see the text "Hello"
</p>
</li>
</ul>
<h4 id="install-and-configure-aws-command-line-tools">
Install and configure AWS command line tools
</h4>
<ul>
<li>
<p>install AWS command line tools</p>
<pre><code>brew install awscli
</code></pre>
</li>
<li>
<p>
<a
href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/get-set-up-for-amazon-ecs.html#create-an-iam-user"
>create an IAM user</a
>
</p>
</li>
<li>
<p>
run
<a
href="https://docs.aws.amazon.com/cli/latest/reference/configure/index.html"
><code>aws configure</code></a
>
and enter:
</p>
<ul>
<li>AWS Access Key ID</li>
<li>AWS Secret Access Key</li>
</ul>
<p>This creates the file <code>~/.aws/credentials</code></p>
</li>
</ul>
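<p>
The credentials file is plain INI, so it can be read with
<code>configparser</code> if you ever need the keys in a script (an
illustrative sketch only; normally boto/botocore reads this file for you):
</p>

```python
import configparser

def read_credentials(path: str, profile: str = "default") -> tuple[str, str]:
    # Parse ~/.aws/credentials as written by `aws configure`
    cp = configparser.ConfigParser()
    cp.read(path)
    section = cp[profile]
    return section["aws_access_key_id"], section["aws_secret_access_key"]
```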
<h4 id="create-ecr-repository-and-push-image-to-it">
Create ECR repository and push image to it
</h4>
<p>
From
<a
href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/docker-basics.html#use-ecr"
>https://docs.aws.amazon.com/AmazonECS/latest/developerguide/docker-basics.html#use-ecr</a
>
</p>
<ul>
<li>
<p>
create an Amazon ECR repository using
<a
href="https://docs.aws.amazon.com/cli/latest/reference/ecr/create-repository.html"
><code>aws ecr create-repository</code></a
>
</p>
<pre><code>aws ecr create-repository --repository-name my-repository --region us-east-1
</code></pre>
<p>output:</p>
<pre><code>{
"repository": {
"repositoryArn": "arn:aws:ecr:us-east-1:AAAAAAAAAAAA:repository/my-repository",
"registryId": "AAAAAAAAAAAA",
"repositoryName": "my-repository",
"repositoryUri": "AAAAAAAAAAAA.dkr.ecr.us-east-1.amazonaws.com/my-repository",
"createdAt": "2021-03-17T10:48:18-07:00",
"imageTagMutability": "MUTABLE",
"imageScanningConfiguration": {
"scanOnPush": false
},
"encryptionConfiguration": {
"encryptionType": "AES256"
}
}
}
</code></pre>
<p>
Take note of the "registryId" and use it in place of
"AAAAAAAAAAAA" below.
</p>
</li>
</ul>
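<p>
If you are scripting these steps, the IDs can be pulled straight out of the
JSON output above (a sketch; <code>aws --query</code> can do the same
thing):
</p>

```python
import json

def parse_create_repository(output: str) -> tuple[str, str]:
    # Extract registryId and repositoryUri from `aws ecr create-repository` output
    repo = json.loads(output)["repository"]
    return repo["registryId"], repo["repositoryUri"]
```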
<ul>
<li>
<p>tag the docker image with the <code>repositoryUri</code></p>
<pre><code>docker tag my-image AAAAAAAAAAAA.dkr.ecr.us-east-1.amazonaws.com/my-repository
</code></pre>
</li>
<li>
<p>
log in to the Amazon ECR registry using
<a
href="https://docs.aws.amazon.com/cli/latest/reference/ecr/get-login-password.html"
><code>aws ecr get-login-password</code></a
>
</p>
<pre><code>aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin AAAAAAAAAAAA.dkr.ecr.us-east-1.amazonaws.com
</code></pre>
</li>
<li>
<p>push the docker image to the Amazon ECR repository</p>
<pre><code>docker push AAAAAAAAAAAA.dkr.ecr.us-east-1.amazonaws.com/my-repository
</code></pre>
</li>
<li>
<p>
see the image in AWS console
<a href="https://console.aws.amazon.com/ecr/repositories?region=us-east-1"
>https://console.aws.amazon.com/ecr/repositories?region=us-east-1</a
>
</p>
</li>
</ul>
<h4 id="install-ecs-command-line-tools">Install ECS command line tools</h4>
<ul>
<li>
install <code>ecs-cli</code>. Note that <code>ecs-cli</code> exists in addition
to the <code>aws ecs</code> tools. The reason is probably similar to why some
services are named
<a href="https://docs.aws.amazon.com/"
>"Amazon Service" and some are named "AWS Service"</a
>. (It seems like <code>ecs-cli</code> provides higher level commands.)
<pre><code>brew install amazon-ecs-cli
</code></pre>
</li>
</ul>
<h4 id="create-amazon-ecs-fargate-cluster">
Create Amazon ECS Fargate cluster
</h4>
<p>
From
<a
href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-cli-tutorial-fargate.html"
>https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-cli-tutorial-fargate.html</a
>
</p>
<ul>
<li>
create a cluster using
<a
href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/cmd-ecs-cli-up.html"
><code>ecs-cli up</code></a
>
<pre><code>ecs-cli up --cluster my-cluster --launch-type FARGATE --region us-east-1
</code></pre>
output:
<pre><code>INFO[0001] Created cluster cluster=my-cluster region=us-east-1
INFO[0002] Waiting for your cluster resources to be created...
INFO[0002] Cloudformation stack status stackStatus=CREATE_IN_PROGRESS
VPC created: vpc-BBBBBBBBBBBBBBBBB
Subnet created: subnet-CCCCCCCCCCCCCCCCC
Subnet created: subnet-DDDDDDDDDDDDDDDDD
Cluster creation succeeded.
</code></pre>
Take note of the VPC (virtual private cloud), and two subnet IDs to use
later. See the cluster in the AWS console UI:
<a href="https://console.aws.amazon.com/ecs/home?region=us-east-1#/clusters"
>https://console.aws.amazon.com/ecs/home?region=us-east-1#/clusters</a
>
</li>
</ul>
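<p>
Since the VPC and subnet IDs are needed again below, here is a sketch of
scraping them from the <code>ecs-cli up</code> output shown above (my
helper, not part of the tooling):
</p>

```python
import re

def parse_cluster_resources(output: str) -> tuple[str, list[str]]:
    # Scrape the VPC and subnet IDs that `ecs-cli up` prints
    vpc = re.search(r"VPC created: (vpc-\S+)", output).group(1)
    subnets = re.findall(r"Subnet created: (subnet-\S+)", output)
    return vpc, subnets
```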
<h4 id="gather-parameters-required-to-deploy-to-ecs-cluster">
Gather parameters required to deploy to ECS cluster
</h4>
<p>
From
<a
href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-cli-tutorial-fargate.html"
>https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-cli-tutorial-fargate.html</a
>
</p>
<h5 id="create-task-execution-iam-role">Create task execution IAM role</h5>
<ul>
<li>
<p>
create a file <code>/tmp/my-project/task-execution-assume-role.json</code>
</p>
<pre><code>{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "",
"Effect": "Allow",
"Principal": {
"Service": "ecs-tasks.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
</code></pre>
</li>
<li>
<p>
create the task execution role using
<a
href="https://docs.aws.amazon.com/cli/latest/reference/iam/create-role.html"
><code>aws iam create-role</code></a
>
</p>
<pre><code>aws iam create-role --role-name my-task-execution-role --assume-role-policy-document file:///tmp/my-project/task-execution-assume-role.json --region us-east-1
</code></pre>
</li>
<li>
<p>
attach the task execution role policy using
<a
href="https://docs.aws.amazon.com/cli/latest/reference/iam/attach-role-policy.html"
><code>aws iam attach-role-policy</code></a
>
</p>
<pre><code>aws iam attach-role-policy --role-name my-task-execution-role --policy-arn arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy --region us-east-1
</code></pre>
</li>
</ul>
<h5 id="get-security-group-id">Get security group ID</h5>
<ul>
<li>
<p>
get the default security group ID for the virtual private cloud (VPC)
created when creating the ECS cluster using
<a
href="https://docs.aws.amazon.com/cli/latest/reference/ec2/describe-security-groups.html"
><code>aws ec2 describe-security-groups</code></a
>. Replace "vpc-BBBBBBBBBBBBBBBBB" with your VPC ID
</p>
<pre><code>aws ec2 describe-security-groups --filters Name=vpc-id,Values=vpc-BBBBBBBBBBBBBBBBB --region us-east-1
</code></pre>
<p>output:</p>
<pre><code>{
"SecurityGroups": [
{
"Description": "default VPC security group",
"GroupName": "default",
"IpPermissions": [
{
"IpProtocol": "-1",
"IpRanges": [],
"Ipv6Ranges": [],
"PrefixListIds": [],
"UserIdGroupPairs": [
{
"GroupId": "sg-EEEEEEEEEEEEEEEEE",
"UserId": "AAAAAAAAAAAA"
}
]
}
],
"OwnerId": "AAAAAAAAAAAA",
"GroupId": "sg-EEEEEEEEEEEEEEEEE",
"IpPermissionsEgress": [
{
"IpProtocol": "-1",
"IpRanges": [
{
"CidrIp": "0.0.0.0/0"
}
],
"Ipv6Ranges": [],
"PrefixListIds": [],
"UserIdGroupPairs": []
}
],
"VpcId": "vpc-BBBBBBBBBBBBBBBBB"
}
]
}
</code></pre>
<p>Take note of the "GroupId" to be used later</p>
</li>
</ul>
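<p>
Programmatically, the "GroupId" can be picked out of the
<code>describe-security-groups</code> JSON like this (illustrative sketch;
<code>aws --query</code> or <code>jq</code> work just as well):
</p>

```python
import json

def default_group_id(output: str) -> str:
    # Find the GroupId of the VPC's default security group in
    # `aws ec2 describe-security-groups` output
    for sg in json.loads(output)["SecurityGroups"]:
        if sg["GroupName"] == "default":
            return sg["GroupId"]
    raise ValueError("no default security group found")
```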
<h4 id="deploy-to-amazon-ecs-cluster">Deploy to Amazon ECS cluster</h4>
<p>
From
<a
href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-cli-tutorial-fargate.html"
>https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-cli-tutorial-fargate.html</a
>
</p>
<ul>
<li>
<p>
create <code>/tmp/my-project/ecs-params.yml</code> replacing
"subnet-CCCCCCCCCCCCCCCCC",
"subnet-DDDDDDDDDDDDDDDDD", and "sg-EEEEEEEEEEEEEEEEE"
with appropriate IDs from above.
<a
href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/cmd-ecs-cli-compose-ecsparams.html"
>ECS Parameters docs</a
>
</p>
<pre><code>version: 1
task_definition:
task_execution_role: my-task-execution-role
ecs_network_mode: awsvpc
task_size:
mem_limit: 0.5GB
cpu_limit: 256
run_params:
network_configuration:
awsvpc_configuration:
subnets:
- "subnet-CCCCCCCCCCCCCCCCC"
- "subnet-DDDDDDDDDDDDDDDDD"
security_groups:
- "sg-EEEEEEEEEEEEEEEEE"
assign_public_ip: ENABLED
</code></pre>
</li>
<li>
<p>
create <code>/tmp/my-project/docker-compose.yml</code> replacing
AAAAAAAAAAAA with the registryId:
</p>
<pre><code>version: '3'
services:
web:
image: 'AAAAAAAAAAAA.dkr.ecr.us-east-1.amazonaws.com/my-repository'
ports:
- '80:80'
</code></pre>
</li>
<li>
<p>
deploy to the ECS cluster using
<a
href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/cmd-ecs-cli-compose-service-up.html"
><code>ecs-cli compose service up</code></a
>. This creates a task definition and service. This uses the
<code>docker-compose.yml</code> file in the current directory.
</p>
<pre><code>ecs-cli compose --cluster my-cluster --project-name my-project --ecs-params ecs-params.yml --region us-east-1 service up --launch-type FARGATE
</code></pre>
<p>
see the service in the web UI:
<a
href="https://console.aws.amazon.com/ecs/home?region=us-east-1#/clusters/my-cluster/services"
>https://console.aws.amazon.com/ecs/home?region=us-east-1#/clusters/my-cluster/services</a
>
</p>
</li>
</ul>
<h4 id="hit-the-server-in-the-browser">Hit the server in the browser</h4>
<ul>
<li>
<p>
configure security group to allow inbound access on port 80 using
<a
href="https://docs.aws.amazon.com/cli/latest/reference/ec2/authorize-security-group-ingress.html"
><code>aws ec2 authorize-security-group-ingress</code></a
>
</p>
<pre><code>aws ec2 authorize-security-group-ingress --group-id sg-EEEEEEEEEEEEEEEEE --protocol tcp --port 80 --cidr 0.0.0.0/0 --region us-east-1
</code></pre>
</li>
<li>
<p>
get the IP address using
<a
href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/cmd-ecs-cli-compose-service-ps.html"
><code>ecs-cli compose service ps</code></a
>
</p>
<pre><code>ecs-cli compose --cluster my-cluster --project-name my-project --region us-east-1 service ps
</code></pre>
<p>output:</p>
<pre><code>Name State Ports TaskDefinition Health
my-cluster/xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx/web RUNNING FF.FF.FF.FF:80->80/tcp my-project:1 UNKNOWN
</code></pre>
<p>Take note of the IP address under "Ports"</p>
</li>
<li>
<p>
visit in the browser:
<a href="http://FF.FF.FF.FF">http://FF.FF.FF.FF</a> replacing
"FF.FF.FF.FF" with your IP address
</p>
</li>
</ul>
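<p>
The IP address sits in front of the first colon in the "Ports" column, so
pulling it out of scripted <code>ecs-cli compose service ps</code> output is
a one-liner (hypothetical helper name):
</p>

```python
def task_ip(ports_field: str) -> str:
    # "52.1.2.3:80->80/tcp" -> "52.1.2.3"
    return ports_field.split(":", 1)[0]
```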
<h4 id="destroy">Destroy</h4>
<ul>
<li>
<p>
delete the ECS service using
<a
href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/cmd-ecs-cli-compose-service-rm.html"
><code>ecs-cli compose service down</code></a
>
</p>
<pre><code>ecs-cli compose --cluster my-cluster --project-name my-project --region us-east-1 service down
</code></pre>
</li>
<li>
<p>
delete the ECS cluster using
<a
href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/cmd-ecs-cli-down.html"
><code>ecs-cli down</code></a
>
</p>
<pre><code>ecs-cli down --force --cluster my-cluster --region us-east-1
</code></pre>
</li>
<li>
<p>
delete the ECR repository using
<a
href="https://docs.aws.amazon.com/cli/latest/reference/ecr/delete-repository.html"
><code>aws ecr delete-repository</code></a
>
</p>
<pre><code>aws ecr delete-repository --repository-name my-repository --region us-east-1 --force
</code></pre>
</li>
</ul>
<h4 id="references">References</h4>
<ul>
<li>
<a
href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/docker-basics.html"
>https://docs.aws.amazon.com/AmazonECS/latest/developerguide/docker-basics.html</a
>
</li>
<li>
<a
href="https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-cli-tutorial-fargate.html"
>https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-cli-tutorial-fargate.html</a
>
</li>
</ul>
<h3>Setting the Expires header for S3 media using Python and boto</h3>
<p><em>2012-02-11</em> · <a href="https://www.saltycrane.com/blog/2012/02/setting-expires-header-s3-media-using-python-and-boto/">permalink</a></p>
<h4 id="install">Install boto</h4>
<pre class="console">$ pip install boto
$ pip freeze |grep boto
boto==2.2.1 </pre>
<h4 id="script">Script</h4>
<p>This script sets the "Expires" header to 25 years from the current date
for all files whose keys start with the prefix "mydirectory". Replace
the access key ID, secret access key, and bucket name with your own.
</p>
<pre class="python">import mimetypes
from datetime import datetime, timedelta
from boto.s3.connection import S3Connection
AWS_ACCESS_KEY_ID = 'XXXXXXXXXXXXXXXXXXXX'
AWS_SECRET_ACCESS_KEY = 'XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX'
BUCKET_NAME = 'mybucket'
PREFIX = 'mydirectory'
def main():
conn = S3Connection(AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY)
bucket = conn.get_bucket(BUCKET_NAME)
key_list = bucket.get_all_keys(prefix=PREFIX)
for key in key_list:
content_type, unused = mimetypes.guess_type(key.name)
if not content_type:
content_type = 'text/plain'
expires = datetime.utcnow() + timedelta(days=(25 * 365))
expires = expires.strftime("%a, %d %b %Y %H:%M:%S GMT")
metadata = {'Expires': expires, 'Content-Type': content_type}
print key.name, metadata
key.copy(BUCKET_NAME, key, metadata=metadata, preserve_acl=True)
if __name__ == '__main__':
main()</pre>
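<p>
For reference, the same Expires value in modern Python 3, where
timezone-aware datetimes are preferred over the now-deprecated
<code>datetime.utcnow()</code> (my sketch, separate from the boto script
above):
</p>

```python
from datetime import datetime, timedelta, timezone

def expires_header(days: int = 25 * 365) -> str:
    # HTTP-date (RFC 1123 style) string used by the Expires header
    dt = datetime.now(timezone.utc) + timedelta(days=days)
    return dt.strftime("%a, %d %b %Y %H:%M:%S GMT")
```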
<h4 id="references">References</h4>
<ul>
<li><a href="http://groups.google.com/group/boto-users/browse_thread/thread/b072849f3f97735b/02dbedbe874dbd22?pli=1">
Add cache-control header for object in S3? - boto-users | Google Groups
</a></li>
<li><a href="http://boto.cloudhackers.com/en/latest/ref/s3.html">
boto S3 API reference
</a></li>
<li><a href="http://boto.cloudhackers.com/en/latest/s3_tut.html">
boto S3 Introduction
</a></li>
<li><a href="http://docs.python.org/library/mimetypes.html">
Python documentation — mimetypes
</a></li>
</ul>
Notes on backing up EBS backed AMIs
2011-07-05T21:24:53-07:00https://www.saltycrane.com/blog/2011/07/notes-backing-ebs-backed-amis/<p>EBS-backed AMIs are stored as EBS snapshots. EBS snapshots are stored on
S3, but they <em>"are not directly accessible through S3. They can only be
accessed by creating an EBS volume."</em> See the following thread:
<a href="https://forums.aws.amazon.com/thread.jspa?messageID=120766">
<em>AWS Developer Forums: Snapshot Location</em></a>
</p>
<p>Backing up an EBS snapshot or AMI involves copying data from a volume
using e.g. rsync:</p>
<ul>
<li><a href="https://forums.aws.amazon.com/thread.jspa?messageID=182313">
<em>AWS Developer Forums: Backup Machine Images Off Amazon</em></a></li>
<li><a href="https://forums.aws.amazon.com/thread.jspa?messageID=151285">
<em>AWS Developer Forums: Can I download an EBS snapshot from S3?</em></a></li>
</ul>
<p>An easier alternative would be to share the AMI with another AWS user
(account). The other account would:</p>
<ul>
<li>launch an instance from the shared AMI</li>
<li>create an AMI from the instance</li>
</ul>
<p>Here is the documentation on sharing AMIs. It looks like a single
command is required to share an AMI with another user:
<a href="http://docs.amazonwebservices.com/AWSEC2/latest/UserGuide/index.html?AESDG-chapter-sharingamis.html#sharingamis-intro">
<em>AWS Documentation » Amazon EC2 » User Guide » Using Amazon EC2 » Using AMIs » Sharing AMIs</em></a>
</p>
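<p>In boto terms, granting another account launch permission is a single API call. Here is a hedged sketch; the stub connection class below is made up so the snippet runs without AWS credentials, but the <code>modify_image_attribute</code> call mirrors boto's <code>EC2Connection</code> method:</p>

```python
# Stub that records the call, standing in for a real boto EC2Connection
class StubConn(object):
    def __init__(self):
        self.calls = []

    def modify_image_attribute(self, image_id, attribute, operation, user_ids):
        self.calls.append((image_id, attribute, operation, user_ids))
        return True

def share_ami(conn, image_id, account_id):
    # boto: EC2Connection.modify_image_attribute grants launch permission
    return conn.modify_image_attribute(image_id,
                                       attribute='launchPermission',
                                       operation='add',
                                       user_ids=[account_id])

conn = StubConn()
ok = share_ami(conn, 'ami-xxxxxxxx', '123456789012')
```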
Using a Python timeout decorator for uploading to S3
2010-04-27T15:55:58-07:00https://www.saltycrane.com/blog/2010/04/using-python-timeout-decorator-uploading-s3/<p>At work we are uploading many images to
<a href="http://aws.amazon.com/s3/">S3</a> using Python's
<a href="http://code.google.com/p/boto/">boto</a> library.
However, we are experiencing a <em>RequestTimeTooSkewed</em>
error once every 100 uploads on average.
<a href="http://developer.amazonwebservices.com/connect/thread.jspa?messageID=144783">
We</a>
<a href="http://groups.google.com/group/boto-users/browse_thread/thread/467e0796052820ce/813e5b7db3867824?lnk=gst">
googled</a>, but did not find a solution. Our system time was in sync and our file
sizes were small (~50KB).
</p>
<p>Since we couldn't find the root cause, we added a
<a href="http://en.wikipedia.org/wiki/Watchdog_timer">watchdog timer</a>
as a bandaid solution.
We already use a
<a href="http://www.saltycrane.com/blog/2009/11/trying-out-retry-decorator-python/">retry
decorator</a> to retry uploads to S3 when we get a
<em>500 Internal Server Error</em> response. To this we added a
timeout decorator which
cancels the S3 upload if it takes more than a couple of minutes.
With this decorator, we don't have to wait the full 15 minutes
before S3 returns the <em>403 Forbidden</em> (RequestTimeTooSkewed error)
response.
</p>
<p>I found the timeout decorator at
<a href="http://code.activestate.com/recipes/307871-timing-out-function/">Activestate's
Python recipes</a>.
It makes use of Python's <a href="http://docs.python.org/library/signal.html">signal
library</a>.
Below is an example of how it's used.
</p>
<pre class="python">import signal

class TimeoutError(Exception):
    def __init__(self, value = "Timed Out"):
        self.value = value
    def __str__(self):
        return repr(self.value)

def timeout(seconds_before_timeout):
    def decorate(f):
        def handler(signum, frame):
            raise TimeoutError()
        def new_f(*args, **kwargs):
            old = signal.signal(signal.SIGALRM, handler)
            signal.alarm(seconds_before_timeout)
            try:
                result = f(*args, **kwargs)
            finally:
                # reinstall the old signal handler
                signal.signal(signal.SIGALRM, old)
                # cancel the alarm
                # this line should be inside the "finally" block (per Sam Kortchmar)
                signal.alarm(0)
            return result
        new_f.func_name = f.func_name
        return new_f
    return decorate</pre>
<p>Try it out:</p>
<pre class="python">import time

@timeout(5)
def mytest():
    print "Start"
    for i in range(1,10):
        time.sleep(1)
        print "%d seconds have passed" % i

if __name__ == '__main__':
    mytest()</pre>
<p>Results:</p>
<pre>
Start
1 seconds have passed
2 seconds have passed
3 seconds have passed
4 seconds have passed
Traceback (most recent call last):
  File "timeout_ex.py", line 47, in &lt;module&gt;
    function_times_out()
  File "timeout_ex.py", line 17, in new_f
    result = f(*args, **kwargs)
  File "timeout_ex.py", line 42, in function_times_out
    time.sleep(1)
  File "timeout_ex.py", line 12, in handler
    raise TimeoutError()
__main__.TimeoutError: 'Timed Out'</pre>
<h4>Bug found by Sam Kortchmar <small><em>(added 2018-08-18)</em></small></h4>
<p>
The code on the
<a href="http://code.activestate.com/recipes/307871-timing-out-function/">
Activestate recipe</a> has <code>signal.alarm(0)</code> outside of the <code>finally</code>
block, but <a href="http://skortchmark.com/">Sam Kortchmar</a>
reported to me that it needs to be inside the <code>finally</code> block
so that the alarm will be cancelled even if there is an exception in the user's function
that is handled by the user. With <code>signal.alarm(0)</code> outside of the <code>finally</code>
block, the alarm still fires in that case.
</p>
<p>Here is the test case sent by Sam:</p>
<pre>import unittest2
import time

class TestTimeout(unittest2.TestCase):
    def test_watchdog_doesnt_kill_interpreter(self):
        """If this test executes at all, it's working!
        otherwise, the whole testing section will be killed
        and print out "Alarm clock"
        """
        @timeout(1)
        def my_func():
            raise Exception

        try:
            my_func()
        except Exception:
            pass
        time.sleep(1.2)
        assert True</pre>
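<p>As a quick sketch, the same scenario can be checked without unittest2 in modern Python 3 (using the built-in <code>TimeoutError</code>; Unix-only, since it relies on <code>SIGALRM</code>):</p>

```python
import signal
import time

# Trimmed Python 3 version of the post's decorator, with signal.alarm(0)
# inside "finally" so the alarm is cancelled even when f() raises
def timeout(seconds_before_timeout):
    def decorate(f):
        def handler(signum, frame):
            raise TimeoutError("Timed Out")
        def new_f(*args, **kwargs):
            old = signal.signal(signal.SIGALRM, handler)
            signal.alarm(seconds_before_timeout)
            try:
                return f(*args, **kwargs)
            finally:
                signal.signal(signal.SIGALRM, old)
                signal.alarm(0)  # cancel the pending alarm
        return new_f
    return decorate

@timeout(1)
def raises_immediately():
    raise ValueError("user error, handled below")

try:
    raises_immediately()
except ValueError:
    pass

# With alarm(0) outside "finally", the alarm would still be pending here
# and would kill the interpreter during this sleep
time.sleep(1.2)
survived = True
```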
<h4>The RequestTimeTooSkewed error</h4>
<pre>S3ResponseError: 403 Forbidden
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>RequestTimeTooSkewed</Code><Message>The difference between the request time and the current time is too large.</Message><MaxAllowedSkewMilliseconds>900000</MaxAllowedSkewMilliseconds><RequestId>7DDDS67HF8E37</RequestId><HostId>LKE893JFGDLASKJR9BJ-A9NASFPNAOPWEJORG-98DFGJA498JVJ-A04320JF0293JLKE</HostId><RequestTime>Tue, 27 Apr 2010 22:20:58 GMT</RequestTime><ServerTime>2010-04-27T22:55:24Z</ServerTime></Error></pre>
<h4>See also</h4>
<ul>
<li><a href="http://nick.vargish.org/clues/python-tricks.html">
http://nick.vargish.org/clues/python-tricks.html</a></li>
<li><a href="http://programming-guides.com/python/timeout-a-function">
http://programming-guides.com/python/timeout-a-function</a></li>
</ul>
How to list attributes of an EC2 instance with Python and boto
2010-03-08T12:00:23-08:00https://www.saltycrane.com/blog/2010/03/how-list-attributes-ec2-instance-python-and-boto/<p>Here's how to find out information about your Amazon
<a href="http://aws.amazon.com/ec2/">EC2</a> instances using the Python
<a href="http://code.google.com/p/boto/">boto</a> library.
</p>
<h4>Install boto</h4>
<ul>
<li><a href="http://www.saltycrane.com/blog/2010/02/how-install-pip-ubuntu/">
Install pip</a>
</li>
<li>Install boto
<pre>sudo pip install boto</pre>
</li>
</ul>
<h4>Example</h4>
<pre class="python">from pprint import pprint
from boto import ec2

AWS_ACCESS_KEY_ID = 'XXXXXXXXXXXXXXXXXX'
AWS_SECRET_ACCESS_KEY = 'XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX'

ec2conn = ec2.connection.EC2Connection(AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY)
reservations = ec2conn.get_all_instances()
instances = [i for r in reservations for i in r.instances]
for i in instances:
    pprint(i.__dict__)
    break  # remove this to list all instances</pre>
<p>Results:</p>
<pre>{'_in_monitoring_element': False,
 'ami_launch_index': u'0',
 'architecture': u'x86_64',
 'block_device_mapping': {},
 'connection': EC2Connection:ec2.amazonaws.com,
 'dns_name': u'ec2-xxx-xxx-xxx-xxx.compute-1.amazonaws.com',
 'id': u'i-xxxxxxxx',
 'image_id': u'ami-xxxxxxxx',
 'instanceState': u'\n ',
 'instance_class': None,
 'instance_type': u'm1.large',
 'ip_address': u'xxx.xxx.xxx.xxx',
 'item': u'\n ',
 'kernel': None,
 'key_name': u'FARM-xxxx',
 'launch_time': u'2009-10-27T17:10:22.000Z',
 'monitored': False,
 'monitoring': u'\n ',
 'persistent': False,
 'placement': u'us-east-1d',
 'previous_state': None,
 'private_dns_name': u'ip-10-xxx-xxx-xxx.ec2.internal',
 'private_ip_address': u'10.xxx.xxx.xxx',
 'product_codes': [],
 'public_dns_name': u'ec2-xxx-xxx-xxx-xxx.compute-1.amazonaws.com',
 'ramdisk': None,
 'reason': '',
 'region': RegionInfo:us-east-1,
 'requester_id': None,
 'rootDeviceType': u'instance-store',
 'root_device_name': None,
 'shutdown_state': None,
 'spot_instance_request_id': None,
 'state': u'running',
 'state_code': 16,
 'subnet_id': None,
 'vpc_id': None}</pre>
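<p>The reservation-flattening list comprehension in the example sometimes trips people up: <code>get_all_instances()</code> returns reservations, each of which holds one or more instances. Here is a minimal sketch of the idiom using stand-in classes (the class names and states below are made up, not boto's real API), plus a follow-on filter by state:</p>

```python
# Stand-ins for boto's Reservation and Instance objects, for illustration
class FakeInstance(object):
    def __init__(self, state):
        self.state = state

class FakeReservation(object):
    def __init__(self, instances):
        self.instances = instances

# get_all_instances() returns reservations; each holds 1+ instances
reservations = [
    FakeReservation([FakeInstance('running'), FakeInstance('stopped')]),
    FakeReservation([FakeInstance('running')]),
]

# same flattening idiom as the example above
instances = [i for r in reservations for i in r.instances]
running = [i for i in instances if i.state == 'running']
```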
<h4>For more information</h4>
<ul>
<li><a href="http://boto.s3.amazonaws.com/ref/ec2.html#module-boto.ec2.instance">EC2
instance API docs</a></li>
<li><a href="http://code.google.com/p/boto/source/browse/trunk/boto/ec2/instance.py">boto.ec2.instance.py
source code</a></li>
</ul>
s3curl notes
2010-02-25T13:53:58-08:00https://www.saltycrane.com/blog/2010/02/s3curl-notes/<ul>
<li>Download s3curl from
<a href="http://developer.amazonwebservices.com/connect/entry.jspa?categoryID=47&externalID=128">here</a>.
</li>
<li>Unzip, make executable
<pre>unzip s3-curl.zip
cd s3-curl
chmod a+x s3curl.pl</pre>
</li>
<li>Create <code>~/.s3curl</code> config file
<pre>%awsSecretAccessKeys = (
# personal account
personal => {
id => '1ME55KNV6SBTR7EXG0R2',
key => 'zyMrlZUKeG9UcYpwzlPko/+Ciu0K2co0duRM3fhi',
},
# corporate account
work => {
id => '1ATXQ3HHA59CYF1CVS02',
key => 'WQY4SrSS95pJUT95V6zWea01gBKBCL6PI0cdxeH8',
},
);</pre>
</li>
<li>List contents of a bucket
<pre>./s3curl.pl --id=work -- http://s3.amazonaws.com/mybucket</pre>
</li>
</ul>
<p>See also: <a href="http://open.eucalyptus.com/wiki/s3curl">http://open.eucalyptus.com/wiki/s3curl</a></p>
s3cmd notes
2010-02-25T12:38:46-08:00https://www.saltycrane.com/blog/2010/02/s3cmd-notes/<p><a href="http://s3tools.org/s3cmd">s3cmd</a> is an intuitive way
to work with Amazon's <a href="http://aws.amazon.com/s3/">S3</a>
on the command line. I first tried s3cmd based on
<a href="http://twitter.com/clemesha/status/9298830898">Alex Clemesha's
recommendation</a>. Here are my notes. I'm running on Ubuntu Karmic.
</p>
<h4 id="install-s3cmd">Install s3cmd</h4>
<pre class="console">$ sudo apt-get install s3cmd</pre>
<h4 id="configure-s3cmd">Configure s3cmd</h4>
<pre class="console">$ s3cmd --configure
Enter new values or accept defaults in brackets with Enter.
Refer to user manual for detailed description of all options.
Access key and Secret key are your identifiers for Amazon S3
Access Key: XXXXXXXXXXXXXX
Secret Key: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Encryption password is used to protect your files from reading
by unauthorized persons while in transfer to S3
Encryption password: XXXXX
Path to GPG program [/usr/bin/gpg]:
When using secure HTTPS protocol all communication with Amazon S3
servers is protected from 3rd party eavesdropping. This method is
slower than plain HTTP and can't be used if you're behind a proxy
Use HTTPS protocol [No]: yes
New settings:
Access Key: XXXXXXXXXXXXXX
Secret Key: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Encryption password: XXXXX
Path to GPG program: /usr/bin/gpg
Use HTTPS protocol: True
HTTP Proxy server name:
HTTP Proxy server port: 0
Test access with supplied credentials? [Y/n]
Please wait...
Success. Your access key and secret key worked fine :-)
Now verifying that encryption works...
Success. Encryption and decryption worked fine :-)
Save settings? [y/N] y
Configuration saved to '/home/saltycrane/.s3cfg'</pre>
<h4 id="list-buckets">List all your buckets</h4>
<pre class="console">$ s3cmd ls</pre>
<h4 id="list-bucket-contents">List contents of your bucket</h4>
<pre class="console">$ s3cmd ls s3://mybucket</pre>
<h4 id="upload-file">Upload a file (and make it public)</h4>
<pre class="console">$ s3cmd -P put /path/to/local/file.jpg s3://mybucket/my/prefix/file.jpg</pre>
<h4 id="delete-file">Delete a file</h4>
<pre class="console">$ s3cmd del s3://mybucket/my/prefix/file.jpg</pre>
<h4 id="s3cmd-help">Get help</h4>
<pre class="console">$ s3cmd --help
Usage: s3cmd [options] COMMAND [parameters]
S3cmd is a tool for managing objects in Amazon S3 storage. It allows for
making and removing "buckets" and uploading, downloading and removing
"objects" from these buckets.
Options:
-h, --help show this help message and exit
--configure Invoke interactive (re)configuration tool.
-c FILE, --config=FILE
Config file name. Defaults to /home/eliot/.s3cfg
--dump-config Dump current configuration after parsing config files
and command line options and exit.
-n, --dry-run Only show what should be uploaded or downloaded but
don't actually do it. May still perform S3 requests to
get bucket listings and other information though (only
for file transfer commands)
-e, --encrypt Encrypt files before uploading to S3.
--no-encrypt Don't encrypt files.
-f, --force Force overwrite and other dangerous operations.
--continue Continue getting a partially downloaded file (only for
[get] command).
--skip-existing Skip over files that exist at the destination (only
for [get] and [sync] commands).
-r, --recursive Recursive upload, download or removal.
-P, --acl-public Store objects with ACL allowing read for anyone.
--acl-private Store objects with default ACL allowing access for you
only.
--delete-removed Delete remote objects with no corresponding local file
[sync]
--no-delete-removed Don't delete remote objects.
-p, --preserve Preserve filesystem attributes (mode, ownership,
timestamps). Default for [sync] command.
--no-preserve Don't store FS attributes
--exclude=GLOB Filenames and paths matching GLOB will be excluded
from sync
--exclude-from=FILE Read --exclude GLOBs from FILE
--rexclude=REGEXP Filenames and paths matching REGEXP (regular
expression) will be excluded from sync
--rexclude-from=FILE Read --rexclude REGEXPs from FILE
--include=GLOB Filenames and paths matching GLOB will be included
even if previously excluded by one of
--(r)exclude(-from) patterns
--include-from=FILE Read --include GLOBs from FILE
--rinclude=REGEXP Same as --include but uses REGEXP (regular expression)
instead of GLOB
--rinclude-from=FILE Read --rinclude REGEXPs from FILE
--bucket-location=BUCKET_LOCATION
Datacentre to create bucket in. Either EU or US
(default)
-m MIME/TYPE, --mime-type=MIME/TYPE
Default MIME-type to be set for objects stored.
-M, --guess-mime-type
Guess MIME-type of files by their extension. Falls
back to default MIME-Type as specified by --mime-type
option
--add-header=NAME:VALUE
Add a given HTTP header to the upload request. Can be
used multiple times. For instance set 'Expires' or
'Cache-Control' headers (or both) using this options
if you like.
--encoding=ENCODING Override autodetected terminal and filesystem encoding
(character set). Autodetected: UTF-8
--list-md5 Include MD5 sums in bucket listings (only for 'ls'
command).
-H, --human-readable-sizes
Print sizes in human readable form (eg 1kB instead of
1234).
--progress Display progress meter (default on TTY).
--no-progress Don't display progress meter (default on non-TTY).
--enable Enable given CloudFront distribution (only for
[cfmodify] command)
--disable Enable given CloudFront distribution (only for
[cfmodify] command)
--cf-add-cname=CNAME Add given CNAME to a CloudFront distribution (only for
[cfcreate] and [cfmodify] commands)
--cf-remove-cname=CNAME
Remove given CNAME from a CloudFront distribution
(only for [cfmodify] command)
--cf-comment=COMMENT Set COMMENT for a given CloudFront distribution (only
for [cfcreate] and [cfmodify] commands)
-v, --verbose Enable verbose output.
-d, --debug Enable debug output.
--version Show s3cmd version (0.9.9) and exit.
Commands:
Make bucket
s3cmd mb s3://BUCKET
Remove bucket
s3cmd rb s3://BUCKET
List objects or buckets
s3cmd ls [s3://BUCKET[/PREFIX]]
List all object in all buckets
s3cmd la
Put file into bucket
s3cmd put FILE [FILE...] s3://BUCKET[/PREFIX]
Get file from bucket
s3cmd get s3://BUCKET/OBJECT LOCAL_FILE
Delete file from bucket
s3cmd del s3://BUCKET/OBJECT
Synchronize a directory tree to S3
s3cmd sync LOCAL_DIR s3://BUCKET[/PREFIX] or s3://BUCKET[/PREFIX] LOCAL_DIR
Disk usage by buckets
s3cmd du [s3://BUCKET[/PREFIX]]
Get various information about Buckets or Files
s3cmd info s3://BUCKET[/OBJECT]
Copy object
s3cmd cp s3://BUCKET1/OBJECT1 s3://BUCKET2[/OBJECT2]
Move object
s3cmd mv s3://BUCKET1/OBJECT1 s3://BUCKET2[/OBJECT2]
Modify Access control list for Bucket or Files
s3cmd setacl s3://BUCKET[/OBJECT]
List CloudFront distribution points
s3cmd cflist
Display CloudFront distribution point parameters
s3cmd cfinfo [cf://DIST_ID]
Create CloudFront distribution point
s3cmd cfcreate s3://BUCKET
Delete CloudFront distribution point
s3cmd cfdelete cf://DIST_ID
Change CloudFront distribution point parameters
s3cmd cfmodify cf://DIST_ID
See program homepage for more information at
http://s3tools.org</pre>
Card store project #4: Notes on using Amazon's CloudFront
2008-12-27T17:22:45-08:00https://www.saltycrane.com/blog/2008/12/card-store-project-4-notes-using-amazons-cloudfront/<p>I haven't been keeping up with current events very well recently, but
I haven't noticed many people using Amazon's
<a href="http://aws.amazon.com/s3/">S3</a> or
<a href="http://aws.amazon.com/cloudfront/">CloudFront</a> with Django on
VPS hosting. Though there is <a href="http://holovaty.com/blog/archive/2006/04/07/0927">
Adrian's post</a> from 2006, I see more articles about serving media
files with <a href="http://www.lighttpd.net/">lighttpd</a> or, more recently,
<a href="http://wiki.codemongers.com/Main">nginx</a>. Is a CDN unnecessary
for our needs? I thought it'd be good to take some load off my VPS server
since I need all the memory I can get for my Django web server and database.
But maybe web servers such as nginx are so lightweight it doesn't make much
of an impact? I didn't think the cost would be too much-- on this blog, I'm
only paying about $0.10/month for S3 services to serve my static media. Of course,
there isn't a lot of static media to serve on this blog, but it still seems
like it would be a fraction of the $20/month I'm paying for VPS at
<a href="http://www.slicehost.com/">Slicehost</a>. It may be the convenience
factor-- because every time I update a static file, I then have to upload it
to S3. This is even more inconvenient for files uploaded through the admin
interface. I think some people have probably solved this already... maybe using
Django signals. Maybe it is a combination of all these things. Please let me know what
you think. If you're not using S3/CloudFront, why aren't you?</p>
<p>Well, I went ahead and gave CloudFront a try since it is so easy. My card store project
website seems to
be somewhat faster than before. Please check it out
<a href="http://handsoncards.com/">here</a>.
I'm still not sure if I should be happy with the site's speed though. I did a quick
<a href="http://www.danga.com/memcached/">memcached</a> install, but I don't
think I've configured it properly. I will probably need to revisit that.
Anyways, here are my notes on using
CloudFront with my <a href="http://www.satchmoproject.com/">Satchmo</a> store.</p>
<h4>Sign up for S3</h4>
<ul>
<li>Sign up for <a href="http://aws.amazon.com/">Amazon
Web Services</a></li>
<li>Sign up for <a href="http://aws.amazon.com/s3/">Simple Storage
Service</a></li>
<li>Take note of your "Access Key ID" and your "Secret Access Key" under
"Your Account", "Access Identifiers"</li>
</ul>
<h4>Get S3 Python library</h4>
<ul>
<li>Download the <a href="http://developer.amazonwebservices.com/connect/entry.jspa?externalID=134">
Amazon S3 Python library</a></li>
<li>Unpack it, and put <code>s3-example-libraries/python/S3.py</code>
<a href="http://www.saltycrane.com/blog/2008/08/somewhere-your-python-path/">somewhere on your
Python path</a>.</li>
</ul>
<h4>Create a S3 bucket</h4>
<ul>
<li>Create a file named <code>create_bucket.py</code>:
<pre>import S3
ACCESS_KEY = 'myaccesskey'
SECRET_KEY = 'mysecretaccesskey'
BUCKET_NAME = 'handsoncards'
conn = S3.AWSAuthConnection(ACCESS_KEY, SECRET_KEY)
conn.create_bucket(BUCKET_NAME)</pre>
</li>
<li>Run it:
<pre>python create_bucket.py</pre>
</li>
</ul>
<h4>Upload files to S3</h4>
<ul>
<li>Download <a href="http://www.holovaty.com/code/update_s3.py">Adrian's S3 upload
script</a> and save it to <code>/srv/HandsOnCards/handsoncards/bin/update_s3.py</code></li>
<li>Edit the script with the correct values for <code>AWS_ACCESS_KEY_ID</code>,
<code>AWS_SECRET_ACCESS_KEY</code>, and <code>BUCKET_NAME</code>.</li>
<li>Upload files. (Assumes static directory is linked to <code>/var/www/site_media</code>).
<pre>cd /var/www
find -L site_media | grep -v '~$' | python /srv/HandsOnCards/handsoncards/bin/update_s3.py
find -L admin_media | grep -v '~$' | python /srv/HandsOnCards/handsoncards/bin/update_s3.py</pre>
</li>
</ul>
<h4>Set up CloudFront</h4>
<ul>
<li>Sign up for CloudFront</li>
<li>Get the S3 Fox Firefox plugin</li>
<li>Click "Manage Accounts" and enter access key and secret key</li>
<li>Right-click on your bucket (handsoncards) and select "Manage Distributions".
Enter a "Comment" and optional CNAME, then click "Create Distribution".
</li>
<li>Wait a while until the distribution is created. Take note of the
"Domain Name". For me it is: <code>http://d16z1yuk7jeryy.cloudfront.net</code>
</li>
<li>Click the refresh button until the "Status" says "Deployed"</li>
</ul>
<h4>Update settings and templates to use CloudFront</h4>
<ul>
<li>In settings.py set MEDIA_URL and ADMIN_MEDIA_PREFIX as follows:
<pre>MEDIA_URL = 'http://d16z1yuk7jeryy.cloudfront.net/site_media/'
ADMIN_MEDIA_PREFIX = 'http://d16z1yuk7jeryy.cloudfront.net/admin_media/'</pre>
</li>
<li>In your base.html template and all other templates, replace
<code>/site_media/</code> with
<code>http://d16z1yuk7jeryy.cloudfront.net/site_media/</code>.
</li>
</ul>
<h4>Update 2009-06-08: Add "Expires" headers</h4>
<p>For better performance, it is good to add a far-future "Expires" header to static
content on S3. To do this I modified Adrian's script to set the "Expires"
header to be one year in the future as shown below.
Thanks to <a href="#c2711">orip</a> for this tip.</p>
<pre><span style="color:red">from datetime import datetime, timedelta</span>
import mimetypes
import os.path
import sys

import S3 # Get this from Amazon

AWS_ACCESS_KEY_ID = 'CHANGEME'
AWS_SECRET_ACCESS_KEY = 'CHANGEME'
BUCKET_NAME = 'CHANGEME'

def update_s3():
    conn = S3.AWSAuthConnection(AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY)
    for line in sys.stdin:
        filename = os.path.normpath(line[:-1])
        if filename == '.' or not os.path.isfile(filename):
            continue # Skip this, because it's not a file.
        print "Uploading %s" % filename
        filedata = open(filename, 'rb').read()
        <span style="color:red">expires = datetime.utcnow() + timedelta(days=365)
        expires = expires.strftime("%a, %d %b %Y %H:%M:%S GMT")</span>
        content_type = mimetypes.guess_type(filename)[0]
        if not content_type:
            content_type = 'text/plain'
        conn.put(BUCKET_NAME, filename, S3.S3Object(filedata),
                 {'x-amz-acl': 'public-read',
                  'Content-Type': content_type,
                  <span style="color:red">'Expires': expires,</span>
                 })

if __name__ == "__main__":
    update_s3()</pre>
<h4>For more information</h4>
<ul>
<li><a href="http://developer.yahoo.com/performance/rules.html#expires">
"Add an Expires or a Cache-Control Header" section of the Yahoo Developer Network
"Best Practices for Speeding Up Your Web Site" guide</a></li>
<li><a href="http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.21">
Section 14.21 of the HTTP specification</a></li>
<li><a href="http://www.drunkenfist.com/304/2007/12/26/setting-far-future-expires-headers-for-images-in-amazon-s3/">
Rob Larsen's blog post: "Setting Far Future Expires Headers For Images In Amazon S3"</a></li>
</ul>
<h4>Update 2009-10-21: Add CNAME record</h4>
<p>Go to your DNS Zone manager and add a CNAME record with the following parameters:</p>
<ul>
<li>Type: CNAME</li>
<li>Name: static</li>
<li>Data: d16z1yuk7jeryy.cloudfront.net. <em>(Don't forget the period at the end!)</em></li>
<li>TTL: <em>whatever you want. I left it at 86400</em></li>
</ul>
<p>Now wherever I previously would have used <code>http://d16z1yuk7jeryy.cloudfront.net</code>, I can replace it with <code>http://static.handsoncards.com</code>.</p>
Notes on Python deployment using Fabric
2008-09-28T00:24:21-07:00https://www.saltycrane.com/blog/2008/09/notes-python-deployment-using-fabric/<p>I found out about
<a href="http://www.nongnu.org/fab/">Fabric</a> via Armin Ronacher's article
<a href="http://lucumr.pocoo.org/cogitations/2008/07/17/deploying-python-web-applications/">
Deploying Python Web Applications</a>.
Fabric is a
<a href="http://www.capify.org/">Capistrano</a> inspired
deployment tool for the Python community. It is very simple
to use. There are 4 main commands: <code>local</code> is
almost like <code>os.system</code> because it runs a command
on the local machine, <code>run</code> and <code>sudo</code>
run a command on a remote machine as either a normal user
or as root, and <code>put</code> transfers a file to a remote
machine.</p>
<p>Here is a sample setup which displays information about
the Apache processes on my remote EC2 instance.
</p>
<ul>
<li><a href="http://www.saltycrane.com/blog/2007/01/how-to-install-easy-install-for-python/">
Install Easy Install</a></li>
<li>Install Fabric
<pre>$ sudo easy_install Fabric</pre></li>
<li>Create a file called <code>fabfile.py</code> located at <code>~/myproject</code>
<pre class="python">def ec2():
    set(fab_hosts = ['ec2-65-234-55-183.compute-1.amazonaws.com'],
        fab_user = 'sofeng',
        fab_password = 'mypassword',)

def ps_apache():
    run("ps -e -O rss,pcpu | grep apache")</pre>
Note: for security reasons, you can remove the password from the fabfile and
Fabric will prompt for it interactively. Per
<a href="http://www.nongnu.org/fab/user_guide.html">the documentation</a>,
Fabric also supports key-based authentication.<br><br>
</li>
<li>Run it
<pre>$ cd ~/myproject
$ fab ec2 ps_apache</pre>
Results:
<pre> Fabric v. 0.0.9, Copyright (C) 2008 Christian Vest Hansen.
Fabric comes with ABSOLUTELY NO WARRANTY; for details type `fab warranty'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `fab license' for details.
Running ec2...
Running ps_apache...
Logging into the following hosts as sofeng:
ec2-65-234-55-183.compute-1.amazonaws.com
[ec2-65-234-55-183.compute-1.amazonaws.com] run: ps -e -O rss,pcpu | grep apache
[ec2-65-234-55-183.compute-1.amazonaws.com] out: 2163 5504 0.0 S ? 00:00:00 /usr/sbin/apache2 -k start
[ec2-65-234-55-183.compute-1.amazonaws.com] out: 2520 15812 0.0 S ? 00:00:00 /usr/sbin/apache2 -k start
[ec2-65-234-55-183.compute-1.amazonaws.com] out: 2521 3664 0.0 S ? 00:00:00 /usr/sbin/apache2 -k start
[ec2-65-234-55-183.compute-1.amazonaws.com] out: 2522 3664 0.0 S ? 00:00:00 /usr/sbin/apache2 -k start
[ec2-65-234-55-183.compute-1.amazonaws.com] out: 2523 3664 0.0 S ? 00:00:00 /usr/sbin/apache2 -k start
[ec2-65-234-55-183.compute-1.amazonaws.com] out: 2524 3664 0.0 S ? 00:00:00 /usr/sbin/apache2 -k start
[ec2-65-234-55-183.compute-1.amazonaws.com] out: 2619 3664 0.0 S ? 00:00:00 /usr/sbin/apache2 -k start
[ec2-65-234-55-183.compute-1.amazonaws.com] out: 2629 1204 0.0 R ? 00:00:00 /bin/bash -l -c ps -e -O rss,pcpu | grep apache
Done.</pre>
</li>
</ul>
Notes on using EC2 command line tools
2008-09-27T20:52:48-07:00https://www.saltycrane.com/blog/2008/09/notes-using-ec2-command-line-tools/
<h5>Create AWS accounts</h5>
<ul>
<li>Create an AWS account at <a href="http://aws.amazon.com/">http://aws.amazon.com/</a>.</li>
<li>Create an AWS EC2 account at <a href="http://aws.amazon.com/ec2/">http://aws.amazon.com/ec2/</a>.
(You will need to enter a credit card number.)
</li>
</ul>
<h5>Create an X.509 Certificate</h5>
<p>Note: An X.509 Certificate is one type of Access Identifier. Access Identifiers
are used to <em>"identify yourself as the sender of a request to an AWS web service"</em>.
There are two types of access identifiers: AWS Access Key Identifiers and
X.509 Certificates. AWS Access Key Identifiers are supported by
all Amazon Web Services and X.509 Certificates are supported only by
Amazon's EC2 and SQS services (see <a href="http://aws-portal.amazon.com/gp/aws/developer/account/access-identifier-help.html">
here</a> for the chart). However, for some reason, the popular Java command
line tools for EC2 only support X.509 Certificates (and not AWS Access Key
Identifiers).</p>
<ul>
<li>From <a href="http://aws.amazon.com/account/">Your Account page</a>,
select <a href="http://aws-portal.amazon.com/gp/aws/developer/account/index.html?action=access-key">Access
Identifiers</a>.</li>
<li>In the "X.509 Certificate" section, click "Create New".
</li>
<li>Download both the "Private Key" file and the "X.509 Certificate" file
to the directory, <code>~/.ec2</code>. (The private key file will be
named something like pk-XXXXXXXXXXXXXXXXXXXXXX.pem and the X.509
Certificate file will be named something like
cert-XXXXXXXXXXXXXXXXXXXXXX.pem.)</li>
</ul>
<h5>Install Java</h5>
<p>The command line tools require Java version 5 or later. Only the
JRE is required.</p>
<ul>
<li><pre>$ sudo apt-get install sun-java6-jre</pre></li>
</ul>
<h5>Download Java Command-line Tools</h5>
<ul>
<li>Go to the <a href="http://developer.amazonwebservices.com/connect/entry.jspa?externalID=351&categoryID=88">
Amazon EC2 Command-Line Tools</a> library page, and
<a href="http://www.amazon.com/gp/redirect.html/ref=aws_rc_ec2tools?location=http://s3.amazonaws.com/ec2-downloads/ec2-api-tools.zip&token=A80325AA4DAB186C80828ED5138633E3F49160D9">
Download the Amazon EC2 Command-Line Tools</a>.</li>
<li>Unzip the tools to <code>~/lib</code>
<pre>$ unzip ec2-api-tools.zip
$ mv ec2-api-tools-1.3-24159 ~/lib</pre>
</li>
</ul>
<h5>Define environment variables</h5>
<ul>
<li>Add the following lines to your <code>~/.bashrc</code> (or wherever
you set your environment variables).
<pre>export EC2_HOME=$HOME/lib/ec2-api-tools-1.3-24159
export JAVA_HOME=/usr
export EC2_PRIVATE_KEY=$HOME/.ec2/pk-XXXXXXXXXXXXXXXXXXXX.pem
export EC2_CERT=$HOME/.ec2/cert-XXXXXXXXXXXXXXXXXXXX.pem
export PATH=$PATH:$EC2_HOME/bin
</pre>
</li>
<li>Source your <code>.bashrc</code> or whichever file you used
<pre>$ source ~/.bashrc</pre>
</li>
</ul>
<h5>Test the command-line tools</h5>
<ul>
<li>Run the <code>ec2-describe-images</code> command to verify everything is working.
It should list all the Ubuntu 8.xx images from Alestic.
<pre>$ ec2-describe-images -a | grep alestic/ubuntu-8</pre>
Results:
<pre height='300px' style="height: 200px; overflow: auto">IMAGE ami-3a7c9953 alestic/ubuntu-8.04-hardy-base-20080419.manifest.xml 063491364108 available public i386 machine aki-a71cf9ce ari-a51cf9cc
IMAGE ami-75789d1c alestic/ubuntu-8.04-hardy-base-20080424.manifest.xml 063491364108 available public i386 machine aki-a71cf9ce ari-a51cf9cc
IMAGE ami-ce44a1a7 alestic/ubuntu-8.04-hardy-base-20080430.manifest.xml 063491364108 available public i386 machine aki-a71cf9ce ari-a51cf9cc
IMAGE ami-2048ad49 alestic/ubuntu-8.04-hardy-base-20080514.manifest.xml 063491364108 available public i386 machine aki-a71cf9ce ari-a51cf9cc
IMAGE ami-6a57b203 alestic/ubuntu-8.04-hardy-base-20080517.manifest.xml 063491364108 available public i386 machine aki-a71cf9ce ari-a51cf9cc
IMAGE ami-26bc584f alestic/ubuntu-8.04-hardy-base-20080628.manifest.xml 063491364108 available public i386 machine aki-a71cf9ce ari-a51cf9cc
IMAGE ami-179e7a7e alestic/ubuntu-8.04-hardy-base-20080803.manifest.xml 063491364108 available public i386 machine aki-a71cf9ce ari-a51cf9cc
IMAGE ami-c0fa1ea9 alestic/ubuntu-8.04-hardy-base-20080905.manifest.xml 063491364108 available public i386 machine aki-a71cf9ce ari-a51cf9cc
IMAGE ami-38d43051 alestic/ubuntu-8.04-hardy-base-20080922.manifest.xml 063491364108 available public i386 machine aki-a71cf9ce ari-a51cf9cc
IMAGE ami-1cd73375 alestic/ubuntu-8.04-hardy-base-20080924.manifest.xml 063491364108 available public i386 machine aki-a71cf9ce ari-a51cf9cc
IMAGE ami-337c995a alestic/ubuntu-8.04-hardy-desktop-20080419.manifest.xml 063491364108 available public i386 machine aki-a71cf9ce ari-a51cf9cc
IMAGE ami-4f789d26 alestic/ubuntu-8.04-hardy-desktop-20080424.manifest.xml 063491364108 available public i386 machine aki-a71cf9ce ari-a51cf9cc
IMAGE ami-f744a19e alestic/ubuntu-8.04-hardy-desktop-20080430.manifest.xml 063491364108 available public i386 machine aki-a71cf9ce ari-a51cf9cc
IMAGE ami-1f4bae76 alestic/ubuntu-8.04-hardy-desktop-20080514.manifest.xml 063491364108 available public i386 machine aki-a71cf9ce ari-a51cf9cc
IMAGE ami-0e57b267 alestic/ubuntu-8.04-hardy-desktop-20080517.manifest.xml 063491364108 available public i386 machine aki-a71cf9ce ari-a51cf9cc
IMAGE ami-b5bc58dc alestic/ubuntu-8.04-hardy-desktop-20080628.manifest.xml 063491364108 available public i386 machine aki-a71cf9ce ari-a51cf9cc
IMAGE ami-f39e7a9a alestic/ubuntu-8.04-hardy-desktop-20080803.manifest.xml 063491364108 available public i386 machine aki-a71cf9ce ari-a51cf9cc
IMAGE ami-44c4202d alestic/ubuntu-8.04-hardy-desktop-20080905.manifest.xml 063491364108 available public i386 machine aki-a71cf9ce ari-a51cf9cc
IMAGE ami-f7d4309e alestic/ubuntu-8.04-hardy-desktop-20080922.manifest.xml 063491364108 available public i386 machine aki-a71cf9ce ari-a51cf9cc
IMAGE ami-88d733e1 alestic/ubuntu-8.04-hardy-desktop-20080924.manifest.xml 063491364108 available public i386 machine aki-a71cf9ce ari-a51cf9cc
IMAGE ami-bcbe5ad5 alestic/ubuntu-8.04-hardy-rightscale-20080701.manifest.xml 063491364108 available public i386 machine aki-a71cf9ce ari-a51cf9cc
IMAGE ami-27b95d4e alestic/ubuntu-8.04-hardy-rightscale-20080703.manifest.xml 063491364108 available public i386 machine aki-a71cf9ce ari-a51cf9cc
IMAGE ami-b1ea0ed8 alestic/ubuntu-8.04-hardy-rightscale-20080824.manifest.xml 063491364108 available public i386 machine aki-a71cf9ce ari-a51cf9cc
IMAGE ami-47c4202e alestic/ubuntu-8.04-hardy-rightscale-20080905.manifest.xml 063491364108 available public i386 machine aki-a71cf9ce ari-a51cf9cc
IMAGE ami-f4d4309d alestic/ubuntu-8.04-hardy-rightscale-20080922.manifest.xml 063491364108 available public i386 machine aki-a71cf9ce ari-a51cf9cc
IMAGE ami-89d733e0 alestic/ubuntu-8.04-hardy-rightscale-20080924.manifest.xml 063491364108 available public i386 machine aki-a71cf9ce ari-a51cf9cc
IMAGE ami-dcbc58b5 alestic/ubuntu-8.10-intrepid-base-20080628.manifest.xml 063491364108 available public i386 machine aki-a71cf9ce ari-a51cf9cc
IMAGE ami-db9e7ab2 alestic/ubuntu-8.10-intrepid-base-20080804.manifest.xml 063491364108 available public i386 machine aki-a71cf9ce ari-a51cf9cc
IMAGE ami-9de105f4 alestic/ubuntu-8.10-intrepid-base-20080814.manifest.xml 063491364108 available public i386 machine aki-a71cf9ce ari-a51cf9cc
IMAGE ami-c3fa1eaa alestic/ubuntu-8.10-intrepid-base-20080905.manifest.xml 063491364108 available public i386 machine aki-a71cf9ce ari-a51cf9cc
IMAGE ami-3bd43052 alestic/ubuntu-8.10-intrepid-base-20080922.manifest.xml 063491364108 available public i386 machine aki-a71cf9ce ari-a51cf9cc
IMAGE ami-1ad73373 alestic/ubuntu-8.10-intrepid-base-20080924.manifest.xml 063491364108 available public i386 machine aki-a71cf9ce ari-a51cf9cc
IMAGE ami-b6bc58df alestic/ubuntu-8.10-intrepid-desktop-20080628.manifest.xml 063491364108 available public i386 machine aki-a71cf9ce ari-a51cf9cc
IMAGE ami-d69e7abf alestic/ubuntu-8.10-intrepid-desktop-20080804.manifest.xml 063491364108 available public i386 machine aki-a71cf9ce ari-a51cf9cc
IMAGE ami-d4e206bd alestic/ubuntu-8.10-intrepid-desktop-20080815.manifest.xml 063491364108 available public i386 machine aki-a71cf9ce ari-a51cf9cc
IMAGE ami-7dc22614 alestic/ubuntu-8.10-intrepid-desktop-20080908.manifest.xml 063491364108 available public i386 machine aki-a71cf9ce ari-a51cf9cc
IMAGE ami-f5d4309c alestic/ubuntu-8.10-intrepid-desktop-20080922.manifest.xml 063491364108 available public i386 machine aki-a71cf9ce ari-a51cf9cc
IMAGE ami-b6d733df alestic/ubuntu-8.10-intrepid-desktop-20080924.manifest.xml 063491364108 available public i386 machine aki-a71cf9ce ari-a51cf9cc</pre>
</li>
</ul>
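<p>Since the <code>ec2-describe-images</code> output is whitespace-delimited, it is easy to slice with <code>awk</code> rather than eyeballing the full listing. Here is a small sketch (<code>parse_images</code> is a hypothetical helper name; the sample line is taken verbatim from the output above) that pulls out just the AMI ID and manifest name:</p>

```shell
# Parse ec2-describe-images output: print "ami-id manifest" for each
# IMAGE record. Records are whitespace-delimited, one per line.
parse_images() {
    awk '$1 == "IMAGE" { print $2, $3 }'
}

# Example, using one line of the output shown above:
echo 'IMAGE ami-1cd73375 alestic/ubuntu-8.04-hardy-base-20080924.manifest.xml 063491364108 available public i386 machine aki-a71cf9ce ari-a51cf9cc' \
    | parse_images
# prints: ami-1cd73375 alestic/ubuntu-8.04-hardy-base-20080924.manifest.xml
```

<p>In practice you would pipe the real command through it, e.g. <code>ec2-describe-images -a | parse_images | grep hardy-base</code>.</p>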
<h5>Generate a keypair</h5>
<p>In the second step, I generated a keypair as my X.509 certificate, which is used
to identify myself to Amazon Web Services. Now I need to create another keypair,
which is used to log into a running EC2 instance. (Note: there is exactly one
X.509 certificate per user, i.e. per AWS account, but a user can have many keypairs
for logging into various EC2 instances.) See also the
<a href="http://docs.amazonwebservices.com/AWSEC2/2008-05-05/GettingStartedGuide/running-an-instance.html#generating-a-keypair">
Generating a keypair</a> section in the Getting Started Guide.
</p>
<ul>
<li>Generate the keypair. I named it <code>disco-keypair</code> because
I will use it with EC2 instances for trying out
<a href="http://discoproject.org">Disco</a>.
<pre>$ ec2-add-keypair disco-keypair > ~/.ec2/id_rsa-disco-keypair
</pre>
</li>
<li>Set the permissions on the private key:
<pre>$ chmod 600 ~/.ec2/id_rsa-disco-keypair</pre>
</li>
</ul>
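<p><code>ec2-add-keypair</code> prints a <code>KEYPAIR</code> fingerprint line followed by the PEM-encoded private key, and only the PEM block is actual key material. If your <code>ssh</code> complains about the key file, you can strip the header line with <code>sed</code>. This is just a sketch (<code>extract_pem</code> is a hypothetical helper name, and the BEGIN/END markers assume an RSA key):</p>

```shell
# Extract only the PEM block (BEGIN...END markers inclusive) from
# ec2-add-keypair output, dropping the leading KEYPAIR fingerprint line.
extract_pem() {
    sed -n '/^-----BEGIN RSA PRIVATE KEY-----$/,/^-----END RSA PRIVATE KEY-----$/p'
}

# Usage sketch:
#   ec2-add-keypair disco-keypair | extract_pem > ~/.ec2/id_rsa-disco-keypair
#   chmod 600 ~/.ec2/id_rsa-disco-keypair
```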
<h5>Run an EC2 instance</h5>
<ul>
<li>Select an image to run. I used the <code>alestic/ubuntu-8.04-hardy-base-20080924</code>
image with image ID <code>ami-1cd73375</code>.
</li>
<li>Run the instance
<pre>$ ec2-run-instances -k disco-keypair ami-1cd73375</pre>
It should return something like:
<pre>RESERVATION r-568f5d3f 719606167433 default
INSTANCE i-339f3c5a ami-1cd73375 pending disco-keypair 0 m1.small 2008-09-28T00:50:35+0000 us-east-1c aki-a71cf9ce ari-a51cf9cc</pre>
</li>
<li>Check the status of the running instance:
<pre>$ ec2-describe-instances</pre>
After a short period of time, it should return something like:
<pre>RESERVATION r-568f5d3f 719606167433 default
INSTANCE i-339f3c5a ami-1cd73375 ec2-75-101-200-13.compute-1.amazonaws.com ip-10-251-30-10.ec2.internal running disco-keypair 0 m1.small 2008-09-28T00:50:35+0000 us-east-1c aki-a71cf9ce ari-a51cf9cc</pre>
Note the address <code>ec2-75-101-200-13.compute-1.amazonaws.com</code>. This
is the external address used to connect to the instance. Also note the instance
ID <code>i-339f3c5a</code>. This is needed to terminate the instance.
</li>
<li>Authorize access to the instance through ports 22 (ssh) and 80 (http)
<pre>$ ec2-authorize default -p 22
GROUP default
PERMISSION default ALLOWS tcp 22 22 FROM CIDR 0.0.0.0/0</pre>
<pre>$ ec2-authorize default -p 80
GROUP default
PERMISSION default ALLOWS tcp 80 80 FROM CIDR 0.0.0.0/0</pre>
</li>
</ul>
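<p>Rather than re-running <code>ec2-describe-instances</code> by hand until the state changes to <code>running</code>, the wait step can be scripted. This is only a sketch: <code>instance_dns</code> is a hypothetical helper, the field positions are assumed from the sample <code>INSTANCE</code> line above (they only hold once the instance has a DNS name), and the polling loop itself needs real AWS credentials, so it is left as a comment:</p>

```shell
# Print the public DNS name of a given instance from ec2-describe-instances
# output, but only once it has reached the "running" state.
# Assumed field layout (from the sample output above, after the instance
# has an address):
#   INSTANCE <id> <ami> <public-dns> <private-dns> <state> ...
instance_dns() {
    awk -v id="$1" '$1 == "INSTANCE" && $2 == id && $6 == "running" { print $4 }'
}

# Polling sketch (requires real credentials, so not run here):
#   while true; do
#       dns=$(ec2-describe-instances | instance_dns i-339f3c5a)
#       [ -n "$dns" ] && break
#       sleep 10
#   done
#   echo "instance is running at $dns"
```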
<h5>SSH into instance</h5>
<ul>
<li>Use the address from the previous step to SSH into your instance:
<pre>$ ssh -i ~/.ec2/id_rsa-disco-keypair -l root ec2-75-101-200-13.compute-1.amazonaws.com</pre>
</li>
</ul>
<h5>Terminate the instance</h5>
<ul>
<li><pre>$ ec2-terminate-instances i-339f3c5a</pre>
which returns:
<pre>INSTANCE i-339f3c5a running shutting-down</pre>
</li>
<li>Running <code>ec2-describe-instances</code> shows that the
instance is terminated.
<pre>$ ec2-describe-instances
RESERVATION r-568f5d3f 719606167433 default
INSTANCE i-339f3c5a ami-1cd73375 terminated disco-keypair 0 m1.small 2008-09-28T00:50:35+0000 aki-a71cf9ce ari-a51cf9cc</pre>
</li>
</ul>
Notes on Django and MySql on Amazon's EC2
2008-08-30T03:08:25-07:00
https://www.saltycrane.com/blog/2008/08/notes-django-and-mysql-amazons-ec2/
<h5>Install Elasticfox</h5>
<p>Install the Elasticfox Firefox Extension for Amazon EC2:
<a href="http://developer.amazonwebservices.com/connect/entry.jspa?externalID=609">
http://developer.amazonwebservices.com/connect/entry.jspa?externalID=609</a>
</p>
<h5>Set up Amazon EC2 accounts and Elasticfox</h5>
<p>Follow
<a href="http://arope99.blogspot.com/2008/05/getting-started-with-amazon-elastic.html">
Arope's instructions for setting up Amazon EC2 accounts
and Elasticfox</a>. I used the
alestic/ubuntu-8.04-hardy-base-20080628.manifest.xml machine
image.
</p>
<h5>view standard apache page</h5>
<p>In Elasticfox, right-click on your running instance and select
"Copy Public DNS Name to clipboard". Then, paste that address
in your browser. You should see Apache's "It works!" page.
</p>
<h5>ssh into instance</h5>
<p>In Elasticfox, right-click on your running instance and select
"SSH to Public Domain Name"</p>
<h5>install stuff</h5>
<p>Ubuntu Hardy has the following versions:</p>
<ul>
<li>Apache 2.2.8</li>
<li>Mod_python 3.3.1</li>
<li>MySql 5.0.51</li>
<li>Django 0.96.1</li>
</ul>
<br>
<p>On your remote instance, do the following.</p>
<pre># apt-get update
# apt-get install python-django
# apt-get install mysql-server
# apt-get install python-mysqldb
# apt-get install libapache2-mod-python</pre>
<p><em>Update 2008-09-09</em>: The
<a href="http://www.djangoproject.com/documentation/modpython/">Django mod_python
documentation</a> recommends using Apache's
<a href ="http://httpd.apache.org/docs/2.2/mod/prefork.html">prefork MPM</a> as opposed
to the <a href="http://httpd.apache.org/docs/2.2/mod/worker.html">worker MPM</a>. The
worker MPM was installed by default on my Alestic Ubuntu image so I uninstalled it and replaced it
with the prefork version.</p>
<pre># apt-get autoremove --purge apache2-mpm-worker
# apt-get install apache2-mpm-prefork</pre>
<p>To see your current version of Apache, run the command:
<code>apache2 -V</code></p>
<h5>create a django project</h5>
<pre># cd /srv
# django-admin startproject mysite</pre>
<h5>configure django mod_python</h5>
<p>See also Jeff Baier's article:
<a href="http://www.jeffbaier.com/2007/07/26/installing-django-on-an-ubuntu-linux-server/">
Installing Django on an Ubuntu Linux Server</a>
for more information.
</p>
<p>Edit <code>/etc/apache2/httpd.conf</code> and insert the
following:</p>
<pre>&lt;Location "/"&gt;
SetHandler python-program
PythonHandler django.core.handlers.modpython
SetEnv DJANGO_SETTINGS_MODULE mysite.settings
PythonPath "['/srv'] + sys.path"
PythonDebug On
&lt;/Location&gt;</pre>
<h5>restart the apache server</h5>
<pre># /etc/init.d/apache2 restart</pre>
<p>You should see Django's "It Worked!" page.</p>
<h5>Set up a MySql database and user</h5>
<p>Note: use the password you entered when installing MySql.</p>
<pre># mysql -u root -p
Enter password:
mysql> CREATE DATABASE django_db;
Query OK, 1 row affected (0.01 sec)
mysql> GRANT ALL ON django_db.* TO 'djangouser'@'localhost' IDENTIFIED BY 'yourpassword';
Query OK, 0 rows affected (0.03 sec)
mysql> quit
Bye</pre>
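<p>Those two statements are the same for any new app, so they can be generated with a tiny shell helper instead of retyping them. This is just a sketch (<code>make_db_sql</code> is a hypothetical name), and its output still has to be piped into <code>mysql -u root -p</code> yourself:</p>

```shell
# Emit the CREATE DATABASE / GRANT statements for a new Django database.
# Usage: make_db_sql <db-name> <user> <password>
make_db_sql() {
    printf "CREATE DATABASE %s;\n" "$1"
    printf "GRANT ALL ON %s.* TO '%s'@'localhost' IDENTIFIED BY '%s';\n" "$1" "$2" "$3"
}

# Usage sketch:
#   make_db_sql django_db djangouser yourpassword | mysql -u root -p
```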
<h5>Edit the Django database settings</h5>
Edit <code>mysite/settings.py</code>:
<pre>DATABASE_ENGINE = 'mysql' # 'postgresql_psycopg2', 'postgresql', 'mysql', 'sqlite3' or 'ado_mssql'.
DATABASE_NAME = 'django_db' # Or path to database file if using sqlite3.
DATABASE_USER = 'djangouser' # Not used with sqlite3.
DATABASE_PASSWORD = 'yourpassword' # Not used with sqlite3.
DATABASE_HOST = '' # Set to empty string for localhost. Not used with sqlite3.
DATABASE_PORT = '' # Set to empty string for default. Not used with sqlite3.</pre>
<h5>Do a 'syncdb' to create the database tables</h5>
<pre># cd mysite
# python manage.py syncdb
Creating table auth_message
Creating table auth_group
Creating table auth_user
Creating table auth_permission
Creating table django_content_type
Creating table django_session
Creating table django_site
You just installed Django's auth system, which means you don't have any superusers defined.
Would you like to create one now? (yes/no): yes
Username (Leave blank to use 'sofeng'):
E-mail address: sofeng@email.com
Password:
Password (again):
Superuser created successfully.
Installing index for auth.Message model
Installing index for auth.Permission model
Loading 'initial_data' fixtures...
No fixtures found.</pre>
<h5>upload a mercurial django project</h5>
<p>on the remote instance, install mercurial:</p>
<pre># apt-get install mercurial</pre>
<p>on your local machine with the mercurial repo, run:</p>
<pre>$ hg clone -e 'ssh -i /home/sofeng/.ec2-elasticfox/id_django-keypair.pem' yourproj ssh://root@yourdns.compute-1.amazonaws.com//srv/yourproj</pre>
where <code>/home/sofeng/.ec2-elasticfox/id_django-keypair.pem</code> is
the private key associated with your instance and
<code>yourdns.compute-1.amazonaws.com</code> is the
public domain name associated with your instance.
<p>back on the remote instance:</p>
<pre># cd /srv/yourproj
# hg update</pre>
<pre># python manage.py syncdb</pre>
<h5>set up apache to serve static files</h5>
<ul>
<li>Create a link to the media files:
<pre># cd /var/www
# ln -s /srv/yourproj/media site_media
# ln -s /usr/share/python-support/python-django/django/contrib/admin/media/ admin_media</pre>
</li>
<li>Edit <code>/etc/apache2/httpd.conf</code>:
<pre>&lt;Location "/"&gt;
SetHandler python-program
PythonHandler django.core.handlers.modpython
SetEnv DJANGO_SETTINGS_MODULE yourproj.settings
PythonPath "['/srv'] + sys.path"
PythonDebug On
&lt;/Location&gt;
&lt;Location "/site_media"&gt;
SetHandler None
&lt;/Location&gt;
&lt;Location "/admin_media"&gt;
SetHandler None
&lt;/Location&gt;</pre>
</li>
</ul>
<h5>Restart the apache server</h5>
<pre># /etc/init.d/apache2 restart</pre>
<br>