Example Ansible playbook for deploying a Serverless service.
Benefits of using this approach:
- The service can be tested in the build flow
- Stack deployment is faster when the service is already compiled and the tests have been run
- The same tested package is used in all the stages
For small-scale serverless projects there is also a lite version, serverless-deployment-ansible-lite, that builds and deploys the Serverless service.
Contents
- `group_vars` — default variables
- `inventories` — inventory files and variables for environments (development, production, etc.)
- `roles`
  - `infra` — role for infrastructure (VPC, database, etc.)
  - `service` — role for the Serverless service
- `scripts` — scripts that help with deployment
If you are deploying from your local environment, Docker is required for running the playbook; alternatively, Ansible and all the dependencies defined in the Dockerfile must be installed in your environment.
When using Jenkins for deployment, the easiest way is to set up Jenkins on an EC2 instance running in your AWS account. Here are quick-and-dirty instructions for doing that: Jenkins setup instructions
In addition to the suggested plugins, also install the following:
- Pipeline: AWS Steps
- Version Number Plug-In
- Create an artifact from the Serverless service that contains the parameterized CloudFormation Jinja2 template and the functions package
- Create artifacts from other services, e.g. the website frontend
- Upload the artifacts to S3 (or Artifactory or another file system)
- Download the artifact with Ansible
- Extract and create environment-specific CloudFormation templates
- Deploy the stack to AWS
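The artifact packaging step can be sketched with placeholder files. The real `example-service.zip` (functions package) and `example-service.json.j2` (CloudFormation Jinja2 template) are produced by the Serverless build; the names and the `.ansible` directory here are taken from the example service's build stage.

```shell
#!/usr/bin/env bash
set -e

# Placeholder files standing in for the real build output in .ansible/
mkdir -p .ansible
touch .ansible/example-service.zip .ansible/example-service.json.j2

# Package them the same way the build flow does, then inspect the artifact
tar -zcf example-service-1.20170206.1-799dcd4.tar.gz -C .ansible \
  example-service.zip example-service.json.j2
tar -tzf example-service-1.20170206.1-799dcd4.tar.gz
```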
Create an S3 bucket for artifacts using the AWS Console or aws-cli.

```shell
aws s3api create-bucket --bucket my-artifacts-bucket --region us-east-1
```
serverless-deployment-example-service has a ready-made setup for serverless-ansible-build-plugin that helps with the artifact creation process.
In the serverless-deployment-example-service `scripts` directory there is `create-artifact.sh`, which builds the service and creates an artifact with the recommended filename.
Executing `create-artifact.sh` with the `-p` param creates a tar package; the `-s` parameter is mandatory and needs to match the service name. The following snippet will create a package named e.g. `example-service-1.20170206.1-799dcd4.tar.gz`, where:

- `example-service` is the service name (set with `-s`)
- `1` is the version of the service (set with `-v`, optional, defaults to 1)
- `20170206` is the build date
- `1` is the number of the build on that build date (set with `-b`, optional, defaults to 1)
- `799dcd4` is the short hash from git

```shell
./scripts/create-artifact.sh -p -s example-service
```

The `1.20170206.1-799dcd4` part is the `service_version` that is used later in the Ansible vars.
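As a quick sanity check, the artifact filename can be pulled apart with plain bash parameter expansion (the filename below is the example from above):

```shell
#!/usr/bin/env bash
artifact="example-service-1.20170206.1-799dcd4.tar.gz"

base="${artifact%.tar.gz}"                   # example-service-1.20170206.1-799dcd4
service_version="${base#example-service-}"   # 1.20170206.1-799dcd4
git_hash="${service_version##*-}"            # 799dcd4

echo "service_version: ${service_version}"
echo "git hash: ${git_hash}"
```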
Then copy the artifact to your artifacts S3 bucket.
The following pipeline script will
- check out the service from the repository
- create a version tag
- build the service
- upload the artifact to the S3 bucket
```groovy
MAJOR_VERSION = "1"
ARTIFACTS_BUCKET = "my-artifacts-bucket"
SERVICE = "example-service"

node {
  stage('Checkout service repo') {
    dir('project') {
      git 'https://github.com/SC5/serverless-deployment-example-service.git'
    }
  }
  stage('Set version') {
    dir('project') {
      gitCommitShort = sh(returnStdout: true, script: 'git rev-parse --short HEAD').trim()
      // The Version Number plugin only expands its own tokens, so the git hash
      // is appended in Groovy rather than inside versionNumberString.
      version = VersionNumber(
        projectStartDate: '2016-10-01',
        skipFailedBuilds: true,
        versionNumberString: '${BUILD_DATE_FORMATTED, "yyyyMMdd"}.${BUILDS_TODAY, X}-',
        versionPrefix: "${MAJOR_VERSION}.")
      currentVersion = "${version}${gitCommitShort}"
    }
  }
  stage('Build') {
    dir('project') {
      withEnv(["VERSION=${currentVersion}"]) {
        sh "./scripts/create-artifact.sh -s ${SERVICE}"
      }
    }
  }
  stage('Copy Artifact to S3') {
    dir('artifacts') {
      // "latest" is a pointer file containing the most recently uploaded version
      sh "echo ${currentVersion} > latest"
      sh "tar -zcf ${SERVICE}-${currentVersion}.tar.gz -C ../project/.ansible ${SERVICE}.zip ${SERVICE}.json.j2"
      withAWS {
        s3Upload(file: "${SERVICE}-${currentVersion}.tar.gz", bucket: "${ARTIFACTS_BUCKET}", path: "${SERVICE}/${SERVICE}-${currentVersion}.tar.gz")
        s3Upload(file: "latest", bucket: "${ARTIFACTS_BUCKET}", path: "${SERVICE}/latest")
      }
    }
  }
}
```
Define the artifacts S3 bucket in the `artifacts_bucket` variable in `group_vars/aws/vars.yml`, so that Ansible knows where to download artifacts from.
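For example, with the bucket created earlier (bucket name assumed from the aws-cli example above):

```yaml
# group_vars/aws/vars.yml
artifacts_bucket: my-artifacts-bucket
```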
First define the service name in `roles/service/tasks/main.yml`:

```yaml
- set_fact:
    service_name: example-service
```
When deploying from a local environment, AWS secrets need to be passed to the deployment container. For that, e.g. a `.deploy.sh` script in the project root with the following contents might ease the deployment flow:

```shell
#!/usr/bin/env bash
export AWS_ACCESS_KEY=my-access-key
export AWS_SECRET_KEY=my-secret-key

./scripts/deploy-local.sh
```
- Build the Docker image with `./scripts/build-docker.sh`
- Run `./.deploy.sh`
To deploy a certain version of the Serverless service, use the environment variable `SERVICE_VERSION`. If the version is not defined, it falls back to the latest version.
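The fallback itself is handled by the playbook; a minimal, runnable sketch of the idea, faking the downloaded `latest` pointer file (the one the pipeline uploads next to the artifacts) with a local file:

```shell
#!/usr/bin/env bash
# Fake the s3://<bucket>/<service>/latest pointer file with a local file
echo "1.20170206.1-799dcd4" > latest

# Use SERVICE_VERSION if set, otherwise fall back to the latest version
if [ -z "${SERVICE_VERSION}" ]; then
  SERVICE_VERSION="$(cat latest)"
fi
echo "deploying version: ${SERVICE_VERSION}"
```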
When using Jenkins on AWS EC2, the instance role needs permissions to deploy CloudFormation stacks and to create S3 buckets, IAM roles, Lambda functions, and the other services that are used in the Serverless service.
The following pipeline script will
- check out this repository
- build the Docker image for deployment
- deploy the stacks to AWS
```groovy
node {
  stage('Checkout repository') {
    git 'https://github.com/SC5/serverless-deployment-ansible.git'
  }
  stage('Build Docker image') {
    sh "./scripts/build-docker.sh"
  }
  stage('Deploy') {
    sh './scripts/deploy-development.sh'
  }
}
```
The deploy script `./scripts/deploy-development.sh` in the deployment repository runs Ansible in a Docker container.

```shell
#!/usr/bin/env bash
docker run -v $(pwd):/src -w /src serverless-deployment /bin/bash -c "\
  export SERVICE_VERSION=$SERVICE_VERSION && \
  ansible-playbook -vvvv --inventory-file inventories/development/inventory site.yml"

[[ $? -eq 0 ]] || { echo "Deployment failed!" ; exit 1; }
```
To deploy a certain version of the Serverless service, use the environment variable `SERVICE_VERSION`, which can be defined in a parameterized Jenkins job. If the version is not defined, it falls back to the latest version.