I want to automate Terraform with a Jenkins pipeline, but Terraform is installed on a different VM than Jenkins.
I know there is the Terraform plugin, but it seems that Terraform has to be installed on the same VM as Jenkins (or under /var/lib/jenkins/workspace).
Is there any way to get this done?
Please share your suggestions.
Generally, it's a good idea to keep your Jenkins machine as clean as possible, so you should avoid installing extra packages like Terraform on it. A better approach to this problem is to create a Dockerfile with your Terraform binary and all the plugins you need already built in; then all you need to do in your Jenkins pipeline is build and run your Terraform Docker image.
This is an example of such a Dockerfile:
FROM hashicorp/terraform:0.11.7
RUN apk add --no-cache bash python3 && \
pip3 install --no-cache-dir awscli
RUN mkdir -p /plugins
# AWS provider
ENV AWS_VERSION=1.16.0
ENV AWS_SHA256SUM=1150a4095f18d02258d1d52e176b0d291274dee3b3f5511a9bc265a0ef65a948
RUN wget https://releases.hashicorp.com/terraform-provider-aws/${AWS_VERSION}/terraform-provider-aws_${AWS_VERSION}_linux_amd64.zip && \
echo "${AWS_SHA256SUM} terraform-provider-aws_${AWS_VERSION}_linux_amd64.zip" | sha256sum -c - && \
unzip *.zip && \
rm -f *.zip && \
mv -v terraform-provider-aws_* /plugins/
COPY . /app
WORKDIR /app
ENTRYPOINT []
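In the pipeline itself, the shell steps then boil down to building that image and running Terraform inside it. A minimal sketch (the image tag and the mounted directory are illustrative, not part of the original answer):
# Build the image from the Dockerfile above
docker build -t my-terraform .
# Run Terraform inside the container against the checked-out workspace
docker run --rm -v "$PWD":/app -w /app my-terraform terraform init -plugin-dir=/plugins
docker run --rm -v "$PWD":/app -w /app my-terraform terraform plan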
The Terraform documentation also contains a section on best practices for running Terraform in CI:
https://www.terraform.io/guides/running-terraform-in-automation.html
Yes, the fastest way to get this done is to use a master/slave setup for your Jenkins. What you need to do is add a slave on the machine on which your Terraform is running.
I have created a Global Shared Library, awesome-jenkins-utils, with which you can use different versions of Terraform simultaneously in the same pipeline. Additionally, you can easily map build parameters to Terraform variables.
I want to deploy a Lambda function written on a Windows machine to AWS Lambda. Using Upload as Zip, it takes all the node_modules and the package file.
But I get an error:
errorMessage": "/var/task/node_modules/ibm_db/build/Release/odbc_bindings.node: invalid ELF header",
How can I install a Linux-appropriate package for the DB2 driver?
You can use Docker to run a Linux container with a shared volume between the host and the container, and build inside the container.
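A minimal sketch of that approach (the node:6 image and the /var/task path are assumptions; use whatever matches your Lambda runtime):
# Rebuild the dependencies inside a Linux container, sharing the project directory with the host
docker run --rm -v "$PWD":/var/task -w /var/task node:6 npm install --production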
Installing Docker on Windows can be painful at times; I had the same situation.
Install Ubuntu (or any other distro) on Windows from the Windows app store, install all the dependencies there, and then use the AWS CLI to zip all the modules and upload to Lambda.
A sample script can be like this
A sample script can be like this
# Remove zip file if it already exists
rm index.zip
# Creating zip file
zip -r index.zip *
# Update the Lambda function; the present directory name should be the same as the Lambda function name in AWS
lambdaName=${PWD##*/}
aws lambda update-function-code --function-name $lambdaName --zip-file fileb://index.zip
# Publish a new version and capture its number
version=$(aws lambda publish-version --function-name $lambdaName --description "updated via cli" --query Version --output text)
# Map alias to latest version
aws lambda update-alias --function-name $lambdaName --function-version $version --name SANDBOX
# Create new alias
# aws lambda create-alias --function-name loyalty-gift-card-link-sl --function-version 2 --name SANDBOX2
I had a similar problem, so I launched a t2.micro AWS Linux instance, installed Docker on it, and created the Lambda package there.
Here are the steps, in case they help you.
Launch a new EC2 instance of Amazon Linux from amzn-ami-hvm-2018.03.0.20181129-x86_64-gp2 (ami-01e24be29428c15b2)
install docker
sudo su
sudo yum update -y
sudo yum install -y docker
sudo service docker start
sudo usermod -a -G docker ec2-user
Log out and log back in to pick up the added group
cd /home/ec2-user/
mkdir <LambdaProject>
Check out the code from the repo
git clone <repo>
Build the Docker image, which installs Node.js 6.10 along with the dependencies
cd /home/ec2-user/
docker build --tag amazonlinux:nodejs .
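The Dockerfile for that build isn't shown in the original steps; a rough sketch of what it might contain (the Node version and install method are illustrative assumptions, not the original file):
FROM amazonlinux
# Build tools needed to compile native modules such as sharp
RUN yum -y install gcc-c++ make findutils tar gzip curl
# Install Node.js 6.10 from the official binary tarball
RUN curl -sL https://nodejs.org/dist/v6.10.3/node-v6.10.3-linux-x64.tar.gz | tar -xzf - -C /usr/local --strip-components=1
# The build command in the next step sources ~/.bashrc, so make sure it exists
RUN touch ~/.bashrc
WORKDIR /build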
Install the sharp and querystring module dependencies (or whatever you need), and compile the ‘Origin-Response’ function
docker run --rm --volume ${PWD}/lambda/origin-response-function:/build amazonlinux:nodejs /bin/bash -c "source ~/.bashrc; npm init -f -y; npm install sharp --save; npm install querystring --save; npm install url --save; npm install path --save; npm install --only=prod"
Package the ‘Origin-Response’ function.
mkdir -p dist && cd lambda/origin-response-function && zip -FS -q -r ../../dist/origin-response-function.zip * && cd ../..
Note: the package is created as dist/origin-response-function.zip
Create an S3 bucket in the us-east-1 region to hold the deployment files and upload the zip files created in the above steps. NOTE: You can add triggers only for functions in the US East (N. Virginia) Region.
Bucket:
Copy the Lambda package to the S3 bucket
aws s3 cp dist/origin-response-function.zip s3://<bucket_name>/
I'm following this part of the Docker tutorial (on a Mac): https://docs.docker.com/mac/step_four/. I'm getting an error when I try to run the docker-whalesay image because it can't find fortunes.
I started off in the Dockerfile using /user/games/fortunes. Then I changed it to just fortunes. Neither works.
How do I specify in the Dockerfile to use the current folder (mydockerbuild)?
The Dockerfile in that example does not rely on files that are present on your computer; basically, the only steps needed are:
Create an empty directory (you named it mydockerbuild)
mkdir mydockerbuild
Change to that directory
cd mydockerbuild
Create a Dockerfile
Edit the Dockerfile to look like this:
FROM docker/whalesay:latest
RUN apt-get -y update && apt-get install -y fortunes
CMD /usr/games/fortune -a | cowsay
Build the Dockerfile, and name the built image "docker-whale"
docker build -t docker-whale .
Run the image you just built
docker run --rm docker-whale
The /usr/games/fortune path in the Dockerfile refers to a path inside the container. In this case, /usr/games/fortune is created by the fortunes package, which is installed by apt-get install -y fortunes.
I'm trying to build a Docker image for a Ruby project. The problem is the project has some gem dependencies that need to build native extensions. My understanding is that I have a couple of choices:
Start with a base image that already has build tools installed.
Use a base image with no build tools, install build tools as a step in the Dockerfile before running bundle install.
Precompile the native extensions on the host, vendorize the gem, and simply copy the resulting bundle into the image.
1 & 2 seem to require that the resulting image contains the build tools needed to build the native extensions. I'm trying to avoid that scenario for security reasons. 3 is cumbersome, but doable, and would accomplish what I want.
Are there any options I'm missing or am I misunderstanding something?
I use option 3 all the time, the goal being to end up with an image which has only what I need to run (not to compile).
For example, here I build and install Apache first, before using the resulting image as a base image for my (patched and recompiled) Apache setup.
Build:
if [ "$(docker images -q apache.deb 2> /dev/null)" = "" ]; then
docker build -t apache.deb -f Dockerfile.build . || exit 1
fi
The Dockerfile.build declares a volume which contains the resulting recompiled Apache (as a deb file):
RUN checkinstall --pkgname=apache2-4 --pkgversion="2.4.10" --backup=no --deldoc=yes --fstrans=no --default
RUN mkdir $HOME/deb && mv *.deb $HOME/deb
VOLUME /root/deb
Installation:
if [ "$(docker images -q apache.inst 2> /dev/null)" = "" ]; then
docker inspect apache.deb.cont > /dev/null 2>&1 || docker run -d -t --name=apache.deb.cont apache.deb
docker inspect apache.inst.cont > /dev/null 2>&1 || docker run -u root -it --name=apache.inst.cont --volumes-from apache.deb.cont --entrypoint "/bin/sh" openldap -c "dpkg -i /root/deb/apache2-4_2.4.10-1_amd64.deb"
docker commit apache.inst.cont apache.inst
docker rm apache.deb.cont apache.inst.cont
fi
Here I install the deb using another image (in my case 'openldap') as a base image:
docker run -u root -it --name=apache.inst.cont --volumes-from apache.deb.cont --entrypoint "/bin/sh" openldap -c "dpkg -i /root/deb/apache2-4_2.4.10-1_amd64.deb"
docker commit apache.inst.cont apache.inst
Finally I have a regular Dockerfile starting from the image I just committed.
FROM apache.inst:latest
psmith points out in the comments the post Building Minimal Docker Image for Rails App by Jari Kolehmainen.
For a Ruby application, you can easily remove the parts needed only for the build with:
bundle install --without development test && \
apk del build-dependencies
Since ruby is needed to run the application anyway, that works great in this case.
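A minimal sketch of that pattern on an Alpine-based Ruby image (the base image tag, package names, and CMD are illustrative, not taken from the linked post):
FROM ruby:2.5-alpine
WORKDIR /app
COPY Gemfile Gemfile.lock ./
# Install the build toolchain into a named virtual package, build the gems, then remove the toolchain
RUN apk add --no-cache --virtual build-dependencies build-base && \
    bundle install --without development test && \
    apk del build-dependencies
COPY . .
CMD ["bundle", "exec", "ruby", "app.rb"]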
In my case, I still need a separate image for building, as gcc is not needed to run Apache (and it is quite large, comes with multiple dependencies, some of them needed by Apache at runtime, some not...).
Yesterday I asked about how to make a Docker image with a Dockerfile.
This time I want to add a question.
I want to make an Ubuntu 14.04 Docker image in which PostgreSQL 9.3.10 and Java JDK 6 are installed, a file is copied (to a specific location), and a user is created.
Can I combine several Dockerfiles as needed into one image (the Dockerfiles for PostgreSQL and Java, the file copy, and the user creation, all in one Dockerfile)?
For example, I made one Dockerfile, "ubuntu", which contains the following commands, starting from the top:
# Create dockerfile
# get OS ubuntu to images
FROM ubuntu: 14:04
# !!then add the commands from the Dockerfiles at the following links, one block per Dockerfile (i.e. inline the commands from the Dockerfile at each link)
# command on dockerfile postgresql-9.3
https://github.com/docker-library/postgres/blob/ed23320582f4ec5b0e5e35c99d98966dacbc6ed8/9.3/Dockerfile
# command on dockerfile java
https://github.com/docker-library/java/blob/master/openjdk-6-jdk/Dockerfile
# create a user on images ubuntu
RUN adduser myuser
# copy file/directory on images ubuntu
COPY /home/myuser/test /home/userimagedockerubuntu/test
# ?
CMD ["ubuntu:14.04"]
Please help me
No, you cannot combine multiple Dockerfiles.
The best practice is to:
Start from an image that already includes what you need, like this postgresql image, which is already based on Ubuntu.
That means that if your Dockerfile starts with:
FROM orchardup/postgresql
You would be building an image which already contains ubuntu and postgresql.
COPY or RUN what you need in your Dockerfile, for example for OpenJDK 6:
RUN \
apt-get update && \
apt-get install -y openjdk-6-jdk && \
rm -rf /var/lib/apt/lists/*
ENV JAVA_HOME /usr/lib/jvm/java-6-openjdk-amd64
Finally, your default command should run the service you want:
# Set the default command to run when starting the container
CMD ["/usr/lib/postgresql/9.3/bin/postgres", "-D", "/var/lib/postgresql/9.3/main", "-c", "config_file=/etc/postgresql/9.3/main/postgresql.conf"]
But since the Dockerfile of orchardup/postgresql already contains a CMD, you don't even have to specify one: you will inherit from the CMD defined in your base image.
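Putting those pieces together, a combined Dockerfile could look like the sketch below (the user name and target path are taken from the question; the adduser flags and the relative COPY source are assumptions so that the build runs non-interactively from a local build context):
FROM orchardup/postgresql
# Install OpenJDK 6
RUN apt-get update && \
    apt-get install -y openjdk-6-jdk && \
    rm -rf /var/lib/apt/lists/*
ENV JAVA_HOME /usr/lib/jvm/java-6-openjdk-amd64
# Create a user on the image (non-interactively)
RUN adduser --disabled-password --gecos "" myuser
# COPY sources must be relative to the build context, not absolute host paths
COPY test /home/userimagedockerubuntu/test
# No CMD needed: the postgres CMD is inherited from the base image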
I think nesting multiple Dockerfiles is not possible due to the layer system. You may however outsource tasks into shell scripts and run those in your Dockerfile.
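For instance, the shell-script approach could look like this in a Dockerfile (the script name is illustrative):
# setup.sh lives next to the Dockerfile and contains your install/copy/user-creation steps
COPY setup.sh /tmp/setup.sh
RUN chmod +x /tmp/setup.sh && /tmp/setup.sh && rm /tmp/setup.sh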
In your Dockerfile please fix the base image:
FROM ubuntu:14.04
Further, your CMD is invalid. You may want to execute a shell with CMD ["bash"] so that you have something to work with.
I would suggest you start with the documentation on Dockerfiles, as you clearly missed it; it contains the answers to your questions, and even to questions you haven't thought to ask yet.
I'm using the official elasticsearch Docker image instead of setting up my own Elasticsearch instance. That works great, up to the point where I wanted to extend it: I wanted to install Marvel into that Elasticsearch instance to get more information.
Now dockerfile/elasticsearch automatically runs Elasticsearch, and setting the command to /bin/bash doesn't work; neither does attaching to the container, trying to access it over SSH, nor installing an SSH daemon with apt-get install -y openssh-server.
In this particular case, I could just go into the container's file system and execute /opt/elasticsearch/bin/plugin -i elasticsearch/marvel/latest and everything worked.
But how could I install an additional service which needs to be installed with apt-get when I can't have a terminal inside the running container?
Simply extend it using a Dockerfile that starts with
FROM dockerfile/elasticsearch
and install Marvel or an SSH server or whatever you need. Then end with the correct command to start your services. You can use supervisor to start multiple services; see Run a service automatically in a docker container for more info on that.
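A minimal sketch of such a Dockerfile (the plugin path mirrors the one from the question and is an assumption about the image layout):
FROM dockerfile/elasticsearch
# Install the Marvel plugin at build time
RUN /opt/elasticsearch/bin/plugin -i elasticsearch/marvel/latest
# Add a CMD here (e.g. supervisord) only if you need to start more than the base image's Elasticsearch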
If you don't mind using docker-compose, what I usually do is add a first section for the base image I plan to reuse, and then use that image as the base in the rest of the services' Dockerfiles, something along the lines of:
---
version: '2'
services:
  base:
    build: ./images/base
  collector:
    build: ./images/collector
Then, in images/collector/Dockerfile, and since my project is called webtrack, I'd type
FROM webtrack_base
...
And now it's done!
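One practical note (an assumption about the workflow, not something compose enforces for you): build the base image first so that webtrack_base exists locally before the dependent services are built:
# Build the base image first
docker-compose build base
# Then build the services that use it as their FROM image
docker-compose build collector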
Update August 2016
Having found very little current information on how to do this with the latest versions of Elasticsearch (2.3.5 for example), Kibana (4.5.3), and the Marvel & Sense plugins, I opted to take the steeper path and write my own image.
Please find the source code (Dockerfile) and README here
FROM java:jre-alpine
MAINTAINER arcseldon <arcseldon@gmail.com>
ENV ES_VERSION=2.3.5 \
KIBANA_VERSION=4.5.3
RUN apk add --quiet --no-progress --no-cache nodejs \
&& adduser -D elasticsearch
USER elasticsearch
WORKDIR /home/elasticsearch
RUN wget -q -O - http://download.elastic.co/elasticsearch/release/org/elasticsearch/distribution/tar/elasticsearch/${ES_VERSION}/elasticsearch-${ES_VERSION}.tar.gz \
| tar -zx \
&& mv elasticsearch-${ES_VERSION} elasticsearch \
&& wget -q -O - http://download.elastic.co/kibana/kibana/kibana-${KIBANA_VERSION}-linux-x64.tar.gz \
| tar -zx \
&& mv kibana-${KIBANA_VERSION}-linux-x64 kibana \
&& rm -f kibana/node/bin/node kibana/node/bin/npm \
&& ln -s $(which node) kibana/node/bin/node \
&& ln -s $(which npm) kibana/node/bin/npm \
&& ./elasticsearch/bin/plugin install license \
&& ./elasticsearch/bin/plugin install marvel-agent \
&& ./kibana/bin/kibana plugin --install elasticsearch/marvel/latest \
&& ./kibana/bin/kibana plugin --install elastic/sense
CMD elasticsearch/bin/elasticsearch --es.logger.level=OFF --network.host=0.0.0.0 & kibana/bin/kibana -Q
EXPOSE 9200 5601
If you just want the pre-built image then please do:
docker pull arcseldon/elasticsearch-kibana-marvel-sense
You can visit the repository on hub.docker.com here
Usage:
docker run -d -p 9200:9200 -p 5601:5601 arcseldon/elasticsearch-kibana-marvel-sense
You can connect to Elasticsearch with http://localhost:9200 and its Kibana front-end with http://localhost:5601.
You can connect to Marvel with http://localhost:5601/app/marvel and Sense with http://localhost:5601/app/sense
Hope this helps others and saves some time!