https://github.com/spicysomtam/jenkins-deploy-eks-via-terraform
This Jenkinsfile is used to create an EKS cluster. How do I automate adding worker nodes to the cluster in the same Jenkins job?
stage('Terraform init and Plan eks') {
    if (params.action == 'create') {
        dir("infra/terraform/eks") {
            script {
                withCredentials([[$class: 'AmazonWebServicesCredentialsBinding',
                                  credentialsId: awsCredentialsId,
                                  accessKeyVariable: 'AWS_ACCESS_KEY_ID',
                                  secretKeyVariable: 'AWS_SECRET_ACCESS_KEY']]) {
                    sh """
                        export TF_CLI_ARGS_init='-backend-config="bucket=${awsS3}"'
                        terraform init -reconfigure
                        terraform workspace new ${plan} || true
                        terraform workspace select ${plan}
                        terraform plan -out=${plan} -var-file=${WORKSPACE}/environments/tf.tfvars
                        terraform apply ${plan}
                        terraform output config_map_aws_auth > ./config_map_aws_auth.yaml
                    """
                }
            }
        }
    }
}
stage('add worker nodes') {
    def k8sImage = docker.image('pahud/eks-kubectl-docker')
    k8sImage.inside('-u 0:0') {
        withCredentials([[$class: 'AmazonWebServicesCredentialsBinding',
                          credentialsId: eksCredentialsId,
                          accessKeyVariable: 'AWS_ACCESS_KEY_ID',
                          secretKeyVariable: 'AWS_SECRET_ACCESS_KEY']]) {
            sh 'AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID} AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY} AWS_DEFAULT_REGION=us-west-2 CLUSTER_NAME=my-eksctl'
            sh 'aws eks --region us-west-2 update-kubeconfig --name my-eksctl'
            sh 'kubectl apply -f config_map_aws_auth.yaml'
        }
    }
}
How do I store the file produced by
terraform output config_map_aws_auth > ./config_map_aws_auth.yaml
so that in the next stage I can run the kubectl command like so:
sh 'kubectl apply -f config_map_aws_auth.yaml'
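One way to hand a generated file from one stage to the next is Jenkins' stash/unstash steps, which work even when the stages run on different agents or inside different containers. A minimal sketch (the stash name 'aws-auth' is illustrative, the stage and image names are taken from the pipeline above):

```groovy
// In the Terraform stage, after config_map_aws_auth.yaml has been written:
dir("infra/terraform/eks") {
    // Save the generated manifest so a later stage can retrieve it
    stash name: 'aws-auth', includes: 'config_map_aws_auth.yaml'
}

// In the 'add worker nodes' stage, before running kubectl:
k8sImage.inside('-u 0:0') {
    unstash 'aws-auth' // restores config_map_aws_auth.yaml into the current workspace
    sh 'kubectl apply -f config_map_aws_auth.yaml'
}
```

If both stages run on the same agent in the same workspace, simply referencing the file by its path (infra/terraform/eks/config_map_aws_auth.yaml) can also be enough.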
I have created a Jenkins pipeline for an application. I have the following stages in my declarative pipeline:
1. Checkout
2. NuGet restore
3. Sonar scan start
4. dotnet build
5. Sonar scan end
6. Build Docker image
7. Run container
8. Deploy on Google Kubernetes cluster
If I don't include the 8th step my pipeline works fine, but if I include it the pipeline works only the first time. For subsequent runs I get the below error in the first stage.
I have created a Windows machine on Azure and I am running Jenkins on that machine.
Jenkins file
stages {
    stage('Code Checkout') {
        steps {
            echo 'Cloning project...'
            deleteDir()
            checkout changelog: false, poll: false, scm: [$class: 'GitSCM', branches: [[name: '*/development']], extensions: [], userRemoteConfigs: [[url: 'https://github.com/shailu0287/JenkinsTest.git']]]
            echo 'Project cloned...'
        }
    }
    stage('Nuget Restore') {
        steps {
            echo "nuget restore"
            bat 'dotnet restore \"WebApplication4.sln\"'
        }
    }
    stage('Sonar Scan Start') {
        steps {
            withSonarQubeEnv('SonarQube_Home') {
                echo "Sonar scan start"
                echo "${scannerHome}"
                bat "${scannerHome}\\SonarScanner.MSBuild.exe begin /k:\"Pan33r\" /d:sonar.login=\"squ_e2ecec8e21976c04764cc4940d3d3ddbec9e2898\""
            }
        }
    }
    stage('Build Solution') {
        steps {
            echo "Build Solution"
            bat "\"${tool 'MSBUILD_Home'}\" WebApplication4.sln /p:Configuration=Release /p:Platform=\"Any CPU\" /p:ProductVersion=1.0.0.${env.BUILD_NUMBER}"
        }
    }
    stage('Sonar Scan End') {
        steps {
            withSonarQubeEnv('SonarQube_Home') {
                echo "${scannerHome}"
                echo "sonar scan end"
                bat "${scannerHome}\\SonarScanner.MSBuild.exe end /d:sonar.login=\"squ_e2ecec8e21976c04764cc4940d3d3ddbec9e2898\""
            }
        }
    }
    stage('Building docker image') {
        steps {
            script {
                echo "Building docker image"
                dockerImage = docker.build registry + ":$BUILD_NUMBER"
            }
        }
    }
    stage('Containers') {
        parallel {
            stage("Run PreContainer Checks") {
                environment {
                    containerID = "${bat(script: 'docker ps -a -q -f name="c-Shailendra-master"', returnStdout: true).trim().readLines().drop(1).join("")}"
                }
                steps {
                    script {
                        echo "Run PreContainer Checks"
                        echo env.containerName
                        echo "containerID is "
                        echo env.containerID
                        if (env.containerID != null) {
                            echo "Stop container and remove from stopped container list too"
                            bat "docker stop ${env.containerID} && docker rm ${env.containerID}"
                        }
                    }
                }
            }
            stage("Publish Docker Image to DockerHub") {
                steps {
                    script {
                        echo "Pushing docker image to docker hub"
                        docker.withRegistry('', registryCredential) {
                            dockerImage.push("$BUILD_NUMBER")
                            dockerImage.push('latest')
                        }
                    }
                }
            }
        }
    }
    stage('Docker Deployment') {
        steps {
            echo "${registry}:${BUILD_NUMBER}"
            echo "Docker Deployment by using docker hub's image"
            bat "docker run -d -p 7200:80 --name c-${containerName}-master ${registry}:${BUILD_NUMBER}"
        }
    }
    stage('Deploy to GKE') {
        steps {
            echo "Deployment started ..."
            step([$class: 'KubernetesEngineBuilder', projectId: env.PROJECT_ID, clusterName: env.CLUSTER_NAME, location: env.LOCATION, manifestPattern: 'Kubernetes.yml', credentialsId: env.CREDENTIALS_ID, verifyDeployments: true])
        }
    }
}
}
If I remove the last step, all my builds work fine. If I include it, only the first build works and then I have to restart the machine. I am not sure what the issue is with the YML file.
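A common cause of "works only on the first run" when docker run uses a fixed --name is that the previous container still exists or still holds the host port, so making the cleanup unconditional before the run is one way to rule that out. A sketch under that assumption (container name taken from the pipeline above; not a confirmed fix for this setup):

```groovy
stage('Docker Deployment') {
    steps {
        // Force-remove any previous container with this name;
        // ignore the error when no such container exists
        bat "docker rm -f c-${containerName}-master || exit /b 0"
        bat "docker run -d -p 7200:80 --name c-${containerName}-master ${registry}:${BUILD_NUMBER}"
    }
}
```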
I am trying to run kubectl and helm commands in the deploy step of the Jenkinsfile, but I am getting a NullPointerException. Below is my code:
stage('Create and Deploy to k8s Dev Environment') {
    //agent {label 'docker-maven-slave'}
    options {
        skipDefaultCheckout()
    }
    steps {
        withCredentials([string(credentialsId: K8S_DEV_SECRET_ID)]) {
            command """
                kubectl apply --server=https://10.0.0.0:443 --insecure-skip-tls-verify=false --namespace: "dev-ns" -f -
                helm template -f "portal-chart/deploy/values-dev.yaml" portal-chart
            """
        }
    }
}
Below are the logs:
[Pipeline] End of Pipeline
hudson.remoting.ProxyException: java.lang.NullPointerException
Caused: hudson.remoting.ProxyException: com.xxx.jenkins.pipeline.library.utils.dispatch.ShellCommandException: Exception calling shell command, 'kubectl apply ... ': null
at command.call(command.groovy:51)
I was able to resolve this. Below is the code:
stage('Create and Deploy to k8s Dev Environment') {
    //agent {label 'docker-maven-slave'}
    options {
        skipDefaultCheckout()
    }
    steps {
        command """
            kubectl apply --server=https://10.0.0.0:443 --insecure-skip-tls-verify=false --namespace: "dev-ns" -f -
            helm template -f "portal-chart/deploy/values-dev.yaml" portal-chart
        """
    }
}
I had to create a Jenkins job to automate certain tasks: updating the public site, changing the public version to the latest public release, updating software on the public site, and restarting the server. These involve operations such as copying files to a tmp folder, logging in to an on-prem server, going to the folder, and unzipping the file.
I have created the Jenkinsfile as follows:
pipeline {
    options {
        skipDefaultCheckout()
        timestamps()
    }
    parameters {
        string(name: 'filename', defaultValue: 'abc', description: 'Enter the file name that needs to be copied')
        string(name: 'database', defaultValue: 'abc', description: 'Enter the database that needs to be created')
        choice(name: 'Run', choices: '', description: 'Data migration')
    }
    agent {
        node { label 'aws && build && linux && ubuntu' }
    }
    triggers {
        pollSCM('H/5 * * * *')
    }
    stages {
        stage('Clean & Clone') {
            steps {
                cleanWs()
                checkout scm
            }
        }
        stage('Updating the public site') {
            steps {
                sh "scp ./$(unknown).zip <user>@<server name>:/tmp"
                sh "ssh <user>@<server name>"
                sh "cp ./tmp/$(unknown).zip ./projects/xyz/xyz-site/"
                sh "cd ./projects/xyz/xyz-site/"
                sh "unzip ./$(unknown).zip"
                sh "cp -R ./$(unknown)/* ./"
            }
        }
        stage('Changing public version to latest public release') {
            steps {
                sh "scp ./$(unknown).sql.gz <user>@<server name>:/tmp"
                sh "ssh <user>@<server name>"
                sh "mysql -u root -p<PASSWORD>"
                sh "show databases;"
                sh "create database ${params.database};"
                sh "GRANT ALL PRIVILEGES ON <newdb>.* TO 'ixxyz'@'localhost' WITH GRANT OPTION;"
                sh "exit;"
                sh "zcat tmp/$(unknown).sql.gz | mysql -u root -p<PASSWORD> <newdb>"
                sh 'db.default.url="jdbc:mysql://localhost:3306/<newdb>"'
                sh 'ps aux|grep monitor.sh|awk \'{print "kill "$2}\' |bash'
            }
        }
        stage('Updating Software on public site') {
            steps {
                sh "scp <user>@<server>:/tmp/abc<version>_empty_h2.zip"
                sh "ssh <user>@<server name>"
                sh "su <user>"
                sh "mv tmp/<version>_empty_h2.zip ./xyz/projects/xyz"
                sh "cd xyz/projects/xyz"
                sh "cp latest/conf/local.conf <version>_empty_h2/conf/"
            }
        }
        stage('Restarting Server') {
            steps {
                sh "rm latest/RUNNING_PID"
                sh "bash reload.sh"
                sh "nohup bash monitor.sh &"
            }
        }
    }
}
Is there a way I can dynamically obtain the zip filename in the root folder? I used $(unknown).zip, but it doesn't seem to work.
Also, is there a better way to perform these operations using Jenkins? Any help is much appreciated.
You could write all the steps for each stage in one shell script and execute that script within the stage.
Regarding filename.zip: either take it as a parameter and pass the value to your stages, or use the find command (as a shell command or in a shell script) to locate .zip files in the current directory: find <dir> -iname \*.zip, e.g. find . -iname \*.zip.
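For instance, a minimal sketch of capturing the archive name with find (the directory and the file name release-1.2.3.zip are made up for illustration):

```shell
# Create a scratch directory standing in for the Jenkins workspace
workdir=$(mktemp -d)
touch "$workdir/release-1.2.3.zip"

# Grab the first .zip in the directory (non-recursive);
# -iname matches the extension case-insensitively
zipfile=$(find "$workdir" -maxdepth 1 -iname '*.zip' | head -n 1)
echo "Found: $(basename "$zipfile")"
# → Found: release-1.2.3.zip
```

Inside a pipeline, the same command can be run with sh(script: "...", returnStdout: true).trim() to store the name in a Groovy variable.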
Example:
pipeline {
    options {
        skipDefaultCheckout()
        timestamps()
    }
    parameters {
        string(name: 'filename', defaultValue: 'abc', description: 'Enter the file name that needs to be copied')
        choice(name: 'Run', choices: '', description: 'Data migration')
    }
    stages {
        stage('Updating the public site') {
            steps {
                sh "scp ./${params.filename}.zip <user>@<server name>:/tmp"
                ...
            }
        }
    }
}
To execute a script at a certain location, as in your question, you can use dir with the path where your scripts are placed,
or give the path directly: sh label: 'execute script', script: "C:\\Data\\build.sh"
stage('Your stage name') {
    steps {
        script {
            // Give the path where your scripts are placed
            dir("C:\\Data") {
                sh label: 'execute script', script: "build.sh <Your Arguments>"
                ...
            }
        }
    }
}
Credentials are configured in Jenkins but there's an error suggesting they are not.
I've followed documentation provided by Jenkins website.
pipeline {
    agent {
        node {
            label 'master'
        }
    }
    environment {
        AWS_ACCESS_KEY_ID = credentials('jenkins-aws-secret-key-id')
        AWS_SECRET_ACCESS_KEY = credentials('jenkins-aws-secret-access-key')
    }
    stages {
        stage('checkout') {
            steps {
                git(url: 'git@bitbucket.org:user/bitbucketdemo.git', branch: 'master', credentialsId: 'jenkins')
                echo 'hello'
            }
        }
        stage('packer') {
            steps {
                echo $AWS_ACCESS_KEY_ID
            }
        }
    }
}
It should print out the value of the environment variable
I used the Cloudbees AWS Credentials plugin. Once installed, I was able to add my AWS credentials (additional selection in Credentials pull-down menu)
Then use the following snippet in my Jenkinsfile
withCredentials([[
    $class: 'AmazonWebServicesCredentialsBinding',
    accessKeyVariable: 'AWS_ACCESS_KEY_ID',
    credentialsId: 'AWS',
    secretKeyVariable: 'AWS_SECRET_ACCESS_KEY'
]]) {
    sh 'packer build -var aws_access_key=${AWS_ACCESS_KEY_ID} -var aws_secret_key=${AWS_SECRET_ACCESS_KEY} example4.json'
}
I'm trying to set up a Jenkins pipeline using gcloud, but I'm getting the following error:
gcloud auth activate-service-account --key-file ./service-account-creds.json
WARNING: Could not setup log file in /.config/gcloud/logs, (Error: Could not create directory [/.config/gcloud/logs/2019.02.07]: Permission denied.
the code:
stages {
    stage('build') {
        steps {
            withCredentials([file(credentialsId: 'google-container-registry', variable: 'GOOGLE_AUTH')]) {
                script {
                    docker.image('google/cloud-sdk:latest').inside {
                        sh "echo ${GOOGLE_AUTH} > gcp-key.json"
                        sh 'gcloud auth activate-service-account --key-file ./service-account-creds.json'
                    }
                }
            }
        }
    }
}
Jenkins is running in a container using the jenkins/jenkins image.
Try this:
withCredentials([file(credentialsId: 'google-container-registry', variable: 'GOOGLE_AUTH')]) {
    script {
        docker.image('google/cloud-sdk:latest').inside {
            sh "echo ${GOOGLE_AUTH} > gcp-key.json"
            sh "gcloud auth activate-service-account --key-file=${GOOGLE_AUTH}"
        }
    }
}
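The "Permission denied" on /.config/gcloud/logs typically means HOME is unset (or /) inside the container, so gcloud cannot create its config directory. If fixing the key-file path alone is not enough, pointing HOME at a writable location such as the workspace is a common workaround (a sketch, not verified against this particular setup):

```groovy
// Give gcloud a writable HOME so it can create ~/.config/gcloud
docker.image('google/cloud-sdk:latest').inside("-e HOME=${env.WORKSPACE}") {
    sh "gcloud auth activate-service-account --key-file=${GOOGLE_AUTH}"
}
```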