I'm trying to set up a Jenkins pipeline using gcloud, but I'm getting the following error:
gcloud auth activate-service-account --key-file ./service-account-creds.json
WARNING: Could not setup log file in /.config/gcloud/logs, (Error: Could not create directory [/.config/gcloud/logs/2019.02.07]: Permission denied.
The code:
stages {
    stage('build') {
        steps {
            withCredentials([file(credentialsId: 'google-container-registry', variable: 'GOOGLE_AUTH')]) {
                script {
                    docker.image('google/cloud-sdk:latest').inside {
                        sh "echo ${GOOGLE_AUTH} > gcp-key.json"
                        sh 'gcloud auth activate-service-account --key-file ./service-account-creds.json'
                    }
                }
            }
        }
    }
}
Jenkins is running in a container using the jenkins/jenkins image.
Try this:
withCredentials([file(credentialsId: 'google-container-registry', variable: 'GOOGLE_AUTH')]) {
    script {
        docker.image('google/cloud-sdk:latest').inside {
            sh "echo ${GOOGLE_AUTH} > gcp-key.json"
            sh "gcloud auth activate-service-account --key-file=${GOOGLE_AUTH}"
        }
    }
}
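The "Could not create directory [/.config/gcloud/logs/...]" warning itself comes from gcloud having no writable home directory inside the container. A sketch of one way to handle that, assuming the same file credential, is to point gcloud's config directory (via the CLOUDSDK_CONFIG environment variable) at the workspace, and to let the shell rather than Groovy expand the secret path:

```groovy
// Sketch, assuming the same 'google-container-registry' file credential.
// CLOUDSDK_CONFIG gives gcloud a writable place for its config and logs;
// single quotes keep the secret out of Groovy string interpolation.
withCredentials([file(credentialsId: 'google-container-registry', variable: 'GOOGLE_AUTH')]) {
    docker.image('google/cloud-sdk:latest').inside("-e CLOUDSDK_CONFIG=${env.WORKSPACE}/.gcloud") {
        sh 'gcloud auth activate-service-account --key-file="$GOOGLE_AUTH"'
    }
}
```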
Related
I am trying to run kubectl and helm commands in the deploy step of the Jenkinsfile, but I am getting a NullPointerException. Below is my code:
stage ('Create and Deploy to k8s Dev Environment') {
    //agent {label 'docker-maven-slave'}
    options {
        skipDefaultCheckout()
    }
    steps {
        withCredentials([string(credentialsId: K8S_DEV_SECRET_ID)]) {
            command """
            kubectl apply --server=https://10.0.0.0:443 --insecure-skip-tls-verify=false --namespace: "dev-ns" -f -
            helm template -f "portal-chart/deploy/values-dev.yaml" portal-chart
            """
        }
    }
}
Below are the logs:
[Pipeline] End of Pipeline
hudson.remoting.ProxyException: java.lang.NullPointerException
Caused: hudson.remoting.ProxyException: com.xxx.jenkins.pipeline.library.utils.dispatch.ShellCommandException: Exception calling shell command, 'kubectl apply ... ': null
at command.call(command.groovy:51)
I was able to resolve this. Below is the code:
stage ('Create and Deploy to k8s Dev Environment') {
    //agent {label 'docker-maven-slave'}
    options {
        skipDefaultCheckout()
    }
    steps {
        command """
        kubectl apply --server=https://10.0.0.0:443 --insecure-skip-tls-verify=false --namespace=dev-ns -f -
        helm template -f "portal-chart/deploy/values-dev.yaml" portal-chart
        """
    }
}
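Note that `kubectl apply -f -` reads manifests from standard input, so the rendered chart is normally piped into it rather than applied on a separate line. A sketch of the command body, assuming the same custom `command` step and chart paths from the question:

```groovy
// Sketch: pipe the rendered chart into kubectl, since "-f -" reads from stdin.
// Assumes the custom 'command' step and the paths used in the question.
command """
helm template -f portal-chart/deploy/values-dev.yaml portal-chart \
    | kubectl apply --server=https://10.0.0.0:443 --namespace=dev-ns -f -
"""
```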
I am new to using Jenkins and Docker. Currently I ran into an error where my Jenkinsfile doesn't have permission to access docker.sock. Is there a way to fix this? I'm out of ideas.
Things I've tried:

- sudo usermod -aG docker $USER // usermod not found
- sudo setfacl --modify user:******:rw /var/run/docker.sock // setfacl not found
- chmod 777 /var/run/docker.sock // still receiving this error after reboot
- chown -R jenkins:jenkins /var/run/docker.sock // changing ownership of '/var/run/docker.sock': Operation not permitted
My Jenkinsfile:
def gv

pipeline {
    agent any
    environment {
        CI = 'true'
        VERSION = "$BUILD_NUMBER"
        PROJECT = "foodcore"
        IMAGE = "$PROJECT:$VERSION"
    }
    tools {
        nodejs "node"
        'org.jenkinsci.plugins.docker.commons.tools.DockerTool' 'docker'
    }
    parameters {
        choice(name: 'VERSION', choices: ['1.1.0', '1.2.0', '1.3.0'], description: '')
        booleanParam(name: 'executeTests', defaultValue: true, description: '')
    }
    stages {
        stage("init") {
            steps {
                script {
                    gv = load "script.groovy"
                    CODE_CHANGES = gv.getGitChanges()
                }
            }
        }
        stage("build frontend") {
            steps {
                dir("client") {
                    sh 'npm install'
                    echo 'building client'
                }
            }
        }
        stage("build backend") {
            steps {
                dir("server") {
                    sh 'npm install'
                    echo 'building server...'
                }
            }
        }
        stage("build docker image") {
            steps {
                sh 'docker build -t $IMAGE .'
            }
        }
        // stage("deploy") {
        //     steps {
        //         script {
        //             docker.withRegistry(ECURL, ECRCRED) {
        //                 docker.image(IMAGE).push()
        //             }
        //         }
        //     }
        // }
    }
    // post {
    //     always {
    //         sh "docker rmi $IMAGE || true"
    //     }
    // }
}
docker.sock permissions will be lost if you restart the system or the Docker service.
To make them persistent, set up a cron job that changes the permissions after each reboot:
@reboot chmod 777 /var/run/docker.sock
Whenever you restart Docker, make sure to run the command below:
chmod 777 /var/run/docker.sock
Or you can put it in a cron job that runs every 5 minutes.
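The every-5-minutes variant would look like this in the crontab (a sketch; note that 777 is very permissive, and mode 666 is usually enough for the socket):

```
# Re-apply socket permissions every 5 minutes (edit with: crontab -e)
*/5 * * * * chmod 666 /var/run/docker.sock
```

Adding the Jenkins user to the docker group on the host is the cleaner long-term fix, but the cron approach works when that is not possible.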
https://github.com/spicysomtam/jenkins-deploy-eks-via-terraform
The Jenkinsfile is used to create an EKS cluster. How do I automate adding worker nodes to the cluster in the same Jenkins job?
stage('Terraform init and Plan eks') {
    if (params.action == 'create') {
        dir("infra/terraform/eks") {
            script {
                withCredentials([[$class: 'AmazonWebServicesCredentialsBinding',
                                  credentialsId: awsCredentialsId,
                                  accessKeyVariable: 'AWS_ACCESS_KEY_ID',
                                  secretKeyVariable: 'AWS_SECRET_ACCESS_KEY']]) {
                    sh """
                    export TF_CLI_ARGS_init='-backend-config="bucket=${awsS3}"'
                    terraform init -reconfigure
                    terraform workspace new ${plan} || true
                    terraform workspace select ${plan}
                    terraform plan -out=${plan} -var-file=${WORKSPACE}/environments/tf.tfvars
                    terraform apply ${plan}
                    terraform output config_map_aws_auth > ./config_map_aws_auth.yaml
                    """
                }
            }
        }
    }
}
stage('add worker nodes') {
    def k8sImage = docker.image('pahud/eks-kubectl-docker')
    k8sImage.inside('-u 0:0') {
        withCredentials([[$class: 'AmazonWebServicesCredentialsBinding',
                          credentialsId: eksCredentialsId,
                          accessKeyVariable: 'AWS_ACCESS_KEY_ID',
                          secretKeyVariable: 'AWS_SECRET_ACCESS_KEY']]) {
            // Note: variable assignments in this sh step do not persist into the
            // later sh steps; withCredentials already exports the AWS variables.
            sh 'AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID} AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY} AWS_DEFAULT_REGION=us-west-2 CLUSTER_NAME=my-eksctl'
            sh 'aws eks --region us-west-2 update-kubeconfig --name my-eksctl'
            sh 'kubectl apply -f config_map_aws_auth.yaml'
        }
    }
}
How do I store terraform output config_map_aws_auth > ./config_map_aws_auth.yaml
so that in the next stage I can run the kubectl command like so
sh 'kubectl apply -f config_map_aws_auth.yaml'
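One way to carry the generated file from the Terraform stage into the later stage is Jenkins' built-in stash/unstash steps, which work even when the stages run on different agents. A sketch, assuming the file is written to the workspace as in the code above:

```groovy
// In the stage that runs terraform, after the file is written:
stash name: 'aws-auth', includes: 'config_map_aws_auth.yaml'

// In the later 'add worker nodes' stage, before kubectl runs:
unstash 'aws-auth'
sh 'kubectl apply -f config_map_aws_auth.yaml'
```

If both stages are guaranteed to run on the same agent in the same workspace, the file is already there and no stash is needed.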
Credentials are configured in Jenkins but there's an error suggesting they are not.
I've followed documentation provided by Jenkins website.
agent {
    node {
        label 'master'
    }
}
environment {
    AWS_ACCESS_KEY_ID = credentials('jenkins-aws-secret-key-id')
    AWS_SECRET_ACCESS_KEY = credentials('jenkins-aws-secret-access-key')
}
stages {
    stage('checkout') {
        steps {
            git(url: 'git@bitbucket.org:user/bitbucketdemo.git', branch: 'master', credentialsId: 'jenkins')
            echo 'hello'
        }
    }
    stage('packer') {
        steps {
            echo "$AWS_ACCESS_KEY_ID"
        }
    }
}
}
It should print out the value of the environment variable
I used the Cloudbees AWS Credentials plugin. Once it was installed, I was able to add my AWS credentials (an additional selection appears in the Credentials pull-down menu).
Then I used the following snippet in my Jenkinsfile:
withCredentials([[
    $class: 'AmazonWebServicesCredentialsBinding',
    accessKeyVariable: 'AWS_ACCESS_KEY_ID',
    credentialsId: 'AWS',
    secretKeyVariable: 'AWS_SECRET_ACCESS_KEY'
]]) {
    sh 'packer build -var aws_access_key=${AWS_ACCESS_KEY_ID} -var aws_secret_key=${AWS_SECRET_ACCESS_KEY} example4.json'
}
I have a Jenkins scripted pipeline with multiple stages, and all of the stages require the same password to interact with a third-party API.
node {
    stage ('stage1') {
        sh 'curl --user login:password http://third-party-api'
    }
    stage ('stage2') {
        sh 'curl --user login:password http://third-party-api'
    }
}
For obvious reasons I want to keep this password safe, e.g. in Jenkins credentials.
The only secure way I've found is to add a withCredentials section, but it must be added to each pipeline stage, e.g.:
node {
    stage ('stage1') {
        withCredentials([string(credentialsId: '02647301-e655-4858-a7fb-26b106a81458', variable: 'mypwd')]) {
            sh 'curl --user login:$mypwd http://third-party-api'
        }
    }
    stage ('stage2') {
        withCredentials([string(credentialsId: '02647301-e655-4858-a7fb-26b106a81458', variable: 'mypwd')]) {
            sh 'curl --user login:$mypwd http://third-party-api'
        }
    }
}
This approach is not OK because the real pipeline is much more complicated.
Any alternatives?
According to this other Stack Overflow question and this tutorial, you should be able to specify the needed credentials in a declarative pipeline like so:
environment {
    AUTH = credentials('02647301-e655-4858-a7fb-26b106a81458')
}
stages {
    stage('stage1') {
        steps {
            sh 'curl --user $AUTH_USR:$AUTH_PSW http://third-party-api'
        }
    }
    stage('stage2') {
        steps {
            sh 'curl --user $AUTH_USR:$AUTH_PSW http://third-party-api'
        }
    }
}
With a scripted pipeline, you're pretty much relegated to using withCredentials around the things that need access to the credentials. Have you tried surrounding the stages with a single withCredentials block, as in:
node {
    withCredentials([string(credentialsId: '02647301-e655-4858-a7fb-26b106a81458', variable: 'mypwd')]) {
        stage ('stage1') {
            sh 'curl --user login:$mypwd http://third-party-api'
        }
        stage ('stage2') {
            sh 'curl --user login:$mypwd http://third-party-api'
        }
    }
}