Credentials are configured in Jenkins, but there's an error suggesting they are not. I've followed the documentation on the Jenkins website.
pipeline {
    agent {
        node {
            label 'master'
        }
    }
    environment {
        AWS_ACCESS_KEY_ID     = credentials('jenkins-aws-secret-key-id')
        AWS_SECRET_ACCESS_KEY = credentials('jenkins-aws-secret-access-key')
    }
    stages {
        stage('checkout') {
            steps {
                git(url: 'git@bitbucket.org:user/bitbucketdemo.git', branch: 'master', credentialsId: 'jenkins')
                echo 'hello'
            }
        }
        stage('packer') {
            steps {
                echo "${AWS_ACCESS_KEY_ID}"
            }
        }
    }
}
It should print out the value of the environment variable.
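Note that Jenkins masks values bound with credentials() in the console log, so even a correctly quoted reference prints **** rather than the raw secret. A minimal sketch to confirm the binding works, assuming the credential IDs above exist:

stage('packer') {
    steps {
        // Shell-side expansion; Jenkins masks the bound value as **** in the log
        sh 'echo AWS_ACCESS_KEY_ID is set to: $AWS_ACCESS_KEY_ID'
    }
}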
I used the CloudBees AWS Credentials plugin. Once installed, I was able to add my AWS credentials (an additional selection in the credentials pull-down menu).
Then I used the following snippet in my Jenkinsfile:
withCredentials([[
    $class: 'AmazonWebServicesCredentialsBinding',
    accessKeyVariable: 'AWS_ACCESS_KEY_ID',
    credentialsId: 'AWS',
    secretKeyVariable: 'AWS_SECRET_ACCESS_KEY'
]]) {
    sh 'packer build -var aws_access_key=${AWS_ACCESS_KEY_ID} -var aws_secret_key=${AWS_SECRET_ACCESS_KEY} example4.json'
}
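In a declarative pipeline like the one in the question, this binding belongs inside a steps block; a minimal sketch, reusing the 'AWS' credentials ID and example4.json from the snippet above:

stage('packer') {
    steps {
        withCredentials([[
            $class: 'AmazonWebServicesCredentialsBinding',
            credentialsId: 'AWS',
            accessKeyVariable: 'AWS_ACCESS_KEY_ID',
            secretKeyVariable: 'AWS_SECRET_ACCESS_KEY'
        ]]) {
            // Single quotes: the shell, not Groovy, resolves the variables,
            // so the secrets are not interpolated into the command string
            sh 'packer build -var aws_access_key=${AWS_ACCESS_KEY_ID} -var aws_secret_key=${AWS_SECRET_ACCESS_KEY} example4.json'
        }
    }
}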
I have created a Jenkins pipeline for an application, with the following stages in my declarative pipeline:
1. Checkout
2. NuGet restore
3. Sonar scan start
4. dotnet build
5. Sonar scan end
6. Build Docker image
7. Run container
8. Deploy on Google Kubernetes cluster
If I don't include the 8th stage my pipeline works fine, but with it included the pipeline works only the first time; every later run fails in the first stage with the error below.
I have created a Windows machine on Azure and am running Jenkins on that machine.
Jenkinsfile:
pipeline {
    // agent/environment sections omitted
    stages {
        stage('Code Checkout') {
            steps {
                echo 'Cloning project...'
                deleteDir()
                checkout changelog: false, poll: false, scm: [$class: 'GitSCM', branches: [[name: '*/development']], extensions: [], userRemoteConfigs: [[url: 'https://github.com/shailu0287/JenkinsTest.git']]]
                echo 'Project cloned...'
            }
        }
        stage('Nuget Restore') {
            steps {
                echo "nuget restore"
                bat 'dotnet restore \"WebApplication4.sln\"'
            }
        }
        stage('Sonar Scan Start') {
            steps {
                withSonarQubeEnv('SonarQube_Home') {
                    echo "Sonar scan start"
                    echo "${scannerHome}"
                    bat "${scannerHome}\\SonarScanner.MSBuild.exe begin /k:\"Pan33r\" /d:sonar.login=\"squ_e2ecec8e21976c04764cc4940d3d3ddbec9e2898\""
                }
            }
        }
        stage('Build Solution') {
            steps {
                echo "Build Solution"
                bat "\"${tool 'MSBUILD_Home'}\" WebApplication4.sln /p:Configuration=Release /p:Platform=\"Any CPU\" /p:ProductVersion=1.0.0.${env.BUILD_NUMBER}"
            }
        }
        stage('Sonar Scan End') {
            steps {
                withSonarQubeEnv('SonarQube_Home') {
                    echo "${scannerHome}"
                    echo "sonar scan end"
                    bat "${scannerHome}\\SonarScanner.MSBuild.exe end /d:sonar.login=\"squ_e2ecec8e21976c04764cc4940d3d3ddbec9e2898\""
                }
            }
        }
        stage('Building docker image') {
            steps {
                script {
                    echo "Building docker image"
                    dockerImage = docker.build registry + ":$BUILD_NUMBER"
                }
            }
        }
        stage('Containers') {
            parallel {
                stage("Run PreContainer Checks") {
                    environment {
                        // readLines().drop(1) strips the command echo that bat
                        // prepends to returnStdout output on Windows
                        containerID = "${bat(script: 'docker ps -a -q -f name="c-Shailendra-master"', returnStdout: true).trim().readLines().drop(1).join("")}"
                    }
                    steps {
                        script {
                            echo "Run PreContainer Checks"
                            echo env.containerName
                            echo "containerID is "
                            echo env.containerID
                            if (env.containerID != null) {
                                echo "Stop container and remove from stopped container list too"
                                bat "docker stop ${env.containerID} && docker rm ${env.containerID}"
                            }
                        }
                    }
                }
                stage("Publish Docker Image to DockerHub") {
                    steps {
                        script {
                            echo "Pushing docker image to docker hub"
                            docker.withRegistry('', registryCredential) {
                                dockerImage.push("$BUILD_NUMBER")
                                dockerImage.push('latest')
                            }
                        }
                    }
                }
            }
        }
        stage('Docker Deployment') {
            steps {
                echo "${registry}:${BUILD_NUMBER}"
                echo "Docker Deployment by using docker hub's image"
                bat "docker run -d -p 7200:80 --name c-${containerName}-master ${registry}:${BUILD_NUMBER}"
            }
        }
        stage('Deploy to GKE') {
            steps {
                echo "Deployment started ..."
                step([$class: 'KubernetesEngineBuilder', projectId: env.PROJECT_ID, clusterName: env.CLUSTER_NAME, location: env.LOCATION, manifestPattern: 'Kubernetes.yml', credentialsId: env.CREDENTIALS_ID, verifyDeployments: true])
            }
        }
    }
}
If I remove the last stage, all my builds work fine. If I include it, only the first build succeeds and then I have to restart the machine. I am not sure what the issue is with the YML file.
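If the recurring failure is the fixed container name colliding with a container left over from the previous run, one common fix (an assumption, since the error text is not shown) is to force-remove any same-named container right before docker run:

stage('Docker Deployment') {
    steps {
        script {
            // Force-remove a leftover container with the same name, if any;
            // returnStatus keeps the build green when there is nothing to remove
            bat returnStatus: true, script: "docker rm -f c-${containerName}-master"
            bat "docker run -d -p 7200:80 --name c-${containerName}-master ${registry}:${BUILD_NUMBER}"
        }
    }
}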
https://github.com/spicysomtam/jenkins-deploy-eks-via-terraform
The Jenkinsfile in this repository is used to create an EKS cluster. How do I automate adding worker nodes to the cluster in the same Jenkins job?
stage('Terraform init and Plan eks') {
    if (params.action == 'create') {
        dir("infra/terraform/eks") {
            script {
                withCredentials([[$class: 'AmazonWebServicesCredentialsBinding',
                                  credentialsId: awsCredentialsId,
                                  accessKeyVariable: 'AWS_ACCESS_KEY_ID',
                                  secretKeyVariable: 'AWS_SECRET_ACCESS_KEY']]) {
                    sh """
                        export TF_CLI_ARGS_init='-backend-config="bucket=${awsS3}"'
                        terraform init -reconfigure
                        terraform workspace new ${plan} || true
                        terraform workspace select ${plan}
                        terraform plan -out=${plan} -var-file=${WORKSPACE}/environments/tf.tfvars
                        terraform apply ${plan}
                        terraform output config_map_aws_auth > ./config_map_aws_auth.yaml
                    """
                }
            }
        }
    }
}
stage('add worker nodes') {
    def k8sImage = docker.image('pahud/eks-kubectl-docker')
    k8sImage.inside('-u 0:0') {
        withCredentials([[$class: 'AmazonWebServicesCredentialsBinding',
                          credentialsId: eksCredentialsId,
                          accessKeyVariable: 'AWS_ACCESS_KEY_ID',
                          secretKeyVariable: 'AWS_SECRET_ACCESS_KEY']]) {
            sh 'AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID} AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY} AWS_DEFAULT_REGION=us-west-2 CLUSTER_NAME=my-eksctl'
            sh 'aws eks --region us-west-2 update-kubeconfig --name my-eksctl'
            sh 'kubectl apply -f config_map_aws_auth.yaml'
        }
    }
}
How do I store the file produced by
terraform output config_map_aws_auth > ./config_map_aws_auth.yaml
so that in the next stage I can run the kubectl command like so:
sh 'kubectl apply -f config_map_aws_auth.yaml'
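One way to carry the generated file between stages in the same workspace is stash/unstash; a sketch, where the stash name eks-auth is an arbitrary choice:

// At the end of the 'Terraform init and Plan eks' stage, still inside
// dir("infra/terraform/eks") where Terraform wrote the file:
stash name: 'eks-auth', includes: 'config_map_aws_auth.yaml'

// In the 'add worker nodes' stage, restore it before calling kubectl:
unstash 'eks-auth'
sh 'kubectl apply -f config_map_aws_auth.yaml'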
I have a setup at work where the vSphere hosts are manually restarted before execution of a specific Jenkins job. As the noob in the office, I automated this process by adding an extra build step to restart the VMs with the help of the vSphere Cloud Plugin (https://wiki.jenkins-ci.org/display/JENKINS/vSphere+Cloud+Plugin).
I would now like to implement this as pipeline code; please advise.
I have already checked that this plugin is Pipeline compatible.
I currently trigger the vSphere host restart in the pipeline by having it remotely trigger a job configured with the vSphere cloud plugin.
pipeline {
    agent any
    stages {
        stage('Restarting vSphere') {
            steps {
                script {
                    sh "curl -v 'http://someserver.com/job/Vivin/job/executor_configurator/buildWithParameters?Host=build-114&token=bonkers'"
                }
            }
        }
        stage('Setting Executors') {
            steps {
                script {
                    def jenkins = Jenkins.getInstance()
                    jenkins.getNodes().each {
                        if (it.displayName == 'brewery-133') {
                            echo 'brewery-133'
                            it.setNumExecutors(8)
                        }
                    }
                }
            }
        }
    }
}
I would like to invoke the vSphere cloud plugin directly in the pipeline code itself; please help me integrate it:
pipeline {
    agent any
    stages {
        stage('Restarting vSphere') {
            steps {
                // vSphere cloud plugin step (the code being requested) goes here
            }
        }
        stage('Setting Executors') {
            steps {
                script {
                    def jenkins = Jenkins.getInstance()
                    jenkins.getNodes().each {
                        if (it.displayName == 'brewery-133') {
                            echo 'brewery-133'
                            it.setNumExecutors(8)
                        }
                    }
                }
            }
        }
    }
}
Well, I found the solution myself with the help of the 'Pipeline Syntax' feature found in the menu of a Jenkins pipeline job.
The 'Pipeline Syntax' page exposes the parameters made available via the API of each plugin installed on the Jenkins server, and generates the step syntax based on your selections:
http://<jenkins server url>/job/<pipeline job name>/pipeline-syntax/
My Jenkinsfile (pipeline) now looks like this:
pipeline {
    agent any
    stages {
        stage('Restarting vSphere') {
            steps {
                vSphere buildStep: [$class: 'PowerOff', evenIfSuspended: false, ignoreIfNotExists: false, shutdownGracefully: true, vm: 'brewery-133'], serverName: 'vspherecentral'
                vSphere buildStep: [$class: 'PowerOn', timeoutInSeconds: 180, vm: 'brewery-133'], serverName: 'vspherecentral'
            }
        }
        stage('Setting Executors') {
            steps {
                script {
                    def jenkins = Jenkins.getInstance()
                    jenkins.getNodes().each {
                        if (it.displayName == 'brewery-133') {
                            echo 'brewery-133'
                            it.setNumExecutors(1)
                        }
                    }
                }
            }
        }
    }
}
I'm trying to set up a Jenkins pipeline using gcloud, but I'm getting the following error:
gcloud auth activate-service-account --key-file ./service-account-creds.json
WARNING: Could not setup log file in /.config/gcloud/logs, (Error: Could not create directory [/.config/gcloud/logs/2019.02.07]: Permission denied.)
The code:
stages {
    stage('build') {
        steps {
            withCredentials([file(credentialsId: 'google-container-registry', variable: 'GOOGLE_AUTH')]) {
                script {
                    docker.image('google/cloud-sdk:latest').inside {
                        sh "echo ${GOOGLE_AUTH} > gcp-key.json"
                        sh 'gcloud auth activate-service-account --key-file ./service-account-creds.json'
                    }
                }
            }
        }
    }
}
Jenkins is running in a container using the image jenkins/jenkins.
Try this:
withCredentials([file(credentialsId: 'google-container-registry', variable: 'GOOGLE_AUTH')]) {
    script {
        docker.image('google/cloud-sdk:latest').inside {
            sh "echo ${GOOGLE_AUTH} > gcp-key.json"
            sh "gcloud auth activate-service-account --key-file=${GOOGLE_AUTH}"
        }
    }
}
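The 'Permission denied' warning itself comes from gcloud trying to write logs under $HOME, which resolves to / for the user Jenkins runs as inside the container. If the warning persists, pointing HOME at the workspace when entering the container is a common workaround; a sketch, not verified against this exact image:

docker.image('google/cloud-sdk:latest').inside("-e HOME=${env.WORKSPACE}") {
    // Single quotes: the shell resolves GOOGLE_AUTH, the path of the bound key file
    sh 'gcloud auth activate-service-account --key-file="${GOOGLE_AUTH}"'
}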
We use the Vault plugin in our pipeline to read credentials from Vault. Now we also want to generate TLS certificates with Vault's PKI engine. For that I need the AppRole secret ID for Jenkins in my pipeline file. The secret is configured in Jenkins as a 'Vault App Role Credential', and I don't know how to access it.
What I'd like to do is something like this:
withCredentials([VaultAppRoleCredential(credentialsId: 'vault_credentials', roleIdVariable: 'roleId', secretIdVariable: 'secretId')]) {
    stage('generate certificate') {
        // authenticate with credentials against Vault
        // ...
    }
}
My workaround at the moment is to duplicate the credentials and store the roleId and secretId additionally as a username+password credential in Jenkins.
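For reference, that workaround looks roughly like this; vault-approle-copy is a hypothetical ID for the duplicated username+password credential:

withCredentials([usernamePassword(credentialsId: 'vault-approle-copy',
                                  usernameVariable: 'ROLE_ID',
                                  passwordVariable: 'SECRET_ID')]) {
    // Shell-side expansion keeps the IDs out of the Groovy-interpolated command
    sh 'vault write auth/approle/login role_id="$ROLE_ID" secret_id="$SECRET_ID"'
}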
Here is my working example of how to use a Vault token credential to access Vault secrets:
// Specify how to access secrets in Vault
def configuration = [
    vaultUrl: 'https://hcvault.global.nibr.novartis.net',
    vaultCredentialId: 'poc-vault-token',
    engineVersion: 2
]

def secrets = [
    [path: 'secret/projects/intd/common/accounts', engineVersion: 2, secretValues: [
        [vaultKey: 'TEST_SYS_USER'],
        [vaultKey: 'TEST_SYS_PWD']
    ]]
]
... [omitted pipeline]
stage('Get Vault Secrets') {
    steps {
        script {
            withCredentials([[$class: 'VaultTokenCredentialBinding', credentialsId: 'poc-vault-token', vaultAddr: 'https://hcvault.global.nibr.novartis.net'], usernamePassword(credentialsId: 'artifactory-jenkins-user-password', usernameVariable: 'USERNAME', passwordVariable: 'PASSWORD')]) {
                withVault([configuration: configuration, vaultSecrets: secrets]) {
                    sh """
                        echo $env.VAULT_ADDR > hcvault-address.txt
                        echo $env.VAULT_TOKEN > hcvault-token.txt
                        echo $env.TEST_SYS_USER > sys-user-account.txt
                    """.stripIndent()
                }
            }
        }
    }
}
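One caveat about the sh block above: because the triple-quoted string is Groovy-interpolated, the token is expanded before the shell runs and can end up in the build log. Letting the shell resolve the variables avoids interpolating the secrets into the command:

sh '''
    echo "$VAULT_ADDR"    > hcvault-address.txt
    echo "$VAULT_TOKEN"   > hcvault-token.txt
    echo "$TEST_SYS_USER" > sys-user-account.txt
'''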