Permission denied error while running Cypress UI automation scripts with Jenkins in docker linux containers - jenkins-pipeline

We have a UI automation suite created using Cypress/JavaScript. The scripts run perfectly fine on the local machine. We created a Jenkins job and are trying to run the scripts in a Linux Docker container. Please see the Jenkinsfile below.
@Library('jenkins-shared-libraries@v2') _
pipeline {
    agent {
        kubernetes {
            yaml podYamlLinux(
                customContainerYamls: [
                    '''
- name: nodeimg
  image: node
  options: --user 1001
  imagePullPolicy: Always
  resources:
    requests:
      memory: "100Mi"
      cpu: "100m"
    limits:
      memory: "1Gi"
      cpu: "500m"
  tty: true
  command:
    - cat
  securityContext:
    privileged: true
'''
                ]
            )
        }
    }
    stages {
        // Install and verify Cypress
        stage('installation') {
            steps {
                container('nodeimg') {
                    sh 'npm i'
                    sh 'npm install cypress --save-dev'
                }
            }
        }
        stage('Cypress Test') {
            steps {
                echo "Running Tests"
                container('nodeimg') {
                    sh 'npm run cypressVerify'
                }
            }
        }
    }
    post {
        // shutdown the server running in the background
        always {
            echo 'Stopping local server'
            publishHTML([allowMissing: false, alwaysLinkToLastBuild: false, keepAll: true,
                reportDir: 'cypress/report', reportFiles: 'index.html',
                reportName: 'HTML Report', reportTitles: ''])
        }
    }
}
I have attached a picture of the package.json file as well.
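Since the picture does not come through here, a hypothetical scripts section consistent with the error below might look like this (the script bodies and reporter settings are assumptions, not the actual file):
{
  "scripts": {
    "cypressVerify": "cypress verify",
    "cy:run": "cypress run --reporter mochawesome --reporter-options reportDir=cypress/report"
  },
  "devDependencies": {
    "cypress": "^12.0.0"
  }
}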
I have tried different configurations to resolve this, but I currently get the error message "/tmp/cypressVerify-b91242e3.sh: 1: cypress: Permission denied" and the script returns exit code 126.
It would be great if the community could help me resolve this.
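One hedged way to narrow this down (the cypressVerify script name comes from the error above; the container paths are assumptions): exit code 126 means the shell found a cypress entry but could not execute it, which usually points at missing execute permission or wrong ownership on node_modules/.bin or the Cypress binary cache for the user the container runs as. For example, inside the nodeimg container:
# who is the build running as, and can it execute the local Cypress shim?
whoami && id
ls -l node_modules/.bin/cypress
ls -ld "${HOME}/.cache/Cypress" || true   # default Cypress binary cache location on Linux
# possible workarounds, depending on what the checks show:
npx cypress verify                        # resolve the project-local binary via npx
chmod -R u+x node_modules/.bin            # restore a lost execute bit on the npm shims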

Related

Jenkins git checkout stage not able to checkout yml file

I have created a Jenkins pipeline for an application. I have the following stages in my declarative pipeline:
1. Checkout
2. NuGet restore
3. Sonar scan start
4. dotnet build
5. Sonar scan end
6. Build Docker image
7. Run container
8. Deploy on Google Kubernetes cluster
If I don't include the 8th step, my pipeline works fine, but if I include it, the pipeline works only the first time. For the next runs I get the below error in the first stage.
I have created a Windows machine on Azure and I am running Jenkins on that machine.
Jenkinsfile:
stages {
stage('Code Checkout') {
steps {
echo 'Cloning project...'
deleteDir()
checkout changelog: false, poll: false, scm: [$class: 'GitSCM', branches: [[name: '*/development']], extensions: [], userRemoteConfigs: [[url: 'https://github.com/shailu0287/JenkinsTest.git']]]
echo 'Project cloned...'
}
}
stage('Nuget Restore') {
steps {
echo "nuget restore"
bat 'dotnet restore \"WebApplication4.sln\"'
}
}
stage('Sonar Scan Start'){
steps{
withSonarQubeEnv('SonarQube_Home') {
echo "Sonar scan start"
echo "${scannerHome}"
bat "${scannerHome}\\SonarScanner.MSBuild.exe begin /k:\"Pan33r\" /d:sonar.login=\"squ_e2ecec8e21976c04764cc4940d3d3ddbec9e2898\""
}
}
}
stage('Build Solution') {
steps {
echo "Build Solution"
bat "\"${tool 'MSBUILD_Home'}\" WebApplication4.sln /p:Configuration=Release /p:Platform=\"Any CPU\" /p:ProductVersion=1.0.0.${env.BUILD_NUMBER}"
}
}
stage('Sonar Scan End'){
steps{
withSonarQubeEnv('SonarQube_Home') {
echo "${scannerHome}"
echo "sonar scan end"
bat "${scannerHome}\\SonarScanner.MSBuild.exe end /d:sonar.login=\"squ_e2ecec8e21976c04764cc4940d3d3ddbec9e2898\""
}
}
}
stage('Building docker image') {
steps{
script {
echo "Building docker image"
dockerImage = docker.build registry + ":$BUILD_NUMBER"
}
}
}
stage('Containers'){
parallel{
stage("Run PreContainer Checks"){
environment{
containerID = "${bat(script: 'docker ps -a -q -f name="c-Shailendra-master"', returnStdout: true).trim().readLines().drop(1).join("")}"
}
steps{
script{
echo "Run PreContainer Checks"
echo env.containerName
echo "containerID is "
echo env.containerID
if(env.containerID != null){
echo "Stop container and remove from stopped container list too"
bat "docker stop ${env.containerID} && docker rm ${env.containerID}"
}
}
}
}
stage("Publish Docker Image to DockerHub"){
steps{
script {
echo "Pushing docker image to docker hub"
docker.withRegistry( '', registryCredential ) {
dockerImage.push("$BUILD_NUMBER")
dockerImage.push('latest')
}
}
}
}
}
}
stage('Docker Deployment'){
steps{
echo "${registry}:${BUILD_NUMBER}"
echo "Docker Deployment by using docker hub's image"
bat "docker run -d -p 7200:80 --name c-${containerName}-master ${registry}:${BUILD_NUMBER}"
}
}
stage('Deploy to GKE') {
steps{
echo "Deployment started ..."
step([$class: 'KubernetesEngineBuilder', projectId: env.PROJECT_ID, clusterName: env.CLUSTER_NAME, location: env.LOCATION, manifestPattern: 'Kubernetes.yml', credentialsId: env.CREDENTIALS_ID, verifyDeployments: true])
}
}
}
}
If I remove the last step, all my builds work fine. If I include it, only the first build works fine and then I have to restart the machine. I am not sure what the issue with the YML file is.

report folder does not exist error with htmlpublisher

I am trying to write a Jenkins pipeline script for one of my Playwright tests. Below is a simple version of what I have done so far.
pipeline {
agent any
stages {
stage('Run Playwright Test') {
steps {
runTest()
}
}
stage('Publish Report'){
steps {
script {
sh 'ls -lrta'
//print REPORT_FILES
}
publishHTML([
allowMissing: false,
alwaysLinkToLastBuild: true,
keepAll: true,
//reportDir: '.',
reportDir : "./TestReport2",
reportFiles: 'index.html',
reportName: "Html Reports",
reportTitles: 'Report title'
])
}
}
}
}
def runTest() {
node('MYNODE') {
docker.image('image details').inside('--user root'){
git branch: 'mybranchName', credentialsId: 'ID', url: 'url'
catchError() {
sh """
cd WebTests
npm install
npx playwright test --project=CHROME_TEST --grep #hello
"""
}
sh "cp -R WebTests/TestReport TestReport2"
sh 'cd TestReport2; ls -lrta'
}
}
}
When I use the above code, the test executes successfully; however, I see an error when trying to publish the report.
Below is the error:
Specified HTML directory '/bld/workspace//TestReport2' does not exist.
Observation: when I put an ls -ltr after the runTest code, I could not see the TestReport2 folder even though it was copied successfully.
Another thing I tried: when I put the code to publish the HTML inside runTest(), it worked fine and I am able to see the generated reports. Something is going on with the TestReport2 folder once the runTest() block completes.
Does anyone see what the root cause is? Any suggestion will be appreciated.
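A pattern that often resolves this, sketched under the assumption that the node('MYNODE') allocation inside runTest() gets a different workspace than the agent that later runs publishHTML: stash the report in the workspace where it is produced and unstash it just before publishing (the stash name 'playwright-report' is an example, not from the original post).
def runTest() {
    node('MYNODE') {
        docker.image('image details').inside('--user root') {
            // ... checkout and test steps as above ...
            sh 'cp -R WebTests/TestReport TestReport2'
            // make the report available outside this node's workspace
            stash name: 'playwright-report', includes: 'TestReport2/**'
        }
    }
}
// and in the Publish Report stage:
stage('Publish Report') {
    steps {
        unstash 'playwright-report'   // restore the report into the workspace publishHTML reads
        publishHTML([allowMissing: false, alwaysLinkToLastBuild: true, keepAll: true,
            reportDir: 'TestReport2', reportFiles: 'index.html', reportName: 'Html Reports'])
    }
}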

Failure loading tarball from GitHub to Heroku using Terraform heroku_build resource

I am working on creating a CI pipeline using GitHub Actions, Terraform and Heroku. My example application is a Jmix application from Mario David (rent-your-stuff) that I am building by following his YouTube videos. Unfortunately, the regular GitHub integration he suggests has been turned off due to a security issue. If you attempt to use Heroku's "Connect to GitHub" button, you get an Internal Service Error.
So, as an alternative, I have changed my private repo to public and I'm trying to download it directly via the Terraform heroku_build source URL (see the "heroku_build" resource):
terraform {
required_providers {
heroku = {
source = "heroku/heroku"
version = "~> 5.0"
}
herokux = {
source = "davidji99/herokux"
version = "0.33.0"
}
}
backend "remote" {
organization = "eraskin-rent-your-stuff"
workspaces {
name = "rent-your-stuff"
}
}
required_version = ">=1.1.3"
}
provider "heroku" {
email = var.HEROKU_EMAIL
api_key = var.HEROKU_API_KEY
}
provider "herokux" {
api_key = var.HEROKU_API_KEY
}
resource "heroku_app" "eraskin-rys-staging" {
name = "eraskin-rys-staging"
region = "us"
}
resource "heroku_addon" "eraskin-rys-staging-db" {
app_id = heroku_app.eraskin-rys-staging.id
plan = "heroku-postgresql:hobby-dev"
}
resource "heroku_build" "eraskin-rsys-staging" {
app_id = heroku_app.eraskin-rys-staging.id
buildpacks = ["heroku/gradle"]
source {
url = "https://github.com/ericraskin/rent-your-stuff/archive/refs/heads/master.zip"
}
}
resource "heroku_formation" "eraskin-rsys-staging" {
app_id = heroku_app.eraskin-rys-staging.id
type = "web"
quantity = 1
size = "Standard-1x"
depends_on = [heroku_build.eraskin-rsys-staging]
}
Whenever I try to execute this, I get the following build error:
-----> Building on the Heroku-20 stack
! Push rejected, Failed decompressing source code.
Source archive detected as: Zip archive data, at least v1.0 to extract
More information: https://devcenter.heroku.com/articles/platform-api-deploying-slugs#create-slug-archive
My assumption is that Heroku can not download the tarball, but I can successfully download it without any authentication using wget.
How do I debug this? Is there a way to ask Heroku to show the commands that the build stack is executing?
For that matter, is there a better approach given that the normal GitHub integration pipeline is broken?
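Before falling back to the workaround below, one thing worth trying, given that the error calls out a Zip archive: Heroku's build source download expects a gzipped tarball, so pointing source.url at GitHub's tar.gz endpoint instead of the .zip archive may be enough. A sketch of just the changed resource (the rest of the configuration stays as above):
resource "heroku_build" "eraskin-rsys-staging" {
  app_id     = heroku_app.eraskin-rys-staging.id
  buildpacks = ["heroku/gradle"]
  source {
    # GitHub's tarball endpoint instead of .../master.zip
    url = "https://github.com/ericraskin/rent-your-stuff/archive/refs/heads/master.tar.gz"
  }
}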
I have found a workaround for this issue, based on the notes from Heroku. They suggest using the third-party Deploy to Heroku GitHub Action instead of Terraform. To use it, I removed heroku_build and heroku_formation from my main.tf file, so it now contains just this:
resource "heroku_app" "eraskin-rys-staging" {
name = "eraskin-rys-staging"
region = "us"
}
resource "heroku_addon" "eraskin-rys-staging-db" {
app_id = heroku_app.eraskin-rys-staging.id
plan = "heroku-postgresql:hobby-dev"
}
My GitHub workflow now contains:
on:
  push:
    branches:
      - master
  pull_request:
jobs:
  terraform:
    name: 'Terraform'
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v3
      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v1
        with:
          cli_config_credentials_token: ${{ secrets.TF_API_TOKEN }}
      - name: Terraform Format
        id: fmt
        working-directory: ./infrastructure
        run: terraform fmt
      - name: Terraform Init
        id: init
        working-directory: ./infrastructure
        run: terraform init
      - name: Terraform Validate
        id: validate
        working-directory: ./infrastructure
        run: terraform validate -no-color
      - name: Terraform Plan
        id: plan
        if: github.event_name == 'pull_request'
        working-directory: ./infrastructure
        run: terraform plan -no-color -input=false
        continue-on-error: true
      - name: Update Pull Request
        uses: actions/github-script@v6
        if: github.event_name == 'pull_request'
        env:
          PLAN: "terraform\n${{ steps.plan.outputs.stdout }}"
        with:
          script: |
            const output = `#### Terraform Format and Style 🖌\`${{ steps.fmt.outcome }}\`
            #### Terraform Initialization ️⚙️\`${{ steps.init.outcome }}\`
            #### Terraform Plan 📖\`${{ steps.plan.outcome }}\`
            #### Terraform Validation 🤖\`${{ steps.validate.outcome }}\`
            <details><summary>Show Plan</summary>
            \`\`\`\n
            ${process.env.PLAN}
            \`\`\`
            </details>
            *Pusher: @${{ github.actor }}, Action: \`${{ github.event_name }}\`*`;
            github.rest.issues.createComment({
              issue_number: context.issue.number,
              owner: context.repo.owner,
              repo: context.repo.repo,
              body: output
            })
      - name: Terraform Plan Status
        if: steps.plan.outcome == 'failure'
        run: exit 1
      - name: Terraform Apply
        if: github.ref == 'refs/heads/master' && github.event_name == 'push'
        working-directory: ./infrastructure
        run: terraform apply -auto-approve -input=false
  heroku-deploy:
    name: 'Heroku-Deploy'
    if: github.ref == 'refs/heads/master' && github.event_name == 'push'
    runs-on: ubuntu-latest
    needs: terraform
    steps:
      - name: Checkout App
        uses: actions/checkout@v3
      - name: Deploy to Heroku
        uses: akhileshns/heroku-deploy@v3.12.12
        with:
          heroku_api_key: ${{secrets.HEROKU_API_KEY}}
          heroku_app_name: ${{secrets.HEROKU_APP_NAME}}
          heroku_email: ${{secrets.HEROKU_EMAIL}}
          buildpack: https://github.com/heroku/heroku-buildpack-gradle.git
          branch: master
          dontautocreate: true
The workflow has two "phases". On a pull request, it runs the tests in my application, followed by terraform fmt, terraform init and terraform plan. On a merge to my master branch, it runs terraform apply. When that completes, it runs the second job, which runs the akhileshns/heroku-deploy@v3.12.12 GitHub Action.
As far as I can tell, it works. YMMV, of course. ;-)

Jenkins/ MacOS - dial unix /var/run/docker.sock: connect:permission denied

I am new to using Jenkins and Docker. Currently I have run into an error where my Jenkinsfile doesn't have permission to access docker.sock. Is there a way to fix this? I have run out of ideas.
Things I've tried:
- sudo usermod -aG docker $USER // usermod not found
- sudo setfacl --modify user:******:rw /var/run/docker.sock // setfacl not found
- chmod 777 /var/run/docker.sock // still receiving this error after reboot
- chown -R jenkins:jenkins /var/run/docker.sock // changing ownership of '/var/run/docker.sock': Operation not permitted
error image:
def gv
pipeline {
agent any
environment {
CI = 'true'
VERSION = "$BUILD_NUMBER"
PROJECT = "foodcore"
IMAGE = "$PROJECT:$VERSION"
}
tools {
nodejs "node"
'org.jenkinsci.plugins.docker.commons.tools.DockerTool' 'docker'
}
parameters {
choice(name: 'VERSION', choices: ['1.1.0', '1.2.0', '1.3.0'], description: '')
booleanParam(name: 'executeTests', defaultValue: true, description: '')
}
stages {
stage("init") {
steps {
script {
gv = load "script.groovy"
CODE_CHANGES = gv.getGitChanges()
}
}
}
stage("build frontend") {
steps {
dir("client") {
sh 'npm install'
echo 'building client'
}
}
}
stage("build backend") {
steps {
dir("server") {
sh 'npm install'
echo 'building server...'
}
}
}
stage("build docker image") {
steps {
sh 'docker build -t $IMAGE .'
}
}
// stage("deploy") {
// steps {
// script {
// docker.withRegistry(ECURL, ECRCRED) {
// docker.image(IMAGE).push()
// }
// }
// }
// }
}
// post {
// always {
// sh "docker rmi $IMAGE | true"
// }
// }
}
docker.sock permissions will be lost if you restart the system or the Docker service.
To make the change persistent, set up a cron job to change the permissions after each reboot:
@reboot chmod 777 /var/run/docker.sock
And when you restart Docker, make sure to run the command below again:
chmod 777 /var/run/docker.sock
Or you can put it in a cron job that executes every 5 minutes.
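For the every-five-minutes variant, the crontab entry would look roughly like this (a sketch; chmod 777 mirrors the answer above, although 666 is usually sufficient for a socket):
# crontab -e
*/5 * * * * chmod 777 /var/run/docker.sock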

Pass environment variable through PM2 to NextJS

I have 2 package.json scripts that look like this:
"start": "next start -p $PORT",
"pm2_staging": "pm2 restart ecosystem.config.js --env staging",
And an ecosystem.config.js that looks like this:
module.exports = {
apps: [
{
name: 'test.co.uk',
script: 'npm',
args: 'start',
env_staging: {
API: 'staging',
NODE_ENV: 'production',
PORT: 3001,
},
},
],
};
I then run the following:
TEST_VAR='test' npm run pm2_staging
I would expect the following to happen:
1. The PM2 restart command fires.
2. ecosystem.config.js fires the npm start command and sets some environment variables.
3. The app starts and all env vars are available, including TEST_VAR (set in the original command).
What actually happens is that all the env vars from the ecosystem file are correctly set, but TEST_VAR is not available in the app. Why is this, and how do I go about setting secret keys from CI tools if I can't do this?
I ran into the same problem tonight. After looking everywhere, I found the config needed. The env variable needs to go in your ecosystem.config.js as below. In my case I put it at the root of my server.
module.exports = {
apps : [{
name: 'nextjs',
script: 'yarn',
args:"start",
cwd:"/var/www/myapp/",
instances: 2,
autorestart: true,
watch: false,
max_memory_restart: '1G',
env: {
NODE_ENV: 'development'
},
env_production: {
NODE_ENV: 'production',
API_URL: 'YOUR ENV URL',
PORT:8000
}
}]
};
package.json like so
"scripts": {
"dev": "next",
"build": "next build",
"start": "next start -p 8000"
},
...
and executed something like this
#!/bin/bash
cd /var/www/myapp/
git pull && yarn install && yarn build
cd ~/
pm2 start ecosystem.config.js --env production
Then API_URL would be available as const API_URL = process.env.API_URL in your app.
These URLs helped:
https://willandskill.se/en/setup-a-next-js-project-with-pm2-nginx-and-yarn-on-ubuntu-18-04/
https://pm2.keymetrics.io/docs/usage/application-declaration/
Another idea would be to pass all the env parameters of PM2 through to the server via the following ecosystem.config.js:
module.exports = {
apps: [{
name: "my-service-1",
script: "dist/server.js",
env_production: process.env, // pass all the env params of the container to node
}],
};
@webface
The environment variables in your example will not be available in Next.js. To be available to both the client and the server, variables must be prefixed with NEXT_PUBLIC_ (e.g. NEXT_PUBLIC_API_VERSION).
These environment variables must be passed into the build process to be available at runtime.
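A minimal sketch of that prefix rule (the variable name and page are illustrative, not from the posts above): the value must be present at build time and carry the NEXT_PUBLIC_ prefix for Next.js to inline it into the client bundle.
// provided by CI at build time, e.g. NEXT_PUBLIC_API_VERSION=v2 npm run build
// pages/index.js -- the prefixed variable is replaced at build time
export default function Home() {
  return <p>API version: {process.env.NEXT_PUBLIC_API_VERSION}</p>;
}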
