Binding volume with 'withFileSystemBind' -> Permission Denied - testcontainers

I bind a volume to my GenericContainer as follows:
@Container
public GenericContainer ap = new GenericContainer(DockerImageName.parse("myImage:latest"))
        .withExposedPorts(AP_PORT)
        .withEnv("SPRING_PROFILES_ACTIVE", "integrationtest")
        .withFileSystemBind("/home/user/tmp/rdf4jRepos/", "/mnt/spring/", BindMode.READ_WRITE)
        .withLogConsumer(new Slf4jLogConsumer(log))
        .waitingFor(Wait.forHttp("/actuator/health"));
But I get a permission denied problem.
I added the following to the Spring Boot app that runs in the GenericContainer (rdfRepositoryHome = /mnt/spring):
File repoHome = new File(rdfRepositoryHome);
System.out.println( "getAbsolutePath: " + repoHome.getAbsolutePath());
File f2 = new File(repoHome, "testRepo");
System.out.println( "repoHome.isDirectory(): " + repoHome.isDirectory() );
System.out.println( "Execute: " + repoHome.canExecute() );
System.out.println( "Write: " + repoHome.canWrite() );
System.out.println( "READ: " + repoHome.canRead() );
output:
2021-08-04 16:11:31.566 INFO 326439 --- [tream-274971679] d.f.i.s.p.a.i.ITReadProfile : STDOUT: getAbsolutePath: /mnt/spring
2021-08-04 16:11:31.567 INFO 326439 --- [tream-274971679] d.f.i.s.p.a.i.ITReadProfile : STDOUT: repoHome.isDirectory(): true
2021-08-04 16:11:31.567 INFO 326439 --- [tream-274971679] d.f.i.s.p.a.i.ITReadProfile : STDOUT: Execute: true
2021-08-04 16:11:31.567 INFO 326439 --- [tream-274971679] d.f.i.s.p.a.i.ITReadProfile : STDOUT: Write: false
2021-08-04 16:11:31.567 INFO 326439 --- [tream-274971679] d.f.i.s.p.a.i.ITReadProfile : STDOUT: READ: true
How can I bind a volume that is writable?
Thanks
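For reference, a direction that is often suggested for bind-mount permission problems (a sketch only, not verified against this image; the 1000:1000 UID/GID is an assumption, e.g. from id -u / id -g on the host) is to run the container process as the host user via withCreateContainerCmdModifier:
// Sketch: run the container as the host user's UID/GID so that writes to
// the bind mount are permitted. "1000:1000" is an assumption; substitute
// the real values. withUser comes from docker-java's CreateContainerCmd,
// which Testcontainers exposes through the modifier hook.
public GenericContainer ap = new GenericContainer(DockerImageName.parse("myImage:latest"))
        .withCreateContainerCmdModifier(cmd -> cmd.withUser("1000:1000"))
        .withFileSystemBind("/home/user/tmp/rdf4jRepos/", "/mnt/spring/", BindMode.READ_WRITE);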
Update 19.08.2021:
I added a test to another project that is publicly available, running on image maven:3.6.1-jdk-11:
.gitlab-ci.yml
TestCase
Dockerfile
Test Job line 117
Test Job line 1800
Update 30.8.2021:
The problem should be reproducible (independent of GitLab CI and DinD!) with:
git clone https://gitlab.com/linkedopenactors/loa-suite.git
cd loa-suite/
git checkout feature/testcontainer
mvn clean install -DskipTests
cd integrationtests/
mvn -Dit.test=ITLastSync verify

Related

Jenkinsfile Creating Scratch Org Failing

I am trying to set up continuous integration between Bitbucket and Salesforce using Jenkins, and I am having trouble with the scratch org creation. The Jenkinsfile, I BELIEVE, is set up correctly. Here it is:
node {
    def SF_JENKINSUSER = env.SF_JENKINS_USER
    def SF_USERNAME = env.SF_JENKINS_USER + '.' + env.SF_DEV
    def SF_URL = env.SF_TESTURL
    def SF_PROD = env.SF_PRODURL
    def SF_DEV_HUB = env.SF_DEVHUB
    stage('Checkout Source') {
        checkout scm
    }
    withEnv(["HOME=${env.WORKSPACE}"]) {
        withCredentials([string(credentialsId: 'SF_CONSUMER_KEY_BIND', variable: 'SF_CONSUMER_KEY'), file(credentialsId: 'SERVER_KEY_CREDENTALS_ID', variable: 'server_key_file')]) {
            stage('Authorize DevHub Org') {
                try {
                    rc = command "sfdx force:auth:jwt:grant -r ${SF_PROD} -i ${SF_CONSUMER_KEY} -u ${SF_JENKINSUSER} -f ${server_key_file} --setdefaultdevhubusername -a ${SF_DEV_HUB}"
                    if (rc != 0) {
                        echo '========== ERROR: ' + rc
                        error 'Salesforce org authorization failed.'
                    }
                    else {
                        command "sfdx force:org:list"
                        echo '========== LOGGED IN =========='
                    }
                }
                catch (err) {
                    echo "========== DEVHUB AUTHORIZATION FAILURE: ${err} =========="
                }
            }
            // Create a new scratch org to test the repo
            stage('Create Test Scratch Org') {
                try {
                    rc = command "sfdx force:org:create -s -f config\\project-scratch-def.json -a TestScratch -w 10 -d 1"
                    if (rc != 0) {
                        error 'Salesforce test scratch org creation failed.'
                    }
                }
                catch (err) {
                    echo "========== SCRATCH ORG CREATION FAILURE: ${err} =========="
                }
            }
        }
    }
}
def command(script) {
    if (isUnix()) {
        return sh(returnStatus: true, script: script);
    }
    else {
        return bat(returnStatus: true, script: script);
    }
}
Now, the results of this I cannot figure out. It says the connected status of the org is JwtGrantError, and it's looking for the server.key file instead of the scratch JSON file on the command line. Here are the pertinent parts of the output from this job:
E:\DevOps_Root\JENKINS\workspace\TestingCIPipeline2>sfdx force:org:list
=== Orgs
ALIAS USERNAME ORG ID CONNECTED STATUS
(D) DevHub sa.jenkins@[...].com 00D300000000UicEAE JwtGrantError
No active scratch orgs found. Specify --all to see all scratch orgs
[Pipeline] echo
========== LOGGED IN ==========
[Pipeline] }
[Pipeline] // stage
[Pipeline] stage
[Pipeline] { (Create Test Scratch Org)
[Pipeline] isUnix
[Pipeline] bat
E:\DevOps_Root\JENKINS\workspace\TestingCIPipeline2>sfdx force:org:create -s -f config\project-scratch-def.json -a TestScratch -w 10 -d 1
ERROR running force:org:create: ENOENT: no such file or directory, open
'E:\DevOps_Root\JENKINS\workspace\Pipe#tmp\secretFiles\e0ab232f-1958-42d1-b3bb-aed5e00a562f\server.key'
[Pipeline] echo
========== SCRATCH ORG COMMAND FAILURE: 1
Why would the job be looking for the server.key file when I have already run withCredentials successfully? What am I missing here?
Any insights would be greatly appreciated.
OK, so this one confounded me for a while, but I finally got the script to create a scratch org.
I logged into the Jenkins virtual server through Remote Desktop Connection, opened Windows Explorer, navigated to the Jenkins user's .sfdx folder, and deleted the following files:
alias.json
key.json
user@domain.json
stash.json
After I did that, I made some updates to the Jenkinsfile and pushed the changes up to the repository. The job ran, and the scratch org was created.
My new issue is figuring out how to have the same job run repeatedly, because we will have multiple repos using this single job.
Anyway, I hope this helps some of you who are facing the same issue.
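A sketch of automating that cleanup as an extra stage before the JWT grant (untested; the file names are the ones deleted manually above, and the paths assume the HOME=${env.WORKSPACE} override from the Jenkinsfile):
stage('Clean sfdx Cache') {
    // Remove cached sfdx auth files so a stale grant from a previous run
    // cannot poison this one.
    if (isUnix()) {
        sh 'rm -f "$HOME/.sfdx/alias.json" "$HOME/.sfdx/key.json" "$HOME/.sfdx/stash.json"'
    }
    else {
        bat 'del /q "%HOME%\\.sfdx\\alias.json" "%HOME%\\.sfdx\\key.json" "%HOME%\\.sfdx\\stash.json"'
    }
}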

how to pass selenium-standalone port configuration from the command line

I created 3 Jenkins jobs linked to the same GitHub project. I'm using wdio v5 and Cucumber, and I want to run each job on a different port; this is why I'm trying to pass the port from the Jenkins post-build task: execute shell.
I tried this:
-- --seleniumArgs.seleniumArgs= ['-port', '7777']
then this:
-- --seleniumArgs.seleniumArgs= ["-port", "7777"]
then:
-- --seleniumArgs.seleniumArgs= '-port: 7777'
but nothing works.
I found a solution:
So this is the wdio.conf.js file:
var myArgs = process.argv.slice(2);
var Port = myArgs[1];
exports.config = {
    ////////////////////////
    services: ['selenium-standalone'],
    seleniumArgs: {
        seleniumArgs: ['-port', Port]
    },
    //////////////////////
}
myArgs will receive an array of the arguments passed on the command line.
And this is the command:
npm test 7777 -- --port 7777
The 7777 is argument number 2, thus index 1 in the array;
index 0 is wdio.conf.js, which comes from the "test" script in package.json:
===> "test": "wdio wdio.conf.js"

Jenkins pipeline - How to read the success status of build?

Below is the output after running the build(with success):
$ sam build
2019-06-02 15:36:37 Building resource 'SomeFunction'
2019-06-02 15:36:37 Running PythonPipBuilder:ResolveDependencies
2019-06-02 15:36:39 Running PythonPipBuilder:CopySource
Build Succeeded
Built Artifacts : .aws-sam/build
Built Template : .aws-sam/build/template.yaml
Commands you can use next
=========================
[*] Invoke Function: sam local invoke
[*] Package: sam package --s3-bucket <yourbucket>
The [command] && echo "Yes" approach did not help me.
I tried to use this in a Jenkins pipeline:
def samAppBuildStatus = sh(script: '[cd sam-app-folder; sam build | grep 'Succeeded' ] && echo true', returnStatus: true) as Boolean
as a one-liner script command, but it does not work.
How can I grab the build success status using a bash script, for a Jenkins pipeline?
Use this to grab the exit status of the command:
def samAppBuildStatus = sh returnStatus: true, script: 'cd sam-app-folder; sam build | grep "Succeeded"'
or this if you don't want to see any stderr in the output:
def samAppBuildStatus = sh returnStatus: true, script: 'cd sam-app-folder; sam build 2>&1 | grep "Succeeded"'
then later in your Jenkinsfile you can do something like this:
if (!samAppBuildStatus) {
    echo "build success [$samAppBuildStatus]"
} else {
    echo "build failed [$samAppBuildStatus]"
}
The reason for the ! is that the definitions of true and false differ between shell and Groovy (0 is true for shell).
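An equivalent sketch that avoids the inverted Boolean by comparing the exit code to zero explicitly (grep -q, added here to suppress output, is the only change to the command):
// rc is grep's exit code: 0 when "Succeeded" appears in the build output.
def rc = sh returnStatus: true, script: 'cd sam-app-folder; sam build 2>&1 | grep -q "Succeeded"'
def buildSucceeded = (rc == 0)
echo(buildSucceeded ? "build success [$rc]" : "build failed [$rc]")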

Running GitlabRunner locally with private registry on Mac OSX

I'm trying to run GitlabRunner locally, but...
This works ...
❯ docker pull registry.gitlab.com/{MY_PROJECT}
❯ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
registry.gitlab.com/{MY_PRIVATE_IMAGE} latest XXXX 2 days ago 605MB
❯ gitlab-runner verify
WARNING: Running in user-mode.
WARNING: Use sudo for system-mode:
WARNING: $ sudo gitlab-runner...
Verifying runner... is alive runner={XXXX}
❯ cat ~/.gitlab-runner/config.toml
concurrent = 1
check_interval = 0
[[runners]]
  name = "macbook-{XXXX}"
  url = "https://gitlab.com/"
  token = "XXXXXXX"
  executor = "docker"
  [runners.docker]
    tls_verify = false
    image = "registry.gitlab.com/{MY_PRIVATE_IMAGE}:latest"
    privileged = true
    disable_cache = false
    volumes = ["/cache"]
    shm_size = 0
    pull_policy = "if-not-present"
  [runners.cache]
❯ cat ../../../.docker/config.json
{
  "auths": {
    "https://index.docker.io/v1/": {},
    "https://registry.gitlab.com": {},
    "registry.gitlab.com": {}
  },
  "credsStore": "osxkeychain"
}
In my project, when I try to execute the runner...
❯ gitlab-runner exec docker lint
WARNING: You most probably have uncommitted changes.
WARNING: These changes will not be tested.
Running with gitlab-ci-multi-runner 9.4.0 (ef0b1a6)
on ()
Using Docker executor with image registry.gitlab.com/{MY_PRIVATE_IMAGE} ...
map[]
Using docker image sha256:XXXX for predefined container...
Pulling docker image registry.gitlab.com/{MY_PRIVATE_IMAGE} ...
ERROR: Preparation failed: Error response from daemon: Get https://registry.gitlab.com/v2/{MY_PRIVATE_IMAGE}/manifests/latest: denied: access forbidden
Will be retried in 3s ...
Using Docker executor with image registry.gitlab.com/{MY_PRIVATE_IMAGE} ...
map[]
Using docker image sha256:XXX for predefined container...
ERROR: Preparation failed: Error response from daemon: Get {MY_PRIVATE_IMAGE}/manifests/latest: denied: access forbidden
Open your ~/.docker/config.json file and replace the credsStore entry with an empty string, then docker login <your-registry> again, and it should work.
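For illustration, after that change and a fresh docker login registry.gitlab.com, the config.json would look something like this (the inline base64 auth value is elided; its exact contents are an assumption):
{
  "auths": {
    "registry.gitlab.com": {
      "auth": "..."
    }
  },
  "credsStore": ""
}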

Getting SerializableException in Jenkinsfile on curl call

I'm working on a pipeline script that isn't even building anything. It clones a repo, gets some info about it, and also uses the Bitbucket REST API to get other information about the repository.
The following is an excerpt of the Jenkinsfile:
stageName = 'GET-COMMITS-AND-USERS'
stage (stageName) {
    withCredentials([[$class: 'UsernamePasswordMultiBinding', credentialsId: params.JP_MechIdCredentials, usernameVariable: 'USERNAME', passwordVariable: 'PASSWORD']]) {
        def uniqueCommitterMap = {}
        def format = 'yyyy-MM-dd'
        def now = new Date()
        def aWhileAgo = now - params.JP_DaysInPastToLookFor.toInteger()
        def uniqueCommitterEmails = sh(returnStdout: true, script:"git log --date=short --pretty=format:'%ce' --after='${aWhileAgo.format(format)}' --before='${now.format(format)}' | sort -u")
        now = null
        aWhileAgo = null
        println "uniqueCommitterEmails[${uniqueCommitterEmails}]"
        def uniqueCommitterEmailList = uniqueCommitterEmails.split(/[ \t\n]+/)
        uniqueCommitterEmails = null
        println "uniqueCommitterEmailList[${uniqueCommitterEmailList}] size[${uniqueCommitterEmailList.size()}]"
        for (int ctr = 0; ctr < uniqueCommitterEmailList.size(); ++ctr) {
            println "entry[${uniqueCommitterEmailList[ctr]}]"
            println "entry[${uniqueCommitterEmailList[ctr].split('@')}]"
            uniqueCommitterMap[uniqueCommitterEmailList[ctr].split("@")[0]] = uniqueCommitterEmailList[ctr]
        }
        println "uniqueCommitterMap[${uniqueCommitterMap}]"
        println "end of uCM."
        uniqueCommitterEmailList = null
        def cmd = "curl -u ${USERNAME}:${PASSWORD} https://.../rest/api/1.0/projects/${params.JP_ProjectName}/repos/${params.JP_RepositoryName}/permissions/users?limit=9999"
        USERNAME = null
        PASSWORD = null
        println "cmd[${cmd}]"
        def usersJson = sh(returnStdout: true, script:cmd.toString())
        println "Past curl call." // Don't get here
The following is an excerpt of the console output when I run this job with appropriate parameters:
[Pipeline] echo
end of uCM.
cmd[curl -u ****:**** https://.../rest/api/1.0/projects/.../repos/.../permissions/users?limit=9999]
[Pipeline] echo
[Pipeline] sh
[workspace] Running shell script
[Pipeline] }
[Pipeline] // withCredentials
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
[DOSSIER] Response Code: 201
java.io.NotSerializableException: java.io.StringWriter
at org.jboss.marshalling.river.RiverMarshaller.doWriteObject(RiverMarshaller.java:860)
at org.jboss.marshalling.river.BlockMarshaller.doWriteObject(BlockMarshaller.java:65)
at org.jboss.marshalling.river.BlockMarshaller.writeObject(BlockMarshaller.java:56)
at org.jboss.marshalling.MarshallerObjectOutputStream.writeObjectOverride(MarshallerObjectOutputStream.java:50)
at org.jboss.marshalling.river.RiverObjectOutputStream.writeObjectOverride(RiverObjectOutputStream.java:179)
at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:344)
at java.util.HashMap.internalWriteEntries(HashMap.java:1777)
at java.util.HashMap.writeObject(HashMap.java:1354)
at sun.reflect.GeneratedMethodAccessor28.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
As you can see, it appears to execute the "sh" step to call "curl" for the BitBucket REST API, but it doesn't get past that. I can't figure out what object it's complaining about.
Update:
I'm running Jenkins 2.19.2.
The pipeline has the following settings:
"Do not allow concurrent builds": on
10 total defined parameters, one a Credentials parameter, which is referenced in this block
To answer your question, I ran Jenkins v2.32.2 from the official Docker image and created the following test pipeline:
node() {
    stage('serialize') {
        def USERNAME = 'myusername'
        def PASSWORD = 'mypassword'
        def cmd = "echo curl -u ${USERNAME}:${PASSWORD} https://.../${params.TEST_PARAM1}/permissions/users?limit=9999"
        USERNAME = null
        PASSWORD = null
        println "cmd[${cmd}]"
        def usersJson = sh(returnStdout: true, script:cmd)
        println "Past curl call."
    }
}
I also added a text parameter to the build job to have something similar to your params.JP_ProjectName variable.
And this is my output when running with the text parameter set to "defaultValue modified":
Started by user admin
[Pipeline] node
Running on master in /var/jenkins_home/workspace/42217046
[Pipeline] {
[Pipeline] stage
[Pipeline] { (serialize)
[Pipeline] echo
cmd[echo curl -u myusername:mypassword https://.../defaultValue modified/permissions/users?limit=9999]
[Pipeline] sh
[42217046] Running shell script
+ echo curl -u myusername:mypassword https://.../defaultValue modified/permissions/users?limit=9999
[Pipeline] echo
Past curl call.
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
Finished: SUCCESS
As you can see, the pipeline finished successfully, and I can see no issue with it.
Maybe you can update your question with a screenshot of your job configuration and the version number of your Jenkins installation.
I came across the same issue, but it seems it is not caused by sh at all. It is probably caused by a variable you've defined above the sh step which is not Serializable.
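One concrete candidate in the code above (an observation, not a confirmed root cause): def uniqueCommitterMap = {} creates a Groovy Closure, not an empty map; the map literal is [:]. A sketch of keeping such work out of the CPS-transformed flow entirely, via a @NonCPS helper:
// @NonCPS methods run without pipeline checkpointing, so intermediate,
// potentially non-serializable values never need to be serialized.
// Note the map literal [:] rather than {} (a Closure in Groovy).
@NonCPS
def buildCommitterMap(String emails) {
    def map = [:]
    emails.split(/[ \t\n]+/).each { email ->
        map[email.split('@')[0]] = email
    }
    return map
}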
