I've been wondering: is it possible to limit the shell commands a user can run in a Jenkins job?
Example: we store a universal password for accessing our Subversion repositories in Jenkins, and we don't want people to simply cat the file or echo it out, displaying it in the job's build log.
How exactly can you restrict which shell commands and directories users can use?
This is outside the scope of Jenkins; addressing it is purely your responsibility, mainly because it is impossible to do correctly from within Jenkins.
There are two solutions:
* Start using Docker containers as build slaves (a sketch follows below)
* Try to use OS-level limitations
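As a rough illustration of the container approach, a minimal sketch (the image name, tag, and mount paths are placeholders, not a recommendation): run each build step inside a throwaway container so that it sees only the mounted workspace, not the Jenkins home or the credentials file.
# hypothetical build step: the container sees only the workspace mount
docker run --rm \
  -v "$WORKSPACE":/workspace \
  -w /workspace \
  --user 1000:1000 \
  maven:3-jdk-11 \
  mvn -B clean package
The same idea drives the OS-level route: a dedicated low-privilege build user plus restrictive file permissions on the secrets gets you a similar effect without containers.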
Regarding keeping secrets secret, the final answer is that you cannot really hide them from people who write scripted jobs.
And yes, keep the master isolated for special jobs.
There is a list of env variables available for GoCD at:
https://docs.gocd.org/current/faq/environment_variables.html
However, I'm looking for something like GO_BUILD_ERROR or similar.
When a build fails, I would like to have the failure reason or message, so that I can pass it to an external script or include it in a message.
There seems to be nothing in the documentation.
GoCD doesn't have any such variables. I believe the reason is mostly that GoCD is very generic about what commands constitute a build for a material. You might want to parse the logs manually to figure that out.
Also, in the context of GoCD, environment variables are used as input to the stages, not as output from them. If you're planning to build a plugin / wrapper for the commands that run, consider storing the results as properties on the jobs; that way they can also be queried later if required. An example follows below.
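For example, a rough sketch of recording a failure reason as a job property from within a task, assuming your GoCD version still ships the (since-deprecated) Properties API; the server URL, credentials, and property name are placeholders, while the GO_* variables come from the documentation page linked above:
# hypothetical: store a failure message as a queryable job property
curl -u admin:secret \
  -d "value=compile failed in module X" \
  "http://go-server:8153/go/properties/${GO_PIPELINE_NAME}/${GO_PIPELINE_COUNTER}/${GO_STAGE_NAME}/${GO_STAGE_COUNTER}/${GO_JOB_NAME}/failure-reason"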
Here is the context of my problem:
* a GitLab CI YAML pipeline
* several jobs in the same stage
* all jobs use a Gradle task that requires the use of its cache
* all jobs share the same Gradle cache
My problem:
sometimes, when several pipelines run at the same time, I get:
What went wrong:
Could not create service of type FileHasher using GradleUserHomeScopeServices.createCachingFileHasher().
Timeout waiting to lock file hash cache (/cache/.gradle/caches/5.1/fileHashes). It is currently in use by another Gradle instance.
Owner PID: 149
Our PID: 137
Owner Operation:
Our operation:
Lock file: /cache/myshop/reunion/.gradle/caches/5.1/fileHashes/fileHashes.lock
I can't find any documentation about the locking system used by Gradle. I don't understand why locks are taken when the Gradle task doesn't write to the cache directory.
Does anyone know how the locks work? Or can I simply increase the timeout so that concurrent tasks wait their turn long enough before failing?
I tried to run Gradle without a daemon; it did not work.
I fixed this by killing all Java processes in Activity Monitor (macOS). Hope it helps.
You typically get this error when trying to share the Gradle cache amongst multiple Gradle processes that run on different hosts. I assume your CI pipelines run on different hosts or they at least run isolated from each other (e.g., as part of different Docker containers).
Unfortunately, such a scenario is currently not supported by Gradle. Gradle developer Stefan Oehme wrote this comment regarding sharing the Gradle user home:
Gradle processes will hold locks if they are uncontended (to gain performance). Contention is announced through inter-process communication, which does not work when the processes are isolated in Docker containers.
And more clearly he states in a follow-up comment (highlighting by me):
There might be other issues that we haven't yet discovered though, since sharing a user home between machines is not a use case we have designed for.
In other words: sharing a Gradle user home or even just the cache part of it across different machines or otherwise isolated processes is currently not officially supported by Gradle. (See also my related question.)
I guess the only way to solve this for your scenario is to either:
* make sure that the Gradle processes in your CI pipelines can communicate with each other (e.g., by running them on the same host), or
* don't directly share the Gradle user home, e.g., by creating copies for all CI pipelines (one such approach is sketched below), or
* don't run the CI pipelines in parallel.
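As one concrete, hedged way to implement the second option in GitLab CI: give every pipeline its own Gradle user home under the shared cache volume, so lock files are never contended across pipelines (CI_PIPELINE_ID is a predefined GitLab CI variable; the path is a placeholder):
# each pipeline gets a private Gradle user home; no cross-pipeline locking
export GRADLE_USER_HOME="/cache/gradle-home-${CI_PIPELINE_ID}"
./gradlew build --no-daemon
The trade-off is that each pipeline starts with a cold cache (or with a copy you seed yourself), in exchange for never hitting the fileHashes.lock timeout.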
Another scenario where this could happen is if some of the Gradle-related files are on a cloud file system like OneDrive that needs re-authentication. In that case:
* Re-authenticate to the cloud file system
* "Invalidate caches and restart" in Android Studio
1. First, edit your config file /etc/sysconfig/jenkins to change the user to root: JENKINS_USER="root"
2. Change the ownership of /var/lib/jenkins to root: chown -R root:root jenkins
3. Restart the service: service jenkins restart
Your Exception:
What went wrong:
Could not create service of type FileHasher using GradleUserHomeScopeServices.createCachingFileHasher().
Timeout waiting to lock file hash cache (/cache/.gradle/caches/5.1/fileHashes). It is currently in use by another Gradle instance.
Owner PID: 149
Our PID: 137
Owner Operation:
Our operation:
Lock file: /cache/myshop/reunion/.gradle/caches/5.1/fileHashes/fileHashes.lock
This worked for me:
rm /cache/myshop/reunion/.gradle/caches/5.1/fileHashes/fileHashes.lock
(Remove the lock file)
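If the lock file keeps reappearing, a slightly safer variant is to delete it only when no Gradle process is still running (a hedged sketch; it assumes pgrep is available and reuses the path from the error above):
# remove the stale lock only if no Gradle process still holds it
pgrep -f gradle >/dev/null || rm -f /cache/myshop/reunion/.gradle/caches/5.1/fileHashes/fileHashes.lock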
I would like Ansible to wait for my input on the command line for an interactive script running on a remote machine. "Expect" will not suit my requirement, as the interactive questions keep changing.
e.g.
xxx.pl
This must be the user which is running service. [root:root]': y ----> I should be allowed to change this in real time
handling utilities? [/usr/bin]: y ---> same with this
This is not possible with Ansible.
Ansible packs all task scripts/parameters before sending them to the remote host for execution, and there is no way (as of Ansible 2.4) to get any feedback during task execution; only the final result of the task.
For anyone (like myself) curious about this and landing on this page via Google: it looks as if it's now possible through the prompts feature! (I don't think it's even a built-in module; it just seems to be built in. I can't find it under the Ansible.Builtin plugin index; it's under Playbook Advanced Features.) A sketch follows below.
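As a hedged illustration of that prompts feature (vars_prompt): the playbook asks for its values once, up front, before the play runs, so it still cannot answer mid-task questions from a remote interactive script. The file, inventory, and variable names below are made up:
# write a minimal playbook that prompts for a value, then run it
cat > prompt-demo.yml <<'EOF'
- hosts: all
  vars_prompt:
    - name: run_user
      prompt: "Which user should run the service"
      private: no
  tasks:
    - name: Show the value collected at the prompt
      debug:
        msg: "Will run as {{ run_user }}"
EOF
ansible-playbook -i inventory prompt-demo.yml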
I am trying to schedule a batch job in Jenkins (Windows environment) for a Windows EXE program (implemented in .NET).
This program refers to a shared location on the network (viz. \\shared network.net\sample path) for reading from and writing to files.
When I run this program independently, outside of Jenkins, it works fine, since it runs under my login, a user who actually has access to the shared path.
However, when I run it through Jenkins, there is an access issue. Through my program logs I checked and found that it runs as the 'NT AUTHORITY\SYSTEM' user.
I need to make the Jenkins job run with a particular user's credentials, one that has the relevant access to the shared path.
Please advise.
The Authorize Project Plugin allows you to run a job as a specific user.
Or, if you are executing from a bat script, you should be able to change the user in your script before running your program.
Several options:
* Use "net use" to map the network location under the job's session using your credentials (a sketch follows below).
* On your Windows slave, go to Services -> Jenkins Slave -> Properties. There, under the "Log On" section, you can specify the user you want the service to run under.
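As a rough sketch of the first option in an "Execute Windows batch command" build step (the share path, domain, user, and the SHARE_PASSWORD variable are placeholders; in practice, inject the credentials from the Jenkins credentials store rather than hard-coding them):
rem map the share under this job's session with credentials that have access
net use \\fileserver\share %SHARE_PASSWORD% /user:MYDOMAIN\builduser
rem ... run the program that reads from and writes to the share ...
MyProgram.exe
rem drop the mapping when the job is done
net use \\fileserver\share /delete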
I would definitely go with the first option, as it is much more manageable (if you replace your slave tomorrow, the second option means setting it all up again, whereas the first just needs the job migrated, and it maps the session again by itself).
Good Luck!
Being fairly new to the Linux environment, and not having local resources to consult, I would like to ask what the preferred method is for starting a process at startup, as a specific user, on an Ubuntu 12.04 system. The reasoning for such a setup is that this machine (or machines) will be hosting an Input/Output Controller (IOC) in an industrial setting. If the machine fails or restarts, this process must start automatically, every time.
My internet searches have turned up two places for performing this task:
/etc/rc.local
/etc/init.d/
I ask for the specific advantages and disadvantages of each approach. I'll add that some of these machines are clients and some are servers, but all need to run an IOC, preferably in the same manner.
Within whichever method above is deemed most appropriate, a bash shell script must be run as my specified user. It is my understanding that all startup processes are owned by root, so I question whether this is the best practice:
sudo -u <user> start_ioc.sh
If this is the case, then I believe it is required to create a file under:
/etc/sudoers.d/
Using:
sudo visudo -f <filename>
Within this file you assign the appropriate rights and paths to the user. Most of my searches have shown this as the proper format:
<user or group> <host or IP>=(<user or group to run as>)NOPASSWD:<list of comma separated applications>
root ALL=(user)NOPASSWD:/usr/bin/start_ioc.sh
As a final piece of information: the ultimate reason for this approach (which may be flawed logic) is that the IOC process needs access to a network-attached storage (NAS) server. Allowing root access to the NAS is, I believe, a no-no, whereas the user can have the appropriate permissions assigned.
This may not be the best answer, but it is how I decided to complete this task:
Exactly as in this post:
how to run script as another user without password
I did use rc.local to initiate the process at startup; it seems to be working quite well. A sketch of the combination is below.
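For completeness, a hedged sketch of what that combination looks like on Ubuntu 12.04 (the user name iocuser and the script path stand in for the question's <user> and start_ioc.sh placeholders):
#!/bin/sh -e
# /etc/rc.local runs as root at the end of boot on Ubuntu 12.04;
# switch to the unprivileged user and background the IOC so boot can continue
sudo -u iocuser /usr/bin/start_ioc.sh &
exit 0
Since rc.local itself runs as root, the sudoers entry from the question is not strictly needed at boot; it mainly matters when you restart the IOC by hand as a non-root user.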