I need to get the user list from a specific group into a bash array inside a Jenkins job. I thought I could use the Jenkins REST API with curl (e.g. downloading api/json), but I can't find any details about this.
Does Jenkins offer any way to give me this list? I couldn't even find how to check groups in the UI.
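For reference, the pattern I was hoping to end up with looks roughly like this; the endpoint and the JSON shape are placeholders, since finding the right endpoint is exactly my problem:

# Hypothetical endpoint: the real one depends on the security realm /
# authorization plugin in use; the jq filter assumes a made-up JSON shape.
GROUP_API="https://JENKINS_URL/some/group/endpoint/api/json"
mapfile -t users < <(curl -s -u "$USER:$API_TOKEN" "$GROUP_API" | jq -r '.members[].name')
echo "Found ${#users[@]} users"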
In the Jenkins API it's possible to query for information regarding a specific label (MY_LABEL) by invoking:
https://JENKINS_URL/label/MY_LABEL/api/json?pretty=true
Question: Is there any endpoint that could list ALL the available labels on a Jenkins server?
I found one way to do it: interpret the output of JENKINS_URL/computer/api/json?pretty=true
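As a rough sketch (assuming jq is installed, and the usual JSON shape of the computer API, where each node carries an assignedLabels array), the label names can be collected and de-duplicated like this:

# Gather every label assigned to any node, then de-duplicate.
curl -s "https://JENKINS_URL/computer/api/json?pretty=true" \
  | jq -r '.computer[].assignedLabels[].name' \
  | sort -u

Note this only covers labels assigned to nodes; labels that appear solely in job configurations will not show up here.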
I am using the Azure CLI to perform a health check on some Azure VMs. The health checks are deployed through a Jenkins stage, using bash. The stage itself may take several hours to complete, during which several az vm run-command invocations are executed that all require the proper credentials.
I also have several Jenkins pipelines that deploy different products and that are supposed to be able to run in parallel. All of them have the same health checks stage.
When I execute az login to generate an auth token and az account set to set the subscription, as far as I understand, this data is written to a profile file (~/.azure/azureProfile.json). This is all well and good, but whenever I trigger a parallel pipeline on this Jenkins container with a different Azure subscription, the profile file naturally gets overwritten with the different credentials. This causes the other health check to fail when it reaches its next vm run-command execution, since it looks for a resource group that exists in a different subscription.
I was thinking of potentially creating a new unique Linux user as part of each stage run and then removing it once it's done, so all pipelines would have separate profile files. This is a bit tricky, though, since this is a Jenkins Docker container using an Alpine image, and I would need to create the users with each pipeline rather than in the Dockerfile, which brings me to a whole other drama: giving the Jenkins user sufficient privileges to create and delete users, and so on.
Also, since the session credentials are stored in the ~/.azure/accessTokens.json and azureProfile.json files by default, I could theoretically generate a different directory for each execution, but I couldn't find a way to alter those default file locations in the Azure docs.
What do you think is the best/easiest approach to work around this?
Setting the AZURE_CONFIG_DIR environment variable does the trick as described here.
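A minimal sketch of what that could look like per pipeline run, reusing the az login / az account set flow from the question (the mktemp directory is just one way to get a unique path):

# Give this pipeline run its own isolated Azure CLI state.
export AZURE_CONFIG_DIR="$(mktemp -d)"
az login                                  # tokens and profile land under $AZURE_CONFIG_DIR
az account set --subscription "xxxx-xxxx-xxxxx-xxxx"
# ... run the health checks ...
rm -rf "$AZURE_CONFIG_DIR"                # clean up the per-run profile/token files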
I would try to keep az login as it is, remove az account set, and use the --subscription argument for each command instead.
You can see that ~/.azure/azureProfile.json contains tenantId and user information for each subscription and ~/.azure/accessTokens.json contains all tokens.
So, if you specify your subscription explicitly each time, you will not depend on the shared user context.
I have Account 1 for subscription xxxx-xxxx-xxxxx-xxxx and Account 2 for subscription yyyy-yyyy-yyyy-yyyy, and I do:
az login # Account 1
az login # Account 2
az group list --subscription "xxxx-xxxx-xxxxx-xxxx"
az group list --subscription "yyyy-yyyy-yyyy-yyyy"
and it works well under the same Unix user.
I have a .sh file that is stored in GCS. I am trying to schedule the .sh file through Google Cloud Shell.
I can run the file with the command gsutil cat gs://miptestauto/baby.sh | sh, but I am not able to schedule it.
Following is my code for scheduling the file:
16 17 * * * gsutil cat gs://miptestauto/baby.sh | sh
It displays the message "auto saving...done", but the scheduled job is not displayed when I run crontab -l.
# contents of .sh file
#!/bin/bash
bq load --source_format=CSV babynames.baby_destination13 gs://testauto/yob2010.txt name:string,gender:string,count:integer
Can anyone please tell me how to schedule it using Google Cloud Shell?
I am not using Compute Engine/App Engine; I just want to schedule it using Cloud Shell.
thank you in advance :)
As per the documentation, Cloud Shell is intended for interactive use only. The Cloud Shell instances are provisioned on a per-user, per-session basis and sessions are terminated after an hour of inactivity.
In order to run a daily cron job, the instance needs to be up and running at all times, but this doesn't happen with Cloud Shell, and I believe your jobs are not running because of this.
When you start Cloud Shell, it provisions an f1-micro instance, which is the same machine type you can get for free if you are eligible for "Always Free". Therefore you can create an f1-micro instance, configure the cron job on it, and leave it running so it can execute the daily job.
You can check free usage limits at https://cloud.google.com/compute/pricing#freeusage
You can also use the Cloud Scheduler product https://cloud.google.com/scheduler which is a serverless managed Cron like scheduler.
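As an illustration, a minimal Cloud Scheduler job on the same schedule as the crontab entry above might look like the following; note that Cloud Scheduler triggers an HTTP, Pub/Sub or App Engine target rather than running a shell script directly, and the job name and URI here are placeholders:

# Hypothetical example: fire an HTTP target daily at 17:16.
gcloud scheduler jobs create http baby-job \
  --schedule="16 17 * * *" \
  --uri="https://example.com/run-baby-load" \
  --http-method=POST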
To schedule a script, you first have to create a project if you don't have one. I assume you already have a project, so if that's the case, just create the instance that you want for scheduling this script.
To create the new instance (a gcloud equivalent is sketched after these steps):
At the Google Cloud Platform Console click on Products & Services which is the icon with the four bars at the top left hand corner.
On the menu go to the Compute section and hover on Compute Engine and then click on VM Instances.
Go to the menu bar above the instance section and there you will see a Create Instance button. Click it and fill in the configuration values that you want your new instance to have. The values that you select will determine your VM instance features. You can choose, among other values, the name, zone and machine type for your new instance.
In the Machine type section click the drop-down menu tab to select an “f1-micro instance”.
In the Identity and API access section, give access scope to the Storage API so that you can read and write to your bucket in case you need to do so; the default access scope only allows you to read. Also enable BigQuery API.
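For reference, a roughly equivalent instance could be created from the command line with gcloud; the instance name and zone below are just examples:

# Create an f1-micro VM with read/write Storage scope and BigQuery scope.
gcloud compute instances create cron-vm \
  --machine-type=f1-micro \
  --zone=us-central1-a \
  --scopes=storage-rw,bigquery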
Once you have the instance created and access to the bucket, just create your cron job inside your new instance: in the user account under which the cron job will execute, run crontab -e and edit the file so it runs your baby.sh script. The following documentation link should help you with this.
Please note, if you want to view output from your script you may need to redirect it to your current terminal.
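Putting it together, the crontab entry from the question should work on the VM, for example with output redirected to a log file so you can inspect it later:

# Run daily at 17:16; append stdout and stderr to a log file.
16 17 * * * gsutil cat gs://miptestauto/baby.sh | sh >> /tmp/baby.log 2>&1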
I have multiple Jenkins instances: Jenkins A, Jenkins B and Jenkins C.
Now I am trying to build a report that collects the details about all three Jenkins instances in one place.
The report covers: "Total Build", "Success", "Failed"
(from Jenkins A, Jenkins B, Jenkins C)
Is there a shell script that runs on every Jenkins instance and combines the script output in one place?
Make sure anonymous has read access to all your jobs in Jenkins. Use PowerShell to invoke the job's URL in the following format:
http://jenkinsA:8080/view/viewname/job/jobname/1/console
http://jenkinsA:8080/view/viewname/job/jobname/2/console
http://jenkinsB:8080/view/viewname/job/jobname/1/console
http://jenkinsC:8080/view/viewname/job/jobname/2/console
Keep a count of the number of possible builds for each job. Each time your count is incremented, create an HTTP request in PowerShell to invoke the URL. If the request returns 404, you know the build does not exist; if it returns HTTP 200, the build exists. Extract the line 'Finished:', which will give you either success or failure.
Based on the results, increment your success or failure count.
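The same counting loop can be sketched in plain bash with curl, using the placeholder host, view and job names from the URLs above:

#!/bin/bash
# Walk build numbers until a non-200 response says there are no more
# builds, tallying results from each build's console output.
total=0; success=0; failed=0
build=1
while :; do
  url="http://jenkinsA:8080/view/viewname/job/jobname/${build}/console"
  status=$(curl -s -o /tmp/console.txt -w '%{http_code}' "$url")
  [ "$status" != "200" ] && break       # e.g. 404: build does not exist
  total=$((total + 1))
  if grep -q 'Finished: SUCCESS' /tmp/console.txt; then
    success=$((success + 1))
  else
    failed=$((failed + 1))
  fi
  build=$((build + 1))
done
echo "Total Build: $total, Success: $success, Failed: $failed"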
Hope it helps!
I've been wondering whether it's possible to limit the shell commands a user can run in a Jenkins job.
Example: we store a universal password for accessing our Subversion repositories in Jenkins, and we don't want people to simply cat the file or echo it out, displaying it in the build log for the job.
How exactly can you limit which shell commands and directories users can utilize?
This is outside the scope of Jenkins; addressing it is purely your responsibility, the main reason being that it's impossible to do correctly from within Jenkins.
There are two solutions:
* Start using Docker containers as build slaves (see the sketch after this list)
* Try to use OS level limitations
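A minimal sketch of the container idea, assuming a Docker-capable agent; the image, user ID and script name are placeholders:

# Run the build step in a throwaway container so it cannot read
# files on the host, such as the stored Subversion password.
docker run --rm \
  --user 1000:1000 \
  -v "$WORKSPACE:/workspace" \
  -w /workspace \
  alpine:3 sh -c './build.sh'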
Regarding keeping secrets secret, the final answer is that you cannot really secure them from those writing scripted jobs.
And yes, keep the master isolated for special jobs.