I know that there is a way to enable or disable a Jenkins job by using the commands
curl -u user:password -X POST http://server/job/jobname/enable
curl -u user:password -X POST http://server/job/jobname/disable
But what I need is to get the job's status (enabled/disabled) and write it to the bash $status variable.
Is there a way to do it?
You can check whether a job is enabled or disabled using the API:
http://server:port/job/jobname/api/xml?xpath=*/buildable
So, with a crumb, you can use something like this:
CRUMB=$(curl -s 'http://USER:PASSWORD@SERVER:PORT/crumbIssuer/api/xml?xpath=concat(//crumbRequestField,":",//crumb)')
myStatus=$(curl -X POST -H "$CRUMB" "http://USER:PASSWORD@SERVER:PORT/job/jobname/api/xml?xpath=*/buildable")
And in the variable myStatus you get
<buildable>true</buildable>
or
<buildable>false</buildable>
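If you want the bare value in a bash variable called status, as the question asks, one way (a sketch, not part of the original answer) is to strip the XML tags from that response:
status=$(echo "$myStatus" | sed -e 's/<[^>]*>//g')
echo "$status"   # prints true if the job is enabled, false if it is disabled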
I'm trying to send a Slack notification using the .gitlab-ci.yml, and I need to pass the commit's tag in the message, like this:
"The version ${CI_COMMIT_TAG} Version is available!"
But I'm still not able to get the desired environment variable expanded when the notification arrives on my channel, passing it like this in the file:
script:
- "curl -X POST -H 'Content-type: application/json' --data '{\"text\":\"The version ${CI_COMMIT_TAG} version is available!\"} ' https://hooks.slack.com/services/....../......"
Do you have any clues? I'm not used to curl and YAML.
Thanks and have a good day!
--data '...'
Variable expansion in bash does not work within single quotes. Use double quotes instead.
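For the job above, a corrected script line could look roughly like this (a sketch; the Slack webhook URL stays the placeholder from the question, and a folded YAML scalar is used only to keep the quoting readable):
script:
  - >
    curl -X POST -H 'Content-type: application/json'
    --data "{\"text\":\"The version ${CI_COMMIT_TAG} version is available!\"}"
    https://hooks.slack.com/services/....../......
Here the shell sees double quotes around the --data argument, so ${CI_COMMIT_TAG} is expanded before the JSON is sent.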
Alternatively, use a data file to avoid formatting JSON inline:
curl -d "#data.json" ...
I use the curl command below to disable the alert, which works fine.
curl -k -u admin:password https://URL -X POST
But when I try to hide the username and password in the shell script below, I get an unauthorized exception:
SCRIPT_DIR=/tmp
USER=$(cat $SCRIPT_DIR/.nonprodusr.txt)
PWD=$(cat $SCRIPT_DIR/.nonprod.txt)
curl -k -u $USER:$PWD https://url -X POST
You can use a netrc file: https://everything.curl.dev/usingcurl/netrc
/tmp/.netrc:
machine <hostname>
login admin
password Passw0rd
And use it with this option:
curl -k --netrc-file /tmp/.netrc https://url -X POST
One can use a configuration file to supply the user/password as well as other options.
curl --config /home/me/curl-configuration.txt <url>
Contents of "/home/me/curl-configuration.txt":
--user <username>:<password>
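As a sketch, the alert-disabling call from the earlier question could be driven entirely from such a file (using the example credentials and placeholder URL from above; the option names follow curl's config-file syntax):
# contents of /home/me/curl-configuration.txt
--user "admin:password"
--insecure
--request POST
Invoked as:
curl --config /home/me/curl-configuration.txt https://url
Keep the file readable only by you, since it holds the credentials.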
I am trying to test the Sumo Logic API by updating the information of my collector. The second curl command is the one that is causing the issue 'curl: (55) Failed sending PUT request'. It works in my terminal but not in the bash script.
#!/bin/bash
readonly etag=$(curl -u '<accessId>:<accessKey>' -I -X GET https://api.sumologic.com/api/v1/collectors/<id> | grep -Fi etag | awk '{print $2}' | tr -d \''"\')
echo ${etag}
curl -vvv -u '<accessId>:<accessKey>' -X PUT -H "Content-Type: application/json" -H "If-Match: \"${etag}\"" -T updated_collector.json https://api.sumologic.com/api/v1/collectors/<id>
set -x
The first curl command is assigned to the variable called 'etag', which stores the necessary ETag. The ETag is used in the second curl command to make a request that updates the information stored in 'updated_collector.json'. The updated_collector.json file is not the issue, as I have successfully updated the information via the terminal with it. I suspect the Content-Type is not being sent in the header, because someone ran the script on their end and it was not showing that information in the -vvv output.
Here you can find the Sumo Logic Collector API Methods and Examples from which I got the curl commands to test the API: https://help.sumologic.com/APIs/Collector-Management-API/Collector-API-Methods-and-Examples
Update: I retrieved the etag and then ran the second command in a bash script. I manually inserted the etag into the ${etag} portion of the second curl command. I then ran the script and it worked. Therefore, the etag variable isn't correctly formatted inside the second curl command. I do not know how to fix this.
The issue was partially the syntax, but after fixing that I was still getting an error. -H "If-Match: \"${etag}\"" in my command should be -H "If-Match: ${etag}" instead. I also had to add the --http1.1 flag for it to work. I'm sure this is a Sumo Logic issue; I am able to execute GET requests with no problem using HTTP/2.
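Putting that together, the corrected script could look roughly like this (a sketch, keeping the placeholders from the question; stripping the trailing carriage return with tr is an added assumption, since headers from curl -I end in \r\n):
#!/bin/bash
# Retrieve the ETag; strip quotes and the trailing carriage return from the header value.
readonly etag=$(curl -s -u '<accessId>:<accessKey>' -I -X GET https://api.sumologic.com/api/v1/collectors/<id> \
  | grep -Fi etag | awk '{print $2}' | tr -d '"\r')
echo "${etag}"
# Pass the plain ${etag} in If-Match (no extra escaped quotes) and force HTTP/1.1, as described above.
curl --http1.1 -u '<accessId>:<accessKey>' -X PUT \
  -H "Content-Type: application/json" \
  -H "If-Match: ${etag}" \
  -T updated_collector.json \
  https://api.sumologic.com/api/v1/collectors/<id>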
In my Ruby app I'm trying to get all agent users from my service desk board, meaning all users with status 'ServiceDesk'. Is it possible using only basic auth?
In curl I was trying something like:
curl -D -u USERNAME:PASSWORD -X GET -H "Content-Type: application/json" https://company_name.atlassian.net/rest/api/2/user/assignable/search?project=SERVICEDESK
But all I get is an error:
Warning: The file name argument '-u' looks like a flag.
curl: (3) URL using bad/illegal format or missing URL
{"errorMessages":["Internal server error"],"errors":{}}%
Is there any way to get this data with basic auth?
I think there's an issue with the curl command: -D takes a filename argument, so here it consumed -u as that filename. Write -D- to dump the headers to stdout instead, and try this:
curl -D- -u USERNAME:PASSWORD https://company_name.atlassian.net/rest/api/2/user/assignable/search?project=SERVICEDESK
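If you only need the display names out of that response, something like this might do (a sketch; it assumes python3 is available and that the endpoint returns a JSON array of user objects with a displayName field, and it drops -D- so the headers don't end up in the JSON):
curl -s -u USERNAME:PASSWORD \
  "https://company_name.atlassian.net/rest/api/2/user/assignable/search?project=SERVICEDESK" \
  | python3 -c 'import sys, json; print("\n".join(u["displayName"] for u in json.load(sys.stdin)))'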
I am trying to write a script that gives me access to the advanced scan option of Nessus on localhost, i.e. I want to run an advanced scan from a shell script, without the GUI. All operations such as login, advanced scan, and report export should be performed through the shell script, with no GUI access.
Why do you want to do it with a bash script?
You can do this much more easily with the Nessus API.
Have a look at the link below
https://github.com/jfalken/nessus_enterprise_rest_client
The simplest way of doing automation in Nessus is to use the Nessus API.
It is located at https://NessusServerIP:8834/ - if you visit it, you will be greeted by the API documentation.
There are various API implementations available - if you google 'Nessus API client' you'll get a glimpse.
If you, as you said, want to run bash scripts, then the simplest way is probably to use curl for the API requests.
A typical workflow will look like this:
Authorize yourself to the Nessus API (either via token or API key)
Launch or configure a scan (and wait until it finishes)
Export a report (and wait until it finishes)
Download the exported report
CURL #1 (authorize using token):
curl -X POST --data '{"username":"NessusUser","password":"YourPassword"}' \
  -k "https://NessusServerIp:8834/session" \
  --header "Content-Type:application/json" | python -m json.tool
...which will yield the following JSON containing a token that you need for the other API calls:
{"token": "e411e443521adee4496d79823a510cc68c5bf05aeda6e6eb"}
CURL #2 (launch a scan):
curl -X POST -H 'X-Cookie: token=e411e443521adee4496d79823a510cc68c5bf05aeda6e6eb' -H 'Content-Type:application/json' \
  --data '{"scan_id":"21", "alt_targets":["127.0.0.1"]}' \
  -k "https://NessusServerIp:8834/scans/21/launch" | python -m json.tool
...which will be answered with a JSON like this, containing the ID of the just-started scan:
{"scan_uuid":"c1c30d8f-5f79-2e4b-2d03-05b8b3c595f1e768e03195abdfa2"}
CURL #3 (exporting a scan):
curl -X POST -H 'X-Cookie: token=766ef7a2302780c189ba563b89c5eb3706140c0ef1e4de8b' \
  -H 'Content-Type:application/json' --data '{"scan_id":"33", "format":"html"}' \
  -k "https://NessusServerIP:8834/scans/33/export" | python -m json.tool
...which will yield this JSON response, containing a token for the exported file and the file id:
{"token":"3e13ab381c480caa1e377411c0b561970c46e5d78894c5a0cb2be0e7f00fefe0","file":1434780027}
...so now we are ready to download the report. In this case, since I specified "format": "html" in the last call, the output needs to be saved into an .html file.
Curl #4 (download exported report):
curl -X GET -H 'X-Cookie: token=7d155aef4359d02addea29d8d56bca4a5045ca61efeb38ee' -H 'Content-Type:application/json' \
  --data '{"scan_id":"21", "alt_targets":["127.0.0.1"]}' \
  -k "https://NessusServerIP:8834/scans/17/export/945237343/download" > report.html
...which should leave you with a report.html in the folder from which you started your script.
Now, how do you automate this? Write a bash script, put in these calls, parse the answers to extract the information you need - and then enjoy! :)
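A minimal end-to-end sketch of that might look like the following (assuming scan id 21, an HTML export, and python3 for pulling single values out of the JSON; the waiting/polling between launching, exporting, and downloading is left out):
#!/bin/bash
NESSUS="https://NessusServerIP:8834"

# 1. Authorize and keep the session token
TOKEN=$(curl -sk -X POST -H 'Content-Type: application/json' \
  --data '{"username":"NessusUser","password":"YourPassword"}' \
  "$NESSUS/session" \
  | python3 -c 'import sys, json; print(json.load(sys.stdin)["token"])')

# 2. Launch the scan
curl -sk -X POST -H "X-Cookie: token=$TOKEN" -H 'Content-Type: application/json' \
  --data '{"scan_id":"21", "alt_targets":["127.0.0.1"]}' \
  "$NESSUS/scans/21/launch" | python -m json.tool

# 3. Export it as HTML and remember the file id
#    (in a real script, poll until the scan and the export have finished)
FILE=$(curl -sk -X POST -H "X-Cookie: token=$TOKEN" -H 'Content-Type: application/json' \
  --data '{"scan_id":"21", "format":"html"}' \
  "$NESSUS/scans/21/export" \
  | python3 -c 'import sys, json; print(json.load(sys.stdin)["file"])')

# 4. Download the exported report
curl -sk -X GET -H "X-Cookie: token=$TOKEN" \
  "$NESSUS/scans/21/export/$FILE/download" > report.html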
PS: I use python -m json.tool to beautify the otherwise not very readable output of curl.
Hope I have helped,
Gewure