I have set up OS X Server and a bot that does the build/test process. I need to run this bot from the command line.
Is it possible to run an integration (CI on OS X Server) from the command line?
It's quite a late response; nevertheless, I hope it helps.
As you might know, Xcode Server exposes a web API.
So in order to launch an integration, just issue this command, replacing the YOUR_BOT_NAME part with the actual bot name:
curl -sk -X POST -d '{ shouldClean: false }' https://localhost:20343/api/bots/`curl -sk https://localhost:20343/api/bots | jq '.results[] | select(.name == "YOUR_BOT_NAME") | ._id' | tr -d '"'`/integrations
Note that this command uses the jq command-line JSON processor, which is available via Homebrew:
brew install jq
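If you prefer readability over a one-liner, the same thing can be done in two steps. A sketch, assuming the default Xcode Server API on port 20343 and the same placeholder bot name:
# Look up the bot's _id by its name (YOUR_BOT_NAME is a placeholder).
BOT_ID=$(curl -sk https://localhost:20343/api/bots \
  | jq -r '.results[] | select(.name == "YOUR_BOT_NAME") | ._id')
# Trigger an integration for that bot.
curl -sk -X POST -d '{ shouldClean: false }' "https://localhost:20343/api/bots/${BOT_ID}/integrations"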
Builds can be done with xcodebuild. For tests, I don't know.
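For reference, a minimal command-line build might look like this (workspace and scheme names are placeholders):
# Clean and build the given scheme from the command line.
xcodebuild -workspace MyApp.xcworkspace -scheme MyApp clean build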
I have a bash script that takes all the feature files from a specific directory and uses the /rest/raven/1.0/import/feature?projectKey=XYZ Xray JIRA API to create test cases in JIRA.
XRAY version 4.2.1_j7
I am running this script in a Jenkins pipeline, but the problem is that when I run it for the first time it creates the test cases correctly; when I re-run the build, it starts creating the same test cases again (duplicating them). Any suggestion as to why this is happening?
My bash script:
#!/bin/bash
find <DIR_PATH> -type f -name "*.feature" | while read fname;
do
  curl -H "Content-Type: multipart/form-data" -u "$USERNAME:$PASSWORD" -F "file=@$fname" \
    "https://<JIRA_URL>/rest/raven/1.0/import/feature?projectKey=XYZ"
done
Sample feature file:
Feature: Facebook Login
@Login
Scenario: Log in to FB app
Given User is at FB login page
When User enters username and password
Then User is logged in successfully
Please suggest how and where I can debug to fix this issue.
Thanks
First, I would highly recommend you upgrade to the latest version, as your current version is rather old. Version 6.0 was released just a few days ago.
I don't know of any open bug related to that, except this bug, which was solved many releases ago.
You can try importing using a zip file, in a single request (which is more efficient, by the way). Maybe this approach implicitly addresses your problem in the version that you have.
Example:
rm -f features.zip
zip -r features.zip src/test/resources/calculator/ -i \*.feature
curl -H "Content-Type: multipart/form-data" -u admin:admin -F "file=#features.zip" "http://192.168.56.102/rest/raven/1.0/import/feature?projectKey=CALC"
If the problem persists, then most likely there is a bug; please reach out to the Xray support team so they can properly analyze it with you.
I am trying to test the Sumo Logic API by updating the information of my collector. The second curl command is the one that is causing the issue 'curl: (55) Failed sending PUT request'. It works in my terminal but not in the bash script.
#!/bin/bash
readonly etag=$(curl -u '<accessId>:<accessKey>' -I -X GET https://api.sumologic.com/api/v1/collectors/<id> | grep -Fi etag | awk '{print $2}' | tr -d \''"\')
echo ${etag}
curl -vvv -u '<accessId>:<accessKey>' -X PUT -H "Content-Type: application/json" -H "If-Match: \"${etag}\"" -T updated_collector.json https://api.sumologic.com/api/v1/collectors/<id>
set -x
The output of the first curl command is assigned to the variable 'etag', which stores the necessary ETag. The ETag is used in the second curl command to make a request that updates the collector with the information stored in 'updated_collector.json'. The updated_collector.json file is not the issue, as I have successfully updated the information with it via the terminal. I suspect the Content-Type is not being sent in the header, because someone ran the script on their end and that header did not show up in the -vvv output.
Here you can find the Sumo Logic Collector API Methods and Examples from which I got the curl commands to test the API: https://help.sumologic.com/APIs/Collector-Management-API/Collector-API-Methods-and-Examples
Update: I retrieved the etag and then ran the second command in a bash script, manually inserting the etag into the ${etag} portion of the second curl command. I then ran the script and it worked. Therefore, the etag variable isn't correctly formatted inside the second curl command, and I do not know how to fix this.
The issue was partially the syntax, but after fixing that I was still getting an error. "If-Match: \"${etag}\"" in my command should be "If-Match: ${etag}" instead. I also had to add the --http1.1 flag for it to work. I'm sure this is a Sumo Logic issue; I am able to execute GET requests with no problem using HTTP/2.
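For reference, a sketch of the corrected second call (access ID, access key, collector ID, and the JSON file are placeholders, as in the question):
# No extra escaped quotes around ${etag}, and force HTTP/1.1 as described above.
curl -vvv --http1.1 -u '<accessId>:<accessKey>' -X PUT \
  -H "Content-Type: application/json" \
  -H "If-Match: ${etag}" \
  -T updated_collector.json \
  https://api.sumologic.com/api/v1/collectors/<id>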
I am trying to deploy a bash server on Cloud run in order to easily trigger gcloud commands with parameters that would be passed with the POST request to the service.
I take the inspiration mainly from here.
At the moment the bash server looks like this:
#!/usr/bin/env bash
PORT=${PORT-8080}
echo "Listening on port $PORT..."
while true
do
  rm -f out
  mkfifo out
  trap "rm -f out" EXIT
  echo "Waiting.."
  cat out | nc -Nv -l 0.0.0.0 "${PORT}" > >( # parse the netcat output, to build the answer redirected to the pipe "out".
    # A POST request is expected, so the request is read until the '}' ending the json payload.
    # while
    read -d } PAYLOAD;
    # do
    # The contents of the payload are extracted by stripping the headers of the request.
    # Then every entry of the json is exported:
    #   export KEY=VALUE
    for s in $(echo $PAYLOAD} | \
      sed '/./{H;$!d} ; x ; s/^[^{]*//g' | \
      sed '/./{H;$!d} ; x ; s/[^}]*$//g' | \
      jq -r "to_entries|map(\"\(.key)=\(.value|tostring)\")|.[]" );
    do
      export $s;
      echo $s;
    done
    echo "Running the gcloud command..."
    # gsutil mb -l $LOCATION gs://$BUCKET_NAME
    printf "%s\n%s\n%s\n" "HTTP/1.1 202 Accepted" "Content-length: 0" "Connection: Keep-alive" > out
    # done
  )
  continue
done
The Dockerfile for deployment looks like this:
FROM google/cloud-sdk:alpine
RUN apk add --upgrade jq netcat-openbsd coreutils \
&& apk add --no-cache --upgrade bash
COPY main.sh .
ENTRYPOINT ["./main.sh"]
(the Cloud SDK image, with netcat-openbsd for the server part, jq for the JSON processing part, and bash added on top)
With this quite simple setup I can deploy the service and it listens to incoming requests.
When receiving a POST request with a payload looking like
{"LOCATION": "MY_VALID_LOCATION", "BUCKET_NAME": "MY_VALID_BUCKET_NAME"}
the (here commented-out) Cloud SDK command runs correctly and creates a bucket with the specified name in the specified region, in the same project as the Cloud Run service.
However, I receive a 502 error at the end of the process. (I'm expecting a 202 Accepted response).
The server seems to work correctly locally. However, it seems that Cloud Run cannot receive the HTTP response.
How can I make sure the HTTP response is correctly transmitted back?
Thanks to @guillaumeblaquiere's suggestion, I managed to obtain a solution to what I was ultimately trying to do: running bash scripts on Cloud Run.
Two important points were: 1) being able to access the payload of the incoming HTTP request, and 2) having access to the Google Cloud SDK.
To do so, the trick was to build a Docker image based not only on the Google SDK base image, but also on the shell2http one, which deals with the server aspects.
The Dockerfile therefore looks like this:
FROM google/cloud-sdk:alpine
RUN apk add --upgrade jq
COPY --from=msoap/shell2http /app/shell2http /shell2http
COPY main.sh .
ENTRYPOINT ["/shell2http","-cgi"]
CMD ["/","/main.sh"]
Thanks to the latter image, the HTTP processing is handled by a Go server, and the incoming HTTP request is piped into the stdin of the main.sh script.
The -cgi option also makes several environment variables available, for instance $HTTP_CONTENT_LENGTH, which contains the length of the payload (that is, of the JSON containing the different parameters we want to extract and pass on to further actions).
As such, with the following as the first line of our main.sh script:
read -n$HTTP_CONTENT_LENGTH PAYLOAD;
the PAYLOAD is set with the incoming JSON payload.
Using jq, it is then possible to do whatever we want with it.
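For instance, a minimal main.sh along these lines could extract the two parameters from the sample payload and run the bucket creation from the original question (a sketch; the actual script is in the repository linked below):
#!/usr/bin/env bash
# Read exactly the request body that shell2http's -cgi mode pipes to stdin.
read -n$HTTP_CONTENT_LENGTH PAYLOAD
# Extract the parameters from the JSON payload.
LOCATION=$(echo "$PAYLOAD" | jq -r '.LOCATION')
BUCKET_NAME=$(echo "$PAYLOAD" | jq -r '.BUCKET_NAME')
# Run the Cloud SDK command with the extracted parameters.
gsutil mb -l "$LOCATION" "gs://$BUCKET_NAME"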
All the code is gathered in this repository:
https://github.com/cylldby/bash2run
In addition, this solution is used in this project in order to have a very simple way to trigger a Google Workflow from an Eventarc trigger.
Big thanks again to @guillaume_blaquiere for the tip!
I'm trying to get an NVIDIA GPU working on my Mac Pro, using the https://github.com/mayankk2308/purge-wrangler script, and I've run into a problem.
The installation instructions fail on my Mac Pro, but they succeed on my MacBook Pro. Both are running the same macOS version (Mojave 10.14.2) and the same version of curl (curl 7.63.0).
Here are the instructions ...
curl -s https://raw.githubusercontent.com/mayankk2308/purge-wrangler/master/resources/webdrv-release.sh | bash
This returns and executes the following script ...
curl -s "https://api.github.com/repos/mayankk2308/purge-wrangler/releases/latest" | grep '"browser_download_url":' | sed -E 's/.*"([^"]+)".*/\1/' | xargs curl -L -s -0 > purge-wrangler.sh && chmod +x purge-wrangler.sh && ./purge-wrangler.sh && rm purge-wrangler.sh
The problem is that curl is returning a single-line response on my MacPro and a multi-line response everywhere else. I'm trying to understand what is different on my machines to cause such radically different behavior.
This must be a server side switch, but I'm not sure what it is.
NOTE: the sed(1) line is easily updated to support both single/multi line responses, but I'd like to get to the bottom of what's different on my two machines.
Anyone have any pointers?
The problem turns out to be the following two lines in my ~/.curlrc ...
# Disguise as IE 9 on Windows 7.
user-agent = "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0)"
Specifying this user agent affects how https://api.github.com returns its JSON responses (single or multi-line).
I honestly have no idea why this line was in my ~/.curlrc file, but I suspect others must have hit this same problem and just worked around it.
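If you want to reproduce it, the difference shows up as soon as you override the user agent on the command line (same endpoint as above; the IE string is the one from my ~/.curlrc):
# Default curl user agent: pretty-printed, multi-line JSON.
curl -s "https://api.github.com/repos/mayankk2308/purge-wrangler/releases/latest" | head -5
# With the IE 9 user agent: the same JSON comes back on a single line.
curl -s -A "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0)" \
  "https://api.github.com/repos/mayankk2308/purge-wrangler/releases/latest" | head -5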
I would like to ask whether there is any way to disable the build triggers for all TeamCity projects by running a script.
I have a scenario where I need to disable all the build triggers to prevent builds from running. This is because I sometimes need to perform an upgrade on the build agent machines, which can take more than one day.
I do not wish to manually click the Disable button for every build trigger in every TeamCity project. Is there a way to automate this process?
Thanks in advance.
Use the TeamCity REST API.
Given that your TeamCity is deployed at http://dummyhost.com and you have enabled guest access with the system admin role (otherwise just switch from guestAuth to httpAuth in the URL and specify a user and password in the request; details are in the documentation), you can do the following:
Get all build configurations
GET http://dummyhost.com/guestAuth/app/rest/buildTypes/
For each build configuration get all triggers
GET http://dummyhost.com/guestAuth/app/rest/buildTypes/id:***YOUR_BUILD_CONFIGID***/triggers/
For each trigger disable it
PUT http://dummyhost.com/guestAuth/app/rest/buildTypes/id:***YOUR_BUILD_CONFIGID***/triggers/***YOUR_TRIGGER_ID***/disabled
See full documentation here
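If you want to script it end to end, here is a rough sketch combining the three calls above (guestAuth and jq are assumptions; switch to httpAuth with credentials if your server requires it):
#!/bin/bash
TEAMCITY="http://dummyhost.com/guestAuth/app/rest"
# 1. List all build configuration ids.
for BT_ID in $(curl -s -H "Accept: application/json" "$TEAMCITY/buildTypes/" | jq -r '.buildType[].id'); do
  # 2. List the trigger ids of this build configuration.
  for TRIGGER_ID in $(curl -s -H "Accept: application/json" "$TEAMCITY/buildTypes/id:$BT_ID/triggers/" | jq -r '.trigger[].id'); do
    # 3. Disable the trigger.
    curl -s -X PUT -H "Content-Type: text/plain" --data "true" "$TEAMCITY/buildTypes/id:$BT_ID/triggers/$TRIGGER_ID/disabled"
  done
done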
You can pause the build queue. See this video. This way you needn't touch the build configurations at all; you're just bringing TeamCity to a halt.
For agent-specific upgrades, it's best to disable only the agent you're working on. See here.
Neither of these is "by running a script" as you asked, but I take it you were only asking for a scripted solution to avoid a lot of GUI clicking.
Another solution might be to simply disable the agent, so no more builds will run.
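If you go that route, the agent can be toggled through the same REST API (agent id and credentials are placeholders; a sketch):
# Disable the agent with id 1 so it stops picking up new builds; PUT "true" later to re-enable it.
curl -u user:password -X PUT -H "Content-Type: text/plain" --data "false" \
  "http://dummyhost.com/app/rest/agents/id:1/enabled"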
Here is a bash script to bulk-pause all (not yet paused) build configurations matching a project name pattern via the TeamCity REST API:
TEAMCITY_HOST=http://teamcity.company.com
CREDS="-u domain\user:password"
curl $CREDS --request GET "$TEAMCITY_HOST/app/rest/buildTypes/" \
| sed -r "s/(<buildType)/\n\\1/g" | grep "Project Name Regex" \
| grep -v 'paused="true"' | grep -Po '(?<=buildType id=")[^"]*' \
| xargs -I {} curl -v $CREDS --request PUT "$TEAMCITY_HOST/app/rest/buildTypes/id:{}/paused" --header "Content-Type: text/plain" --data "true"