Github: Upload release assets with Bash

I would like to learn about release asset upload through Github API.
Apart from this
Github reference,
I haven't found any recent example.
I created the following Bash script:
#!/bin/bash
## Make a draft release json with a markdown body
release='"tag_name": "v1.0.0", "target_commitish": "master", "name": "myapp", '
body="This is an automatic release\\n====\\n\\nDetails follows"
body=\"$body\"
body='"body": '$body', '
release=$release$body
release=$release'"draft": true, "prerelease": false'
release='{'$release'}'
url="https://api.github.com/repos/$owner/$repo/releases"
succ=$(curl -H "Authorization: token $perstok" --data "$release" "$url")
## In case of success, we upload a file
upload=$(echo "$succ" | grep upload_url)
if [[ $? -eq 0 ]]; then
echo Release created.
else
echo Error creating release!
exit 1
fi
# $upload is like:
# "upload_url": "https://uploads.github.com/repos/:owner/:repo/releases/:ID/assets{?name,label}",
upload=$(echo $upload | cut -d "\"" -f4 | cut -d "{" -f1)
upload="$upload?name=$theAsset"
succ=$(curl -H "Authorization: token $perstok" \
-H "Content-Type: $(file -b --mime-type "$theAsset")" \
--data-binary @"$theAsset" "$upload")
download=$(echo "$succ" | egrep -o "browser_download_url.+?")
if [[ $? -eq 0 ]]; then
echo "$download" | cut -d: -f2,3 | cut -d\" -f2
else
echo Upload error!
fi
Of course the perstok, owner and repo variables hold the personal access token, the owner's name and the repository name, and theAsset is the filename of the asset to upload.
Is this the proper way to upload release assets?
Do I need to add an Accept header? I found some examples with
-H "Accept: application/vnd.github.manifold-preview"
but they seem outdated to me.
In case of Windows executables is there a specific media (mime) type?

Here is another example, which does not use an Accept header, from this gist:
# Construct url
GH_ASSET="https://uploads.github.com/repos/$owner/$repo/releases/$id/assets?name=$(basename $filename)"
curl "$GITHUB_OAUTH_BASIC" --data-binary #"$filename" -H "Authorization: token $github_api_token" -H "Content-Type: application/octet-stream" $GH_ASSET
with GITHUB_OAUTH_BASIC being
${GITHUB_OAUTH_TOKEN:?must be set to a github access token that can add assets to $repo} \
${GITHUB_OAUTH_BASIC:=$(printf %s:x-oauth-basic $GITHUB_OAUTH_TOKEN)}
A Content-Type: application/octet-stream should be universal enough to support any file, without worrying about its MIME.

I found an official answer:
during the preview period, you needed to provide a custom media type in the Accept header:
application/vnd.github.manifold-preview+json
Now that the preview period has ended, you no longer need to pass this custom media type.
Anyway, while not required, it is recommended to use the following Accept header:
application/vnd.github.v3+json
In this way a specific version of the API is requested instead of the current default, so an application will keep working in case of future breaking changes.
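For example, adding the recommended header to the release-creation call would look like this (a sketch reusing the $perstok, $owner, $repo and $release variables from the question's script):
succ=$(curl -H "Accept: application/vnd.github.v3+json" \
-H "Authorization: token $perstok" \
--data "$release" \
"https://api.github.com/repos/$owner/$repo/releases")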

Related

Bash script to loop through remote directory and pipe files 1 at a time to CURL

I am trying to transfer all files residing in a specified directory on Server1 to Server3 via a script running on Server2.
The transfer to Server3 has to happen through an API and thus must use the following CURL call:
curl -X POST https://content.dropboxapi.com/2/files/upload \
--header "Authorization: Bearer $token" \
--header "Dropbox-API-Arg: {\"path\": \"/xfer/$name\",\"mode\": \"add\",\"autorename\": true,\"mute\": false,\"strict_conflict\": false}" \
--header "Content-Type: application/octet-stream" \
--data-binary @$f
If it is just one file, I can do it successfully, but I'm trying to iterate through the directory on Server1 and send each file directly to the curl call. So far I've got:
files="( $(ssh me#server1 ls dir/*) )"
while read f
do
name=$(basename ${f})
curl -X POST https://content.dropboxapi.com/2/files/upload \
--header "Authorization: Bearer $token" \
--header "Dropbox-API-Arg: {\"path\": \"/xfer/$name\",\"mode\": \"add\",\"autorename\": true,\"mute\": false,\"strict_conflict\": false}" \
--header "Content-Type: application/octet-stream" \
--data-binary @$f
done <<< "$files"
The loop seems to be reading the "(" from the array of files into the first file name, which obviously causes a problem. I can't get beyond that to tell whether POSTing the current file in the loop via --data-binary will actually do what I think (or am hoping) it will.
Any ideas?
The error in the original message was enclosing the ssh command in "()". I am working on a similar issue. In the past I've used rsync, but I want a solution that doesn't require installing extra software. Here is an example that I'm working with to move files off of a Node.js dev server for backup, running in Bash on Debian:
files=$(ssh chris@estack ls ~/tmp/gateway)
#echo $files
for FILE in $files
do
if [[ "$FILE" = "node_modules" || "$FILE" = ".git" ]]
then
echo "skip $FILE";
continue
fi
echo Copy ~/tmp/gateway/$FILE
#scp -Cpr chris@estack:~/tmp/gateway/$FILE ~/tmp/tmp
done
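For the original Dropbox question, a minimal sketch of the corrected loop (assumptions: $token holds a valid Dropbox API token, and since the files live on Server1, each one is streamed through ssh into curl via --data-binary @-, which reads the request body from stdin):
files=$(ssh me@server1 'ls dir/')   # no "( ... )" wrapper around the command substitution
while IFS= read -r f; do
name=$(basename "$f")
ssh me@server1 "cat dir/$name" | curl -X POST https://content.dropboxapi.com/2/files/upload \
--header "Authorization: Bearer $token" \
--header "Dropbox-API-Arg: {\"path\": \"/xfer/$name\",\"mode\": \"add\",\"autorename\": true,\"mute\": false,\"strict_conflict\": false}" \
--header "Content-Type: application/octet-stream" \
--data-binary @-
done <<< "$files"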

Using wget in shell trouble with variable that has \

I'm trying to run a script for pulling finance history from yahoo. Boris's answer from this thread
wget can't download yahoo finance data any more
works for me ~2 out of 3 times, but fails if the crumb returned from the cookie has a "\" character in it.
Code that sometimes works looks like this
#!/usr/bin/sh
symbol=$1
today=$(date +%Y%m%d)
tomorrow=$(date --date='1 days' +%Y%m%d)
first_date=$(date -d "$2" '+%s')
last_date=$(date -d "$today" '+%s')
wget --no-check-certificate --save-cookies=cookie.txt https://finance.yahoo.com/quote/$symbol/?p=$symbol -O C:/trip/stocks/stocknamelist/crumb.store
crumb=$(grep 'root.*App' crumb.store | sed 's/,/\n/g' | grep CrumbStore | sed 's/"CrumbStore":{"crumb":"\(.*\)"}/\1/')
echo $crumb
fileloc=$"https://query1.finance.yahoo.com/v7/finance/download/$symbol?period1=$first_date&period2=$last_date&interval=1d&events=history&crumb=$crumb"
echo $fileloc
wget --no-check-certificate --load-cookies=cookie.txt $fileloc -O c:/trip/stocks/temphistory/hs$symbol.csv
rm cookie.txt crumb.store
But that doesn't seem to be processed by wget the way I intend either; it seems to be interpreted as described here:
https://askubuntu.com/questions/758080/getting-scheme-missing-error-with-wget
Any suggestions on how to pass the $crumb variable into wget so that wget doesn't error out if $crumb has a "\" character in it?
Edited to show the full script. To clarify, I've got Cygwin installed with the wget package. I call the script from the cmd prompt as follows (example where the script above is named "stocknamedownload.sh", the stock symbol I'm downloading is "A", and the start date is 19800101):
c:\trip\stocks\StockNameList>bash stocknamedownload.sh A 19800101
This script seems to work fine - unless the crumb returned contains a "\" character in it.
The following implementation appears to work 100% of the time -- I'm unable to reproduce the claimed sporadic failures:
#!/usr/bin/env bash
set -o pipefail
symbol=$1
today=$(date +%Y%m%d)
tomorrow=$(date --date='1 days' +%Y%m%d)
first_date=$(date -d "$2" '+%s')
last_date=$(date -d "$today" '+%s')
# store complete webpage text in a variable
page_text=$(curl --fail --cookie-jar cookies \
"https://finance.yahoo.com/quote/$symbol/?p=$symbol") || exit
# extract the JSON used by JavaScript in the page
app_json=$(grep -e 'root.App.main = ' <<<"$page_text" \
| sed -e 's#^root.App.main = ##' \
-e 's#[;]$##') || exit
# use jq to extract the crumb from that JSON
crumb=$(jq -r \
'.context.dispatcher.stores.CrumbStore.crumb' \
<<<"$app_json" | tr -d '\r') || exit
# Perform our actual download
fileloc="https://query1.finance.yahoo.com/v7/finance/download/$symbol?period1=$first_date&period2=$last_date&interval=1d&events=history&crumb=$crumb"
curl --fail --cookie cookies "$fileloc" >"hs$symbol.csv"
Note that the tr -d '\r' is only necessary when using a native-Windows jq mixed with an otherwise native-Cygwin set of tools.
You are adding quotes to the value of the variable instead of quoting the expansion. You are also trying to use tools that don't know what JSON is to process JSON; use jq.
wget --no-check-certificate \
--save-cookies=cookie.txt \
"https://finance.yahoo.com/quote/$symbol/?p=$symbol" \
-O C:/trip/stocks/stocknamelist/crumb.store
# Something like this; it's hard to reverse engineer the structure
# of crumb.store from your pipeline.
crumb=$(jq -r '.CrumbStore.crumb' crumb.store)
echo "$crumb"
fileloc="https://query1.finance.yahoo.com/v7/finance/download/$symbol?period1=$first_date&period2=$last_date&interval=1d&events=history&crumb=$crumb"
echo "$fileloc"
wget --no-check-certificate \
--load-cookies=cookie.txt "$fileloc" \
-O "c:/trip/stocks/temphistory/hs$symbol.csv"
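To illustrate the quoting point, a tiny sketch of the difference (note that $"..." is bash locale translation, not ordinary quoting):
var="a b*"
echo $var     # unquoted expansion: word splitting and globbing apply
echo "$var"   # quoted expansion: the value is passed through verbatim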

How to verify a curl request in bash script?

I have a curl request like this :
curl -s -u $user:$password -X GET -H "Content-Type: application/json" $url
Which returns a JSON response, so I parse it using jq to get some specific data, like this:
curl -s -u $user:$password -X GET -H "Content-Type: application/json" $url | jq '<expression>'
Now if the curl request fails, the parsing operation obviously throws an ugly error. I want to avoid this. How can I store the response first and then parse it only if the request was successful? I don't want to display the whole JSON response. Also, if I add -w "%{http_code}" to my request, it appends the status code to the JSON response, which messes up the parsing. How can I solve this? I basically want to first check whether the curl request succeeded, and only then get the JSON response and parse it. I also want the status code, so that I can display it if the request fails, but the status code currently gets mixed in with the JSON response.
You can combine the --write and --fail options:
# separating the (verbose) curl options into an array for readability
curl_args=(
--write "%{http_code}\n"
--fail
--silent
--user "$user:$password"
--request GET
--header "Content-Type: application/json"
)
if ! output=$(curl "${curl_args[@]}" "$url"); then
echo "Failure: code=$output"
else
# remove the "http_code" line from the end of the output, and parse it
sed '$d' <<<"$output" | jq '...'
fi
Also note: quote your variables!
I found glenn jackman's answer good, but a bit confusingly written, so I rewrote it, and altered it so I can use it as a safer alternative to curl | jq.
#!/bin/bash
# call this with normal curl arguments, especially url argument, e.g.
# safecurl.sh "http://example.com:8080/something/"
# separating the (verbose) curl options into an array for readability
curl_args=(
-H 'Accept:application/json'
-H 'Content-Type:application/json'
--write '\n%{http_code}\n'
--fail
--silent
)
echo "${curl_args[#]}"
# prepend some arguments, but pass on whatever arguments this script was called with
output=$(curl "${curl_args[#]}" "$#")
return_code=$?
if [ 0 -eq $return_code ]; then
# remove the "http_code" line from the end of the output, and parse it
echo "$output" | sed '$d' | jq .
else
# echo to stderr so further piping to jq will process empty output
>&2 echo "Failure: code=$output"
fi
Note: This code does not test for services that ignore the requested content type and respond with HTML. You'd need to test with something like grep -l '</html>' for that.
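For example, a possible invocation (a sketch: the script is assumed saved as safecurl.sh, and the URL is a placeholder; failure messages go to stderr, pretty-printed JSON to stdout):
./safecurl.sh -u "$user:$password" "https://api.example.com/v1/items"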

Bash script to backup data to google drive [duplicate]

This question already has answers here:
Upload and update files in google drive via cmd [closed]
I want to create a bash script that will log into my google drive account and back up my main folder. I've done a bit of research and I know we need some sort of OAuth to use the Google Drive API, but I'm still fairly new, so I thought I could get some solid guidance from the Stack community.
I guess you don't need to re-invent the wheel: use gdrive and you're good to go.
SETUP:
wget -O drive "https://drive.google.com/uc?id=0B3X9GlR6EmbnMHBMVWtKaEZXdDg"
mv drive /usr/sbin/drive
chmod 755 /usr/sbin/drive
Now, simply run drive to start the authentication process. You'll get a link like this to paste into your web browser:
Go to the following link in your browser:
https://accounts.google.com/o/oauth2/auth?client_id=123456789123-7n0vf5akeru7on6o2fjinrecpdoe99eg.apps.googleusercontent.com&redirect_uri=urn%3Aietf%3Awg%3Aoauth%3A2.0%3Aoob&response_type=code&scope=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fdrive&state=state
Authenticate and grant the application permission to access your Google Drive, and then you'll be given a verification code to copy and paste back into your shell:
Enter verification code: 4/9gKYAFAJ326XIP6JJHAEhs342t35LPiA5QGW0935GHWHy9
USAGE:
drive upload --file "/some/path/somefile.ext"
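For the backup use case in the question, a minimal sketch (assumptions: drive is authenticated as above, ~/main is the folder to back up, and the archive name is arbitrary):
tar czf "main-backup-$(date +%F).tar.gz" -C "$HOME" main
drive upload --file "main-backup-$(date +%F).tar.gz"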
SRC:
Backing up a Directory to Google Drive on CentOS 7
NOTE:
If you really want to build a bash script, take a look at this gist, which:
automatically gleans MIME type from file
uploads multiple files
removes directory prefix from filename
works with filenames with spaces
uses dotfile for configuration and token
configures interactively
uploads to target folder if last argument looks like a folder id
quieter output
uses longer command line flags for readability
throttle by adding curl_args="--limit-rate 500K" to $HOME/.gdrive.conf
#!/bin/bash
# based on https://gist.github.com/deanet/3427090
#
# useful $HOME/.gdrive.conf options:
# curl_args="--limit-rate 500K --progress-bar"
browser="Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/35.0.1916.153 Safari/537.36"
destination_folder_id=${@: -1}
if expr "$destination_folder_id" : '^[A-Za-z0-9]\{28\}$' > /dev/null
then
# all but last word
set -- "${#:0:$#}"
else
# upload to root
unset destination_folder_id
fi
if [ -e $HOME/.gdrive.conf ]
then
. $HOME/.gdrive.conf
fi
old_umask=`umask`
umask 0077
if [ -z "$username" ]
then
read -p "username: " username
unset token
echo "username=$username" >> $HOME/.gdrive.conf
fi
if [ -z "$account_type" ]
then
if expr "$username" : '^[^#]*$' > /dev/null || expr "$username" : '.*#gmail.com$' > /dev/null
then
account_type=GOOGLE
else
account_type=HOSTED
fi
fi
if [ -z "$password$token" ]
then
read -s -p "password: " password
unset token
echo
fi
if [ -z "$token" ]
then
token=`curl --silent --data-urlencode Email=$username --data-urlencode Passwd="$password" --data accountType=$account_type --data service=writely --data source=cURL "https://www.google.com/accounts/ClientLogin" | sed -ne s/Auth=//p`
sed -ie '/^token=/d' $HOME/.gdrive.conf
echo "token=$token" >> $HOME/.gdrive.conf
fi
umask $old_umask
for file in "$#"
do
slug=`basename "$file"`
mime_type=`file --brief --mime-type "$file"`
upload_link=`curl --silent --show-error --insecure --request POST --header "Content-Length: 0" --header "Authorization: GoogleLogin auth=${token}" --header "GData-Version: 3.0" --header "Content-Type: $mime_type" --header "Slug: $slug" "https://docs.google.com/feeds/upload/create-session/default/private/full${destination_folder_id+/folder:$destination_folder_id/contents}?convert=false" --dump-header - | sed -ne s/"Location: "//p`
echo "$file:"
curl --request POST --output /dev/null --data-binary "@$file" --header "Authorization: GoogleLogin auth=${token}" --header "GData-Version: 3.0" --header "Content-Type: $mime_type" --header "Slug: $slug" "$upload_link" $curl_args
done
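For example, a possible invocation (a sketch: the gist script is assumed saved as gdrive-upload.sh, and the trailing 28-character folder id is a placeholder):
./gdrive-upload.sh "file one.txt" file2.pdf 0B3X9GlR6EmbnMHBMVWtKaEZXdDg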

Triggering builds of dependent projects in Travis CI

We have our single page javascript app in one repository and our backend server in another. Is there any way for a passing build on the backend server to trigger a build of the single page app?
We don't want to combine them into a single repository, but we do want to make sure that changes to one don't break the other.
Yes, it is possible to trigger another Travis job after a first one succeeds. You can use the trigger-travis.sh script.
The script's documentation tells how to use it -- set an environment variable and add a few lines to your .travis.yml file.
Yes, it's possible, and it's also possible to wait for the triggered build's result.
I discovered trigger-travis.sh from the previous answer, but before that I had implemented my own solution (for full working source code, cf. the pending pull request PR196 and the live result).
References
Based on the Travis API v3 documentation:
trigger a build (triggering-builds)
get build information (resource/builds)
You will need a Travis token, and to set this token up as a secret environment variable in the Travis portal.
Following these docs, I was able to trigger a build and wait for it.
1) make .travis_hook_qa.sh
(extract) - to trigger a new build:
REQUEST_RESULT=$(curl -s -X POST \
-H "Content-Type: application/json" \
-H "Accept: application/json" \
-H "Travis-API-Version: 3" \
-H "Authorization: token ${QA_TOKEN}" \
-d "$body" \
https://api.travis-ci.org/repo/${QA_SLUG}/requests)
(it's the equivalent of trigger-travis.sh). You can customize the build definition via $body.
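For example, $body can be a minimal request object (branch and message here are placeholders; this shape follows the API v3 trigger documentation referenced above):
body='{"request": {"branch": "master", "message": "triggered by upstream build"}}'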
2) make .travis_wait_build.sh
(extract) - to wait for a just-created build, get the build info:
BUILD_INFO=$(curl -s -X GET \
-H "Content-Type: application/json" \
-H "Accept: application/json" \
-H "Travis-API-Version: 3" \
-H "Authorization: token ${QA_TOKEN}" \
https://api.travis-ci.org/repo/${QA_SLUG}/builds?include=build.state\&include=build.id\&include=build.started_at\&branch.name=master\&sort_by=started_atdesc\&limit=1 )
BUILD_STATE=$(echo "${BUILD_INFO}" | grep -Po '"state":.*?[^\\]",'|head -n1| awk -F "\"" '{print $4}')
BUILD_ID=$(echo "${BUILD_INFO}" | grep '"id": '|head -n1| awk -F'[ ,]' '{print $8}')
You will have to poll until your timeout expires or the expected final state is reached.
Reminder: possible travis build states are created|started (and then) passed|failed
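Putting it together, a minimal polling sketch built on the extract above (the 10-second sleep and 60-iteration cap are arbitrary assumptions):
for i in $(seq 1 60); do
BUILD_INFO=$(curl -s -X GET \
-H "Travis-API-Version: 3" \
-H "Authorization: token ${QA_TOKEN}" \
"https://api.travis-ci.org/repo/${QA_SLUG}/builds?limit=1")
BUILD_STATE=$(echo "${BUILD_INFO}" | grep -Po '"state":.*?[^\\]",' | head -n1 | awk -F "\"" '{print $4}')
case "$BUILD_STATE" in
passed) echo "build passed"; exit 0 ;;
failed|errored|canceled) echo "build failed: $BUILD_STATE"; exit 1 ;;
*) sleep 10 ;;  # created or started: keep waiting
esac
done
echo "timed out waiting for the build"; exit 1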
