Bash script to backup data to google drive [duplicate] - bash

I want to create a bash script that will log into my Google Drive account and back up my main folder. I've done a bit of research and I know we need some sort of OAuth for using the Google Drive API, but I'm still fairly new, so I thought I could get some solid guidance from the Stack Overflow community.

You don't need to reinvent the wheel here: use gdrive and you're good to go.
SETUP:
wget -O drive "https://drive.google.com/uc?id=0B3X9GlR6EmbnMHBMVWtKaEZXdDg"
mv drive /usr/sbin/drive
chmod 755 /usr/sbin/drive
Now, simply run drive to start the authentication process. You'll get a link like this to paste into your web browser:
Go to the following link in your browser:
https://accounts.google.com/o/oauth2/auth?client_id=123456789123-7n0vf5akeru7on6o2fjinrecpdoe99eg.apps.googleusercontent.com&redirect_uri=urn%3Aietf%3Awg%3Aoauth%3A2.0%3Aoob&response_type=code&scope=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fdrive&state=state
Authenticate and grant the application permission to access your Google Drive, and you'll be given a verification code to copy and paste back into your shell:
Enter verification code: 4/9gKYAFAJ326XIP6JJHAEhs342t35LPiA5QGW0935GHWHy9
USAGE:
drive upload --file "/some/path/somefile.ext"
SRC:
Backing up a Directory to Google Drive on CentOS 7
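To turn that single upload command into the folder backup the question asks for, one option is a tar-then-upload wrapper. A minimal sketch (the folder path and archive location are assumptions; adjust to your layout):

```shell
#!/bin/bash
# Sketch: archive a folder, then hand the archive to gdrive's `drive upload`.
backup_to_drive() {
  local src=$1
  local archive="${TMPDIR:-/tmp}/backup-$(date +%F).tar.gz"
  # Create the archive relative to the folder's parent directory
  tar -czf "$archive" -C "$(dirname "$src")" "$(basename "$src")" || return 1
  # Upload only if the drive binary from the setup above is installed
  if command -v drive >/dev/null 2>&1; then
    drive upload --file "$archive" >&2
  fi
  echo "$archive"
}
```

Run it from cron (e.g. `0 2 * * * /path/to/backup.sh`) for a nightly backup.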
NOTE:
If you really want to build a bash script, take a look at this gist, which:
automatically gleans MIME type from file
uploads multiple files
removes directory prefix from filename
works with filenames with spaces
uses dotfile for configuration and token
configures interactively
uploads to target folder if last argument looks like a folder id
quieter output
uses longer command line flags for readability
throttle by adding curl_args="--limit-rate 500K" to $HOME/.gdrive.conf
#!/bin/bash
# based on https://gist.github.com/deanet/3427090
#
# useful $HOME/.gdrive.conf options:
# curl_args="--limit-rate 500K --progress-bar"
browser="Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/35.0.1916.153 Safari/537.36"
destination_folder_id=${@: -1}
if expr "$destination_folder_id" : '^[A-Za-z0-9]\{28\}$' > /dev/null
then
# all but the last argument
set -- "${@:1:$#-1}"
else
# upload to root
unset destination_folder_id
fi
if [ -e $HOME/.gdrive.conf ]
then
. $HOME/.gdrive.conf
fi
old_umask=`umask`
umask 0077
if [ -z "$username" ]
then
read -p "username: " username
unset token
echo "username=$username" >> $HOME/.gdrive.conf
fi
if [ -z "$account_type" ]
then
if expr "$username" : '^[^@]*$' > /dev/null || expr "$username" : '.*@gmail.com$' > /dev/null
then
account_type=GOOGLE
else
account_type=HOSTED
fi
fi
if [ -z "$password$token" ]
then
read -s -p "password: " password
unset token
echo
fi
if [ -z "$token" ]
then
token=`curl --silent --data-urlencode Email=$username --data-urlencode Passwd="$password" --data accountType=$account_type --data service=writely --data source=cURL "https://www.google.com/accounts/ClientLogin" | sed -ne s/Auth=//p`
sed -ie '/^token=/d' $HOME/.gdrive.conf
echo "token=$token" >> $HOME/.gdrive.conf
fi
umask $old_umask
for file in "$@"
do
slug=`basename "$file"`
mime_type=`file --brief --mime-type "$file"`
upload_link=`curl --silent --show-error --insecure --request POST --header "Content-Length: 0" --header "Authorization: GoogleLogin auth=${token}" --header "GData-Version: 3.0" --header "Content-Type: $mime_type" --header "Slug: $slug" "https://docs.google.com/feeds/upload/create-session/default/private/full${destination_folder_id+/folder:$destination_folder_id/contents}?convert=false" --dump-header - | sed -ne s/"Location: "//p`
echo "$file:"
curl --request POST --output /dev/null --data-binary "@$file" --header "Authorization: GoogleLogin auth=${token}" --header "GData-Version: 3.0" --header "Content-Type: $mime_type" --header "Slug: $slug" "$upload_link" $curl_args
done

Related

Using Bash to make a POST request

I have 100 Jetpacks that I have to sign in to and configure. I am trying to do it in a bash script but I am having no luck. I can connect to the wifi no problem, but my POST requests are not achieving anything. Any advice? Here is a link to my GitHub; I have copies of what I captured in Burp Suite: https://github.com/Jdelgado89/post_Script
TYIA
#!/bin/bash
nmcli device wifi rescan
nmcli device wifi list
echo "What's they last four?"
read last4
echo "What's the Key?"
read key
nmcli device wifi connect Ellipsis\ Jetpack\ $last4 password $key
echo "{"Command":"SignIn","Password":"$key"}" > sign_on.json
echo "{"CurrentPassword":"$key","NewPassword":"G1l4River4dm1n","SecurityQuestion":"NameOfStreet","SecurityAnswer":"Allison"}" > change_admin.json
echo "{"SSID":"GRTI Jetpack","WiFiPassword":"G1l4River3r","WiFiMode":0,"WiFiAuthentication":6,"WiFiEncription":4,"WiFiChannel":0,"MaxConnectedDevice":8,"PrivacySeparator":false,"WMM":true,"Command":"SetWifiSetting"}" > wifi.json
cat sign_on.json
cat change_admin.json
cat wifi.json
sleep 5
curl -X POST -H "Cookie: jetpack=6af5e293139d989bdcfd66257b4f5327" -H "Content-Type: application/json" -d @sign_on.json http://192.168.1.1/cgi-bin/sign_in.cgi
sleep 5
curl -X POST -H "Cookie: jetpack=6af5e293139d989bdcfd66257b4f5327" -H "Content-Type: application/json" -d @change_admin.json http://192.168.1.1/cgi-bin/settings_admin_password.cgi
sleep 5
curl -X POST -H "Cookie: jetpack=6af5e293139d989bdcfd66257b4f5327" -H "Content-Type: application/json" -d @wifi.json http://192.168.1.1/cgi-bin/settings_admin_password.cgi
This is not correct:
echo "{"Command":"SignIn","Password":"$key"}" > sign_on.json
The double quotes are not being put literally into the file, they're just terminating the shell string beginning with the previous double quote. So this is writing
{Command:SignIn,Password:keyvalue}
into the file, with no double quotes. You need to escape the nested double quotes.
echo "{\"Command\":\"SignIn\",\"Password\":\"$key\"}" > sign_on.json
However, it would be best if you used the jq utility instead of formatting JSON by hand. See Create JSON file using jq.
jq -nc --arg key "$key" '{"Command":"SignIn","Password":$key}' >sign_on.json
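Following that approach, all three payload files from the script above can be built with jq so the quoting is handled automatically. A sketch (field names and values are copied verbatim from the question, including its `WiFiEncription` spelling; the key here is a placeholder standing in for the value read from the user):

```shell
# Sketch: generate the three JSON payloads with jq instead of echo.
key='example-key'  # placeholder for the value read with `read key`
jq -nc --arg p "$key" '{Command: "SignIn", Password: $p}' > sign_on.json
jq -nc --arg p "$key" '{CurrentPassword: $p, NewPassword: "G1l4River4dm1n",
  SecurityQuestion: "NameOfStreet", SecurityAnswer: "Allison"}' > change_admin.json
jq -nc '{SSID: "GRTI Jetpack", WiFiPassword: "G1l4River3r", WiFiMode: 0,
  WiFiAuthentication: 6, WiFiEncription: 4, WiFiChannel: 0,
  MaxConnectedDevice: 8, PrivacySeparator: false, WMM: true,
  Command: "SetWifiSetting"}' > wifi.json
```

Because `--arg` passes the password as data rather than interpolated text, special characters in it can't break the JSON.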

How to access GitHub API using personal access tokens in shell script?

How is it going?
So I used a bash script to create a remote repository, using a password to access an endpoint, like this:
NEWVAR="{\"name\":\"$githubrepo\",\"private\":\"true\"}"
curl -u $USERNAME https://api.github.com/user/repos -d "$NEWVAR"
However, GitHub is no longer going to allow developers to access endpoints using passwords. So my question is: how do I create a remote repository using a personal access token?
Use --header to transmit the authorization token:
#!/usr/bin/env sh
github_user='The GitHub user name'
github_repo='The repository name'
github_oauth_token='The GitHub API auth token'
# Create the JSON data payload arguments needed to create
# a GitHub repository.
json_data="$(
jq \
--null-input \
--compact-output \
--arg name "$github_repo" \
'{$name, "private":true}'
)"
if json_reply="$(
curl \
--fail \
--request POST \
--header 'Accept: application/vnd.github.v3+json' \
--header "Authorization: token $github_oauth_token" \
--header 'Content-Type: application/json' \
--data "$json_data" \
'https://api.github.com/user/repos'
)"; then
# Save the JSON answer of the repository creation
printf '%s' "$json_reply" >"$github_repo.json"
printf 'Successfully created the repository: %s\n' "$github_repo"
else
printf 'Could not create the repository: %s\n' "$github_repo" >&2
printf 'The GitHub API replied with this JSON:\n%s\n' "$json_reply" >&2
fi
See my answer here for a featured implementation example:
https://stackoverflow.com/a/57634322/7939871

Bash script to loop through remote directory and pipe files 1 at a time to CURL

I am trying to transfer all files residing in a specified directory on Server1 to Server3 via a script running on Server2.
The transfer to Server3 has to happen through an API and thus must use the following CURL call:
curl -X POST https://content.dropboxapi.com/2/files/upload \
--header "Authorization: Bearer $token" \
--header "Dropbox-API-Arg: {\"path\": \"/xfer/$name\",\"mode\": \"add\",\"autorename\": true,\"mute\": false,\"strict_conflict\": false}" \
--header "Content-Type: application/octet-stream" \
--data-binary @$f
If it is just 1 file, I can do it successfully, but i'm trying to iterate through the directory on Server1 and send the file directly to the CURL call. So far I've got:
files="( $(ssh me@server1 ls dir/*) )"
while read f
do
name=$(basename ${f})
curl -X POST https://content.dropboxapi.com/2/files/upload \
--header "Authorization: Bearer $token" \
--header "Dropbox-API-Arg: {\"path\": \"/xfer/$name\",\"mode\": \"add\",\"autorename\": true,\"mute\": false,\"strict_conflict\": false}" \
--header "Content-Type: application/octet-stream" \
--data-binary @$f
done <<< "$files"
The loop seems to be reading the "(" from the array of files as the first file name, which obviously causes a problem. I can't get beyond that to tell whether POSTing the current file in the loop via --data-binary will actually do what I think (or am hoping) it will.
Any ideas?
The error in the original message was enclosing the ssh command in "()". I am working on a similar issue. In the past I've used rsync, but I want a solution that doesn't require installing extra software. Here is an example I'm working with to move files off a Node.js dev server to backup, running in Bash on Debian:
files=$(ssh chris@estack ls ~/tmp/gateway)
#echo $files
for FILE in $files
do
if [[ "$FILE" = "node_modules" || "$FILE" = ".git" ]]
then
echo "skip $FILE";
continue
fi
echo Copy ~/tmp/gateway/$FILE
#scp -Cpr chris@estack:~/tmp/gateway/$FILE ~/tmp/tmp
done
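Building on that, the whole staging step can be avoided by streaming each file straight from Server1 into curl over ssh. A sketch, untested against the real API; the host alias, directory, and $token are assumptions from the question:

```shell
# Sketch: pipe each remote file over ssh directly into the Dropbox upload
# endpoint, so nothing is stored on Server2. `--data-binary @-` reads the
# request body from stdin.
upload_dir_to_dropbox() {
  local host=$1 dir=$2
  ssh "$host" "ls -1 $dir" | while IFS= read -r f; do
    local name
    name=$(basename "$f")
    ssh "$host" "cat $dir/$f" | curl -X POST https://content.dropboxapi.com/2/files/upload \
      --header "Authorization: Bearer $token" \
      --header "Dropbox-API-Arg: {\"path\": \"/xfer/$name\", \"mode\": \"add\", \"autorename\": true}" \
      --header "Content-Type: application/octet-stream" \
      --data-binary @-
  done
}
# Hypothetical usage:
# upload_dir_to_dropbox me@server1 dir
```

Note this runs one ssh connection per file; for many files, a ControlMaster connection or a single tar stream would be cheaper.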

curl 400 bad request (in bash script)

I am trying to execute the following script in bash:
#!/bin/bash
source chaves.sh
HEAD='"X-Cachet-Token:'$CACHET_KEY'"'
SEARCH="'{"'"status"'":1,"'"id"'":"'"7"'","'"enabled"'":true}'"
echo $SEARCH
if curl -s --head --request GET http://google.com.br | grep "200 OK" > /dev/null; then
echo 'rodou'
curl -X PUT -H '"Content-Type:application/json;"' -H '"'X-Cachet-Token:$CACHET_KEY'"' -d $SEARCH $CACHET_URL/7
else
echo 'não deu'
curl -X PUT -H '"Content-Type: application/json;"' -H $x -d '{"status":1,"id":"7","enabled":true}' $CACHET_URL/7
fi
But I keep receiving a 400 Bad Request from the server.
When I try to run the same line (echoed by the script, then Ctrl+C and Ctrl+V) directly in the terminal, the command runs without problems.
The source file has the path definitions and a token variable I need to use, and as far as I have tested it is being read OK.
edit 1 - hidding some sensitive content
edit 2 - posting the exit line (grabed trought Ctrl+c, Ctrl+v)
The command I need to send to the server is:
curl -X PUT -H "Content-Type:application/json;" -H
"X-Cachet-Token:4A7ixgkU4hcCWFReQ15G" -d
'{"status":1,"id":"7","enabled":true}'
http://XXX.XXX.XXX.XXX/api/v1/components/7
The output I grabbed through the echo command is exactly the command I want, but it doesn't work inside the script, only outside.
I'm a bit new to curl; any help is appreciated.
Sorry for the bad English, and thanks in advance.
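For reference, the usual culprit in cases like this is the extra layer of literal quotes: forms like `'"Content-Type:application/json;"'` embed literal `"` characters in the header values, which the server then rejects. A minimal sketch of the same PUT with a single quoting layer ($CACHET_KEY and $CACHET_URL come from chaves.sh as in the question):

```shell
# Sketch: one quoting layer only. The shell strips the outer quotes;
# curl receives clean header values with no stray " characters.
cachet_put() {
  curl -X PUT \
    -H "Content-Type: application/json" \
    -H "X-Cachet-Token: $CACHET_KEY" \
    -d '{"status":1,"id":"7","enabled":true}' \
    "$CACHET_URL/7"
}
```

Echoing a command for copy-paste is misleading here, because echo does not show which quotes the shell already consumed.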

Github: Upload release assets with Bash

I would like to learn about release asset upload through Github API.
Apart from this
Github reference,
I haven't found any recent example.
I created the following Bash script:
#!/bin/bash
## Make a draft release json with a markdown body
release='"tag_name": "v1.0.0", "target_commitish": "master", "name": "myapp", '
body="This is an automatic release\\n====\\n\\nDetails follows"
body=\"$body\"
body='"body": '$body', '
release=$release$body
release=$release'"draft": true, "prerelease": false'
release='{'$release'}'
url="https://api.github.com/repos/$owner/$repo/releases"
succ=$(curl -H "Authorization: token $perstok" --data "$release" $url)
## In case of success, we upload a file
upload=$(echo $succ | grep upload_url)
if [[ $? -eq 0 ]]; then
echo Release created.
else
echo Error creating release!
return
fi
# $upload is like:
# "upload_url": "https://uploads.github.com/repos/:owner/:repo/releases/:ID/assets{?name,label}",
upload=$(echo $upload | cut -d "\"" -f4 | cut -d "{" -f1)
upload="$upload?name=$theAsset"
succ=$(curl -H "Authorization: token $perstok" \
-H "Content-Type: $(file -b --mime-type $theAsset)" \
--data-binary @$theAsset $upload)
download=$(echo $succ | egrep -o "browser_download_url.+?")
if [[ $? -eq 0 ]]; then
echo $download | cut -d: -f2,3 | cut -d\" -f2
else
echo Upload error!
fi
Of course the perstok, owner, and repo variables hold the personal access token, the owner's name, and the repo name, and theAsset is the filename of the asset to upload.
Is this the proper way to upload release assets?
Do I need to add an Accept header? I found some examples with
-H "Accept: application/vnd.github.manifold-preview"
but they seem outdated to me.
In case of Windows executables is there a specific media (mime) type?
You have another example which does not use Accept header in this gist:
# Construct url
GH_ASSET="https://uploads.github.com/repos/$owner/$repo/releases/$id/assets?name=$(basename $filename)"
curl "$GITHUB_OAUTH_BASIC" --data-binary @"$filename" -H "Authorization: token $github_api_token" -H "Content-Type: application/octet-stream" $GH_ASSET
with GITHUB_OAUTH_BASIC being
${GITHUB_OAUTH_TOKEN:?must be set to a github access token that can add assets to $repo} \
${GITHUB_OAUTH_BASIC:=$(printf %s:x-oauth-basic $GITHUB_OAUTH_TOKEN)}
A Content-Type: application/octet-stream should be universal enough to support any file, without worrying about its MIME type.
I found an official answer:
during the preview period, you needed to provide a custom media type in the Accept header:
application/vnd.github.manifold-preview+json
Now that the preview period has ended, you no longer need to pass this custom media type.
Anyway, while not required, it is recommended to use the following Accept header:
application/vnd.github.v3+json
In this way a specific version of the API is requested, instead of the current one, and an application will keep working in case of future breaking changes.
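Concretely, pinning the version looks like this, a sketch reusing the question's variables ($perstok, $owner, $repo, $release):

```shell
# Sketch: the release-creation call with the recommended Accept header
# pinning v3 of the GitHub REST API.
create_release() {
  curl \
    -H "Accept: application/vnd.github.v3+json" \
    -H "Authorization: token $perstok" \
    --data "$release" \
    "https://api.github.com/repos/$owner/$repo/releases"
}
```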
