Cannot download a file on OneDrive programmatically from Japan? - bash

I made a script that downloads several files located in my professional OneDrive. This script works perfectly from a French computer and a US computer, but it fails from a Japanese computer.
To help you understand the problem, I will detail the program:
1- I set up the token flow (inspired by Jay Lee's detailed answer) and store the token in the access_token variable.
2- To download the file, in my case I cannot use
curl -w %{time_total} https://graph.microsoft.com/v1.0/me/drive/items/01M...WU/content -H "Authorization: Bearer $access_token"
Thus, this is how I proceed:
#I get the item properties
itemProperties=$(curl ${ODf1Mb} -H "Authorization: Bearer $access_token")
#From these properties I select the downloadUrl that lets me download the file
downloadUrl=$(echo -e "$itemProperties" | grep "#microsoft.graph.downloadUrl" | awk -F'[",]' '{ print $9 }')
#Finally I request this URL, storing the download time in a variable (this is the whole point of the exercise)
dload=$(curl -w %{time_total} ${downloadUrl} -H "Authorization: Bearer $access_token")
As I said at the beginning, it works on the French and US computers but not on the Japanese machine. I do get the itemProperties and the downloadUrl, but when I call the downloadUrl with curl it seems that it cannot reach the server, because I get this:
As we can see, we do not even get the total size to be downloaded. For comparison, this is the result on a French machine:
I know there is a warning relating to command substitution, but I haven't tried to fix it yet because the script still does its job.
Note -> the downloadUrl has this format:
https://lpl-my.sharepoint.com/personal/{user}_{company infra domain}_com/_layouts/15/download.aspx?
I just cannot figure out what the problem is. I can access https://lpl-my.sharepoint.com through the browser, so I don't think the server IP is banned.

Check your ping / traceroute to see if lpl-my.sharepoint.com resolves to the same network location.
Also, I have seen other folks run curl with -v to get verbose traces and see what the difference is.
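For example, a quick way to compare the two machines (a sketch using the hostname and variables from the question) would be:
# Compare DNS resolution and routing from each machine
ping -c 4 lpl-my.sharepoint.com
traceroute lpl-my.sharepoint.com
# Verbose trace of the failing download; $downloadUrl and $access_token come from the script above
curl -v -o /dev/null "$downloadUrl" -H "Authorization: Bearer $access_token"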

Related

How to download a big file from google drive via curl in Bash?

I want to make a very simple bash script for downloading files from Google Drive via the Drive API. In this case there is a big file on Google Drive, and I used the OAuth 2.0 Playground with my Google Drive account: in the Select the Scope box I chose Drive API v3 and https://www.googleapis.com/auth/drive.readonly to make a token and link.
After clicking Authorize APIs and then Exchange authorization code for tokens, I copied the Access token and used it like below.
#! /bin/bash
read -p 'Enter your id : ' id
read -p 'Enter your new token : ' token
read -p 'Enter your file name : ' file
curl -H "Authorization: Bearer $token" "https://www.googleapis.com/drive/v3/files/$id?alt=media" -o "$file"
but it won't work. Any idea?
For example, the size of my file is 12 GB; when I run the code I get this as output and after a second it goes back to the prompt. I checked it on two computers with two different IP addresses. (I also added alt=media to the URL.)
-bash-3.2# bash mycode.sh
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 166 100 166 0 0 80 0 0:00:02 0:00:02 --:--:-- 80
-bash-3.2#
The content of the file that it created is like this:
{
  "error": {
    "errors": [
      {
        "domain": "global",
        "reason": "downloadQuotaExceeded",
        "message": "The download quota for this file has been exceeded."
      }
    ],
    "code": 403,
    "message": "The download quota for this file has been exceeded."
  }
}
You want to download a file from Google Drive using the curl command with the access token.
If my understanding is correct, how about this modification?
Modified curl command:
Please add the query parameter of alt=media.
curl -H "Authorization: Bearer $token" "https://www.googleapis.com/drive/v3/files/$id?alt=media" -o "$file"
Note:
This modified curl command supposes that your access token can be used for downloading the file.
With this modification, files other than Google Docs can be downloaded. If you want to download Google Docs files, please use the Files: export method of the Drive API. Ref
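For instance, an export request for a Google Docs file might look roughly like this (the file ID, target MIME type, and output name below are placeholders):
# Hypothetical example: export a Google Docs file as PDF via the Files: export endpoint
curl -H "Authorization: Bearer $token" \
  "https://www.googleapis.com/drive/v3/files/$id/export?mimeType=application/pdf" \
  -o "exported.pdf"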
Reference:
Download files
If I misunderstood your question and this was not the direction you want, I apologize.
UPDATE AS OF MARCH 2021
Simply follow this guide here. It worked for me.
In summary:
For small files to download run
wget --no-check-certificate 'https://docs.google.com/uc?export=download&id=FILEID' -O FILENAME
If instead you are trying to download quite a large file, you should try running
wget --load-cookies /tmp/cookies.txt "https://docs.google.com/uc?export=download&confirm=$(wget --quiet --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate 'https://docs.google.com/uc?export=download&id=FILEID' -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1\n/p')&id=FILEID" -O FILENAME && rm -rf /tmp/cookies.txt
Simply substitute FILEID and FILENAME with your custom values.
FILEID can be found in your file share link (after the /d/, as illustrated in the article mentioned above).
FILENAME is simply the name you want to save the download as. Remember to include the right extension. For example, FILENAME = my_file.pdf if the file is a PDF.
This is a known bug
It has been reported in this Issue Tracker post. This happens because, as you can read in the documentation:
(about download url)
Short lived download URL for the file. This field is only populated
for files with content stored in Google Drive; it is not populated for
Google Docs or shortcut files.
So you should use another field.
You can follow the report by clicking on the star next to the issue
number to give more priority to the bug and to receive updates.
As you can read in the comments of the report, the current workaround is:
Use webContentLink instead
or
Change www.googleapis.com to content.googleapis.com
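If the idea is to apply the second workaround to the download command from the question, it would look roughly like this (same placeholder id, token, and file name as above):
# Same request as before, but sent to content.googleapis.com as a workaround
curl -H "Authorization: Bearer $token" "https://content.googleapis.com/drive/v3/files/$id?alt=media" -o "$file"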

Bash Script with a loop of 2 variables

I'm having a hard time working this one out. I need to feed the GitLab API with issues that are created based on a file that I have. Normally the content of the file is the following:
Microsoft xxx xxxxx - Remote Code Execution xxxxxx- April 2018 xxxxx Updates
Red Hat Enterprise xxxx - java-1.8.0-xxxxx Multiple xxxxxxx- RHSA-xxxxxx
So far so good; I already deal with this in the following way:
while read in; do
curl --request POST --header "PRIVATE-TOKEN: xxxxxxxxxxxxx" https://gitlab.com/api/v3/projects/xxxxxx/issues?title="$in";
done < ~/input_file
The problem now is that I need to add a second variable, because I need to introduce a description on each issue, and my input file now changes to the following:
Microsoft Malware Protection - Remote Code Execution Vulnerability - April 2018 Security Updates 40697
Red Hat Enterprise Linux 6 - java-1.8.0-openjdk Multiple Vulnerabilities - RHSA-2018:1188 40861
I would like to construct something like this:
while read in; do
curl --request POST --header "PRIVATE-TOKEN: xxxxxxxxxxx" "https://gitlab.com/api/v4/projects/xxxxx/issues?title=$in&description=https://myspecialink.com/portal/notifications/show/$id";
done < ~/input_file
for example:
$in: must be everything except the bold number highlighted above.
$id: must be only the number in bold above.
Can someone point me to the best way of achieving this?
Use bash parameter expansion operators to split the input.
while read in; do
id=${in##* } # Remove everything up to last space
in=${in% *} # Remove everything from last space
curl --request POST --header "PRIVATE-TOKEN: xxxxxxxxxxx" "https://gitlab.com/api/v4/projects/xxxxx/issues?title=$in&description=https://myspecialink.com/portal/notifications/show/$id";
done < ~/input_file
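To see how the two expansions split one of the sample lines from the question:
line='Red Hat Enterprise Linux 6 - java-1.8.0-openjdk Multiple Vulnerabilities - RHSA-2018:1188 40861'
echo "${line##* }"   # 40861 (the trailing id)
echo "${line% *}"    # everything before the last space, i.e. the title
Note that the title contains spaces, so you may also want to URL-encode it (for instance with curl's --data-urlencode) before putting it in the query string.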

upload zip file to google drive using curl

I am trying to upload a zip file to Google drive account using curl.
The file is uploaded successfully, but the filename is not getting updated; it gets uploaded with the default filename, i.e. "Untitled".
I am using the below command.
curl -k -H "Authorization: Bearer `cat /tmp/token.txt`" -F "metadata={name : 'backup.zip'}" --data-binary "@backup.zip" https://www.googleapis.com/upload/drive/v2/files?uploadType=multipart
You can use Drive API v3 to upload the zip file. The modified curl code is as follows.
curl -X POST -L \
-H "Authorization: Bearer `cat /tmp/token.txt`" \
-F "metadata={name : 'backup.zip'};type=application/json;charset=UTF-8" \
-F "file=#backup.zip;type=application/zip" \
"https://www.googleapis.com/upload/drive/v3/files?uploadType=multipart"
In order to use this, please include https://www.googleapis.com/auth/drive in the scope.
The answer above works fine and was the command I used to upload my file to Google Drive using Curl. However, I didn't understand what the scope was or all of the initial setup required to make the command work. Hence, for documentation purposes, I'll give a second answer.
Valid as of the time of writing...
Visit the Credentials page and create a new credential (this assumes you have already created a project). I created credentials for TVs and Limited Input devices, so the workflow was similar to:
Create credentials > OAuth client ID > Application Type > TVs and Limited Input devices > Named the client > Clicked Create.
After doing this, I was able to copy the Client ID and Client Secret when viewing the newly created credential.
NB: Only the variables with double asterisk from the Curl commands should be replaced.
Next step was to run the Curl command:
curl -d "client_id=**client_id**&scope=**scope**" https://oauth2.googleapis.com/device/code
Scope in this situation can be thought of as the kind of access you intend to have with the credential that holds the inputted client_id. More about scope in the docs. For the use case in focus, which is to upload files, the scope chosen was https://www.googleapis.com/auth/drive.file.
On running the curl command above, you'll get a response similar to:
{ "device_code": "XXXXXXXXXXXXX", "user_code": "ABCD-EFGH",
"expires_in": 1800, "interval": 5, "verification_url":
"https://www.google.com/device" }
The next step is to visit the verification_url from the response in your browser, provide the user_code, and accept the permission requests. You will be presented with a code once all prompts have been followed; this code wasn't required for the remaining steps (but there may be reasons to use it in other use cases).
The next step is to run the Curl command:
curl -d client_id=**client_id** -d client_secret=**client_secret** -d device_code=**device_code** -d grant_type=urn%3Aietf%3Aparams%3Aoauth%3Agrant-type%3Adevice_code https://accounts.google.com/o/oauth2/token
You will get a response similar to:
{ "access_token": "XXXXXXXXX", "expires_in": 3599,
"refresh_token": "XXXXXXXXX", "scope":
"https://www.googleapis.com/auth/drive.file", "token_type": "Bearer"
}
Now you can use the access token and follow the accepted answer with a Curl command similar to:
curl -X POST -L \
-H "Authorization: Bearer **access_token**" \
-F "metadata={name : 'backup.zip'};type=application/json;charset=UTF-8" \
-F "file=#backup.zip;type=application/zip" \
"https://www.googleapis.com/upload/drive/v3/files?uploadType=multipart"

cURL call works with number but not with variable containing number

I've run into a strange issue. I'm trying to script my router to collect usage stats and other data. I'm making one cURL call to the auth URL to get a valid session ID, then another call using that session ID to the page I need.
Here is my script:
SESSION_ID=$(curl --silent -D - -X POST http://10.0.0.1/login.cgi -d'admin_username=admin&admin_password=admin' | grep 'SESSION' | sed 's/Set-Cookie: SESSION=//' | sed 's/; path=\///')
echo $SESSION_ID # 1234567890
curl -v -H "Cookie: SESSION=$SESSION_ID" http://10.0.0.1/modemstatus_dslstatus.html
If I manually take the SESSION_ID value and insert it in place of "$SESSION_ID", everything is dandy. cURL shows the headers (via -v) and they are correct. Running the command while manually inserting the session ID produces identical headers.
I'm sure it's something small. Please teach me something :)
Check for carriage returns (\r) in your variables; they won't show up with a simple echo in some cases.
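For example, one way to spot and strip them (a sketch based on the script in the question) is:
# Make hidden characters such as \r visible (they show up as ^M)
echo "$SESSION_ID" | cat -v
# Strip any carriage returns before using the variable
SESSION_ID=${SESSION_ID//$'\r'/}
curl -v -H "Cookie: SESSION=$SESSION_ID" http://10.0.0.1/modemstatus_dslstatus.html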

Using CURL to download file and view headers and status code

I'm writing a Bash script to download image files from Snapito's web page snapshot API. The API can return a variety of responses indicated by different HTTP response codes and/or some custom headers. My script is intended to be run as an automated Cron job that pulls URLs from a MySQL database and saves the screenshots to local disk.
I am using curl. I'd like to do these 3 things using a single CURL command:
Extract the HTTP response code
Extract the headers
Save the file locally (if the request was successful)
I could do this using multiple curl requests, but I want to minimize the number of times I hit Snapito's servers. Any curl experts out there?
Or if someone has a Bash script that can respond to the full documented set of Snapito API responses, that'd be awesome. Here's their API documentation.
Thanks!
Use the dump headers option:
curl -D /tmp/headers.txt http://server.com
Use curl -i (include HTTP header) - which will yield the headers, followed by a blank line, followed by the content.
You can then split out the headers / content (or use -D to save directly to file, as suggested above).
There are three options -i, -I, and -D
> curl --help | egrep '^ +\-[iID]'
-D, --dump-header FILE Write the headers to FILE
-I, --head Show document info only
-i, --include Include protocol headers in the output (H/F)
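Putting these together, all three goals can be met with a single request; here is a sketch (the URL and output file names are placeholders, not Snapito's actual endpoint):
# One request: save the body, dump the headers, and capture the HTTP status code
status=$(curl -s -o snapshot.png -D headers.txt -w '%{http_code}' "http://server.com/snapshot")
if [ "$status" -eq 200 ]; then
    echo "Saved snapshot.png; headers are in headers.txt"
else
    echo "Request failed with HTTP $status" >&2
fi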
