Download files from a shared folder of Google Drive using rclone - ubuntu-20.04

I'm wondering if rclone is able to download files from a shared folder of Google Drive. If yes, what is the command to do it?
rclone sync cloud_name:(what is the shared folder name?)file_name destination_path

You need to run rclone config to create a remote: for the Google 'Shared Drive'. See https://rclone.org/drive/
During configuration, answer yes at the prompt "Configure this as a Shared Drive (Team Drive)?"
Then the sync would be:
rclone sync SharedDriveName:"Directory/Directory" YourOtherRemote:"Directory/Directory"
Useful flags are:
-P or --progress (see progress during transfer)
-vv (see detailed logs)
--create-empty-src-dirs (to recreate empty directories)
-u or --update (Skip files that are newer on the destination)
--drive-server-side-across-configs (if you want to sync native Google docs)
--dry-run (do a trial run first; see the full example below)
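Putting it together, a hedged example (the remote name SharedDriveName, the Drive path, and the local path are placeholders, not from the question):
rclone sync --dry-run -P -vv --create-empty-src-dirs SharedDriveName:"Projects/Reports" /home/user/Reports
Once the dry-run output looks right, drop --dry-run to actually transfer the files.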

Related

Issues with copying a file path with documents to a local Repository Raspberry Pi

I am trying to create a USB stick to be a local Git repository, to have a backup of code and automate version control. I have found many articles on how to do this, but I am having an issue copying my documents from my existing folder to a repo on my local device. Once I have the files in my local device repo, I will clone it to another repo on the flash drive. I can figure out Git for the most part; there is lots of documentation. However, I am having an issue copying from my local file system to another local file system repo. Here is what I am doing:
cd /home/pi/Desktop/Aaron Maker Project/AM_git_controller/
BASH: cd: too many arguments
also when trying to copy the files over:
cp -r home/pi/Desktop/Aaron Maker Project/Code_pack1.1/ ~home/pi/Desktop/Aaron Maker Project/AM_git_controller/
cp: target '~home/pi/Desktop/Aaron Maker Project/AM_git_controller/' is not a directory
I am copy-pasting the file paths from the file manager, so it's not an issue with how I am typing the file path.
I realize this is a very obscure question; this is my first post, so anything helps!
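A likely fix, assuming the problem is the unquoted spaces in "Aaron Maker Project" (which split the path into several arguments) and the stray ~ in front of home: quote the full, absolute paths, for example:
cd "/home/pi/Desktop/Aaron Maker Project/AM_git_controller/"
cp -r "/home/pi/Desktop/Aaron Maker Project/Code_pack1.1/" "/home/pi/Desktop/Aaron Maker Project/AM_git_controller/"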

How to download files from Google Drive via the command line

I need to download files or folders from my Google Drive via the command line.
I'm thinking of a script or a batch file, on the Windows platform.
I have seen that I could use the gdrive app, but I'm having some trouble with the syntax.
I tried:
gdrive-windows-x64.exe download -r --path "G:\My Drive\myfolder"
but it gives me an "invalid arguments" error.
I'm also interested in a way to zip the contents of a folder on my Google Drive, again via the command line.
Can someone help me?
Thanks a lot,
marco
drive is an option.
Create a new folder and initialize it (drive init);
Create sub folders mirroring the remote folder structure;
cd into the sub folder and run $ drive pull
See the drive pull documentation for more details.
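A minimal sketch of that workflow, assuming the odeke-em/drive client (the folder names are placeholders):
mkdir ~/gdrive && cd ~/gdrive
drive init
mkdir -p myfolder && cd myfolder
drive pull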
Another option is skicka, a command-line utility for working with Google Drive, available on GitHub:
https://github.com/google/skicka
Example:
skicka download /folder1 ~/folder2
After this, the contents of your local ~/folder2 directory will match the contents of the Drive folder /folder1.
In general, for downloading to a local directory, the Drive path comes first and the local path second:
skicka download /remote/path ~/local/path
gdrive is another option, written in Go. It does require connecting a Google account. This command downloads a Google Drive directory:
gdrive download --recursive DRIVEID
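If you don't know the ID, a hedged hint (assuming the prasmussen/gdrive client; the output columns may vary by version): gdrive list prints your files and folders with their IDs, and the value in the Id column is what you pass to gdrive download.
gdrive list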

How to automate rsync in OS X with remote server

My remote server is a Samba server, which is accessible by both Mac and Windows machines. I created a common folder on the Samba server, and I want my local folder to be in sync with that common folder, because when the internet connection is lost I am unable to access the files copied to it.
My goals are:
When I remove files from the local folder, they should get removed from the common folder on the Samba server.
Similarly, when I modify or delete files in the common folder, the changes should be reflected in the local folder on my machine.
I tried rsync:
rsync --progress -avzC --stats --force Source root@remoteserver:/path
But how do I automate syncing from both sides?
Note: I could rely on Dropbox, Boxsync, or some other cloud sharing app for this, but I want to implement my own functionality; I don't want to rely on a third-party API.
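One hedged sketch of automating this, assuming the Samba share is mounted locally (the mount point /Volumes/common, the local folder name, and the 15-minute interval are placeholders; note that rsync alone cannot safely propagate deletions in both directions, so --delete is used in one direction only):
# crontab entry: push local changes, then pull anything newer from the share
*/15 * * * * rsync -az --delete "$HOME/localfolder/" /Volumes/common/ && rsync -azu /Volumes/common/ "$HOME/localfolder/"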

Downloading folders from aws s3, cp or sync?

If I want to download all the contents of a directory on S3 to my local PC, which command should I use, cp or sync?
Any help would be highly appreciated.
For example,
if I want to download all the contents of "this folder" to my desktop, would it look like this?
aws s3 sync s3://"myBucket"/"this folder" C:\\Users\Desktop
Using aws s3 cp from the AWS Command-Line Interface (CLI) will require the --recursive parameter to copy multiple files.
aws s3 cp s3://myBucket/dir localdir --recursive
The aws s3 sync command will, by default, copy a whole directory. It will only copy new/modified files.
aws s3 sync s3://mybucket/dir localdir
Just experiment to get the result you want.
Documentation: see the AWS CLI reference for the cp and sync commands.
I just used version 2 of the AWS CLI. For the s3 commands there is also a --dryrun option now, to show you what would happen:
aws s3 cp --dryrun s3://bucket/filename /path/to/dest/folder --recursive
In case you need to use another profile, especially for cross-account access, you need to add the profile to the config file:
[profile profileName]
region = us-east-1
role_arn = arn:aws:iam::XXX:role/XXXX
source_profile = default
and then, if you are accessing only a single file:
aws s3 cp s3://crossAccountBucket/dir localdir --profile profileName
In case you want to download a single file, you can try the following command:
aws s3 cp s3://bucket/filename /path/to/dest/folder
You have many options to do that, but the best one is using the AWS CLI.
Here's a walk-through:
Download and install the AWS CLI on your machine:
Install the AWS CLI using the MSI Installer (Windows).
Install the AWS CLI using the Bundled Installer for Linux, OS X, or Unix.
Configure AWS CLI:
Make sure you input valid access and secret keys, which you received when you created the account.
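The configuration step looks like this (the key values shown are placeholders):
aws configure
AWS Access Key ID [None]: AKIAEXAMPLEKEYID
AWS Secret Access Key [None]: exampleSecretAccessKey
Default region name [None]: us-east-1
Default output format [None]: json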
Sync the S3 bucket using:
aws s3 sync s3://yourbucket/yourfolder /local/path
In the above command, replace the following fields:
yourbucket/yourfolder >> your S3 bucket and the folder that you want to download.
/local/path >> path in your local system where you want to download all the files.
The sync method first lists both the source and destination paths and copies only the differences (by name, size, etc.).
The cp --recursive method lists the source path and copies (overwrites) everything to the destination path.
If there are likely to be existing matches in the destination path, I would suggest sync, as one LIST request on the destination path will save you many unnecessary PUT requests, meaning it is cheaper and possibly faster.
Question: Will aws s3 sync s3://myBucket/this_folder/object_file C:\\Users\Desktop also create "this_folder" in C:\Users\Desktop?
If not, what would be the way to copy/sync while keeping the S3 folder structure? I have many files in different S3 bucket folders sorted by year, month, and day, and I would like to copy them locally with the folder structure preserved.
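A hedged note on that follow-up: aws s3 sync works on prefixes (folders) rather than single objects, and it recreates the key hierarchy under the destination. So one way to keep the year/month/day structure locally is to sync from the bucket (or a parent prefix) into a matching local folder, for example:
aws s3 sync s3://myBucket C:\Users\Desktop\myBucket
Everything under the bucket, including this_folder and its subfolders, is then recreated under C:\Users\Desktop\myBucket.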

Extra Copy of New Rsync Files

I am attempting to mirror a directory on a remote server using rsync. However, I would like a copy of all newly created files to be stored in a separate directory on the local machine.
For example, if a new file is added on the remote server, I would like it to mirror regularly (for example, to ~/mirror), but save an additional copy of only the new file in another folder, (for example, ~/staging). To be clear, only the new files should appear in staging.
My first approach was to allow rsync to update the timestamps, and then use that to make a copy. However, I would now like to preserve timestamps.
Can anyone provide ideas on a simple approach? I am open to use of additional utilities other than rsync.
You might consider making hardlinks in the extra directory.
ln --force --target-directory ~/staging ~/mirror/*
Edit:
If this is a Linux system, incron will trigger on inotify events and would allow you to make copies of files as they are added to a directory you specify.
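A minimal sketch of such an incrontab entry (the paths and the chosen event mask are assumptions, not from the original answer): when a file finishes being written or is moved into the mirror, copy it to the staging directory with timestamps preserved. In incron, $@ expands to the watched path and $# to the file name.
/home/user/mirror IN_CLOSE_WRITE,IN_MOVED_TO cp -p $@/$# /home/user/staging/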
