How to load my bash script output to a specific GCS location - bash

How can I load the output generated by my bash script into a GCS location?
My bash command is like:
echo "hello world"
I want this output (hello world) to end up at a location in GCS.
How do I write that in Bash?

First, follow the Cloud SDK installation instructions so that you can use the cp command from the gsutil tool on the machine where you run the script.
Cloud SDK requires Python; supported versions are Python 3 (preferred, 3.5 to 3.8) and Python 2 (2.7.9 or higher).
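You can check which Python interpreter is available with, for example:
python3 --version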
Run one of the following:
For the Linux 64-bit archive file, run the following from your command line:
curl -O https://dl.google.com/dl/cloudsdk/channels/rapid/downloads/google-cloud-sdk-362.0.0-linux-x86_64.tar.gz
For the 32-bit archive file, run:
curl -O https://dl.google.com/dl/cloudsdk/channels/rapid/downloads/google-cloud-sdk-362.0.0-linux-x86.tar.gz
Depending on your setup, you can choose other installation methods.
Extract the contents of the file to any location on your file system (preferably your Home directory). If you would like to replace an existing installation, remove the existing google-cloud-sdk directory and extract the archive to the same location.
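For example, to extract the 64-bit archive into your home directory (a minimal sketch; swap in the file name you actually downloaded):
tar -xf google-cloud-sdk-362.0.0-linux-x86_64.tar.gz -C "$HOME"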
Run gcloud init to initialize the SDK:
./google-cloud-sdk/bin/gcloud init
After you have installed the Cloud SDK, create a bucket for the files that will contain the output generated by your script.
Use the gsutil mb command and a unique name to create a bucket:
gsutil mb -b on -l us-east1 gs://my-awesome-bucket/
This example uses a bucket named "my-awesome-bucket"; you must choose your own, globally unique bucket name.
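You can verify that the bucket was created by listing it (assuming the account you configured during gcloud init has access to it):
gsutil ls -b gs://my-awesome-bucket/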
Then you can redirect your output to a local file and upload it to Google Cloud Storage like this:
#!/bin/bash
# Write the script output to a timestamped local file, then copy it to the bucket.
TIMESTAMP=$(date +'%s')
BUCKET="my-awesome-bucket"
echo "Hello world!" > "logfile.$TIMESTAMP.log"
gsutil cp "logfile.$TIMESTAMP.log" "gs://$BUCKET/logfile.$TIMESTAMP.log"
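Alternatively, if you don't need to keep a local copy, gsutil cp can read the object contents from standard input when you pass - as the source, so you can pipe the script output straight to the bucket. A minimal sketch using the same bucket name:
#!/bin/bash
BUCKET="my-awesome-bucket"
# "-" tells gsutil cp to perform a streaming upload from stdin.
echo "Hello world!" | gsutil cp - "gs://$BUCKET/logfile.$(date +'%s').log"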

Related

AWS S3 download all files with same name with shell

There are files in an AWS S3 bucket that I would like to download; they all have the same name but are in different subfolders. No credentials are required to connect to or download from this bucket. I would like to download all the files called "B01.tif" in s3://sentinel-cogs/sentinel-s2-l2a-cogs/7/V/EG/ and save each one with the name of the subfolder it is in (for example: S2A_7VEG_20170205_0_L2AB01.tif).
Path example:
s3://sentinel-cogs/sentinel-s2-l2a-cogs/7/V/EG/2017/2/S2A_7VEG_20170205_0_L2A/B01.tif
I was thinking of using a bash script that parses the output of ls, downloads each file with cp, and saves it on my PC with a name generated from the path.
Command to use ls:
aws s3 ls s3://sentinel-cogs/sentinel-s2-l2a-cogs/7/V/EG/2017/2/ --no-sign-request
Command to download a single file:
aws s3 cp s3://sentinel-cogs/sentinel-s2-l2a-cogs/7/V/EG/2017/2/S2A_7VEG_20170205_0_L2A/B01.tif --no-sign-request B01.tif
Attempt to download multiple files:
VAR1=B01.tif
for a in s3://sentinel-cogs/sentinel-s2-l2a-cogs/7/V/EG/:
for b in s3://sentinel-cogs/sentinel-s2-l2a-cogs/7/V/EG/2017/:
for c in s3://sentinel-cogs/sentinel-s2-l2a-cogs/7/V/EG/2017/2/:
NAME=$(aws s3 ls s3://sentinel-cogs/sentinel-s2-l2a-cogs/7/V/EG/$a$b$c | head -1)
aws s3 cp s3://sentinel-cogs/sentinel-s2-l2a-cogs/7/V/EG/$NAME/B01.tif --no-sign-request $NAME$VAR1
done
done
done
I don't know if there is a simple way to go automatically through every subfolder and save the files directly. I know my ls command is broken, because if there are multiple subfolders it will only take the first one as a variable.
It's easier to do this in a programming language than in a shell script.
Here's a Python script that will do it for you:
import boto3

BUCKET = 'sentinel-cogs'
PREFIX = 'sentinel-s2-l2a-cogs/7/V/EG/'
FILE = 'B01.tif'

s3_resource = boto3.resource('s3')

# Walk every object under the prefix and keep only the B01.tif files.
for object in s3_resource.Bucket(BUCKET).objects.filter(Prefix=PREFIX):
    if object.key.endswith(FILE):
        # Build a flat local name from the rest of the key, e.g. 2017_2_S2A_7VEG_20170205_0_L2A_B01.tif
        target = object.key[len(PREFIX):].replace('/', '_')
        object.Object().download_file(target)
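If you would rather stay in bash, a rough equivalent with the AWS CLI could look like the sketch below (untested; it assumes unsigned requests are also allowed for listing, and that the keys contain no spaces):
#!/bin/bash
BUCKET="sentinel-cogs"
PREFIX="sentinel-s2-l2a-cogs/7/V/EG/"
FILE="B01.tif"

# List every key under the prefix, keep only the ones ending in B01.tif,
# and download each to a flat local name built from its path.
aws s3 ls "s3://$BUCKET/$PREFIX" --recursive --no-sign-request |
  awk '{print $4}' |
  grep "$FILE\$" |
  while read -r key; do
    target="${key#"$PREFIX"}"      # strip the common prefix
    target="${target//\//_}"       # replace "/" with "_" in the remainder
    aws s3 cp "s3://$BUCKET/$key" "$target" --no-sign-request
  done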

Why do I need to create the symlinks, and what does folder/in/path correspond to, when installing AWS CLI 2 on macOS for the current user?

I am trying to install AWS CLI 2 for the current user on macOS, as per this guide:
https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2-mac.html#cliv2-mac-install-cmd-current-user
The AWS CLI installed correctly, but I am not able to understand the fourth point: why do I need to create the symlinks, and what does folder/in/path correspond to?
4. Finally, you must create a symlink file in your $PATH that points to the actual aws and aws_completer programs. Because standard user permissions typically don't allow writing to folders in the path, the installer in this mode doesn't try to add the symlinks. You must manually create the symlinks after the installer finishes. If your $PATH includes a folder you can write to, you can run the following command without sudo if you specify that folder as the target's path. If you don't have a writable folder in your $PATH, then you must use sudo in the commands to get permissions to write to the specified target folder.
$ sudo ln -s /folder/installed/aws-cli/aws /folder/in/path/aws
$ sudo ln -s /folder/installed/aws-cli/aws_completer /folder/in/path/aws_completer
There are two ways to configure the path of the aws program, which lives under the aws-cli folder.
First way: add the path of the aws-cli folder to your PATH variable, for example export PATH=$PATH:$HOME/aws-cli (assuming aws-cli is installed at $HOME). This is sufficient to start using the aws command.
Second way: the PATH variable already contains /usr/local/bin, and that folder holds links to the executable programs. Creating a symlink to aws-cli/aws in that folder is another way for your system to find the AWS CLI, and it is more robust because there is no direct dependency on the PATH variable; that is what the AWS documentation is referring to. In my case the commands would look like:
$ sudo ln -s /Users/akshayjain/aws-cli/aws /usr/local/bin/aws
$ sudo ln -s /Users/akshayjain/aws-cli/aws_completer /usr/local/bin/aws_completer
With either way, you can confirm the installation with the following command: aws --version
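If you go with the first way, you will probably want the export to persist across terminal sessions. A minimal sketch, assuming the CLI lives at $HOME/aws-cli and your login shell is zsh (the macOS default); use ~/.bash_profile instead if you are on bash:
echo 'export PATH="$PATH:$HOME/aws-cli"' >> ~/.zshrc
source ~/.zshrc
aws --version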

0 Byte File Extracted When Downloading Google Group Data

I am trying to download Google Group data onto my Windows 10 computer with bash 10.0. When I try retrieving the file from my bin, it is a 0-byte file called t.0, and has no text when I access it in Atom 1.27.2.
Here is the script crawler.sh I used as a guide; it requires bash 4, sort, wget (1.17.1), sed (4.2.2-7), and awk (1.3.3). I have the newest versions:
https://github.com/icy/google-group-crawler
This is the forum I am trying to download:
https://groups.google.com/forum/#!forum/django-developers
I made the script executable with chmod 755, and it is in my local bin.
I entered the following code from the README into the main method of crawler.sh (~line 254):
export _GROUP="django-developers"
export _RSS_NUM=50
export _HOOK_FILE=/path/to.sh
After that, I ran the following commands from the README in the shell:
./crawler.sh -sh # first run for testing
./crawler.sh -sh > wget.sh # save your script
bash wget.sh # downloading mbox files
I am new to using bash, so I am unsure of whether this is a simple error on my part, or an actual issue with the download.

aws s3 cp to a local file was not replacing file

I have a shell script that is running aws s3 cp s3://s3file /home/usr/localfile. The file already exists in that directory, so the cp command is essentially getting the latest copy from S3 to get the latest version.
However, I noticed today that the file was not the latest version; it didn't match the file on S3. Looking at the shell script's stdout from the last two runs, it looks like the command ran - the output is: download: s3://s3file to usr/localfile. But when I compared the copies, they didn't match. The changed timestamp on the file, viewed on the local machine via WinSCP (a file transfer client), didn't change either.
I manually ran the command in a shell just now and it copied the file from S3 to the local machine and successfully got the latest copy.
Do I need to add a specific option for this, or is it typical behavior for aws s3 cp not to overwrite an existing file?

Possible to run zip process on Heroku?

I am generating Office docs in OpenXML. Part of the process is using zip to combine directories and files into an archive. This works fine locally:
var p = 'cd ' + target + '/; zip -r ../' + this.fname + ' .; cd ..;';
return exec.exec(p, function(err, stdout, stderr) { ... });
But it fails on Heroku Cedar with the error /bin/sh: zip: not found. Logging in via shell (heroku run bash) and running ls /bin, it appears that the zip binary does not exist. gzip does exist, but I think that's different.
Is it possible to run zip on Heroku from a shell process? From the link below it seems like it should be possible. (That article uses Ruby and I use Node, but the shell shouldn't care who's calling it.)
Rails: How can I use system zip on Heroku to make a docx from an xml template?
It says here
How to unzip files in a Heroku Buildpack
that though Heroku doesn't include the zip command, the jar command is available.
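If you want to keep shelling out, the jar tool can produce a plain zip archive when you tell it to skip the manifest (-M). A rough, untested sketch of the equivalent shell command, with target and fname standing in for the variables from the question:
cd "$target" && jar -cMf "../$fname" . && cd ..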
However, why not use an npm package like this one to process your files from within the Node app itself:
https://www.npmjs.org/package/zipfile
