How to add a tmp directory to an Amazon SES email in a bash script?

I have a temp directory containing two files:
# Create a temporary directory to figure out the size
tmpSizeDir="$(mktemp -d)/"
trap "rm -rf $tmpSizeDir" EXIT
cp -vf "${DIRECTORY}${FILE_NAME}.csv" "$tmpSizeDir"
cp -vf "${DIRECTORY}${FILE_NAME}.PDF" "$tmpSizeDir"
I also have code to send an Amazon SES email:
echo "{ \"Subject\": { \"Data\": \"$subject\", \"Charset\": \"UTF-8\" }, \"Body\": { \"Text\": { \"Data\": \"$body\", \"Charset\": \"UTF-8\" } } }" > message.json
aws ses send-email --from "$MAIL_SENDER" --destination file://tmpDestinationDir --message file://message.json
How do I add a directory as an attachment to an amazon ses email?

First, you have to change the method you are using: aws ses send-email is not suited for attachments. If you want to use attachments, you have to use send-raw-email.
Here is the documentation for the CLI: https://docs.aws.amazon.com/cli/latest/reference/ses/send-raw-email.html
To send a directory, first create a zip archive, then base64-encode it and add it to the content of the email. After that, the rest of your script stays almost the same. The one thing to take into account is that with raw emails you provide all of the headers and values yourself, so you will have to convert your message a bit.
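A minimal sketch of that flow, assuming $tmpSizeDir, $MAIL_SENDER, $MAIL_RECIPIENT, $subject and $body are already set; depending on your CLI version you may need to pass the message via file:// (or base64-encode the blob) instead of the shorthand Data= syntax:
# Zip the directory contents (-j stores the files without their directory prefix)
zip -rj /tmp/attachment.zip "$tmpSizeDir"

# Build a multipart MIME message by hand
boundary="=_boundary_$$"
{
  echo "From: $MAIL_SENDER"
  echo "To: $MAIL_RECIPIENT"
  echo "Subject: $subject"
  echo "MIME-Version: 1.0"
  echo "Content-Type: multipart/mixed; boundary=\"$boundary\""
  echo
  echo "--$boundary"
  echo "Content-Type: text/plain; charset=UTF-8"
  echo
  echo "$body"
  echo
  echo "--$boundary"
  echo "Content-Type: application/zip; name=\"attachment.zip\""
  echo "Content-Disposition: attachment; filename=\"attachment.zip\""
  echo "Content-Transfer-Encoding: base64"
  echo
  base64 /tmp/attachment.zip   # GNU base64 wraps at 76 chars, as MIME requires
  echo "--$boundary--"
} > raw_message.txt

aws ses send-raw-email --raw-message "Data=$(cat raw_message.txt)"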

Related

How to download a big file from google drive via curl in Bash?

I want to make a very simple bash script for downloading files from Google Drive via the Drive API. In this case there is a big file on Google Drive, and I used the OAuth 2.0 Playground with my Google Drive account: in the Select the Scope box I chose Drive API v3 and https://www.googleapis.com/auth/drive.readonly to make a token and link.
After clicking Authorize APIs and then Exchange authorization code for tokens, I copied the Access token like below.
#! /bin/bash
read -p 'Enter your id : ' id
read -p 'Enter your new token : ' token
read -p 'Enter your file name : ' file
curl -H "Authorization: Bearer $token" "https://www.googleapis.com/drive/v3/files/$id?alt=media" -o "$file"
but it won't work, any idea?
For example, the size of my file is 12G; when I run the code I get this as output, and after a second it goes back to the prompt again! I checked it on two computers with two different IP addresses. (I also added alt=media to the URL.)
-bash-3.2# bash mycode.sh
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   166  100   166    0     0     80      0  0:00:02  0:00:02 --:--:--    80
-bash-3.2#
The content of the file it created looks like this:
{
  "error": {
    "errors": [
      {
        "domain": "global",
        "reason": "downloadQuotaExceeded",
        "message": "The download quota for this file has been exceeded."
      }
    ],
    "code": 403,
    "message": "The download quota for this file has been exceeded."
  }
}
You want to download a file from Google Drive using the curl command with the access token.
If my understanding is correct, how about this modification?
Modified curl command:
Please add the query parameter of alt=media.
curl -H "Authorization: Bearer $token" "https://www.googleapis.com/drive/v3/files/$id?alt=media" -o "$file"
Note:
This modified curl command supposes that your access token can be used for downloading the file.
In this modification, all files except Google Docs can be downloaded. If you want to download Google Docs, please use the Files: export method of the Drive API, as sketched below. Ref
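For example, a Google Doc could be fetched with the export endpoint like this (the mimeType here is an assumption; use whichever export format you need):
curl -H "Authorization: Bearer $token" \
  "https://www.googleapis.com/drive/v3/files/$id/export?mimeType=application/pdf" -o "$file"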
Reference:
Download files
If I misunderstood your question and this was not the direction you want, I apologize.
UPDATE AS OF MARCH 2021
Simply follow this guide here. It worked for me.
In summary:
For small files to download, run
wget --no-check-certificate 'https://docs.google.com/uc?export=download&id=FILEID' -O FILENAME
If you are trying to download quite a large file, you should instead run
wget --load-cookies /tmp/cookies.txt "https://docs.google.com/uc?export=download&confirm=$(wget --quiet --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate 'https://docs.google.com/uc?export=download&id=FILEID' -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1\n/p')&id=FILEID" -O FILENAME && rm -rf /tmp/cookies.txt
Simply substitute FILEID and FILENAME with your custom values.
FILEID can be found in your file share link (after the /d/, as illustrated in the article mentioned above).
FILENAME is simply the name you want to save the download as. Remember to include the right extension. For example, FILENAME = my_file.pdf if the file is a PDF.
This is a known bug
It has been reported in this Issue Tracker post. This happens because, as you can read in the documentation:
(about download url)
Short lived download URL for the file. This field is only populated
for files with content stored in Google Drive; it is not populated for
Google Docs or shortcut files.
So you should use another field.
You can follow the report by clicking on the star next to the issue
number to give more priority to the bug and to receive updates.
As you can read in the comments of the report, the current workaround is:
Use webContentLink instead
or
Change www.googleapis.com to content.googleapis.com
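For instance, applied to the curl call from earlier in this thread (same $token, $id and $file variables), the two workarounds would look roughly like this:
# Workaround 2: same request, but against content.googleapis.com
curl -H "Authorization: Bearer $token" \
  "https://content.googleapis.com/drive/v3/files/$id?alt=media" -o "$file"

# Workaround 1: ask files.get for the webContentLink field, then download that link
curl -H "Authorization: Bearer $token" \
  "https://www.googleapis.com/drive/v3/files/$id?fields=webContentLink"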

Slack API file.upload to a user?

I'm trying to follow the tutorial found here: https://api.slack.com/methods/files.upload.
curl -F file=@example.txt -F "initial_comment=I play the drums." -F channels=C024BE91L -F thread_ts=1532293503.000001 -H "Authorization: Bearer xoxp-xxxxxxxxx-xxxx" https://slack.com/api/files.upload
I'm able to upload files to a specific channel. However, how do I upload files to a user through direct message?
Similar to how sending direct messages works, you can simply use the user ID as the channel, and the file will be uploaded to a direct message channel between that user and the owner of the token.
Alternatively, you can first open a direct message channel from your app with im.open and then use the channel ID of that IM in files.upload, as sketched below.
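A hedged curl sketch of the second approach (the user ID, token and filename are placeholders; jq is assumed for parsing the response):
# Open (or fetch) the IM channel with the user
channel=$(curl -s -X POST -H "Authorization: Bearer xoxp-xxxxxxxxx-xxxx" \
  -d "user=U024BE7LH" https://slack.com/api/im.open | jq -r '.channel.id')

# Upload the file into that IM channel
curl -F file=@example.txt -F "channels=$channel" \
  -H "Authorization: Bearer xoxp-xxxxxxxxx-xxxx" https://slack.com/api/files.upload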

mailx not working but sendmail is working

Recently my production server was upgraded. After that, our mailx command stopped working: it sends the mail without the attachment and with junk characters in the body.
The mail looks like this:
Hello Team,
Please find the attached list of files which have been purged.
Regards,
Axiom Tech Support
begin 644 purge_files_2018-07-07.log.gz
M'XL("&,005L``W!U<F=E7V9I;&5S7S(P,3#M,#<M,#<N;&]G`-2=6V^<-Y*&
M[^=7]/4"M'DF*W>)DVQF,3/Q1#[V8K!H%,DJ6[`L"9*3&<^OGY=JM91(:K5R
ML=W?.#8LRVZ#1=;A>8N'_-V6U_CIK:LKE[ZR^&G_]ZO5R6>Y7+GY*U]]7OUR
MN;K0U>4O5^]E/?#SK\?IU?6KC]<?_O0?__<__K2Z^>_OSPS4?[5ZB\&=GK]?
MG7S]=C6'N%*1<;VZ.!MRM?K\#<]7KEK\R9?K`XWYN?&&^_&^.?EI&>/=,<?N
MJ3'_\.-//Y_\_QAVO!_VG_]V\N[KO[WY;O7?[WY:_=>/WYS<#EY/SV1AGO+4
MK/_YAS?+F?(_,.UG%^^?F.)TN!&_YLO+Z]?\S].+3VM\N?FJO,:XKE]?R]6O
The existing command was like:
(echo -e "\nHello Team,\n\nPlease find the attached list of files which have been purged.\n\nRegards,\nAxiom Tech Support"; uuencode purge_files_2018-07-07.log.gz purge_files_2018-07-07.log.gz) | mailx -s "Purge file" onkar.tiwar90@gmail.com
Now I have replaced it with
(echo "Subject:Purge file"; echo -e "\nHello Team,\n\nPlease find the attached list of files which have been purged.\n\nRegards,\nAxiom Tech Support"; /usr/bin/uuencode purge_files_2018-07-07.log.gz purge_files_2018-07-07.log.gz) | /usr/sbin/sendmail -t "onkar.tiwar90@gmail.com"
So my question is: why is mailx not working while sendmail is working? I will have to change multiple scripts, so I am seeking a solution.
The mailx upgrade switched it to using MIME as the mail content instead of plain text. Your email client does not recognise uuencoded content within MIME.
You can stop using uuencode and switch to
mailx -a <filename>
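For example, the original script could become something like this (a sketch; the -a attachment flag varies between mailx implementations, so check yours first):
echo -e "Hello Team,\n\nPlease find the attached list of files which have been purged.\n\nRegards,\nAxiom Tech Support" | \
  mailx -s "Purge file" -a purge_files_2018-07-07.log.gz onkar.tiwar90@gmail.com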

Access Slack files from a slack bot

I need a Slack bot that's able to receive and save files sent from Slack chatrooms.
The problem is: Slack doesn't send file contents, but an array of links pointing to the file. Most of them, including the download link, are private and cannot be accessed via the bot. It does send one public link, but that link points at the file preview, which does not contain the file itself (here's an example).
How can I access uploaded files via bot?
You can access private URLs from your bot by providing an access token in the HTTP header when you make your curl request.
Your token needs to have the scope files.read in order to get access.
The format is:
Authorization: Bearer A_VALID_TOKEN
Replace A_VALID_TOKEN with your slack access token.
I just tested it with a simple PHP script to retrieve a file by its "url_private" and it works nicely.
Source: Slack API documentation / file object / Authentication
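The same pattern works from the command line with curl (the token and url_private values here are placeholders):
curl -H "Authorization: Bearer xoxp-YOUR-TOKEN" \
  -o myfile.txt "https://files.slack.com/files-pri/T0JU09BGC-F0UD6SJ21/myfile.txt"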
Example for using the Python requests library to fetch an example file:
import requests
url = 'https://slack-files.com/T0JU09BGC-F0UD6SJ21-a762ad74d3'
token = 'xoxp-8853424449-8820034832-8891394196-faf6f0'
requests.get(url, headers={'Authorization': 'Bearer %s' % token})
For those wanting to accomplish this with Bash & cURL, here's a helpful function! It will download the file to the current directory with a filename that uniquely identifies the file, even if the file has the same name as others in your file listing.
function slack_download {
  URL="$1"
  TOKEN="$2"
  # Derive a unique local filename from the team/file part of the URL
  FILENAME=$(echo "$URL" | sed -r 's/.*\/(T.+)\/([^\/]+)$/\1-\2/')
  curl -o "$FILENAME" -H "Authorization: Bearer $TOKEN" "$URL"
}
# Usage:
# Downloads as ./TJOLLYDAY-FANGBEARD-NSFW_PIC.jpg
slack_download "https://files.slack.com/files-pri/TJOLLYDAY-FANGBEARD/NSFW_PIC.jpg" xoxp-12345678901-01234567890-123456789012-abcdef0123456789abcdef0123456789
Tested with Python 3 - just replace SLACK_TOKEN with your token.
It downloads the file and creates an output file.
#!/usr/bin/env python3
# Usage: python3 download_files_from_slack.py <URL>
import sys
import os
import re
import requests

url = " ".join(sys.argv[1:])
token = 'SLACK_TOKEN'
resp = requests.get(url, headers={'Authorization': 'Bearer %s' % token})

# Pull the original filename out of the Content-Disposition header
disposition = resp.headers['content-disposition']
fname = re.findall("filename=(.*?);", disposition)[0].strip("'").strip('"')

assert not os.path.exists(fname), "File already exists. Please remove/rename and re-run"
with open(fname, mode="wb") as out_file:
    out_file.write(resp.content)

Amazon S3 Command Line Copy all objects to themselves setting Cache control

I have an Amazon S3 bucket with about 300K objects in it and need to set the Cache-Control header on all of them. Unfortunately it seems like the only way to do this, besides doing them one at a time, is by copying the objects onto themselves and setting the cache control header that way:
http://docs.aws.amazon.com/cli/latest/reference/s3/cp.html
That is the documentation for the Amazon S3 CLI copy command, but I have been unsuccessful in setting the cache control header using it. Does anyone have an example command that would work for this? I am trying to set cache-control to max-age=1814400.
Some background material:
Set cache-control for entire S3 bucket automatically (using bucket policies?)
https://forums.aws.amazon.com/thread.jspa?messageID=567440
By default, aws-cli only copies a file's current metadata, EVEN IF YOU SPECIFY NEW METADATA.
To use the metadata that is specified on the command line, you need to add the '--metadata-directive REPLACE' flag. Here are some examples.
For a single file
aws s3 cp s3://mybucket/file.txt s3://mybucket/file.txt --metadata-directive REPLACE \
--expires 2100-01-01T00:00:00Z --acl public-read --cache-control max-age=2592000,public
For an entire bucket:
aws s3 cp s3://mybucket/ s3://mybucket/ --recursive --metadata-directive REPLACE \
--expires 2100-01-01T00:00:00Z --acl public-read --cache-control max-age=2592000,public
A little gotcha I found: if you only want to apply it to a specific file type, you need to exclude all the files, then include the ones you want.
Only jpgs and pngs
aws s3 cp s3://mybucket/ s3://mybucket/ --exclude "*" --include "*.jpg" --include "*.png" \
--recursive --metadata-directive REPLACE --expires 2100-01-01T00:00:00Z --acl public-read \
--cache-control max-age=2592000,public
Here are some links to the manual if you need more info:
http://docs.aws.amazon.com/cli/latest/userguide/using-s3-commands.html
http://docs.aws.amazon.com/cli/latest/reference/s3/cp.html#options
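To confirm the header actually landed, you can inspect a single object afterwards (bucket and key are placeholders):
aws s3api head-object --bucket mybucket --key file.txt
# Look for "CacheControl": "max-age=2592000,public" in the JSON output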
