Create a file from a shell script variable - shell

I am trying to create a file from a shell script variable (in a .sh script):
#!/bin/bash
LOGNAME=`date +"error/postgresql.log.%Y-%m-%d-%H00.csv"`
echo Log File Name is: ${LOGNAME}
echo > ${LOGNAME}
aws rds download-db-log-file-portion --db-instance-identifier randomDBname --log-file-name ${LOGNAME} --starting-token 0 --output text > ${LOGNAME} --profile aws
sleep 20
I receive this error:
error/postgresql.log.%Y-%m-%d-%H00.csv: No such file or directory
I have tried the following and all have the same error:
echo >> ${LOGNAME}
echo -n >> ${LOGNAME}
echo "Starting Log" > ${LOGNAME}
The echo of the log file name works fine and prints it without issue. Any ideas what is causing this?

Resolution:
#!/bin/bash
LOGNAME=`date +"error/postgresql.log.%Y-%m-%d-%H00.csv"`
FILENAME=`date +"error-postgresql.log.%Y-%m-%d-%H00.csv"`
echo Log File Name is: ${LOGNAME}
aws rds download-db-log-file-portion --db-instance-identifier randomDBname --log-file-name ${LOGNAME} --starting-token 0 --output csv > ${FILENAME} --profile aws
Found the issue: it was the / in error/postgresql. The redirection tries to create the file inside a local error/ directory that does not exist, so writing to a local name without the slash avoids the error.
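An alternative sketch that keeps the remote log name (including the slash) but creates the matching local error/ directory before redirecting, so the redirect has somewhere to write; it uses the standard --output text format and the same placeholder instance name and profile:
#!/bin/bash
# Remote RDS log name, e.g. error/postgresql.log.2024-01-01-1200.csv
LOGNAME=$(date +"error/postgresql.log.%Y-%m-%d-%H00.csv")
echo "Log File Name is: ${LOGNAME}"
# Create the local error/ directory so the > redirection can create the file
mkdir -p "$(dirname "${LOGNAME}")"
aws rds download-db-log-file-portion --db-instance-identifier randomDBname \
    --log-file-name "${LOGNAME}" --starting-token 0 --output text \
    --profile aws > "${LOGNAME}"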

Related

File redirection not working in shell script for aws cli output

I'm creating EC2 instances and would like to get the user data for each instance using this command:
aws ec2 describe-instance-attribute --instance-id i-xxxx --attribute userData --output text --query "UserData.Value" | base64 --decode > file.txt
When running this directly in the terminal it works: the userData is written to file.txt. However, I need it to run in a shell script I have, which takes the instance ID as a parameter.
The lines in test.sh are the following:
#!/bin/bash
echo "$(aws ec2 describe-instance-attribute --instance-id $1 --attribute userData --output text --query "UserData.Value" | base64 --decode)" > file.txt
Where $1 is the instance-id. When running:
./test.sh i-xxxxxxx
It creates an empty file.txt. I have changed the line in the script to:
echo "$(aws ec2 describe-instance-attribute --instance-id $1 --attribute userData --output text --query "UserData.Value" | base64 --decode)"
and it prints the userData to stdout. So why is the file redirection not working?
Thank you,
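For comparison, a sketch of the same script with the redirect applied directly to the pipeline instead of wrapping it in echo, and with the parameter quoted; this is only a simplification of the posted script, not a confirmed fix for the empty file:
#!/bin/bash
# $1 is the instance ID, e.g. ./test.sh i-xxxxxxx
aws ec2 describe-instance-attribute \
    --instance-id "$1" \
    --attribute userData \
    --output text \
    --query "UserData.Value" | base64 --decode > file.txt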

aws ec2 run-instances: script passed as plain text is ignored

I'm trying to pass a script as the --user-data parameter.
If the same script is passed with --user-data file://some_file.sh, everything works. It also works if I launch the instance through the AWS console and add the user data in the corresponding launch configuration box.
My CLI command is
aws ec2 run-instances --image-id ami-0cc0a36f626a4fdf5 --count 1 --instance-type t2.micro --key-name key_name --security-group-ids sg-00000000 --tag-specifications "ResourceType=instance,Tags=[{Key=Name,Value=some_name}]" --output table --user-data "sudo touch /tmp/install.log && sudo chmod 777 /tmp/install.log && echo $(date) >> /tmp/install.log"
If the same is run as a script, its content is formatted as below:
#!/bin/bash
sudo touch /tmp/install.log
sudo chmod 777 /tmp/install.log
echo $(date) >> /tmp/install.log
I'd also like to mention that I tried passing the string in different formats, such as:
--user-data echo "some text"
--user-data "command_1\n command_2\n"
--user-data "command_1 && command_2"
--user-data "command_1; command_2;"
--user-data "#!/bin/bash; command_1; command_2;"
The user data is visible after launch but is not executed:
$ curl -L http://169.254.169.254/latest/user-data/
The first line must start with #!.
Subsequent lines are then executed, but they must be separated by real newlines; a literal \n in the argument is not interpreted as one.
From how to pass in the user-data when launching AWS instances using CLI:
$ aws ec2 run-instances --image-id ami-16d4986e --user-data '#!/bin/bash
> poweroff'
As an experiment, I put this at the end of the run-instances command:
aws ec2 run-instances ... --user-data '#!
echo bar >/tmp/foo
'
When I logged into the instance, I could see the /tmp/foo file.
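A sketch of the same idea without typing the multi-line argument inline, using bash ANSI-C quoting ($'...') so the \n sequences become real newlines before the value reaches the CLI; the AMI, key name, and security group are the question's placeholders:
#!/bin/bash
# Build the user-data script with real newlines (user data runs as root, so sudo is not needed)
USER_DATA=$'#!/bin/bash\ntouch /tmp/install.log\nchmod 777 /tmp/install.log\ndate >> /tmp/install.log\n'

aws ec2 run-instances \
    --image-id ami-0cc0a36f626a4fdf5 \
    --count 1 \
    --instance-type t2.micro \
    --key-name key_name \
    --security-group-ids sg-00000000 \
    --tag-specifications "ResourceType=instance,Tags=[{Key=Name,Value=some_name}]" \
    --output table \
    --user-data "$USER_DATA"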

error in awscli call doesn't send to logfile

We have to check the status of an instance, and I am trying to capture any error to a log file. The log file has the instance information, but the error is not being written to it. Below is the code; let me know what needs to be corrected.
function wait-for-status {
    instance=$1
    target_status=$2
    status=unknown
    while [[ "$status" != "$target_status" ]]; do
        status=`aws rds describe-db-instances \
            --db-instance-identifier $instance | head -n 1 \
            | awk -F \ '{print $10}'` >> ${log_file} 2>&1
        sleep 30
        echo $status >> ${log_file}
    done
}
Rather than using all that head/awk stuff, if you want a value out of the CLI, you should use the --query parameter. For example:
aws rds describe-db-instances --db-instance-identifier xxx --query 'DBInstances[*].DBInstanceStatus'
See: Controlling Command Output from the AWS Command Line Interface - AWS Command Line Interface
Also, if your goal is to wait until an Amazon RDS instance is available, then you should use db-instance-available — AWS CLI Command Reference:
aws rds wait db-instance-available --db-instance-identifier xxx
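As for why nothing reaches the log: the stderr redirection in the posted function is attached to the bare variable assignment, not to the aws call inside the command substitution, which is likely why the errors never show up. A sketch that uses --query for the status and redirects stderr inside the substitution, assuming log_file is already set elsewhere in the script:
wait_for_status() {
    instance=$1
    target_status=$2
    status=unknown
    while [[ "$status" != "$target_status" ]]; do
        # Redirect stderr inside the substitution so CLI errors land in the log file
        status=$(aws rds describe-db-instances \
            --db-instance-identifier "$instance" \
            --query 'DBInstances[0].DBInstanceStatus' \
            --output text 2>> "${log_file}")
        echo "$status" >> "${log_file}"
        sleep 30
    done
}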

aws cli - download rds postgres logs in a bash script

I wrote a simple bash script to download my RDS Postgres log files.
The kicker is that it all works fine in the terminal, but when I try the same thing in the script I get an error:
An error occurred (DBLogFileNotFoundFault) when calling the DownloadDBLogFilePortion operation: DBLog File: "error/postgresql.log.2017-11-05-23", is not found on the DB instance
The command in question is this:
aws rds download-db-log-file-portion --db-instance-identifier foobar --starting-token 0 --output text --log-file error/postgresql.log.2017-11-05-23 >> test.log
It all works fine, but when I put the exact same line in the bash script I get the error that the DB log file is not found, which is nonsense: the files are there.
This is the bash script:
download_generate_report() {
    for filename in $( aws rds describe-db-log-files --db-instance-identifier $1 | awk '{print $2}' | grep $2 )
    do
        echo $filename
        echo $1
        aws rds download-db-log-file-portion --db-instance-identifier $1 --starting-token 0 --output text --log-file $filename >> /home/ubuntu/pgbadger_script/postgres_logs/postgres_$1.log.$2
    done
}
Tnx,
Tom
I rewrote your script a little and it seems to work for me. It barked about the grep; this version uses jq.
for filename in $( aws rds describe-db-log-files --db-instance-identifier $1 | jq -r '.DescribeDBLogFiles[] | .LogFileName' )
do
    aws rds download-db-log-file-portion --db-instance-identifier $1 --output text --no-paginate --log-file $filename >> /tmp/postgres_$1.log.$2
done
Thank you Ian. I had an issue with AWS CLI 2.4 because the log files downloaded truncated.
To solve this I replaced --no-paginate with --starting-token 0 (more info in the RDS reference).
Finally, in bash:
#!/bin/bash
set -x
for filename in $( aws rds describe-db-log-files --db-instance-identifier $1 | jq -r '.DescribeDBLogFiles[] | .LogFileName' )
do
    aws rds download-db-log-file-portion --db-instance-identifier $1 --output text --starting-token 0 --log-file $filename >> $filename
done
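One caveat with redirecting to $filename directly: RDS log file names typically include a directory prefix such as error/, so the local redirect fails with No such file or directory unless that directory exists (the same problem as in the first question above). A sketch that creates the local directory first and uses the full --log-file-name option; the positional arguments are the same as in the script above:
#!/bin/bash
set -x
for filename in $( aws rds describe-db-log-files --db-instance-identifier "$1" | jq -r '.DescribeDBLogFiles[] | .LogFileName' )
do
    # Create the local directory (e.g. error/) before redirecting into it
    mkdir -p "$(dirname "$filename")"
    aws rds download-db-log-file-portion --db-instance-identifier "$1" \
        --output text --starting-token 0 \
        --log-file-name "$filename" >> "$filename"
done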

Run bash script against 2 text files as variables

I need to run the following command in a bash script or any script for that matter:
$ aws rds download-db-log-file-portion --db-instance-identifier $LIST1 --log-file-name $LIST2 --region us-west-2
I have a text file with all the hostnames ($LIST1) and another file with all the log file names ($LIST2).
Basically I would like to know the best way to take each entry from $LIST1 and download all the logs for that entry from $LIST2.
$LIST1 sample:
host1
host2
host3
$LIST2 sample:
error/mysql-error.log
error/mysql-error.log.0
error/mysql-error.log.1
Example of a regular run of the command:
$ aws rds download-db-log-file-portion --db-instance-identifier host1 --log-file-name error/mysql-error.log --region us-west-2
The problem is I have 100+ hosts and each host has about 90 logs.
Here is a bash solution:
while read -r host
do
    while read -r logfile
    do
        aws rds download-db-log-file-portion --db-instance-identifier $host --log-file-name $logfile --region us-west-2
    done < logfiles.txt
done < hosts.txt
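As written, each downloaded portion goes to stdout; here is a sketch that instead saves every log to its own file under a per-host directory (the host-logs directory name and the slash-to-dash renaming are just assumptions about how you want the output organised):
while read -r host
do
    mkdir -p "host-logs/$host"
    while read -r logfile
    do
        # Flatten names like error/mysql-error.log.0 into a plain local filename
        outfile="host-logs/$host/${logfile//\//-}"
        aws rds download-db-log-file-portion \
            --db-instance-identifier "$host" \
            --log-file-name "$logfile" \
            --starting-token 0 \
            --output text \
            --region us-west-2 > "$outfile"
    done < logfiles.txt
done < hosts.txt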
