Upload not working using grive and cron - bash

I'm currently running a small database on a CentOS 7 server.
I have one script for creating backups and another for uploading them to Google Drive using grive. However, the script only uploads my files when I run it manually (bash /folder/script.sh). When it is run via crontab, the script runs but it won't upload. I can't find any error messages in /var/log/cron or /var/log/messages.
Cron log entry:
Dec 7 14:09:01 localhost CROND[6409]: (root) CMD (/root/backupDrive.sh)
Here is the script:
#!/bin/bash
# Get latest file
file="$(ls -t /backup/database | head -1)"
echo $file
# Upload file to G-Drive
cd /backup/database && drive upload -f $file

Add the full path to drive or add its directory to $PATH. cron runs with a minimal environment, so a command that resolves fine in your interactive shell is often not on cron's PATH.
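For example, a minimal sketch of the fixed script (/usr/local/bin/drive is an assumed location; check the real one with `which drive` in an interactive shell):
#!/bin/bash
# Get latest file
file="$(ls -t /backup/database | head -1)"
echo "$file"
# Upload file to G-Drive; cron starts with a minimal PATH, so use the absolute path
cd /backup/database && /usr/local/bin/drive upload -f "$file"
Alternatively, extend the PATH at the top of the script: export PATH="$PATH:/usr/local/bin"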

Related

Can I store ssh command output inside a variable(date/unique alphanumeric) folder

I'm creating a bash script that will run a list of ssh commands and save their output in a folder as text files.
What I'm doing is collecting a set of command logs and saving them in a predefined folder.
#!/bin/bash
tail -200 /opt/cpanel/ea-php74/root/var/log/php-fpm/error.log > /root/SERVERLOGS/php-fpm-logs.txt
tail -500 /var/log/apache2/error_log > /root/SERVERLOGS/apache-logs.txt
cd /var/lib/redis/ && ls -lsh > /root/SERVERLOGS/redisfilesize.txt
tail -500 /var/log/redis/redis.log > /root/SERVERLOGS/redis-logs.txt
df -h > /root/SERVERLOGS/harddisk.txt
free -m > /root/SERVERLOGS/RAM.txt
top -n 1 -b > /root/SERVERLOGS/top-output.txt
Whenever my server application is down, I run it and get the output in the SERVERLOGS folder.
This works okay, but if I run the script multiple times, the files in SERVERLOGS are overwritten each time.
What I want is that whenever the application is down and I run my script, it creates one unique folder and saves the output inside it instead of directly in SERVERLOGS.
I can create unique folder using
mkdir /root/SERVERLOGS/$(date +"%d-%m-%Y-%h-%m")
But I don't know how to put the other commands inside the folder created by it...
Thanks for any input in this.
Assign a variable at the beginning of the script with that directory name. Note the corrected date format: in %d-%m-%Y-%h-%m, %h is the abbreviated month name and the second %m repeats the month number, so two runs on the same day would collide; %H-%M gives hour and minute instead.
outputdir="/root/SERVERLOGS/$(date +"%d-%m-%Y-%H-%M")"
mkdir "$outputdir"
and then use filenames like "$outputdir/harddisk.txt" and "$outputdir/top-output.txt" in the rest of the script.
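Put together, a minimal sketch of the full script with the output directory applied (same commands and paths as in the question):
#!/bin/bash
# Create one unique folder per run, then write every log into it.
outputdir="/root/SERVERLOGS/$(date +"%d-%m-%Y-%H-%M")"
mkdir -p "$outputdir"
tail -200 /opt/cpanel/ea-php74/root/var/log/php-fpm/error.log > "$outputdir/php-fpm-logs.txt"
tail -500 /var/log/apache2/error_log > "$outputdir/apache-logs.txt"
cd /var/lib/redis/ && ls -lsh > "$outputdir/redisfilesize.txt"
tail -500 /var/log/redis/redis.log > "$outputdir/redis-logs.txt"
df -h > "$outputdir/harddisk.txt"
free -m > "$outputdir/RAM.txt"
top -n 1 -b > "$outputdir/top-output.txt"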

Unable to restore ROS

Hi, I have a continuously running command on my server:
while true
do
  now=$(date +%Y-%m-%dT%H:%M)
  name="/home/ubuntu/backup$now.zip"
  realm-backup /var/lib/realm/object-server "$name"
  aws s3 cp "$name" s3://tm-ep-realm-backups/
  sleep 900
done
That works fine. Now I launch a new EC2 instance and paste the compressed files into /var/lib/realm/object-server, but the server doesn't launch. Am I missing something?
https://realm.io/docs/realm-object-server/#server-recovery-from-a-backup
The second argument to realm-backup must be an empty directory, not a zip file.
You can then zip the directory yourself after realm-backup if you want to.
When you paste the backup files into the directory of the new server, you must unzip them yourself if you use zip files.
When you start the server, there must be a directory of your Realms, not a zip file.
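Following that, a minimal sketch of the adjusted loop (same paths and bucket as in the question; the zip step is optional):
while true
do
  now=$(date +%Y-%m-%dT%H:%M)
  # realm-backup wants an empty directory as its second argument
  dir="/home/ubuntu/backup$now"
  realm-backup /var/lib/realm/object-server "$dir"
  # optionally compress afterwards for the upload
  zip -r "$dir.zip" "$dir"
  aws s3 cp "$dir.zip" s3://tm-ep-realm-backups/
  sleep 900
done
On the new instance the reverse applies: copy the zip down with aws s3 cp and unzip it into /var/lib/realm/object-server before starting the server.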

Bash Script that works with Jenkins to move into a specific directory and remove the oldest folder in that directory

I have a devbox that I ssh into as the Jenkins user, and as the title says, I want to run a bash script that will move to a specific directory and remove the oldest directory. I know the location of the specific directory.
For example,
ssh server [move/find/whatever into home/deploy and find the oldest directory in deploy and delete it and everything inside it]
Ideally this is a one-liner. I'm not sure how to run multiple lines while sshing as part of a Jenkins task; I read some Stack Overflow posts on them, specifically 'here documents', but didn't understand them.
The file structure would look like home/deploy, and the deploy directory contains 3 folders: oldest, new, and newest. The script should pick out the oldest (by its creation date) and rm -rf it.
I know this task removes the oldest directory:
rm -R $(ls -lt | grep '^d' | tail -1 | tr " " "\n" | tail -1)
Is there any way I can adjust the above code to remove a directory inside of a directory that I know?
You could pass a script to ssh. Save the script below as delete_oldest.sh:
#!/bin/bash
cd ~/deploy
rm -R "$(ls -td */ | tail -n 1)"
and pass it to ssh like below:
ssh server -your-arguments-here < delete_oldest.sh
Edit:
If you wish to place the script on the remote machine instead, first copy it from the local machine to your home folder on the remote machine using scp:
scp delete_oldest.sh your_user_name@remotemachine:~
Then you can do something like:
ssh your_user_name@remotemachine './delete_oldest.sh'
'./delete_oldest.sh' assumes that you're currently in your home folder on the remote machine, which will be the case when you use ssh, since the default landing folder is always the home folder.
Please try it with a test folder before you proceed.
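Since the question asks for a one-liner, the same script body can also be passed inline (single quotes so the command substitution runs on the remote machine, not locally):
ssh server 'cd ~/deploy && rm -R "$(ls -td */ | tail -n 1)"'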

How to properly access network location while executing bash script in cygwin's cron

I've created a bash script to back up a folder to a remote location via Cygwin cron, but I'm experiencing an issue. The script at the end will execute a command like this one:
/usr/bin/tar -zcvf //192.168.1.108/Backup/Folder/Folder.Backup.2015-12-03.1219.tar.gz /cygdrive/d/Folder
Although the command the script produces and then executes works correctly when run in the context of a Cygwin bash shell, when it runs via a cron job it fails because the remote location path isn't resolved correctly. If I change the path to a local /cygdrive location or to ~/, it works correctly even via cron, so I'm thinking that the network shares are not visible to Cygwin in its cron environment.
Any ideas how I could solve this issue?
Here's my bash script
#!/usr/bin/bash
# The PATH needs to be set or the gzip step of the tar command breaks under cron
export PATH=$PATH:/usr/bin:/bin:/usr/local/bin:/usr/local/sbin:/sbin
if [ $# -ne 3 ]
then
    echo "USAGE: backup-clients <path> <name_prefix> <source>"
    exit 1
fi
DATE=$(date "+%Y-%m-%d.%H%M")
FILEPATH="$1/$2.Backup.$DATE.tar.gz"
COMMAND="/usr/bin/tar -zcvf $FILEPATH $3"
echo "COMMAND=$COMMAND"
eval $COMMAND
Which I run with the command
/usr/bin/bash /cygdrive/d/mybackupscript.bash "//192.168.1.108/Backup/Folder" "Folder" "/cygdrive/d/Folder"
I really appreciate any help you can provide.
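One way to narrow this down is to log what the cron session actually sees and compare it with an interactive shell (a diagnostic sketch, not a fix; the share path is the one from the question). A common culprit is that the Cygwin cron service runs under a different Windows account that doesn't hold credentials for the network share.
#!/usr/bin/bash
# Run this once from cron and once from an interactive shell, then diff the logs.
# If the share listing fails only under cron, the cron service's account
# cannot see the network share.
{
  id
  env
  ls //192.168.1.108/Backup
} >> /tmp/cron-diag.log 2>&1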

Unable to use cd in .prog

While executing the following .prog script, "No such file or directory" is thrown:
#!/usr/bin/ksh
param1="$5"
echo "Parameter1 : $param1"
l_outgoing="outgoing"
l_out_path="$INTERFACE_HOME/$l_outgoing"
echo "$l_out_path"
cd "$l_out_path"
The script works fine up to echo "$l_out_path", and it prints the correct directory.
The script was made on Windows and migrated to a Unix server.
Running dos2unix on the script worked!
No other changes were needed.
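That makes sense: a script saved on Windows has CRLF line endings, so $l_out_path ends in an invisible carriage return and cd looks for a directory named "outgoing\r", which doesn't exist. The conversion is a one-liner (the script path here is a placeholder):
# The ^M at the end of each line reveals the carriage returns:
cat -v /path/to/script.prog | head
# Convert CRLF line endings to LF in place:
dos2unix /path/to/script.prog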
