I created a script to automatically upload my files to Google Cloud Storage; my VM is in the same project as my Google Cloud bucket...
So I wrote this script, but I can't get it to run properly:
#!/bin/bash
TIME=`date +%b-%d-%y`
FILENAME=backup-$TIME.tar.gz
SRCDIR=opt/R
DESDIR= gsutil gs cp FILENAME -$TIME.tar.gz gs://my-storage-name
tar -cpzf $DESDIR/$FILENAME $SRCDIR
Any help?
#!/bin/bash
TIME=`date +%b-%d-%y`
FILENAME=backup-$TIME.tar.gz
gsutil cp {path of the source-file} gs://my-storage-name/backup-$TIME.tar.gz
This will save your file in the bucket under the name backup-$TIME.tar.gz, e.g. backup-Jun-09-21.tar.gz.
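Putting the two pieces together, a rough sketch of the full script could look like the following; /opt/R and the bucket name come from the question, while the local staging directory is an assumption you may want to change:
#!/bin/bash
# Build a dated archive name, e.g. backup-Jun-09-21.tar.gz
TIME=$(date +%b-%d-%y)
FILENAME=backup-$TIME.tar.gz
SRCDIR=/opt/R              # directory to back up (from the question)
DESDIR=/tmp                # local staging directory (assumption)

# Create the archive locally, then copy it to the bucket
tar -cpzf "$DESDIR/$FILENAME" "$SRCDIR"
gsutil cp "$DESDIR/$FILENAME" "gs://my-storage-name/$FILENAME"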
I want to export data from a Vertica table to S3, and I am able to do it using Vertica's S3Export function.
But I want the exported data in compressed form, e.g. gzip.
Please help; let me know how to do it using the S3Export function.
Thanks.
You cannot gzip while using the S3Export function in Vertica.
You can instead create single-object vbr backups, tar.gz them, and copy them to S3; I find this to be much faster and more space efficient than S3Export.
Example:
#!/bin/bash
# Initiate the backup location
/opt/vertica/bin/vbr.py -t init -c /home/dbadmin/scripts/dba/vbr/vbr_conf/obj-bak.ini
# Run the backup
/opt/vertica/bin/vbr.py -t backup -c /home/dbadmin/scripts/dba/vbr/vbr_conf/obj-bak.ini
# Move into the backup directory
cd /backup_area/
# Build a date-stamped archive, removing the originals once archived
export date=`date +"%Y%m%d"`
tar czf bkp_daily_$date.tar.gz * --remove-files
# Move the gz file to AWS S3
aws s3 mv /backup_area/bkp_daily_$date.tar.gz s3://vertica-object-backup/
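To bring such a backup back, the same tools can be used in reverse. A rough sketch, assuming the same obj-bak.ini and backup location as above, with bkp_daily_YYYYMMDD.tar.gz standing for the archive you want to restore:
# Fetch the archive from S3 and unpack it into the backup location
aws s3 cp s3://vertica-object-backup/bkp_daily_YYYYMMDD.tar.gz /backup_area/
cd /backup_area/ && tar xzf bkp_daily_YYYYMMDD.tar.gz

# Restore the objects from the unpacked backup
/opt/vertica/bin/vbr.py -t restore -c /home/dbadmin/scripts/dba/vbr/vbr_conf/obj-bak.ini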
Hi, I have a continuous command running on my server:
while [ 1 -eq 1 ]
do
now=$(date +%Y-%m-%dT%H:%M)
name=/home/ubuntu/backup$now.zip
realm-backup /var/lib/realm/object-server $name
aws s3 cp $name s3://tm-ep-realm-backups/
sleep 900
done
That works fine. Now I launch a new EC2 instance and paste the compressed files into /var/lib/realm/object-server, but the server doesn't launch. Am I missing something?
https://realm.io/docs/realm-object-server/#server-recovery-from-a-backup
The second argument to realm-backup must be an empty directory, not a zip file.
You can then zip the directory yourself after realm-backup if you want to.
When you paste the backup files into the directory of the new server, you must unzip them yourself if you use zip files.
When you start the server, there must be a directory of your Realms, not a zip file.
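Putting that answer together with the loop from the question, a rough sketch of what the backup side could look like; the paths and bucket name are taken from the question, and the zip step is optional:
#!/bin/bash
# Back up into a fresh directory every 15 minutes, then upload a zip of it
while [ 1 -eq 1 ]
do
  now=$(date +%Y-%m-%dT%H:%M)
  cd /home/ubuntu
  mkdir -p "backup-$now"                      # realm-backup wants a directory, not a zip file
  realm-backup /var/lib/realm/object-server "backup-$now"
  zip -r "backup-$now.zip" "backup-$now"      # optional: compress after the backup has finished
  aws s3 cp "backup-$now.zip" s3://tm-ep-realm-backups/
  sleep 900
done
On the new instance, unzip the archive again and copy the resulting directory of Realms into the server's data directory before starting it; the server must see plain Realm files, not a zip.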
I'm currently running a small database on a CentOS 7 server.
I have one script for creating backups and another for uploading them to Google Drive using grive. However, the upload script only works when I run it manually (bash /folder/script.sh). When it is run via crontab, the script runs but it won't upload. I can't find any error messages in /var/log/cron or /var/log/messages.
Cron log entry:
Dec 7 14:09:01 localhost CROND[6409]: (root) CMD (/root/backupDrive.sh)
Here is the script:
#!/bin/bash
# Get latest file
file="$(ls -t /backup/database | head -1)"
echo $file
# Upload file to G-Drive
cd /backup/database && drive upload -f $file
Add the full path to drive, or add its directory to $PATH.
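Cron runs with a minimal environment, so a drive binary that lives outside the default PATH is typically not found. A small sketch of the fix, assuming drive is installed in /usr/local/bin (check the real location with "which drive" in an interactive shell):
#!/bin/bash
# Make cron's PATH include the directory that contains drive
# (/usr/local/bin is an assumption; adjust to what "which drive" reports)
export PATH=/usr/local/bin:/usr/bin:/bin

# Get latest file
file="$(ls -t /backup/database | head -1)"
echo "$file"

# Upload file to G-Drive, calling drive by its full path as an alternative
cd /backup/database && /usr/local/bin/drive upload -f "$file"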
I created a Maven project in OpenShift, and I want to access files in the data/uploads folder.
For this I created a deploy hook in .openshift/action_hooks:
ln -s ${OPENSHIFT_DATA_DIR}uploads ${OPENSHIFT_REPO_DIR}src/main/webapp
But it won't create the symlink, and even if I create the symlink directly, the files still can't be accessed by URL. Any suggestions, please?
My fault on the original answer. The uploads directory has to exist before you create the symlink to it:
if [ ! -d ${OPENSHIFT_DATA_DIR}uploads ]; then
mkdir ${OPENSHIFT_DATA_DIR}uploads
fi
ln -s ${OPENSHIFT_DATA_DIR}uploads ${OPENSHIFT_REPO_DIR}src/main/webapp
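One extra thing worth checking if the hook still doesn't run (an additional suggestion, not part of the original answer): action hook scripts have to be executable before they are pushed:
# Mark the deploy hook executable locally and record the bit in git
chmod +x .openshift/action_hooks/deploy
git update-index --chmod=+x .openshift/action_hooks/deploy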
I have a number of files to move within S3 and need to issue a number of "s3cmd cp --recursive" commands, so I have a big list of these (about 1200). I basically just want it to do the first one, then the next one, and so on. It seems like it should be really simple:
#!/bin/bash
s3cmd cp --recursive s3://bucketname/fromfolder s3://bucketname/tofolder/
s3cmd cp --recursive s3://bucketname/fromfolder s3://bucketname/tofolder/
s3cmd cp --recursive s3://bucketname/fromfolder s3://bucketname/tofolder/
...
When I run this using "./s3mvcopy.sh" it just immediately returns and doesn't do anything.
Any ideas? Thanks in advance.
Make sure you have your .s3cfg configuration file in your home directory.
Then, in that file, make sure you have the following:
[default]
access_key = <your access key>
secret_key = <your secret access key>
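If that file doesn't exist yet, it can be generated interactively, or the script can point at an explicit config file; a small sketch, with /home/youruser as a placeholder for your actual home directory:
# Generate ~/.s3cfg interactively (prompts for the access and secret keys)
s3cmd --configure

# Or tell each command in the script exactly which config file to use
s3cmd -c /home/youruser/.s3cfg cp --recursive s3://bucketname/fromfolder s3://bucketname/tofolder/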