Hi, I have a continuous command running on my server:
while true
do
  now=$(date +%Y-%m-%dT%H:%M)
  name="/home/ubuntu/backup$now.zip"
  realm-backup /var/lib/realm/object-server "$name"
  aws s3 cp "$name" s3://tm-ep-realm-backups/
  sleep 900
done
That works fine. Now I launch a new EC2 instance and paste the compressed files into /var/lib/realm/object-server, but the server doesn't start. Am I missing something?
https://realm.io/docs/realm-object-server/#server-recovery-from-a-backup
The second argument to realm-backup must be an empty directory, not a zip file.
You can zip that directory yourself after realm-backup finishes if you want to.
When you copy the backup files to the directory of the new server, you must unzip them yourself if you used zip files.
When you start the server, it must point at a directory containing your Realms, not a zip file.
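A minimal sketch of how the loop from the question might be adjusted to follow this, assuming the zip tool is available and keeping the paths and bucket from the question:

while true
do
  now=$(date +%Y-%m-%dT%H:%M)
  dir="/home/ubuntu/backup-$now"             # empty directory for realm-backup
  mkdir -p "$dir"
  realm-backup /var/lib/realm/object-server "$dir"
  zip -r "$dir.zip" "$dir"                   # zip the directory yourself afterwards
  aws s3 cp "$dir.zip" s3://tm-ep-realm-backups/
  sleep 900
done

On the new instance, download the archive and unzip it into the server's data directory before starting the server.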
I created a script to automatically upload my files to Google Cloud Storage; my VM is in the same project as my Google Cloud bucket...
So I created this script, but I can't run it properly:
#!/bin/bash
TIME=`date +%b-%d-%y`
FILENAME=backup-$TIME.tar.gz
SRCDIR=opt/R
DESDIR= gsutil gs cp FILENAME -$TIME.tar.gz gs://my-storage-name
tar -cpzf $DESDIR/$FILENAME $SRCDIR
Any help?
#!/bin/bash
TIME=$(date +%b-%d-%y)
FILENAME="backup-$TIME.tar.gz"
gsutil cp {path of the source-file} gs://my-storage-name/"$FILENAME"
It will save your file with the name backup-$TIME.tar.gz,
e.g. backup-Jun-09-21.tar.gz
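If the goal is still to tar the source directory first and then upload the archive (as in the original script), a sketch combining both steps could look like this; /opt/R and the /tmp staging directory are assumptions based on the question's SRCDIR=opt/R:

#!/bin/bash
TIME=$(date +%b-%d-%y)
FILENAME="backup-$TIME.tar.gz"
SRCDIR=/opt/R        # assumed: the question's SRCDIR was probably missing the leading slash
DESDIR=/tmp          # local staging directory for the tarball (assumption)
tar -cpzf "$DESDIR/$FILENAME" "$SRCDIR"
gsutil cp "$DESDIR/$FILENAME" gs://my-storage-name/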
I am creating an Amazon EMR cluster where one of the steps is a bash script run by script-runner.jar:
aws emr create-cluster ... --steps '[ ... {
"Args":["s3://bucket/scripts/script.sh"],
"Type":"CUSTOM_JAR",
"ActionOnFailure":"TERMINATE_CLUSTER",
"Jar":"s3://us-east-1.elasticmapreduce/libs/script-runner/script-runner.jar",
}, ... ]'...
as described in https://docs.aws.amazon.com/emr/latest/ReleaseGuide/emr-hadoop-script.html
script.sh needs other files in its commands: think awk ... -f file, sed ... -f file, psql ... -f file, etc.
On my laptop with both script.sh and files in my working directory, everything works just fine. However, after I upload everything to s3://bucket/scripts, the cluster creation fails with:
file: No such file or directory
Command exiting with ret '1'
I have found the workaround posted below, but I don't like it for the reasons specified. If you have a better solution, please post it, so that I can accept it.
I am using the following workaround in script.sh:
# Download the SQL file to a tmp directory.
tmpdir=$(mktemp -d "${TMPDIR:-/tmp/}$(basename "$0").XXXXXXXXXXXX")
aws s3 cp s3://bucket/scripts/file "${tmpdir}"
# Run my command
xxx -f "${tmpdir}/file"
# Clean up
rm -r "${tmpdir}"
This approach works, but:
Running script.sh locally means that I have to upload the files to S3 first, which makes development harder.
There are actually a few files involved...
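One possible way to soften both points (a sketch, not the workaround above): pull the whole s3://bucket/scripts prefix in one call, and fall back to local copies when the helper files already sit next to script.sh, so the same script runs on a laptop and on the cluster. The file name and the xxx command are placeholders from the question; the if/else toggle is an assumption.

# Fetch helper files into a temp directory, or use local copies when present.
workdir=$(mktemp -d "${TMPDIR:-/tmp/}$(basename "$0").XXXXXXXXXXXX")
if [ -e ./file ]; then
    cp ./file "$workdir/"                                      # local development
else
    aws s3 cp --recursive s3://bucket/scripts/ "$workdir/"     # on the EMR cluster
fi
xxx -f "$workdir/file"
rm -r "$workdir"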
I have a devbox that I ssh into as the Jenkins user, and as the title says, I want to run a bash script that will move to a specific directory and remove the oldest directory. I know the location of the specific directory.
For example,
ssh server [move/find/whatever into home/deploy and find the oldest directory in deploy and delete it and everything inside it]
Ideally this is a one-liner. I'm not sure how to run multiple lines over ssh as part of a Jenkins task. I read some Stack Overflow posts on it, specifically about 'here documents', but I don't understand them.
The file structure looks like home/deploy, and the deploy directory contains 3 folders: oldest, new, and newest. It should pick out the oldest (by its creation date) and rm -rf it.
I know this task removes the oldest directory:
rm -R $(ls -lt | grep '^d' | tail -1 | tr " " "\n" | tail -1)
Is there any way I can adjust the above code to remove a directory inside of a directory that I know?
You could pass a script to ssh. Save the script below as delete_oldest.sh:
#!/bin/bash
cd ~/deploy
rm -R "$(ls -td */ | tail -n 1)"
Then pass it to ssh like this:
ssh server -your-arguments-here < delete_oldest.sh
Edit:
If you wish to place the script on the remote machine, first copy it from your local machine to your home folder on the remote machine using scp:
scp delete_oldest.sh your_user_name@remotemachine:~
Then you can run it with:
ssh your_user_name@remotemachine './delete_oldest.sh'
'./delete_oldest.sh' assumes that you're currently in your home folder on the remote machine, which will be the case when you use ssh, since the default landing directory is always the home folder.
Please try it with a test folder before you proceed.
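If you prefer the one-liner the question asks for, the same commands can be passed inline (a sketch; "server" stands for your user@host):

ssh server 'cd ~/deploy && rm -R "$(ls -td -- */ | tail -n 1)"'

The single quotes ensure the command substitution runs on the remote machine rather than locally.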
I'm currently running a small database on a CentOS 7 server.
I have one script for creating backups and another for uploading them to Google Drive using grive. However, the upload script only uploads my files when I run it manually (bash /folder/script.sh). When it is run via crontab, the script runs but it won't upload. I can't find any error messages in /var/log/cron or /var/log/messages.
Cron log entry:
Dec 7 14:09:01 localhost CROND[6409]: (root) CMD (/root/backupDrive.sh)
Here is the script:
#!/bin/bash
# Get latest file
file="$(ls -t /backup/database | head -1)"
echo "$file"
# Upload file to G-Drive
cd /backup/database && drive upload -f "$file"
Use the full path to the drive binary in the script, or add its directory to $PATH. Cron jobs run with a minimal PATH, so commands that work in your interactive shell may not be found.
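For example, a sketch of the same script with an explicit path; /usr/local/bin/drive is only an assumed location, check it with "which drive" on your machine:

#!/bin/bash
# Assumption: drive was installed to /usr/local/bin; adjust to the output of "which drive".
export PATH="$PATH:/usr/local/bin"
# Get latest file
file="$(ls -t /backup/database | head -1)"
echo "$file"
# Upload file to G-Drive, calling the binary by its absolute path so cron can find it
cd /backup/database && /usr/local/bin/drive upload -f "$file"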
I'd like to write a bash script that recursively lists all files (with full paths) on an sftp server and then works with those paths locally, so the only thing sftp is needed for is getting the paths. Unfortunately, "ls -R" doesn't work there.
Any idea how to do that, ideally with a basic proof of concept, would be really appreciated.
Available commands:
bye Quit sftp
cd path Change remote directory to 'path'
chgrp grp path Change group of file 'path' to 'grp'
chmod mode path Change permissions of file 'path' to 'mode'
chown own path Change owner of file 'path' to 'own'
df [-hi] [path] Display statistics for current directory or
filesystem containing 'path'
exit Quit sftp
get [-Ppr] remote [local] Download file
help Display this help text
lcd path Change local directory to 'path'
lls [ls-options [path]] Display local directory listing
lmkdir path Create local directory
ln [-s] oldpath newpath Link remote file (-s for symlink)
lpwd Print local working directory
ls [-1afhlnrSt] [path] Display remote directory listing
lumask umask Set local umask to 'umask'
mkdir path Create remote directory
progress Toggle display of progress meter
put [-Ppr] local [remote] Upload file
pwd Display remote working directory
quit Quit sftp
rename oldpath newpath Rename remote file
rm path Delete remote file
rmdir path Remove remote directory
symlink oldpath newpath Symlink remote file
version Show SFTP version
!command Execute 'command' in local shell
! Escape to local shell
? Synonym for help
This recursive script does the job:
#!/bin/bash
#
URL=user@XXX.XXX.XXX.XXX
TMPFILE=/tmp/ls.sftp
# Batch file containing the single command to run in each remote directory
echo 'ls -1l' > $TMPFILE
function handle_dir {
    echo "====== $1 ========="
    local dir=$1
    # Run the batch file against the given remote directory; skip the echoed command line
    sftp -b $TMPFILE "$URL:$dir" | tail -n +2 | while read info; do
        echo "$info"
        # Entries whose mode starts with 'd' are directories: recurse into them
        if egrep -q '^d' <<< $info; then
            info=$(echo $info)                     # squeeze repeated spaces
            subdir=$(cut -d ' ' -f9- <<< $info)    # the name is everything from field 9 on
            handle_dir "$dir/$subdir"
        fi
    done
}
handle_dir "."
Fill in URL with your sftp server's user and address.
I scanned the whole Internet and found a great tool: sshfs. It mounts the remote directory tree through SSHFS, a remote filesystem that uses the SFTP protocol to access remote files.
Once you've mounted the filesystem, you can use all the usual commands without having to care that the files are actually remote.
sshfs helps me a lot and may help you, too.
mkdir localdir
sshfs user#host:/dir localdir
cd localdir
find . -name '*'
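From there you can, for instance, dump the full relative paths into a local file and unmount when you're done (fusermount -u is the usual FUSE unmount on Linux; the output file name is just an example):

find . -type f > /tmp/remote_paths.txt   # paths of every remote file, relative to localdir
cd ..
fusermount -u localdir                   # unmount the sshfs mount when finished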