We have an Ubuntu server that runs Nginx for hosting web apps. We deploy to that server with a shell script that contains an rsync command. We only want to transfer files whose content has changed (not just metadata). But when I deploy after another user has done a deployment, all of my files are reported as changed. Because of this we can't tell whether only the latest changes are being deployed (and whether we are missing some files from a submodule). When I run rsync multiple times from my own environment, changes are reported as expected.
Example:
rsync -rltz --progress --stats --delete \
  --perms \
  --chmod=u=rwX,g=rwX,o=rX \
  --exclude='node_modules' \
  --rsh "ssh" \
  --rsync-path "sudo rsync" sourceDir user@domain:targetDir
Does anyone have any idea how files can be transferred from multiple users to a server only when there are content changes?
If I understand correctly, you are looking for the -c (--checksum) option.
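For example, adding -c to the command from the question makes rsync decide what to transfer by comparing a checksum of each file's contents instead of size and modification time, so a deployment by another user no longer makes unchanged files look modified:

rsync -rltzc --progress --stats --delete \
  --perms \
  --chmod=u=rwX,g=rwX,o=rX \
  --exclude='node_modules' \
  --rsh "ssh" \
  --rsync-path "sudo rsync" sourceDir user@domain:targetDir

Note that checksumming reads every file on both sides, so runs take noticeably longer on large trees.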
I'm nearly there, but stuck at the last hurdle.
$ /path/to/soffice.bin --version
This works both on my local machine (in a Docker container) and in the same container deployed on AWS Lambda.
However,
$ /path/to/soffice.bin \
--headless --invisible --nodefault --nofirststartwizard --nolockcheck --nologo --norestore --nosplash \
--convert-to pdf:writer_pdf_Export \
--outdir /tmp \
$filename \
2>&1 || true # avoid exit-on-fail
... fails with:
LibreOffice - dialog 'LibreOfficeDev 6.4 - Fatal Error':
'The application cannot be started. User installation could not be completed.'
Searching on Google, everything points towards a permissions issue with ~/.config/libreoffice.
And there is something strange going on with file permissions on the Lambda runtime.
Maybe it is attempting to read or write to a location to which it doesn't have access.
Is there any way to get it working?
The problem is that Lambda can only write to /tmp, but the default HOME is not /tmp.
adding
export HOME=/tmp
before calling /path/to/soffice.bin
should do the trick.
Also note that the first run will predictably produce an error (the exact cause is unclear), so you should handle a retry.
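A minimal sketch of what that can look like in the Lambda function's shell step, using the paths and $filename placeholder from the question:

export HOME=/tmp  # Lambda only allows writes under /tmp, so the LibreOffice user profile must go there

# the first invocation tends to fail while the user profile is created, so retry once
for attempt in 1 2; do
    /path/to/soffice.bin \
        --headless --invisible --nodefault --nofirststartwizard \
        --nolockcheck --nologo --norestore --nosplash \
        --convert-to pdf:writer_pdf_Export \
        --outdir /tmp \
        "$filename" && break
done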
I am running Debian with OMV (Openmediavault) and Owncloud set up. I would like to sync the filesystem tree with Owncloud's database, because OMV can alter the files without Owncloud updating its database. I was thinking about a bash script.
When I create, delete or move a file, it needs to be registered in Owncloud's database.
This is a little script I created for this purpose.
You will need the inotify-tools package, which provides inotifywait.
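On Debian it can be installed with, for example:

sudo apt-get install inotify-tools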
#!/bin/sh
DATADIR="/sharedfolders/Owncloud"

# watch the data dir recursively and emit one absolute path per change event
inotifywait -m -r -q -e moved_to,create,delete --format '%w%f' "$DATADIR" |
while read -r INOTIFYFILE ; do
    # strip the data dir prefix so the path matches what occ's --path option expects
    SCANFILE="${INOTIFYFILE#"$DATADIR"}"
    # register the detected file in Owncloud's database
    sudo -u www-data php /var/www/owncloud/occ files:scan --path="$SCANFILE"
done
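One way to keep the watcher running in the background (the script location and log path here are only examples):

chmod +x /usr/local/bin/owncloud-watch.sh   # assuming you saved the script there
nohup /usr/local/bin/owncloud-watch.sh >/var/log/owncloud-watch.log 2>&1 &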
Is it possible with Jenkins to deploy the artifacts of the last successful build if the current one fails at any point? If so how?
I'm currently using rsync to deploy my files from the workspace as one of my build steps.
I'm aware that there is a plugin called BuildResultTrigger which I guess I could use, but I have no idea how to access archived artifacts and tell my current build which one was the last successful build.
I recommend that when deploying, you copy the already deployed (successfully built and tested) version to someplace on the deployment machine.
Then, when running the tests on Jenkins (assuming you start the tests from the command line):
ssh deploymentmachine rm -rf /where/you/store/them
ssh deploymentmachine cp -R /where/you/deploy/them /where/you/store/them
rsync -rvz /where/jenkins/built/the/files deploymentmachine:/where/you/deploy/them
ssh deploymentmachine sh runtests.sh || {
    ssh deploymentmachine rm -rf /where/you/deploy/them
    ssh deploymentmachine cp -R /where/you/store/them /where/you/deploy/them
    ssh deploymentmachine rm -rf /where/you/store/them
    exit 1   # propagate the failure so the Jenkins build step fails
}
ssh deploymentmachine rm -rf /where/you/store/them
This should give you a failing (non-zero) exit status when the tests fail and re-deploy the last successful version.
Adapt the solution as needed (for example, you probably start the tests in another way than "sh runtests.sh", the deployment may require re-starting servers instead of just copying around files, and the directory paths need adjusting).
Just to run a test, I want to copy an image file from my Desktop into one of my instance folders.
I've tried the solutions provided in this question on the same topic:
Rsync to Amazon Ec2 Instance
So I've tried:
sudo rsync -azv --progress -e "ssh -i ~/.ssh/MyKeyPair.pem" \
  ~/Desktop/luffy.jpg \
  ec2-user@xx.xx.xx.xxx:/home/ec2-user/myproject/mysite/mysite/media
~/.ssh/ is where MyKeyPair.pem is located. In fact, to log in via ssh I first do cd ~/.ssh and then run the ssh -i ... command.
But I'm getting this error:
Warning: Identity file ~/.ssh/MyKeyPair.pem not accessible: No such file or directory.
Permission denied (publickey).
rsync: connection unexpectedly closed (0 bytes received so far) [sender]
rsync error: error in rsync protocol data stream (code 12) at io.c(600) [sender=3.0.6]
On another Q&A page I've read that someone who got this same error solved it by simply installing rsync via yum. In my case it is already installed (version 3.0.6).
I would be grateful if anyone can help!
For copying local files to EC2, the rsync command should be run on your local system, not on the EC2 instance.
The tilde (~) will not be shell expanded to your home directory if it is inside quotes. Try using $HOME instead.
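A quick way to see the difference (the expanded path shown is just an example home directory):

$ echo "ssh -i ~/.ssh/MyKeyPair.pem"        # ~ is left literal inside double quotes
ssh -i ~/.ssh/MyKeyPair.pem
$ echo "ssh -i $HOME/.ssh/MyKeyPair.pem"    # $HOME is expanded even inside double quotes
ssh -i /home/you/.ssh/MyKeyPair.pem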
If you are using sudo on the local side, then you probably want to use sudo on the remote (e.g., to copy over file ownerships). This can be done with the appropriate --rsync-path option.
I recommend including the options -SHAX to more closely preserve the files on the target system.
If "media" is supposed to be a subdirectory, then a trailing slash will help avoid some oddities if it does not currently exist.
End result:
sudo rsync -azv -SHAX --progress -e "ssh -i $HOME/.ssh/MyKeyPair.pem" \
--rsync-path "sudo rsync" \
~/Desktop/luffy.jpg \
ec2-user@xx.xx.xx.xxx:/home/ec2-user/myproject/mysite/mysite/media/
Here's an old article where I write about using rsync with EC2 instances. You can replace "ubuntu" with "ec2-user" for Amazon Linux.
http://alestic.com/2009/04/ubuntu-ec2-sudo-ssh-rsync
If this does not solve your problem, please provide more details about exactly what command you are running, where you are running it, and exactly what error messages you are getting.
Great! This worked with a slight modification: I removed sudo from the remote rsync path:
sudo rsync -azv --progress -e "ssh -i $HOME/<path_to>" \
--rsync-path "rsync" \
<source> \
<target>
I'm very new to lftp, so forgive my ignorance.
I just ran a dry run of my lftp script, which consists basically of a line like this:
mirror -Rv -x regexp --only-existing --only-newer --dry-run /local/root/dir /remote/dir
When it prints what it's going to do, it wants to chmod a bunch of files - files which I grabbed from svn, never modified, and which should be identical to the ones on the server.
My local machine is Ubuntu, and the remote is a Windows server. I have a few questions:
Why is it trying to do that? Does it try to match file permissions from the local with the remote?
What will happen when it tries to chmod the files? As I understand it, Windows doesn't support chmod - will it just fail gracefully and leave the files alone?
Many thanks!
Use the -p option and it shouldn't try to change permissions. I've never mirrored to a Windows host, but you are correct that it shouldn't do anything to the permission levels on the Windows box.
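If I'm reading the lftp manual correctly, -p is the short form of --no-perms, so applied to the command from the question (same placeholder paths and regexp) it becomes:

mirror -Rv -p -x regexp --only-existing --only-newer --dry-run /local/root/dir /remote/dir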
I think that you should try
lftp -e "mirror -R $localPath $remotePath; chmod -R 777 $remotePath; bye" -u $username,$password $host