I am running Debian with an OpenMediaVault (OMV) and ownCloud setup. I would like to keep the filesystem tree in sync with ownCloud's database, because OMV can alter files without ownCloud updating its database. I was thinking of a bash script.
When I create, delete, or move a file, the change needs to be registered in ownCloud's database.
This is a little script I created for this purpose.
You will need the inotify-tools package, which provides inotifywait.
#!/bin/sh
DATADIR="/sharedfolders/Owncloud"
# Watch the data directory recursively and print the full path of every
# file that is created, deleted, or moved into it (-q hides inotifywait's
# own informational messages).
inotifywait -m -r -q -e moved_to,create,delete --format '%w%f' "$DATADIR" |
while read -r INOTIFYFILE ; do
    # Strip the data dir prefix so the path matches what the occ --path option expects.
    SCANFILE="${INOTIFYFILE##"$DATADIR"}"
    # Scan only the changed path so ownCloud updates its database.
    sudo -u www-data php /var/www/owncloud/occ files:scan --path="$SCANFILE"
done
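To keep the watcher running after you log out, you can start it in the background. This is just a minimal sketch, assuming you saved the script as /usr/local/bin/owncloud-inotify.sh (that path and the log location are my own choices, not anything OMV or ownCloud ships):
chmod +x /usr/local/bin/owncloud-inotify.sh
nohup /usr/local/bin/owncloud-inotify.sh >/var/log/owncloud-inotify.log 2>&1 &
For something more permanent you could wrap the same command in a systemd unit or an @reboot cron entry.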
I've created a simple bash script that grabs some data and then outputs it to a log file. When I run the script without sudo, it fails to write to the logs and says they are write-protected. It then asks me if it should remove the write protection, but this fails (permission denied).
If I run the script as sudo, it appears to work without issue. How can I make these log files available to the script?
cd /home/pi/scripts/powermonitor/
python /home/pi/powermonitor/plugpower.py > plug.log
echo -e "$(sed '1d' /home/pi/scripts/powermonitor/plug.log)\n" > plug.log
sed 's/^.\{139\}//' plug.log > plug1.log
rm plug.log
grep -o -E '[0-9]+' plug1.log > plug.log
rm plug1.log
sed -n '1p' plug.log > plug1.log
rm plug.log
perl -pe '
I was being dumb. I just needed to set the write permissions on the log files.
The ability to write a file depends on the permissions assigned to that file or, if the file does not exist yet and you want to create it, on the permissions of the directory in which you want to write it. If you use sudo, you temporarily become the root user, and root can read/write/execute any file without restriction.
If you first run your script using sudo and the script ends up creating a file, that file will probably be owned by root and will not be writable by your usual user. If you run the script without sudo, it runs as the user you connected to the machine as, and that user needs permission to write the log files.
You can change the ownership and permissions of directories and files with the chown, chmod, and chgrp commands. If you always want to run your script with sudo, you don't have much to worry about. If you want to run it without sudo, it runs as some other user, and that user, whoever it is, needs write permission on the files/folders where the log files get written.
For instance, if I wanted to run the script as user sneakyimp and wanted the files written to /home/sneakyimp/logs/ then I'd need to make sure that directory was writable by sneakyimp:
sudo chown -R sneakyimp:sneakyimp /home/sneakyimp/logs
This command changes ownership of that directory and its contents to the user sneakyimp. You might also need to run some chmod commands to make sure they are writable by owner.
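For example, to make sure the owner can write that directory and everything already in it (just a sketch; adjust the path to wherever your logs actually live):
chmod -R u+rwX /home/sneakyimp/logs
The capital X adds execute (search) permission only on directories and on files that are already executable, so regular log files don't end up marked executable.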
I have been trying to get this running for over a week and am at a loss. I am trying to call a script on completion by using the settings.json config like so:
"script-torrent-done-enabled": true,
"script-torrent-done-filename": "/posttorrent.sh",
My script is in the root of the jail and is owned by transmission. I have also checked permissions on the file (which are 755) and have run chmod +x /posttorrent.sh.
I have even simplified the file to just output to a log file like so:
#!/bin/bash
echo "$TR_TORRENT_NAME is completed" >> /posttorrent.log
However, thus far I still do not have a posttorrent.log file anywhere, no matter what file I download. I am not totally sure I am on the right track: Transmission is set to log level 3, yet I do not even see the calls to the script in /var/log/debug.log. I am sure I am missing something easy, as others have been able to get this working; I am just out of options now, as I think I have read and/or tried everything I could find in relation to this issue. Thanks!
The script is relying on /bin/bash being present inside of the jail. You can either change the script to use /bin/sh, change /bin/bash to /usr/bin/env bash, or link /path/to/jail/bin/bash to /usr/local/bin/bash (or wherever bash is located relative to the jail directory; if it is installed, it should be in /usr/local/bin):
ln -s /usr/local/bin/bash /path/to/jail/bin/bash
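For the first two options, only the script's shebang line changes; either of these first lines would work (the second requires bash to be somewhere on the PATH inside the jail):
#!/bin/sh
# ...or, if you want to keep bashisms:
#!/usr/bin/env bash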
Also, the root directory (by default) is only writable by root, so the transmission user would not have permission to create the log file in the root directory. To allow the log file to be created, change the destination directory to one that the transmission user has permission to write to, for example /var/db/transmission/posttorrent.log if using the FreeNAS plugin.
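With that change, the test script's echo line would become, for example:
echo "$TR_TORRENT_NAME is completed" >> /var/db/transmission/posttorrent.log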
A directory can be created for the transmission user using the install utility:
install -d -o transmission -g transmission /home/transmission
Alternatively, the log file can be created manually using the install utility, or the owner can be set with chown:
install -o transmission -g transmission -m 644 /dev/null /posttorrent.log
# or on an existing log file
chown transmission /posttorrent.log
chgrp transmission /posttorrent.log
# normally the mode bits will already be 644
chmod 644 /posttorrent.log
Transmission will also rewrite the configuration file when it exits, so transmission-daemon has to be stopped before editing the settings file. However, if using the Transmission plugin, the settings are stored in a SQLite database (/usr/pbi/transmission-amd64/transmissionUI/transmission.db) and the settings file will be recreated from the database on startup. sqlite3 can be used to manually edit the database, or the plugin's settings can be edited in the FreeNAS web UI.
sqlite3 /usr/pbi/transmission-amd64/transmissionUI/transmission.db <<EOF
UPDATE freenas_transmission SET enable=1;
UPDATE freenas_transmission SET script_post_torrent="/posttorrent.sh";
EOF
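To double-check the values before restarting the plugin, a query along these lines should work (same database, table, and columns as above):
sqlite3 /usr/pbi/transmission-amd64/transmissionUI/transmission.db "SELECT enable, script_post_torrent FROM freenas_transmission;"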
I have a pre-commit hook that runs mysqldump to keep a dump of my MySQL database under version control.
I'm trying to add that dump to the commit, but for some reason it won't be added.
The code:
#!/bin/sh
rm -f database.sql
exec "C:\Program Files\MySQL\MySQL Server 5.5\bin\mysqldump.exe" --skip-comments -u root --password=password my-database > database.sql
git add database.sql
The file is created, but not added to the commit.
Running TortoiseGit on Windows 7.
I don't know if it will help you, but here is a step-by-step guide to how it works on my Windows 10 machine with XAMPP Lite.
go to your project's .git directory
go to "hooks\"
create a file named "pre-commit" (without a file extension)
go to the file's properties and give your Windows user full access
open pre-commit and paste:
#!/bin/sh
"C:\xampplite\mysql\bin\mysqldump.exe" -u dbuser -ppassword database_name > database_name.sql
git add database_name.sql
exit 0
(The pattern is: -u username -ppassword databasename > file.sql.)
The dump file will be stored in the project root. Note that there is NO space between -p and the password.
Now a MySQL dump will be made and added to the commit every time you commit. (For what it's worth, the exec in your original hook replaces the shell process with mysqldump, so the git add line after it never runs; the version above simply drops the exec.)
I'm very new to lftp, so forgive my ignorance.
I just ran a dry run of my lftp script, which consists basically of a line like this:
mirror -Rv -x regexp --only-existing --only-newer --dry-run /local/root/dir /remote/dir
When it prints what it's going to do, it wants to chmod a bunch of files - files which I grabbed from svn, never modified, and which should be identical to the ones on the server.
My local machine is Ubuntu, and the remote is a Windows server. I have a few questions:
Why is it trying to do that? Does it try to match file permissions from the local with the remote?
What will happen when it tries to chmod the files? As I understand it, Windows doesn't support chmod - will it just fail gracefully and leave the files alone?
Many thanks!
Use the -p option (--no-perms) and it shouldn't try to change permissions. I've never mirrored to a Windows host, but you are correct that it shouldn't do anything to the permission levels on the Windows box.
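In other words, your original command with -p added (I have not tested this against a Windows host):
mirror -Rv -p -x regexp --only-existing --only-newer --dry-run /local/root/dir /remote/dir
Drop --dry-run once the output looks right.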
I think that you should try
lftp -e "mirror -R $localPath $remotePath; chmod -R 777 $remotePath; bye" -u $username,$password $host
OK, so I kind of know how to do this locally with a find and then a cp command, but I don't know how to do the same remotely with scp.
So far I have this:
scp -vp me@server:/target/location/ /destination/dir/.
That target directory is going to be full of database backups, how can I tell it to find the latest backup, and scp that locally?
remote_dir=/what/ever
dst=remote-system.host.name.com
# ask the remote host to list the directory newest-first, then copy the first (most recent) entry
scp "$dst:$(ssh "$dst" ls -1td "$remote_dir"/\* | head -1)" /tmp/lastmod
Write a script on the remote side that uses find to locate the latest backup and cat to send it to stdout, then run:
ssh me@server runscript.sh > localcopy
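A minimal sketch of such a remote script, assuming GNU find and sort on the server and that the backups are regular files directly under /target/location (the directory from the question; everything else here is hypothetical):
#!/bin/sh
# runscript.sh (hypothetical): stream the newest backup in /target/location to stdout
backup_dir=/target/location
newest=$(find "$backup_dir" -maxdepth 1 -type f -printf '%T@ %p\n' | sort -nr | head -n 1 | cut -d' ' -f2-)
cat "$newest"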