nemo script for torrents - shell

Hi, I am new to scripting, and I do mean a complete newbie. I am working on a script to automatically make a torrent with nemo scripts.
#!/bin/bash
DIR="$NEMO_SCRIPT_SELECTED_FILE_PATHS"
BNAME=$(basename "$DIR")
TFILE="$BNAME.torrent"
TTRACKER="http://tracker.com/announce.php"
USER="USERNAME"
transmission-create -o "/home/$USER/Desktop/$TFILE" -t "$TTRACKER" "$DIR"
It does not work.
However if I replace
DIR="$NEMO_SCRIPT_SELECTED_FILE_PATHS"
with
DIR="absolutepath"
then it works like a charm. It creates the torrent on the desktop with the tracker I want. I think this would come in handy for many people. I don't really know what else to put. Have questions? Please ask. Again, complete newbie.

The $NEMO_SCRIPT_SELECTED_FILE_PATHS is the same as $NAUTILUS_SCRIPT_SELECTED_FILE_PATHS. It's populated by nemo/nautilus when you run the script and contains a newline-delimited (I think) list of the selected files/folders. Assuming you are selecting only one file or folder, I don't really see why it wouldn't work - unless the newline character is in there and causing problems. If that's the case, you may be able to strip it with sed. Not running nemo or nautilus, so I can't test it.
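If the stray newline is the culprit, one hedged way to check (assuming only a single file or folder is selected) is to strip it before using the value; `tr -d '\n'` would mangle a multi-file selection, so this is strictly a single-selection experiment:

```shell
#!/bin/bash
# Single-selection sketch: remove any trailing newline(s) from the variable
# before treating it as one path.
DIR=$(printf '%s' "$NEMO_SCRIPT_SELECTED_FILE_PATHS" | tr -d '\n')
echo "cleaned path: $DIR"
```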

I finally found the solution to your problem and mine: https://askubuntu.com/questions/243105/nautilus-scripts-nautilus-script-selected-file-paths-have-problems-with-spac
The variable $NEMO_SCRIPT_SELECTED_FILE_PATHS/$NAUTILUS_SCRIPT_SELECTED_FILE_PATHS is a list of paths/filenames separated by a newline. This messes up anything that assumes it's just one filename, even if it is.
#!/bin/bash
echo "$NEMO_SCRIPT_SELECTED_FILE_PATHS" | while IFS= read -r DIR; do
    BNAME=$(basename "$DIR")
    TFILE="$BNAME.torrent"
    TTRACKER="http://tracker.com/announce.php"
    USER="USERNAME"
    transmission-create -o "/home/$USER/Desktop/$TFILE" -t "$TTRACKER" "$DIR"
done
Notice that it seems to do an extra pass for the trailing newline. You either need to filter that out or add a check that the file/folder exists.
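A sketch of the same loop with that existence check added; transmission-create is replaced by echo here so the skeleton can be tried safely (swap the echo back for the real command):

```shell
#!/bin/bash
echo "$NEMO_SCRIPT_SELECTED_FILE_PATHS" | while IFS= read -r DIR; do
    [ -e "$DIR" ] || continue            # skips the empty pass from the trailing newline
    BNAME=$(basename "$DIR")
    TFILE="$BNAME.torrent"
    # replace this echo with the real transmission-create invocation
    echo "would create: $HOME/Desktop/$TFILE from $DIR"
done
```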

copy paste code works but not as a script

I wrote a script with six if statements that looks like this:
#!/usr/bin/bash
if [ -n $var1 ]
then
    for f in /path/*.fastq.gz
    do
        x=${f/%.fastq.gz/_sample1-forward.fastq.gz}
        y=${f/%.fastq.gz/_sample1-forward.out}
        q=${f/%.fastq.gz/_temp.fastq.gz}
        command [options] -i $f -o $temp${x##*/}
        cp $temp${x##*/} $temp${q##*/}
    done
else
    echo "no $var1"
    for f in /path/*.fastq.gz
    do
        q=${f/%.fastq.gz/_temp.fastq.gz}
        cp $f $temp${q##*/}
    done
fi
The other five statements do a similar task for var2 through var6. When I run the script I get unexpected output (no errors, no warnings), but when I copy-paste each of the if statements into the terminal I end up with exactly the result I would expect. I've looked for hidden characters or syntax issues for hours now. Could this be a shell issue? The script was written on OS X (default zsh) and executed on a server (default bash). I have a feeling this issue is similar, but I couldn't find an answer to my problem in the replies.
Any and all ideas are most welcome!
Niwatori
You should maybe look at the shebang. I think proper usage would be #!/usr/bin/env bash or #!/bin/bash.
Thanks for the help, much appreciated. The shebang didn't seem to be the problem, although thanks for pointing that out. Shellcheck.net reminded me to use [[ ]] for unquoted variables, but that didn't solve the problem either. Here's what went wrong and how I 'fixed' it:
For every variable the same command (tool) is used, which relies on a support file (similar in format but different in content). Originally, before every if statement I replaced the support file for the previous variable with the one needed for the current variable. For some reason (curious why; any thoughts are welcome) this didn't always happen correctly.
As a quick workaround I made six versions of the tool, each with a different support file, and used PYTHONPATH=/path/to/version/:$PYTHONPATH before every if statement. Best practice would be to adapt the tool so it can use different support files, or add an option that deals with repetitive tasks, but I don't have the time at the moment.
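For anyone puzzled by the shellcheck warning mentioned above: with an empty variable, the unquoted `[ -n $var1 ]` is always true, because `$var1` word-splits away entirely and `[ -n ]` becomes a one-argument test on the literal string "-n". A small demo:

```shell
#!/bin/bash
var1=""
[ -n "$var1" ] && echo "quoted: non-empty"     # prints nothing: the empty string fails -n
[[ -n $var1 ]] && echo "[[ ]]: non-empty"      # prints nothing: [[ ]] does not word-split
[ -n $var1 ]   && echo "unquoted: non-empty"   # prints! expands to [ -n ], which is true
```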
Have a nice day,
Niwatori

Batch remove former of two file extensions

I am working with many pictures on Mac OS X 10.12.
In order to do some image analysis I need to change the format from .JPG to .gif.
Using ImageMagick I did that relatively quickly, and now I have multiple files with the double extension *.JPG.gif.
I would like to remove the ".JPG" part from the file names, but for some reason what I am doing is not working. (I should say that this step is probably not critical to what I have to do next, but since I have a lot of files, simplifying their names as much as possible is probably best. I should also say that I have all the superuser permissions, and none of the file names actually contain breaks or spaces, so even adding quotes to my code doesn't change anything.)
Here is what I am trying using a bash script:
#!/bin/bash
for file in /folder/*.JPG.gif
do
mv $file ${file#.JPG}
done
My understanding is that this code should remove the .JPG part from $file, matching from the front of the file's name. And yet when I run ls to see if the program did what it is supposed to do, all the names are still there with the double extension.
Any help is greatly appreciated.
Modify your mv command like this:
#!/bin/bash
for file in /folder/*.JPG.gif
do
mv "$file" "${file/\.JPG}"
done
Your initial code uses ${file#pattern}, an expansion that only removes a match from the beginning of the string. The expansion above removes the first match anywhere inside the string.
Please note that this is not very robust: if ".JPG" appears anywhere in your path or filenames other than at the end, it will not do what you want. Quoting, even if not strictly necessary in your case right now, is still good practice; things change, and code gets copied and pasted.
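A slightly more robust sketch anchors the match at the end of the name instead, so a ".JPG" earlier in the path can never be touched (the /folder path is the placeholder from the question; the existence check skips the literal pattern when nothing matches):

```shell
#!/bin/bash
for file in /folder/*.JPG.gif; do
    [ -e "$file" ] || continue           # glob matched nothing; skip the literal pattern
    mv "$file" "${file%.JPG.gif}.gif"    # strip the whole suffix, put ".gif" back
done
```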

Loop through a directory with Grep (newbie)

I'm trying to loop through the current directory that the script resides in, which has a bunch of files that end with _list.txt. I would like to grep each file name, assign it to a variable, execute some additional commands, and then move on to the next file until there are no more _list.txt files to be processed.
I assume I want something like:
while file_name=`grep "*_list.txt" *`
do
Some more code
done
But this doesn't work as expected. Any suggestions of how to accomplish this newbie task?
Thanks in advance.
If I understand your problem correctly, you don't need grep. You can just do:
for file in *_list.txt
do
# use "$file", e.g. echo "$file"
done
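One detail worth guarding against in the loop above: if no _list.txt files exist, the unexpanded pattern itself is passed through the loop as a literal string. A minimal sketch with that guard (in bash, `shopt -s nullglob` is an alternative):

```shell
#!/bin/bash
for file in *_list.txt; do
    [ -e "$file" ] || continue   # no matches: the literal pattern falls through
    echo "processing $file"
    # ... additional commands using "$file" go here ...
done
```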
grep is one of the most useful Unix commands and is worth knowing well; see some useful examples here. As for your current requirement, I think the following code will be useful:
for file in *.*
do
echo "Happy Programming"
done
In place of *.* you can also use other glob patterns. For more such examples, see First Time Linux, or read about all of grep's options in man grep.

Writing a simple backup script in bash and sending echo output to email

I've written a script that backs up my financial spreadsheet document to another hard drive and another computer. I have also set up my server with email to send cron job messages to my email instead of system mail. In my script, I can't figure out how to use if/then to compare the date of the backed-up file with the current date. Here is the script:
#!/bin/bash
mount -t cifs //IP/Share /Drive/Directory -o username=username,password=password
cp /home/user/Desktop/finances10.ods /media/MediaConn/financesbackup/Daily\ Bac$
cp /home/user/Desktop/finances10.ods /Drive/Directory/FinancesBackup/Daily\ Backup/
umount /Drive/Directory
export i=`stat -c %y /media/MediaConn/financesbackup/Daily\ Backup/finances10.o$
export j=`date +%d`
if ["$i"="$j"]; then
echo Your backup has completed successfully. Please check the Daily Backup fo$
echo This message is automated by the system and mail is not checked.
else
echo Your backup has failed. Please manually backup the financial file to the$
echo drive. This message is automated by the system and mail is not checked.
fi
Pretty simple script. The output is sent by email because it's a cron job. If anyone could help, I would greatly appreciate it. Thanks in advance.
Your code is all messed up in the post, but anyway... you should probably compare the output of 'stat -c %Y' (not %y) to the output of 'date +%s' (not %d).
But, even better, use something like md5sum or sha1sum to make sure the backed up file really matches the original.
I would strongly recommend checking that each command in your script has succeeded. Otherwise the script will carry on blindly and (at best) exit with a success code or (at worst) do something completely unexpected.
Here's a decent tutorial on capturing and acting on exit codes.
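As a hedged sketch of acting on an exit status instead of carrying on blindly (mktemp stands in here for the real spreadsheet and backup paths):

```shell
#!/bin/bash
src=$(mktemp)        # stand-in for /home/user/Desktop/finances10.ods
dst=$(mktemp -d)     # stand-in for the backup directory

# abort loudly if the copy fails, rather than continuing the script
cp "$src" "$dst/" || { echo "copy failed; aborting backup" >&2; exit 1; }

echo "backup step succeeded"
```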
This line needs spaces around the brackets and equal sign:
if [ "$i" = "$j" ]; then
There's no need to export the variables in this context.
You should use the format codes that vanza suggested, since they correspond to the same format which is the number of seconds since the Unix epoch.
You should put quotes around the messages that you echo:
echo "some message"
When you pasted your code (apparently copied from nano), it got truncated. It might work better if you list it using less and copy it from there since less is likely to wrap lines rather than truncate them.
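Putting the %Y/%s suggestion together, a sketch of the comparison (assumes GNU stat and date, as on a typical Linux server; mktemp stands in for the real backup file, and the threshold of one day is an assumption):

```shell
#!/bin/bash
backup=$(mktemp)             # stand-in for the backed-up spreadsheet
i=$(stat -c %Y "$backup")    # file mtime, seconds since the Unix epoch
j=$(date +%s)                # current time, same unit
if [ $(( j - i )) -lt 86400 ]; then
    echo "Your backup has completed successfully."
else
    echo "Your backup has failed."
fi
```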
Thanks for all of the input. Sorry, I did copy and paste from nano, lol, didn't realize it was truncated. All of your advice was very helpful. I was able to do what I wanted using the format I had, just by putting spaces around the brackets and the equal sign. I've never used md5sum or sha1sum but will check them out. Thanks again for all your help, it's working great now!

Can a shell script indicate that its lines be loaded into memory initially?

UPDATE: this is a repost of How to make shell scripts robust to source being changed as they run
This is a little thing that bothers me every now and then:
I write a shell script (bash) for a quick and dirty job
I run the script, and it runs for quite a while
While it's running, I edit a few lines in the script, configuring it for a different job
But the first process is still reading the same script file and gets all screwed up.
Apparently, the script is interpreted by loading each line from the file as it is needed. Is there some way that I can have the script indicate to the shell that the entire script file should be read into memory all at once? For example, Perl scripts seem to do this: editing the code file does not affect a process that's currently interpreting it (because it's initially parsed/compiled?).
I understand that there are many ways I could get around this problem. For example, I could try something like:
cat script.sh | sh
or
sh -c "`cat script.sh`"
... although those might not work correctly if the script file is large and there are limits on the size of stream buffers and command-line arguments. I could also write an auxiliary wrapper that copies a script file to a locked temporary file and then executes it, but that doesn't seem very portable.
So I was hoping for the simplest solution that would involve modifications only to the script, not the way in which it is invoked. Can I just add a line or two at the start of the script? I don't know if such a solution exists, but I'm guessing it might make use of the $0 variable...
The best answer I've found is a very slight variation on the solutions offered to How to make shell scripts robust to source being changed as they run. Thanks to camh for noting the repost!
#!/bin/sh
{
    # Your stuff goes here
    exit
}
This ensures that all of your code is parsed initially; note that the 'exit' is critical to ensuring that the file isn't accessed later to see if there are additional lines to interpret. Also, as noted on the previous post, this isn't a guarantee that other scripts called by your script will be safe.
Thanks everyone for the help!
Use an editor that doesn't modify the existing file, and instead creates a new file then replaces the old file. For example, using :set writebackup backupcopy=no in Vim.
How about a solution on the editing side instead?
If the script is running, before editing it, do this:
mv script script-old
cp script-old script
rm script-old
Since the shell keeps the file open, as long as you don't change the contents of the open inode, everything will work okay.
The above works because mv preserves the old inode while cp creates a new one. Since a file's contents are not actually removed while the file is open, you can remove it right away and it will be cleaned up once the shell closes the file.
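The inode behaviour described above is easy to observe directly (this demo assumes GNU stat's -c %i; the files are temporary ones created just for the experiment):

```shell
#!/bin/bash
f=$(mktemp)
before=$(stat -c %i "$f")    # inode number of the original file
mv "$f" "$f.old"             # mv keeps the inode, just renames the entry
cp "$f.old" "$f"             # cp allocates a fresh inode for the new copy
rm "$f.old"
after=$(stat -c %i "$f")
echo "inode before: $before, after: $after"   # the numbers differ
```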
According to the bash documentation if instead of
#!/bin/bash
body of script
you try
#!/bin/bash
script=$(cat <<'SETVAR'
body of script
SETVAR
)
eval "$script"
then I think you will be in business.
Consider creating a new bang path for your quick-and-dirty jobs. If you start your scripts with:
#!/usr/local/fastbash
or something, then you can write a fastbash wrapper that uses one of the methods you mentioned. For portability, one can just create a symlink from fastbash to bash, or have a comment in the script saying one can replace fastbash with bash.
If you use Emacs, try M-x customize-variable break-hardlink-on-save. Setting this variable will tell Emacs to write to a temp file and then rename the temp file over the original instead of editing the original file directly. This should allow the running instance to keep its unmodified version while you save the new version.
Presumably, other semi-intelligent editors would have similar options.
A self contained way to make a script resistant to this problem is to have the script copy and re-execute itself like this:
#!/bin/bash
if [[ $0 != /tmp/copy-* ]]; then
    rm -f "/tmp/copy-$$"
    cp "$0" "/tmp/copy-$$"
    exec "/tmp/copy-$$" "$@"
    echo "error copying and execing script" >&2
    exit 1
fi
rm "$0"
# rest of script...
(This will not work if the original script begins with the characters /tmp/copy-)
(This is inspired by R Samuel Klatchko's answer)
