mktemp with extension without specifying file path - bash

I'll preface this by saying that I found identical questions, but none of them have answers that work for me.
I need to make a temporary .json file (it needs to be json because I'll be working with jq later in the script).
I thought, based on the answers to this question, that it would be one of the following, but they create files literally named .json and XXXXXXXX.json, respectively:
STACKS=$(mktemp .json)
STACKS=$(mktemp XXXXXXXX.json)
This will need to run on both macOS and a Linux box. I can't hard-code a path for the file because the script will be run both locally and by Jenkins, which don't have identical file structures. What's the proper syntax?

If you are using OpenBSD mktemp, you can do
STACKS="$(mktemp XXXXXX).json"
and then write a trap so the temp files are removed when the script finishes:
function cleanup {
  if [ -f "$STACKS" ] && [[ "$STACKS" =~ \.json$ ]]; then
    rm -f "$STACKS"
  fi
}
trap cleanup EXIT
So when the script finishes (no matter how), it will try to remove $STACKS if it is a file and its name ends with .json (for extra safety).
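One caveat with the line above: mktemp creates the extensionless temp file, and the .json path only comes into existence when you first write to $STACKS. A minimal sketch of a variant that renames the temp file so the .json file itself exists immediately (the stacks- prefix is just illustrative; GNU mktemp also offers --suffix=.json if you only target Linux):
tmp="$(mktemp ./stacks-XXXXXX)"   # temp file created in the current directory
STACKS="$tmp.json"
mv "$tmp" "$STACKS"               # the .json file now exists on disk, nothing left behind
trap 'rm -f "$STACKS"' EXIT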

Related

Variable for a right-clicked item (say, a jpg) in a bash script?

I have a very simple bash script that I run often from the CLI, but I've found it frustrating to have to open a terminal, identify the right file, and run it, and I think the easiest way would be to run it as an option from a right-click. I am running Ubuntu 18.04 LTS.
The script is just erasing exif data, leaving the orientation tags, essentially this:
exiftool -all= -tagsfromfile @ -Orientation file-*.jpg
Is there a way to have the script identify which image I'm right clicking on? I'm at a loss what to put in the file-*.jpg part which will be a variable for "whatever image I'm right-clicking on right now."
Tried searching for a good while on how to do this but am clearly either not using the right search terms or else this isn't done very often. Thank you in advance for any help!
If you want your script to run from the file manager's right-click menu, you have to change your script to take the file(s) as arguments. This simply means replacing the hard-coded file part with the positional parameters $1 to $n.
As far as I know, Ubuntu uses Nautilus as its file manager.
You can run nautilus-actions-config-tool either from your terminal or from the Dash and give your script a name and a command to run. You can follow this link for an illustrated walkthrough:
Ubuntu Nautilus: define script in menu
For example:
#!/bin/bash
if [ "$1" != "" ]; then
  echo "Positional parameter 1 contains value $1"
else
  echo "Positional parameter 1 is empty"
fi
For all arguments:
#!/bin/bash
if [[ "$#" -gt 0 ]]; then
  for arg in "$@"; do
    echo "$arg"
  done
fi
I know the question is a little old, but I can provide you with the solution.
You have to set up FileManager-actions, an extension for GNOME/Nautilus (but it also works for other file managers).
Setup filemanager-actions on Ubuntu 20.04
sudo apt update
sudo apt install filemanager-actions
Then run fma-config-tool and create a new action.
When creating an action, please ensure that:
[v] Display item in selection context menu
is flagged; otherwise, you will not see the context menu during the file selection.
Prepare the script you want to execute
Prepare a script that does what you need. Create it in /tmp, move it to /usr/bin, and give it execute permissions:
touch /tmp/your-script
# edit it with your editor
sudo mv /tmp/your-script /usr/bin/
sudo chmod +x /usr/bin/your-script
In your script, you can reference the filename using $1.
FILENAME="$1"
echo "$FILENAME"
In the variable FILENAME you will find the selected file name.
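Applied to the original exiftool question, a sketch of what the right-click script body might look like (quoting matters here because right-clicked files often have spaces in their names):
#!/bin/bash
# Strip all EXIF data from the right-clicked image, keeping Orientation.
# The file manager passes the selected file as $1 (via the %f placeholder below).
FILENAME="$1"
exiftool -all= -tagsfromfile @ -Orientation "$FILENAME"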
Configure Nautilus-action command
To let Nautilus pass the filename, insert the script path and the argument string in the 'Command' tab.
To fully answer your question, to let Nautilus pass the filename as a script argument, you have to specify %f.
At this point, quit the Nautilus instance and open it again:
nautilus -q
nautilus
Now, let's have a try! Right-click on a file and check out the result!
Appendix 1 - Filemanager-actions formats
%f A single file name, even if multiple files are selected.
%F A list of files. Each file is passed as a separate argument to the executable program.
%u A single URL. Local files may either be passed as file: URLs or as file path.
%U A list of URLs. Each URL is passed as a separate argument to the executable program.
%d Base directory
Here you can find a comprehensive list.
Appendix 2 - Sources
Check out my blog post in which I implement something similar: https://gabrieleserra.ml/blog/2021-08-14-filemanager-actions-ubuntu-20-04.html
Reference to all possible formats for FileManager-actions: https://askubuntu.com/a/783313/940068
Doing the same in Ubuntu 18.04: https://askubuntu.com/a/77285/940068

Create file, but fail if it exists, with bash [duplicate]

In system call open(), if I open with O_CREAT | O_EXCL, the system call ensures that the file will only be created if it does not exist. The atomicity is guaranteed by the system call. Is there a similar way to create a file in an atomic fashion from a bash script?
UPDATE:
I found two different atomic ways:
Use set -o noclobber. Then you can use the > redirection operator atomically.
Just use mkdir. mkdir is atomic.
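For illustration, a minimal sketch of the mkdir way (the lock path is made up):
# mkdir's create-or-fail check is atomic in the kernel: exactly one
# process can succeed in creating the directory.
if mkdir /tmp/mylock.d 2>/dev/null; then
  trap 'rmdir /tmp/mylock.d' EXIT
  echo "lock acquired"
else
  echo "another process holds the lock" >&2
  exit 1
fi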
A 100% pure bash solution:
set -o noclobber
{ > file ; } &> /dev/null
This command creates a file named file if no file with that name exists. If there is one, it does nothing (but returns a non-zero exit code).
Pros of > over the touch command:
Doesn't update timestamp if file already existed
100% bash builtin
Return code as expected: fail if file already existed or if file couldn't be created; success if file didn't exist and was created.
Cons:
need to set the noclobber option (but it's okay in a script, if you're careful with redirections, or unset it afterwards).
I guess this solution is really the bash counterpart of the open system call with O_CREAT | O_EXCL.
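For instance, a sketch of using it as a create-or-fail lock (the lock path is illustrative):
set -o noclobber
# The redirection fails if the file already exists; the brace group
# lets us capture that failure and silence the error message.
if { > /tmp/app.lock ; } 2>/dev/null; then
  trap 'rm -f /tmp/app.lock' EXIT
  echo "created the file; we won the race"
else
  echo "file already exists" >&2
  exit 1
fi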
Here's a bash function using the mv -n trick:
function mkatomic() {
  f="$(mktemp)"
  mv -n "$f" "$1"
  if [ -e "$f" ]; then
    rm "$f"
    echo "ERROR: file exists:" "$1" >&2
    return 1
  fi
}
Examples:
$ mkatomic foo
$ wc -c foo
0 foo
$ mkatomic foo
ERROR: file exists: foo
You could create it under a randomly-generated name, then rename (mv -n random desired) it into place with the desired name. The rename will fail if the file already exists.
Like this:
#!/bin/bash
touch randomFileName
mv -n randomFileName lockFile
if [ -e randomFileName ] ; then
  rm randomFileName   # clean up the leftover temp file
  echo "Failed to acquire lock"
else
  echo "Acquired lock"
fi
Just to be clear, ensuring the file will only be created if it doesn't exist is not the same thing as atomicity. The operation is atomic if and only if, when two or more separate threads attempt to do the same thing at the same time, exactly one will succeed and all others will fail.
The best way I know of to create a file atomically in a shell script follows this pattern (and it's not perfect):
create a file that has an extremely high chance of not existing (using a decent random number selection or something in the file name), and place some unique content in it (something that no other thread would have - again, a random number or something)
verify that the file exists and contains the contents you expect it to
create a hard link from that file to the desired file
verify that the desired file contains the expected contents
In particular, touch is not atomic, since it will create the file if it's not there, or simply update the timestamp. You might be able to play games with different timestamps, but reading and parsing a timestamp to see if you "won" the race is harder than the above. mkdir can be atomic, but you would have to check the return code, because otherwise, you can only tell that "yes, the directory was created, but I don't know which thread won". If you're on a file system that doesn't support hard links, you might have to settle for a less ideal solution.
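A minimal sketch of the four-step pattern above, with illustrative paths and contents:
# Create a file with a name that is very unlikely to collide, holding
# content unique to this process (steps 1 and 2).
tmp="/tmp/claim.$$.$RANDOM"
payload="$$-$RANDOM"
echo "$payload" > "$tmp"

# Try to hard-link it to the desired name; ln fails if the target
# already exists. Then verify the winner's contents (steps 3 and 4).
if ln "$tmp" /tmp/desired.file 2>/dev/null &&
   [ "$(cat /tmp/desired.file)" = "$payload" ]; then
  echo "we created /tmp/desired.file"
else
  echo "someone else got there first" >&2
fi
rm -f "$tmp"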
Another way to do this is to use umask to try to create the file and open it for writing, without creating it with write permissions, like this:
LOCK_FILE=only_one_at_a_time_please
UMASK=$(umask)
umask 777
echo "$$" > "$LOCK_FILE"
umask "$UMASK"
trap "rm '$LOCK_FILE'" EXIT
If the file is missing, the script will succeed at creating and opening it for writing, despite the file being created without writing permissions. If it already exists, the script won't be able to open the file for writing. It would be possible to use exec to open the file and keep the file descriptor around.
rm requires you to have write permissions to the directory itself, without regards to file permissions.
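A different tool worth mentioning here (not used in any of the answers above, Linux-only, and absent from stock macOS) is flock(1), which implements the keep-the-file-descriptor-open idea directly:
# Open (creating if needed) the lock file on fd 9, then try to take an
# exclusive lock without blocking; the lock is released automatically
# when fd 9 is closed at process exit.
exec 9>/tmp/only_one_at_a_time_please.lock
if ! flock -n 9; then
  echo "another instance is running" >&2
  exit 1
fi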
touch is the command you are looking for. It updates timestamps of the provided file if the file exists or creates it if it doesn't.

bash is zipping entire home

I am trying to back up all world* folders from /home/mc/server/ and drop the zipped file in /home/mc/backup/:
#!/bin/bash
moment=$(date +"%Y%m%d%H%M")
backup="/home/mc/backup/map$moment.zip"
map="/home/mc/server/world*"
zipping="zip -r -9 $backup $map"
eval $zipping
The zipped file is created in the backup folder as expected, but when I unzip it, it contains the entire /home dir. I am running this bash script in two ways:
Manually
Using user's crontab
Finally, if I echo $zipping, it prints exactly the command that I need to run. What am I missing? Thank you in advance.
There's no reason to use eval here (and no, justifying it on DRY grounds if you want to both log a command line and subsequently execute it does not count as a good reason IMO.)
Define a function and call it with the appropriate arguments:
#!/bin/bash
moment=$(date +"%Y%m%d%H%M")
zipping () {
  output=$1
  shift
  zip -r -9 "$output" "$@"
}
zipping "/home/mc/backup/map$moment.zip" /home/mc/server/world*
(I'll admit, I don't know what is causing the behavior you report, but it would be better to confirm it is not somehow specific to the use of eval before trying to diagnose it further.)
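If what made eval attractive was being able to log the exact command line before running it, a bash array gives you the same thing without eval; a sketch (not part of the answer above):
#!/bin/bash
moment=$(date +"%Y%m%d%H%M")
backup="/home/mc/backup/map$moment.zip"
cmd=(zip -r -9 "$backup" /home/mc/server/world*)  # the glob expands here, safely
echo "running: ${cmd[*]}"   # log the exact command line
"${cmd[@]}"                 # run it, with no eval and no re-splitting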

Can a shell script indicate that its lines be loaded into memory initially?

UPDATE: this is a repost of How to make shell scripts robust to source being changed as they run
This is a little thing that bothers me every now and then:
I write a shell script (bash) for a quick and dirty job
I run the script, and it runs for quite a while
While it's running, I edit a few lines in the script, configuring it for a different job
But the first process is still reading the same script file and gets all screwed up.
Apparently, the script is interpreted by loading each line from the file as it is needed. Is there some way that I can have the script indicate to the shell that the entire script file should be read into memory all at once? For example, Perl scripts seem to do this: editing the code file does not affect a process that's currently interpreting it (because it's initially parsed/compiled?).
I understand that there are many ways I could get around this problem. For example, I could try something like:
cat script.sh | sh
or
sh -c "`cat script.sh`"
... although those might not work correctly if the script file is large and there are limits on the size of stream buffers and command-line arguments. I could also write an auxiliary wrapper that copies a script file to a locked temporary file and then executes it, but that doesn't seem very portable.
So I was hoping for the simplest solution that would involve modifications only to the script, not the way in which it is invoked. Can I just add a line or two at the start of the script? I don't know if such a solution exists, but I'm guessing it might make use of the $0 variable...
The best answer I've found is a very slight variation on the solutions offered to How to make shell scripts robust to source being changed as they run. Thanks to camh for noting the repost!
#!/bin/sh
{
  # Your stuff goes here
  exit
}
This ensures that all of your code is parsed initially; note that the 'exit' is critical to ensuring that the file isn't accessed later to see if there are additional lines to interpret. Also, as noted on the previous post, this isn't a guarantee that other scripts called by your script will be safe.
Thanks everyone for the help!
Use an editor that doesn't modify the existing file, and instead creates a new file then replaces the old file. For example, using :set writebackup backupcopy=no in Vim.
How about a solution on the editing side?
If the script is running, before editing it, do this:
mv script script-old
cp script-old script
rm script-old
Since the shell keeps the file open, everything will work okay as long as you don't change the contents of the open inode.
The above works because mv will preserve the old inode while cp will create a new one. Since a file's contents will not actually be removed if it is opened, you can remove it right away and it will be cleaned up once the shell closes the file.
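You can watch this with ls -i, which prints inode numbers; a quick illustration (assuming the running script is a file named script in the current directory):
ls -i script             # note the inode number
mv script script-old     # rename: same inode, the running shell is unaffected
cp script-old script     # copy: the new "script" gets a brand-new inode
ls -i script script-old  # the two names now show different inodes
rm script-old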
According to the bash documentation, if instead of
#!/bin/bash
body of script
you try
#!/bin/bash
script=$(cat <<'SETVAR'
body of script
SETVAR
)
eval "$script"
then I think you will be in business.
Consider creating a new bang path for your quick-and-dirty jobs. If you start your scripts with:
#!/usr/local/fastbash
or something, then you can write a fastbash wrapper that uses one of the methods you mentioned. For portability, one can just create a symlink from fastbash to bash, or have a comment in the script saying one can replace fastbash with bash.
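A sketch of what such a fastbash wrapper might look like (entirely hypothetical, assuming the kernel invokes it as fastbash /path/to/script args...):
#!/bin/bash
# Hypothetical /usr/local/fastbash: read the whole script into memory
# first, then run it, so later edits to the file cannot affect this run.
script_body=$(cat "$1")
script_path="$1"
shift
exec bash -c "$script_body" "$script_path" "$@"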
If you use Emacs, try M-x customize-variable break-hardlink-on-save. Setting this variable will tell Emacs to write to a temp file and then rename the temp file over the original instead of editing the original file directly. This should allow the running instance to keep its unmodified version while you save the new version.
Presumably, other semi-intelligent editors would have similar options.
A self-contained way to make a script resistant to this problem is to have the script copy and re-execute itself, like this:
#!/bin/bash
if [[ $0 != /tmp/copy-* ]] ; then
  rm -f /tmp/copy-$$
  cp "$0" /tmp/copy-$$
  exec /tmp/copy-$$ "$@"
  echo "error copying and execing script"
  exit 1
fi
rm "$0"
# rest of script...
(This will not work if the original script's path begins with /tmp/copy-.)
(This is inspired by R Samuel Klatchko's answer)

How to run every script in a directory except itself?

I have a folder full of *.command files on my OS X workstation.
(For those that don't know, *.command files are just shell scripts that launch and run in a dedicated Terminal window).
I've dragged this folder onto my Dock to use as a "stack" so I can access and launch these scripts conveniently via a couple of clicks.
I want to add a new "run-all.command" script to the stack that runs all the *.command files in the same stack with the obvious exception of itself.
My Bash chops are too rusty to recall how you get a list of the *.command files, iterate them, skip the file that's running, and execute each (in this case I'd be using the "open" command so each *.command opens in its own dedicated Terminal window).
Can somebody please help me out?
How about something like this:
#!/bin/bash
for x in ./*
do
  if [ "$x" != "$0" ]
  then
    open "$x"
  fi
done
where $0 automatically holds the name of the script that's running
Using @bbg's original script as a starting point and incorporating the comments from @Jefromi and @Dennis Williamson, and working out some more directory-prefix issues, I arrived at this working version:
#!/bin/bash
for x in "$(dirname "$0")"/*.command
do
  if [ "$(basename "$x")" != "$(basename "$0")" ]
  then
    open "$x"
  fi
done
