I am using latexmk to compile my LaTeX thesis. I keep the thesis on my Dropbox, and as the dozens-to-hundreds of .aux and associated files are created, Dropbox indexing induces a significant overhead.
I thus want to insert the following bash script before compilation starts to stop Dropbox:
#!/usr/bin/env bash
dropbox_pid="$echo $(pgrep Dropbox)"
kill -STOP $dropbox_pid
and correspondingly, to restart Dropbox at the end, I would like:
#!/usr/bin/env bash
dropbox_pid="$echo $(pgrep Dropbox)"
kill -CONT $dropbox_pid
How do I do this by editing the local latexmkrc?
I'm not sure you will be able to send the SIGCONT signal from the latexmkrc; isn't that file sourced before compilation starts?
You could instead define a bash function such as:
compile () {
    pkill -STOP Dropbox
    compile_command "$@"   # replace compile_command with your actual latexmk invocation
    pkill -CONT Dropbox
}
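You would then run e.g. compile thesis.tex from your shell. If you would rather keep everything inside the latexmkrc, latexmk also documents the hook variables $compiling_cmd, $success_cmd and $failure_cmd (primarily for -pvc preview-continuous mode; check the man page of your version). A minimal sketch, assuming the Dropbox process is simply named Dropbox:
$compiling_cmd = 'pkill -STOP Dropbox';   # pause Dropbox when a compilation starts
$success_cmd   = 'pkill -CONT Dropbox';   # resume when it finishes ...
$failure_cmd   = 'pkill -CONT Dropbox';   # ... whether it succeeded or failed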
By setting the working directories ($aux_dir and $out_dir) to somewhere outside the Dropbox folder, you can avoid excessive Dropbox syncing altogether.
The following is from my $HOME/.latexmkrc. It locates the working directory under ~/.tmp/tex/THE_NAME_OF_MY_WRITING_PROJECT and tries to create it if it is not present.
$aux_dir = "$ENV{HOME}/.tmp/tex/" . basename(getcwd);
$out_dir = $aux_dir;
mkpath($aux_dir);
I'm currently using a terminal and vim on OS X as a development environment for Flutter. Things are going pretty well, except that the app does not reload when I save any Dart files. Is there a way to trigger that behavior? Currently I have to go to the terminal and hit "r" to see my changes.
Sorry for the plug, but I wrote a very simple plugin to handle this.
It makes use of Flutter's --pid-file command line flag to send it a SIGUSR1 signal.
You can achieve the same result as my two-line plugin by adding this to an autocmd
silent execute '!kill -SIGUSR1 "$(cat /tmp/flutter.pid)"'
and launching Flutter with the matching flag, e.g. flutter run --pid-file /tmp/flutter.pid.
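For example, a minimal autocmd wiring this to every save of a Dart file could look like the following (a sketch; it assumes you started flutter run with --pid-file /tmp/flutter.pid):
" in ~/.vimrc: hot-reload the running Flutter app on every Dart file write
autocmd BufWritePost *.dart silent execute '!kill -SIGUSR1 "$(cat /tmp/flutter.pid)"'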
I made a vim plugin hankchiutw/flutter-reload.vim based on killing with SIGUSR1.
You don't have to use the --pid-file flag with this plugin (thanks to pgrep :)).
Simply execute flutter run, modify your *.dart file and see the reloading.
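Under the hood that boils down to something like this one-liner (a sketch; the pgrep pattern is an assumption and may need adjusting on your system):
kill -USR1 "$(pgrep -f 'flutter run' | head -n 1)"   # signal the first matching flutter process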
I did it with the excellent little tool called entr. On OS X you can install it with Homebrew: brew install entr. The tool's home page is at http://eradman.com/entrproject/
Then you start flutter run with the --pid-file flag, as @nobody_nowhere suggests.
How you run entr depends on how elaborate you want to be. In the simplest case you just do:
find lib/ -name '*.dart' | entr -p kill -USR1 $(cat /tmp/flutter.pid)
But such an invocation will not detect new files in the source tree (because find builds the list of files to watch only once, at the start). You can get around that with a slightly more complex loop:
while true
do
find lib/ -name '*.dart' | \
entr -d -p kill -USR1 $(cat /tmp/flutter.pid)
done
The -d option makes entr track the directories of the watched files and exit when a new file appears in one of them; the loop then restarts it with a fresh file list.
I personally use an even more complex approach. I use Redux, and changes to middleware or other state files are not picked up by hot reload, so for those you need to resort to hot restart.
I have a script hotrestarter.sh:
#!/bin/bash
set -euo pipefail
PIDFILE="/tmp/flutter.pid"
if [[ "${1-}" != "" && -e "$PIDFILE" ]]; then
  if [[ "$1" =~ /state/ ]]; then
    kill -USR2 "$(cat "$PIDFILE")"   # hot restart
  else
    kill -USR1 "$(cat "$PIDFILE")"   # hot reload
  fi
fi
It checks whether the modified file lives in a /state subdirectory and, if so, does a hot restart, otherwise a hot reload. I call the script like this:
while true
do
  find lib/ -name '*.dart' | entr -d -p ./hotrestarter.sh /_
done
The /_ parameter makes entr pass the name of the changed file to the program being invoked.
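A quick, purely illustrative way to see that substitution at work:
find lib/ -name '*.dart' | entr -p echo changed: /_   # prints "changed: <path>" when a file is saved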
You don't say what platform, but all platforms have a "watcher" app that can run a command when any file in a tree changes. You'll need to run one of those.
VS Code has this feature; if you don't mind moving to VS Code you can get it out of the box. You could also reach out to the author and see if they have any suggestions on how to do it in vim, or check the source directly. Most likely vim has a mechanism for it.
I have a bunch of wrapper shell scripts which manipulate command line arguments and do some stuff before invoking another binary at the end. Is there any reason to not always exec the binary at the end? It seems like this would be simpler and more efficient, but I never see it done.
If you check /usr/bin, you will likely find many shell scripts that end with an exec command. Just as an example, here is /usr/bin/ps2pdf (Debian):
#!/bin/sh
# Convert PostScript to PDF.
# Currently, we produce PDF 1.4 by default, but this is not guaranteed
# not to change in the future.
version=14
ps2pdf="`dirname \"$0\"`/ps2pdf$version"
if test ! -x "$ps2pdf"; then
    ps2pdf="ps2pdf$version"
fi
exec "$ps2pdf" "$@"
exec is used because it eliminates the need for keeping the shell process active after it is no longer needed.
My /usr/bin directory has over 150 shell scripts that use exec. So, the use of exec is common.
A reason not to use exec would be if there was some processing to be done after the binary finished executing.
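For instance (a sketch; mytool and its scratch file are hypothetical):
#!/bin/sh
# No exec here: the shell must outlive the binary so it can run the cleanup below.
/usr/local/bin/mytool "$@"
status=$?
rm -f /tmp/mytool.scratch   # post-processing that exec would have skipped
exit "$status"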
I disagree with your assessment that this is not a common practice. That said, it's not always the right thing.
The most common scenario where I end a script with the execution of another command, but can't reasonably use exec, is if I need a cleanup hook to be run after the command at the end finishes. For instance:
#!/bin/sh
# create a temporary directory
tempdir=$(mktemp -t -d myprog.XXXXXX)
cleanup() { rm -rf "$tempdir"; }
trap cleanup 0
# use that temporary directory for our program
exec myprog --workdir="$tempdir" "$@"
...won't actually clean up tempdir after execution! Changing that exec myprog to merely myprog has some disadvantages -- continued memory usage from the shell, an extra process-table entry, signals being potentially delivered to the shell rather than to the program that it's executing -- but it also ensures that the shell is still around on myprog's exit to run any traps required.
In kornshell, `basename $0` gives me the name of the current script.
How would I exploit $$ or $PPID to implement the singleton pattern of only having one script named `basename $0` executed on this server by any user?
ps -ef|grep `basename $0`
This will show me all processes which are running that have the name of the currently running script.
I need the script to abort when a process other than $$ is already running the script named `basename $0`.
To provide a race-free mutex, flock is your friend. If you aren't on Linux -- where it's provided by util-linux -- a portable version is available.
If you truly want it to apply to the entire system -- crossing user accounts -- you'll need a directory for your locks to live where all users can create files, and you'll need to ensure that all users can write to your lockfiles.
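For example, such a directory could be prepared once with the same sticky-bit mode that /tmp uses (the path here is just an illustration):
sudo install -d -m 1777 /var/lock/myapp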
Assuming you have the flock utility, each program which wants to participate in this protocol can behave as follows:
#!/bin/ksh
umask 000 # allow all users to access the file we're about to create
exec 9>"/tmp/${0##*/}.lck" # open lockfile on FD 9, based on basename of argv[0]
umask 022 # move back to more restrictive file permissions
flock -x -n 9 || exit  # grab that lock, or exit the script early
# ... the rest of the script runs here, holding the lock until it exits
One key note: Do not try to delete lockfiles when your script exits. If you're in a condition where someone else is actively trying to grab a lock, they'll already have a file descriptor on that existing file; if you delete the file while they have a handle on it, you just ensured a race wherein that program can think it holds the lock while someone else creates a new file under the same name and locks it.
I have a shell script which usually takes nearly 10 minutes for a single run, but I need to know: if another request to run the script comes in while an instance is already running, does the new request have to wait for the existing instance to complete, or will a new instance be started?
I need a new instance to be started whenever a request arrives for the same script.
How do I do that?
The shell script is a polling script which looks for a file in a directory and executes it. The execution takes nearly 10 minutes or more, but if a new file arrives during execution, it also has to be executed simultaneously.
The shell script is below; how do I modify it to execute multiple requests?
#!/bin/bash
while [ 1 ]; do
newfiles=`find /afs/rch/usr8/fsptools/WWW/cgi-bin/upload/ -newer /afs/rch/usr$
touch /afs/rch/usr8/fsptools/WWW/cgi-bin/upload/.my_marker
if [ -n "$newfiles" ]; then
echo "found files $newfiles"
name2=`ls /afs/rch/usr8/fsptools/WWW/cgi-bin/upload/ -Art |tail -n 2 |head $
echo " $name2 "
mkdir -p -m 0755 /afs/rch/usr8/fsptools/WWW/dumpspace/$name2
name1="/afs/rch/usr8/fsptools/WWW/dumpspace/fipsdumputils/fipsdumputil -e -$
$name1
touch /afs/rch/usr8/fsptools/WWW/dumpspace/tempfiles/$name2
fi
sleep 5
done
When writing scripts like the one you describe, I take one of two approaches.
First, you can use a pid file to indicate that a second copy should not run. For example:
#!/bin/sh
pidfile=/var/run/${0##*/}.pid
# remove pid if we exit normally or are terminated
trap "rm -f $pidfile" 0 1 3 15
# Write the pid as a symlink
if ! ln -s "pid=$$" "$pidfile"; then
echo "Already running. Exiting." >&2
exit 0
fi
# Do your stuff
I like using symlinks to store the pid because creating a symlink is an atomic operation; two processes can't conflict with each other. You don't even need to check for the existence of the pid symlink first, because a failure of ln clearly indicates that the pid cannot be set -- either a permission or path problem, or the symlink is already there.
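You can see the atomicity from an interactive shell (illustrative transcript; the exact error text varies by platform):
$ ln -s "pid=123" /tmp/demo.pid && echo "got lock"
got lock
$ ln -s "pid=456" /tmp/demo.pid || echo "already locked"
ln: failed to create symbolic link '/tmp/demo.pid': File exists
already locked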
The second option is to make it possible .. nay, preferable .. not to block additional instances, and instead configure whatever it is that this script does to permit multiple servers to run at the same time on different queue entries. "Single-queue-single-server" is never as good as "single-queue-multi-server". Since you haven't included code in your question, I have no way to know whether this approach would be useful for you, but here's some explanatory meta bash:
#!/usr/bin/env bash
workdir=/var/tmp   # Set a better $workdir than this.
a=( $(get_list_of_queue_ids) )   # A command? A function? Up to you.
for qid in "${a[@]}"; do
  # Set a "lock" for this item .. or don't, and move on.
  if ! ln -s "pid=$$" "$workdir/$qid.working"; then
    continue
  fi
  # Do your stuff with just this $qid.
  ...
  # And finally, clean up after ourselves
  remove_qid_from_queue "$qid"
  rm "$workdir/$qid.working"
done
The effect of this is to transfer the idea of "one at a time" from the handler to the data. If you have a multi-CPU system, you probably have enough capacity to handle multiple queue entries at the same time.
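Scaling out is then just a matter of starting more copies. A sketch, assuming the loop above is saved as worker.sh:
#!/usr/bin/env bash
# Launch four independent workers; the per-$qid lock files keep any two of
# them from claiming the same queue entry.
for i in 1 2 3 4; do
    ./worker.sh &
done
wait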
ghoti's answer shows some helpful techniques, if modifying the script is an option.
Generally speaking, for an existing script:
Unless you know with certainty that:
the script has no side effects other than to output to the terminal or to write to files with shell-instance specific names (such as incorporating $$, the current shell's PID, into filenames) or some other instance-specific location,
OR that the script was explicitly designed for parallel execution,
I would assume that you cannot safely run multiple copies of the script simultaneously.
It is not reasonable to expect the average shell script to be designed for concurrent use.
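If you cannot modify the script itself, you can still serialize invocations from the outside with the flock utility mentioned elsewhere on this page (the lock file path is up to you):
flock /tmp/myscript.lock /path/to/myscript.sh                               # queue up behind any running instance
flock -n /tmp/myscript.lock /path/to/myscript.sh || echo "already running"  # or fail fast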
From the viewpoint of the operating system, several processes may of course execute the same program in parallel. No need to worry about this.
However, it is conceivable that a (careless) programmer wrote the program in such a way that it produces incorrect results when two copies are executed in parallel.
I wrote an alias in my .bashrc file that opens a txt file every time I start a bash shell.
The problem is that I would like to open the file only once, that is, the first time I open the shell.
Is there any way to do that?
The general solution to this problem is to have a session lock of some kind. You could create a file /tmp/secret with the pid and/or tty of the process which is editing the other file, and remove the lock file when done. Now, your other sessions should be set up to not create this file if it already exists.
Proper locking is a complex topic, but for the simple cases, this might be good enough. If not, google for "mutual exclusion". Do note that there may be security implications if you get it wrong.
Why are you using an alias for this? Sounds like the code should be directly in your .bashrc, not in an alias definition.
So if, say, what you have now in your .bashrc is something like
alias start_editing_my_project_work_hour_report='emacs ~/prj.txt &'
start_editing_my_project_work_hour_report
unalias start_editing_my_project_work_hour_report
... then with the locking, and without the alias, you might end up with something like
# Obtain my UID on this host, and construct directory name and lock file name
uid=$(id -u)
dir=/tmp/prj-$uid
lock=$dir/pid.lock
# The loop will execute at most twice,
# but we don't know yet whether once is enough
while true; do
if mkdir -p "$dir"; then
# Yay, we have the lock!
( echo $$ >"$lock" ; emacs ~/prj.txt; rm -f "$lock" ) &
break
else
other=$(cat "$lock")
# If the process which created the UID is still live, do nothing
if kill -0 $other; then
break
else
echo "removing stale lock file dir (dead PID $other) and retrying" >&2
rm -rf "$dir"
continue
fi
fi
done