wrong output because of backgrounded processes - bash

If I run the script with ./test.sh 100 I do not get the output 100 because I am using a thread. What do I have to do to get the expected output? (I must not change test.sh though.)
test.sh
#!/bin/bash
FILE="number.txt"
echo "0" > $FILE
for (( x=1; x<=$1; x++ )); do
    exec "./increment.sh" $FILE &
done
wait
cat $FILE
increment.sh
#!/bin/bash
value=$(< "$1")
let value++
echo $value > "$1"
EDIT
Well I tried this:
#!/bin/bash
flock $1 --shared 2>/dev/null
value=$(< "$1")
let value++
echo $value > "$1"
Now I get something like 98 or 99 all the time if I use ./test.sh 100.
It is not working very well and I do not know how to fix it.

If test.sh really cannot be improved, then each instance of increment.sh must serialize its own access to $FILE.
Filesystem locking is the obvious solution for this under UNIX. However, there is no shell builtin to accomplish this. Instead, you must rely on an external utility program like flock, setlock, or chpst -l|-L. For example:
#!/bin/bash
(
    flock 100    # lock *exclusively* (not shared)
    value=$(< "$1")
    let value++
    echo "$value" > "$1"
) 100>>"$1"    # fd 100 is opened on the data file itself, in append mode
A note of caution: using the file you'll be modifying as a lockfile gets tricky quickly — it's easy to truncate in shell when you didn't mean to, and the mixing of access modes above might offend some people — but the above avoids gross mistakes.
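If mixing modes on the data file feels too risky, a dedicated lock file sidesteps the truncation worry entirely. A minimal sketch (the $1.lock name is just for illustration; every writer must agree on the same lock file):
#!/bin/bash
(
    flock -x 200              # take an exclusive lock on the dedicated lock file
    value=$(< "$1")
    let value++
    echo "$value" > "$1"
) 200>"$1.lock"               # e.g. number.txt.lock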

Loop history commands from first to 10 using Bash terminal

I should use an until loop to get the first 10 commands from history, line by line.
Tried something like:
counter=0
until [ $counter -gt 10 ]
do
    echo !$counter
    ((counter++))
done
But the output is docounter ten times.
The main issue is how to get a specific line from history inside the loop.
Csh-style history expansion is an interactive feature; it does not work in scripts.
It looks like you are simply looking for history $HISTSIZE | head -n 10
There are a few simple ways. Try this -
while read -r cmd; do if ((ctr++ < 10)); then echo "$cmd"; fi; done < "$HISTFILE"
or
mapfile -t h < <(history | head -10) && for c in {0..9}; do echo "${h[c]}"; done
edit
The terminal you are using at tutorialspoint kinda sucks.
Try it this way, and pay attention to why it matters.
history | while read -r cmd; do if ((ctr++ < 10)); then echo "$cmd"; fi; done
Specifically, bash: /tmp/.bash_history: Permission denied
They are apparently only allowing access to the history file through the history program.
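For completeness, since the assignment asked for an until loop, here is a sketch that feeds history through process substitution so the counter survives (no subshell); like the answers above it assumes history is populated, i.e. an interactive shell:
counter=0
until [ "$counter" -ge 10 ]; do
    read -r line || break     # stop early if history has fewer entries
    echo "$line"
    ((counter++))
done < <(history)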

How to capture output of bash command group (curly braces) in environment variable

For example, I can do this with a subshell:
VAL=$( do_something )
but how do I achieve the same thing with curly braces so the command is NOT executing in a subshell? I.e. this does not work:
VAL={ do_something; }
TIA.
I'm not sure I understand the reasoning for what you're trying to accomplish, but if you can elaborate a bit more I might be able to help you.
I do recommend reading this fantastic write-up about what's actually going on, though, and why I don't think you want to invoke a process without a subshell.
However, to try and answer what you've asked:
You can't really run a command inside ${}, except in the fallback clause for when a value is not set (in POSIX sh or bash; might be feasible in zsh, which allows all manner of oddball syntax).
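For instance, the fallback clause really does run a command substitution, but only when the variable is unset or empty:
unset VAL
echo "${VAL:-$(date)}"    # VAL is unset, so the $(date) fallback runs
VAL=cached
echo "${VAL:-$(date)}"    # prints "cached"; date never runs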
However, you can wrap cd in a function like this if you really want to:
cdr() {
    if (( $# )); then
        command cd "$@"
    else
        local home
        home=$(git rev-parse --show-toplevel 2>/dev/null) || home=$HOME
        command cd "$home"
    fi
}
Note
Using a function lets us test our argument list, use branching logic, have local variables, &c.
command cd is used to call through to the real cd implementation rather than recursing.
set -e is kinda stiff. Try something like
trap 'err=$?;
echo >&2 "ERROR $err in $0 at line $LINENO, Aborting";
exit $err;' ERR
This is a lot more informative when reading through your logs, and you can put a similar command inside the subshell. Yes, it means adding it inside the subshell... but I often do this sort of thing in function definitions that get called in subshells. Works well.
In use:
$ trap 'echo BOOM' ERR # parent shell trap for demo
$ false # trigger manually for demo
BOOM
$ x="$( trap 'err=$?;
> echo >&2 "ERROR $err in $0 at line $LINENO, Aborting";
> exit $err;' ERR
> date
> pwd
> false
> echo "I shan't"
> )"
ERROR 1 in bash at line 7, Aborting
BOOM
$ echo "$x"
Thu, Jan 10, 2019 8:35:57 AM
/c/Users/P2759474/repos/Old/deploy_microservices
$
If the outer shell had the same or a similar trap, it would have aborted too, with another message. (It's usually helpful to make the messages different.)
If you just don't like that, then as a clumsy workaround you can drop the data to a tempfile. Here's a script that will do it.
set -ex
{
    pwd
    date
    false
    echo "will this happen?"
} > foo
x=$(<foo)
echo "$x"
Put that in a script, it successfully bails.
$: ./sete
+ pwd
+ date
+ false
$: echo $?
1
I'd still use the trap, but the logic works.
I'd also use mktemp, and a trap to delete the temp on exit, etc.... but you get the idea.
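A minimal sketch of that mktemp-plus-trap variant, under the same assumptions as the script above:
set -e
tmp=$(mktemp)
trap 'rm -f "$tmp"' EXIT    # delete the tempfile on any exit
{
    pwd
    date
} > "$tmp"
x=$(<"$tmp")
echo "$x"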

Bash: echo extract variables

Suppose there's a script called 'test.sh':
#!/bin/bash
while read line; do
    APP=/apps echo "$line"
done < ./lines
And the 'lines':
cd $APP && pwd
If I bash test.sh, it prints out 'cd $APP && pwd'.
But when I type APP=/apps echo "cd $APP && pwd" in the terminal, it prints out 'cd /apps && pwd'.
Is it possible, using echo, to expand variables in lines read from a regular file?
Depending on the contents of the file, you may want to use eval:
#!/bin/bash
APP=/apps
while read line; do
    eval "echo \"$line\""    # WARNING: dangerous
done < ./lines
However, eval is extremely dangerous. Although the quoting here will work for simple cases, it is quite easy to execute arbitrary commands by manipulating the input.
You should use eval to evaluate each line string read from the file.
If you know the variable(s) you want to substitute, just substitute them.
sed 's%\$APP\>%/apps%g' ./lines
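If the lines only reference environment variables, another option is envsubst from GNU gettext (assuming it is installed): unlike eval, it expands variables without executing anything.
APP=/apps envsubst '$APP' < ./lines    # prints: cd /apps && pwd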

Can I cache the output of a command on Linux from the CLI?

I'm looking for an implementation of a 'cacheme' command, which 'memoizes' the output (stdout) of whatever is in ARGV. If it has never run the command, it will run it and memorize the output. If it has run it before, it will just copy the output from the cache file (or even better, replay both stdout and stderr to &1 and &2 respectively).
Let's suppose someone wrote this command, it would work like this.
$ time cacheme sleep 1 # first time it takes one sec
real 0m1.228s
user 0m0.140s
sys 0m0.040s
$ time cacheme sleep 1 # second time it looks for stdout in the cache (dflt expires in 1h)
#DEBUG# Cache version found! (1 minute old)
real 0m0.100s
user 0m0.100s
sys 0m0.040s
This example is a bit silly because it has no output. Ideally it would be tested on a script like sleep-1-and-echo-hello-world.sh.
I created a small script that creates a file in /tmp/ named with a hash of the full command and the username, but I'm pretty sure something like this already exists.
Are you aware of anything?
Note: why would I do this? Occasionally I run commands that are network or compute intensive; they take minutes to run and the output doesn't change much. If I know this in advance I'd just prepend cacheme <cmd>, go for dinner, and when I'm back I can rerun the SAME command over and over on the same machine and get the same answer in an instant.
Improved the solution above somewhat by also adding an expiry age as an optional argument.
#!/bin/sh
# save as e.g. $HOME/.local/bin/cacheme
# and then chmod u+x $HOME/.local/bin/cacheme
VERBOSE=false
PROG="$(basename "$0")"
DIR="${HOME}/.cache/${PROG}"
mkdir -p "${DIR}"
EXPIRY=600 # default to 10 minutes
# check if first argument is a number, if so use it as expiration (seconds)
[ "$1" -eq "$1" ] 2>/dev/null && EXPIRY=$1 && shift
[ "$VERBOSE" = true ] && echo "Using expiration $EXPIRY seconds"
CMD="$*" # join the remaining arguments into one string
HASH=$(echo "$CMD" | md5sum | awk '{print $1}')
CACHE="$DIR/$HASH"
test -f "${CACHE}" && [ $(expr $(date +%s) - $(date -r "$CACHE" +%s)) -le $EXPIRY ] || eval "$CMD" > "${CACHE}"
cat "${CACHE}"
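With the script above on your PATH, usage would look like this (using date as a stand-in for an expensive command):
cacheme date       # first call runs date and caches its stdout
cacheme date       # within 10 minutes, replays the cached output
cacheme 30 date    # same, but with a 30-second expiry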
I've implemented a simple caching script for bash, because I wanted to speed up plotting from a piped shell command in gnuplot. It can be used to cache the output of any command. The cache is used as long as the arguments are the same and the files passed in the arguments haven't changed. The system is responsible for cleaning up /tmp.
#!/bin/bash
# hash all arguments
KEY="$*"
# hash last modified dates of any files
for arg in "$@"
do
    if [ -f "$arg" ]
    then
        KEY+=$(date -r "$arg" '+ %s')
    fi
done
# use the hash as a name for temporary file
FILE="/tmp/command_cache.$(echo -n "$KEY" | md5sum | cut -c -10)"
# use cached file or execute the command and cache it
if [ -f "$FILE" ]
then
    cat "$FILE"
else
    "$@" | tee "$FILE"
fi
You can name the script cache, set executable flag and put it in your PATH. Then simply prefix any command with cache to use it.
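For example (plot.dat is a made-up file name):
cache sort -n plot.dat    # first run executes sort and stores the output
cache sort -n plot.dat    # replays the cache until plot.dat's mtime changes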
Author of bash-cache here with an update. I recently published bkt, a CLI and Rust library for subprocess caching. Here's a simple example:
# Execute and cache an invocation of 'date +%s.%N'
$ bkt -- date +%s.%N
1631992417.080884000
# A subsequent invocation reuses the same cached output
$ bkt -- date +%s.%N
1631992417.080884000
It supports a number of features such as asynchronous refreshing (--stale and --warm), namespaced caches (--scope), and optionally keying off the working directory (--cwd) and select environment variables (--env). See the README for more.
It's still a work in progress but it's functional and effective! I'm using it already to speed up my shell prompt and a number of other common tasks.
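A sketch of those flags in use; the exact syntax is assumed from the flag names above, so check the README for the authoritative forms:
bkt --cwd -- pwd                      # cache keyed on the working directory too
bkt --scope="$USER" -- date +%s.%N    # separate cache namespace per user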
I created bash-cache, a memoization library for Bash, which works exactly how you're describing. It's designed specifically to cache Bash functions, but obviously you can wrap calls to other commands in functions.
It handles a number of edge-case behaviors that many simpler caching mechanisms miss. It reports the exit code of the original call, keeps stdout and stderr separately, and retains any trailing whitespace in the output ($() command substitutions will truncate trailing whitespace).
Demo:
# Define function normally, then decorate it with bc::cache
$ maybe_sleep() {
sleep "$@"
echo "Did I sleep?"
} && bc::cache maybe_sleep
# Initial call invokes the function
$ time maybe_sleep 1
Did I sleep?
real 0m1.047s
user 0m0.000s
sys 0m0.020s
# Subsequent call uses the cache
$ time maybe_sleep 1
Did I sleep?
real 0m0.044s
user 0m0.000s
sys 0m0.010s
# Invocations with different arguments are cached separately
$ time maybe_sleep 2
Did I sleep?
real 0m2.049s
user 0m0.000s
sys 0m0.020s
There's also a benchmark function that shows the overhead of the caching:
$ bc::benchmark maybe_sleep 1
Original: 1.007
Cold Cache: 1.052
Warm Cache: 0.044
So you can see the read/write overhead (on my machine, which uses tmpfs) is roughly 1/20th of a second. This benchmark utility can help you decide whether it's worth caching a particular call or not.
How about this simple shell script (not tested)?
#!/bin/sh
mkdir -p cache
cachefile=cache/cache
for i in "$@"
do
    cachefile=${cachefile}_$(printf %s "$i" | sed 's/./\\&/g')
done
test -f "$cachefile" || "$@" > "$cachefile"
cat "$cachefile"
Improved upon solution from error:
Pipes output into the "tee" command, which allows it to be viewed in real time as well as stored in the cache.
Preserves colors (for example in commands like "ls --color") by using "script --flush --quiet /dev/null --command $CMD".
Avoids calling "eval" by using script as well.
Uses bash and [[.
#!/usr/bin/env bash
CMD="$*"
[[ -z $CMD ]] && echo "usage: EXPIRY=600 cache cmd arg1 ... argN" && exit 1
# set -e -x
VERBOSE=false
PROG="$(basename "$0")"
EXPIRY=${EXPIRY:-600} # default to 10 minutes, can be overridden
EXPIRE_DATE=$(date -Is -d "-$EXPIRY seconds")
[[ $VERBOSE = true ]] && echo "Using expiration $EXPIRY seconds"
HASH=$(echo "$CMD" | md5sum | awk '{print $1}')
CACHEDIR="${HOME}/.cache/${PROG}"
mkdir -p "${CACHEDIR}"
CACHEFILE="$CACHEDIR/$HASH"
if [[ -e $CACHEFILE ]] && [[ $(date -Is -r "$CACHEFILE") > $EXPIRE_DATE ]]; then
    cat "$CACHEFILE"
else
    script --flush --quiet --return /dev/null --command "$CMD" | tee "$CACHEFILE"
fi
The solution I came up with in Ruby is this. Does anybody see any optimization?
#!/usr/bin/env ruby
VER = '1.2'
$time_cache_secs = 3600
$cache_dir = File.expand_path("~/.cacheme")
require 'rubygems'
begin
  require 'filecache' # gem install filecache
rescue Exception => e
  puts 'gem filecache requires installation, sorry. trying to install myself'
  system 'sudo gem install -r filecache'
  puts 'Try re-running the program now.'
  exit 1
end
=begin
create a new cache called "cache3", rooted in ~/.cacheme,
with an expiry time of $time_cache_secs (3600) seconds, and a
file hierarchy three directories deep
=end
def main
  cache = FileCache.new("cache3", $cache_dir, $time_cache_secs, 3)
  cmd = ARGV.join(' ').to_s # caching on full command; note that quotes are stripped
  cmd = 'echo give me an argument' if cmd.length < 1
  # caches the command and retrieves it
  if cache.get('output' + cmd)
    #deb "Cache found! (for '#{cmd}')"
  else
    #deb "Cache not found! Recalculating and setting for the future"
    cache.set('output' + cmd, `#{cmd}`)
  end
  #deb 'anyway calling the cache now'
  print(cache.get('output' + cmd))
end
main
An implementation exists here: https://bitbucket.org/sivann/runcached/src
Caches executable path, output, exit code, remembers arguments. Configurable expiration. Implemented in bash, C, python, choose whatever suits you.

redirecting to file from var

#! /bin/bash
if [ $1 ]; then
redirect="<"$1
else
redirect="<&0"
fi
mysql --host=a.b --port=3306 --user=me --password='!' -D DCVBase2 $redirect
I want to redirect from either a file or stdin. Maybe use some quotation around $redirect?
Something like this will work:
#!/bin/bash
if [ "$1" ]; then
    redirect="$1"
else
    redirect="/dev/tty"
fi
while read LINE
do
    echo "${LINE}"
done < "${redirect}"
You can't put the "<..." in a variable, but you can put the parameter to it in a variable
Also, redirecting stdin from /dev/fd/0 is a no-op, because /dev/fd/0 is bash's stdin, which is what mysql would inherit by default.
So you can make this work by falling back to taking stdin from /dev/fd/0. This looks similar to James_R_Ferguson's answer, except that it uses /dev/fd/0, because using /dev/tty assumes that bash's stdin is an actual terminal.
#! /bin/bash
if [ -n "$1" ]; then # note, I changed this to test for non-empty
    redirect="$1"
else
    redirect="/dev/fd/0"
fi
mysql --host=a.b --port=3306 --user=me --password='!' -D DCVBase2 < "$redirect"
Not possible in standard bash, because redirection operators are recognized when the command is parsed, before variable expansion takes place; a < that comes out of an expansion is treated as a literal argument (I can't seem to find a reference for that, so you'll have to believe my empirical findings).
You can work around that using the eval command:
$ redirect=">foo"
$ ls $redirect
ls: cannot access >foo: No such file or directory
$ eval ls $redirect
Beware that this opens a can of worms regarding substitution, quoting and so on, so you have to be careful to escape everything you do not want interpreted BEFORE the eval (the ! in your command is likely going to be a problem, for example).
$ foo=\$bar
$ bar=baz
$ echo $foo
$bar
$ eval echo $foo
baz
