Remove last argument in shell script (POSIX) - shell

I am currently working on a language that aims to compile to POSIX shell languages and I want to introduce a pop feature. Just like how you can use "shift" to remove the first argument passed to a function:
f() {
    shift
    printf '%s' "$*"
}
f 1 2 3 #=> 2 3
I want some code that, when inserted below, removes the last argument.
g() {
    # pop
    printf '%s' "$*"
}
g 1 2 3 #=> 1 2
I am aware of the array method as detailed in (Remove last argument from argument list of shell script (bash)), but I want something portable that will work in at least the following shells: ash, dash, ksh (Unix), bash, and zsh. I also want something reasonably speedy; something that opens external processes/subshells would be too heavy for small argument counts, though if you have a creative solution I wouldn't mind seeing it regardless (and it can still be used as a fallback for large argument counts). Something as fast as those array methods would be ideal.
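For reference, here is a minimal sketch of the kind of bash/zsh-only subarray method I mean (it works in bash and zsh, but not in dash or ash):
g() {
    # bash/zsh only: keep arguments 1 through $# - 1
    set -- "${@:1:$#-1}"
    printf '%s' "$*"
}
g 1 2 3 #=> 1 2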

This is my current answer:
pop() {
    local n=$(($1 - ${2:-1}))
    if [ -n "$ZSH_VERSION" -o -n "$BASH_VERSION" ]; then
        POP_EXPR='set -- "${@:1:'$n'}"'
    elif [ $n -ge 500 ]; then
        POP_EXPR="set -- $(seq -s " " 1 $n | sed 's/[0-9]\+/"${\0}"/g')"
    else
        local index=0
        local arguments=""
        while [ $index -lt $n ]; do
            index=$((index+1))
            arguments="$arguments \"\${$index}\""
        done
        POP_EXPR="set -- $arguments"
    fi
}
Note that local is not POSIX, but since all major sh shells support it (and specifically the ones I asked about in my question) and not having it can cause serious bugs, I decided to include it in this leading function. But here's a fully POSIX-compliant version with obfuscated variable names to reduce the chance of collisions:
pop() {
    __pop_n=$(($1 - ${2:-1}))
    if [ -n "$ZSH_VERSION" -o -n "$BASH_VERSION" ]; then
        POP_EXPR='set -- "${@:1:'$__pop_n'}"'
    elif [ $__pop_n -ge 500 ]; then
        POP_EXPR="set -- $(seq -s " " 1 $__pop_n | sed 's/[0-9]\+/"${\0}"/g')"
    else
        __pop_index=0
        __pop_arguments=""
        while [ $__pop_index -lt $__pop_n ]; do
            __pop_index=$((__pop_index+1))
            __pop_arguments="$__pop_arguments \"\${$__pop_index}\""
        done
        POP_EXPR="set -- $__pop_arguments"
    fi
}
Usage
pop1() {
    pop $#
    eval "$POP_EXPR"
    echo "$@"
}
pop2() {
    pop $# 2
    eval "$POP_EXPR"
    echo "$@"
}
pop1 a b c #=> a b
pop1 $(seq 1 1000) #=> 1 .. 999
pop2 $(seq 1 1000) #=> 1 .. 998
pop_next
Once you've created the POP_EXPR variable with pop, you can use the following
function to change it to omit further arguments:
pop_next() {
    if [ -n "$BASH_VERSION" -o -n "$ZSH_VERSION" ]; then
        local np="${POP_EXPR##*:}"
        np="${np%\}*}"
        POP_EXPR="${POP_EXPR%:*}:$((np == 0 ? 0 : np - 1))}\""
        return
    fi
    POP_EXPR="${POP_EXPR% \"*}"
}
pop_next is a much simpler operation than pop in POSIX shells (though it's slightly more complex than pop on zsh and bash).
It's used like this:
main() {
    pop $#
    pop_next
    eval "$POP_EXPR"
}
main 1 2 3 #=> 1
POP_EXPR and variable scope
Note that if you're not going to use eval "$POP_EXPR" immediately after pop and pop_next, be careful with scoping: a function call in between the operations could change the POP_EXPR variable and mess things up. To avoid this, simply put local POP_EXPR at the start of every function that uses pop, where it's available.
f() {
    local POP_EXPR
    pop $#
    g 1 2
    eval "$POP_EXPR"
    printf '%s' "f=$*"
}
g() {
    local POP_EXPR
    pop $#
    eval "$POP_EXPR"
    printf '%s, ' "g=$*"
}
f a b c #=> g=1, f=a b
popgen.sh
This particular function is good enough for my purposes, but I did create a
script to generate further optimized functions.
https://gist.github.com/fcard/e26c5a1f7c8b0674c17c7554fb0cd35c#file-popgen-sh
One of the ways to improve performance without using external tools here is
to realize that performing several small string concatenations is slow, so doing
them in batches makes the function considerably faster. Calling the script as
popgen.sh -gN1,N2,N3 creates a pop function that handles the operations
in batches of N1, N2, or N3 depending on the argument count. The script also
contains other tricks, exemplified and explained below:
$ sh popgen.sh \
> -g 10,100 \ # concatenate strings in batches\
> -w \ # overwrite current file\
> -x9 \ # hardcode the result of the first 9 argument counts\
> -t1000 \ # starting at argument count 1000, use external tools\
> -p posix \ # prefix to add to the function name (with an underscore)\
> -s '' \ # suffix to add to the function name (with an underscore)\
> -c \ # use the command popsh instead of seq/sed as the external tool\
> -# \ # on zsh and bash, use the subarray method (checked at runtime)\
> -+ \ # use bash/zsh extensions (removes runtime check from -#)\
> -nl \ # don't use 'local'\
> -f \ # use 'function' syntax\
> -o pop.sh # output file
An equivalent to the above function can be generated with popgen.sh -t500 -g1 -#.
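For illustration, here is a sketch of the kind of batched loop a -g4 function would contain (hypothetical code, reconstructed from the description above, not popgen.sh's literal output): each iteration of the main loop appends four argument references in a single concatenation, with a one-at-a-time loop for the remainder.
pop_batched() {
    __pop_n=$(($1 - ${2:-1}))
    __pop_index=0
    __pop_arguments=""
    # batches of 4: one string concatenation appends four references
    while [ $((__pop_index + 4)) -le $__pop_n ]; do
        __pop_arguments="$__pop_arguments \"\${$((__pop_index+1))}\" \"\${$((__pop_index+2))}\" \"\${$((__pop_index+3))}\" \"\${$((__pop_index+4))}\""
        __pop_index=$((__pop_index+4))
    done
    # remainder, one reference at a time
    while [ $__pop_index -lt $__pop_n ]; do
        __pop_index=$((__pop_index+1))
        __pop_arguments="$__pop_arguments \"\${$__pop_index}\""
    done
    POP_EXPR="set -- $__pop_arguments"
}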
In the gist containing popgen.sh you will find a popsh.c file that can be
compiled and used as a specialized, faster alternative to the default shell
external tools; it will be used by any function generated with
popgen.sh -c ... if it's accessible to the shell as popsh.
Alternatively, you can create any function or tool named popsh and use
it in its place.
Benchmark
Benchmark functions:
The script I used for benchmarking can be found on this gist:
https://gist.github.com/fcard/f4aec7e567da2a8e97962d5d3f025ad4#file-popbench-sh
The benchmark functions are found in these lines:
https://gist.github.com/fcard/f4aec7e567da2a8e97962d5d3f025ad4#file-popbench-sh-L233-L301
The script can be used as such:
$ sh popbench.sh \
> -s dash \ # shell used by the benchmark, can be dash/bash/ash/zsh/ksh.\
> -f posix \ # function to be tested\
> -i 10000 \ # number of times that the function will be called per test\
> -a '\0' \ # replacement pattern to model arguments by index (uses sed)\
> -o /dev/stdout \ # where to print the results to (concatenates, defaults to stdout)\
> -n 5,10,1000 # argument sizes to test
It will output a time -p style report with real, user and sys time values,
as well as an int value (for "internal") that is calculated inside the benchmark
process using date.
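A hypothetical sample of the shape of that output (values illustrative only, not measured):
real 0.10
user 0.07
sys 0.03
int 0.10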
Times
The following are the int results of calls to
$ sh popbench.sh -s $shell -f $function -i 10000 -n 1,5,10,100,1000,10000
posix refers to the second and third clauses, subarray refers to the first,
while final refers to the whole.
value\count      1          5          10         100        1000       10000
---------------------------------------------------------------------------------------
dash/final       0m0.109s   0m0.183s   0m0.275s   0m2.270s   0m16.122s  1m10.239s
ash/final        0m0.104s   0m0.175s   0m0.273s   0m2.337s   0m15.428s  1m11.673s
ksh/final        0m0.409s   0m0.557s   0m0.737s   0m3.558s   0m19.200s  1m40.264s
bash/final       0m0.343s   0m0.414s   0m0.470s   0m1.719s   0m17.508s  3m12.496s
---------------------------------------------------------------------------------------
bash/subarray    0m0.135s   0m0.179s   0m0.224s   0m1.357s   0m18.911s  3m18.007s
dash/posix       0m0.171s   0m0.290s   0m0.447s   0m3.610s   0m17.376s  1m8.852s
ash/posix        0m0.109s   0m0.192s   0m0.285s   0m2.457s   0m14.942s  1m10.062s
ksh/posix        0m0.416s   0m0.581s   0m0.768s   0m4.677s   0m18.790s  1m40.407s
bash/posix       0m0.409s   0m0.739s   0m1.145s   0m10.048s  0m58.449s  40m33.024s
On zsh
For large argument counts, setting set -- ... with eval is very slow on zsh no
matter the method, save for eval 'set -- "${@:1:$# - 1}"'. Even as
simple a modification as changing it to eval "set -- ${@:1:$# - 1}"
(ignoring that it doesn't work for arguments with spaces) makes it two orders
of magnitude slower.
value\count      1          5          10         100        1000       10000
---------------------------------------------------------------------------------------
zsh/subarray     0m0.203s   0m0.227s   0m0.233s   0m0.461s   0m3.643s   0m38.396s
zsh/final        0m0.399s   0m0.416s   0m0.441s   0m0.722s   0m4.205s   0m37.217s
zsh/posix        0m0.718s   0m0.913s   0m1.182s   0m6.200s   0m46.516s  42m27.224s
zsh/eval-zsh     0m0.419s   0m0.353s   0m0.375s   0m0.853s   0m5.771s   32m59.576s
More benchmarks
For more benchmarks, including ones using only external tools, the C popsh tool, or the naive algorithm, see this file:
https://gist.github.com/fcard/f4aec7e567da2a8e97962d5d3f025ad4#file-benchmarks-md
It's generated like this:
$ git clone https://gist.github.com/f4aec7e567da2a8e97962d5d3f025ad4.git popbench
$ cd popbench
$ sh popgen_run.sh
$ sh popbench_run.sh --fast # or without --fast if you have a day to spare
$ sh poptable.sh -g >benchmarks.md
Conclusion
This has been the result of a week of research on the subject, and I thought
I'd share it. Hopefully it's not too long; I tried to trim it down to the main
information, with links to the gist. This was initially written as an answer to
(Remove last argument from argument list of shell script (bash)), but I felt the focus on POSIX
made it off topic there.
All the code in the gists linked here is licensed under the MIT license.

alias pop='set -- $(eval printf '\''%s\\n'\'' $(seq $(expr $# - 1) | sed '\''s/^/\$/;H;$!d;x;s/\n/ /g'\'') )'
EDIT:
This is a POSIX shell solution that uses an alias instead of a function; when called inside a function, it gives the desired effect (it resets the function's arguments to the same arguments minus the last; being an alias, and using eval, it can change the positional parameters of the enclosing function):
func () {
    pop
    pop
    echo "$@"
}
func a b c d e # prints a b c
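For clarity, here is a sketch of what the alias expands to, read step by step (assuming it is called inside a function with $# = 3):
# seq $(expr $# - 1)              -> lines "1" and "2"
# sed 's/^/\$/;H;$!d;x;s/\n/ /g'  -> the single line "$1 $2"
# eval printf '%s\n' $1 $2        -> prints the values of $1 and $2, one per line
# set -- $(...)                   -> resets the arguments to those values
set -- $(eval printf '%s\n' $(seq $(expr $# - 1) | sed 's/^/\$/;H;$!d;x;s/\n/ /g'))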

pop () {
    i=0
    while [ $((i+=1)) -lt $# ]; do
        set -- "$@" "$1"
        shift
    done               # 1 2 3 -> 3 1 2
    printf '%s' "$1"   # last argument
    shift              # $# is now without last argument
}
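As an illustration of the rotation, calling the function directly (a sketch; being a function, it rotates its own copy of the arguments, not the caller's):
pop 1 2 3   # prints: 3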

Related

Show average file output speed once for loop is complete, for benchmarking purposes?

Sorry for being unclear, my fellow mates.
To elaborate, and possibly answer my own question: while Distro1Analysis.txt is being written to, calculate the output speed in kB/s, and when output is done, average the output speed and print it to the screen.
The second part, really its own question, is quite simple. I'm not a computer scientist or advanced programmer, but I'm certain there's a relatively easy way to improve the overall execution speed of the script. I'm asking what the speed culprit is: how the script was written, the chosen programs, the mix of programs (i.e., is it faster to use 3 instances of the same program as opposed to one instance of 3 different programs)? For instance, could recursion be used, and how?
I was originally going to ask how to benchmark the speed of a program running one command, but it seemed simpler to use an overarching (global) benchmark, hence the question. But any help you can provide would be useful.
Rdepends Version
ps -A &>> Distro1Analysis.txt && sudo service --status-all &>> Distro1Analysis.txt && \
for z in $(dpkg -l | awk '/^[hi]i/{print $2}' | grep -v '^lib'); do \
printf "\n$z:" && \
aptitude show $z | grep -E 'Uncompressed Size' && \
result=$(apt-rdepends 2>/dev/null $z | grep -v "Depends")
final=$(apt show 2>/dev/null $result | grep -E "Package|Installed-Size" | sed "/APT/d;s/Installed-Size: //");
if [[ (${#final} -le 700) ]]; then echo $final; else :; fi done &>> Distro1Analysis.txt
Depends Version
ps -A &>> Distro1Analysis.txt && sudo service --status-all &>> Distro1Analysis.txt && \
for z in $(dpkg -l | awk '/^[hi]i/{print $2}' | grep -v '^lib'); do \
printf "\n$z:" && \
aptitude show $z | grep -E 'Uncompressed Size' && \
printf "\n" && \
apt show 2>/dev/null $(aptitude search '!~i?reverse-depends("^'$z'$")' -F "%p" | \
sed 's/:i386$//') | grep -E 'Package|Installed-Size' | sed '/APT/d;s/^.*Package:/\t&/;N;s/\n/ /'; done &>> Distro1Analysis.txt
calculate output speed in kb/s and when output is done then average
output speed and print to screen
Here's an answer that basically amounts to:
Starting your script to run in the background.
Checking the size of its output file every two seconds with du -b.
Run the following bash script like so: $ bash scriptoutmon.sh subscript.sh Distro1Analysis.txt 12 10 2
scriptoutmon.sh usage:
$1 : Path to the subscript to run
$2 : Path to output file to monitor
$3 : How long to run scriptoutmon.sh script in seconds.
$4 : How long to run the subscript ($1) in seconds.
$5 : Tick length for displayed updates in seconds.
scriptoutmon.sh:
#!/bin/bash
# Date: 2020-04-13T23:03Z
# Author: Steven Baltakatei Sandoval
# License: GPLv3+ https://www.gnu.org/licenses/gpl-3.0.en.html
# Description: Runs subscript and measures change in file size of a specified file.
# Usage: scriptoutmon.sh [ path to subscript ] [ path to subscript output file ] [ script TTL (s) ] [ subscript TTL (s) ] [ tick size (s) ]
# References:
# [1]: Adrian Pronk (2013-02-22). "Floating point results in Bash integer division". https://stackoverflow.com/a/15015920
# [2]: chronitis (2012-11-15). "bc: set number of digits after decimal point". https://askubuntu.com/a/217575
# [3]: ypnos (2020-02-12). "Differences of size in du -hs and du -b". https://stackoverflow.com/a/60196741
# == Function Definitions ==
echoerr() { echo "$@" 1>&2; } # display message via stderr
getSize() { echo $(du -b "$1" | awk '{print $1}'); } # output file size in bytes. See [3].
# == Initialize settings ==
SUBSCRIPT_PATH="$1" # path to subscript to run
SUBSCRIPT_OUTPUT_PATH="$2" # path to output file generated by subscript
SCRIPT_TTL="$3" # set script time-to-live in seconds
SUBSCRIPT_TTL="$4" # set subscript time-to-live in seconds
TICK_SIZE="$5" # update tick size (in seconds)
# == Perform work ==
timeout $SUBSCRIPT_TTL bash "$SUBSCRIPT_PATH" & # run subscript for SUBSCRIPT_TTL seconds.
# note: SUBSCRIPT_OUTPUT_PATH should be path of output file generated by subscript.sh .
if [ -f $SUBSCRIPT_OUTPUT_PATH ]; then SUBSCRIPT_OUTPUT_INITIAL_SIZE=$(getSize "$SUBSCRIPT_OUTPUT_PATH"); else SUBSCRIPT_OUTPUT_INITIAL_SIZE="0"; fi # save initial size if file exists.
echoerr "Running $(basename "$SUBSCRIPT_PATH") and then monitoring rate of file size changes to $(basename "$SUBSCRIPT_OUTPUT_PATH")." # explain displayed output
# Calc and display subscript output file size changes
while [ $SECONDS -lt $SCRIPT_TTL ]; do # loop while script age (in seconds) less than SCRIPT_TTL.
if [ $SECONDS -ge $TICK_SIZE ]; then # if after first tick
OUTPUT_PREVIOUS_SIZE="$OUTPUT_CURRENT_SIZE" ; # save size previous tick
OUTPUT_CURRENT_SIZE=$(getSize "$SUBSCRIPT_OUTPUT_PATH") ; # save size current tick
BYTES_WRITTEN=$(( $OUTPUT_CURRENT_SIZE - $OUTPUT_PREVIOUS_SIZE )) ; # calc size difference between current and previous ticks.
WRITE_SPEED_BYTES_PER_SECOND=$(($BYTES_WRITTEN / $TICK_SIZE)) ; # calc write speed in bytes per second
WRITE_SPEED_KILOBYTES_PER_SECOND=$( echo "scale=3; $WRITE_SPEED_BYTES_PER_SECOND / 1000" | bc -l ) ; # calc write speed in kilobytes per second. See [1], [2].
echo "File size change rate (KB/sec):"$WRITE_SPEED_KILOBYTES_PER_SECOND ;
else # if first tick
OUTPUT_CURRENT_SIZE=$(getSize "$SUBSCRIPT_OUTPUT_PATH") # save size current tick (initial)
fi
sleep "$TICK_SIZE"; # wait a tick
done
SUBSCRIPT_OUTPUT_FINAL_SIZE=$(getSize "$SUBSCRIPT_OUTPUT_PATH") # save final size
# == Display results ==
SUBSCRIPT_OUTPUT_TOTAL_CHANGE_BYTES=$(( $SUBSCRIPT_OUTPUT_FINAL_SIZE - $SUBSCRIPT_OUTPUT_INITIAL_SIZE )) # calc total size change in bytes
SUBSCRIPT_OUTPUT_TOTAL_CHANGE_KILOBYTES=$( echo "scale=3; $SUBSCRIPT_OUTPUT_TOTAL_CHANGE_BYTES / 1000" | bc -l ) # calc total size change in kilobytes. See [1], [2].
echoerr "$SUBSCRIPT_OUTPUT_TOTAL_CHANGE_KILOBYTES kilobytes added to $SUBSCRIPT_OUTPUT_PATH size in $SUBSCRIPT_TTL seconds."
exit 0;
You should get output like this:
baltakatei@debianwork:/tmp$ bash scriptoutmon.sh subscript.sh Distro1Analysis.txt 12 10 2
Running subscript.sh and then monitoring rate of file size changes to Distro1Analysis.txt.
File size change rate (KB/sec):6.302
File size change rate (KB/sec):.351
File size change rate (KB/sec):.376
File size change rate (KB/sec):.345
File size change rate (KB/sec):.335
15.419 kilobytes added to Distro1Analysis.txt size in 10 seconds.
baltakatei@debianwork:/tmp$
Increase $3 and $4 to monitor the script longer (perhaps to let it finish its work).
The second part, its own question really
I'd suggest making it a separate question.

Script to pick random directory in bash

I have a directory full of directories containing exam subjects I would like to work on randomly to simulate the real exam.
They are classified by difficulty level:
0-0, 0-1 .. 1-0, 1-1 .. 2-0, 2-1 ..
I am trying to write a shell script allowing me to pick one subject (directory) randomly based on the parameter I pass when executing the script (0, 1, 2 ..).
I can't quite figure it, here is my progress so far:
ls | find . -name "1$~" | sort -r | head -n 1
What am I missing here?
There's no need for any external commands (ls, find, sort, head) for this at all:
#!/usr/bin/env bash
set -o nullglob  # make globs expand to nothing, not themselves, when no matches found
dirs=( "$1"*/ )  # list directories starting with $1 into an array
# Validate that our glob actually had at least one match
(( ${#dirs[@]} )) || { printf 'No directories start with %q at all\n' "$1" >&2; exit 1; }
idx=$(( RANDOM % ${#dirs[@]} ))  # pick a random index into our array
echo "${dirs[$idx]}"             # and look up what's at that index

bash script to compare number inside the 2 files

I want to compare two numbers from two different files using a Bash script. The files are tmp$i and tmp$(($i-1)). The script below is not working:
#!/bin/bash
for i in `seq 1 5`
do
if [ $tmp$i -lt $tmp$(($i-1)) ];then
cat tmp$i >> inf
else
cat tmp$i >> sup
fi
done
Sample data
Tmp1:
0.8856143905954186 0.8186070632371812 0.7624440603372680 0.7153352945456424 0.6762383806114797 0.6405457936981878
Tmp2:
0.5809579333203458 0.5567050091247218 0.5329405222386163 0.5115305043007474 0.4963898045543342 0.4846139486344327
You are not setting $tmp, so you end up simply comparing whether i is smaller than i-1, which of course it isn't.
Removing the dollar sign nominally fixes that, but then you'd just be comparing two strings (for which numeric comparison isn't well-defined, so in practice it fails), not accessing the contents of files named like those strings; the string tmp2 is neither numerically larger nor smaller than tmp1. (Bash can perform lexical comparison, but test ... -lt isn't the tool to do that.)
Try this instead:
if [ $(cat "tmp$i") -lt $(cat "tmp$((i - 1))") ]; then
In response to the observation that you want to do this on decimal numbers, you need a different tool, because Bash only supports integer arithmetic. My approach would be to write a simple Awk script which performs the comparison.
In order to be able to use it as a conditional, it should exit(0) if the condition is true, exit(1) otherwise.
In order to keep the main script readable, I would encapsulate it in a function, like this:
smaller_first_line () {
    # succeeds (exit 0) when the first number in file $1 is not larger than the one in file $2
    awk 'NR==1 && FNR==1 { i=$1; next } FNR==1 { exit($1 < i) }' "$1" "$2"
}
if smaller_first_line "tmp$i" "tmp$((i - 1))"; then
:

Iterate over lists embedded as values in key/value pairs in bash

I'm trying to get a (key, multiple-value) structure (some sort of hashmap) in bash, like this:
[
[ "abc" : 1, 2, 3, 4 ],
[ "def" : "w", 33, 2 ]
]
I'd like to iterate through each key (some kind of for key in ...), and get each value with something like map["def",2] or map[$key,2].
I've seen a couple of threads talking about single-value hashmaps, but nothing about this issue.
I could go with N arrays, N being the number of keys in my map, each filled with every field in a row, but I want to avoid duplicating code as much as possible.
Thanks in advance!
Edit :
I'd like to go through the structure with something like this:
for key in "${map[@]}"; do
    echo $key # "abc" then "def"
    for value in "${map[$key,@]}"; do
        ...
    done
done
Using modern bash features with the multiple-array case:
Assignment (manual):
map_abc=( 1 2 3 4 )
map_def=( w 33 2 )
Assignment (programmatic):
append() {
    local array_name="${1}_$2"; shift; shift
    declare -g -a "$array_name"
    declare -n array="$array_name" # BASH 4.3 FEATURE
    array+=( "$@" )
}
append map abc 1 2 3 4
append map def w 33 2
Iteration (done inside a function to contain the namevar's scope):
iter() {
    for array in ${!map_@}; do
        echo "Iterating over array ${array#map_}"
        declare -n cur_array="$array" # BASH 4.3 FEATURE
        for key in "${!cur_array[@]}"; do
            echo "$key: ${cur_array[$key]}"
        done
    done
}
iter
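With the assignments above, iter should print something like:
Iterating over array abc
0: 1
1: 2
2: 3
3: 4
Iterating over array def
0: w
1: 33
2: 2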
This can also be done without namevars, but in an uglier and more error-prone fashion. (To be clear, I believe the code given here uses eval safely, but it's easy to get wrong -- if trying to build your own implementation on this template, please be very cautious).
# Compatible with older bash (should be through 3.x).
append() {
    local array_name="${1}_$2"; shift; shift
    declare -g -a "$array_name"
    local args_str cmd_str
    printf -v args_str '%q ' "$@"
    printf -v cmd_str "%q+=( %s )" "$array_name" "$args_str"
    eval "$cmd_str"
}
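Usage is the same as with the namevar version:
append map abc 1 2 3 4
append map def w 33 2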
...and, to iterate in a way compatible with bash back through 3.x:
for array in ${!map_@}; do
    echo "Iterating over array ${array#map_}"
    printf -v cur_array_cmd 'cur_array=( "${%q[@]}" )' "$array"
    eval "$cur_array_cmd"
    for key in "${!cur_array[@]}"; do
        echo "$key: ${cur_array[$key]}"
    done
done
This is more computationally efficient than filtering through a single large array (the other answer given) -- and, when namevars are available, arguably results in cleaner code as well.
Doable. The declaration is somewhat ugly:
declare -A map=(
    [abc,0]=1
    [abc,1]=2
    [abc,2]=3
    [abc,3]=4
    [def,0]=w
    [def,1]=33
    [def,2]=2
)
key="def"
i=1
echo "${map[$key,$i]}" # => 33
Iterating: it's helpful to keep a separate array of "keys":
keys=(abc def)
Then
for key in "${keys[#]}"; do
echo "$key"
for idx in "${!map[#]}"; do
if [[ $idx == $key,* ]]; then
n=${idx##*,}
printf "\t%s\t%s\n" "$n" "${map["$idx"]}"
fi
done
done
abc
0 1
1 2
2 3
3 4
def
1 33
0 w
2 2

Convert floating point variable to integer?

The shell script shown below will show a warning if the page takes more than 6 seconds to load. The problem is that the myduration variable is not an integer. How do I convert it to an integer?
myduration=$(curl http://192.168.50.1/mantisbt/view.php?id=1 -w %{time_total}) > /dev/null ; \
[[ $myduration -gt 1 ]] && echo "`date +'%y%m%d%H%M%S'
It took more than 6 seconds to load the page http://192.168.50.1/mantisbt/view.php?id=1
Assuming $myduration is a decimal or integer:
$ myduration=6.5
$ myduration=$( printf "%.0f" $myduration )
$ echo $myduration
6
(printf rounds using the C library's rules, typically round-half-to-even, which is why 6.5 becomes 6 here.)
You can do this:
float=1.23
int=${float%.*}
I am using this on bash.
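Note that ${float%.*} just strips everything from the last dot onward, so it truncates toward zero rather than rounding:
float=-1.23
int=${float%.*}   # => -1 (truncated, not rounded)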
It's not entirely clear, but I think you're asking how to convert a floating-point value (myduration) to an integer in bash. Something like this may help you, depending on which way you want to round your number.
#!/bin/bash
floor_val=
ceil_val=
function floor() {
    float_in=$1
    floor_val=${float_in/.*}   # strip the dot and everything after it
}
function ceiling() {
    float_in=$1
    ceil_val=${float_in/.*}
    ceil_val=$((ceil_val+1))   # note: assumes a fractional part; a whole-number input would still be bumped up
}
float_val=$1
echo Passed in: $float_val
floor $float_val
ceiling $float_val
echo Result of floor: $floor_val
echo Result of ceiling: $ceil_val
Example usage:
$ ./int.sh 12.345
Passed in: 12.345
Result of floor: 12
Result of ceiling: 13
Eliminate page contents from the variable:
When I tried your command, myduration contained the HTML contents of the page at the URL I used in my test plus the time value. By adding -s to suppress the progress bar and adding -o /dev/null to the options for curl, I was able to remove the redirect to /dev/null and have only the time saved in myduration.
Since the value of myduration is likely to be short, you can use the technique ire_and_curses shows, which will often yield zero as its result, which would be less than the 1 you are testing for (note that your log message says "6 seconds", though).
Finer resolution:
If you'd like to have a finer resolution test, you can multiply myduration by 1000 using a technique like this:
mult1000 () {
    local floor=${1%.*}
    [[ $floor = "0" ]] && floor=''
    local frac='0000'
    [[ $floor != $1 ]] && frac=${1#*.}$frac
    echo ${floor}${frac:0:3}
}
Edit: This version of mult1000 properly handles values such as "0.234", "1", "2.", "3.5"
and "6.789". For values with more than three decimal places, the extra digits are truncated without rounding regardless of the value ("1.1119" becomes "1.111").
Your script with the changes I mentioned above and using mult1000 (with my own example time):
myduration=$(curl -s -o /dev/null http://192.168.50.1/mantisbt/view.php?id=1 -w %{time_total}); [[ $(mult1000 $myduration) -gt 3500 ]] && echo "`date +'%y%m%d%H%M%S'` took more than 3.5 seconds to load the page http://192.168.50.1/mantisbt/view.php?id=1 " >> /home/shantanu/speed_report.txt
Here it is broken into multiple lines (and simplified) to make it more readable here in this answer:
myduration=$(curl -s -o /dev/null http://example.com -w %{time_total})
[[ $(mult1000 $myduration) -gt 3500 ]] &&
echo "It took more than 3.5 seconds to load thttp://example.com" >> report.txt
