Loop over the first 10 history commands in Bash

I should use an until loop to get the first 10 commands from history, line by line.
Tried something like:
counter=0
until [ $counter -gt 10 ]
do
echo !$counter
((counter++))
done
But the output is just the literal text !0, !1, ... each time, not the commands themselves.
The main issue is how to get a specific line from history inside the loop.

Csh-style history expansion is an interactive feature; it does not work in scripts.
It looks like you are simply looking for history $HISTSIZE | head -n 10

There are a few simple ways. Try this -
while read -r cmd; do if ((ctr++ < 10)); then echo "$cmd"; fi; done < "$HISTFILE"
or
mapfile -t h < <(history | head -10); for c in {0..9}; do echo "${h[c]}"; done
Edit:
The terminal you are using at tutorialspoint kinda sucks.
Try it this way, and pay attention to why it matters.
history | while read -r cmd; do if ((ctr++ < 10)); then echo "$cmd"; fi; done
Specifically, bash: /tmp/.bash_history: Permission denied
They are apparently only allowing access to the history file through the history builtin.
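A pitfall worth knowing about mapfile: each stage of a pipeline runs in a subshell, so an array filled at the end of a pipeline disappears when the pipeline finishes. A minimal sketch, using printf as a stand-in for history output:

```shell
#!/bin/bash
# mapfile at the end of a pipeline fills the array in a subshell, which
# exits immediately; the parent shell never sees the data.
printf '%s\n' one two three | mapfile -t lost
echo "via pipeline: ${#lost[@]} lines"              # prints 0

# Process substitution keeps mapfile in the current shell.
mapfile -t kept < <(printf '%s\n' one two three)
echo "via process substitution: ${#kept[@]} lines"  # prints 3
```

(Setting shopt -s lastpipe in a non-interactive script is another way around this.)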

Related

Optimizing a script that lists available commands with manual pages

I'm using this script to generate a list of the available commands with manual pages on the system. Running this with time shows an average of about 49 seconds on my computer.
#!/usr/local/bin/bash
for x in $(for f in $(compgen -c); do which $f; done | sort -u); do
dir=$(dirname $x)
cmd=$(basename $x)
if [[ ! $(man --path "$cmd" 2>&1) =~ 'No manual entry' ]]; then
printf '%b\n' "${dir}:\n${cmd}"
fi
done | awk '!x[$0]++'
Is there a way to optimize this for faster results?
This is a small sample of my current output. The goal is to group commands by directory. This will later be fed into an array.
/bin: # directories generated by $dir
[ # commands generated by $cmd (compgen output)
cat
chmod
cp
csh
date
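As an aside, the awk '!x[$0]++' stage at the end of the pipeline is what deduplicates this output; the idiom keeps only the first occurrence of each line while preserving order. In isolation, with illustrative input:

```shell
# x[$0]++ is 0 (false) the first time a line is seen, so !x[$0]++ is
# true exactly once per distinct line.
printf '%s\n' /bin: cat cat cp /bin: | awk '!x[$0]++'
# prints: /bin:  cat  cp  (each on its own line)
```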
Going for a complete disregard of built-ins here; that's what which does, anyway. The script is not thoroughly tested.
#!/bin/bash
shopt -s nullglob # need this for "empty" checks below
MANPATH=${MANPATH:-/usr/share/man:/usr/local/share/man}
IFS=: # chunk up PATH and MANPATH, both colon-delimited
# just look at the directory!
has_man_p() {
local needle=$1 manp manpp result=()
for manp in $MANPATH; do
# man? should match man0..man9 and a bunch of single-char things
# do we need 'man?*' for longer suffixes here?
for manpp in "$manp"/man?; do
# assumption made for filename formats. section not checked.
result=("$manpp/$needle".*)
if (( ${#result[@]} > 0 )); then
return 0
fi
done
done
return 1
}
unset seen
declare -A seen # for deduplication
for p in $PATH; do
printf '%b:\n' "$p" # print the path first
for exe in "$p"/*; do
cmd=${exe##*/} # the sloppy basename
if [[ ! -x $exe || ${seen[$cmd]} == 1 ]]; then
continue
fi
seen["$cmd"]=1
if has_man_p "$cmd"; then
printf '%b\n' "$cmd"
fi
done
done
Time on Cygwin with a truncated PATH (the full one with Windows has too many misses for the original version):
$ export PATH=/usr/local/bin:/usr/bin
$ time (sh ./opti.sh &>/dev/null)
real 0m3.577s
user 0m0.843s
sys 0m2.671s
$ time (sh ./orig.sh &>/dev/null)
real 2m10.662s
user 0m20.138s
sys 1m5.728s
(Caveat for both versions: most stuff in Cygwin's /usr/bin comes with a .exe extension)
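The core trick in has_man_p, testing for a file's existence by globbing into an array under nullglob, also works in isolation. A sketch with illustrative paths:

```shell
#!/bin/bash
# With nullglob, an unmatched glob expands to nothing (empty array);
# a matched glob fills the array. No external commands needed.
shopt -s nullglob
mkdir -p /tmp/mandemo/man1
touch /tmp/mandemo/man1/ls.1.gz

matches=(/tmp/mandemo/man1/ls.*)
(( ${#matches[@]} > 0 )) && echo "ls: manual found"

matches=(/tmp/mandemo/man1/nosuchcmd.*)
(( ${#matches[@]} == 0 )) && echo "nosuchcmd: no manual"

rm -rf /tmp/mandemo
```

This is the main source of the speedup: the original forks man once per command, while the glob test costs no processes at all.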

Collecting Cron Tab entries across servers/users

I've been struggling with this for a couple of days, trying to create a space-delimited list of $host $useraccount $crontab entries.
I've tried a couple of different ways, each ending in a different level of disaster. The closest I've come is this; someone please point out the obvious thing I'm missing.
#!/usr/bin/bash
#Global Crontab Inventory for Scripts
#
outputfile="/localpath/cronoutput.txt"
LPARLIST=/pathto/LPAR.txt
while read LPAR;
do
ping -c 1 $LPAR > /dev/null
if [ $? -eq 0 ]; then
for user in $(ssh -n $LPAR /opt/freeware/bin/ls /var/spool/cron/crontabs);
do
while read line;
do
echo "$LPAR $user $line"
done <"$(ssh -n "$LPAR" /opt/freeware/bin/tail -n +29 /var/spool/cron/crontabs/$user)"
done
fi
done <$LPARLIST
It seems to be complaining because it treats the output of the tail as a file name to open.
./crons.sh: line 11: (Several pages of cropped cron entries): File name too long
./crons.sh: line 11: : No such file or directory
./crons.sh: line 11: #
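The error output is the clue: done <"$(ssh ...)" asks bash to open a file whose name is the entire ssh output, hence "File name too long". To feed a command's output to a while loop, pipe it in or use process substitution. A minimal illustration, with printf standing in for the ssh command:

```shell
#!/bin/bash
# < "$(cmd)" redirects from a file NAMED after cmd's output -- almost
# never what you want. Process substitution hands the loop the output
# itself through a readable pipe.
while read -r line; do
    echo "got: $line"
done < <(printf '%s\n' alpha beta)
# prints: got: alpha  /  got: beta
```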
This is working for me.
#!/bin/bash
#Global Crontab Inventory for Scripts
#
outputfile="/localpath/cronoutput.txt"
LPARLIST=LPAR.txt
cat $LPARLIST |
while read LPAR;
do
ping -c 1 $LPAR > /dev/null
if [ $? -eq 0 ]; then
for user in $(ssh -n root@$LPAR ls /var/spool/cron/crontabs);
do
ssh -n "root@$LPAR" tail -n +29 /var/spool/cron/crontabs/$user |
while read line;
do
echo "$LPAR $user $line"
done
done
fi
done
I prefer cat xxx | while ... instead of redirecting input as you did. In theory it should be the same. I did not spot anything specifically wrong, and I did not really change anything -- just rearranged what you had.
The advantage of the cat xxx | while ... technique is that you can insert commands between the cat and the while. In this case, I would not do the tail -n +29, because you are guessing that the first 29 lines are junk, and that might not be true. Rather, I would just cat the file and then egrep out the lines that start with a hash (#). Yes, the cat is redundant, but who really cares; it is more general and easier to add and delete things.
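The strip-the-comments approach can be sketched like this; the file name and contents are illustrative:

```shell
#!/bin/bash
# Filter out comment and blank lines instead of assuming a fixed-size
# header, so nothing real is lost if the header length differs.
cat > /tmp/crontab.demo <<'EOF'
# DO NOT EDIT THIS FILE

0 5 * * * /usr/local/bin/backup.sh
# nightly rotate
30 2 * * 0 /usr/local/bin/rotate.sh
EOF

grep -Ev '^[[:space:]]*(#|$)' /tmp/crontab.demo
rm -f /tmp/crontab.demo
```

Only the two real cron entries survive the filter, regardless of how many comment lines precede them.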
I don't have the /opt packages installed, and I would not depend on them unless absolutely necessary; you are increasing dependencies. So I just used the local ls and tail. I also added an explicit root@, but you don't need that; it just simplified my testing.
Perhaps the ls is overflowing. Try this one:
#!/bin/bash
#Global Crontab Inventory for Scripts
#
outputfile="/localpath/cronoutput.txt"
LPARLIST=LPAR.txt
cat $LPARLIST |
while read LPAR;
do
ping -c 1 $LPAR > /dev/null
if [ $? -eq 0 ]; then
ssh -n root@$LPAR ls /var/spool/cron/crontabs |
while read user
do
ssh -n "root@$LPAR" tail -n +29 /var/spool/cron/crontabs/$user |
while read line;
do
echo "$LPAR $user $line"
done
done
fi
done
Hope this helps...

bash call script with variable

What I want to achieve is the following :
I want the subtitles for my TV Show downloaded automatically.
The script "getSubtitle.sh" is run as soon as the show is downloaded, but it can happen that no subtitles are released yet.
So this is what I am doing to counter that:
Creating a file each time "getSubtitle.sh" is run. It contains the location of the script with its arguments, for example:
/Users/theo/logSubtitle/getSubtitle.sh "The Walking Dead - 5x10 - Them.mp4" "The.Walking.Dead.S05E10.480p.HDTV.H264.mp4" "/Volumes/Window HD/Série/The Walking Dead"
If a subtitle has been found, this file will contain only this line; if no subtitle has been found, the file will have 2 lines (the first being "no subtitle downloaded", and the second being the path to the script as explained above).
Now, once I get this, I'm planning to run a cron everyday that will do the following :
Remove all files that have only 1 line (subtitle found), and execute the script again for the remaining files. Here is the full script:
cd ~/logSubtitle/waiting/
for f in *
do nbligne=$(wc -l $f | cut -c 8)
if [ "$nbligne" = "1" ]
then
rm $f
else
command=$(sed -n "2 p" $f)
sh $command 3>&1 1>&2 2>&3 | grep down > $f ; echo $command >> $f
fi
done
This is unfortunately not working; I have the feeling that the script is not being called.
When I replace $command with the actual line from the text file, it works.
I am sure that $command matches the line, because of the "echo $command >> $f" at the end of my script.
So I really don't get what I am missing here. Any ideas?
Thanks.
I'm not sure what you're trying to achieve with the cut -c 8 part of wc -l $f | cut -c 8: it selects only the 8th character of the output of wc -l, which depends on how wc pads the count.
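A quick way to see why that is fragile; the file name is illustrative:

```shell
#!/bin/bash
# wc -l FILE prints the count plus the file name, with implementation-
# dependent padding; picking character 8 out of that is guesswork.
printf 'a\nb\n' > /tmp/two.demo
wc -l /tmp/two.demo      # e.g. "2 /tmp/two.demo"
# Redirecting instead gives the bare count: no file name, no padding.
wc -l < /tmp/two.demo
rm -f /tmp/two.demo
```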
A suggestion: to check whether your file contains 1 or 2 lines (and since you'll need the content of the second line, if any, anyway), use mapfile. This will slurp the file into an array, one line per element. You can use the option -n 2 to read at most 2 lines. This will be much more efficient, safe and nice than your solution:
mapfile -t -n 2 ary < file
Then:
if ((${#ary[@]}==1)); then
printf 'File contains one line only: %s\n' "${ary[0]}"
elif ((${#ary[@]}==2)); then
printf 'File contains (at least) two lines:\n'
printf ' %s\n' "${ary[@]}"
else
printf >&2 'Error, no lines found in file\n'
fi
Another suggestion: use more quotes!
With this, a better way to write your script:
#!/bin/bash
dir=$HOME/logSubtitle/waiting/
shopt -s nullglob
for f in "$dir"/*; do
mapfile -t -n 2 ary < "$f"
if ((${#ary[@]}==1)); then
rm -- "$f" || printf >&2 "Error, can't remove file %s\n" "$f"
elif ((${#ary[@]}==2)); then
{ sh -c "${ary[1]}" 3>&1 1>&2 2>&3 | grep down; echo "${ary[1]}"; } > "$f"
else
printf >&2 'Error, file %s contains no lines\n' "$f"
fi
done
After the done keyword you can even add the redirection 2>> logfile to a log file if you wish. Make sure the cron job is run with your user: check crontab -l and, if needed, edit it with crontab -e.
Use eval instead of sh. The reason it works with eval and not sh is the number of passes used to evaluate variables: sh treats the string stored by the sed command as a command to execute verbatim, while eval has the current shell expand and re-parse the string first and then execute the result.
Briefly explained.
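The difference in evaluation passes can be shown directly; the stored command string here is illustrative:

```shell
#!/bin/bash
# The stored command contains quote characters. Plain expansion only does
# word splitting, so the quotes stay literal inside the arguments; eval
# re-parses the whole string, so the quotes group "a b" into one argument.
command='printf [%s]\n "a b"'
$command          # prints ["a] and [b"] -- quotes became literal data
eval "$command"   # prints [a b]         -- quotes honored on the second pass
```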

wrong output because of backgrounded processes

If I run the script with ./test.sh 100, I do not get the output 100, because the increments run as concurrent background processes. What do I have to do to get the expected output? (I must not change test.sh, though.)
test.sh
#!/bin/bash
FILE="number.txt"
echo "0" > $FILE
for (( x=1; x<=$1; x++)); do
exec "./increment.sh" $FILE &
done
wait
cat $FILE
increment.sh
#!/bin/bash
value=$(< "$1")
let value++
echo $value > "$1"
EDIT
Well I tried this:
#!/bin/bash
flock $1 --shared 2>/dev/null
value=$(< "$1")
let value++
echo $value > "$1"
Now I get something like 98 or 99 every time I run ./test.sh 100.
It is not working reliably, and I do not know how to fix it.
If test.sh really cannot be improved, then each instance of increment.sh must serialize its own access to $FILE.
Filesystem locking is the obvious solution for this under UNIX. However, there is no shell builtin to accomplish this. Instead, you must rely on an external utility program like flock, setlock, or chpst -l|-L. For example:
#!/bin/bash
(
flock 100 # Lock *exclusively* (not shared)
value=$(< "$1")
let value++
echo $value > "$1"
) 100>>"$1" # A note of caution
A note of caution: using the file you'll be modifying as a lockfile gets tricky quickly; it's easy to truncate a file in shell when you didn't mean to, and the mixing of access modes above might offend some people, but this version avoids the gross mistakes.
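As a quick check of the pattern, here is a sketch that pushes 20 concurrent increments through the same flock discipline; with the lock held around each read-increment-write cycle, no update is lost. The file path is illustrative, and flock from util-linux is assumed to be installed:

```shell
#!/bin/bash
# 20 background subshells each take an exclusive lock on fd 9 (opened in
# append mode so the open itself never truncates) before updating the file.
f=/tmp/counter.demo.$$
echo 0 > "$f"
for i in {1..20}; do
    (
        flock 9
        value=$(< "$f")
        echo $((value + 1)) > "$f"
    ) 9>>"$f" &
done
wait
cat "$f"    # prints 20
rm -f "$f"
```

Without the flock line, the same run typically loses a few increments, which is exactly the 98/99 symptom described above.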

How to change parameter in a file, only if the file exists and the parameter is not already set?

#!/bin/bash
# See if registry is set to expire updates
filename=hostnames
> test.log
PARAMETER=Updates
FILE=/etc/.properties
CODE=sudo if [ ! -f $FILE] && grep $PARAMETER $FILE; then echo "File found, parameter not found."
#CODE=grep $PARAMETER $FILE || sudo tee -a /etc/.properties <<< $PARAMETER
while read -r -a line
do
hostname=${line//\"}
echo $hostname":" >> test.log
#ssh -n -t -t $hostname "$CODE" >> test.log
echo $CODE;
done < "$filename"
exit
I want to set "Updates 30" in /etc/.properties on about 50 servers if:
The file exists (not all servers have the software installed)
The parameter "Updates" is not already set in the file (e.g. in case of multiple runs)
I am a little puzzled so far how, because I am not sure if this can be done in 1 line of bash code. The rest of the script works fine.
OK, here's what I think would be a solution for you, as explained in this article: http://www.unix.com/shell-programming-scripting/181221-bash-script-execute-command-remote-servers-using-ssh.html
Invoke a script which contains the commands that you want executed on the remote server.
Code script 1:
while read -r -a line
do
ssh ${line} "bash -s" < script2
done < "$filename"
To replace a line in a text file, you can use sed (http://www.cyberciti.biz/faq/unix-linux-replace-string-words-in-many-files/)
Code script 2:
PARAMETER=Updates
FILE=/etc/.properties
NEWPARAMETER='Updates 30' ### what you want to write there
if [ ! -f "$FILE" ] || grep -q "$PARAMETER" "$FILE"; then exit; fi
sed -i "s/$PARAMETER/$NEWPARAMETER/g" "$FILE"
So, I'm not certain this covers all of your use case, but I hope it helps you out. If there is anything else, feel free to ask!
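For the specific goal stated in the question (add "Updates 30" only when the file exists and no Updates line is already present), a test-then-append sketch may be closer than a sed replace, and it is safe to run repeatedly. Paths and contents here are illustrative:

```shell
#!/bin/bash
FILE=/tmp/props.demo     # stand-in for /etc/.properties
PARAMETER=Updates
printf 'Color blue\n' > "$FILE"

# Append only if the file exists and the parameter is absent; a second
# run changes nothing, so repeating this across 50 servers is harmless.
if [ -f "$FILE" ] && ! grep -q "^$PARAMETER" "$FILE"; then
    echo "$PARAMETER 30" >> "$FILE"
fi
cat "$FILE"    # prints: Color blue  /  Updates 30
rm -f "$FILE"
```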
