I want to write a very simple script: it should just find the newest version of a program, say svn, on my computer, and load the result into a variable, say mysvn.
So I wrote this script:
#!/bin/sh
mysvn="foobar"
best_ver=0
which -a svn | while read p
do
version=$("$p" --version | grep 'version ' | grep -oE '[0-9.]+' | head -1)
if [[ "$version" > "$best_ver" ]]
then
best_ver=$version
mysvn="$p"
fi
echo $mysvn
done
echo $mysvn
Very simple, in fact... but it does not work under rxvt (my pseudo-Linux terminal), version 2.7.10, running under XP: the final output string is foobar.
Does anybody know why I have this problem?
I have been writing scripts for the past few months, and this is the first time I have encountered such behaviour.
Note: I know how to make it work, with a few changes (just put the main lines into $() )
The reason this occurs is that the while loop is part of a pipeline, and (at least in bash) any shell commands in a pipeline always get executed in a subshell. When you set mysvn inside the loop, it gets set in the subshell; when the loop ends, the subshell exits and this value is lost. The parent shell's value of mysvn never gets changed. See BashFAQ #024 and this previous question.
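The effect is easy to reproduce in isolation (hypothetical variable names, nothing specific to svn):

```shell
#!/bin/sh
# The assignment happens in the subshell created for the pipeline,
# so the parent shell's value is unchanged once the loop ends.
x="before"
printf 'one line\n' | while read -r line
do
x="inside"
done
echo "$x"   # prints "before" in sh/bash, not "inside"
```

(In ksh and zsh the last element of a pipeline runs in the current shell, which is why scripts like this sometimes appear to work until they are moved to bash or dash.)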
The standard solution in bash is to use process substitution rather than a pipeline:
while
...
done < <(which -a svn)
But note that this is a bash-only feature, and you must use #!/bin/bash as your shebang for this to work.
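Putting it together, a sketch of the fixed script using that approach (note also that `[[ "$version" > "$best_ver" ]]` compares lexicographically, so 1.10 would sort before 1.9; `sort -V`, where your coreutils provides it, compares version-aware):

```shell
#!/bin/bash
mysvn="foobar"
best_ver=0
while read -r p
do
version=$("$p" --version | grep -oE '[0-9.]+' | head -1)
# sort -V picks the larger version number; plain string comparison would not
if [ "$(printf '%s\n' "$best_ver" "$version" | sort -V | tail -1)" = "$version" ] &&
   [ "$version" != "$best_ver" ]
then
best_ver=$version
mysvn="$p"
fi
done < <(which -a svn)   # process substitution: the loop runs in this shell
echo "$mysvn"
```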
Here on Ubuntu:
:~$ which -a svn | while read p
> do
> version=$("$p" --version | grep 'version ' | grep -oE '[0-9.]+' | head -1)
> echo $version
> done
.
so your version comes out as just ., which is not very nice.
I tried this, and I think it's what you're looking for:
:~$ which -a svn | while read p
> do
> version=$("$p" --version | grep -oE '[0-9.]+' | head -1)
> echo $version
> done
1.7.5
I am trying to check whether a WSL distribution exists or not; for example, wsl.exe -l -v outputs:
NAME STATE VERSION
Arch-Linux Running 2
* Ubuntu Running 2
docker-desktop Running 2
docker-desktop-data Running 2
I need to check whether docker-desktop-data or Arch exists.
Here is my small Bash script. I tested several different approaches; nothing worked.
# Distribution name: Ubuntu
wsl_distro_name="docker-desktop"
# WSL command to find the distribution name
wsl_command=`wsl.exe -l -v | grep -iq '^$wsl_distro_name' | awk '{print $2}'`
if [[ "$wsl_command" ]]; then
echo "Distro found"
exit 0
else
echo "Not found"
exit 1
fi
Updated answer:
The core issue behind this question has been fixed, or at least improved, in the latest WSL Preview release (0.64.0), but note that this is currently only available for Windows 11 users.
To avoid breaking older code that relies on (or works around) the issue, the fix is opt-in. Setting a WSL_UTF8 environment variable with a value of 1 (and only, in my testing, that value) will result in correct/expected output from wsl.exe.
Note that if using this environment variable inside WSL, you'll need to also add it to WSLENV so that it gets passed "back" to Windows through Interop.
For your use, for example, the following will now work:
export WSL_UTF8=1
WSLENV="$WSLENV":WSL_UTF8
wsl_distro_name="docker-desktop-data"
wsl.exe -l -v | grep -q "\s${wsl_distro_name}\s" && echo "Found" || echo "Not found"
For older WSL installations
Short answer:
wsl_distro_name="docker-desktop-data"
wsl.exe -l -v | iconv -f UTF-16 | grep -q "\s${wsl_distro_name}\s" && echo "Found" || echo "Not found"
Explanation:
There are a few things going on with your example:
First, the thing that probably has you stymied the most is a WSL bug (covered in more detail in this question) that causes the output to be in a mangled UTF-16 encoding. You can see this to some degree with wsl.exe | hexdump -C, which will show the null byte characters after every regular character.
The solution to that part is to run it through iconv -f utf16. For example:
wsl.exe -l -v | iconv -f UTF-16
I'm guessing you probably introduced some of the following errors into your code while trying to work around the preceding bug:
You have $wsl_distro_name in single quotes in your grep, which disables string interpolation in Bash. You need double-quotes there.
The ^ won't work at the beginning of the regex since there is whitespace (and/or an asterisk) in the first couple of characters of the wsl.exe -l -v output. Better to use "\s$wsl_distro_name\s" to find the distro name surrounded by whitespace.
This also prevents the expression from finding a "partial" distribution name. Without it, "docker-desktop" would match both "docker-desktop" and "docker-desktop-data".
grep -q suppresses output and only returns a status code: 0 when found, 1 when not. Since you are attempting to capture the grep output into $wsl_command, the result will always be empty. Either remove the -q to capture the output, or keep -q and test the status code.
Given your if statement, it seems like you may be expecting the output to be a status result anyway, but testing "$wsl_command" won't work for that, nor will capturing the output via backticks. That would look more like:
wsl_distro_name="docker-desktop-data"
wsl.exe -l -v | iconv -f UTF-16 | grep -q "\s${wsl_distro_name}\s"
if [[ $? -eq 0 ]]; then
echo Found
fi
The $? variable holds the exit code of the last command.
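Since if branches on a command's exit status directly, the explicit $? test can also be folded away; a sketch assuming the same pipeline:

```shell
# `if` runs the pipeline and branches on its exit status, so no $? needed
wsl_distro_name="docker-desktop-data"
if wsl.exe -l -v | iconv -f UTF-16 | grep -q "\s${wsl_distro_name}\s"; then
echo Found
fi
```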
Alternatively:
wsl_distro_name="docker-desktop-data"
wsl_distro_result=$(wsl.exe -l -v | iconv -f UTF-16 | grep -o "\s${wsl_distro_name}\s.*" | awk '{print $2}')
if [[ -n "$wsl_distro_result" ]]; then
echo "Found ${wsl_distro_name} and it is ${wsl_distro_result}."
else
echo "${wsl_distro_name} not found."
fi
Note that I added the -o option to grep in that snippet. The default distribution will have an asterisk in the first field, which means that we need to "normalize" it so that the second field for awk is always the Status.
Finally, I recommend using $() rather than backticks. See this question, but note that the POSIX spec says, "the backquoted variety of command substitution is not recommended."
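One concrete advantage of $() worth showing: it nests without escaping, while backticks need a backslash for each inner pair:

```shell
# $() nests cleanly:
echo "inner is: $(echo "$(echo hi)")"
# backticks need the inner pair escaped:
echo "inner is: `echo \`echo hi\``"
```

Both lines print the same thing; the second is much easier to get wrong.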
Side note: If for some reason you ever need to run this in Alpine, then make sure to install the gnu-libiconv package in order for iconv to handle this properly as well.
Take the following example:
ls -l | grep -i readme | ./myscript.sh
What I am trying to do is get ls -l | grep -i readme as a string variable in myscript.sh. So essentially I am trying to get the whole command before the last pipe to use inside myscript.sh.
Is this possible?
No, it's not possible.
At the OS level, pipelines are implemented with the pipe(), dup2(), fork() and execve() syscalls. This doesn't provide any way to tell a program what the commands connected to its stdin are. Indeed, there's not guaranteed to be a string representing a pipeline of programs being used to generate stdin at all, even if your stdin really is a pipe connected to another program's stdout; that pipeline could have been set up by programs calling execve() and friends directly.
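You can observe this from inside a script. On Linux (this relies on /proc, so it is Linux-specific), stdin is just an anonymous pipe that carries no record of its writer:

```shell
#!/bin/sh
# Run as: ls -l | ./show-stdin.sh
# Prints something like "pipe:[123456]" -- an anonymous pipe,
# with no trace of the command feeding it.
readlink /proc/self/fd/0
```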
The best available workaround is to invert your process flow.
It's not what you asked for, but it's what you can get.
#!/usr/bin/env bash
printf -v cmd_str '%q ' "$@" # generate a shell command representing our arguments
while IFS= read -r line; do
printf 'Output from %s: %s\n' "$cmd_str" "$line"
done < <("$@") # actually run those arguments as a command, and read from it
...and then have your script start the things it reads input from, rather than receiving them on stdin.
...thereafter, ./yourscript ls -l, or ./yourscript sh -c 'ls -l | grep -i readme'. (Of course, never use this except as an example; see ParsingLs).
It can't be done generally, but using the history command in bash it can maybe sort of be done, provided certain conditions are met:
history has to be turned on.
Only one shell has been running, or accepting new commands, (or failing that, running myscript.sh), since the start of myscript.sh.
Since command lines with leading spaces are, by default, not saved to the history, the invoking command for myscript.sh must have no leading spaces; or that default must be changed -- see Get bash history to remember only the commands run with space prefixed.
The invoking command needs to end with a &, because without it the new command line wouldn't be added to the history until after myscript.sh was completed.
The script needs to be a bash script, (it won't work with /bin/dash), and the calling shell needs a little prep work. Sometime before the script is run first do:
shopt -s histappend
PROMPT_COMMAND="history -a; history -n"
...this makes the bash history heritable. (Code swiped from unutbu's answer to a related question.)
Then myscript.sh might go:
#!/bin/bash
history -w
printf 'calling command was: %s\n' \
"$(grep "$0" ~/.bash_history | tail -1)"
Test run:
echo googa | ./myscript.sh &
Output, (minus the "&" associated cruft):
calling command was: echo googa | ./myscript.sh &
The cruft can be halved by changing "&" to "& fg", but the resulting output won't include the "fg" suffix.
I think you should pass it as one string parameter, like this:
./myscript.sh "$(ls -l | grep -i readme)"
I think that it is possible, have a look at this example:
#!/bin/bash
result=""
while read -r line; do
result="$result$line"
done
echo "$result"
Now run this script using a pipe, for example:
ls -l /etc | ./script.sh
I hope that will be helpful for you :)
I am pretty inexperienced with bash. I am trying to save the last run command as a variable, this is what I have:
#!/bin/bash
prev=$(fc -ln -1)
echo $prev
This doesn't print anything. Once I save the last command, I plan on coding this line:
valgrind --leak-check=full $prev
So what am I doing wrong?
fc references the history of the current shell. When run inside a shell script it refers to history of that new shell. If you use an alias or shell function, then fc will operate within the current shell. You can also source a script file for the same effect.
$ cat go
#!/bin/bash
set -o history
echo a b c
fc -nl -1
$ ./go
a b c
echo a b c
$ alias zz='fc -nl -1 | tr a-z A-Z'
$ zz
ALIAS ZZ='FC -NL -1 | TR A-Z A-Z'
I'd use aliases rather than a script, something like:
alias val='valgrind --leak-check=full'
alias vallast='val $(history 2 | head -1 | cut -d" " -f3-)'
As explained in the link, you can add these lines to your .bashrc.
BTW, the latter can also be executed as:
val !!
val !-1 #same
Or if you want to valgrind the program you ran 2 commands ago:
val !-2
These history commands are explained here.
I am writing a shell script that works with files. I need to find files and print them along with some information that is important to me. That's no problem... But then I wanted to add some "features" and make it work with arguments as well. One of the features is ignoring files that match a pattern (like *.c, to ignore all C files). So I set a variable and put the string into it.
#!/bin/sh
command="grep -Ev \"$2\"" # in 2nd argument is pattern, that will be ignored
echo "find $PWD -type f | $command | wc -l" # printing command
file_num=$(find $path -type f | $command | wc -l) # saving number of files
echo "Number of files: $file_num"
But the command somehow ignores my variable and counts all files. When I run the same command directly in bash or sh, I get a different (correct) number of files. I thought it might just be bash, but on another machine running ksh I have the same problem, and changing #!/bin/sh to #!/bin/bash did not help either.
The command line, including its arguments, is processed by the shell before it is executed. So when you run the script, the command becomes grep -Ev "c" with literal quote characters in the argument, whereas when you type grep -Ev "c" at the prompt, the shell interprets it as grep -Ev c.
You can use this command to check it: echo grep -Ev "c".
So just remove the quotes in $command and everything will be OK. :)
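Removing the quotes works for a one-word pattern, but breaks if the pattern contains spaces. A more robust sketch (bash, not plain sh) stores the command as an array, so each argument survives expansion intact:

```shell
#!/bin/bash
# "${cmd[@]}" expands to one word per array element, so the pattern
# stays a single argument even if it contains spaces.
pattern="$2"
cmd=(grep -Ev "$pattern")
file_num=$(find "$PWD" -type f | "${cmd[@]}" | wc -l)
echo "Number of files: $file_num"
```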
You only need to modify the command value:
command="grep -Ev "$2
I'm writing a simple script at work and I can't figure out why it will not output the results to another file.
/tmp/system stores the node list
#!/bin/bash
$results=restest.txt
for i in $(cat /tmp/system); do
ping -c 1 $i
if [ $i = 0 ]; then
ssh $i ps -ef | grep ops | echo > $results
fi
done
echo does not print from stdin, but rather its arguments (or a newline if there are no arguments). So
echo file.txt
only prints 'file.txt', not the content of the file. Therefore your code only writes a newline to the file. You can use cat for printing stdin to stdout, but here it is useless, since you can pipe the grep output directly to the file:
ssh $i ps -ef | grep ops > $results
First, thank you for editing your code (it's clearer like this :)
I have two or three pieces of advice:
1- when you want to store a value in a variable, don't use the "$" symbol; that symbol is used to get the variable's value
ex:
MYVAR=3
echo MYVAR   # this will print "MYVAR"
echo $MYVAR  # this will print "3"
2- always quote your values, especially if a value comes from another command
3- to fix your script, you need to quote the command executed on the remote machine,
then redirect the output to your file
ex:
ssh user@machineA "ls > file.txt"  # file.txt is created on machineA
ssh user@machineA "ls" > file.txt  # file.txt is created on YOUR machine
so you can simply replace your last line with
ssh $i "ps -ef | grep ops" > $results
Also, use -ne in your numeric test (see the Bash classic test).
good luck
There are several errors. First, don't iterate over files like this. Second, i is the name of the host, not the exit status of ping. Third, the echo is unnecessary (and does not read from standard input). Lastly, $ is only used for expanding a parameter, not assigning to it.
#!/bin/bash
results=restest.txt
while read -r host; do
if ping -c 1 "$host"; then
ssh "$host" ps -ef | grep ops > "$results"
fi
done < /tmp/system
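One caveat with that last line: > "$results" truncates the file on every iteration, so only the output for the last reachable host survives. Redirecting the whole loop once keeps everything; a sketch:

```shell
#!/bin/bash
results=restest.txt
while read -r host; do
if ping -c 1 "$host"; then
ssh "$host" ps -ef | grep ops
fi
done < /tmp/system > "$results"   # one redirection for the entire loop
```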