How do I store the output of a bash command in a variable? [duplicate] - bash

This question already has answers here:
How do I set a variable to the output of a command in Bash?
(15 answers)
Closed 8 years ago.
I'm trying to write a simple script for killing a process. I've already read Find and kill a process in one line using bash and regex so please don't redirect me to that.
This is my code:
LINE=$(ps aux | grep '$1')
PROCESS=$LINE | awk '{print $2}'
echo $PROCESS
kill -9 $PROCESS
I want to be able to run something like
sh kill_proc.sh node and have it run
kill -9 node
But instead what I get is
kill_process.sh: line 2: User: command not found
I found out that when I log $PROCESS it is empty.
Does anyone know what I'm doing wrong?

PROCESS=$(echo "$LINE" | awk '{print $2}')
or
PROCESS=$(ps aux | grep "$1" | awk '{print $2}')
I don't know why you're getting the error you quoted. I can't reproduce it. When you say this:
PROCESS=$LINE | awk '{print $2}'
the shell expands it to something like this:
PROCESS='mayoff 10732 ...' | awk '{print $2}'
(I've shortened the value of $LINE to make the example readable.)
The first subcommand of the pipeline sets variable PROCESS; this variable-setting command has no output so awk reads EOF immediately and prints nothing. And since each subcommand of the pipeline runs in a subshell, the setting of PROCESS takes place only in a subshell, not in the parent shell running the script, so PROCESS is still not set for later commands in your script.
(Note that some versions of bash can run the last subcommand of the pipeline in the current shell instead of in a subshell, but that doesn't affect this example.)
Instead of setting PROCESS in a subshell and feeding nothing to awk on standard input, you want to feed the value of LINE to awk and store the result in PROCESS in the current shell. So you need to run a command that writes the value of LINE to its standard output, and connects that standard output to the standard input of awk. The echo command can do this (or the printf command, as chepner pointed out in his answer).
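A quick way to see this subshell behaviour for yourself (a small illustration, not part of the original script; VALUE is just a throwaway name):
VALUE=hello | awk '{print $1}'   # the assignment runs in a subshell; awk sees EOF and prints nothing
echo "${VALUE:-unset}"           # prints "unset": the parent shell never saw the assignment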

You need to use echo (or printf) to actually put the value of $LINE onto the standard input of the awk command.
LINE=$(ps aux | grep "$1")
PROCESS=$(echo "$LINE" | awk '{print $2}')
echo $PROCESS
kill -9 $PROCESS
There's no need to use LINE; you can set PROCESS with a single line:
PROCESS=$(ps aux | grep "$1" | awk '{print $2}')
or better, skip the grep:
PROCESS=$(ps aux | awk -v pname="$1" '$11 ~ pname {print $2}')
Finally, don't use kill -9; that's a last resort for debugging faulty programs. For any program that you didn't write yourself, kill "$PROCESS" should be sufficient.
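Putting those pieces together, a corrected kill_proc.sh could look something like this (a sketch only; the extra grep -v grep, which the original script didn't have, keeps grep from matching its own process):
#!/bin/bash
# kill_proc.sh -- kill every process whose ps line matches $1
PROCESS=$(ps aux | grep "$1" | grep -v grep | awk '{print $2}')
echo $PROCESS
kill $PROCESS
Run it with bash kill_proc.sh node (or make it executable and use ./kill_proc.sh node). Be aware that, as with the original, grep "$1" can also match the script's own command line, since the argument appears in it; pgrep avoids that class of problem.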

Related

How to use bash tail command inside a custom pipe command script?

I want to use tail in my custom pipe command.
For example, I want to execute this command:
>ls -1 | tail -n 1 | awk '{print "last file is "$1}'
>last file is test.txt
And I want to make it short by making my own custom script. It looks like this:
>ls -1 | myscript
>last file is test.txt
I know myscript can get input from "ls -1" by this code:
while read line; do
echo last file is $line
done
But I don't know how to use "tail -n 1" in the custom pipe command code above.
Is there a way to use a pipe command in another pipe command script?
Or do I have to implement the code which does the same process as "tail -n 1" myself?
I hope bash has some solution for this.
Try putting just this in myscript
tail -n 1 | awk '{print "last file is "$1}'
This works because the first command (tail) consumes the stdin of your script. In general, scripts behave as though you had typed their contents as-is into the terminal.
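For instance, myscript could contain nothing but this (a minimal sketch; the shebang line is an assumption):
#!/bin/bash
# myscript: read the pipeline's stdin, keep the last line, and label it
tail -n 1 | awk '{print "last file is "$1}'
After chmod +x myscript, running ls -1 | ./myscript (or putting the script somewhere on your PATH) prints the same "last file is ..." line as the long pipeline.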

How to remove the username/hostname line from an output on Korn Shell?

I run the command
df -gP /data1 /data2 | grep -v File | awk '{print $1}' |
awk -F/dev/ '$0=$2' | tr '\n' '
on the AIX shell (ksh) and it prints the output below:
lv_data01 lv_data02 root#testhost:/
However, I would like the output to be printed this way. Could someone help?
lv_data01 lv_data02
Using grep … | awk … | awk … is not necessary; a single awk could do the whole job. So could sed and it might even be easier. I'd be tempted to deal with the spacing by using:
x=$(df … | sed …); echo $x
The tr command, once corrected, replaces newlines with spaces, so the prompt follows without a newline before it. The ; echo suggestion adds the missing newline; the echo $x suggestion (note no double quotes) does too.
As for the sed command:
sed -n '/File/!{ s/[[:space:]].*//; s%^.*/dev/%%p; }'
Don't print anything by default
If the line doesn't match File (doing the work of grep -v):
remove the first space (blank or tab) and everything after it (doing the work of awk '{print $1}')
replace everything up to /dev/ with nothing and print (doing the work of awk -F/dev/ '{$0=$2}')
The command substitution and capture, followed by echo, deals with spaces and newlines.
So, my suggested solution is:
x=$(df -gP /data1 /data2 | sed -n '/File/!{ s/[[:space:]].*//; s%^.*/dev/%%p; }'); echo $x
You could add unset x after the echo if you are going to be using this directly in the shell and not in a shell script. If it'll be encapsulated in a shell script, you don't have to worry about it.
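To see what the unquoted echo does with the whitespace (a small illustration; the two sample names are made up):
x=$(printf 'lv_data01\nlv_data02\n')
echo $x     # unquoted: word splitting joins the lines -> lv_data01 lv_data02
echo "$x"   # quoted: the embedded newline would be preserved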
I'm blithely assuming the output from df -gP won't contain a path such as this, with two occurrences of /dev:
/who/knows/dev/lv_data01/dev/bin
If that's a real problem, you can fix the sed script, but I don't think it will be. It's one thing the second awk script in the question handles differently.

bash variable not available after running script [duplicate]

This question already has answers here:
Global environment variables in a shell script
(7 answers)
Closed 5 years ago.
I have a shell script that assigns my IP address to a variable, but after running the script, I cannot access the variable in bash. If I put an echo in the script, it will print the variable, but it does not save it after the script is done running.
Is there a way to change the script to access it after it runs?
ip=$(/sbin/ifconfig | grep "inet " | awk '{print $2}' | grep -v 127 | cut -d":" -f2)
I am using terminal on a Mac.
A script, by default, runs in a child process, which means the current (calling) shell cannot see its variables.
You have the following options:
Make the script output the information (to stdout), so that the calling shell can capture it and assign it to a variable of its own. This is probably the cleanest solution.
ip=$(my-script)
Source the script to make it run in the current shell as opposed to a child process. Note, however, that all modifications to the shell environment you make in your script then affect the current shell.
. my-script # any variables defined (without `local`) are now visible
Refactor your script into a function that you define in the current shell (e.g., by placing it in ~/.bashrc); again, all modifications made by the function will be visible to the current shell:
# Define the function
my-func() { ip=$(/sbin/ifconfig | grep "inet " | awk '{print $2}' | grep -v 127 | cut -d":" -f2); }
# Call it; $ip is implicitly defined when you do.
my-func
As an aside: You can simplify your command as follows:
/sbin/ifconfig | awk '/inet / && $2 !~ /^127/ { print $2 }'
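For example, combining that aside with option 1 above, my-script could contain just the pipeline and the caller captures its output (a sketch; the shebang and file location are assumptions):
#!/bin/bash
# my-script: print the non-loopback IPv4 address(es) to stdout
/sbin/ifconfig | awk '/inet / && $2 !~ /^127/ { print $2 }'
Then, in the interactive shell: ip=$(./my-script)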

Trying to kill processes in bash - code embedded in [...] not run?

here is what I want to do
while ["ps a | grep '[m]ono' | awk '{print $1}'" != ""] ; do
kill ps a | grep '[m]ono' | awk '{print $1}'
done
meaning if the grep returns nothing, don't try a kill.
The thing is, I've always been lost with expression evaluation in bash; sometimes I use " " around something, sometimes it's eval, sometimes it's ''.
Could someone explain to me how to write the condition in my loop and explain the difference between the above? I'm used to finding the working version through many tries, and it feels like a huge waste of time.
Best choice: Do something else.
Utilities already exist for this purpose. Example:
killall mono
pkill mono
...or, even better, something targeted to the specific executable you want to terminate:
fuser -k /path/to/something.exe
...which would kill only programs with a file handle on that specific executable, rather than all applications running with mono on the machine.
...but, to explain the bugs:
There are two things wrong here: Missing command substitutions, and missing whitespace.
Missing whitespace:
["ps a | grep '[m]ono' | awk '{print $1}'" != ""]
...is literally trying to run a command with a name starting with [ps, as in, looking in the PATH for...
/bin/[ps\ a
/usr/bin/[ps\ a
...etc. [ is a command, and needs a space after its name like any other command. Thus:
[ "ps a | grep '[m]ono' | awk '{print $1}'" != "" ]
...fixes this problem (but leaves another one).
Missing command substitutions:
[ "ps a | grep '[m]ono' | awk '{print $1}'" != "" ]
...is comparing a string that starts with "ps a" to ""; it does not compare the output of running a command that starts with ps a. To do that, you'd instead run:
[ "$(ps a | grep '[m]ono' | awk '{print $1}')" != "" ]
The content of $(...) is replaced with the output of the command within; thus, running your pipeline and comparing its output to an empty string.
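With both fixes applied, the loop from the question would read something like this (a sketch; it keeps the original [m]ono pattern and, like the original, assumes grep's matches really are the processes you want to kill):
while [ "$(ps a | grep '[m]ono' | awk '{print $1}')" != "" ]; do
    kill $(ps a | grep '[m]ono' | awk '{print $1}')
done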

How to get process id from process name?

I'm trying to create a shell script getting the process id of the Skype app on my Mac.
ps -clx | grep 'Skype' | awk '{print $2}' | head -1
The above is working fine, but there are two problems:
1)
The grep command would get all processes if their names just contain "Skype". How can I ensure that it only gets a result if the process name is exactly Skype?
2)
I would like to make a shell script from this, which can be used from the terminal but the process name should be an argument of this script:
#!/bin/sh
ps -clx | grep '$1' | awk '{print $2}' | head -1
This isn't returning anything. I think this is because the $2 in the awk is treated as an argument too. How can I solve this?
Your ps -clx output looks like this:
UID PID PPID F CPU PRI NI SZ RSS WCHAN S ADDR TTY TIME CMD
501 185 172 104 0 31 0 2453272 1728 - S ffffff80145c5ec0 ?? 0:00.00 httpd
501 303 1 80004004 0 31 0 2456440 1656 - Ss ffffff8015131300 ?? 0:11.78 launchd
501 307 303 4004 0 33 0 2453456 7640 - S ffffff8015130a80 ?? 0:46.17 distnoted
501 323 303 40004004 0 33 0 2480640 9156 - S ffffff80145c4dc0 ?? 0:03.29 UserEventAgent
Thus, the last entry in each line is your command. That means you can use the full power of regular expressions to help you.
The $ in a regular expression matches the end of the string; thus, you can use $ to specify that the output must not merely contain Skype, it must end with Skype. This means that if you have a command called Skype Controller, you won't pull it up:
ps -clx | grep 'Skype$' | awk '{print $2}' | head -1
You can also simplify things by using the ps -o format to just pull up the columns you want:
ps -eo pid,comm | grep 'Skype$' | awk '{print $1}' | head -1
And, you can eliminate head by simply using awk's ability to select your line for you. In awk, NR is your record number. Thus you could do this:
ps -eo pid,comm | grep 'Skype$' | awk 'NR == 1 {print $1}'
Heck, now that I think of it, we could eliminate the grep too:
ps -eo pid,comm | awk '/Skype$/ {print $1; exit}'
This is using awk's ability to use regular expressions. If the line matches the regular expression Skype$, it will print the first column, then exit.
The only problem is that if you had a command Foo Skype, this will also pick it up. To eliminate that, you'll have to do a bit more fancy footwork:
ps -eo pid,comm | while read pid command
do
if [[ "$command" = "Skype" ]]
then
echo $pid
break
fi
done
The while read is reading two variables. The trick is that read uses white space to divide the variables it reads in. However, since there are only two variables, the last one will contain the rest of the entire line. Thus if the command is Skype Controller, the entire command will be put into $command even though there's a space in it.
Now, we don't have to use a regular expression. We can compare the command with an equality.
This is longer to type in, but you're actually using fewer commands and less piping. Remember, awk is looping through each line. All you're doing here is making it more explicit. In the end, this is actually much more efficient than what you originally had.
If pgrep is available on Mac, you can use pgrep '^Skype$'. This will list the process id of all processes called Skype.
You used the wrong quotes in your script:
ps -clx | grep "$1" | awk '{print $2}' | head -1
or
pgrep "^$1$"
The problem with your second example is that the $1 is in single quotes, which prevents bash from expanding the variable. There is already a utility that accomplishes what you want without manually parsing ps output.
pgrep "$1"
You can do this in AppleScript:
tell application "System Events"
set skypeProcess to the process "Skype"
set pid to the unix id of skypeProcess
pid
end tell
which means you can use 'osascript' to get the PID from within a shell script:
$ osascript -e "tell application \"System Events\"" -e "set skypeProcess to the process \"Skype\"" -e "set pid to the unix id of skypeProcess" -e "pid" -e "end tell"
3873
You can format the output of ps using -o [field],... and list by process name using -C [command_name]; however, ps will still print the column header, which can be removed by piping it through grep -v PID.
ps -o pid -C "$1" |grep -v PID
where $1 would be the command name (in this case Skype)
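As a small script taking the command name as its argument, that could be (a sketch; the file name getpid.sh is made up, and note that -C is a procps/Linux option and may not behave the same on the BSD-style ps that macOS ships):
#!/bin/bash
# getpid.sh: print the PID(s) of the command named in $1
ps -o pid -C "$1" | grep -v PID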
I'd do something like:
ps aux | grep Skype | awk 'NR==1 {print $2}'
==== UPDATE ====
Use the parameter without quotes and use single quotes for awk
#!/bin/bash
ps aux | grep $1 | awk 'NR==1 {print $2}'
Method 1 - Use awk
I don't see any reason to use the -l flag (long format), and I also don't see any reason to use grep and awk at the same time: awk has grep's capability built in. Here is my plan: use ps to output just two columns, pid and command, and then use awk to pick out what you want:
ps -cx -o pid,command | awk '$2 == "Skype" { print $1 }'
Method 2 - Use bash
This method has the advantage that if you already script in bash, you don't even need awk, which saves one process. The solution is longer than the other method, but very straightforward.
#!/bin/bash
ps -cx -o pid,command | {
while read pid command
do
if [ "_$command" = "_$1" ]
then
# Do something with the pid
echo Found: pid=$pid, command=$command
break
fi
done
}
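Saved as, say, find_pid.sh (a hypothetical name), it would be invoked with the process name as its argument:
bash find_pid.sh Skype
# Found: pid=12345, command=Skype   (example output; the PID is made up)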
pgrep myAwesomeAppName
This works great under Catalina 10.15.2
Use double quotes around $1 so that bash performs the variable substitution; single quotes would disable it. The awk program, however, should stay in single quotes so that the shell doesn't expand the $2 it contains.
ps -clx | grep "$1" | awk '{print $2}' | head -1
