how to get the last login time for all users in one line for different shells - shell

I can make the following line work on ksh
for user in $( awk -F: '{ print $1}' /etc/passwd); do last $user | head -1 ; done | tr -s "\n" |sort
But I'd like to make it work on UNIX sh and UNIX csh. (In Linux sh it runs fine, but Linux is not UNIX...)
I know there are limitations for this since it seems that each UNIX(*) has its own variations on the syntax.
Update: sorry, there are some restrictions here:
I can't write to the disk, so I can't save scripts.
How do I write this in csh?

This awk script seems to be the equivalent of your loop above:
{
    cmd = "last " $1
    cmd | getline result
    close(cmd)
    printf "%s\n", result
}
use it like this:
awk -F: -f script_above.awk /etc/passwd
Pipe the output to sort
As a one-liner:
$ awk -F: '{cmd = "last " $1; cmd | getline result; close(cmd); printf "%s\n", result}' /etc/passwd

This might do the trick for you; it should be POSIX compliant:
last | awk 'FNR==NR{split($0,f,/:/);a[f[1]];next}($1 in a)&&++b[$1]==1' /etc/passwd - | sort
awk reads /etc/passwd first (while FNR==NR) to collect the user names, then keeps only the first line that last reports for each of those users.

You don't really need Awk for this.
while IFS=: read user _; do
last "$user" | head -n 1
done </etc/passwd # | grep .
Instead of reinventing it in csh, how about
sh -c 'while IFS=: read user _; do last "$user" | head -n 1; done </etc/passwd'
You will get empty output for users who have not logged in since wtmp was rotated; maybe add a | grep . to weed those out. (I added it commented out above.)
To reiterate, IFS=: sets the shell's internal field separator to a colon, so that read will split the password file on that.
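As a quick illustration of that splitting, with a made-up passwd line:
printf 'alice:x:1000:1000::/home/alice:/bin/sh\n' | while IFS=: read user _; do echo "$user"; done
This prints alice; everything after the first colon lands in the throwaway _ variable.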

Just use the simple command:
lastlog

Related

shell script in a here-document used as input to ssh gives no result

I am piping the result of grep to awk and using the result as a pattern for another grep inside an EOF here document (not sure what the terminology is there), but the awk gives me blank results. Below is the part of the bash script that gave me issues.
ssh "$USER"#logs << EOF
zgrep $wgr $loc$env/app*$date* | awk -F":" '{print $5 "::" $7}' | awk -F"," '{print $1}' | sort | uniq | while read -r rid ; do
zgrep $rid $loc$env/app*$date*;
done
EOF
I am really drawing a blank here because there is no error and I'm out of ideas.
Samples:
I am grepping log files that look like the one below:
app-server.log.2020010416.gz:2020-01-04 16:00:00,441 INFO [redacted] (redacted) [rid:12345::12345-12345-12345-12345-12345,...
I am interested in the rid, which I can then grep for in the logs again:
zgrep $rid $loc$env/app*$date*
loc, env and date work properly, but they are defined outside of the EOF.
The script as a whole connects over ssh and exits properly, but I am getting no result.
The immediate problem is that the dollar signs are evaluated by the local shell because you don't (and presumably cannot) quote the here document (because then $wgr and $loc etc. would also not be expanded by the shell).
The quick fix is to backslash the dollar signs, but in addition, I see several opportunities to get rid of inelegant or wasteful constructs.
ssh "$USER"#logs << EOF
zgrep "$wgr" "$loc$env/app"*"$date"* |
awk -F":" '{v = \$5 "::" \$7; split(v, f, /,/); print f[1]}' |
sort -u | xargs -I {} zgrep {} "$loc$env"/app*"$date"*
EOF
If you want to add decorations around the final zgrep, probably revert to the while loop you had; but of course, you need to escape the dollar sign in that, too:
ssh "$USER"#logs << EOF
zgrep "$wgr" "$loc$env/app"*"$date"* |
awk -F":" '{v = \$5 "::" \$7; split(v, f, /,/); print f[1]}' |
sort -u |
while read -r rid; do
echo Dancing hampsters "\$rid" more dancing hampsters
zgrep "\$rid" "$loc$env"/app*"$date"*
done
EOF
Again, any unescaped dollar sign is evaluated by your local shell even before the ssh command starts executing.
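If in doubt, a quick way to see exactly what text the remote shell will receive is to substitute cat for ssh and look at the expanded here document; a small illustration with a made-up value:
wgr=ERROR
cat << EOF
expanded locally:  $wgr
left for remote:   \$rid
EOF
The first line comes out as ERROR; the second keeps the literal text $rid, which is what the remote shell will later expand.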
Could you please try the following? Fair warning: I couldn't test it for lack of samples. With this approach we don't need to escape anything inside the here document when doing ssh.
##Configure/define your shell variables (wgr, loc, env, date, rid) here.
printf -v var_wgr %q "$wgr"
printf -v var_loc %q "$loc"
printf -v var_env %q "$env"
printf -v var_date %q "$date"
ssh -T -p your_port user@"$host" "bash -s $var_wgr $var_loc $var_env $var_date" <<'EOF'
# retrieve the %q-quoted values off the remote shell's command line
wgr=$1 loc=$2 env=$3 date=$4
zgrep "$wgr" "$loc$env"/app*"$date"* | awk -F":" '{print $5 "::" $7}' | awk -F"," '{print $1}' | sort | uniq | while read -r rid ; do
zgrep "$rid" "$loc$env"/app*"$date"*
done
EOF
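For what it's worth, printf %q produces a backslash-escaped form that survives being re-parsed as part of a remote command line; a tiny check with a hypothetical value:
val='two words; $(danger)'
printf -v q %q "$val"
echo "$q"   # prints a backslash-escaped string that the remote shell will split back into the original single word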

Adding value to global variable in a subshell is not working

I am trying to get the total disk usage of my machine. Below is the script code:
#!/bin/sh
totalUsage=0
diskUse(){
df -H | grep -vE '^Filesystem|cdrom' | awk '{ print $5 " " $1 }' | while read output;
do
diskUsage=$(echo $output | awk '{ print $1}' | cut -d'%' -f1 )
totalUsage=$((totalUsage+diskUsage))
done
}
diskUse
echo $totalUsage
totalUsage is a global variable, and I try to add each individual disk usage to it in the line:
totalUsage=$((totalUsage+diskUsage))
An echo of totalUsage between do and done shows the correct value,
but when I echo it after the call to diskUse, it still prints 0.
Can you please help me understand what is wrong here?
Because the while loop is at the end of a pipeline, it runs in a sub-shell, and changing totalUsage there doesn't change its value in the parent shell.
Since you tagged bash, you can use a here string to restructure your loop so it no longer runs in a pipeline:
#!/bin/bash
totalUsage=0
diskUse(){
while read output;
do
diskUsage=$(echo $output | awk '{ print $1}' | cut -d'%' -f1 )
totalUsage=$((totalUsage+diskUsage))
done <<<"$(df -H | grep -vE '^Filesystem|cdrom' | awk '{ print $5 " " $1 }')"
}
diskUse
echo $totalUsage
I suggest inserting
shopt -s lastpipe
as a new line right after
#!/bin/bash
From man bash:
lastpipe: If set, and job control is not active, the shell runs the last command of a pipeline not executed in the background in the current shell environment.
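With lastpipe set in a (non-interactive) script, the while loop at the end of the pipeline runs in the current shell, so the additions to totalUsage survive. A minimal, untested sketch of the whole script rewritten that way, using the same pipeline as the question:
#!/bin/bash
shopt -s lastpipe
totalUsage=0
df -H | grep -vE '^Filesystem|cdrom' | awk '{ print $5 " " $1 }' |
while read -r output; do
    diskUsage=$(echo "$output" | awk '{ print $1 }' | cut -d'%' -f1)
    totalUsage=$((totalUsage + diskUsage))
done
echo "$totalUsage"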

Split String in Unix Shell Script

I have a string like this:
//ABC/REC/TLC/SC-prod/1f9/20/00000000957481f9-08d035805a5c94bf
and I want to get the last part of it:
00000000957481f9-08d035805a5c94bf
Let's say you have
text="//ABC/REC/TLC/SC-prod/1f9/20/00000000957481f9-08d035805a5c94bf"
If you know the position, i.e. in this case the 9th, you can go with
echo "$text" | cut -d'/' -f9
However, if the position is dynamic and you want to split at the last "/", it's safer to go with:
echo "${text##*/}"
This removes everything from the beginning to the last occurrence of "/" and should be the shortest form to do it.
For more information on this see: Bash Reference manual
For more information on cut see: cut man page
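For the opposite direction, everything before the last "/", the matching expansion is ${text%/*}; the dirname utility does the same job (assuming the same $text as above):
echo "${text%/*}"    # //ABC/REC/TLC/SC-prod/1f9/20
dirname "$text"      # same result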
The tool basename does exactly that:
$ basename //ABC/REC/TLC/SC-prod/1f9/20/00000000957481f9-08d035805a5c94bf
00000000957481f9-08d035805a5c94bf
I would use bash string manipulation:
$ string="//ABC/REC/TLC/SC-prod/1f9/20/00000000957481f9-08d035805a5c94bf"
$ echo "${string##*/}"
00000000957481f9-08d035805a5c94bf
But following are some other options:
$ awk -F'/' '$0=$NF' <<< "$string"
00000000957481f9-08d035805a5c94bf
$ sed 's#.*/##g' <<< "$string"
00000000957481f9-08d035805a5c94bf
Note: <<< is here-string notation. Here strings do not create a subshell; however, they are NOT portable to POSIX sh (as implemented by shells such as ash or dash).
In case you want more than just the last part of the path,
you could do something like this:
echo $PWD | rev | cut -d'/' -f1-2 | rev
You can use this BASH regex:
s='//ABC/REC/TLC/SC-prod/1f9/20/00000000957481f9-08d035805a5c94bf'
[[ "$s" =~ [^/]+$ ]] && echo "${BASH_REMATCH[0]}"
00000000957481f9-08d035805a5c94bf
This can be done easily in awk:
string="//ABC/REC/TLC/SC-prod/1f9/20/00000000957481f9-08d035805a5c94bf"
echo "${string}" | awk -v FS="/" '{ print $NF }'
Use "/" as field separator and print the last field.
You can try this...
echo //ABC/REC/TLC/SC-prod/1f9/20/00000000957481f9-08d035805a5c94bf |awk -F "/" '{print $NF}'

Assigning deciles using bash

I'm learning bash, and here's a short script to assign deciles to the second column of file $1.
The complicating bit is the use of awk within the script, leading to ambiguous redirects when I run the script.
I would have gotten this done in SAS by now, but like the idea of two lines of code doing the job.
How can I communicate the total number of rows (${N}) to awk within the script? Thanks.
N=$(wc -l < $1)
cat $1 | sort -t' ' -k2gr,2 | awk '{$3=int((((NR-1)*10.0)/"${N}")+1);print $0}'
You can set an awk variable from the command line using -v.
N=$(wc -l < "$1" | tr -d ' ')
sort -t' ' -k2gr,2 "$1" | awk -v n=$N '{$3=int((((NR-1)*10.0)/n)+1);print $0}'
I added tr -d to get rid of the leading spaces that wc -l puts in its result.
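If you would rather not call wc at all, awk can make the count itself by reading the file twice: once to count the rows, then once more on the sorted stream to assign deciles. A rough, untested sketch of the same idea:
sort -t' ' -k2gr,2 "$1" | awk 'NR==FNR { n++; next } { $3 = int(((FNR-1)*10)/n) + 1; print }' "$1" -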

results of wc as variables

I would like to use the numbers reported by wc as variables. For example:
echo 'foo bar' > file.txt
echo 'blah blah blah' >> file.txt
wc file.txt
2 5 23 file.txt
I would like to have something like $lines, $words and $characters associated with the values 2, 5, and 23. How can I do that in bash?
In pure bash: (no awk)
a=($(wc file.txt))
lines=${a[0]}
words=${a[1]}
chars=${a[2]}
This works by using bash's arrays. a=(1 2 3) creates an array with elements 1, 2 and 3. We can then access individual elements with the ${a[index]} syntax.
Alternative (based on gonvaled's solution):
read lines words chars filename <<< $(wc x)
Or in sh:
a=$(wc file.txt)
lines=$(echo $a|cut -d' ' -f1)
words=$(echo $a|cut -d' ' -f2)
chars=$(echo $a|cut -d' ' -f3)
There are other solutions, but a simple one which I usually use is to put the output of wc in a temporary file and then read from there:
wc file.txt > xxx
read lines words characters filename < xxx
echo "lines=$lines words=$words characters=$characters filename=$filename"
lines=2 words=5 characters=23 filename=file.txt
The advantage of this method is that you do not need to create several awk processes, one for each variable. The disadvantage is that you need a temporary file, which you should delete afterwards.
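If you do go the temporary-file route, mktemp plus a trap keeps the cleanup automatic; a small sketch along the same lines:
tmp=$(mktemp) || exit 1
trap 'rm -f "$tmp"' EXIT
wc file.txt > "$tmp"
read lines words characters filename < "$tmp"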
Be careful: this does not work:
wc file.txt | read lines words characters filename
The problem is that piping to read creates another process, and the variables are updated there, so they are not accessible in the calling shell.
Edit: adding solution by arnaud576875:
read lines words chars filename <<< $(wc x)
This works without writing to a file (and does not have the pipe problem). It is bash specific.
From the bash manual:
Here Strings
A variant of here documents, the format is:
<<<word
The word is expanded and supplied to the command on its standard input.
The key is the "word is expanded" bit.
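Process substitution is another bash-only way to keep read in the current shell, equivalent in effect to the here string:
read lines words chars filename < <(wc file.txt)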
lines=`wc file.txt | awk '{print $1}'`
words=`wc file.txt | awk '{print $2}'`
...
You can also store the wc result somewhere first and then parse it, if you're picky about performance. :)
Just to add another variant --
set -- `wc file.txt`
lines=$1
words=$2
chars=$3
This obviously clobbers $* and related variables. Unlike some of the other solutions here, it is portable to other Bourne shells.
I wanted to store the number of csv files in a variable. The following worked for me:
CSV_COUNT=$(ls ./pathToSubdirectory | grep ".csv" | wc -l | xargs)
xargs trims the whitespace from the wc output.
I ran this bash script from a different folder than the csv files, hence the pathToSubdirectory.
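An alternative that avoids parsing ls output altogether is to let bash count the glob matches itself; a sketch assuming the same ./pathToSubdirectory (nullglob makes the count 0 rather than 1 when nothing matches):
shopt -s nullglob
csv_files=( ./pathToSubdirectory/*.csv )
CSV_COUNT=${#csv_files[@]}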
You can capture output in a variable with command substitution (which runs in a subshell):
$ x=$(wc some-file)
$ echo $x
1 6 60 some-file
Now, in order to get the separate variables, the simplest option is to use awk:
$ x=$(wc some-file | awk '{print $1}')
$ echo $x
1
declare -a result
result=( $(wc < file.txt) )
lines=${result[0]}
words=${result[1]}
characters=${result[2]}
echo "Lines: $lines, Words: $words, Characters: $characters"
