I use the following script to retrieve information about mounted file-systems on several hundred Solaris (v9,10,11) and Red Hat Enterprise Linux (v5,6,7) servers for analysis.
# retrieves for all mounted file-systems: server, device, allocated, used, available, percent_used, mount_directory, permissions, owner_name, and group_name
server=$(uname -n)
df -h | awk '
NF == 6 { print ($0); }
NF == 1 { device = $1; }
NF == 5 { print (device, " ", $0); }
' | while read device allocated used available percent mount
do
ls -ld "${mount}" | read permissions links owner_name group_name size month day time directory
echo "${server} ${device} ${allocated} ${used} ${available} ${percent} ${mount} ${permissions} ${owner_name} ${group_name}"
done
I perform this operation from Windoze using the PuTTY "plink" utility.
plink -m filesys.script server_name >>filesys.txt
All worked as expected until my default shell was changed from ksh to bash on all servers. Now the second read command, which captures the ls output for permissions, owner_name, and group_name, does nothing, and it produces no error messages either. The result is that only seven tokens appear in the output (server through mount) and there is nothing for permissions, owner_name, or group_name.
I have confirmed that if I upload the script to the Unix server with a shebang (#!/bin/ksh) on the top line, the script works as expected. However, I do not want to push this script to hundreds of servers and maintain it in a distributed fashion. I would like to keep the script on the central Windoze workstation and call it with the -m parameter of plink. Placing a shebang at the top of the file does not make plink -m execute ksh.
The Bash versions in play are 3.2 and 4.1. I have also made certain that the Windoze script file has the carriage returns removed. The awk stage handles the case where a device name is too long and df breaks its output across two lines.
Again, the first read (from df/awk) works fine, but the second (from ls) does not. I confirmed this by placing a 'set' after the second read: those variables were not present.
The read (as a pipeline element) happens in a subshell, so even though it does execute perfectly, once that pipeline exits its results are no longer available to the echo on the following line (which runs in the parent process that spawned the pipeline). This is fully allowed by POSIX: which component of a pipeline, if any, is performed by the shell spawning that pipeline is unspecified by the standard and thus implementation-defined.
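A minimal illustration of the difference (a sketch, independent of your script):
echo hello | read word     # in bash, read runs in a subshell
echo "word is '${word}'"   # bash prints: word is ''   (ksh runs the last pipeline element in the main shell and prints: word is 'hello')
This is exactly why the script behaved differently once the default shell changed from ksh to bash.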
You can address the issue by putting the echo inside of the same pipeline element as the read:
server=$(uname -n)
df -h | awk '
NF == 6 { print ($0); }
NF == 1 { device = $1; }
NF == 5 { print (device, " ", $0); }
' | while read device allocated used available percent mount
do
# NOTE: parsing output from "ls" is unreliable
ls -ld "${mount}" | {
read permissions links owner_name group_name size month day time directory
echo "${server} ${device} ${allocated} ${used} ${available} ${percent} ${mount} ${permissions} ${owner_name} ${group_name}"
}
done
References:
BashFAQ #24 (I set variables in a loop that's in a pipeline. Why do they disappear after the loop terminates? Or, why can't I pipe data to read?)
ParsingLs (Why you shouldn't parse the output of ls(1))
If you have GNU stat or find, either of which lets you supply a format string to control metadata output, I would strongly suggest using one of them in place of ls -l for retrieving metadata. Even perl is somewhat better for the purpose, having only a single universally available implementation with uniform stat behavior between releases.
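For example, a sketch of the loop body using GNU stat in place of the second read (an assumption: stock Solaris 9/10 has no GNU stat, so this requires GNU coreutils, e.g. gstat, on those hosts):
# %A = ls-style permissions, %U = owner name, %G = group name (GNU stat)
read permissions owner_name group_name < <(stat -c '%A %U %G' -- "${mount}")
echo "${server} ${device} ${allocated} ${used} ${available} ${percent} ${mount} ${permissions} ${owner_name} ${group_name}"
The process substitution keeps read in the current shell, so the variables are still set on the following line.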
Related
I just want to share a small script that I made to enhance the docker stats command.
I am not sure about the accuracy of this method.
Can I assume that the total amount of memory consumed by the complete Docker deployment is the sum of the memory consumed by each container?
Please share your modifications and/or corrections. This command is documented here: https://docs.docker.com/engine/reference/commandline/stats/
When running docker stats, the output looks like this:
$ docker stats --all --format "table {{.MemPerc}}\t{{.CPUPerc}}\t{{.MemUsage}}\t{{.Name}}"
MEM % CPU % MEM USAGE / LIMIT NAME
0.50% 1.00% 77.85MiB / 15.57GiB ecstatic_noether
1.50% 3.50% 233.55MiB / 15.57GiB stoic_goodall
0.25% 0.50% 38.92MiB / 15.57GiB drunk_visvesvaraya
My script will add the following line at the end:
2.25% 5.00% 350.32MiB / 15.57GiB TOTAL
docker_stats.sh
#!/bin/bash
# This script is used to complete the output of the docker stats command.
# The docker stats command does not compute the total amount of resources (RAM or CPU)
# Get the total amount of RAM in GiB (MemTotal in /proc/meminfo is reported in KiB)
HOST_MEM_TOTAL=$(grep MemTotal /proc/meminfo | awk '{print $2/1024/1024}')
# Get the output of the docker stats command; it will be displayed at the end.
# Without emptying the special variable IFS, the output of the docker stats command
# would lose its newlines, making the per-line awk processing below fail.
IFS=;
DOCKER_STATS_CMD=$(docker stats --no-stream --format "table {{.MemPerc}}\t{{.CPUPerc}}\t{{.MemUsage}}\t{{.Name}}")
SUM_RAM=$(echo $DOCKER_STATS_CMD | tail -n +2 | sed "s/%//g" | awk '{s+=$1} END {print s}')
SUM_CPU=$(echo $DOCKER_STATS_CMD | tail -n +2 | sed "s/%//g" | awk '{s+=$2} END {print s}')
SUM_RAM_QUANTITY=$(LC_NUMERIC=C printf %.2f $(echo "$SUM_RAM*$HOST_MEM_TOTAL*0.01" | bc))
# Output the result
echo $DOCKER_STATS_CMD
echo -e "${SUM_RAM}%\t\t\t${SUM_CPU}%\t\t${SUM_RAM_QUANTITY}GiB / ${HOST_MEM_TOTAL}GiB\tTOTAL"
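Saved as docker_stats.sh and made executable, it takes no arguments:
chmod +x docker_stats.sh
./docker_stats.sh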
From the documentation that you have linked above,
The docker stats command returns a live data stream for running containers.
To limit data to one or more specific containers, specify a list of container names or ids separated by a space.
You can specify a stopped container but stopped containers do not return any data.
and then furthermore,
Note: On Linux, the Docker CLI reports memory usage by subtracting page cache usage from the total memory usage.
The API does not perform such a calculation but rather provides the total memory usage and the amount from the page cache so that clients can use the data as needed.
According to your question, it looks like you can assume so, but don't forget that it also factors in containers that exist but are not running.
Your docker_stats.sh does the job for me, thanks!
I had to add unset LC_ALL somewhere before LC_NUMERIC is used, though, as the former overrides the latter; otherwise I get this error:
"Zeile 19: printf: 1.7989: Ungültige Zahl." (i.e. "Line 19: printf: 1.7989: invalid number.") This is probably due to my using a German locale.
There is also a discussion to add this feature to the "docker stats" command itself.
Thanks for sharing the script! I've updated it so that it depends on DOCKER_MEM_TOTAL instead of HOST_MEM_TOTAL, as Docker has its own memory limit, which can differ from the host's total memory.
I'm looking for a way to export all failed jobs (resolved and not) of a day into a file (text, CSV, XML, ...).
As it stands, I will not be able to check all resolved/forced-ok jobs that failed throughout the day unless I do it manually by collecting them in a spreadsheet.
Does anybody know if there is such a utility? We're currently using Control-M/Server version 7.0.
You can schedule a job to do so:
Run the script below as a command-line job, passing two arguments: %%PARM1 %%PARM2
You need to update three fields in it:
1. The NDP time of your environment; I have used 0930.
2. The Control-M environment name.
3. Your email ID in the last line (the mailx call) and the file path as appropriate for your system.
*** You can use mutt -a if mailx -a is not working on your system for sending email with an attached file.
----------------------------------
Now the job:
Job Type: Command
File Path: not required
Command: path/report.sh %%PARM1 %%PARM2
Leave the rest as normal, but don't forget to define PARM1 and PARM2 as AutoEdit variables:
PARM1 = %%$PREV
PARM2 = %%$DATE
-------------------------
Script
***********************************
report.sh
------------------------------------------------
#!/bin/bash
env=< Control-M user name > # Use the Control-M environment name
# List the log between the two NDP times and keep only failed jobs; update 0930 to your NDP time
ctmlog list $1 0930 $2 0930 | grep NOTOK > $1_failedjob.txt
# Keep date, time, job name, order ID and status; turn the pipe-delimited log into CSV
cut -d'|' -f2,3,4,5,8 $1_failedjob.txt | sed 's/|/,/g' > $1_failed.csv
# Prepend a header row and append a trailer, writing the result back so it is in the attached file
awk 'BEGIN {print "DATE,TIME,JOBNAME,ORDERID,STATUS";}
{print $0;}
END {print "Report generated";}' $1_failed.csv > $1_report.csv && mv $1_report.csv $1_failed.csv
rm $1_failedjob.txt
echo "Last 24 hour failed job list" | mailx -s "Failed Job list for $1" -a "<absolute path of file>/$1_failed.csv" youremail@domain.com
exit 0
------------------------------------------------
Apart from using this, you can always ask your ops team to send a report by exporting the failed jobs for a particular time and date from the Control-M EM GUI.
I need some help with displaying how many times two strings are found on the same line! Let's say I want to search the file 'test.txt', which contains names and IPs. I want to enter a name as a parameter when running the script; the script will search the file for that name and check whether there is also an IP address on that line. I have tried using the 'grep' command, but I don't know how to display the results in a good way. I want it like this:
Name: John Doe IP: xxx.xxx.xx.x count: 3
The count is how many times this line was found. This is what my grep script looks like right now:
#!/bin/bash
echo "Searching $1 for the Name '$2'"
result=$(grep "$2" $1 | grep -E "(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)")
echo $result
I will run the script like 'sh search test.txt John'.
I'm having trouble displaying the information I get from the grep command, maybe there's a better way to do this?
EDIT:
Okay, I will try to explain a little better. Let's say I want to search a .log file; I want the script to search that file for a string the user enters as a parameter. E.g. if the user enters 'sh search test.log logged in', the script will search the file 'test.log' for the string "logged in". If the script finds this string on the same line as an IP address, the IP address is printed, along with how many times that line was found.
I simply don't know how to do it; I'm new to shell scripting, and was hoping I could use grep along with regular expressions for this. I will keep on trying, and will update this question with an answer if I figure it out.
I don't have said file on my computer, but it looks something like this:
Apr 25 11:33:21 Admin CRON[2792]: pam_unix(cron:session): session opened for user 192.168.1.2 by (uid=0)
Apr 25 12:39:01 Admin CRON[2792]: pam_unix(cron:session): session closed for user 192.168.1.2
Apr 27 07:42:07 John CRON[2792]: pam_unix(cron:session): session opened for user 192.168.2.22 by (uid=0)
Apr 27 14:23:11 John CRON[2792]: pam_unix(cron:session): session closed for user 192.168.2.22
Apr 29 10:20:18 Admin CRON[2792]: pam_unix(cron:session): session opened for user 192.168.1.2 by (uid=0)
Apr 29 12:15:04 Admin CRON[2792]: pam_unix(cron:session): session closed for user 192.168.1.2
Here is a simple Awk script which does what you request, based on the log snippet you posted.
awk -v user="$2" '$4 == user { i[$11]++ }
END { for (a in i) printf ("Name: %s IP: %s count: %i\n", user, a, i[a]) }' "$1"
If the fourth whitespace-separated field in the log file matches the requested user name (which was passed to the shell script as its second parameter), add one to the count for the IP address (from field 11).
At the end, loop through all non-zero IP addresses, and print a summary for each. (The user name is obviously whatever was passed in, but matches your expected output.)
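For example, with the log snippet above saved as test.log, running 'sh search test.log John' would print:
Name: John IP: 192.168.2.22 count: 2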
This is a very basic Awk script; if you think you want to learn more, I urge you to consult a simple introduction, rather than follow up here.
If you want a simpler grep-only solution, something like this provides the information in a different format:
grep "$2" "$1" |
grep -o -E '(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)' |
sort | uniq -c | sort -rn
The trick here is the -o option to the second grep, which extracts just the IP address from the matching line. It is, however, less precise than the Awk script; for example, a user named "sess" would match every input line in the log. You can improve on that slightly by using grep -w in the first grep (though that still won't help against users named "pam"), but Awk really gives you a lot more control.
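With the sample log above and $2 set to John, this pipeline would print:
2 192.168.2.22
i.e. uniq -c puts the count first and the address second, and the final sort orders by descending count.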
My original answer is below this line, partly because it's tangentially useful, partly because it is required in order to understand the pesky comment thread below.
The following
result=$(command)
echo $result
is wrong. You need the second line to be
echo "$result"
but in addition, the detour over echo is superfluous; the simple way to write that is simply
command
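To see why the quoting matters, a small illustration (nothing here is specific to your script):
result=$(printf 'Name: John Doe    IP: 192.168.2.22')
echo $result      # word-split and re-joined: Name: John Doe IP: 192.168.2.22
echo "$result"    # preserved verbatim:       Name: John Doe    IP: 192.168.2.22
Without the quotes, the shell splits $result on whitespace and echo re-joins the pieces with single spaces, destroying the original spacing (and, with some contents, even expanding globs).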
I am new to shell scripting, and want to implement a script on my server which will automatically send e-mail alerts if:
Disk usage exceeds 90%
Disk usage exceeds 95% (In addition to the previous e-mail)
My filesystem is abc:/xyz/abc and my mount is /pqr. How can I set this up via scripts?
You can use the df command to check the file system usage. As a starting point, you can use the command below:
df -h | awk -v val=90 '$NF=="/pqr"{x=int($5)>val?1:0;print x}'
The above command prints 1 if usage is above the threshold, else 0. The threshold is set in val.
Note: This matches the mount point in the last field and reads the use percentage from the 5th column; if your df arranges its columns differently, adjust the field numbers accordingly.
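Building on that, a minimal sketch of the alerting script, assuming mailx is available; the recipient address and the script path in the cron line are placeholders:
#!/bin/bash
# disk_alert.sh: mail a warning above 90% and an additional alert above 95%
MOUNT=/pqr
RECIPIENT=admin@example.com   # placeholder; use your real address
usage=$(df -h | awk -v m="$MOUNT" '$NF==m {print int($5)}')
[ -z "$usage" ] && { echo "mount $MOUNT not found" >&2; exit 1; }
if [ "$usage" -ge 95 ]; then
    echo "Disk usage on $MOUNT is ${usage}%" | mailx -s "CRITICAL: $MOUNT at ${usage}%" "$RECIPIENT"
fi
if [ "$usage" -ge 90 ]; then
    echo "Disk usage on $MOUNT is ${usage}%" | mailx -s "WARNING: $MOUNT at ${usage}%" "$RECIPIENT"
fi
Run it periodically from cron, e.g. every 15 minutes:
*/15 * * * * /path/to/disk_alert.sh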
I managed to get Ubuntu running on a mobile device. I need to automate some processes on it, because user input is totally impossible without a convoluted setup and soldering wires.
I need to run "parted print" and then pipe "yes, fix, fix" to its stdin. Here is the desired output:
~ # parted /dev/block/mmcblk0 print
parted /dev/block/mmcblk0 print
Warning: /dev/block/mmcblk0 contains GPT signatures, indicating that it has a
GPT table. However, it does not have a valid fake msdos partition table, as it
should. Perhaps it was corrupted -- possibly by a program that doesn't
understand GPT partition tables. Or perhaps you deleted the GPT table, and are
now using an msdos partition table. Is this a GPT partition table?
Yes/No? yes
yes
yes
Error: The backup GPT table is not at the end of the disk, as it should be.
This might mean that another operating system believes the disk is smaller.
Fix, by moving the backup to the end (and removing the old backup)?
Fix/Ignore/Cancel? fix
fix
fix
Warning: Not all of the space available to /dev/block/mmcblk0 appears to be
used, you can fix the GPT to use all of the space (an extra 569312 blocks) or
continue with the current setting?
Fix/Ignore? fix
fix
fix
Model: MMC SEM16G (sd/mmc)
Disk /dev/block/mmcblk0: 15.9GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Number Start End Size File system Name Flags
1 131kB 262kB 131kB xloader
2 262kB 524kB 262kB bootloader
3 524kB 16.3MB 15.7MB recovery
4 16.8MB 33.6MB 16.8MB boot
5 33.6MB 83.9MB 50.3MB rom
6 83.9MB 134MB 50.3MB bootdata
7 134MB 522MB 388MB factory
8 522MB 1164MB 642MB system
9 1164MB 1611MB 447MB cache
10 1611MB 2684MB 1074MB media
11 2684MB 15.6GB 12.9GB userdata
Here is what I've drafted:
#! /bin/bash
mkfifo Input
mkfifo Output
#Redirect commandline input from fifo to parted, Redirect output to fifo, background
cat Input &| - parted print >Output &
Line=""
while [ 1 ]
do
while read Line
do
if [ $Line == *Yes\/No\?* ]; then
echo "yes">Input
fi
if [ $Line == *Fix\/Ignore/\Cancel\?* ]; then
echo "fix">Input
fi
if [ $Line == *Fix\/Ignore\?* ]; then
echo "fix">Input
fi
test $Line == *userdata* && break
done<Output
test $Line == *userdata* && break
done
But this does not work. Could someone assist me in redirecting output from a program into a fifo, then analyzing that data and sending responses through another fifo back into the original program? The desired results are in the first code block.
If you always know what the needed inputs will be -- if they never change from run to run -- you can just redirect input from a file or from a here-document, and you don't need to do anything complicated.
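For instance, a minimal sketch under that assumption (and under the further assumption that parted on this device accepts its answers on redirected stdin):
# feed the three answers parted asks for, in order
printf 'yes\nfix\nfix\n' | parted /dev/block/mmcblk0 print
or, equivalently, with a here-document:
parted /dev/block/mmcblk0 print <<'EOF'
yes
fix
fix
EOF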
If the needed inputs will change from run to run, you need to use something other than the shell, because the shell alone will not make what you are trying to do possible; perl might be a good choice. (You don't need to use expect here, because you're not trying to simulate a tty.)