I need to create a script to log the output of the top command - bash

I need to generate a history of the server's memory and CPU usage, but I can't install any software on the server. So I am trying this script:
#!/bin/bash
while true; do
    date >> system.log
    top -n1 | grep 'Cpu\|Mem\|java\|eservices' >> system.log
    echo '' >> system.log
    sleep 2
done
but when I execute tail -500f system.log, the logging stops.

You should probably use the -b batch mode parameter. From man top:
Starts top in 'Batch' mode, which could be useful for sending output from top to other programs or to a file. In this mode, top will not accept input and runs until the iterations limit you've set with the '-n' command-line option or until killed.
You might want to use the portable format tail -n 500 -f.
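Putting both suggestions together, a corrected version of the loop might look like this (identical to yours apart from the -b flag):
#!/bin/bash
while true; do
    date >> system.log
    # -b keeps top from emitting terminal control sequences into the file
    top -b -n1 | grep 'Cpu\|Mem\|java\|eservices' >> system.log
    echo '' >> system.log
    sleep 2
done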
In any case, saving top output to file and then running tail -f on it emulates the way top works. What are you trying to achieve that top does not do already?

To monitor total free memory on a server, you can run:
grep -F MemFree: /proc/meminfo
To monitor a single process's memory use:
ps -o rss $pid
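For instance, a minimal polling loop built from those two commands might look like this (the PID, log file name, and interval are placeholders):
#!/bin/bash
pid=10036   # placeholder: PID of the process to watch
while true; do
    # MemFree line from /proc/meminfo plus the process's RSS in kB, timestamped
    echo "$(date) $(grep -F MemFree: /proc/meminfo) rss_kb=$(ps -o rss= -p "$pid")" >> mem.log
    sleep 60
done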

How to add a time stamp on each line of grep output?

I have a long-running process and I want to monitor its RAM usage. I can do this by watching top. However, I would like to be able to log out and have a record written to a shared disk space instead, say every minute.
My solution which works is:
nohup top -b -d 60 -p 10036 | grep 10036 >> ramlog.txt &
But I would also like to know when each line was output. How can I modify the one-liner to add this information to each line?
I know about screen and tmux but I would like to get this simple one-liner working.
You could add a loop that reads each line from grep and prepends a date. Make sure to use grep --line-buffered so each line comes through without delay, and read -r so backslashes in the log are not mangled:
nohup top -b -d 60 -p 10036 |
grep --line-buffered 10036 |
while IFS= read -r line; do echo "$(date): $line"; done >> ramlog.txt &
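If GNU awk is available, an alternative sketch is to let a single awk do the matching, timestamping, and flushing; note that strftime is a gawk extension, so this is not portable to every awk:
nohup top -b -d 60 -p 10036 |
awk '/10036/ { print strftime("%F %T"), $0; fflush() }' >> ramlog.txt &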

So I'm trying to make a background process that 'espeak's specific log events

I'm relatively new to Linux - please forgive me if the solution is simple/obvious.
I'm trying to set up a background-running script that monitors a log file for certain keyword patterns with awk and tail, and then uses espeak to provide a simplified spoken notification when these keywords appear in the log file (which is written by sysklogd).
The concept is derived from this guide
This is a horrible example of what I'm trying to do:
#!/bin/bash
tail -f -n1 /var/log/example_main | awk '/example sshd/&&/session opened for user/{system("espeak \"Opening SSH session\"")}'
tail -f -n1 /var/log/example_main | awk '/example sshd/&&/session closed/{system("espeak \"Session closed. Goodbye.\"")}'
tail -f -n1 /var/log/example_main | awk '/example sshd/&&/authentication failure/{system("espeak \"Warning: Authentication Failure\"")}'
tail -f -n1 /var/log/example_main | awk '/example sshd/&&/authentication failure/{system("espeak \"Authentication Failure. I have denied access.\"")}'
The first tail command by itself works perfectly; it monitors the defined log file for 'example sshd' and 'session opened for user', then uses espeak to say 'Opening SSH session'. As you would expect given the above excerpt, the bash script will not run multiple tails simultaneously (or at least it stops after this first tail command).
I guess I have a few questions:
How should I set out this script?
What is the best way to constantly run this script in the background - e.g. init?
Are there any tutorials/documentation somewhere that could help me out?
Is there already something like this available that I could use?
Thanks, any help would be greatly appreciated - sorry for the long post.
Personally, I would attempt to set each of these up as an individual cron job. This would allow you to run it at a specific time and at specified intervals.
For example, you could type crontab -e
Then inside, have each of these tail commands listed as such:
5 * * * * tail -f -n1 /var/log/example_main | awk '/example sshd/&&/session opened for user/{system("espeak \"Opening SSH session\"")}'
That would run that one command at 5 minutes after the hour, every hour.
This was a decent guide I found: HowTo: Add Jobs To cron
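As a side note on the script layout itself: tail -f never exits, so the original script blocks on its first pipeline and the later ones never start. One way around this is to fold all the patterns into a single awk reading one tail, and launch that script once, for example from cron with @reboot. A rough sketch, reusing the log file and phrases from the question (the script name and path are hypothetical):
#!/bin/bash
# log_speaker.sh (hypothetical name): speak selected sshd log events
tail -f -n1 /var/log/example_main | awk '
/example sshd/ && /session opened for user/ { system("espeak \"Opening SSH session\"") }
/example sshd/ && /session closed/          { system("espeak \"Session closed. Goodbye.\"") }
/example sshd/ && /authentication failure/  { system("espeak \"Warning: Authentication Failure\"") }'
The matching crontab entry would then be something like @reboot /path/to/log_speaker.sh.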

Is it possible to output the contents of more than one stream into separate columns in the terminal?

For work, I occasionally need to monitor the output logs of services I create. These logs are short-lived and contain a lot of information that I don't necessarily need. Up until this point I've been watching them using:
grep <tag> * | less
where <tag> is either INFO, DEBUG, WARN, or ERROR. There are about 10x as many WARNs as there are ERRORs, and 10x as many DEBUGs as WARNs, and so forth. This makes it difficult to catch one ERROR in a sea of DEBUG messages. I would like a way to, for instance, make all 'WARN' messages appear on the left-hand side of the terminal and all the 'ERROR' messages appear on the right-hand side.
I have tried using tmux and screen, but neither seems to work on my dev machine.
Try this:
FILE=filename.log
vim -O <(grep 'ERR' "$FILE") <(grep 'WARN' "$FILE")
The <( ... ) process substitutions present each grep's output to vim as a file, and -O opens them side by side in vertical splits.
Just use sed to indent the desired lines. Or, use colors. For example, to make ERRORs red, you could do:
$ r=$( printf '\033[1;31m' )  # start red; the escape sequence may change depending on the display
$ n=$( printf '\033[0m' )     # reset to the default color
$ sed "/ERROR/ { s/^/$r/; s/$/$n/; }" *
If these are live logs, how about running these two commands in separate terminals:
Errors:
tail -f * | grep ERROR
Warnings:
tail -f * | grep WARN
Edit
To automate this you could start it in a tmux session. I tend to do this with a tmux script similar to what I described here.
In your case the script file could contain something like this:
monitor.tmux
send-keys "tail -f * | grep ERROR\n"
split
send-keys "tail -f * | grep WARN\n"
Then run like this:
tmux new -d \; source-file monitor.tmux; tmux attach
You could do this using screen. Simply split the screen vertically and run tail -f LOGFILE | grep KEYWORD in each pane.
As a shortcut, you can use the following rc file:
split -v
screen bash -c "tail -f /var/log/syslog | grep ERR"
focus
screen bash -c "tail -f /var/log/syslog | grep WARN"
then launch your screen instance using:
screen -c monitor_log_screen.rc
You can of course extend this concept much further by making more splits and using commands like tail -f and watch to get live updates of different output.
Do also explore screen's other features, such as multiple windows (with monitoring) and hardstatus, and you can come up with quite a comprehensive "monitoring console".

Bash: How to redirect the output of a set of commands piped together to a file?

perf record | perf inject -b | perf report > tempfile 2>&1
I am running the above set of commands and trying to capture the output in tempfile, but sometimes the output of each command doesn't get fully appended to the tempfile. To be more precise, I am running this command from a script, and I tried wrapping the pipeline in parentheses like
(perf record | perf inject -b | perf report) > tempfile 2>&1
but this also didn't work.
A pipe redirects the output of one program to another. To log the output to a file while also passing it on to another program, use the tee command:
http://en.wikipedia.org/wiki/Tee_(command)
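For example, something along these lines (a sketch only; the per-stage log names are placeholders, and each 2>&1 folds that stage's stderr into what gets captured):
perf record 2>&1 | tee record.log |
perf inject -b 2>&1 | tee inject.log |
perf report > tempfile 2>&1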

How do I get grep to keep the file/pipe open?

I am trying to debug some errors in a live Merb app. There are a lot of lines of error output scrolling by, but I just need to see the first one. I can use grep to select these lines and print them, but it closes as soon as it reaches the end of the file.
What I would like to do is use grep like the Shift-F mode in less, where it keeps the file open and reports new matching lines as they are written to the log.
- or -
Is there a way to do this directly with less that I don't know about?
Try this:
tail -f dev.log | grep '^ERROR:'
The -f option tells tail to wait for more data when it hits EOF.
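One caveat if you add another pipe stage after grep: grep block-buffers its output when it is not writing to a terminal, so ask for line buffering (as in the timestamping answer above); errors.log here is just a placeholder:
tail -f dev.log | grep --line-buffered '^ERROR:' | tee errors.log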
Can't you do this with watch and tail?
watch -n 30 "grep '^ERROR:' dev.log | tail -n 30"
