Can anyone suggest a single Unix command that gets only the last 15 minutes of logs from any log file? (Connected via PuTTY.)

I am looking for a single Unix command, not multiple lines of code or a script.
I have already tried sed and awk, but neither works for me: the sed command sometimes returns nothing, and awk gives me results that are not limited to the last 15 minutes.

Mostly what we do is read the last few hundred lines of the log file, like this:
tail -n500 /var/log/messages
The last 500 lines are usually enough to cover 15 minutes of logs.
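If the log lines start with sortable timestamps, a time-based filter is more reliable than guessing a line count. A minimal sketch, assuming the first field is an ISO-8601 timestamp (e.g. 2024-05-01T12:34:56) and GNU date is available; /var/log/syslog is a placeholder path:

```shell
# ISO-8601 timestamps sort lexicographically, so a plain string
# comparison in awk is enough to keep only recent lines.
since=$(date -d '15 minutes ago' '+%Y-%m-%dT%H:%M:%S')
awk -v since="$since" '$1 >= since' /var/log/syslog
```

Traditional syslog timestamps ("Jan  5 12:34:56") do not sort as strings, so this trick only works with sortable timestamp formats.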


I want to watch a log file

I am running stuff on a server, and I find watch -n 30 qstat job.pbs a bit uninformative. Why can't I get
$ watch -n 30 less location/of/log|tail
to work? I get a blinking cursor. I do get output from
less location/of/log|tail
Try tail -f log.txt. The -f option is designed to do exactly that: every time the file is updated, it displays the new content.
I find less more convenient if you want more than just following a log file (e.g., scrolling and searching).
For example, open the log file with less:
less /path/to/log
then press
Shift+F - to follow the log (behaves just like tail -f)
Ctrl+C - to stop following the log
?pattern - to search backward in log file
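As for why the original watch invocation shows only a blinking cursor: the unquoted pipe is parsed by the outer shell before watch ever runs, so tail receives watch's screen output instead of the log. Quoting the command makes watch run the whole thing itself on each refresh (a sketch; location/of/log is the asker's placeholder path):

```shell
# Quote the command so watch, not the outer shell, runs tail
# every 30 seconds.
watch -n 30 'tail location/of/log'
```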

Get log entries written while executing command

I have a service that writes to a file in /var/log. For testing purposes, I am looking for a way to extract just the log lines that are written while executing a command against the service. I know I could do it with a C program using fseek/ftell, but that would require extra tooling in the VM. I would prefer a pure bash solution (bash 4.4, Ubuntu 18.04). I thought something using tail -f might work, but I can't figure out exactly how to make that work.
You can use the diff command. It takes two files as input and prints the differing lines. Copy the log file before executing the command against the service, then compare the copy to the updated log file afterwards.
$ cat > logfile
line 1
line 2 asdf
$ cp logfile logfile-old
$ cat >> logfile
Third one.
Oups. Error occured.
$ diff logfile logfile-old
3,4d2
< Third one.
< Oups. Error occured.
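A copy-free variant of the same idea, sketched in plain shell: record the line count before running the command, then print only the lines appended afterwards. Here /tmp/app.log and do_something are placeholders for the real log file and command:

```shell
log=/tmp/app.log
do_something() { :; }               # placeholder for the command under test

before=$(wc -l < "$log")            # snapshot the current length
do_something                        # run the command against the service
tail -n "+$((before + 1))" "$log"   # print only lines appended since the snapshot
```

This avoids duplicating a potentially large log file, though it assumes the log is only appended to (no rotation) while the command runs.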

What does the `grep -m 1` command mean in Unix?

I googled this command but found nothing about it.
grep -m 1 "\[{" xxx.txt > xxx.txt
When I ran this command, no error occurred, but it also produced no output.
Can anyone explain what this command does?
First, -m 1 tells grep to stop reading after the first matching line. The reason you see no output is that the command reads from and writes to the same file, and not in a left-to-right fashion: the redirection > xxx.txt runs first, truncating the file to empty before grep starts reading it. There is therefore nothing left to match. You can fix this by storing the result in a temporary file and then renaming that file to the original name.
PS: Some commands, like sed, have an in-place or output-file option that works around this issue by not relying on shell redirection.
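The temp-file fix, as a sketch (xxx.txt as in the question; the .tmp suffix is an arbitrary choice):

```shell
# Write the result to a temporary file first, then rename it over the
# original. -m 1 makes grep stop after the first line containing "[{".
grep -m 1 '\[{' xxx.txt > xxx.txt.tmp && mv xxx.txt.tmp xxx.txt
```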

Get sed to ignore special characters in file?

I'm trying to extract users' crontabs so I can view them together. The initial problem I ran into was that the crontab file (from crontab -l) contains a lot of commented lines placed there by the system to explain the file's function. I borrowed a sed snippet to deal with this: it deletes lines that start with a comment and strips trailing comments from the remaining lines (as best as I understand it).
Here's an example crontab I'd like to capture:
0 0 5 * * /home/thornegarvin/myscript.sh
The sed code I'm using is (with croneditor.temp containing the crontab):
sed '/^[[:blank:]]*#/d;s/#.*//' croneditor.temp
I think the command is matching the *s in the file as comments and then deleting the lines, but I'm not sure whether that's why the command is failing.
I need a version of this command, or another one entirely, that works as I intended: grabbing crontabs from the output of crontab -l.
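For what it's worth, that sed expression only treats # as a comment marker; the *s in cron schedule fields pass through untouched, so the command itself should keep the sample line. A quick check (croneditor.temp as in the question):

```shell
# Delete lines whose first non-blank character is '#', then strip any
# trailing '#' comment from the remaining lines.
printf '# m h dom mon dow command\n0 0 5 * * /home/thornegarvin/myscript.sh\n' > croneditor.temp
sed '/^[[:blank:]]*#/d;s/#.*//' croneditor.temp
# prints: 0 0 5 * * /home/thornegarvin/myscript.sh
```

If the command produces nothing in your setup, the problem is more likely the input (e.g. an empty croneditor.temp) than the sed expression.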

format command line/shell output for readability

Example: man -k ls
Output: A LOT of text, so much that I can only read the last 20 lines.
I don't want information on how to scroll up through the output.
I would like to know, if possible, how to format/control the output so that only the first 20 lines are shown; then, when I press Enter or scroll down, the next 20 lines are shown.
This way I can read all the output at my own pace; the output waits for me to tell it to continue. Is there a simple command for this?
Note: this isn't a text file I'm outputting (I think); it's just standard output, and way too much of it, so much that it is unreadable except for the last 20 lines.
Can you just pipe the output to less or more? Or redirect the output to a file and go through it after it is generated?
For example, to redirect stdout to a file:
prompt> command -args > output.txt
More information on redirecting stdout and stderr can be found here:
http://tldp.org/HOWTO/Bash-Prog-Intro-HOWTO-3.html
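A short sketch of the common redirection forms, using a demo function in place of the asker's "command -args" (the file names are arbitrary):

```shell
# Demo command that writes to both streams:
demo() { echo "normal output"; echo "an error" >&2; }

demo > output.txt                 # stdout only; stderr still hits the terminal
demo > output.txt 2> errors.txt   # split the two streams into separate files
demo > all.txt 2>&1               # combine both streams into one file
```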
man -k ls | less
Found the answer literally right after I posted this question...
Apparently "| less" uses a pipeline to give any command scrolling output. I got this from another site via a Google search.
