I want to watch a log file - shell

I am running stuff on a server, and I find watch -n 30 qstat job.pbs a bit uninformative. Why can't I get
$ watch -n 30 less location/of/log|tail
to work? I get a blinking cursor. I do get output from
less location/of/log|tail

Try tail -f log.txt. The -f option is designed to do exactly that: every time the file is updated, it displays the newly appended content.
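If you do want to keep the watch idea from the question, note that the unquoted pipe is interpreted by your interactive shell, not by watch; quoting the whole command line fixes that (a minimal sketch, using the path from the question):
watch -n 30 'tail -n 20 location/of/log'   # watch re-runs the quoted command every 30 seconds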

I find less more convenient if you want more than just following a log file (e.g. scrolling, searching, etc.).
For example, open the log file with less:
less /path/to/log
then press
Shift+F - to follow the log (behaves just like tail -f)
Ctrl+C - to stop following the log
?pattern - to search backwards in the log file
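If you already know you want to start out in follow mode, less can also be launched that way directly (a small sketch):
less +F /path/to/log   # opens the file and immediately behaves like tail -f; Ctrl+C drops back to normal less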

Redirecting shell stdout from application run by mono

I have run into an interesting issue that I can't seem to work around, so maybe you gurus can point me in the right direction.
I want my app, which is run by mono, to direct its output to a logfile, but I want it to overwrite/empty the logfile before every write. Right now it appends, and I can't seem to change that no matter what I try.
Here are some of the options I've tried and their result:
mono myapp.exe > myapp.log &    (appends)
mono myapp.exe >| myapp.log &   (appends)
mono myapp.exe | tee myapp.log  (appends)
mono myapp.exe >! myapp.log &   (nothing)
The above are called from within my bash-based startup script, which otherwise runs nicely (just one iteration, I don't need multiple log files). Redirecting to /dev/null has been the best option so far, as the log file would otherwise grow indefinitely.
The app outputs some info every 2 seconds, and it'd be nice to be able to view it with tail -f myapp.log whenever I want to.
Thank you for your time.
I don't get the problem, sorry. Why not use rm, and not run the program in the background?
Or do you want just one specific line of the output? In that case you need additional tools like head or sed, and you'll have to give more details.
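Following the rm suggestion, one way to guarantee an empty log on every run is to truncate it in the startup script just before launching the app (a minimal sketch, using the file names from the question):
: > myapp.log                       # empty the log file (creates it if it doesn't exist)
mono myapp.exe > myapp.log 2>&1 &   # '>' truncates on open; stderr is captured in the same file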

shell - keep writing to the same file[name] even if externally changed

There's a process, which should not be interrupted, and I need to capture the stdout from it.
prg > debug.log
I can't quite modify the way the process outputs its data, though I'm free to wrap its launch as I see fit. Backgrounding, piping the output to other commands, etc., is all fair game. The process, once started, must run until the end of time (and can't be blocked by, say, waiting for a fifo to be emptied). The writing isn't very fast, and the file can be cut at an arbitrary place if it exceeds a predefined size.
Now, the problem is the log would grow to fill all available space, and so it must be rotated, oldest instances deleted/overwritten. And now there's the problem...
If I do
mv debug.log debug.log.1
the file debug.log vanishes forever while debug.log.1 keeps growing.
If I do
cp debug.log debug.log.1
rm debug.log
the file debug.log.1 doesn't grow, but debug.log vanishes forever, and all consecutive output from the program is lost.
Is there some way to make the stdout redirect behave like typical log writing - if the file vanished, got renamed or such, create it again?
(this is all working under busybox, so lightweight solutions are preferred.)
If the application in question holds the log file open all the time and cannot be told to close and re-open the log file (as many applications can) then the only option I can think of is to truncate the file in place.
Something like this:
cp debug.log debug.log.1
: > debug.log
You may use the split command, or any other command which opens a new output file by itself:
prg | split --numeric-suffixes --lines=100 - debug.log.
The reason is that the redirected output is sent to a file handle, not to the file name.
So the process has to close the file handle and open a new one.
You may use rotatelogs to do it in Apache HTTPD style, if you like:
http://httpd.apache.org/docs/2.2/programs/rotatelogs.html
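For example, something along these lines (a sketch, assuming rotatelogs is on the PATH; the 5M size limit is arbitrary):
prg 2>&1 | rotatelogs ./debug.log 5M   # rotatelogs opens a new debug.log.<timestamp> whenever 5 MB is reached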
In bash style, you can use a script:
fn='debug.log'
i=0
prg | while IFS='' read -r in; do
    if ...; then                # e.g. time elapsed, number of lines read, or file size reached
        i=$((i+1))
        mv "$fn" "$fn.$i"       # rename the current file
        : > "$fn"               # recreate it empty
    fi
    echo "$in" >> "$fn"         # keep filling the (possibly new) file
done
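As a concrete (hypothetical) example, here is the same loop with the condition filled in as "rotate every 1000 lines", kept POSIX so it also runs under busybox ash:
fn='debug.log'
max=1000    # hypothetical threshold: rotate after this many lines
i=0; n=0
prg | while IFS='' read -r line; do
    if [ "$n" -ge "$max" ]; then
        i=$((i+1))
        mv "$fn" "$fn.$i"   # rotate
        : > "$fn"           # start a fresh, empty file
        n=0
    fi
    printf '%s\n' "$line" >> "$fn"
    n=$((n+1))
done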

How do I tell tail -f it has finished - cleanly?

I am copying a LOGFILE to a remote server as it is being created.
tail -f LOGFILE | gzip -c >> /faraway/log.gz
However, when the original LOGFILE is closed, and moved to a storage directory, my tail -f seems to get some odd data.
How can I ensure that tail -f stops cleanly and that the compressed file /faraway/log.gz is a true copy of LOGFILE?
EDIT 1
I did a bit more digging.
/faraway/log.gz terminated badly - halfway through a FIX message. This must be because I Ctrl-C'ed the whole piped command above.
If I ignore this last line, then the original LOGFILE and log.gz match EXACTLY! That's for a 40G file transmitted across the Atlantic.
I am pretty impressed by that, as it does exactly what I want. Does any reader think I was just "lucky" in this case - is this likely NOT to work in future?
Now I just need to get a clean close of gzip. Perhaps sending a kill -9 to the tail PID, as suggested below, may allow gzip to finish its compression properly.
To get a full copy, use
tail -n +1 -f yourLogFile
If you don't use the -n +1 option, you only get the tail end of the file.
Yet this does not solve the deleted/moved file problem. In fact, the deleting/moving of the file is an IPC (inter-process communication) problem, or rather an inter-process co-operation problem: if you don't have a correct behavior model of the other process(es), you can't resolve it.
For example, if the other program copies the log file somewhere else, deletes the current one, and then logs its output to that new log file, your tail obviously cannot read that output.
A related feature of unix (and unix-like systems) worth mentioning:
When a file is opened for reading by process A but is then deleted by process B, the physical contents are not immediately removed, since the reference count is not zero (someone, i.e. process A, is still using it). Process A can still access the file until it closes it. Moving the file is another matter: if process B moves the file within the same physical file system (note: you may have several physical file systems attached to your system), process A can still access the file, even while it is growing. Such a move only changes the name (path name + file name), nothing more; the identity of the file (its "i-node" in unix) does not change. But if the file is moved to another physical file system, local or remote, it is as if the file were copied and then removed, so the removal rule above applies.
The missing lines problem you mentioned is interesting, and may need more analysis on the behavior of the programs/processes which generate and move/delete the log file.
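A quick illustration of that reference-count behavior (a sketch; any POSIX shell will do):
exec 3> demo.log            # the process opens the file and keeps descriptor 3 on it
rm demo.log                 # the name is gone, but the inode still has a live reference
echo 'still recorded' >&3   # writes keep succeeding; the data is only freed once fd 3 is closed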
--update--
Happy to see you got some progress. Like I said, a process like tail can still access data after
the file is deleted, in a unix-like system.
You can use
( echo $BASHPID > /tmp/PID_tail; exec tail -n +1 -f yourLogFile ) | gzip -c - > yourZipFile.gz
to gzip your log file, and kill the tail program by
kill -TERM `cat /tmp/PID_tail`
gzip should finish by itself without error. If you are worried that gzip will receive a broken-pipe signal, you can use this alternative to guard against it:
( ( echo $BASHPID > /tmp/PID_tail; exec tail -n +1 -f yourLogFile ) ; true ) | gzip -c - > yourZipFile.gz
The broken pipe is guarded against by the true, which prints nothing and simply exits.
From the tail manpage (emphasis mine):
With --follow (-f), tail defaults to following the file
descriptor, which means that even if a tail'ed file is renamed,
tail will continue to track its end. This default behavior is
not desirable when you really want to track the actual name of
the file, not the file descriptor (e.g., log rotation). Use
--follow=name in that case. That causes tail to track the named
file in a way that accommodates renaming, removal and creation.
Therefore the solution to the problem you proposed is to use:
tail --follow=name LOGFILE | gzip -c >> /faraway/log.gz
This way, when the file is deleted, tail stops reading it.
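One more hedged option: if you can find out the PID of the process that writes LOGFILE, GNU tail can be told to exit once that process dies, which gives gzip a clean end of input:
tail --pid="$WRITER_PID" -n +1 -f LOGFILE | gzip -c >> /faraway/log.gz   # WRITER_PID is hypothetical: the PID of the logging process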

Continually output file content to console?

Quick question, hopefully... I'm building an application with a fairly extensive log file. I'd like the ability, at any time, to monitor what a specific instance of my application is doing. I could open and close the log file a bunch of times, but it's kind of a pain. Optimally, as lines are written to the log file, they would be written to the console as well. So I'm hoping something along the lines of cat exists that will actually block and wait for more content to be available in the input file. Anyone have any ideas?
tail -f logfile
this will keep it open and 'follow' the new output.
tail -f yourlogfile
tail -f logfile
An alternate answer for variety: If you're already looking at the log file with less, press capital F to get it to do the same thing tail -f does: wait for new content to be appended and show it.
Look at the tee utility:
http://www.devdaily.com/blog/post/linux-unix/use-unix-linux-tee-command-send-output-two-or-more-directions-a
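For example (a sketch; myapp stands in for your program):
./myapp | tee -a /path/to/logfile   # each line goes to the console and is appended to the log file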

How do I jump to the first line of shell output? (shell equivalent of emacs comint-show-output)

I recently discovered 'comint-show-output' in emacs shell mode, which jumps to the first line of shell output, which I find incredibly handy when looking at shell output that exceeds a screen length. The advantages of this command over scrolling with 'page up' are A) you don't have to scan with your eyes for the first line of the output B) you only have to hit the key combo once (instead of 'page up' a number of times which probably is not known beforehand).
I thought about ending all my commands with '| more' but actually this is not what I want since most of the time, I want to retain all output in the terminal buffer, and I usually want to see the end of the shell output first.
I use OS X. Is there an equivalent terminal app (on OS X) and shell (on the remote Linux box) combination (so I can do something similar without using emacs all the time - I know, crazy talk)? I normally use bash, but would be fine with switching shells just for this feature.
The way I do this sort of thing is by sending my output to a file and then watching the file as it is written. You still get the results of the command dumped to terminal history in real time and can still inspect the output's actual contents further after the fact (or in another terminal, etc...)
command > output &   # run the command in the background, writing to a file
tail -f output       # follow the end of the file as it is written
head output          # or jump straight to the first lines of the output
You could always do something in bash like this:
alias foo='!! | more'
which would make foo run the previous command with more. I'm not sure of any way to do exactly what you are suggesting.
If you're expecting a lot of output and don't want to run your command twice, you can use tee(1) to fork the output:
my-command | tee /tmp/my-command.log | less
This will pipe the output to a paginator (less), while simultaneously logging the output to a file (in this case, a file named /tmp/my-command.log). If you need to review the output after you've quit from less, you can just cat the log file instead of re-running the command.
