Continually output file content to console? - bash

Quick question, hopefully... I'm building an application with a fairly extensive log file. I'd like the ability at any time to monitor what a specific instance of my application is doing. I could open and close the log file a bunch of times, but it's kind of a pain. Ideally, as lines are written to the log file, they would be written to the console as well. So I'm hoping something along the lines of "cat" exists that will actually block and wait for more content to be available in the input file. Anyone have any ideas?

tail -f logfile
This will keep the file open and 'follow' new output as it is appended.

An alternate answer for variety: if you're already looking at the log file with less, press capital F to get it to do the same thing tail -f does: wait for new content to be appended and show it. (Press Ctrl-C to stop following and return to normal paging.)
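You can also start less in follow mode directly:

less +F logfile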

Look at the tee utility:
http://www.devdaily.com/blog/post/linux-unix/use-unix-linux-tee-command-send-output-two-or-more-directions-a
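For example, a minimal sketch (the names myapp and myapp.log are placeholders, not from the original post): have the application write to standard output and let tee display it on the console while also appending it to the log:

myapp | tee -a myapp.log    # -a appends instead of truncating the log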

Related

When should I use file descriptors in bash scripting?

I know what they are, but I don't know when I should use them. Are they useful? I think yes, but I want you to tell me in which situations a file descriptor could be useful. Thanks :D
The most obvious case which springs to mind is:
myProgram >myProgram.output_and_error 2>&1
which sends both standard output and error to the same file.
I've also used:
myProgram 2>&1 | less
which will allow me to page through the output and error in sequence (rather than having errors go to the terminal in "arbitrary" places in the output).
Basically, any time when you need to get at an already existing file descriptor, you'll find yourself using this.
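To add an illustration: here is a minimal sketch (the file name input.txt is an assumption) of using an explicitly numbered descriptor, which keeps stdin free while reading a file line by line:

exec 3< input.txt            # open input.txt for reading on fd 3
while read -r line <&3; do
    echo "got: $line"        # stdin is still usable inside the loop
done
exec 3<&-                    # close fd 3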

Redirecting shell stdout from application run by mono

I have run into an interesting issue that I can't seem to work around, so maybe you gurus can point me in the right direction.
I want to have my app, which is run by mono direct its output to a logfile, but I want it to overwrite / empty the logfile before every write. Now it appends and I can't seem to change that no matter what.
Here are some of the options I've tried and their result:
mono myapp.exe > myapp.log &     # result: appends
mono myapp.exe >| myapp.log &    # result: appends
mono myapp.exe | tee myapp.log   # result: appends
mono myapp.exe >! myapp.log &    # result: nothing written
The above are called from within my bash-based startup script, which otherwise runs nicely. (Just one iteration; I don't need multiple.) Redirecting to /dev/null is so far the best option, as the log file would otherwise grow indefinitely.
The app outputs some info every 2 seconds, and it'd be nice to view it with tail -f myapp.log whenever I want to.
Thank you for your time.
I don't quite get the problem, sorry. Why not use rm to delete the old log first, and not run the program in the background?
Or do you want just one specific line of the output? In that case you need additional tools like head or sed, and you would have to give more details.
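If the goal is for the log file to hold only the most recent output rather than the full history, one possible workaround (a sketch, reusing the file names from the question) is to rewrite the file on every line, since a shell redirect only truncates once when the command starts:

mono myapp.exe | while read -r line; do
    printf '%s\n' "$line" > myapp.log    # '>' truncates on every write
done &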

direct stdout to cache

Does anyone happen to know how to direct STDOUT in Terminal to Cache? Sometimes I would like to copy text from STDOUT somewhere else, e.g. my mail program, and it always seems a bit inconvenient to either copy the output manually or create a new temporary file.
Is there an easy way to do this?
Thanks a lot!
Alex
It's not clear exactly what you're asking. But if you're talking about capturing stdout to a file whilst still being able to see it on the console, then you can use tee (assuming you're using *nix):
./myApp | tee stdout.txt
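If by "Cache" you mean the system clipboard, there are platform-specific tools for piping stdout straight into it (which one applies is an assumption about your setup):

./myApp | pbcopy                        # macOS
./myApp | xclip -selection clipboard    # Linux, with xclip installed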

Do I need to generate a second file to sort a file?

I want to sort a bunch of files. I can do
sort file.txt > foo.txt
mv foo.txt file.txt
but do I need this second file?
(I tried sort file.txt > file.txt of course, but then I just ended up with an empty file.)
Try:
sort -o file.txt file.txt
See http://ss64.com/bash/sort.html
`-o OUTPUT-FILE'
Write output to OUTPUT-FILE instead of standard output. If
OUTPUT-FILE is one of the input files, `sort' copies it to a
temporary file before sorting and writing the output to
OUTPUT-FILE.
The philosophy of classic Unix tools like sort includes that you can build a pipe with them. Every little tool reads from STDIN and writes to STDOUT. This way the next little tool down the pipe can read the output of the first as input and act on it.
So I'd say that this is a bug and not a feature.
Please also read about Pipes, Redirection, and Filters in the very nice book The Art of Unix Programming by ESR.
Because you're writing back to the same file, you'll always have the problem that the shell opens (and truncates) the output file before sort has even started reading the original. So yes, you need to use a separate file.
Now, having said that, there are ways to buffer the whole file into the pipe stream first, but generally you wouldn't want to do that, although it is possible if you write something to do it. You'd be inserting special tools at the beginning and the end to do the buffering. Bash, however, will open the output file too soon if you use its > redirect.
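A minimal sketch of the usual safe pattern, using mktemp so the intermediate file name can't collide with an existing file:

tmp=$(mktemp) &&
sort file.txt > "$tmp" &&
mv "$tmp" file.txt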
Yes, you do need a second file! The command
sort file.txt > file.txt
makes bash set up the redirection of stdout before it starts executing sort. This is a certain way to clobber your input file.
If you want to sort many files, try:
cat *.txt | sort > result.txt
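As an aside, sort accepts multiple file arguments directly, so the cat is optional:

sort *.txt > result.txt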
If you are sorting fixed-length records in a single file, the sort algorithm can swap records within the file itself. A few such algorithms are available; your choice would depend on the randomness properties of the file's contents. Generally, quicksort tends to swap the fewest records and usually completes first when compared to other sorting algorithms.

Loop through a directory with Grep (newbie)

I'm trying to loop through the current directory that the script resides in, which has a bunch of files that end with _list.txt. I would like to grep each file name and assign it to a variable, then execute some additional commands and move on to the next file until there are no more _list.txt files to be processed.
I assume I want something like:
while file_name=`grep "*_list.txt" *`
do
Some more code
done
But this doesn't work as expected. Any suggestions of how to accomplish this newbie task?
Thanks in advance.
If I understand your problem correctly, you don't need a grep. You can just do:
for file in *_list.txt
do
# use "$file" here, e.g. echo "$file"
done
grep is one of the most useful commands of Unix and is well worth learning. As for your current requirement, though, I think the following code will be useful:
for file in *.*
do
echo "Happy Programming"
done
In place of *.* you can use other glob patterns (such as the *_list.txt from your question). For more useful examples, see First Time Linux, or read about all the grep options at your terminal using man grep.
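Putting it together for the question as asked, a sketch (the loop body commands are placeholders, not from the original post):

for file in *_list.txt; do
    [ -e "$file" ] || continue    # skip the unexpanded pattern if nothing matches
    echo "Processing $file"
    # additional commands using "$file" go here
done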
