I am trying to log some output to a file in real time using Ruby. I would like to be able to do a tail -f on the log file and watch the output get written. At the moment the file only gets written to once I stop the Ruby script. What I am trying to do seems straightforward.
I create the logfile
log = File.open(logFileName, "a")
I later write to it using:
log.puts "#{variable}"
Again, the log file gets created and the correct entries are in it, but only once I have stopped the script from running. I need to tail the log file and see the output in real time.
Thanks in advance!
Normally file input and output are buffered to a degree. You can disable this behaviour by flipping a flag:
log.sync = true
This disables buffering by forcing a flush operation after each write. With that enabled, programs like tail -f can read the data in real time.
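As a minimal sketch of the fix (the file name test.log and the one-second loop are just for illustration), writing timestamps with sync enabled lets a tail -f in a second terminal update once per second:

ruby -e '
log = File.open("test.log", "a")      # open the log in append mode
log.sync = true                       # flush after every write
loop { log.puts(Time.now); sleep 1 }  # one timestamped line per second
'

# in a second terminal:
tail -f test.log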
I'd like to log standard output from a script of mine to a file, but also have it displayed on screen for real-time monitoring. The script outputs something about 10 times every second.
I tried to redirect stdout to a file and then tail -f that file from another terminal, but for some reason tail is updating the screen significantly slower than the script is writing to the file.
What's causing this lag? Is there an alternate method of getting one standard output stream both on my terminal and into a file for later examination?
I can't say why tail lags, but you can use tee:
tee copies standard input to standard output and also to any files given as arguments, redirecting the output to multiple places at once. This is useful when you want not only to send some data down a pipe, but also to save a copy.
Example: <command> | tee <outputFile>
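For the use case in this question you would pipe the script itself into tee; myscript.sh and run.log here are placeholder names:

./myscript.sh | tee run.log       # watch on screen and save a copy (truncates run.log)
./myscript.sh | tee -a run.log    # -a appends instead of truncating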
How much of a lag do you see? A few hundred characters? A few seconds? Minutes? Hours?
What you are seeing is buffering. Almost all file reads and writes are buffered, and there is also some buffering taking place within pipes. It's simply more efficient to pass a packet of data around rather than a byte at a time. (As an aside on how much translation happens under the hood: file names on HFS+ file systems are stored in UTF-16, while Mac OS X normally uses UTF-8 as a default. NTFS also stores file names in UTF-16, while Windows uses code pages for character data by default.)
So, if you run tail -f from another terminal, you may be seeing buffering from tail; but when you use a pipe and then tee, you may have a buffer in the pipe and another in the tee command, which may be why you see the lag.
By the way, how do you know there's a lag? How do you know how quickly your program is writing to the disk? Do you print out something in your program to help track the writes to the file?
In that case, you might not be lagging as much as you think. File writes are also buffered, so it is very possible that the lag isn't from tail -f, but from your script's buffered writes to the file.
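One hedged workaround, assuming the writing program uses the C standard library's default buffering and GNU coreutils is available: stdbuf can force line-buffered stdout, so each line enters the pipe as soon as it is written. (Programs that manage their own buffers, such as Python, need their own flags instead; ./prog and output.log are placeholders.)

stdbuf -oL ./prog | tee output.log    # -oL = line-buffered standard output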
Use the tee command:
tail -f /path/logFile | tee outfile
cat report.txt | sed 's/<\/li>/<\/li> \n/g' > report.txt
This obviously results in an empty file.
Is there a mechanism that allows you to store the data before processing it or store the output until the command has finished executing and then write the file?
http://en.wikipedia.org/wiki/Pipeline_(Unix)#Implementation:
"...a receiving program may only be able to accept 100 bytes per second, but no data is lost. Instead, the output of the sending program is held in a queue. When the receiving program is ready to read data, the operating system sends its data from the queue, then removes that data from the queue."
It sounds like there should be a simple trick to load the output into a queue instead of writing it immediately to the file, then unload it into the file once the command has finished?
Thanks much!
What you need is editing in place:
sed -i 's/<\/li>/<\/li> \n/g' report.txt
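Note that -i in this form is GNU sed; BSD/macOS sed wants an explicit backup suffix (sed -i ''). A portable alternative, and effectively the "queue" the question asks about, is to write to a temporary file and then replace the original. sponge (from the moreutils package, if you have it) does that soaking-up for you:

# portable: write to a temp file, then replace the original
sed 's/<\/li>/<\/li> \n/g' report.txt > report.tmp && mv report.tmp report.txt

# or let sponge soak up all input before overwriting report.txt
sed 's/<\/li>/<\/li> \n/g' report.txt | sponge report.txt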
I want to put my program's output into a file. I typed the following:
./prog > log 2>&1
But there is nothing in the file "log". I am using Ubuntu 11.10 and the default shell is bash.
Does anybody know the cause of this, and how I can debug it?
There are many possible causes:
The program reads its input from the log file while you try to redirect into it with truncation (see Why doesn't "sort file1 > file1" work?).
The output is buffered so that you don't see data in the file until the output buffer is flushed. You can manually call fflush or output std::flush if using C++ I/O stream etc.
The program is smart enough to disable output when the output stream is not a terminal.
You are looking at the wrong file (e.g. one in another directory).
You are dumping the file's contents incorrectly.
Your program outputs '\0' as the first character so the output appears to be empty, even though there is some data.
Name your own.
Your best bet is to run this application under a debugger (like gdb), or use strace or ltrace (or both), and see what the program is doing. I mean, really, output redirection has worked for the last 40 years or so, so the problem must be somewhere else.
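As a sketch of the strace route (Linux; ./prog stands in for your program), you can log just the write system calls and then check whether anything is ever sent to file descriptor 1, i.e. stdout:

strace -f -e trace=write -o trace.txt ./prog > log 2>&1
grep 'write(1' trace.txt | head    # writes to fd 1 are what should land in "log"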
Quick question, hopefully... I'm building an application with a fairly extensive log file. I'd like the ability at any time to monitor what a specific instance of my application is doing. I could open and close the log file a bunch of times, but it's kind of a pain. Optimally, as lines are written to the log file, they would be written to the console as well. So I'm hoping something along the lines of cat exists that will actually block and wait for more content to be available in the input file. Anyone have any ideas?
tail -f logfile
This will keep the file open and 'follow' the new output as it is appended.
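Two common variations, assuming a file named logfile:

tail -n 100 -f logfile    # print the last 100 lines first, then follow
tail -F logfile           # follow by name, so it survives log rotation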
An alternate answer for variety: If you're already looking at the log file with less, press capital F to get it to do the same thing tail -f does: wait for new content to be appended and show it.
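You can also start less in follow mode directly from the command line:

less +F logfile    # start following immediately; Ctrl-C returns to normal less, q quits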
Look at the tee utility:
http://www.devdaily.com/blog/post/linux-unix/use-unix-linux-tee-command-send-output-two-or-more-directions-a
I have a crontab job calling a python script and outputting to a file:
python run.py &> current_date.log
now sometimes when I do
tail -f current_date.log
I see the file filling up with the output, but other times the log file exists but stays empty for a long time. I am sure that the python script is printing output right after it starts running, and the log file is created. Any ideas why it stays empty some of the time?
Python buffers output when it detects that it is not writing to a tty, and so your log file may not receive any output right away. You can configure your script to flush output or you can invoke python with the -u argument to get unbuffered output.
$ python -h
...
-u : unbuffered binary stdout and stderr (also PYTHONUNBUFFERED=x)
see man page for details on internal buffering relating to '-u'
...
The problem is actually Python (not bash) and is by design. Python buffers output by default. Run python with -u to prevent buffering.
Another suggestion is to create a class (or a small wrapper function) that calls flush() right after each write to the log file.
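Putting that together for the crontab entry from the question: &> is a bash-ism and cron typically runs commands with /bin/sh, so the portable 2>&1 form is safer. Either the -u flag or the PYTHONUNBUFFERED environment variable disables Python's output buffering:

python -u run.py > current_date.log 2>&1                   # -u: unbuffered stdout/stderr
PYTHONUNBUFFERED=1 python run.py > current_date.log 2>&1   # same effect via the environment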