My program writes its output to a text file instead of stdout, constantly appending new lines to it. I can tail the file whenever I want to see the latest appended content, but now I want the appended content to show up on my terminal as it arrives, as if the program were writing to stdout.
I've come up with an ugly workaround: every five seconds, back up the file, wait, then diff the current content against the five-second-old copy and print the difference, like below:
#!/bin/sh
# show the text appended to a file every 5 seconds
echo "$(pwd)"
while true
do
    cp "$1" "$1.earlier"
    sleep 5
    echo "$(date)"
    diff "$1" "$1.earlier"
done
I think what you want is:
$ tail -f file
From man tail:
-f, --follow[={name|descriptor}]
output appended data as the file grows; -f, --follow, and --follow=descriptor
are equivalent
If you want more than just the last few lines of output, as well as following, you can invoke:
less +F $file
(or press Shift-F while viewing the file in less).
While following the file in less, press Ctrl-C to stop following but keep the file open, then Shift-F to follow again.
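If you only care about what gets appended from now on, rather than the whole existing file, a small sketch (out.log stands in for your program's output file):
tail -n 0 -f out.log    # start with nothing and print lines as they are appended
less +F out.log         # the same idea in less; Ctrl-C to stop following, Shift-F to resume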
Related
I use tail -f to show the contents of a logfile.
What I want is when the logfile content changes, instead of appending the new lines to my screen, only the newly added lines should be shown on my screen.
So it should be as if the screen were cleared every time before the new lines are printed.
I tried to find a solution by web search but couldn't find anything useful.
edit:
In my case several lines may be added at once (it is a PHP error logfile), so I am looking for a solution where more than just the single last line can be shown on screen.
The watch command in combination with tail shows the last line of a log file, refreshing every 2 seconds by default. It doesn't refresh the moment a new line is appended to the log file, but since you can specify the interval, it might help for your use case.
watch -t tail -1 <path_to_logfile>
If you need a faster interval, say every 0.5 seconds, you can specify it with the -n option, i.e.:
watch -t -n 0.5 tail -1 <path_to_logfile>
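Since several lines may be added at once (the PHP error log case from your edit), the same approach works with more than one line of tail output; the path below is a placeholder:
watch -t -n 0.5 'tail -n 15 /path/to/php_error.log'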
Try
$ watch 'tac FILE | grep -m1 -C2 PATTERN | tac'
where
PATTERN is any keyword (or regexp) to identify errors you seek in the log,
tac prints the lines in reverse,
-m is a max count of matching lines to grep,
-C is any number of lines of context (before and after the match) to show (optional).
That would be similar to
$ tail -f FILE | grep -C2 PATTERN
if you didn't mind just appending occurrences to the output in real-time.
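For the PHP error log case, a concrete sketch of that variant (the path and pattern are placeholders; GNU grep's --line-buffered only matters if you pipe the output somewhere further):
tail -n0 -f /var/log/php_errors.log | grep --line-buffered -C2 'PHP Fatal error'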
But if you don't know any generic PATTERN to look for at all,
you'd have to just follow all the updates as the logfile grows:
$ tail -n0 -f FILE
Or even, create a copy of the logfile and then do a diff:
Copy: cp file.log{,.old}
Refresh the webpage with your .php code (or whatever, to trigger the error)
Run: diff file.log{,.old}
(or, if you prefer sort to diff: $ sort file.log{,.old} | uniq -u)
The curly braces are shorthand for both filenames (see Brace Expansion in $ man bash)
If you must avoid any temp copies, store the line count in memory:
z=$(grep -c ^ file.log)
Refresh the webpage to trigger an error
tail -n +$z file.log
The latter approach can be built upon, to create a custom scripting solution more suitable for your needs (check timestamps, clear screen, filter specific errors, etc). For example, to only show the lines that belong to the last error message in the log file updated in real-time:
$ clear; z=$(grep -c ^ FILE); while true; do d=$(date -r FILE); sleep 1; b=$(date -r FILE); if [ "$d" != "$b" ]; then clear; tail -n +$z FILE; z=$(grep -c ^ FILE); fi; done
where
FILE is, obviously, your log file name;
grep -c ^ FILE counts all lines in the file (unlike cat FILE | wc -l, which counts only newlines and so misses a final line that lacks a trailing newline);
sleep 1 sets the pause between timestamp checks to 1 second; you can change it, even to a floating-point number (the shorter the interval, the higher the CPU usage).
To simplify repeated invocations in the future, you could save this compound command in a Bash script that takes the target logfile name as an argument, define a shell function, create an alias in your shell, or just reverse-search your bash history with Ctrl+R. Hope it helps!
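For example, a minimal shell-function sketch of the compound command above (the name follow_new is made up; it takes the logfile as its argument):
follow_new() {
    f=$1
    clear
    z=$(grep -c ^ "$f")
    while true; do
        d=$(date -r "$f")
        sleep 1
        b=$(date -r "$f")
        if [ "$d" != "$b" ]; then
            clear
            tail -n "+$z" "$f"
            z=$(grep -c ^ "$f")
        fi
    done
}
# usage: follow_new /path/to/your.log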
Is it possible to write to a file in one bash process and read it with tail in another, the same way you can read system-generated logs with tail -f?
I would like to open a file and continuously write to it:
vi /tmp/myfile
And in other terminal prints what was written to that file
tail -f /tmp/myfile
I've tried this, but tail doesn't print anything after I save changes in vi (only the initial lines from before the save).
Motivation:
In my toy project, I would like to build a shared clipboard using the pipeto.me service: I would write to my file continuously, and all changes captured by tail would be piped to curl, something like the watch-log example from pipeto.me:
tail -f logfile | curl -T- -s https://pipeto.me/2xrGcZtQ.
But instead of a logfile, it would watch my file, which I would be writing to in vi.
Apart from solving my problem, I'm looking for a general answer as to whether something like this is possible with vi and tail.
You can use the cat command, redirecting its output to /tmp/myfile, so that whatever you type is written to the file:
cat > /tmp/myfile
# input -> add text (standard input is the keyboard by default)
# typing...
Then print the file with the tail command, using -F:
tail -F /tmp/myfile  # -F -> output appended data as the file grows, and keep retrying
# output -> the input given to the file
# typing...
Writing text to the file with vim:
vi /tmp/myfile
# typing...
# :w -> write the text to the file

tail -F /tmp/myfile
# typing...
When you write to your file using vim, it is not saved instantly as you type; only when you leave insert mode and save explicitly (:w) is the file written, and only then does tail's output get updated.
Hence you can use a plugin like Autosaveplugin, which saves automatically, to get the log displayed more or less synchronously.
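As a side note (an assumption worth verifying on your own setup): vi/vim may save by writing a new file and replacing the old one, so the inode can change on :w; plain tail -f keeps following the old inode, which is another reason to prefer tail -F (follow by name). A quick check:
ls -i /tmp/myfile    # note the inode number
# save the file in vi with :w, then:
ls -i /tmp/myfile    # if the number changed, tail -f would have lost track; tail -F recovers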
I would like to extract the first line from a file, read it into a variable, and delete it right afterwards, with a single command. I know sed can read the first line as follows:
sed '1q' file.txt
or delete it as follows:
sed '1d' file.txt
but can I somehow do both with a single command?
The reason for this is that multiple processes will be reading the first line of the file, and I want to minimize the chances of them getting the same line.
It's impossible.
Unless you read the man page and have GNU sed:
printf '%s\n' {1..3} > input
cat input
1
2
3
sed -n '1p; 2,$W output' input
1
cat output
2
3
Explanation:
sed -n '1p; 2,$W output' input
-n     no output by default
1p     print line 1
2,$    from line 2 to the last line ($)
W      write the first line of the pattern space to the named file (non-POSIX, a GNU extension)
From the GNU sed man page:
w filename
Write the current pattern space to filename.
W filename
Write the first line of the current pattern space to filename. This is a GNU extension.
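Putting it together for the original goal (read the first line into a variable and remove it from the file), a sketch assuming GNU sed; the name input.rest is made up:
first=$(sed -n '1p; 2,$W input.rest' input)   # line 1 to stdout, lines 2..$ to input.rest
mv input.rest input                           # the remainder becomes the new input
printf 'first line: %s\n' "$first"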
However, reading and experimenting takes longer than opening the file in a full-blown office suite and deleting the line by hand, or invoking a text-to-speech framework and training it to do the job.
It doesn't work when invoked in POSIX mode:
sed -n --posix '1p; 2,$W output' input
And you still have the manual work of renaming output back to input.
I didn't try writing to input in place, because that could damage my carefully crafted input file; try it at your own risk:
sed -n '1p; 2,$W input' input
However, you might set up a filesystem-notify job that always renames freshly created output files back to input. But I fear you can't do that from within the sed command itself. Except ... (to be continued)
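Regarding the multiple-readers point in the question: a different technique from the sed trick above is to serialize the "take and remove the first line" step with flock(1) (from util-linux, so Linux-specific). A sketch; queue.txt and queue.txt.lock are placeholder names:
#!/bin/sh
exec 9>queue.txt.lock              # open/create the lock file on fd 9
flock -x 9                         # exclusive lock: one reader at a time
line=$(head -n 1 queue.txt)        # grab the first line
tail -n +2 queue.txt > queue.txt.tmp && mv queue.txt.tmp queue.txt   # drop it from the file
flock -u 9                         # release the lock (also released when the shell exits)
printf '%s\n' "$line"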
First off, I'm really bad at shell, as you'll notice :)
Now then, I have the following task: the script gets two arguments (fileName, N). If the number of lines in the file is greater than N, I need to take the last N lines and overwrite the file's contents with them.
I thought of saving the content into a variable, then just cat-ing that back to the file, but for some reason it's not working.
I have problems with saving the last N lines to a variable.
This is how I tried doing it:
lastNLines=`tail -$2 $1`
cat $lastNLines > $1
Your lastNLines is not a filename. cat takes filenames. You also cannot open the input file for writing, because the shell truncates it before tail can get to it, which is why you need to use a temporary file.
However, if you insist on not using a temporary file, here's a non-portable solution:
tail -n "$2" "$1" | sponge "$1"
You may need to install moreutils for sponge.
The arguments cat takes are file names, not the content.
Instead, you can use a temp file, like this:
tail -n "$2" "$1" > "$1._tmp"
mv "$1._tmp" "$1"
To save the content to a variable, you can do what you already included in your question, or:
lastNLines=$(cat "$1")
(after the mv command, of course)
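Putting the pieces together, a sketch of the whole task from the question (filename and N as arguments; the file is only rewritten when it has more than N lines; the temp-file name is arbitrary):
#!/bin/sh
file=$1
n=$2
lines=$(wc -l < "$file")
if [ "$lines" -gt "$n" ]; then
    tail -n "$n" "$file" > "$file.tmp" && mv "$file.tmp" "$file"
fi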
This should be a no-brainer, but apparently I have no brain today.
I have 50 20-gig logs that contain entries from multiple apps, one of which adds a transaction ID to its log lines. I have 42 transaction IDs I need to review, and I'd like to parse out the appropriate lines into separate files.
To do a single file, the command would be simply,
grep CDBBDEADBEEF2020X02393 server.log* > CDBBDEADBEEF2020X02393.log
that creates a log isolated to that transaction, from all 50 server.logs.
Now, I have a file with 42 txnIDs (shortening to 4 here):
CDBBDEADBEEF2020X02393
CDBBDEADBEEF6548X02302
CDBBDE15644F2020X02354
ABBDEADBEEF21014777811
And I wrote:
#/bin/sh
grep $1 server.\* > $1.log
But that is not working. Changing the shebang to #!/bin/bash -xv gives me this weird output (obviously I'm playing with what the correct escape magic must be):
$ ./xtrakt.sh B7F6E465E006B1F1A
#!/bin/bash -xv
grep - ./server\.\*
' grep - './server.*
: No such file or directory
I have also tried the command line
grep - server.* < txids.txt > $1
But OBVIOUSLY that $1 is pointless and I have no idea how to get a file named per txid using the input redirect form of the command.
Thanks in advance for any ideas. I haven't gone the route of doing a foreach in the shell script, because I want grep to put the original filename in the output lines so I can examine context later if I need to.
Also - it would be great to have the server.* files ordered numerically (server.log.1, server.log.2 NOT server.log.1, server.log.10...)
try this:
while read -r txid
do
grep "$txid" server.* > "$txid.log"
done < txids.txt
and for the file ordering - rename the files with one-digit suffixes to two digits, with leading zeroes, e.g. mv server.log.1 server.log.01.
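A sketch of that rename step, assuming the logs really follow the server.log.N naming pattern:
for f in server.log.[0-9]; do        # only the single-digit suffixes
    [ -e "$f" ] || continue          # skip if the glob matched nothing
    mv "$f" "server.log.0${f##*.}"   # server.log.1 -> server.log.01
done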