I am diffing two files and the output scrolls off the screen, so I cannot see the differences at the start of the file. Is there a way for me to run diff and scroll the output line-by-line by pressing space, or page-by-page, just like the more command does?
You should be able to pipe the output of diff to more, like so:
diff a.txt b.txt | more
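If you want to scroll backwards as well as forwards, the same idea works with less (space pages forward, b pages back, / searches):
diff a.txt b.txt | less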
I'm trying to integrate PVS-Studio analysis into CI for my homework. Everything seems to work fine except log printing; I want warnings to be colored or highlighted in some other way.
The best I could think of is to use plog-converter to output in HTML format and then use elinks -dump -dump-color-mode 1 to display that in the terminal, but it looks kind of weird.
Is there a better way to do it?
I think the best way is to modify the source of plog-converter. The source code of the utility is published on GitHub so that users can extend its functionality for their own tasks.
Since plog-converter can't do it out of the box and modifying its source code is a bit extreme, I decided to highlight output myself.
After a bit of fiddling with syntax highlighting in the terminal, I found that the simplest way is just to use grep, like this:
plog-converter -t errorfile project.log | \
GREP_COLOR='01;31' grep -E --color=always 'error:|$' | \
GREP_COLOR='01;33' grep -E --color=always 'warning:|$'
I suppose the errorfile format only contains "error:" and "warning:" markers, so this colorizes just those two words in two different colors. The trick is the |$ alternation: $ matches the end of every line, so grep passes every line through and only highlights the keyword where it appears.
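A quick way to see the effect without a real PVS-Studio log (the sample lines here are made up):
printf 'error: one\nwarning: two\nplain line\n' | \
GREP_COLOR='01;31' grep -E --color=always 'error:|$' | \
GREP_COLOR='01;33' grep -E --color=always 'warning:|$'
All three lines come through, but only the error: and warning: prefixes get colored. Note that newer versions of GNU grep deprecate GREP_COLOR in favor of GREP_COLORS, though the old variable still works.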
Example: man -k ls
Output: A LOT of text, so much that I can only read the last 20 lines.
I don't want information on how to scroll up through the output.
I would like to know, if possible, how to format/control the output so that only the first 20 lines are shown, then, when I press enter/scroll down, the next 20 lines are shown.
This way I can read all the output at my own pace. The output waits for me to tell it to continue. Is there a simple command for this?
Notice: This isn't a text file I'm outputting (I think), it's just standard output, and way too much of it, so much that it is unreadable except for the last 20 lines.
Can you just pipe the output to less or more? Or redirect the output to files and then go through them after the output is generated?
E.g. To redirect stdout to a file:
prompt> command -args > output.txt
More information on redirecting stdout and stderr can be found here:
http://tldp.org/HOWTO/Bash-Prog-Intro-HOWTO-3.html
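For reference, the common redirection forms look like this (output.txt and errors.txt are just placeholder names):
command -args > output.txt        # stdout only
command -args 2> errors.txt       # stderr only
command -args > output.txt 2>&1   # both stdout and stderr into one file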
man -k ls | less
Found the answer, literally, right after I posted this question...
Apparently "| less" uses a pipeline to feed any command's output into the less pager, which gives it scrolling output. I got this info from another site through a Google search.
I want to sort a bunch of files. I can do
sort file.txt > foo.txt
mv foo.txt file.txt
but do I need this second file?
(I tried sort file.txt > file.txt of course, but then I just ended up with an empty file.)
Try:
sort -o file.txt file.txt
See http://ss64.com/bash/sort.html
`-o OUTPUT-FILE'
Write output to OUTPUT-FILE instead of standard output. If
OUTPUT-FILE is one of the input files, `sort' copies it to a
temporary file before sorting and writing the output to
OUTPUT-FILE.
The philosophy of classic Unix tools like sort includes that you can build a pipe with them. Every little tool reads from STDIN and writes to STDOUT. This way the next little tool down the pipe can read the output of the first as input and act on it.
So I'd say that this is a feature and not a bug.
Please also read about Pipes, Redirection, and Filters in ESR's very nice book, The Art of Unix Programming.
Because you're writing back to the same file, you'll always hit the problem that the redirect opens (and truncates) the output file before sort gets done loading the original. So yes, you need to use a separate file.
Now, having said that, there are ways to buffer the whole file into the pipe stream first, but generally you wouldn't want to do that, although it is possible if you write something to do it. You'd be inserting special tools at the beginning and the end of the pipe to do the buffering. Bash, however, will open the output file too soon if you use its > redirect.
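One ready-made buffering tool is sponge from the moreutils package (assuming you have it installed): it soaks up all of its input before writing to the file, so the read finishes before the write begins:
sort file.txt | sponge file.txt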
Yes, you do need a second file! The command
sort file.txt > file.txt
would have bash set up the redirection of stdout before it starts executing sort. This is a certain way to clobber your input file.
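You can see the truncation for yourself with a throwaway test file:
sort file.txt > file.txt
wc -c file.txt    # prints "0 file.txt" -- the shell truncated it before sort ever ran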
If you want to sort many files, try:
cat *.txt | sort > result.txt
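Note that sort also accepts multiple file arguments directly, so the cat is optional:
sort *.txt > result.txt
(If result.txt already exists from a previous run it will match the glob too, so delete it first or write the output elsewhere.)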
If you are dealing with sorting fixed-length records in a single file, then the sort algorithm can swap records within the file. There are a few algorithms available; your choice would depend on the file's randomness properties. Generally, quicksort tends to swap the fewest records and usually completes first when compared to other sorting algorithms.
I have a .txt file (Mac OS X Snow Leopard) that has a lot of text. At the end of each paragraph there is a hard return that moves the next paragraph onto another line. This is causing some issues with what I am trying to do to get the content into my db, so I am wondering if there is any way I can remove the hard returns. Is there some sort of script I can run? I am really hoping I don't have to go through and take the hard returns out manually.
To recap, here is what it looks like now:
This is some text. Text is what this is.
And then this is the next paragraph that is on a different line.
And this is what I would like to get to:
This is some text. Text is what this is. And then this is the next paragraph that is on a different line.
For all several thousand lines in my .txt file.
Thanks!
EDIT:
The text I am dealing with in my txt file is actually HTML:
<span class="text">1 </span> THis is where my text is<br/>
And when I run the cat command in the terminal as mentioned below, only the first <span> is there. Everything else is missing...
In a terminal:
cat myfile.txt | tr -d '\r' > file2.txt
There's probably a more efficient way to do this, since the "tr -d '\r'" is the active ingredient, but that's the idea.
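The more efficient form just drops the cat and reads the file via input redirection:
tr -d '\r' < myfile.txt > file2.txt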
I normally just use an editor with good Regular Expression support. TextWrangler is great.
An end of line in TextWrangler is \r, so to remove it, just search for \r and replace it with a space. TBH, I always wondered how it handles CRLF-encoded files, but somehow it works.
I believe you can do this with AppleScript. Unfortunately I'm not familiar with it, but the following should help you accomplish this (it's for a different problem, but it will lead you in the direction you need to go): http://macscripter.net/viewtopic.php?id=18762
Alternatively, if you didn't want to do this with AppleScript and have Excel installed (or access to a machine with it), then the following should help: http://www.mrexcel.com/forum/showthread.php?t=474054
In a Linux terminal, cat file.txt | tr -d "\r\n" > newfile.txt will do. Modify the \r\n part to remove the desired characters.
Quick question, hopefully... I'm building an application with a fairly extensive log file. I'd like the ability at any time to monitor what a specific instance of my application is doing. I could open and close the log file a bunch of times, but it's kind of a pain. Optimally, as lines are written to the log file, they would be written to the console as well. So I'm hoping something along the lines of "cat" exists that will actually block and wait for more content to be available in the input file. Anyone have any ideas?
tail -f logfile
this will keep it open and 'follow' the new output.
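If you also want some history from before you attached, tail lets you pick how many existing lines to print before it starts following:
tail -n 100 -f logfile    # show the last 100 lines, then follow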
An alternate answer for variety: If you're already looking at the log file with less, press capital F to get it to do the same thing tail -f does: wait for new content to be appended and show it.
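You can also start less in follow mode directly; press Ctrl-C to stop following and scroll around, then F to resume:
less +F logfile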
Look at the tee utility:
http://www.devdaily.com/blog/post/linux-unix/use-unix-linux-tee-command-send-output-two-or-more-directions-a
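A minimal sketch of that idea, assuming your application can write its log lines to stdout (myapp and app.log are placeholder names): tee copies its input to the file and to the console at the same time:
./myapp | tee app.log
Use tee -a if you want to append to the log instead of overwriting it.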