Text editor to view giant log files - Windows

As I have not yet set up a log-rotation solution, I have a 3 GB (38-million-line) log file in which I need to find some information from a certain date. Since using cat | grep is horribly slow, and my current editor (Large Text File Viewer) is equally slow, I was wondering: is there any text editor that works well for viewing log files of more than 35 million lines? I could just use the cat | grep solution and leave it running overnight, but with millions of errors to sort through there has to be a better way.

You might want to try using grep by itself:
grep 2011-04-09 logfile.txt
instead of needlessly using cat:
cat logfile.txt | grep 2011-04-09
When dealing with large amounts of data, this can make a difference.
An interesting read is a Usenet posting from last year: why GNU grep is fast.

Since you are on Windows, you should really try multiple implementations of grep; not all of them are equal, and some are truly awful.
It is not necessary to use cat: grep can read the log file directly, unless it is locked against being shared with readers.
grep pattern logfile > tmpfile
should do the trick. Then you can use almost any editor to examine the selected records, assuming the pattern is reasonably selective.

I don't think you're going to get any faster than grep alone (as others have noted, you don't need the cat).
I personally find "more" and "less" useful (for smaller files). The reason is that sometimes a pattern search will get you to the general vicinity of where you want to be (e.g. a date and time), and you can then scroll through the file from that point.
The "/" key is the regular-expression search command in more (and in less).

Related

Finding Duplicate image files

I have around 1 TB of images stored on my hard disk. These are pictures of friends and family taken over time. Many of these pictures are duplicates, in the sense that the same file is saved in different locations, probably with a different name too. Is there any tool, utility or approach (I could code one) to find the duplicate files?
I would recommend using md5deep or sha1deep. On Linux, simply install the md5deep package (it is included in most Linux distributions).
Once you have it installed, simply run it in recursive mode over your whole disk and save the checksum of every file into a text file, using a command like this:
md5deep -r -l . > filelist.txt
If you like sha1 better than md5, use sha1deep instead (it is part of the same package).
Once you have the file, simply sort it using sort (or pipe it into sort in the previous step):
sort < filelist.txt > filelist_sorted.txt
Now, simply look at the result using any text editor - you will quickly see all the duplicates along with their locations on disk.
If you are so inclined, you can write a simple script in Perl or Python to remove duplicates based on this file list.
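If you would rather stay in the shell than write a Perl or Python script, here is a minimal sketch of pulling the duplicates out of the sorted list (it assumes md5deep's usual "checksum  path" output and the filelist_sorted.txt produced above):
# checksums that occur more than once (the list is already sorted)
cut -d' ' -f1 filelist_sorted.txt | uniq -d > dup_sums.txt
# show every file whose checksum is in that list
grep -F -f dup_sums.txt filelist_sorted.txt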

Diff for 3 binary files

I have 3 binary files. Let's call them file1.bin, file2.bin and file3.bin.
file1.bin and file2.bin have some common parts.
file2.bin and file3.bin have some common parts.
I want to find the common parts between file1.bin and file2.bin that are different between file2.bin and file3.bin.
How would you recommend accomplishing that? I have already dumped the binary files to text files using xxd and then did a 3-way diff using vim -d file1.txt file2.txt file3.txt.
However, vim marks a part as changed in all the files even if it has only changed in one file and remains the same in the other two. I want that special kind of occurrence to be marked differently.
Perhaps you can use the built-in Unix diff (I think it is part of OS X), with the --unchanged-group-format option to list the similarities. Do that for file1 and file2, then for file2 and file3. You can then do a regular diff on the two resulting files.
For an idea of how to get the similarities, have a look at this post.
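For instance, a rough sketch using GNU diff's group-format options on the xxd dumps from the question (the output file name is just a placeholder):
# keep only the groups of lines that both dumps have in common
diff --unchanged-group-format='%=' --old-group-format='' \
     --new-group-format='' --changed-group-format='' \
     file1.txt file2.txt > common_1_2.txt
# repeat for file2.txt and file3.txt, then run a regular diff on the two results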
The tool I work on (ECMerge) does that. You just have to diff the 3 binary files; it will present equal portions side by side, with modified bytes appropriately placed in between. There is no need to produce a hex dump first. You can script it in JavaScript to output whatever you like based on the diff results and the bytes in the files (it also works from the command line).
Chromium used bsdiff, then switched to Courgette for binary diffing, as explained in their blog here. You might find useful leads there.

Very long lines - Windows grep character-based (not line-based) tool

Is there a grep-like tool for Windows where I can restrict the number of characters it outputs from a line in which the searched-for pattern is found?
One of the upstream software systems generates huge text files which we then feed as input to our system.
Sometimes the input files get corrupted and I need to do a quick textual search to find whether particular bits of data are missing. To make it even worse, the input file is just one very, very long line of text, and when I use grep or findstr the result of the search is a huge chunk of text.
I am wondering: how can I limit the number of characters grep shows before/after the pattern I searched for?
Cheers.
Two things spring to my mind:
Call grep with the --only-matching option so that only the text that matches is emitted. Depending on your regex, this may or may not help (a sketch follows below).
Write a very simple executable, call it trunc, which reads from stdin line by line and outputs the first n characters to stdout. Then simply pipe the output from grep to trunc.
The latter option is relatively simple. If you don't want to go the whole hog and produce a proper native exe, it could quite easily be achieved with a Perl/Python/Ruby etc. script.
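To illustrate the first idea: with GNU grep, --only-matching can be made to carry its own context by padding the pattern itself. A sketch (PATTERN, hugefile.txt and the counts of 40 are placeholders):
# print up to 40 characters on either side of each match instead of the whole enormous line
grep -o ".\{0,40\}PATTERN.\{0,40\}" hugefile.txt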

Store and query a mapping in a file, without re-inventing the wheel

If I were using Python, I'd use a dict. If I were using Perl, I'd use a hash. But I'm using a Unix shell. How can I implement a persistent mapping table in a text file, using shell tools?
I need to look up mapping entries based on a string key, and query one of several fields for that key.
Unix already has colon-separated records for mappings like the system passwd table, but there doesn't appear to be a tool for reading arbitrary files formatted in this manner. So people resort to:
key=foo
fieldnum=3
value=$(cat /path/to/mapping | grep "^$key:" | cut -d':' -f$fieldnum)
but that's pretty long-winded. Surely I don't need to make a function to do that? Hasn't this wheel already been invented and implemented in a standard tool?
Given the conditions, I don't see anything hairy in your approach. But maybe consider awk to extract the data. The awk approach allows picking only the first or the last entry, or imposing arbitrary additional conditions:
value=$(awk -F: "/^$key:/{print \$$fieldnum}" /path/to_mapping)
Once bundled in a function it's not that scary. :)
I'm afraid there's no better way, at least within POSIX. But you may also have a look at the join command.
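A rough sketch of the join route, assuming a hypothetical keys.txt listing the keys you want to look up (join needs both inputs sorted on the join field):
# sort the colon-separated mapping on its key field
sort -t: -k1,1 /path/to/mapping > mapping.sorted
sort keys.txt > keys.sorted
# print the full record for every key present in both files
join -t: keys.sorted mapping.sorted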
Bash supports arrays, which are not exactly the same thing. See for example this guide.
area[11]=23
area[13]=37
area[51]=UFOs
echo ${area[11]}
See this LinuxJournal article for the associative arrays in Bash >= 4.0. For older versions of Bash you can fake it:
# store a value under a key by creating a variable named hash<key>
hput () {
  eval hash"$1"='$2'
}
# read the value back by expanding the variable named hash<key>
hget () {
  eval echo '${hash'"$1"'#hash}'
}
# then
hput a blah
hget a # yields blah
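For completeness, a small sketch of the same idea done natively with an associative array in Bash >= 4.0 (the name map is arbitrary):
declare -A map   # requires Bash 4.0 or later
map[a]=blah
echo "${map[a]}" # yields blah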
Your example is one of several ways to do this using shell tools. Note that cat is unnecessary.
key=foo
fieldnum=3
filename=/path/to/mapping
value=$(grep "^$key:" "$filename" | cut -d':' -f$fieldnum)
Sometimes join comes in handy, too.
AWK, Python, Perl, sed and various XML, JSON and YAML tools as well as databases such as MySQL and SQLite can also be used, of course.
Without them, everything else can get convoluted. Unfortunately, there isn't any "standard" utility. I would say that the answer posted by pooh comes closest; AWK is especially adept at dealing with plain-text fields and records.
The answer in this case appears to be: no, there's no widely-available implementation of the ‘passwd’ file format for the general case, and wheel re-invention is necessary in each case.

Great tools to find and replace in files? [closed]

I'm switching from a Windows PHP-specific editor to VIM, on the philosophy of "use one editor for everything and learn it really well."
However, one feature I liked in my PHP editor was its "find and replace" capability. I could approach things two ways:
Just find. Search all files in a project for a string, see all the occurrences listed, and click to dive into that file at that line.
Blindly replace all occurrences of "foo" with "bar".
And of course I could use the GUI to say what types of files, whether to look in subfolders, whether it was case sensitive, etc.
I'm trying to approximate this ability now, and trying to piece it together with bash is pretty tedious. Doable, but tedious.
Does anybody know any great tools for things like this, for Linux and/or Windows? (I would really prefer a GUI if possible.) Or failing that, a bash script that does the job well? (If it would list file names and line numbers and show code snippets, that would be great.)
Try sed. For example:
sed -i -e 's/foo/bar/g' myfile.txt
Vim has multi-file search built in using the command :vimgrep (or :grep to use an external grep program - this is the only option prior to Vim 7).
:vimgrep will search through files for a regex and load a list of matches into a buffer - you can then either navigate the list of results visually in the buffer or with the :cnext and :cprev commands. It also supports searching through directory trees with the ** wildcard. e.g.
:vimgrep "^Foo.*Bar" **/*.txt
to search for lines starting with Foo and containing Bar in any .txt file under the current directory.
:vimgrep uses the 'quickfix' list to store its results. There is also :lvimgrep, which uses a location list that is local to the window you are using.
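A small illustration of that quickfix workflow, in the same spirit as the example above (the pattern and glob are just examples):
:vimgrep /Foo/gj **/*.php => find every match without jumping to the first one
:copen => open the quickfix window listing each match with file name, line number and text
:cnext / :cprev => jump to the next / previous match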
Vim does not support multi-file replace out of the box, but there are plugins that will do that too on vim.org.
I don't get why you can't do this with VIM.
Just Find
/Foo
Highlights all instances of Foo in the file and you can do what you want.
Blindly Replace
:% s/Foo/Bar/g
Obviously this is just the tip of the iceberg. You have lots of flexibility in the scope of your search and full regex support for your search term. It might not work exactly like your former editor, but I think your original 'use one editor' idea is a valid one.
Notepad++ allows me to search and replace in an entire folder (and subfolders), with regex support.
You can use Perl from the command prompt to replace text in files.
perl -p -i".backup" -e "s/foo/bar/g" test.txt
Since you are looking for a GUI tool, I generally use the following 2 tools. Both of them have great functionality including wildcard matching, regex, filetype filters, etc. Both display useful information about the hits, like file names and line numbers.
Visual Studio: fast yet powerful. I use it if the number of files is huge (say, tens of thousands...)
PSPad: lightweight. A nice feature of PSPad's find/replace is that it organizes hits in different files into a tree hierarchy, which is very clear.
There are a number of tools that you can use to make things easier. Firstly, to search all the files in the project from vim you can use :grep like so:
:grep 'Function1' myproject/
This essentially runs grep and lets you quickly jump to the locations where the pattern was found.
Ctags is a tool that finds declarations in your code and then allows vim to jump to those declarations. To do this, run ctags, then place your cursor over a function call and press Ctrl-]. Here is a link with some more ctags information:
http://www.davedevelopment.co.uk/2006/03/13/vim-ctags-and-php-5/
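A minimal sketch of that setup (assuming Exuberant Ctags or a compatible implementation is installed):
# from the project root, index all declarations into a 'tags' file
ctags -R .
# then, in vim, with the cursor on a function call:
#   Ctrl-] jumps to the declaration, Ctrl-t jumps back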
I don't know if it is an option for you, but if you load all your files into vim with
vim *.php
then you can
:set hidden
:argdo %s/foo/bar/g => will execute the substitute command in all opened buffers
:wall => will write all opened buffers
Or instead of loading all your files into vim, try :help vimgrep and a combination of :help argdo and :help argadd
For Windows, I think that grepWin is hard to beat -- a GUI for a powerful and flexible grep tool for Windows. It searches and replaces, knows about regular expressions, that sort of stuff.
Look into sed ... a powerful command-line tool that should accomplish most of what you're looking for ... it supports regex, so your find/replace is quite easy.
(man sed)
Notepad++ has support for syntax highlighting in many languages and supports find and replace across all open files with regex and basic \n \r \t support.
The command grep -rn "search terms" * will search for the specified terms in all files (including those in sub-directories) and will return matching lines including file name and line number. Armed with this info, it is easy to jump to a particular file/line in VIM.
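If you also want the question's "what types of files" filter, GNU grep can restrict the search by filename pattern; a small sketch (the *.php glob is just an example):
# recursive, with file names and line numbers, limited to PHP files under the current directory
grep -rn --include="*.php" "search terms" .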
As was mentioned before, sed is extremely powerful for doing find-and-replace.
You can run both of these tools from inside VIM as well.
Some developers I currently work with swear by TextPad. It has a UI and also supports using regexes -- everything you're looking for and more.
A very useful search tool is ack. (Ubuntu refers to it as "ack-grep" in the repositories and man pages.)
The short version of what it does is a combination of find and grep that's more powerful and intelligent than that pair.
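For example, a quick sketch (it assumes ack is installed; the file-type filter and pattern are placeholders):
# recursive by default, skips VCS directories, prints file names and line numbers
ack --php "Function1"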