I have a script located at /usr/local/bin/gq, which is returned by the command whereis gq. Well, almost: what is actually returned is gq: /usr/local/bin/gq. But the following gives me just the filepath (with some whitespace):
whereis gq | cut -d ":" -f 2
What I'd like to do is pipe that into cat, so I can see the contents. However, the old pipe isn't working. Any suggestions?
If you want to cat the contents of gq, then how about:
cat $(which gq)
The command which gq will print /usr/local/bin/gq, and cat will then act on that path.
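If you'd rather make your original pipeline work, you can hand the extracted path to cat with xargs, which also swallows the surrounding whitespace (a sketch, assuming the path contains no spaces):
whereis gq | cut -d ":" -f 2 | xargs cat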
Directory names: F1, F2, F3, ..., F120.
Inside each directory is a file with a common name, xyz.txt.
Each xyz.txt file contains a single value.
Example:
F1
  xyz.txt
    3.345e-2
F2
  xyz.txt
    2.345e-2
F3
  xyz.txt
    1.345e-2
...
F120
  xyz.txt
    0.345e-2
I want to extract these values and paste them into a single file, say new.txt, as one column, like this:
new.txt
3.345e-2
2.345e-2
1.345e-2
...
0.345e-2
Any help please? Thank you so much.
If your files look very similar then you can use grep. For example:
cat F{1..120}/xyz.txt | grep -E '^[0-9][.][0-9]{3}e-[0-9]$' > new.txt
This is a general pattern, in which every digit can be anything. The regular expression says that the whole line must consist of: any digit [0-9], a dot [.], three digits [0-9]{3}, the letter e, a hyphen, and a final digit [0-9].
If your data is more regular, you can also try a simpler solution:
cat F{1..120}/xyz.txt | grep -E '^[0-9][.]345e-2$' > new.txt
In this solution only the first digit can vary.
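The cat is not strictly necessary, by the way; grep can read the files directly if you add -h to suppress the file-name prefixes it prints when given multiple files (same layout assumed):
grep -hE '^[0-9][.][0-9]{3}e-[0-9]$' F{1..120}/xyz.txt > new.txt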
If your files might contain more than just that one line, but the line you want can be unambiguously matched with a regex, you can use
sed -n '/^[0-9]\.[0-9]*e-*[0-9]*$/p' F*/xyz.txt >new.txt
The same can be done with grep, but you have to separately tell it not to print the file names. The -x option, which matches whole lines only, can be used as a convenience to simplify the regex.
grep -h -x '[0-9]\.[0-9]*e-*[0-9]*' F*/xyz.txt >new.txt
If the wildcard matches some files which should be excluded, try a more specific wildcard, or multiple wildcards which together match exactly the files you want, like maybe F[1-9]/xyz.txt F[1-9][0-9]/xyz.txt F1[0-9][0-9]/xyz.txt
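For instance, plugging those wildcards into the sed command above (this covers F1 through F199; adjust the patterns to your actual range, since a glob that matches nothing is passed along literally):
sed -n '/^[0-9]\.[0-9]*e-*[0-9]*$/p' F[1-9]/xyz.txt F[1-9][0-9]/xyz.txt F1[0-9][0-9]/xyz.txt > new.txt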
This might work for you (GNU parallel and grep):
parallel -k grep -hE '^[0-9][.][0-9]{3}e-[0-9]$' F{}/xyz.txt ::: {1..120}
This processes the files in parallel but, thanks to the -k option, outputs the results in input order.
If the files contain just one line and you want the whole thing, you can use bash brace expansion:
cat /path/to/F{1..120}/xyz.txt > output.txt
(this keeps the order too).
If the files have more lines and you need to actually extract the value, use grep -o (-o is not POSIX, but your grep probably has it).
grep -oh '[0-9]\.345e-2' /path/to/F{1..120}/xyz.txt > output.txt
I am running the FINDSTR command to find specific text in .txt files. I want to print each matching line as well as the line before it.
findstr "ActualStartDate:" * > a.txt
If my file is like this:
abcd
defg
cds
ActualStartDate: invalid date
The result should be like this:
cds
ActualStartDate: invalid date
Try this with grep for Windows (-1 prints one line of context before and after each match; use -B 1 if you only want the preceding line):
grep -1 "ActualStartDate:" *.txt
The output looks like:
file.txt-cds
file.txt:ActualStartDate: invalid date
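If installing grep isn't an option, PowerShell's built-in Select-String can do the same job; its -Context parameter takes the number of lines to show before and after each match (a sketch, assuming the same *.txt files):
Select-String -Path *.txt -Pattern "ActualStartDate:" -Context 1,0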
There is a tool written as a batch file, using built-in Windows scripting, that can do this easily:
findrepl.bat - http://www.dostips.com/forum/viewtopic.php?f=3&t=4697
I am redirecting the output of a cvs diff to a log.txt file.
C:\Temp> cvs diff -b -B -r 1.5 -r 1.6 Project\src\Sample.java > log.txt
After executing the above command, the generated content of log.txt looks like this:
Index: project/src/Sample.java
===================================================================
RCS file: \repobase/src/Sample.java,v
retrieving revision 1.5
retrieving revision 1.6
diff -r1.5 -r1.6
78a79,82
> public java.lang.Class getJavaClass() {
> return Sample.class;
> }
>
92c96
< return Demo.getparentClass(this.getClass());
---
> return MyClass.clazz;
All lines of this file that start with < or > are unnecessary. I want to filter them out so that only the minimal rest goes into log.txt. How can I do this from the Windows command line?
Perhaps the cvs diff --brief option will give you what you want.
If not, then you can pipe the diff output to FINDSTR and let it filter the lines:
cvs diff -b -B -r 1.5 -r 1.6 Project\src\Sample.java | findstr /vbl "< >" > log.txt
The /v option means print lines that don't contain any of the search strings.
The /b option means match the search string at the beginning of each line.
The /l option means treat the search string literally (as opposed to a regular expression).
The search string is split at each space, so "< >" is really two search strings.
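If you ever need a search string that itself contains a space, the /c: option treats the quoted text as one literal search string instead of splitting it; for example, to also drop the "retrieving revision" lines:
cvs diff -b -B -r 1.5 -r 1.6 Project\src\Sample.java | findstr /v /c:"retrieving revision" > log.txt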
For more help on FINDSTR, run FINDSTR /?.
For additional help see What are the undocumented features and limitations of the Windows FINDSTR command?
I need to concatenate some relatively large text files, and would prefer to do this via the command line. Unfortunately I only have Windows, and cannot install new software.
type file1.txt file2.txt > out.txt
allows me to almost get what I want, but I don't want the 1st line of file2.txt to be included in out.txt.
I have noticed that more has the +n option to specify a starting line, but I haven't managed to combine these to get the result I want. I'm aware that this may not be possible in Windows, and I can always edit out.txt by hand to get rid of the line, but is there a simple way of doing it from the command line?
more +2 file2.txt > temp
type temp file1.txt > out.txt
or you can use copy. See copy /? for more.
copy /b temp+file1.txt out.txt
I use this, and it works well for me:
TYPE \\Server\Share\Folder\*.csv >> C:\Folder\ConcatenatedFile.csv
Of course, before every run, you have to delete C:\Folder\ConcatenatedFile.csv first.
The only issue is that if all the files have headers, the header will be repeated throughout the concatenated file.
I don't have enough reputation points to comment on the recommendation to use *.csv >> ConcatenatedFile.csv, but I can add a warning:
If you create ConcatenatedFile.csv in the same directory you are concatenating from, it will be appended to itself.
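A simple way to avoid that is to write the output to a folder other than the one you are globbing (the paths here are only an illustration):
TYPE C:\Data\*.csv >> C:\Output\ConcatenatedFile.csv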
Use the FOR command to echo a file line by line, with the 'skip' option to miss a number of starting lines...
FOR /F "skip=1" %i in (file2.txt) do #echo %i
You could redirect the output of a batch file, containing something like...
FOR /F %%i in (file1.txt) do @echo %%i
FOR /F "skip=1" %%i in (file2.txt) do @echo %%i
Note the double % when a FOR variable is used within a batch file.
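One caveat: by default FOR /F splits each line at spaces and tabs (and always skips empty lines), so only the first token of each line is echoed. Adding "delims=" keeps each whole line intact; a corrected sketch of the batch-file version:
FOR /F "delims=" %%i in (file1.txt) do @echo %%i
FOR /F "skip=1 delims=" %%i in (file2.txt) do @echo %%i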
I would put this in a comment to ghostdog74, except my rep is too low, so here goes.
more +2 file2.txt > temp
This code actually skips rows 1 and 2 of the file. The OP wants to keep all rows from the first file (to maintain the header row) and exclude only the first row (presumably the same header row) of the second file, so the OP should use more +1.
type temp file1.txt > out.txt
It is unclear what order results from this code: is temp appended to file1.txt (as desired), or is file1.txt appended to temp (undesired, as the header row would be buried in the middle of the resulting file)?
In addition, these operations take a really long time with large files (e.g. 300 MB).
Here's how to do this:
(type file1.txt && more +1 file2.txt) > out.txt
In PowerShell:
Get-Content file1.txt | Out-File out.txt
Get-Content file2.txt | Select-Object -Skip 1 | Out-File -Append out.txt
I know you said that you couldn't install any software, but I'm not sure how tight that restriction is. Anyway, I had the same issue (trying to concatenate two files with presumably the same headers), and I thought I'd provide an alternative answer for others who arrive at this page, since it worked great for me.
After trying a whole bunch of commands in Windows and getting severely frustrated, and also trying all sorts of graphical editors that promised to open large files but then couldn't, I finally went back to my Linux roots and opened my Cygwin prompt. Two commands:
cp file1.csv out.csv
tail -n+2 file2.csv >> out.csv
For an 800 MB file1.csv and a 400 MB file2.csv, those two commands took under 5 seconds on my machine, in a Cygwin prompt no less. I thought Linux commands were supposed to be slow in Cygwin, but this approach took far less effort and was way easier than any Windows approach I could find.
You can also simply try this
type file2.txt >> file1.txt
It will append the content of file2.txt to the end of file1.txt.
If you need the original file1.txt, take a backup beforehand. Or you can do this:
type file1.txt > out.txt
type file2.txt >> out.txt
If you want a line break at the end of the first file, you can append one explicitly before adding the second (echo. prints an empty line):
type file1.txt > out.txt
echo.>> out.txt
type file2.txt >> out.txt
The help for copy explains that wildcards can be used to concatenate multiple files into one.
For example, to concatenate all .txt files in the current folder whose names start with "abc" into a single file named xyz.txt:
copy abc*.txt xyz.txt
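One thing to hedge against: when copy concatenates in its default ASCII mode it can append a Ctrl+Z (end-of-file) character to the output; adding /b forces binary mode and avoids that:
copy /b abc*.txt xyz.txt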
Another variation, if you instead want to drop the first lines of the first file:
more +2 file1.txt > out.txt && type file2.txt >> out.txt
This takes Test.txt with its header, strips the header line from Test1.txt and Test2.txt, and writes the combined result to Testresult.txt:
type C:\Test.txt > C:\Testresult.txt && more +1 C:\Test1.txt >> C:\Testresult.txt && more +1 C:\Test2.txt >> C:\Testresult.txt