Finding a newline in a CSV file - bash

I know there are a lot of questions about this (latest one here), but almost all of them are about how to join those broken lines back into one, or how to remove them from the CSV file. I don't want to remove them; I just want to display/find those lines (and ideally the line numbers).
Example data:
22224,across,some,text,0,,,4 etc
33448,more,text,1,,3,,,4 etc
abcde,text,number,444444,0,1,,,, etc
358890,more
,text,here,44,,,, etc
abcdefg,textds3,numberss,413,0,,,,, etc
985678,93838,text,,,,
,text,continuing,from,previous,line,,, etc
I searched more on this, and I know I shouldn't use bash to accomplish this but rather perl. I tried a few snippets (from various websites; I don't know perl), but apparently I don't have the Text::CSV package and I don't have permission to install it.
As I said, I have no idea how to even start looking for this, so I don't have any script. This is not a Windows file; it is very much a Unix file, so we can ignore the CR problem.
Desired output:
358890,more
,text,here,44,,,, etc
985678,93838,text,,,,
,text,continuing,from,previous,line,,, etc
or
Line 4: 358890,more
,text,here,44,,,, etc
Line 7: 985678,93838,text,,,,
,text,continuing,from,previous,line,,, etc
Much appreciated.

You can use perl to count the number of fields (commas) and keep appending the next line until the record reaches the correct number of commas (28 in this answer; adjust it to match your file):
perl -ne 'if(tr/,/,/<28){$line=$.;while(tr/,/,/<28 && defined($n=<>)){$_.=$n}print "Line $line: $_"}' file

I do love Perl but I don't think it is the best tool for this job.
If you want a report of all lines that do NOT have exactly the correct number of commas/delimiters, you could use awk.
For example, this command:
/usr/bin/awk -F , 'NF != 8' < csv_file.txt
will print all lines that do NOT have exactly 8 fields, i.e. 7 commas. The field separator (a comma here) is set with -F, and NF holds the Number of Fields in the current line.
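Since the question also asks for the line numbers, awk's built-in NR (the current record number) can be added to the report; a minimal sketch along the same lines, still assuming 8 fields per record and no quoted commas:
awk -F , 'NF != 8 {print "Line " NR ": " $0}' csv_file.txt
Note that a record broken across two lines will show up here as two separate short lines rather than being rejoined.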

Related

text manipulation using unix commands only

I have a task where I need to parse through files and extract information. I can do this easily using bash, but I have to get it done through unix commands only.
For example, I have a file similar to the following:
Set<tab>one<tab>two<tab>three
Set<tab>four<tab>five<tab>six
ENDSET
Set<tab>four<tab>two<tab>nine
ENDSET
Set<tab>one<tab>one<tab>one
Set<tab>two<tab>two<tab>two
ENDSET
...
So on and so forth. I want to be able to extract a certain number of sets, say the first 10. Also, I want to be able to extract info from the columns.
Once again, this is a trivial thing to do using bash scripting, but I am unsure of how to do this with unix commands only. I can combine the commands together in a shell script but, once again, only unix commands.
Without an output example, it's hard to know your goal, but anyway, one UNIX command you can use is AWK.
Examples:
Extract 2 sets from your data sample (without including "ENDSET" or blank lines):
$ awk '/ENDSET/{ if(++count==2) exit(0);next; }NF{print}' file.txt
Set one two three
Set four five six
Set four two nine
Extract 3 sets and print 2nd column only (Note 1st column is always "Set"):
$ awk '/ENDSET/{ if(++count==3) exit(0);next; }$2{print $2}' file.txt
two
five
two
one
two
And so on... (more info: $ man awk)
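To pull out the first 10 sets, as the question asks, the hard-coded count can be passed in as an awk variable instead; a sketch along the same lines:
awk -v n=10 '/ENDSET/{ if(++count==n) exit(0);next; }NF{print}' file.txt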

Extracting lines with specific character count

I have a python script that is pulling URLs from pastebin.com/archive, which has links to pastes (which have 8 random characters after pastebin.com in the URL). My current output is a .txt file with the data below in it. I only want the links to pastes (example: http://pastebin.com///Y5JhyKQT) and not links to other pages such as pastebin.com/tools. This is so I can set wget to go pull each individual paste.
The only way I can think of doing this is writing a bash script to count the number of characters in each line and only keep lines with 30 characters exactly (this is the length of the URLs linking to pastes).
I have no idea how I'd go about implementing something like this using grep or awk, perhaps using a while do loop? Any help would be appreciated!
http://pastebin.com///tools
http://pastebin.com//top.location.href
http://pastebin.com///trends
http://pastebin.com///Y5JhyKQT <<< I want to keep this
http://pastebin.com//=
http://pastebin.com///>
From the sample you posted it looks like all you need is:
grep -E '/[[:alnum:]]{8}$' file
or maybe:
grep -E '^.{30}$' file
If that doesn't work for you, explain why and provide a better sample.
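If you would rather filter on the character count directly, the same check can be written in awk; a minimal sketch, assuming the 30-character length from the question:
awk 'length($0) == 30' file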
This is the algorithm:
Read one line at a time, i.e. everything between newline characters.
Count the characters, or store the line in a variable and take its length. This is the length of your line.
Only process the lines whose length is exactly the count you want.
Python has built-in support for both reading a line and getting the length of a string.
#!/usr/bin/env zsh
while read -r aline
do
    if [[ ${#aline} == 30 ]]; then
        print -r -- "$aline"    # do something with the matching line
    fi
done
This is documented in the bash man pages under the "Parameter Expansion" section.
EDIT: this solution is zsh-only.

Using cloc (Count Lines of Code) result

I am writing a script for my research, and I want to get the total number of lines in a source file. I came across cloc and I think I am going to use it in my script.
However, cloc gives a result with too much information (unfortunately, since I am a new member I cannot upload a photo). It gives the number of files, number of lines, number of blank lines, number of comment lines, and other graphical representation stuff.
I am only interested in the number of lines, to use in my calculations. Is there a way to get that number easily (maybe with some command-line option, although I went through the available options and didn't find anything useful for my case)?
I thought about running a regular expression on the result to get the number; however, this is my first time using cloc and there might be a better/more professional way of doing it.
Any thought?
Regards,
Arwa
I am not sure about cloc, but it is worth using the default shell commands.
Please have a look at this question.
To get the number of lines of code per file:
find . -name '*.*' | xargs wc -l
To get the total number of lines of code in a directory:
(find ./ -name '*.*' -print0 | xargs -0 cat) | wc -l
Note that if you need the number of lines from files with a specific extension, you can use *.ext in the -name pattern, e.g. *.rb if it is Ruby.
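If you only need the count for a single source file, redirecting the file into wc keeps the filename out of the output, so you get the bare number (path/to/source_file is just a placeholder here):
wc -l < path/to/source_file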
For something very quick and simple you could just use:
Dir.glob('your_directory/**/*.rb').map do |file|
  File.foreach(file).count
end.reduce(:+)
This will count all the lines of the .rb files in your_directory and its subdirectories, although I would recommend adding some handling for blank lines as well as comment lines. For more, see the documentation for Dir::glob.
@BinaryMee and @engineersmnky, thanks for your responses.
I tried two different solutions: one using "readlines" (I got the answer from @gicappa in "Count the length (number of lines) of a CSV file?"), and the other using cloc. I ran the command
%x{perl #{ClocPath} #{path-to-file} > result.txt}
and saved the result in result.txt
cloc returns its result as a formatted table (I cannot upload an image); it reports the number of blank lines, comment lines, and code lines. As I said, I am interested in the code lines, so I opened the file and used a regular expression to get the number I needed.
content = File.read("#{path}/result.txt")
line = content.scan(/(\s+\d+\s+\d+\s+\d+\s+\d+)/)
total = line[0][0].split(' ').last
content here will hold the contents of the file, and line will then capture this line from the report:
C# 1 3 3 17
C# is the language of the file, 1 is the number of files, 3 is the number of blank lines, 3 is the number of comment lines, and 17 is the number of code lines. (I worked out the format from the cloc script itself.) total will then hold the number 17.
This solution will help if you are reading a specific file only; you will need to extend it if you are reading the lines of more than one file.
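As a side note, if you would rather pull that number out from the shell instead of Ruby, awk can grab the last column of the language row; a rough sketch, assuming the report contains exactly one such row and that it starts with the language name as shown above:
awk '/^C#/ {print $NF}' result.txt
$NF is the last whitespace-separated field on the line, which in this report is the code-line count.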
Hopefully this will help who needs it.
Regards,
Arwa

Create CSV from specific columns in another CSV using shell scripting

I have a CSV file with several thousand lines, and I need to take some of the columns in that file to create another CSV file to use for import to a database.
I'm not in shape with shell scripting anymore; can anyone point me in the right direction?
I have a bash script to read the source file but when I try to print the columns I want to a new file it just doesn't work.
while IFS=, read symbol tr_ven tr_date sec_type sec_name name
do
    echo "$name,$name,$symbol" >> output.csv
done < test.csv
Above is the code I have. Out of the 6 columns in the original file, I want to build a CSV with "column6, column6, column1".
The test CSV file is like this:
Symbol,Trading Venue,Trading Date,Security Type,Security Name,Company Name
AAAIF,Grey Market,22/01/2015,Fund,,Alternative Investment Trust
AAALF,Grey Market,22/01/2015,Ordinary Shares,,Aareal Bank AG
AAARF,Grey Market,22/01/2015,Ordinary Shares,,Aluar Aluminio Argentino S.A.I.C.
What am I doing wrong with my script? Or, is there an easier - and faster - way of doing this?
Edit
These are the real headers:
Symbol,US Trading Venue,Trading Date,OTC Tier,Caveat Emptor,Security Type,Security Class,Security Name,REG_SHO,Rule_3210,Country of Domicile,Company Name
I'm trying to get the last column, which is number 12, but it always comes up empty.
The snippet looks and works fine to me; maybe you have some weird characters in the file, or it is coming from a DOS environment (use dos2unix to "clean" it!). Also, you can make use of read -r to prevent strange behaviour with backslashes.
But let's see how can awk solve this even faster:
awk 'BEGIN{FS=OFS=","} {print $6,$6,$1}' test.csv >> output.csv
Explanation
BEGIN{FS=OFS=","} this sets the input and output field separators to the comma. Alternatively, you can say -F"," (or -F,) on the command line, or pass it as a variable with -v FS=",". The same applies to OFS.
{print $6,$6,$1} prints the 6th field twice and then the 1st one. Note that using print, every comma-separated parameter that you give will be printed with the OFS that was previously set. Here, with a comma.
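For the 12-column file from the edit, where the last column always comes up empty, one likely culprit is the same DOS line-ending issue mentioned above: a trailing carriage return gets attached to the 12th field. A sketch that strips it before printing (assuming column 12, Company Name, is the one you want twice, followed by the Symbol):
awk 'BEGIN{FS=OFS=","} {sub(/\r$/, ""); print $12, $12, $1}' test.csv >> output.csv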

search&replace on huge txt files

I need a text processing tool that can perform search and replace operations PER LINE on HUGE TEXT FILES (>0.5 GB). It can be either Windows or Linux based. (I don't know if there is anything like a streamreader/writer in Linux, but I have a feeling that it would be the ideal solution. The editors I have tried so far load the whole file into memory.)
Bonus question: a tool that can MERGE two huge text files on a per-line basis, separated with e.g. tabs.
Sounds like you want sed. For example,
sed 's/foo/bar/' < big-input-file > big-output-file
should replace the first occurrence of foo by bar in each line of big-input-file, writing the results to big-output-file.
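If you need every occurrence on each line replaced, rather than just the first, add the g flag:
sed 's/foo/bar/g' < big-input-file > big-output-file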
Bonus answer: I just learned about paste, which seems to be exactly what you want for your bonus question.
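For example, something along these lines should join the two files line by line, with a tab between the corresponding lines (tab is paste's default separator; -d sets a different one):
paste fileA fileB > mergedFile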
'sed' is built into Linux/Unix, and is available for Windows. I believe that it only loads a buffer at a time (not the whole file) -- you might try that.
What would you be trying to do with the merge -- interleaved in some way, rather than just concatenating?
Add: interleave.pl
use strict;
use warnings;

my $B;
open INA, $ARGV[0];    # first input file
open INB, $ARGV[1];    # second input file
while (<INA>) {
    print $_;          # a line from file A
    $B = <INB>;
    print $B;          # the corresponding line from file B
}
close INA;
close INB;
run: perl interleave.pl fileA fileB > mergedFile
Note that this is a very bare-bones utility. It does not check if the files exist, and it expects that the files have the same number of lines.
I would use perl for this. It is easy to read a file line by line, it has great search/replace support using regular expressions, and it will let you do the merge as well, since you can make your perl script aware of both files.
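As a concrete sketch of the search/replace part (foo and bar are placeholders for your real pattern and replacement), the one-liner below reads the input one line at a time, so memory use stays flat no matter how big the file is:
perl -pe 's/foo/bar/g' big-input-file > big-output-file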
