I have a temp file with a few lines like these:
...
14G /Users/admin/Desktop/xy
1G /Users/admin/Desktop/yz
3G /Users/admin/Desktop/za
18G /Users/admin/Desktop
...
I only want to get the single line for "/Users/admin/Desktop" as output, but I don't know how to do it.
You can use grep:
grep "/Users/admin/Desktop$" file
The $ anchors the regular expression to the end of the line, so you don't pick up the lines that contain subdirectories.
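With the sample listing above saved as file, that gives just the one line:
$ grep "/Users/admin/Desktop$" file
18G /Users/admin/Desktop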
You can use a minimal awk statement for this, like:
awk '$2=="/Users/admin/Desktop"{print $1}' file
18G
or, to print the entire line:
awk '$2=="/Users/admin/Desktop"' file
18G /Users/admin/Desktop
Related
I have one file, test.sh. Its content looks like this:
Nas /mnt/enjayvol1/backup/test.sh lokesh
thinclient rsync /mnt/enjayvol1/esync/lokesh.sh lokesh
crm rsync -arz --update /mnt/enjayvol1/share/mehul mehul mehul123
I want to retrieve the string wherever the content matches /mnt.
I want the output to be:
/mnt/enjayvol1/backup/test.sh
/mnt/enjayvol1/esync/lokesh.sh
/mnt/enjayvol1/share/mehul
I have tried
grep -i "/mnt" test.sh | awk -F"mnt" '{print $2}'
but this will not give me accurate output. Please help
Could you please try the following awk approach too and let me know if it helps you.
awk -v RS=" " '$0 ~ /\/mnt/' Input_file
Output will be as follows.
/mnt/enjayvol1/backup/test.sh
/mnt/enjayvol1/esync/lokesh.sh
/mnt/enjayvol1/share/mehul
Explanation: The record separator is set to a space, and each resulting record is checked for the string /mnt. Since no action is specified, awk's default action applies, so every record containing /mnt is printed.
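If changing the record separator is not convenient, the same idea can be written as a per-field loop; a small sketch (again assuming the /mnt paths contain no whitespace, and using Input_file as above):
awk '{ for (i = 1; i <= NF; i++) if ($i ~ /^\/mnt\//) print $i }' Input_file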
Short grep approach (assuming that the /mnt... path doesn't contain whitespace):
grep -o '\/mnt\/[^[:space:]]*' lokesh.sh
The output:
/mnt/enjayvol1/backup/test.sh
/mnt/enjayvol1/esync/lokesh.sh
/mnt/enjayvol1/share/mehul
I have a question:
file:
154891
145690
165211
190189
135901
290134
I want output like this (every three uids separated by commas):
154891,145690,165211
190189,135901,290134
How can I do it?
You can use pr:
pr -3 -s, -l 1
Print in 3 columns, with commas as separators, with a 'page length' of 1.
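For the uid list from the question (assuming it is saved as file), the full invocation is:
pr -3 -s, -l 1 file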
154891,145690,165211
190189,135901,290134
sed ':1;N;s/\n/,/;0~3b;t1' file
or
awk 'ORS=NR%3?",":"\n"' file
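The awk one-liner works by switching the output record separator; a sketch of the same idea spelled out with comments:
awk '{
    # end the record with "," after the 1st and 2nd line of each group,
    # and with a newline after every 3rd line
    ORS = (NR % 3 ? "," : "\n")
    print
}' file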
There could be many ways to do that; pick the one you like, with or without the comma ",":
$ awk '{printf "%s%s",$0,(NR%3?",":RS)}' file
154891,145690,165211
190189,135901,290134
$ xargs -n3 -a file
154891 145690 165211
190189 135901 290134
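If the commas are needed with the xargs route, piping through tr is one option (a small sketch):
$ xargs -n3 -a file | tr ' ' ','
154891,145690,165211
190189,135901,290134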
I have a file that contains lines like the following:
/folder/share/folder1
/folder/share/folder1/file.gz
/folder/share/folder2/11072012
/folder/share/folder2/11072012/file1.rar
I am trying to remove these lines:
/folder/share/folder1/
/folder/share/folder2/11072012
To get a final result the following:
/folder/share/folder2/11072012/file1.rar
/folder/share/folder1/file.gz
In other words, I am trying to keep only the path for files and not directories.
This
awk -F/ '$NF~/\./{print}'
splits input records on the character "/" using the command line switch -F
examines the last field of the input record, $NF (where NF is the number of fields in the input record), to see if it DOES contain the character "." (the ~ operator)
if it matches, output the record.
Example
$ echo -e '/folder/share/folder.2/11072012
/folder/share/folder2/11072012/file1.rar' | mawk -F/ '$NF~/\./{print}'
/folder/share/folder2/11072012/file1.rar
$
NB: my microscript looks at . ONLY in the filename part of the full path.
Edit: in my first post I had reversed the logic, printing dotless files instead of dotted ones.
You could use the find command to get only the file list:
find <directory> -type f
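For example, assuming the listed paths actually exist on disk under /folder/share (an assumption, since the question starts from a text file), this prints only the regular files:
find /folder/share -type f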
With awk:
awk -F/ '$NF ~ /\./{print}' File
Set / as delimiter, check if last field ($NF) has . in it, if yes, print the line.
Text-only result:
sed -n 'H
$ {g
:cycle
s/\(\(\n\).*\)\(\(\2.*\)\{0,1\}\)\1/\3\1/g
t cycle
s/^\n//p
}' YourFile
Based on the file and folder names, assuming that:
lines that are contained inside another line are folders and unique lines are files (this could be complemented by an OS file-existence check on the result)
lines are sorted (at least each folder appears before the files inside it)
POSIX version, so use --posix on GNU sed
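An alternative sketch of the same containment idea in awk, reading the file twice and dropping any line that some other line extends with a "/" (no reliance on dots in the names or on sorting; YourFile is the same input as above):
awk 'NR == FNR { lines[NR] = $0; next }
{
    # drop the line if any other line starts with it followed by "/"
    for (i in lines)
        if (index(lines[i], $0 "/") == 1)
            next
    print
}' YourFile YourFile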
How do I get the first few lines from a gzipped file?
I tried zcat, but it's throwing an error:
zcat CONN.20111109.0057.gz|head
CONN.20111109.0057.gz.Z: A file or directory in the path name does not exist.
zcat(1) can be supplied by either compress(1) or by gzip(1). On your system, it appears to be compress(1) -- it is looking for a file with a .Z extension.
Switch to gzip -cd in place of zcat and your command should work fine:
gzip -cd CONN.20111109.0057.gz | head
Explanation
-c --stdout --to-stdout
Write output on standard output; keep original files unchanged. If there are several input files, the output consists of a sequence of independently compressed members. To obtain better compression, concatenate all input files before compressing them.
-d --decompress --uncompress
Decompress.
On some systems (e.g., Mac), you need to use gzcat.
On a Mac you need to use < with zcat:
zcat < CONN.20111109.0057.gz|head
If a continuous range of lines is needed, one option might be:
gunzip -c file.gz | sed -n '5,10p;11q' > subFile
where the 5th through 10th lines (both inclusive) of file.gz are extracted into a new subFile. For sed options, refer to the manual.
If every, say, 5th line is required:
gunzip -c file.gz | sed -n '1~5p;6q' > subFile
which prints the 1st line, jumps over four lines, prints the 6th line, and then stops.
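If the sampling should continue through the whole file instead of stopping after the 6th line, the quit command can simply be dropped (the 1~5 step address is GNU sed syntax):
gunzip -c file.gz | sed -n '1~5p' > subFile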
If you want to use zcat, this will show the first 10 rows
zcat your_filename.gz | head
Let's say you want the first 16 rows:
zcat your_filename.gz | head -n 16
This awk snippet will let you show not only the first few lines but any range you specify. It will also add line numbers, which I needed for debugging an error message pointing to a certain line way down in a gzipped file.
gunzip -c file.gz | awk -v from=10 -v to=20 'NR>=from { print NR,$0; if (NR>=to) exit 1}'
Here is the awk snippet used in the one-liner above. In awk, NR is a built-in variable (the number of records read so far), which is usually equivalent to the line number. The from and to variables are picked up from the command line via the -v options.
NR>=from {
print NR,$0;
if (NR>=to)
exit 1
}
I'm looking for a way to remove lines within multiple CSV files, in bash using sed, awk, or anything appropriate, where the line ends in 0.
So there are multiple csv files, their format is:
EXAMPLEfoo,60,6
EXAMPLEbar,30,10
EXAMPLElong,60,0
EXAMPLEcon,120,6
EXAMPLEdev,60,0
EXAMPLErandom,30,6
So the file will be amended to:
EXAMPLEfoo,60,6
EXAMPLEbar,30,10
EXAMPLEcon,120,6
EXAMPLErandom,30,6
A problem which I can see arising is distinguishing between double digits that end in zero and 0 itself.
So any ideas?
Using your file, something like this?
$ sed '/,0$/d' test.txt
EXAMPLEfoo,60,6
EXAMPLEbar,30,10
EXAMPLEcon,120,6
EXAMPLErandom,30,6
For this particular problem, sed is perfect, as the others have pointed out. However, awk is more flexible, i.e. you can filter on an arbitrary column:
awk -F, '$3!=0' test.csv
This will print the entire line if column 3 is not 0.
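For instance, to filter on the second column instead (the value 60 here is only an illustration, not from the question):
awk -F, '$2!=60' test.csv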
Use sed to remove only the lines ending with ",0":
sed '/,0$/d'
You can also use awk:
$ awk -F"," '$NF!=0' file
EXAMPLEfoo,60,6
EXAMPLEbar,30,10
EXAMPLEcon,120,6
EXAMPLErandom,30,6
This just says: check the last field for 0, and don't print the line if it's found.
sed '/,[ \t]*0$/d' file
I would tend toward sed, but there is an egrep (or grep -E) solution too:
egrep -v ",0$" example.csv