cut command to retrieve 2nd part of words - bash

I have a few names that I need to cut in order to get the 2nd part of each name:
agent-tom
agent-harry
agent-disk-see
I used cut -d "-" -f2
but I only managed to get "tom", "harry" and "disk".
Question: how do I use the cut command on the 3rd name so that I get "disk-see"?
Thanks

cut -d '-' -f 2-
Cuts from the 2nd field to the end of the line, so you get everything after the first dash regardless of how many dashes the name contains.
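For example, feeding the three sample names in on standard input (here via printf; the original presumably reads them from a file):
printf '%s\n' agent-tom agent-harry agent-disk-see | cut -d '-' -f 2-
tom
harry
disk-see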

Adapted from Wikipedia:
To output from the second field through the end of each line, using the "-" character as the field delimiter:
cut -d "-" -f 2- file

If you're willing to consider tools other than cut:
pax> sed 's/^[^-]*-//' inputFile
tom
harry
disk-see
This command uses the stream editor sed to remove the part of the line from the start up to the first - character. I generally prefer sed for tasks like these since it's more adaptable when doing things other than simple one-level-deep field manipulations.
Having said that, this is a fairly simple task so a slight modification to your cut command will suffice. Simply use -f2- (field 2 onwards) in place of -f2 (field two only):
pax> cut -d'-' -f2- inputFile
tom
harry
disk-see

Related

How to delete lowercase letters after the second upper case found in the line?

I have a file with names:
Smith, John.
Brown, Aaron K.
And want to get:
Smith, J
Brown, A K
or better:
SmithJ
BrownAK
Can this task be solved in bash?
You can solve it with different tools and different methods. I will show two solutions using sed and one without.
Solution 1
You want to use some command on part of the line.
You can remove all non-uppercase characters from a string with echo "${string}" | tr -cd "[:upper:]".
With GNU sed's s/../../e flag, the line resulting from the substitution is handed to the shell and replaced by its output.
Combining these gives you:
sed -r 's/([^,]*)(.*)/echo "\1\$(echo "\2" | tr -cd "[:upper:]")"/e' file
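As a quick check of the tr step on its own, applied to the tail of the first sample line:
string=", John."
echo "${string}" | tr -cd "[:upper:]"    # prints J (no trailing newline, since the newline is stripped too)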
Solution 2
Less creative but easier to write: temporarily split each line into two lines, run the substitution only on the even-numbered lines, then paste the lines back together and you're finished.
sed -e 's/,/\n/' file | sed '0~2s/[^A-Z]//g' | paste -d '' - -
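To make the intermediate steps visible (the 0~2 address assumes GNU sed), the first two stages of the pipeline turn the sample into:
sed -e 's/,/\n/' file | sed '0~2s/[^A-Z]//g'
Smith
J
Brown
AK
paste -d '' - - then glues each pair of lines back together into SmithJ and BrownAK.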
Solution 3
With the tr from the first and the paste from the second solution you can avoid sed.
Be aware that the tr character set must include a newline.
paste -d '' <(cut -d, -f1 file) <(cut -d, -f2 file | tr -cd '[:upper:]\n')
IMHO the second solution looks best. The first one is slow on large files.

Remove certain characters or keywords from a TXT file in bash

I was wondering if there was a way to remove certain keywords from a text file. Say I have a large file with lines like
My name is John
My name is Peter
My name is Joe
Would there be a way to remove "My name is" without removing the entire line? Could this be done with grep somehow? I tried to find a solution but pretty much all of the ones I came across simply focus on deleting entire lines. Even if I could delete the text up until a certain column, that would fix my issue.
You need a text processing tool like sed or awk to do this, but not grep.
Try this:
sed 's/My name is//g' file
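A quick check on the sample lines, here also including the space after "is" in the pattern so the names come out without a leading blank:
printf 'My name is John\nMy name is Peter\nMy name is Joe\n' | sed 's/My name is //g'
John
Peter
Joe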
EDIT
Purpose of grep:
$ man grep | grep -A2 DESCRIPTION
DESCRIPTION
grep searches the named input FILEs (or standard input if no files are named, or if a single hyphen-minus (-) is given as file name) for lines containing a
match to the given PATTERN. By default, grep prints the matching lines.
With GNU grep:
grep -Po "My name is\K.*" file
Output with a leading white space:
John
Peter
Joe
-P: Interpret PATTERN as a Perl regular expression
-o: Print only the matched (non-empty) parts of a matching line, with each such part on a separate output line.
\K: Discard from the output the part of the match that precedes \K.
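If the leading space is unwanted, it can be moved before \K (assuming the phrase is always followed by exactly one space):
grep -Po "My name is \K.*" file
John
Peter
Joe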
Try one more simple grep:
grep -o '[^ ]*$' Input_file
-o prints only the matched part of the line; the regex matches everything from the last space to the end of the line, which here is just the name.
An awk solution that skips empty lines and prints the last field of each remaining line:
awk '!/^$/{print $NF}' file
John
Peter
Joe
Using cut:
cut -d' ' -f4 input_file
GNU cut features a complement option, used to remove the area specified with -f. If the input_file had surnames such as "My name is John Doe", the previous code would print "John", and this would print "John Doe":
cut --complement -d' ' -f1-3 input_file
cut is also the smallest binary of these utilities, which loosely reflects how lightweight it is:
# these numbers will vary by *nix version and distro...
wc -c `which cut sed awk grep` | head -n -1 | sort -n
43224 /usr/bin/cut
109000 /bin/sed
215360 /bin/grep
662240 /usr/bin/awk

To find a word and copy the following word with shell (Ubuntu)?

Is there a possibility to find a word in a file and then copy the word that follows it?
Example:
abc="def"
bla="no_need"
line_i_need="information_i_need"
still_no_use="blablabla"
So the third line is exactly the line I need!
Is it possible to find this word with shell commands?
Thanks for your support.
Using awk with a custom field separator makes this much simpler:
awk -F '[="]+' '$1=="line_i_need"{print $2}' file
information_i_need
-F '[="]+' sets field separator as 1 or more of = or "
Use grep:
grep line_i_need file_name
It will print:
line_i_need="information_i_need"
This finds the line with grep and cuts out the second field using " as the separator:
grep line_i_need file_name | cut -d '"' -f2
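which, for the example file, prints just the value:
information_i_need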

bash - get usernames from command last output

I need to get all users from a file containing information about all logins within some time interval. (The delimiter is : )
So, I need to get all the usernames from the output of the command last -f.
I tried to do this:
last -f file | cut -d ":" -f1
but the output contains more than just the usernames. It seems to me that some records take more than one line and therefore it can't distinguish the records. I don't know.
Could you help me please? I would be grateful for any advice.
You could say:
last -f file | awk '{print $1}'
If you want to use cut, say:
last -f file | cut -d " " -f1
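As an illustration with a made-up last-style line (the real format varies by system, but the username comes first on each record line):
# hypothetical output line, for illustration only
printf 'alice    pts/0        192.168.0.10     Mon Jan  1 10:00\n' | awk '{print $1}'
alice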

Cut from column to end of line

I'm having a bit of an issue cutting the output up from egrep. I have output like:
From: First Last
From: First Last
From: First Last
I want to cut out the "From: " (essentially leaving the "First Last").
I tried
cut -d ":" -f 7
but the output is just a bunch of blank lines.
I would appreciate any help.
Here's the full code that I am trying to use if it helps:
egrep '^From:' $file | cut -d ":" -f 7
NOTE: I've already tested the egrep portion of the code and it works as expected.
The cut command lines in your question specify colon-separated fields and that you want the output to consist only of field 7; since there is no 7th field in your input, the result you're getting isn't what you intend.
Since the "From:" prefix appears to be identical across all lines, you can simply cut from the 7th character onward:
egrep '^From:' $file | cut -c7-
and get the result you intend.
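A quick check of the character positions (the 7th character is the F of "First"):
echo 'From: First Last' | cut -c7-
First Last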
You were really close. I think you only need to replace ":" with " " as the separator and select from field 2 to the end with -f 2-, like this:
cut -d " " -f 2-
I tested it and it works pretty well.
The -f argument selects which fields to print. Since there is only one : in each line, there are only two fields, so changing -f 7 to -f 2- will give you what you want, albeit with a leading space.
You can combine the egrep and cut parts into one command with sed:
sed -n 's/^From: //gp' $file
sed -n suppresses automatic printing, and the p flag on the substitution explicitly prints only the lines where it matched, so no separate grep step is needed.
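For example, with a made-up non-matching second line to show the filtering (the To: line is hypothetical):
printf 'From: First Last\nTo: Someone Else\n' | sed -n 's/^From: //gp'
First Last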
You can use sed:
sed 's/^From: *//'
OR awk:
awk -F ': *' '$1=="From"{print $2}'
OR grep -oP
grep -oP '^From: *\K.*'
Here is a Bash one-liner:
grep ^From file.txt | while read -a cols; do echo ${cols[@]:1}; done
See: Handling positional parameters at wiki.bash-hackers.org
cut itself is a very handy tool in bash
cut -d (delimiter character) -f (fields that you want as output)
A single field is given directly, as in -f 3; a range of fields can be selected as -f 5-9.
So in this particular case the code would be
egrep '^From:' $file | cut -d\  -f 2-3
The delimiter is a space here, escaped with a backslash (note the escaped space is itself followed by a normal space before -f).
-f 1 corresponds to "From:" and -f 2-3 corresponds to "First Last".
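A quick sanity check of the escaping on a sample line:
echo 'From: First Last' | cut -d\  -f 2-3
First Last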
