What is a unix command for deleting the first N characters of a line? - bash

For example, I might want to:
tail -f logfile | grep org.springframework | <command to remove first N characters>
I was thinking that tr might have the ability to do this but I'm not sure.

Use cut. E.g., to strip the first 4 characters of each line (i.e., start on the 5th character):
tail -f logfile | grep org.springframework | cut -c 5-
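For instance, with a hypothetical log line that starts with an 11-character timestamp prefix:

```shell
# "2023-01-01 " is 11 characters, so the message starts at character 12
echo "2023-01-01 INFO message" | cut -c 12-
# prints: INFO message
```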

sed 's/^.\{5\}//' logfile
Replace 5 with the number of characters you want to remove; it should do the trick.
EDIT
Note that sed applies the substitution to every line of the file already, and the g flag is unnecessary here: the ^ anchor can match at most once per line.
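A quick illustration of the same substitution on a fixed string:

```shell
# Remove the first 6 characters ("hello " including the trailing space)
echo "hello world" | sed 's/^.\{6\}//'
# prints: world
```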

You can use cut:
cut -c N- file.txt > new_file.txt
-c: characters
file.txt: input file
new_file.txt: output file
N-: Characters from N to end to be cut and output to the new file.
It can also take other ranges like 'N', 'N-M', and '-M', meaning the Nth character, the Nth to Mth characters, and the first to Mth characters, respectively.
This will perform the operation to each line of the input file.
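A small sketch of the three range forms on a throwaway string:

```shell
s="abcdefgh"
echo "$s" | cut -c 3      # 3rd character only: c
echo "$s" | cut -c 3-5    # 3rd through 5th:    cde
echo "$s" | cut -c -3     # 1st through 3rd:    abc
```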

Here is a simple function, tested in bash. The 1st parameter is the string, the 2nd is the number of characters to strip:
function stringStripNCharsFromStart {
  echo "${1:$2}"
}
(The length argument can be omitted; ${1:$2} expands to everything from offset $2 to the end.)
Usage:
$ stringStripNCharsFromStart "12abcdefgh-" 2
# abcdefgh-

tail -f logfile | grep org.springframework | cut -c 900-
would remove the first 900 characters
cut uses 900- to show from the 900th character to the end of the line;
however, when I pipe all of this through grep I don't get anything. (A likely cause: grep buffers its output when writing into a pipe; GNU grep's --line-buffered option avoids this.)

I think awk would be the best tool for this as it can both filter and perform the necessary string manipulation functions on filtered lines:
tail -f logfile | awk '/org.springframework/ {print substr($0, 6)}'
or
tail -f logfile | awk '/org.springframework/ && sub(/^.{5}/,"",$0)'
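Since substr is 1-indexed, substr($0, N+1) keeps everything from character N+1 onward, i.e. drops the first N characters. A sketch with a made-up log line:

```shell
# Drop the first 3 characters ("xx ") of each matching line
echo "xx org.springframework.Foo started" | awk '/org\.springframework/ {print substr($0, 4)}'
# prints: org.springframework.Foo started
```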

x=hello
echo ${x:1}
returns ello
replace 1 with N as required


how to use cut command -f flag as reverse

This is a text file called a.txt
ok.google.com
abc.google.com
I want to select each part of the subdomain separately:
cat a.txt | cut -d "." -f1 (selects ok, counting from the left)
cat a.txt | cut -d "." -f2 (selects google, counting from the left)
Is there any way to get the result counting from the right side?
cat a.txt | cut (so it can select com from the right side)
There could be a few ways to do this; one I can think of right now is a rev + cut + rev solution. Reverse the input with rev, set the field separator to . and cut the first field (which, because of the reversal, is the last field of the original line), then pass the output through rev again to restore the original character order.
rev Input_file | cut -d'.' -f 1 | rev
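A quick check with one of the sample lines:

```shell
echo "ok.google.com" | rev | cut -d'.' -f1 | rev   # last field:            com
echo "ok.google.com" | rev | cut -d'.' -f2 | rev   # second from the right: google
```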
You can use awk to print the last field:
awk -F. '{print $NF}' a.txt
-F. sets the field separator to "."
$NF is the last field
And you can give your file directly as an argument, so you can avoid the famous "Useless use of cat"
For other fields, counting from the last, you can use expressions as suggested in the comment by @sundeep or as described in the user's guide under 4.3 Nonconstant Field Numbers. For example, to get the domain just before the TLD, you can subtract 1 from the number of fields NF:
awk -F. '{ print $(NF-1) }' a.txt
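Both forms on one of the sample lines:

```shell
echo "abc.google.com" | awk -F. '{print $NF}'      # last field:         com
echo "abc.google.com" | awk -F. '{print $(NF-1)}'  # next-to-last field: google
```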
You might use sed with a quantifier for the grouped value repeated till the end of the string.
( Start group
\.[^[:space:].]+ Match 1 dot and 1+ occurrences of any char except a space or dot
){1} Close the group followed by a quantifier
$ End of string
Example
sed -E 's/(\.[^[:space:].]+){1}$//' file
Output
ok.google
abc.google
If the quantifier is {2} the output will be
ok
abc
Depending on what you want to do with the values afterwards, you could use bash to split the domain into an array of its components:
#!/bin/bash
IFS=. read -ra comps <<< "ok.google.com"
echo "${comps[-2]}"
# or for bash < 4.2
echo "${comps[${#comps[@]}-2]}"
Output:
google

bash check for words in first file not contained in second file

I have a txt file containing multiple lines of text, for example:
This is a
file containing several
lines of text.
Now I have another file containing just words, like so:
this
contains
containing
text
Now I want to output the words which are in file 1, but not in file 2. I have tried the following:
cat file_1.txt | xargs -n1 | tr -d '[:punct:]' | sort | uniq | comm -i23 - file_2.txt
xargs -n1 to put each space-separated substring on its own line.
tr -d '[:punct:]' to remove punctuation.
sort and uniq to make a sorted, deduplicated list to use with comm, which is given the -i flag to make it case-insensitive.
But somehow this doesn't work. I've looked around online and found similar questions; however, I wasn't able to figure out what I was doing wrong. Most answers to those questions work with two files that are already sorted and stripped of newlines, spaces, and punctuation, while my file_1 may contain any of those at the start.
Desired output:
is
a
file
several
lines
of
paste + grep approach:
grep -Eiv "($(paste -sd'|' <file2.txt))" <(grep -wo '\w*' file1.txt)
The output:
is
a
file
several
lines
of
I would try something more direct:
for A in `cat file1 | tr -d '[:punct:]'`; do grep -wq "$A" file2 || echo "$A"; done
grep flags used: -q for quiet (we only need the exit status), -w for whole-word match
One in awk:
$ awk -F"[^A-Za-z]+" ' # anything but a letter is a field delimiter
NR==FNR { # process the word list
a[tolower($0)]
next
}
{
for(i=1;i<=NF;i++) # loop all fields
if(!(tolower($i) in a)) # if word was not in the word list
print $i # print it. duplicates are printed also.
}' another_file txt_file
Output:
is
a
file
several
lines
of
grep:
$ grep -vwi -f another_file <(tr -s -c 'A-Za-z' '\n' < txt_file)
is
a
file
several
lines
of
This pipeline will take the original file, replace spaces with newlines, convert to lowercase, then use grep to filter (-v) full words (-w) case insensitive (-i) using the lines in the given file (-f file2):
cat file1 | tr ' ' '\n' | tr '[:upper:]' '[:lower:]' | grep -vwif file2
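A self-contained run of that pipeline, recreating the question's sample data in temporary files (the file names here are placeholders):

```shell
f1=$(mktemp); f2=$(mktemp)
printf 'This is a\nfile containing several\nlines of text.\n' > "$f1"
printf 'this\ncontains\ncontaining\ntext\n' > "$f2"
# split on spaces, lowercase, then drop any whole word listed in the second file
tr ' ' '\n' < "$f1" | tr '[:upper:]' '[:lower:]' | grep -vwif "$f2"
rm -f "$f1" "$f2"
```

This prints is, a, file, several, lines, and of, one per line, matching the desired output ("text." is filtered too, because -w treats the trailing period as a word boundary).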

Count number of special characters in Unix shell

I have a delimited file that is separated by octal \036 or Hexadecimal value 1e.
I need to count the number of delimiters on each line using a bash shell script.
I was trying to use awk, not sure if this is the best way.
Sample Input (| is a representation of \036)
Example|Running|123|
Expected output:
3
awk -F'|' '{print NF-1}' file
Change | to whatever separator you like. If your file can have empty lines then you need to tweak it to:
awk -F'|' '{print (NF ? NF-1 : 0)}' file
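The difference shows up on lines with no delimiter at all:

```shell
# "a|b|c|" has 4 fields -> 3 delimiters; the other lines have none
printf 'a|b|c|\nno-delims\n\n' | awk -F'|' '{print (NF ? NF-1 : 0)}'
# prints: 3, 0, 0 (one per line)
```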
You can try
awk '{print gsub(/\|/,"")}'
Simply try
awk '{print gsub(/\036/, "")}' Input_file
Explanation: gsub replaces every \036 delimiter with an empty string and returns the number of substitutions it made, which is the per-line delimiter count (GNU awk understands the \036 octal escape in a regexp).
This will work as far as I know
echo "Example|Running|123|" | tr -cd '|' | wc -c
Output
3
This should work for you:
awk -F '\036' '{print NF-1}' file
3
-F '\036' sets input field delimiter as octal value 036
Awk may not be the best tool for this. GNU grep has a handy -o option that prints each matching pattern on a separate line. You can then count how many matching lines are generated for each input line, and that's the count of your delimiters. E.g. (where ^^ in the file is actually hex 1e):
$ cat -v i
a^^b^^c
d^^e^^f^^g
$ grep -n -o $'\x1e' i | uniq -c
2 1:
3 2:
if you remove the uniq -c you can see how it's working. You'll get "1" printed twice because there are two matching patterns on the first line. Or try it with some regular ascii characters and it becomes clearer what the -o and -n options are doing.
If you want to print the line number followed by the field count for that line, I'd do something like:
$ grep -n -o $'\x1e' i | tr -d ':' | uniq -c | awk '{print $2 " " $1}'
1 2
2 3
This assumes that every line in the file contains at least one delimiter. If that's not the case, here's another approach that's probably faster too:
$ tr -d -c $'\x1e\n' < i | awk '{print length}'
2
3
0
0
0
This uses tr to delete (-d) all characters that are not (-c) 1e or \n. It then pipes that stream of data to awk which just counts how many characters are left on each line. If you want the line number, add " | cat -n" to the end.

Get last four characters from a string

I am trying to parse the last 4 characters of Mac serial numbers from terminal. I can grab the serial with this command:
serial=$(ioreg -l |grep "IOPlatformSerialNumber"|cut -d ""="" -f 2|sed -e s/[^[:alnum:]]//g)
but I need to output just the last 4 characters.
Found it in a Linux forum: echo ${serial:(-4)}
Using a shell parameter expansion to extract the last 4 characters after the fact works, but you could do it all in one step:
ioreg -k IOPlatformSerialNumber | sed -En 's/^.*"IOPlatformSerialNumber".*(.{4})"$/\1/p'
ioreg -k IOPlatformSerialNumber returns far fewer lines than ioreg -l, so it speeds up the operation considerably (about 80% faster on my machine).
The sed command matches the entire line of interest, and replaces it with the last 4 characters before the " that ends the line; i.e., it returns the last 4 chars. of the value.
Note: The ioreg output line of interest looks something like this:
| "IOPlatformSerialNumber" = "A02UV13KDNMJ"
As for your original command: cut -d ""="" is the same as cut -d = - the shell simply removes the empty strings around the = before cut sees the value. Note that cut only accepts a single delimiter char.
You can also do: grep -Eo '.{4}$' <<< "$serial"
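For example, with the sample serial shown above:

```shell
serial="A02UV13KDNMJ"
grep -Eo '.{4}$' <<< "$serial"
# prints: DNMJ
```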
I don't know what the output of ioreg -l looks like, but it seems you are using many pipes to do something that awk alone could handle:
awk -F= '/IOPlatformSerialNumber/ {   # -F= : use = as field separator; match lines containing IOPlatform...
    gsub(/[^[:alnum:]]/, "", $2)      # strip all non-alphanumeric chars from the 2nd field
    print substr($2, length($2)-3)    # print the last 4 characters
}'
Or even sed (a bit ugly, given the repeated pattern): capture the first 4 alphanumeric characters occurring after the first =:
sed -rn '/IOPlatformSerialNumber/{
s/^[^=]*=[^a-zA-Z0-9]*([a-zA-Z0-9])[^a-zA-Z0-9]*([a-zA-Z0-9])[^a-zA-Z0-9]*([a-zA-Z0-9])[^a-zA-Z0-9]*([a-zA-Z0-9]).*$/\1\2\3\4/;p
}'
Test
$ cat a
aaa
bbIOPlatformSerialNumber=A_+23B/44C//55=ttt
IOPlatformSerialNumber=A_+23B/44C55=ttt
asdfasd
The last 4 alphanumeric characters between the 1st and 2nd = are 4C55:
$ awk -F= '/IOPlatformSerialNumber/ {gsub(/[^[:alnum:]]/, "", $2); print substr($2, length($2)-3)}' a
4C55
4C55
Without you posting some sample output of ioreg -l this is untested and a guess, but it looks like all you need is something like:
ioreg -l | sed -nr 's/.*"IOPlatformSerialNumber" = "[[:alnum:]]*([[:alnum:]]{4})".*/\1/p'
(-n together with the p flag prints only lines where the substitution succeeded.)

How to strip a number in the output of an executable?

I run an executable which outputs a lot of lines to stdout. The last line is
Ran in 100 seconds
The code in the C program of the executable to write the last line is
printf("Ran in %g seconds\n", time);
So there is a newline character at the end.
I want to strip the last number, e.g. 100, from the stdout, so in bash
./myexecutable > output
How can I then parse output further to get the time value in bash? Do I need additional tools to do that?
Thanks!
You could use grep:
grep -oP 'Ran in \K\d+' output
or
grep -oP '(?<=Ran in )\d+(?= seconds)' output
Let's say:
s='Run in 100 seconds'
Using tr:
tr -cd '[[:digit:]]' <<< "$s"
100
Using sed:
sed 's/[^0-9]*//g' <<< "$s"
100
However if you want to grab last number in a line then use this lookahead regex:
s='Run 10 in 100 seconds'
grep -oP '\d+(?!\D*\d)' <<< "$s"
100
Or, use tail to grab the last line (tail -n 1 <file>) and extract the number in either of two ways:
Using sed with three pattern groups and printing the second group match:
tail -n 1 output | sed 's/\(^Run in \)\([0-9]\+\)\( seconds$\)/\2/g'
Using awk to print the third ($3) token:
tail -n 1 output | awk '{print $3}'
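For example, on the last line alone:

```shell
echo "Ran in 100 seconds" | awk '{print $3}'
# prints: 100
```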
