Grep command - Text processing - bash

I have a file, links.txt, and I need the output below.
links.txt:
http://www.google.com/test
https://bing.com/web2
www.yahoo.com/link/link2
output.txt:
http://www.google.com/test
https://bing.com/web2
www.yahoo.com/link/link2

There are various ways, but I'll use the one below:
for i in $(cat links.txt); do echo "$i"; done
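Note that $(cat links.txt) is subject to word splitting, so a line containing spaces would break into several pieces. A safer sketch that reads the file line by line:
while IFS= read -r line; do echo "$line"; done < links.txt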

Related

Pipe output of command in shell: how to redirect without saving it in an intermediate file?

Does someone know how to do this in one line of code?
I mean without saving the output of ls in temp.txt in between:
ls | cat $x > temp.txt
while read line; do echo foo/$line; done < temp.txt
Use printf with a glob.
printf 'foo/%s\n' *
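For example, in a directory containing just a.txt and b.txt (illustrative names):
$ printf 'foo/%s\n' *
foo/a.txt
foo/b.txt
The glob expands to the filenames directly, so neither ls nor the temporary file is needed.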

Split output of cat command into separate lines

I'm having trouble splitting the output of the cat command (cat /proc/meminfo) into separate lines to work with them.
#!/bin/bash
CURR_DUMP=$(cat /proc/meminfo)
arrIN=(${CURR_DUMP// kB/})
for t in "${arrIN[@]}"
do
echo $t
done
exit 0
But instead of separate lines I have a mess of parts of each line.
What's going wrong with my solution?
Thanks in advance.
The unquoted expansion ${CURR_DUMP// kB/} is split on every IFS character (spaces and tabs as well as newlines), which is why each line breaks into pieces. You can use mapfile to read whole lines into an array instead:
mapfile -t arr < <(sed 's/ kB$//' /proc/meminfo)
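Putting it together, a minimal rewrite of the original script:
#!/bin/bash
# Read /proc/meminfo one line per array element, stripping the trailing " kB"
mapfile -t arrIN < <(sed 's/ kB$//' /proc/meminfo)
for t in "${arrIN[@]}"
do
echo "$t"
done
exit 0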

I'm trying to validate the domains from a .csv file in bash script

Here is what I have, and it's not working:
for i in `cat cnames.csv`
do nslookup $i | grep -v "8.8.8.8\|=\|Non-authoritative" >> output.txt
done
Any better solutions?
This is Bash FAQ 001; you don't iterate over a file using a for loop.
while IFS= read -r i; do
nslookup "$i"
done < cnames.csv | grep -v "8.8.8.8\|=\|Non-authoritative" > output.txt
Note that you don't need to run grep separately for each call to nslookup; you can pipe the aggregate output to a single call.
You can use the exit status of nslookup.
for i in $(cat cnames.csv); do
if nslookup "$i"; then
echo "$i is valid"
else
echo "$i not found"
fi
done
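A sketch combining the exit-status check with the read loop from above (it assumes your nslookup exits nonzero when a lookup fails, which is true of common implementations):
while IFS= read -r i; do
if nslookup "$i" > /dev/null; then
echo "$i is valid"
else
echo "$i not found"
fi
done < cnames.csv > output.txt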
Is cnames.csv a real .csv file? Wouldn't that require extracting only the column with the addresses in it? Right now the commas and any other fields are read too.
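If the file really is comma-separated, a minimal sketch that looks up only the first field (an assumption; adjust the field if the domains live elsewhere):
while IFS=, read -r domain _; do nslookup "$domain"; done < cnames.csv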
You could probably get them all looked up faster in parallel, and more succinctly, with GNU Parallel:
parallel -a cnames.csv nslookup {} | grep ...
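Here {} is replaced by each line of cnames.csv, and the combined output can be filtered with the same grep as before. GNU Parallel defaults to one job per CPU core; since lookups are network-bound rather than CPU-bound, you can raise the job count with -j, for example:
parallel -j16 -a cnames.csv nslookup {}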

How to avoid special characters when redirecting output in bash scripts

When redirecting output from a bash script to a file, I get special characters in the file. For example,
for file in *; do echo $file; done > output.txt
then if I cat output.txt
cat output.txt
I get
file1.txt
file2.txt
file3.txt
output.txt
but when editing the file, I see this:
^[[0m^[[0mfile1.txt
^[[0m^[[0mfile2.txt
^[[0m^[[0mfile3.txt
^[[0m^[[0moutput.txt
How do I avoid those nasty characters?
Solution:
I had the following line in my .bashrc:
trap 'echo -ne "\e[0m"' DEBUG
By removing it, I solved the problem. (The DEBUG trap runs before every command, so inside the redirected loop its reset sequence was written into output.txt along with the filenames.)
Thank you all for your help.
These are ANSI escape codes, used for formatting text in a terminal. Rather than trying to remove them, you should prevent them from being written in the first place.
Are you sure you're getting this from the exact code you posted? If so, your files actually have these characters in their names, and you should simply rename them.
The far more common way of seeing this is having tools that output ANSI escape sequences. This is a reproducible way of showing the same issue:
ls --color=always > file
If your posted code was an untested example, you should go through and find the tool responsible for the ANSI codes and make it stop (make especially sure you're not looping over ls output).
Here's an example of the problem you're seeing, with touch as stand-in for some process/script that accidentally created filenames with ANSI escapes:
# Reproduce the problem
$ touch $'\x1B[0m\x1B[0mfile.txt'
# Symptoms of the problem
$ ls *.txt
?[0m?[0mfile.txt
$ for f in *.txt; do echo "$f"; done
file.txt
$ for f in *.txt; do echo "$f"; done | cat -v
^[[0m^[[0mfile.txt
# Fix the problem by renaming the bad files
$ crud=$'\x1B[0m'; for f in *"$crud"*; do mv "$f" "${f//$crud/}"; done
# Now works as expected
$ ls *.txt
file.txt
$ for f in *.txt; do echo "$f"; done
file.txt
$ for f in *.txt; do echo "$f"; done | cat -v
file.txt
Run the raw ls command, /bin/ls, as
/bin/ls > your_file
and you will avoid the special characters in the output file your_file. Calling the binary directly bypasses any ls alias (such as one adding --color), so no escape codes are written.
You can avoid those nasty characters with the ls -f option (which disables color) like so:
for file in $(ls -f *); do echo "$file"; done > output.txt

grep-ing multiple files

I want to grep multiple files in a directory and collect the output of each grep in a separate file. So if I grep 20 files, I should get 20 output files, each containing the matches for the searched item. Can anybody help me with this? Thanks.
Use a for statement:
for a in *.txt; do grep target "$a" > "$a.out"; done
Just one gawk command (FILENAME is the name of the file currently being read):
gawk '/target/ {print $0 > (FILENAME ".out")}' *.txt
You can use just the shell; no external commands are needed:
for file in *.txt
do
while IFS= read -r line
do
case "$line" in
*pattern*) echo "$line" >> "${file%.txt}.out";;
esac
done < "$file"
done
