How to redirect stderr to a file for the whole pipe? - bash

I am running a command like this:
mycmd1 | mycmd2 | mycmd3 | lp
Is there a way to redirect stderr to a file for the whole pipe instead of repeating it for each command?
That is to say, I'd rather avoid doing this:
mycmd1 2>/myfile | mycmd2 2>/myfile | mycmd3 2>/myfile | lp 2>/myfile

Either
{ mycmd1 | mycmd2 | mycmd3 | lp; } 2>> logfile
or
( mycmd1 | mycmd2 | mycmd3 | lp ) 2>> logfile
will work. (The first version might have a slightly faster (~1 ms) startup time, depending on the shell.)
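A quick way to convince yourself that either form captures stderr from every stage is a throwaway pipeline (a sketch; the commands below merely stand in for mycmd1..3 and lp):
{ ls /nonexistent | grep foo | sort; } 2>> logfile
cat logfile   # should now contain the error message from ls, the first stage of the pipe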

I tried the following, and it seems to work:
(mycmd1 | mycmd2 | mycmd3 | lp) 2>>/var/log/mylogfile.log
I use >> because I want to append to the logfile rather than overwriting it every time.

Related

How to check if a folder has any tab delimited file in it?

I am trying to search for all the tab-delimited files in one folder, and if any are found, I need to transfer all of them to another folder using bash.
In my code, I am currently trying to find all files, but somehow it is not working.
Here is my code:
>nul 2>nul dir /a-d "folderName\*" && (echo Files exist) || (echo No file found)
Thanks in advance :)
For a simple move (or copy -- replace mv with cp) of files, @tripleee's answer is sufficient. To recursively search for files and run a command on each, find comes in handy.
Example:
find <src> -type f -name '*.tsv' -exec cp {} <dst> \;
Where <src> is the directory to copy from, and <dst> is the directory to copy to. Note that this searches recursively, so any files with duplicate names will cause overwrites. You can pass -i to cp to have it prompt before overwriting:
find <src> -type f -name '*.tsv' -exec cp -i {} <dst> \;
Explained:
find        the find command, useful for searching for files
<src>       location to search
-type       flag for specifying the file type
f           'f' for a regular file (as opposed to e.g. 'd' for a directory)
-name       flag for specifying a file name pattern
'*.tsv'     pattern of files to match
-exec       flag for executing a command (cp in this case)
cp          the copy command
-i          prompt before overwriting
{}          the path of each file found
<dst>       destination directory
\           escape for the terminator
;           terminator
To get a feel for what happens without actually having find run the real command, you can prefix it with echo to just print each command instead of running it:
find <src> -type f -name '*.tsv' -exec echo cp -i {} <dst> \;
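Since the question is about transferring the files, the same pattern should also work with mv in place of cp (a sketch, again with -i to prompt before overwriting):
find <src> -type f -name '*.tsv' -exec mv -i {} <dst> \;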
Your attempt has very little valid Bash script in it.
mv foldername/*.tsv otherfolder/
There will be an error message if there are no matching files.
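If that error matters, one way to guard against it is bash's nullglob option (a sketch; the file and folder names are the ones from the question):
shopt -s nullglob                  # make a non-matching glob expand to nothing
files=(foldername/*.tsv)
if (( ${#files[@]} )); then
    mv -- "${files[@]}" otherfolder/
else
    echo "No file found"
fi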
"it is not working". That means very little on stackoverflow.
Let's first examine what you've done:
>nul 2>nul dir /a-d "folderName\*"
So, you're doing a dir (most Linux users would use ls, but so be it) on
/a-d
everything under folderName
and the output is in the file nul. For debugging purposes, it would be good to see what is in nul (do cat nul). I would bet it is something like:
dir: cannot access '/a-d': No such file or directory
dir: cannot access 'folderName\*': No such file or directory
That means that dir exits with an error. So, echo No file found will be executed.
This means that your output is probably
No file found
Which is exactly as expected.
In your code, you said you want to find all files. That means you want the output of
ls folderName
or
find folderName
if you want to do things recursively. Because find has been explained above by jsageryd, I won't elaborate on that.
If you just want to look in that specific directory, you might do:
if dir folderName/*.tsv > nul 2>nul ; then
    echo "Files exist"
else
    echo "No file found"
fi
and go from there.

Foreach loop in bash

I have two files: one with about 100 root domains, and a second file with URLs only. Now I have to filter that URL list to get a third file which contains only URLs that have domains from the list.
Example of URL list:
| URL |
| ------------------------------|
| http://github.com/name |
| http://stackoverflow.com/name2|
| http://stackoverflow.com/name3|
| http://www.linkedin.com/name3 |
Example of word list:
github.com
youtube.com
facebook.com
Result:
| http://github.com/name |
My goal is to keep the whole row when the URL contains a specific word. This is what I tried:
for i in $(cat domains.csv); do
    grep "$i" urls.csv >> filtered.csv
done
The result is strange: I got some of the links, but not all of the ones that contain root domains from the first file. Then I tried to do the same thing with Python and saw that bash doesn't do what I wanted; I got a better result with the Python script, but it takes more time to write a Python script than to run bash commands.
How should I accomplish this with bash?
Using grep:
grep -F -f domains.csv urls.csv
Test Results:
$ cat wordlist
github.com
youtube.com
facebook.com
$ cat urllist
| URL |
| ------------------------------|
| http://github.com/name |
| http://stackoverflow.com/name2|
| http://stackoverflow.com/name3|
| http://www.linkedin.com/name3 |
$ grep -F -f wordlist urllist
| http://github.com/name |
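If you prefer to keep the loop structure from the question, reading the domain list line by line avoids the word-splitting pitfalls of for i in $(cat ...) (a sketch; -F again treats each domain as a fixed string):
while IFS= read -r domain; do
    [ -n "$domain" ] && grep -F "$domain" urls.csv
done < domains.csv >> filtered.csv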

Simplify lots of sed commands

I have the following command that I use to rewrite some maxscale output to be able to use it in other software:
maxadmin list servers | sed -r 's/[^a-z 0-9]//gi;/^\s*$/d;1,3d;' | awk '$1=$1' | cut -d ' ' -f 1,5 | sed -e 's/ /":"/g' | sed -e 's/\(.*\)/"\1"/' | tr '\n' ',' | sed 's/.$/}\n/' | sed 's/^/{/'
I am thinking this is way too complex for what I want to do, but I am not able to see a simpler version of it myself. What I want is to rewrite this (output of maxadmin list servers):
Servers.
-------------------+-----------------+-------+-------------+--------------------
Server             | Address         | Port  | Connections | Status
-------------------+-----------------+-------+-------------+--------------------
svr_node1          | 192.168.178.1   |  3306 |           0 | Master, Synced, Running
svr_node2          | 192.168.178.1   |  3306 |           0 | Slave, Synced, Running
svr_node3          | 192.168.178.1   |  3306 |           0 | Slave, Synced, Running
-------------------+-----------------+-------+-------------+--------------------
Into this:
{"svrnode1":"Master","svrnode2":"Slave","svrnode3":"Slave"}
My command does a good job, but as I said, there should hopefully be a simpler way with fewer sed commands.
You can use awk, like this:
json.awk
BEGIN {
    printf "{"
}
# Everything after line four and before the last ------ line,
# plus the last empty line (if any).
NR>4 && !/^([-]|$)/ {
    sub(/,/, "", $9)                     # Remove trailing comma
    printf "%s\"%s\":\"%s\"", s, $1, $9
    s=","                                # Set comma separator after the first iteration
}
END {
    print "}"
}
Run it like this:
maxadmin list servers | awk -f json.awk
Output:
{"svr_node1":"Master","svr_node2":"Slave","svr_node3":"Slave"}
In comments there came up the question how to achieve that without an extra json.awk file:
maxadmin list servers | awk 'BEGIN{printf"{"}NR>4&&!/^([-]|$)/{sub(/,/,"",$9);printf"%s\"%s\":\"%s\"",s,$1,$9;s=","}END{print"}"}'
Ugly, but works. ;)
If you want to put this into a shell script, consider a multiline version like this:
maxadmin list servers | awk '
    BEGIN { printf "{" }
    NR>4 && !/^([-]|$)/ {
        sub(/,/, "", $9)
        printf "%s\"%s\":\"%s\"", s, $1, $9
        s=","
    }
    END { print "}" }'
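To try the script without a live maxadmin, you can feed the sample table from the question through a here-document (a sketch; json.awk is the file shown above):
awk -f json.awk <<'EOF'
Servers.
-------------------+-----------------+-------+-------------+--------------------
Server             | Address         | Port  | Connections | Status
-------------------+-----------------+-------+-------------+--------------------
svr_node1          | 192.168.178.1   |  3306 |           0 | Master, Synced, Running
svr_node2          | 192.168.178.1   |  3306 |           0 | Slave, Synced, Running
svr_node3          | 192.168.178.1   |  3306 |           0 | Slave, Synced, Running
-------------------+-----------------+-------+-------------+--------------------
EOF
This should print {"svr_node1":"Master","svr_node2":"Slave","svr_node3":"Slave"} just like the live pipeline.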

shell script to extract the name and IP address

Is there a way to use a shell script to get only the name and the net address from the result below?
Result
6cb7f14e-6466-4211-9a09-2b8e7ad92703 | name-erkoev4ja3rv | 2e3900ff36574cf9937d88223403da77 | ACTIVE | Running | net0=10.1.1.2; ing-net=10.1.1.3; net=10.1.1.4;
Expected Result
name-erkoev4ja3rv: 10.1.1.4
$ input="6cb7f14e-6466-4211-9a09-2b8e7ad92703 | name-erkoev4ja3rv | 2e3900ff36574cf9937d88223403da77 | ACTIVE | Running | net0=10.1.1.2; ing-net=10.1.1.3; net=10.1.1.4;"
$ echo "$input" | sed -E 's,^[^|]+ \| ([^ ]+).* net=([0-9.]+).*$,\1: \2,g'
name-erkoev4ja3rv: 10.1.1.4
echo "6cb7f14e-6466-4211-9a09-2b8e7ad92703 | name-erkoev4ja3rv | 2e3900ff36574cf9937d88223403da77 | ACTIVE | Running | net0=10.1.1.2; ing-net=10.1.1.3; net=10.1.1.4;" | awk -F ' ' '{print $3}{print $13}'
Does this satisfy your case?
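Another sketch, reusing the $input variable from the sed answer above, but with awk and | as the field separator; it assumes the wanted address is always in a standalone trailing " net=" entry as in the sample line:
echo "$input" | awk -F'|' '{
    name = $2; gsub(/^ +| +$/, "", name)            # trim the name field
    if (match($NF, / net=[0-9.]+/))                  # find the standalone net= entry
        printf "%s: %s\n", name, substr($NF, RSTART + 5, RLENGTH - 5)
}'
This should print name-erkoev4ja3rv: 10.1.1.4 for the sample line.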

Formatting Neo4J-Shell cypher query results?

Is there an option to export the results of a Neo4J-Shell cypher query in a comma-separated-value format, i.e. instead of
echo "START n=node(*) MATCH n-[r]->m RETURN n.value, type(r), m.value ORDER BY n.value, type(r), m.value;" | neo4j-shell -v -path neo4j-database/ > /tmp/output.csv
less /tmp/output.csv
+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| n.value | type(r) | m.value |
+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| "http://www.co-ode.org/ontologies/pizza/pizza.owl#Rosa" | "http://www.w3.org/1999/02/22-rdf-syntax-ns#type" | "http://www.w3.org/2002/07/owl#Class" |
| "http://www.co-ode.org/ontologies/pizza/pizza.owl#Rosa" | "http://www.w3.org/2000/01/rdf-schema#label" | "Rosa" |
| "http://www.co-ode.org/ontologies/pizza/pizza.owl#Rosa" | "http://www.w3.org/2000/01/rdf-schema#subClassOf" | "http://www.co-ode.org/ontologies/pizza/pizza.owl#NamedPizza" |
...
I would like to get the following output
less /tmp/output.csv
"http://www.co-ode.org/ontologies/pizza/pizza.owl#Rosa", "http://www.w3.org/1999/02/22-rdf-syntax-ns#type", "http://www.w3.org/2002/07/owl#Class"
"http://www.co-ode.org/ontologies/pizza/pizza.owl#Rosa", "http://www.w3.org/2000/01/rdf-schema#label", "Rosa"
"http://www.co-ode.org/ontologies/pizza/pizza.owl#Rosa", "http://www.w3.org/2000/01/rdf-schema#subClassOf", "http://www.co-ode.org/ontologies/pizza/pizza.owl#NamedPizza"
...
like in MySQL, where the ASCII table is omitted when the client is fed by an echo command from the shell.
You can use neo4j-JDBC to run your Cypher queries via JDBC. With that in place you can use any JDBC tool that allows you to create CSV.
Use the Groovy script from https://gist.github.com/5736410.
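Alternatively, if post-processing the shell output is acceptable, the ASCII table can be stripped down to CSV with grep and sed (a sketch that assumes the layout shown above, where data rows start with a quoted value):
grep '^| "' /tmp/output.csv | sed 's/^| //; s/ *|$//; s/ *| /, /g'
The grep keeps only the data rows, and the three sed expressions drop the leading bar, drop the trailing bar, and turn the inner column separators into commas.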
