Using sqlite3 in a for loop in a shell script gives an error - shell

I need to query a certain value from each file in a directory and write it to a file. I use this code:
#!/bin/bash
ls -lrt | grep -w "458752" | awk '{print $9}' | sort -V > list
for linename in cat list
do
/d/home/alima0152/Desktop/sqlite3 $linename "select trace_count from volume"; >> trc_count
done
rm list
But I get this error:
file is encrypted or is not a database

This code is trying to open the files `cat` and `list` as databases, which produces the error above.
To execute something and insert its output, use `...` or $(...):
for linename in $(cat list)
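A minimal corrected version of the whole loop (keeping the original paths and the trc_count output file) might look like this; note that the stray `;` before `>>` in the original also has to go, since it ends the command before the redirection is applied:
#!/bin/bash
ls -lrt | grep -w "458752" | awk '{print $9}' | sort -V > list
for linename in $(cat list)
do
    /d/home/alima0152/Desktop/sqlite3 "$linename" "select trace_count from volume" >> trc_count
done
rm list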

Related

How can I cut off the tail of each line in a text file in all subfolders with a shell script?

I have this project,
Parent
|-ChildA
|-ChildB
|- ....
|-ChildZ
and each child directory contains requirements.txt that has python package information like this,
packageA==0.1
packageB==5.9.3
...
packageZ==2.9.18.23
I want to cut off all version information so that output file will be,
packageA
packageB
...
packageZ
I am trying:
cat requirements.txt | grep "==" | cut -d "=" -f 1
but it does not iterate over all subdirectories and does not save the output. How can I do this?
Thanks!
*I am using Ubuntu 20.04
To execute the command on all the requirements.txt files, you'll need to iterate through all the child directories. You can do so using this simple script:
#!/bin/sh
for child in ./Child* ; do
    grep "==" "$child/requirements.txt" | cut -d "=" -f 1
done
Now if you wish to "save" the new version of each file, you can just redirect each command output to the file using the > operator. Using this operator will overwrite your file, so I suggest you redirect the output to a new file.
Here's the script with the redirected output:
#!/bin/sh
for child in ./Child* ; do
    grep "==" "$child/requirements.txt" | cut -d "=" -f 1 > "$child/cut-requirements.txt"
done
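If the requirements.txt files can sit deeper than one level below Parent, a find-based variant covers arbitrary depth. This is a sketch; it assumes paths contain no newlines and writes a cut-requirements.txt next to each file it finds:
#!/bin/sh
find . -name requirements.txt | while read -r f; do
    grep "==" "$f" | cut -d "=" -f 1 > "$(dirname "$f")/cut-requirements.txt"
done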

Multiple outputs in a single-line shell command with pipe only

For example:
ls -l -d */ | wc -l | awk '{print $1}' | tee /dev/tty | ls -l
This shell command prints the result of wc and of ls -l on a single line, but it uses tee.
Is it possible to use one shell command line to achieve multiple outputs without using "&&", "||", ">", ">>", "<", ";", "&", tee, or a temp file?
When you want the output of date and ls -rtl | head -1 on one line, you can use
echo "$(date): $(ls -rtl | head -1)"
Yes, you can achieve writing to multiple files with awk, which is not on the list of things you want to avoid:
echo hi | awk '{print > "a.txt"; print > "b.txt"}'
Then check a.txt and b.txt.
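For example, a quick check of the result:
$ echo hi | awk '{print > "a.txt"; print > "b.txt"}'
$ cat a.txt b.txt
hi
hi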

bash: "edmEventSize" command not found when I type bash script.sh

I have a file from which, using the command "edmEventSize", I can extract a piece of information (it is a number). But now I have 700 files on which I have to execute that command, and I am trying to do it in a bash script. I cannot even do it for just one file, since I get "edmEventSize command not found". I have already looked for more information, but since I am new to bash I cannot solve this task.
Thank you in advance.
This is my script:
#/usr/bin/env sh
for i in {1..700};
do
FILE="Py6_BstoJpsiKs0_7TeV_RECO_Run-0${i}.root"
edmEventSize... $FILE.root > salida${i}.log
done
head *.log | grep "^File" | cut -f4 > a.txt
rm *.log
As everyone would suggest, you can simplify your script like this (the "command not found" error means edmEventSize is not on your PATH, so substitute its real location for /path/to/):
#!/bin/bash
for i in {1..700}; do
    FILE="Py6_BstoJpsiKs0_7TeV_RECO_Run-0${i}.root"
    /path/to/edmEventSize "$FILE"
done | awk -F $'\t' '/^File/{print $4}' > a.txt
If your files actually are in the format Py6_BstoJpsiKs0_7TeV_RECO_Run-####.root, maybe the command you really need is:
printf -v FILE 'Py6_BstoJpsiKs0_7TeV_RECO_Run-%04d.root' "$i"
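Putting the two together (still assuming /path/to/edmEventSize is where the binary actually lives on your machine), the loop becomes:
#!/bin/bash
for i in {1..700}; do
    printf -v FILE 'Py6_BstoJpsiKs0_7TeV_RECO_Run-%04d.root' "$i"
    /path/to/edmEventSize "$FILE"
done | awk -F $'\t' '/^File/{print $4}' > a.txt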

Parsing CSV file in bash script [duplicate]

This question already has answers here:
How to extract one column of a csv file
(18 answers)
Closed 7 years ago.
I am trying to parse a CSV file which contains a typical access-control-matrix table in a shell script. My sample CSV file would be
"user","admin","security"
"user1","x",""
"user2","","x"
"user3","x","x"
I would be using this list in order to create files in their respective folders. The problem is how do I get it to store the values of column 2/3 (admin/security)? The output I'm trying to achieve is to group/sort all users that have admin/security rights and create files in their respective folders. (My idea is to probably store all admin/security users into different files and run from there.)
The environment does not allow me to use any Perl or Python programs. However, any awk or sed commands are greatly appreciated.
My desired output would be
$ cat sample.csv
"user","admin","security"
"user1","x",""
"user2","","x"
"user3","x","x"
$ cat security.csv
user2
user3
$ cat admin.csv
user1
user3
If you can use cut(1) (which you probably can if you're on any type of Unix), you can use
cut -d , -f (n) (file)
where n is the column you want.
You can use a range of columns (2-3) or a list of columns (1,3).
This will leave the quotes in, but you can strip them with a sed command or something similarly light-weight, as shown after the examples below.
$ cat sample.csv
"user","admin","security"
"user1","x",""
"user2","","x"
"user3","x","x"
$ cut -d , -f 2 sample.csv
"admin"
"x"
""
"x"
$ cut -d , -f 3 sample.csv
"security"
""
"x"
"x"
$ cut -d , -f 2-3 sample.csv
"admin","security"
"x",""
"","x"
"x","x"
$ cut -d , -f 1,3 sample.csv
"user","security"
"user1",""
"user2","x"
"user3","x"
Note that this won't work for general CSV files (it doesn't deal with escaped commas), but it should work for files in the format of the example, with simple usernames and x's.
If you just want to grab the list of usernames, then awk is pretty much the tool made for the job, and an answer below does a good job that I don't need to repeat.
But a grep solution might be quicker and more lightweight.
The grep solution:
grep '^\([^,]\+,\)\{N\}"x"'
where N is the Nth column, with the users being column 0.
$ grep '^\([^,]\+,\)\{1\}"x"' sample.csv
"user1","x",""
"user3","x","x"
$ grep '^\([^,]\+,\)\{2\}"x"' sample.csv
"user2","","x"
"user3","x","x"
from there on you can use cut to get the first column:
$ grep '^\([^,]\+,\)\{1\}"x"' sample.csv | cut -d , -f 1
"user1"
"user3"
and sed 's/"//g' to get rid of quotes:
$ grep '^\([^,]\+,\)\{1\}"x"' sample.csv | cut -d , -f 1 | sed 's/"//g'
user1
user3
$ grep '^\([^,]\+,\)\{2\}"x"' sample.csv | cut -d , -f 1 | sed 's/"//g'
user2
user3
Something to get you started (please note this will not work for CSV files with embedded commas; for those you will have to use a real CSV parser):
awk -F, '
NR>1 {                              # skip the header line
    gsub(/"/, "")                   # strip all quotes; $0 is re-split into fields
    if ($2 != "" && $3 != "")
        print $1 " has both privileges"
    print $1 > "file"               # collect every username in "file"
}' sample.csv
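To produce exactly the admin.csv and security.csv output asked for above, a minimal sketch along the same lines (again assuming the simple quoted format with no embedded commas) could be:
awk -F, '
NR>1 {
    gsub(/"/, "")                   # strip the quotes; $0 is re-split
    if ($2 == "x") print $1 > "admin.csv"
    if ($3 == "x") print $1 > "security.csv"
}' sample.csv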

How to convert from command line to bash script?

I have a chain of commands that executes all at once; however, I wish to put it inside a bash script. The problem is that I have no clue how to. My command is like so:
/usr/bin/sort -n db | /usr/bin/awk -F: '{print $1; print $2}' | db5.1_load -T -t hash newdb
How can I convert the above into a bash script?
This should normally be as simple as putting the shell command into a text file, and putting the Unix shebang on the first line of the file, which defines which program to use to run the script (in this case, /bin/bash). So this would look like:
#!/bin/bash
/usr/bin/sort -n db | /usr/bin/awk -F: '{print $1; print $2}' | db5.1_load -T -t hash newdb
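Then make the file executable and run it (assuming you saved it as, say, load_db.sh; the name is just an example):
chmod +x load_db.sh
./load_db.sh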
