Brace expansion with step increment not working in Bash

I'm trying to stack some text files as new columns. The files are named energies_Strength0.0BosonsXXX.txt, where XXX is 80, 90, 100, or 110. When I run the following command:
paste energies_Strength0.0Bosons{110..80..10}.txt | column -s $'\t' -t > energies_Strength0.0.txt
I get the following error:
paste: energies_Strength0.0Bosons{110..80..10}.txt: No such file or directory
paste: energies_Strength0.1Bosons{110..80..10}.txt: No such file or directory
paste: energies_Strength0.05Bosons{110..80..10}.txt: No such file or directory
paste: energies_Strength0.15Bosons{110..80..10}.txt: No such file or directory
This same command works just fine if the files are indexed in unit steps. That is, if XXX={80,81,82,...,109,110} and I run the command:
paste energies_Strength0.0Bosons{110..80}.txt | column -s $'\t' -t > energies_Strength0.0.txt
EDIT:
Hello there, I have tried the following lines based on your idea:
#$ -S /bin/bash
LANG=C
for ((i=110; i>=80; i-=10));
do
paste energies_Strength0.0Bosons$i.txt | column -s $'\t' -t > energies_Strength0.0.txt
done
but it only pastes the ...Bosons80.txt file (each iteration runs paste on a single file, and the > redirection overwrites the output, so only the last iteration's file survives). I need to build a structure like the following:
paste ...80.txt ...90.txt ...100.txt ...110.txt | column -s $'\t' -t > energies_Strength0.0.txt

The {110..80..10} syntax is only supported in Bash 4 and later.
On OS X, the stock Bash version is 3.2.xx.
You can use this arithmetic for loop as an alternative:
for ((i=110; i>=80; i-=10)); do echo $i.txt; done
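Building on that loop, here is a minimal sketch (assuming Bash 3.1+ for the += array append) that collects the filenames into an array so that paste receives them all in a single invocation, as the EDIT requires:
files=()
for ((i=110; i>=80; i-=10)); do
    files+=("energies_Strength0.0Bosons$i.txt")   # Bosons110.txt down to Bosons80.txt
done
paste "${files[@]}" | column -s $'\t' -t > energies_Strength0.0.txt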

With bash >= 4, you can write {110..80..10}.
With bash < 4, you could use seq 80 10 110 instead.
Example:
kent$ seq -f '%g.txt' 80 10 110
80.txt
90.txt
100.txt
110.txt
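To feed those names straight into paste in one call, here is a sketch assuming a seq that accepts a negative increment (GNU and BSD seq both do):
paste $(seq -f 'energies_Strength0.0Bosons%g.txt' 110 -10 80) | column -s $'\t' -t > energies_Strength0.0.txt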

{A..B} is well-defined Bash syntax called brace expansion, when A and B are integers. In Bash 3, {A..B..C} does not follow this pattern, so it is left unexpanded as a literal string.


Userdel bash script issue

I'm writing a bash script that deletes users that are not permitted on the system, but I'm running into a problem.
#!/bin/bash
getent passwd {1000..60000} | cut -d: -f1 > allusers.txt;
diff allowedusers.txt allusers.txt > del.user;
for user in "cat del.user";
do userdel -r $user;
done
When I run it, everything goes smoothly until the userdel command, which just prints its usage message:
Usage: userdel [options] LOGIN
Options:
-f, --force force removal of files,
even if not owned by user
-h, --help display this help message and exit
-r, --remove remove home directory and mail spool
-R, --root CHROOT_DIR directory to chroot into
-Z, --selinux-user remove any SELinux user mapping for the user
No changes are made to users after the script has run. Any help would be appreciated.
diff produces output annotated with line numbers and change markers, not a plain list of usernames:
$ cat 1
User1
User2
User3
$ cat 2
User233
User43
User234
User1
And result is:
$ diff 1 2
0a1,3
> User233
> User43
> User234
2,3d4
< User2
< User3
Instead of diff, try grep (to show the lines of the second file that are not in the first):
grep -v -F -x -f file1 file2
where:
-F, --fixed-strings
Interpret PATTERN as a list of fixed strings, separated by newlines, any of which is to be matched.
-x, --line-regexp
Select only those matches that exactly match the whole line.
-v, --invert-match
Invert the sense of matching, to select non-matching lines.
-f FILE, --file=FILE
Obtain patterns from FILE, one per line. The empty file contains zero patterns, and therefore matches nothing.
Example result is:
$ grep -v -F -x -f 1 2
User233
User43
User234
Your user variable is not iterating over the users in the file. It is iterating over the literal string "cat del.user" instead of the contents of the file del.user.
To get the contents of the file, I believe you meant to use command substitution to cat the file:
for user in $(cat del.user); do
userdel -r $user
done
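Putting the two fixes together, here is a minimal sketch of the corrected script (assuming allowedusers.txt holds one permitted login per line):
#!/bin/bash
getent passwd {1000..60000} | cut -d: -f1 > allusers.txt
# keep only the logins that are NOT in the allowed list
grep -v -F -x -f allowedusers.txt allusers.txt > del.user
# read one login per line; quoting guards against stray whitespace
while IFS= read -r user; do
    userdel -r "$user"
done < del.user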

How to read all .txt files with the Linux head command?

I can't use cat, strings, or most other commands on .txt files because they are not allowed. I need to read a file named flag.txt, but the string flag is also on the blacklist. So, is there any way to read *.txt using the head command? The head command is allowed.
blacklist=\
'flag\|<\|$\|"\|'"'"'\|'\
'cat\|tac\|*\|?\|less\|more\|pico\|nano\|edit\|hexdump\|xxd\|'\
'sed\|tail\|diff\|grep\|paste\|strings\|bas64\|sort\|uniq\|cut\|awk\|'\
'bzip\|gzip\|xz\|tar\|ar\|'\
'mv\|cp\|ln\|nl\|'\
'python\|perl\|sh\|cc\|g++\|php\|hd\|g++\|gcc\|curl\|tcp\|udp\|'\
'scp\|sftp\|wget\|nc\|netcat'
Thanks
Do you want an alternative to the command head *.txt? If so, ls/find and xargs will help, but they cannot single out .txt files; they will read every file under the directory.
ls -1 | xargs head
You can use the ` (backtick) in the following way:
head `ls -1`
Backtick has a very special meaning. Everything you type between
backticks is evaluated (executed) by the shell before the main command
So the command will do the following:
`ls -1` - expands to the list of file names
head - shows the start of each file listed by ls -1
More info about backtick can be found in this answer
If you need a glob that matches flag.txt but can use neither * nor the string flag, you can use fl[a]g.txt instead. Then, to print the entire file using head, use -c and pass it the size of the file:
head -c $(stat -c '%s' fl[a]g.txt) fl[a]g.txt
Another approach would be to use the shell to read the file:
while IFS= read -r c; do echo "$c"; done < fl[a]g.txt
You could also just use paste:
paste fl[a]g.txt
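Note, however, that paste, $, and < all appear in the blacklist quoted in the question, so the last few variants may be rejected in this particular challenge. Assuming the filter is applied to the command string, plain head with a large line count avoids every blacklisted token:
head -n 9999 fl[a]g.txt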

Argument getting truncated while printing in Unix after merging files

I am trying to combine two tab-separated text files, but one of the fields is being truncated by awk when I use the following command (please suggest something other than awk if that is easier):
pr -m -t test_v1 test.predict | awk -v OFS='\t' '{print $4,$5,$7}' > out_test8
The format of test_v1 is
478 192 46 10203853138191712
but $4 prints as only 10203853138, truncating the remaining digits. Should I use a string format?
Actually, following a suggestion, I found out that pr -m -t itself does not give the correct output. The command
pr -m -t test_v1 test.predict | cat -vte
prints
478^I192^I46^I10203853138^I^I
I used paste test_v1 test.predict instead of pr and got the right answer.
Your problem is the use of pr -m (merge) here which, as per the manual:
-m, --merge
print all files in parallel, one in each column, truncate lines, but join lines of full length with -J
You can use:
paste test_v1 test.predict
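For example, the asker's original pipeline with pr swapped out for paste (a sketch reusing the filenames from the question) becomes:
paste test_v1 test.predict | awk -v OFS='\t' '{print $4,$5,$7}' > out_test8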
Run dos2unix on your files first, you've just got control-Ms in your input file(s).

Pass Every Line of Input as stdin for Invocation of Utility

I have a file containing valid XML documents (one per line), and I want to execute a utility (xpath) on each line, one by one.
I tried xargs, but it doesn't seem to have an option to pass the line as stdin:
% cat <xmls-file> | xargs -p -t -L1 xpath -p "//Path/to/node"
Cannot open file '//Path/to/node' at /System/Library/Perl/Extras/5.12/XML/XPath.pm line 53.
I also tried parallel --spreadstdin, but that doesn't seem to work either:
% cat <xmls-file> | parallel --spreadstdin xpath -p "//Path/to/node"
junk after document element at line 2, column 0, byte 1607
If you want every line of a file to be split off and made stdin for a utility,
you could use a while read loop in the bash shell:
cat xmls-file | while read line
do ( echo "$line" > /tmp/input$$
     xpath -p "//Path/to/node" < /tmp/input$$
     rm -f /tmp/input$$
   )
done
The $$ appends the shell's process ID, creating a unique temporary file name.
I assume xmls-file contains, on each line, what you want read into $line, and that you want this as stdin for the command, not as a parameter to it.
On the other hand, your specification may be incorrect, and maybe instead you need each line
to be part of a command. In that case, delete the echo and rm lines, and change the xpath command to include $line wherever the line from the file is needed.
I've not done much XML, so the xpath invocation may need to be edited.
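A shorter variant of the same idea, as a sketch assuming bash (here-strings are not POSIX), avoids the temporary file entirely:
while IFS= read -r line; do
    xpath -p "//Path/to/node" <<< "$line"   # each line becomes xpath's stdin
done < xmls-file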
You are very close with the GNU Parallel version; only -n1 is missing:
cat <xmls-file> | parallel -n1 --spreadstdin xpath -p "//Path/to/node"

Extract part of a filename shell script

In bash I would like to extract part of many filenames and save that output to another file.
The files are formatted as coffee_{SOME NUMBERS I WANT}.freqdist.
#!/bin/sh
for f in $(find . -name 'coffee*.freqdist')
That code will find all the coffee_{SOME NUMBERS I WANT}.freqdist files. Now, how do I make an array containing just {SOME NUMBERS I WANT} and write that to a file?
I know that to write to a file one would end the line with the following:
> log.txt
I'm missing the middle part, though: how to filter the list of filenames.
You can do it natively in bash as follows:
filename=coffee_1234.freqdist
tmp=${filename#*_}   # strip the shortest prefix matching *_ , leaving 1234.freqdist
num=${tmp%.*}        # strip the shortest suffix matching .* , leaving 1234
echo "$num"
This is a pure bash solution. No external commands (like sed) are involved, so this is faster.
Append these numbers to a file using:
echo "$num" >> file
(You will need to delete/clear the file before you start your loop.)
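Putting it together with the loop from the question, a minimal sketch (assuming the files sit in the current directory) is:
> log.txt                        # clear the output file first
for f in coffee_*.freqdist; do
    tmp=${f#*_}                  # drop everything up to the underscore
    echo "${tmp%.*}" >> log.txt  # drop the extension, append the number
done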
If the intention is just to write the numbers to a file, you do not need the find command:
ls coffee*.freqdist
coffee112.freqdist coffee12.freqdist coffee234.freqdist
The following should do it; the output can then be redirected to a file:
$ ls coffee*.freqdist | sed 's/coffee\(.*\)\.freqdist/\1/'
112
12
234
The previous answers have indicated some necessary techniques. This answer organizes the pipeline in a simple way that might apply to other jobs as well. (If your sed doesn't support ‘;’ as a separator, replace ‘;’ with ‘|sed’.)
$ ls */c*; ls c*
fee/coffee_2343.freqdist
coffee_18z8.x.freqdist coffee_512.freqdist coffee_707.freqdist
$ find . -name 'coffee*.freqdist' | sed 's/.*coffee_//; s/[.].*//' > outfile
$ cat outfile
512
18z8
2343
707
