I keep getting this error when I execute the bash script (./partion.sh):
./partion.sh: line 11: $'n\np\n1\n\nw\n': command not found
followed by this output from sfdisk:
Checking that no-one is using this disk right now ... OK
Disk /dev/sdd: 3 GiB, 3221225472 bytes, 6291456 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x0ca4ca9d
Old situation:
>>>
New situation:
Here is the script:
#!/bin/bash
hdd="/dev/sdd"
for i in $hdd;do
echo
"n
p
1
w
"|sfdisk $i;done
I am still a newbie, so I really appreciate all the help I can get =)
I think it should be:
#!/bin/bash
hdd="/dev/sdd"
for i in $hdd;do
echo "n
p
1
w
"|sfdisk $i;done
A multi-line echo must start on the same line as the echo command. Otherwise, the next line is treated as a command of its own.
You have an extra newline between the command echo and the string that you are telling the shell to echo, which is what is causing your error message.
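To see the mechanism in isolation, these two "commands" reproduce the same kind of error:
echo
"n
p"
The echo prints a blank line, and the quoted block on the next line is then executed as a command name, producing bash: $'n\np': command not found.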
But you are also sending interactive commands to sfdisk, which is not an interactive tool. Your code appears to be based on the top of this article, which @GregTarsa linked in his comment, but that is sending those commands to fdisk, not sfdisk. (You are also missing another newline between the 1 and w fdisk commands.)
The sfdisk program is designed to take a text description of the desired partitioning layout and apply it to a disk. It doesn't expect single-letter commands on input; it expects its input to look like this:
# partition table of /dev/sda
unit: sectors
/dev/sda1 : start= 2048, size= 497664, Id=83, bootable
/dev/sda2 : start= 501758, size=1953021954, Id= 5
/dev/sda3 : start= 0, size= 0, Id= 0
/dev/sda4 : start= 0, size= 0, Id= 0
/dev/sda5 : start= 501760, size=1953021952, Id=8e
That format is less generic than the fdisk commands, since it requires actual block numbers, but it is much easier to use (and for someone reading your script to understand), since you're not trying to remote-control an interactive program by echoing commands to it.
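For example, a single Linux partition covering the 3 GiB disk from your output might be described like this (a sketch only: the start and size sector values are assumptions derived from the sector count above, and newer sfdisk versions spell the type field type=83 instead of Id=83):
sfdisk /dev/sdd <<EOF
unit: sectors
/dev/sdd1 : start= 2048, size= 6289408, Id=83
EOF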
If you are going to remote-control an interactive program from the shell, I recommend looking into expect instead of doing blind echos.
Also, in general, if you are trying to print multiline data, echo is probably not the way to go. With lines this short, I would reach for printf:
printf '%s\n' n p 1 "" "" w | fdisk "$i"
But the more generally useful tool is the here-document:
fdisk "$i" <<EOF
n
p
l
w
EOF
A newline terminates a command. If you want to pass a multiline argument to echo, you need to move your quote. For example:
for i in $hdd; do
echo "n
p
1
w
" | sfdisk $i
done
It would probably look better to write:
for i in $hdd; do printf 'n\np\n1\n\nw\n' | sfdisk $i; done
A third option is to use a heredoc:
for i in $hdd; do
sfdisk $i << EOF
n
p
1
w
EOF
done
Related
I need to process large binary files in segments. In concept this would be similar to split, but instead of writing each segment to a file, I need to take that segment and send it as the input of another process. I thought I could use dd to read/write the file in chunks, but the results aren't at all what I expected. For example, if I try:
dd if=some_big_file bs=1M |
while : ; do
dd bs=1M count=1 | processor
done
... the output sizes are actually 131,072 bytes and not 1,048,576.
Could anyone tell me why I'm not seeing the output blocked into 1M chunks and how I could better accomplish what I'm trying to do?
Thanks.
According to dd's manual:
bs=bytes
[...] if no data-transforming conv option is specified, input is copied to the output as soon as it's read, even if it is smaller than the block size.
So try with dd iflag=fullblock:
fullblock
Accumulate full blocks from input. The read system call may
return early if a full block is not available. When that
happens, continue calling read to fill the remainder of the
block. This flag can be used only with iflag. This flag is
useful with pipes for example as they may return short reads.
In that case, this flag is needed to ensure that a count=
argument is interpreted as a block count rather than a count
of read operations.
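Applied to the loop from the question (some_big_file and processor are the question's own placeholders), that would look like:
dd if=some_big_file bs=1M |
while : ; do
    dd bs=1M count=1 iflag=fullblock | processor
done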
First of all, you don't need the first dd; a cat file | while ... or a while ...; done < file would do the trick as well.
dd bs=1M count=1 might return less than 1M, see
When is dd suitable for copying data? (or, when are read() and write() partial)
Instead of dd count=… use head with the (non-POSIX) option -c ….
file=some_big_file
(( m = 1024 ** 2 ))
(( blocks = ($(stat -c %s "$file") + m - 1) / m ))
for ((i=0; i<blocks; ++i)); do
head -c "$m" | processor
done < "$file"
Or, POSIX-conformant but very inefficient:
(( octM = 4 * 1024 * 1024 ))
someCommand | od -v -to1 -An | tr -d \\n | tr ' ' '\\' |
while IFS= read -rN $octM block; do
printf %b "$block" | processor
done
I used crc32 to calculate checksums from strings a long time ago, but I cannot remember how I did it.
echo -n "LongString" | crc32 # no output
I found a solution [1] to calculate them with Python, but is there not a direct way to calculate that from a string?
# signed
python -c 'import binascii; print binascii.crc32("LongString")'
python -c 'import zlib; print zlib.crc32("LongString")'
# unsigned
python -c 'import binascii; print binascii.crc32("LongString") % (1<<32)'
python -c 'import zlib; print zlib.crc32("LongString") % (1<<32)'
[1] How to calculate CRC32 with Python to match online results?
I came up against this problem myself and I didn't want to go to the "hassle" of installing crc32. I came up with this, and although it's a little nasty it should work on most platforms, or most modern linux anyway ...
echo -n "LongString" | gzip -1 -c | tail -c8 | hexdump -n4 -e '"%u"'
Just to provide some technical details: gzip stores a crc32 in the last 8 bytes of its output, the -c option causes it to write to standard output, and tail extracts those last 8 bytes. (-1, as suggested by @MarkAdler, so we don't waste time actually doing the compression.)
hexdump was a little trickier and I had to futz about with it for a while before I came up with something satisfactory, but the format here seems to correctly parse the gzip crc32 as a single 32-bit number:
-n4 takes only the relevant first 4 bytes of the gzip footer.
'"%u"' is your standard fprintf format string that formats the bytes as a single unsigned 32-bit integer. Notice that there are double quotes nested within single quotes here.
If you want a hexadecimal checksum you can change the format string to '"%08x"' (or '"%08X"' for upper case hex) which will format the checksum as 8 character (0 padded) hexadecimal.
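For example, the hexadecimal variant of the same pipeline would be:
echo -n "LongString" | gzip -1 -c | tail -c8 | hexdump -n4 -e '"%08x"'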
Like I say, not the most elegant solution, and perhaps not an approach you'd want to use in a performance-sensitive scenario but an approach that might appeal given the near universality of the commands used.
The weak point here for cross-platform usability is probably the hexdump configuration, since I have seen variations on it from platform to platform and it's a bit fiddly. I'd suggest if you're using this you should try some test values and compare with the results of an online tool.
EDIT As suggested by @PedroGimeno in the comments, you can pipe the output into od instead of hexdump for identical results without the fiddly options. ... | od -t x4 -N 4 -A n for hex, ... | od -t d4 -N 4 -A n for decimal.
Or just use process substitution:
crc32 <(echo "LongString")
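Note that echo appends a trailing newline, which becomes part of the data being checksummed; if you want the CRC of the bare string, printf avoids that:
crc32 <(printf '%s' "LongString")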
Your question already has most of the answer.
echo -n 123456789 | python -c 'import sys;import zlib;print(zlib.crc32(sys.stdin.read())%(1<<32))'
correctly gives 3421780262
I prefer hex:
echo -n 123456789 | python -c 'import sys;import zlib;print("%08x"%(zlib.crc32(sys.stdin.read())%(1<<32)))'
cbf43926
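(Those one-liners are Python 2. With Python 3, zlib.crc32 already returns an unsigned value, but the input has to be bytes, so a sketch of the equivalent would be:)
echo -n 123456789 | python3 -c 'import sys, zlib; print("%08x" % zlib.crc32(sys.stdin.buffer.read()))'
which should likewise print cbf43926.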
Be aware that there are several CRC-32 algorithms:
http://reveng.sourceforge.net/crc-catalogue/all.htm#crc.cat-bits.32
On Ubuntu, at least, /usr/bin/crc32 is a short Perl script, and you can see quite clearly from its source that all it can do is open files. It has no facility to read from stdin -- it doesn't have special handling for - as a filename, or a -c parameter or anything like that.
So your easiest approach is to live with it, and make a temporary file.
tmpfile=$(mktemp)
echo -n "LongString" > "$tmpfile"
crc32 "$tmpfile"
rm -f "$tmpfile"
If you really don't want to write a file (e.g. it's more data than your filesystem can take -- unlikely if it's really a "long string", but for the sake of argument...) you could use a named pipe. To a simple non-random-access reader this is indistinguishable from a file:
fifo=$(mktemp -u)
mkfifo "$fifo"
echo -n "LongString" > "$fifo" &
crc32 "$fifo"
rm -f "$fifo"
Note the & to background the process which writes to fifo, because it will block until the next command reads it.
To be more fastidious about temporary file creation, see: https://unix.stackexchange.com/questions/181937/how-create-a-temporary-file-in-shell-script
Alternatively, use what's in the script as an example from which to write your own Perl one-liner (the presence of crc32 on your system indicates that Perl and the necessary module are installed), or use the Python one-liner you've already found.
Here is a pure Bash implementation:
#!/usr/bin/env bash
declare -i -a CRC32_LOOKUP_TABLE
__generate_crc_lookup_table() {
  local -i -r LSB_CRC32_POLY=0xEDB88320 # The CRC32 polynomial, LSB order
  local -i index byte lsb
  for index in {0..255}; do
    ((byte = 255 - index))
    for _ in {0..7}; do # 8-bit lsb shift
      ((lsb = byte & 0x01, byte = ((byte >> 1) & 0x7FFFFFFF) ^ (lsb == 0 ? LSB_CRC32_POLY : 0)))
    done
    ((CRC32_LOOKUP_TABLE[index] = byte))
  done
}
__generate_crc_lookup_table
typeset -r CRC32_LOOKUP_TABLE
crc32_string() {
  [[ ${#} -eq 1 ]] || return
  local -i i byte crc=0xFFFFFFFF index
  for ((i = 0; i < ${#1}; i++)); do
    byte=$(printf '%d' "'${1:i:1}") # Get byte value of character at i
    ((index = (crc ^ byte) & 0xFF, crc = (CRC32_LOOKUP_TABLE[index] ^ (crc >> 8)) & 0xFFFFFFFF))
  done
  echo $((crc ^ 0xFFFFFFFF))
}
printf 'The CRC32 of: %s\nis: %08x\n' "${1}" "$(crc32_string "${1}")"
# crc32_string "The quick brown fox jumps over the lazy dog"
# yields 414fa339
Testing:
bash ./crc32.sh "The quick brown fox jumps over the lazy dog"
The CRC32 of: The quick brown fox jumps over the lazy dog
is: 414fa339
I use cksum and convert to hex using the shell builtin printf:
$ echo -n "LongString" | cksum | cut -d\ -f1 | xargs echo printf '%0X\\n' | sh
5751BDB2
The cksum command first appeared on 4.4BSD UNIX and should be present in all modern systems.
You can try to use rhash.
http://rhash.sourceforge.net/
https://github.com/rhash/RHash
http://manpages.ubuntu.com/manpages/bionic/man1/rhash.1.html
Testing:
## install 'rhash'...
$ sudo apt-get install rhash
## test CRC32...
$ echo -n 123456789 | rhash --simple -
cbf43926 (stdin)
I am using this command to bring back the read MB/s:
hdparm -t /dev/sda | awk '/seconds/{print $11}'
From what I was reading, it is a good idea to test three times, add those values up, and then divide by 3 for your average.
Sometimes I will have 3 to 16 drives, so I would like the script to ask how many drives I have installed and then perform the hdparm test on each drive. I was wondering if there is a simple way to step from sda all the way up to sdb, sdc, sdd, etc. without typing that command so many times.
Thank you
Bash makes it easy to enumerate all the drives:
$ echo /dev/sd{a..h}
/dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh
Then you said you wanted to average the timing output, so let's define a function to do that:
perform_timing() {
for i in {1..3}; do hdparm -t "$1"; done |
awk '/seconds/ { total += $11; count++ } END { print (total / count) }'
}
Then you can run it on all the drives:
for drive in /dev/sd{a..h}; do
printf '%s: %s\n' "$drive" "$(perform_timing "$drive")"
done
Breaking It Down
The perform_timing function does two things: 1) runs hdparm three times, then 2) averages the output. You can see how the first part works by running it manually:
# for i in {1..3}; do hdparm -t "/dev/sdc"; done
/dev/sdc:
Timing buffered disk reads: 1536 MB in 3.00 seconds = 511.55 MB/sec
/dev/sdc:
Timing buffered disk reads: 1536 MB in 3.00 seconds = 511.97 MB/sec
/dev/sdc:
Timing buffered disk reads: 1538 MB in 3.00 seconds = 512.24 MB/sec
The second part combines your awk code with logic to average all the lines, instead of printing them individually. You can see how the averaging works with a simple awk example:
$ printf '1\n4\n5\n'
1
4
5
$ printf '1\n4\n5\n' | awk '{ total += $1; count++ } END { print (total / count) }'
3.33333
We wrap all that logic in a function called perform_timing as a good programming practice. That lets us call it as if it were any other command:
# perform_timing /dev/sdc
512.303
Finally, instead of writing:
perform_timing /dev/sda
perform_timing /dev/sdb
...
We wrap it all in a loop, which this simplified example should help explain:
# for drive in /dev/sd{a..c}; do printf '%s\n' "$drive"; done
/dev/sda
/dev/sdb
/dev/sdc
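And if you want the script to first ask how many drives to test, as mentioned in the question, here is one possible sketch (assuming the drives are lettered consecutively starting at sda):
read -rp "How many drives? " count
drives=( /dev/sd{a..z} )
for drive in "${drives[@]:0:count}"; do
    printf '%s: %s\n' "$drive" "$(perform_timing "$drive")"
done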
Just use it without any loops:
# hdparm -i /dev/sd{a..d}
I have a text file with the following contents:
QAM Mode : QAM-16
QAM Annex : Annex A
Frequency : 0 Hz
IF Frequency : 0 Hz
Fast Acquisition : 0
Receiver Mode : cable
QAM Lock : 1
FEC Lock : 1
Output PLL Lock : 0
Spectrum Inverted : 0
Symbol Rate : -1
Symbol Rate Error : 0
IF AGC Level (in units of 1/10 percent) : 260
Tuner AGC Level (in units of 1/10 percent) : 1000
Internal AGC Level (in units of 1/10 percent) : 0
SNR Estimate (in 1/100 dB) : 2260
**FEC Corrected Block Count (Since last tune or reset) : 36472114
FEC Uncorrected Block Count (Since last tune or reset) : 0
FEC Clean Block Count (Since last tune or reset) : 0**
Cumulative Reacquisition Count : 0
Uncorrected Error Bits Output From Viterbi (Since last tune or reset) : 0
Total Number Of Bits Output from Viterbi (Since last tune or reset) : 0
viterbi bit error rate (in 1/2147483648 th units) : 0
Carrier Frequency Offset (in 1/1000 Hz) : -2668000
Carrier Phase Offset (in 1/1000 Hz) : 0
**Good Block Count (Reset on read) : -91366870**
**BER Raw Count (Reset on read) : 0**
DS Channel Power (in 10's of dBmV units ) : -760
Channel Main Tap Coefficient : 11846
Channel Equalizer Gain Value in dBm : 9
**Post Rs BER : 2147483648
Post Rs BER Elapsed Time (in Seconds) : 0**
Interleave Depth : 1
I need to parse the numbers from the bolded lines using a bash script, but I haven't been able to do this with the command set I have available. This is my first time ever using Bash scripts, and the searches I've found that could help used some grep, sed, and cut options that weren't available. The options I have are listed below:
grep
Usage: grep [-ihHnqvs] PATTERN [FILEs...]
Search for PATTERN in each FILE or standard input.
Options:
-H prefix output lines with filename where match was found
-h suppress the prefixing filename on output
-i ignore case distinctions
-l list names of files that match
-n print line number with output lines
-q be quiet. Returns 0 if result was found, 1 otherwise
-v select non-matching lines
-s suppress file open/read error messages
sed
BusyBox v1.00-rc3 (00:00) multi-call binary
Usage: sed [-efinr] pattern [files...]
Options:
-e script add the script to the commands to be executed
-f scriptfile add script-file contents to the
commands to be executed
-i edit files in-place
-n suppress automatic printing of pattern space
-r use extended regular expression syntax
If no -e or -f is given, the first non-option argument is taken as the sed
script to interpret. All remaining arguments are names of input files; if no
input files are specified, then the standard input is read. Source files
will not be modified unless -i option is given.
awk
BusyBox v1.00-rc3 (00:00) multi-call binary
Usage: awk [OPTION]... [program-text] [FILE ...]
Options:
-v var=val assign value 'val' to variable 'var'
-F sep use 'sep' as field separator
-f progname read program source from file 'progname'
Can someone please help me with this? Thanks!
AWK can do that for you:
awk '/^(FEC.*Block|Good Block|BER|Post)/{print $NF}' textfile
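Run against the file shown in the question, that prints the last field of each of the bolded lines, i.e. something like:
36472114
0
0
-91366870
0
2147483648
0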
grep -e "^FEC " -e "^Good Block" -e "BER" file.txt | awk '{print $NF}'
grep: Match lines that start with FEC, start with Good Block, or contain BER
awk: Print the last space-separated field of each line
If you have the right grep, you can do this with grep alone, using a regex look-behind:
$ /bin/grep -Po "(?<=Post Rs BER : )(.+)" data.txt
2147483648
$
I got the inspiration for this here
In addition, you can do this with a pure bash one-liner, no awk, sed, grep, or other helpers:
$ { while read line; do if [[ $line =~ Post\ Rs\ BER\ :\ (.*)$ ]]; then echo ${BASH_REMATCH[1]}; fi; done; } < data.txt
2147483648
$
or
$ cat data.txt | { while read line; do if [[ $line =~ Post\ Rs\ BER\ :\ (.*)$ ]]; then echo ${BASH_REMATCH[1]}; fi; done; }
2147483648
$
Assume that I have programs P0, P1, ...P(n-1) for some n > 0. How can I easily redirect the output of program Pi to program P(i+1 mod n) for all i (0 <= i < n)?
For example, let's say I have a program square, which repeatedly reads a number and then prints the square of that number, and a program calc, which sometimes prints a number after which it expects to be able to read the square of it. How do I connect these programs such that whenever calc prints a number, square squares it and returns it to calc?
Edit: I should probably clarify what I mean by "easily". The named pipe/fifo solution is one that indeed works (and I have used in the past), but it actually requires quite a bit of work to do properly if you compare it with using a bash pipe. (You need to come up with a filename that doesn't exist yet, make a pipe with that name, run the "pipe loop", and clean up the named pipe afterwards.) Imagine you could no longer write prog1 | prog2 and would always have to use named pipes to connect programs.
I'm looking for something that is almost as easy as writing a "normal" pipe. For instance something like { prog1 | prog2 } >&0 would be great.
After spending quite some time yesterday trying to redirect stdout to stdin, I ended up with the following method. It isn't really nice, but I think I prefer it over the named pipe/fifo solution.
read | { P0 | ... | P(n-1); } >/dev/fd/0
The { ... } >/dev/fd/0 is to redirect stdout to stdin for the pipe sequence as a whole (i.e. it redirects the output of P(n-1) to the input of P0). Using >&0 or something similar does not work; this is probably because bash assumes 0 is read-only while it doesn't mind writing to /dev/fd/0.
The initial read-pipe is necessary because without it both the input and output file descriptor are the same pts device (at least on my system) and the redirect has no effect. (The pts device doesn't work as a pipe; writing to it puts things on your screen.) By making the input of the { ... } a normal pipe, the redirect has the desired effect.
To illustrate with my calc/square example:
function calc() {
    # calculate sum of squares of numbers 0,..,10
    sum=0
    for ((i=0; i<10; i++)); do
        echo $i            # "request" the square of i
        read ii            # read the square of i
        echo "got $ii" >&2 # debug message
        let sum=$sum+$ii
    done
    echo "sum $sum" >&2    # output result to stderr
}
function square() {
    # square numbers
    read j                 # receive first "request"
    while [ "$j" != "" ]; do
        let jj=$j*$j
        echo "square($j) = $jj" >&2 # debug message
        echo $jj           # send square
        read j             # receive next "request"
    done
}
read | { calc | square; } >/dev/fd/0
Running the above code gives the following output:
square(0) = 0
got 0
square(1) = 1
got 1
square(2) = 4
got 4
square(3) = 9
got 9
square(4) = 16
got 16
square(5) = 25
got 25
square(6) = 36
got 36
square(7) = 49
got 49
square(8) = 64
got 64
square(9) = 81
got 81
sum 285
Of course, this method is quite a bit of a hack. Especially the read part has an undesired side effect: termination of the "real" pipe loop does not lead to termination of the whole. I couldn't think of anything better than read, as it seems that you can only determine that the pipe loop has terminated by trying to write something to it.
A named pipe might do it:
$ mkfifo outside
$ <outside calc | square >outside &
$ echo "1" >outside ## Trigger the loop to start
This is a very interesting question. I (vaguely) remember a very similar assignment in college 17 years ago. We had to create an array of pipes, where our code would get filehandles for the input/output of each pipe. Then the code would fork and close the unused filehandles.
I'm thinking you could do something similar with named pipes in bash. Use mknod or mkfifo to create a set of pipes with unique names you can reference, then fork your program.
My solution uses pipexec (most of the function implementation comes from your answer):
square.sh
function square() {
    # square numbers
    read j                 # receive first "request"
    while [ "$j" != "" ]; do
        let jj=$j*$j
        echo "square($j) = $jj" >&2 # debug message
        echo $jj           # send square
        read j             # receive next "request"
    done
}
square $@
calc.sh
function calc() {
    # calculate sum of squares of numbers 0,..,10
    sum=0
    for ((i=0; i<10; i++)); do
        echo $i            # "request" the square of i
        read ii            # read the square of i
        echo "got $ii" >&2 # debug message
        let sum=$sum+$ii
    done
    echo "sum $sum" >&2    # output result to stderr
}
calc $@
The command
pipexec [ CALC /bin/bash calc.sh ] [ SQUARE /bin/bash square.sh ] \
"{CALC:1>SQUARE:0}" "{SQUARE:1>CALC:0}"
The output (same as in your answer)
square(0) = 0
got 0
square(1) = 1
got 1
square(2) = 4
got 4
square(3) = 9
got 9
square(4) = 16
got 16
square(5) = 25
got 25
square(6) = 36
got 36
square(7) = 49
got 49
square(8) = 64
got 64
square(9) = 81
got 81
sum 285
Comment: pipexec was designed to start processes and build arbitrary pipes in between. Because bash functions cannot be handled as processes, the functions need to live in separate files and be run by a separate bash.
Named pipes.
Create a series of fifos using mkfifo, e.g. fifo0, fifo1.
Then attach each process in turn to the pipes you want:
processn < fifo(n-1) > fifon
I doubt sh/bash can do it.
ZSH would be a better bet, with its MULTIOS and coproc features.
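For what it's worth, bash 4.0 and later also have a coproc builtin that can wire two commands together in both directions. A minimal sketch, assuming the calc and square functions from the answer above are defined in the current shell:
coproc SQUARE { square; }
calc <&"${SQUARE[0]}" >&"${SQUARE[1]}"
eval "exec ${SQUARE[1]}>&-" # close square's stdin so the coprocess can finish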
A command stack can be composed as a string from an array of arbitrary commands
and evaluated with eval. The following example gives the result 65536.
function square ()
{
    read n
    echo $((n*n))
} # ---------- end of function square ----------
declare -a commands=( 'echo 4' 'square' 'square' 'square' )
#-------------------------------------------------------------------------------
# build the command stack using pipes
#-------------------------------------------------------------------------------
declare stack=${commands[0]}
for (( COUNTER=1; COUNTER<${#commands[@]}; COUNTER++ )); do
    stack="${stack} | ${commands[${COUNTER}]}"
done
#-------------------------------------------------------------------------------
# run the command stack
#-------------------------------------------------------------------------------
eval "$stack"