Debugging why SPI Master is Reading Arbitrary Values - linux-kernel

I have an SPI bus between a MAX V device and an AM335x processor.
The MAX V device has its SPI set up to repeatedly send a STD_LOGIC_VECTOR defined as x"0100".
This seems to work fine. The output on a scope is repeatedly the same value.
In Linux, however, I seem to get either shifted data or random data. I am using spi-tools from https://github.com/cpb-/spi-tools
When these tools are used, I get the following:
# spi-config -d /dev/spidev0.0 -m 1 -s 10000000
# spi-pipe -d /dev/spidev0.0 -b 2 -n 1 < /dev/urandom | hexdump
0000000 0202
0000002
# spi-pipe -d /dev/spidev0.0 -b 2 -n 1 < /dev/urandom | hexdump
0000000 0a0a
0000002
# spi-pipe -d /dev/spidev0.0 -b 2 -n 1 < /dev/urandom | hexdump
0000000 2a2a
0000002
# spi-pipe -d /dev/spidev0.0 -b 2 -n 1 < /dev/urandom | hexdump
0000000 aaaa
0000002
# spi-pipe -d /dev/spidev0.0 -b 2 -n 1 < /dev/urandom | hexdump
0000000 aaaa
0000002
You can see how the device is configured there. On the scope, the MISO pin is clearly outputting "00000010 00000000" for every 16 clock cycles on SCLK. What is happening here? How can I repeatedly get the correct value from the device?
For clarity, here are the relevant parts of the device tree and the kernel configuration.
Kernel
CONFIG_SPI=y
CONFIG_SPI_MASTER=y
CONFIG_SPI_GPIO=y
CONFIG_SPI_BITBANG=y
CONFIG_SPI_OMAP24XX=y
CONFIG_SPI_TI_QSPI=y
CONFIG_SPI_SPIDEV=y
CONFIG_REGMAP_SPI=y
CONFIG_MTD_SPI_NOR=y
CONFIG_SPI_CADENCE_QUADSPI=y
Device Tree
&spi1 {
    /* spi1 bus is connected to the CPLD only on CS0 */
    status = "okay";
    pinctrl-names = "default";
    pinctrl-0 = <&spi1_pins>;
    ti,pindir-d0-out-d1-in;

    cpld_spidev: cpld_spidev@0 {
        status = "okay";
        compatible = "linux,spidev";
        spi-max-frequency = <1000000>;
        reg = <0>;
    };
};
Also here is a screengrab of the waveforms produced.
Really, the end goal is an app that reports the version stored in the STD_LOGIC_VECTOR on the MAX V device, so x"0100" is intended to mean version 1.00.

Use the uboot_overlay in /boot/uEnv.txt called BB-SPIDEV0-00A0.dtbo.
If you need any more info, please ask. Oh, and there is a fellow, Dr. Molloy, who produced a book a while back; chp08/spi/ is the location of the file you will need to test the SPI device, and the command is simply spidev_test.
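For reference, on the BeagleBone Debian images this usually means pointing one of the uboot_overlay_addrN slots in /boot/uEnv.txt at the overlay (a sketch; the slot number and the firmware path are assumptions and can differ between image releases):

uboot_overlay_addr4=/lib/firmware/BB-SPIDEV0-00A0.dtbo

After a reboot, spidev_test can be pointed at the device; with the mainline tool's flags that would look something like:

./spidev_test -D /dev/spidev0.0 -s 1000000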

Related

ELF Go binaries default byte alignment

I empirically see that Go ELF binaries use 16-byte alignment. For example:
$ wget https://github.com/gardener/gardenctl/releases/download/v0.24.2/gardenctl-linux-amd64
$ readelf -W -s gardenctl-linux-amd64 | grep -E "FUNC" | wc -l
44746
$ readelf -W -s gardenctl-linux-amd64 | grep -E "0[ ]+[0-9]* FUNC" | wc -l
44744
so the vast majority of function addresses end in a 0 hex digit, i.e. are 16-byte aligned. Is it always like that in Go binaries?
This depends on the platform. If you have a Go source repo checked out:
% cd go/src/cmd/link/internal
% grep "funcAlign =" */*.go
amd64/l.go: funcAlign = 32
arm/l.go: funcAlign = 4 // single-instruction alignment
arm64/l.go: funcAlign = 16
mips64/l.go: funcAlign = 8
ppc64/l.go: funcAlign = 16
riscv64/l.go: funcAlign = 8
s390x/l.go: funcAlign = 16
x86/l.go: funcAlign = 16
The alignment for amd64 may go back down to 16 in the future; it has been 32 for a while because of https://github.com/golang/go/issues/35881
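If you want to check a given binary empirically, one rough approach (a sketch; strtonum requires GNU awk) is to bucket every FUNC symbol address by its remainder modulo the expected alignment:

readelf -W -s gardenctl-linux-amd64 \
    | awk '$4 == "FUNC" { v = strtonum("0x" $2); if (v) print v % 16 }' \
    | sort -n | uniq -c

If essentially every address lands in the 0 bucket, the functions are 16-byte aligned; repeating with % 32 tells the two amd64 cases apart.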

Why do these hash-based duplicate filters work differently from sort | uniq and sort -u?

I came across a shell script that eliminates duplicates quickly without sorting the lines. While investigating, I also found that a similar method is suggested with awk and perl.
However, I noticed that these methods work a bit differently from the usual sort -u and sort | uniq.
$ dd if=/dev/urandom of=sortable bs=1M count=100
100+0 records in
100+0 records out
104857600 bytes (105 MB, 100 MiB) copied, 0.10448 s, 1.0 GB/s
$ wc -l sortable
409650 sortable
$ cat sortable | sort -u | wc -l
404983
$ cat sortable | sort | uniq | wc -l
404983
$ cat sortable | ./sortuniq | wc -l
406650
$ cat sortable | awk '!x[$0]++' | wc -l
406651
$ cat sortable | perl -ne 'print if ! $x{$_}++' | wc -l
406650
Why the differences? I tried setting up a small test file with empty lines, lines of 0, and lines padded with whitespace, and I found that all the methods behaved the same on it.
Using cmp I was able to find that the awk line count is greater simply because it added a newline at the end, but I could not get my head around the other cases. I sorted the hash-deduplicated file and found that, in my case, the first difference was at line 12. Printing some lines from both files (awk '!x[$0]++;' file | sort and sort -u file) shows the lines have shifted, with lines 12, 13 and 14 inserted between sort's lines 11 and 12:
$ sed -n '11p' sorted | hexdump
0000000 9300 000a
0000003
$ sed -n '12p' sorted | hexdump
0000000 b700 ef00 d062 d0b4 6478 1de1 a9e8 c6fd
0000010 4e05 e385 e678 7cbb 5f46 ce85 3d26 1875
0000020 56e4 baf1 b34a 0006 1dda 06cc efd6 9627
0000030 edbe 3bf7 a2c7 8b3f 1fe0 790e 9b1b 237e
0000040 42ac 3f5b 827b 535d 2e59 4001 3ce1 bd7d
0000050 7712 21c9 e025 751d c591 84ce b809 4038
0000060 372a 07d4 220f 59cc 3e2f 7ac3 88bb 23b1
0000070 fe37 1a36 31f8 fde6 7368 bd89 631b f3a9
0000080 8467 b413 9a28 000a
0000087
$ sed -n '13p' sorted | hexdump
0000000 f800 cb00 f473 583d e2c5 2a8c 7c81 cbcd
0000010 3af1 9cf7 4992 2aab 90ed b018 5f4f b03b
0000020 40f1 8731 17fa d74a ba7e db12 6f8d 5a37
0000030 dd97 837e 4eb2 05d4 7d28 722e 8e49 7ffa
0000040 176d c54b a0a0 a63a 26a2 db5e 4ea8 5f44
0000050 33fe 26a7 40bb 98b0 6660 62bd b56a 949e
0000060 eaa7 9dd1 9427 5fab 7840 f509 4fbf 06ea
0000070 d389 15c8 fbf0 3ea6 4a53 909f 1c75 2acb
0000080 7074 d41e 40f2 14b7 b8aa 04e2 00bf 7b6e
0000090 ff3f 4822 c3e6 b3e9 1708 6a93 55fd a5f6
00000a0 ad3b 9b7d 7c2e faa1 4d25 2f32 c434 4a8c
00000b0 a42e 6d8c 138f 030b accd 086b baa2 6f92
00000c0 6256 e959 b19a c371 f7bf 7c63 773c 9e4d
00000d0 bb2b f555 bc05 9454 29a6 f221 e088 c259
00000e0 9bed ab59 0591 2d30 9162 1dd1 91ea c928
00000f0 cb8f 60bc 6f25 62b2 a424 2f97 0058 0d3e
0000100 95f2 7cf4 d53b 6208 6cba c013 3505 9704
0000110 5a1f f63f 9bea 7d45 2dd6 8084 d078 d8b1
0000120 5fdc fb57 8cf8 6ae8 b791 23bd f2f5 70eb
0000130 9094 407a 228d 5818 a0fa d480 53f7 eb8e
0000140 f07b b288 e39b 60c7 a581 8481 97da 68d9
0000150 7240 2fb1 6ec6 fc57 78cd 4988 90a2 52d3
0000160 2fb6 3efd c140 d890 c2ff 2c0c ad02 47db
0000170 106e da82 dd0f 3f7f 49c1 2d2c dc0f 4a1e
0000180 01d3 95de 000a
0000185
$ sed -n '14p' sorted | hexdump
0000000 c400 0ac8
0000004
$ sed -n '11p' awksorted | hexdump
0000000 9300 000a
0000003
$ sed -n '12p' awksorted | hexdump
0000000 a100 000a
0000003
$ sed -n '13p' awksorted | hexdump
0000000 ff00 000a
0000003
$ sed -n '14p' awksorted | hexdump
0000000 d200 000a
0000003
$ sed -n '15p' awksorted | hexdump
0000000 b700 ef00 d062 d0b4 6478 1de1 a9e8 c6fd
0000010 4e05 e385 e678 7cbb 5f46 ce85 3d26 1875
0000020 56e4 baf1 b34a 0006 1dda 06cc efd6 9627
0000030 edbe 3bf7 a2c7 8b3f 1fe0 790e 9b1b 237e
0000040 42ac 3f5b 827b 535d 2e59 4001 3ce1 bd7d
0000050 7712 21c9 e025 751d c591 84ce b809 4038
0000060 372a 07d4 220f 59cc 3e2f 7ac3 88bb 23b1
0000070 fe37 1a36 31f8 fde6 7368 bd89 631b f3a9
0000080 8467 b413 9a28 000a
0000087
$ sed -n '16p' awksorted | hexdump
0000000 f800 cb00 f473 583d e2c5 2a8c 7c81 cbcd
0000010 3af1 9cf7 4992 2aab 90ed b018 5f4f b03b
0000020 40f1 8731 17fa d74a ba7e db12 6f8d 5a37
0000030 dd97 837e 4eb2 05d4 7d28 722e 8e49 7ffa
0000040 176d c54b a0a0 a63a 26a2 db5e 4ea8 5f44
0000050 33fe 26a7 40bb 98b0 6660 62bd b56a 949e
0000060 eaa7 9dd1 9427 5fab 7840 f509 4fbf 06ea
0000070 d389 15c8 fbf0 3ea6 4a53 909f 1c75 2acb
0000080 7074 d41e 40f2 14b7 b8aa 04e2 00bf 7b6e
0000090 ff3f 4822 c3e6 b3e9 1708 6a93 55fd a5f6
00000a0 ad3b 9b7d 7c2e faa1 4d25 2f32 c434 4a8c
00000b0 a42e 6d8c 138f 030b accd 086b baa2 6f92
00000c0 6256 e959 b19a c371 f7bf 7c63 773c 9e4d
00000d0 bb2b f555 bc05 9454 29a6 f221 e088 c259
00000e0 9bed ab59 0591 2d30 9162 1dd1 91ea c928
00000f0 cb8f 60bc 6f25 62b2 a424 2f97 0058 0d3e
0000100 95f2 7cf4 d53b 6208 6cba c013 3505 9704
0000110 5a1f f63f 9bea 7d45 2dd6 8084 d078 d8b1
0000120 5fdc fb57 8cf8 6ae8 b791 23bd f2f5 70eb
0000130 9094 407a 228d 5818 a0fa d480 53f7 eb8e
0000140 f07b b288 e39b 60c7 a581 8481 97da 68d9
0000150 7240 2fb1 6ec6 fc57 78cd 4988 90a2 52d3
0000160 2fb6 3efd c140 d890 c2ff 2c0c ad02 47db
0000170 106e da82 dd0f 3f7f 49c1 2d2c dc0f 4a1e
0000180 01d3 95de 000a
0000185
$ sed -n '17p' awksorted | hexdump
0000000 c400 0ac8
0000004
sortuniq
Here is the sortuniq code. I found it in this shell script collection (that's why I refer to it as a "shell script").
#!/usr/bin/php
<?php
// Count every input line in an array keyed by the full line
// (PHP arrays preserve insertion order, so first occurrences keep their position).
$in = fopen('php://stdin', "r");
$d = array();
while ($z = fgets($in))
    $d[$z]++;
// With a "c"/"-c" argument, prefix each unique line with its count, like uniq -c.
if ($argc > 1 and ($argv[1] == 'c' or $argv[1] == '-c'))
    foreach ($d as $a => $b)
        echo("$b $a");
else
    foreach ($d as $a => $b)
        echo("$a");
Just be careful, this is dangerously fast. I was planning to ask about the speed itself before I found this issue during performance tests.
The uniq of coreutils does not actually check whether the lines are byte-for-byte identical, but whether they have a different sort order in the current locale.
We can check that, with a collation that is not "hard" (i.e. with the C or POSIX locale), we get the same results as with awk. This effectively disables collation and just compares the bytes:
$ ( LC_COLLATE=POSIX sort -u sortable | wc -l)
406651
$ ( LC_COLLATE=C sort -u sortable | wc -l)
406651
Example
Knowing the reasons, it's simple to reproduce this behaviour with a valid text file. Take japanese, arabic or whatever characters and use a locale where these characters have no defined sort order.
$ echo 'あ' > utf8file
$ echo 'い' >> utf8file
$ file utf8file
utf8file: UTF-8 Unicode text
$ sort -u utf8file
あ
$ (LC_COLLATE=en_US.UTF-8 sort -u utf8file)
あ
$ (LC_COLLATE=C sort -u utf8file)
あ
い
$ (LC_COLLATE=POSIX sort -u utf8file)
あ
い
$ (LC_COLLATE=C.UTF-8 sort -u utf8file)
あ
い
The code
We can trace this starting with the different function in coreutils. It uses xmemcoll if it determines that the locale is "hard" (not C or POSIX). xmemcoll seems to be a memcoll wrapper that adds error reporting. The memcoll source explains that bytewise-equal strings are handled as a fast path, and bytewise-different strings are compared using strcoll:
/* strcoll is slow on many platforms, so check for the common case
   where the arguments are bytewise equal.  Otherwise, walk through
   the buffers using strcoll on each substring.  */
if (s1len == s2len && memcmp (s1, s2, s1len) == 0)
  {
    errno = 0;
    diff = 0;
  }
else
  {
    // ...
  }
Interestingly, a \0 byte is not a problem for memcoll. While strcoll will stop at \0, the memcoll function works around this by dropping bytes from the start of the strings one by one - see lines 39 to 55.

Inconsistencies when packing hex string

I am seeing some inconsistencies when using hexdump and xxd. When I run the following command:
echo -n "a42d9dfe8f93515d0d5f608a576044ce4c61e61e" \
| sed 's/\(..\)/\1\n/g' \
| awk '/^[a-fA-F0-9]{2}$/ { printf("%c",strtonum("0x" $0)); }' \
| xxd
it returns the following results:
00000000: c2a4 2dc2 9dc3 bec2 8fc2 9351 5d0d 5f60 ..-........Q]._`
00000010: c28a 5760 44c3 8e4c 61c3 a61e ..W`D..La...
Note the "c2" characters. This also happens with I run xxd -p
When I run the same command except with hexdump -C:
echo -n "a42d9dfe8f93515d0d5f608a576044ce4c61e61e" \
| sed 's/\(..\)/\1\n/g' \
| awk '/^[a-fA-F0-9]{2}$/ { printf("%c",strtonum("0x" $0)); }' \
| hexdump -C
I get the same results (at least as far as the "c2" characters are concerned):
00000000 c2 a4 2d c2 9d c3 be c2 8f c2 93 51 5d 0d 5f 60 |..-........Q]._`|
00000010 c2 8a 57 60 44 c3 8e 4c 61 c3 a6 1e |..W`D..La...|
However, when I run hexdump with no arguments:
echo -n "a42d9dfe8f93515d0d5f608a576044ce4c61e61e" \
| sed 's/\(..\)/\1\n/g' \
| awk '/^[a-fA-F0-9]{2}$/ { printf("%c",strtonum("0x" $0)); }' \
| hexdump
I get the following [correct] results:
0000000 a4c2 c22d c39d c2be c28f 5193 0d5d 605f
0000010 8ac2 6057 c344 4c8e c361 1ea6
For the purpose of this script, I'd rather use xxd as opposed to hexdump. Thoughts?
The problem that you observe is due to UTF-8 encoding and little-endianness.
First, note that when you try to print any Unicode character in AWK, like 0xA4 (CURRENCY SIGN), it actually produces two bytes of output, like the two bytes 0xC2 0xA4 that you see in your output:
$ echo 1 | awk 'BEGIN { printf("%c", 0xA4) }' | hexdump -C
Output:
00000000 c2 a4 |..|
00000002
This holds for any character bigger than 0x7F and it is due to UTF-8 encoding, which is probably the one set in your locale. (Note: some AWK implementations will have different behavior for the above code.)
Secondly, when you use hexdump without the -C argument, it displays each pair of bytes in swapped order due to the little-endianness of your machine: each pair of bytes is treated as a single 16-bit word, instead of each byte being treated separately as xxd and hexdump -C do. So the xxd output that you get is actually the correct byte-for-byte representation of the input.
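A minimal demonstration of that word-swapping, using two known bytes:

$ printf '\x01\x02' | hexdump
0000000 0201
0000002
$ printf '\x01\x02' | hexdump -C
00000000  01 02                                             |..|
00000002

On a little-endian machine the bytes 0x01 0x02, read as one 16-bit word, are 0x0201, which is exactly what plain hexdump prints.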
Thirdly, if you want to produce the precise byte string that is encoded in the hexadecimal string that you are feeding to sed, you can use this Python solution:
echo -n "a42d9dfe8f93515d0d5f608a576044ce4c61e61e" | sed 's/\(..\)/0x\1,/g' | python3 -c "import sys;[open('tmp','wb').write(bytearray(eval('[' + line + ']'))) for line in sys.stdin]" && cat tmp | xxd
Output:
00000000: a42d 9dfe 8f93 515d 0d5f 608a 5760 44ce .-....Q]._`.W`D.
00000010: 4c61 e61e La..
Why not use xxd with -r and -p?
echo a42d9dfe8f93515d0d5f608a576044ce4c61e61e | xxd -r -p | xxd
output
00000000: a42d 9dfe 8f93 515d 0d5f 608a 5760 44ce .-....Q]._`.W`D.
00000010: 4c61 e61e La..

How to ensure the selection of an open port in shell

So I have a script that creates a tunnel. To do that, it uses random ports.
This is the logic for the random port generation:
RPORT=1
while [ $RPORT -lt 2000 ]
do
    RPORT=$[($RANDOM % 3000) + 1]
done
This is good only if the port that it selects isn't in use. If that port is active, I am unable to access that server while that port is being used.
I want to do something like this:
while netstat -nat | grep -q ":$RPORT "
do
    RPORT=$[($RANDOM % 3000) + 1]
done
So I want to check first whether that port is in use; if so, search for another random port, check whether it is in use, and if not, use it.
Thank you very much in advance for your time and help!
function random_unused_port {
    (netstat --listening --all --tcp --numeric |
     sed '1,2d; s/[^[:space:]]*[[:space:]]*[^[:space:]]*[[:space:]]*[^[:space:]]*[[:space:]]*[^[:space:]]*:\([0-9]*\)[[:space:]]*.*/\1/g' |
     sort -n | uniq; seq 1 1000; seq 1 65535
    ) | sort -n | uniq -u | shuf -n 1
}
RANDOM_PORT=$(random_unused_port)
This was the function that helped me out!
Thank you Nahuel Fouilleul for the link!
To fix the answer, and because ports from 1 to 1000 are reserved, seq starts at 1001:
grep -F -x -v -f <(
    netstat --listening --all --tcp --numeric |
    sed '1,2d; s/[^[:space:]]*[[:space:]]*[^[:space:]]*[[:space:]]*[^[:space:]]*[[:space:]]*[^[:space:]]*:\([0-9]*\)[[:space:]]*.*/\1/g' |
    sort -nu
) <(seq 1001 65535) | shuf -n 1
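On newer systems that ship without netstat, the same idea can be written with ss from iproute2 (a sketch; it assumes ss -ltn prints the local address:port in the fourth column, and it inherits the same race: another process can still grab the port between the check and your bind):

grep -F -x -v -f <(
    ss -ltn | awk 'NR > 1 { sub(/.*:/, "", $4); print $4 }' | sort -nu
) <(seq 1001 65535) | shuf -n 1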

Optimised random number generation in bash

I'd like to generate a lot of integers between 0 and 1 using bash.
I tried shuf but the generation is very slow. Is there another way to generate numbers?
This will output an infinite stream of bytes, written in binary and separated by spaces:
cat /dev/urandom | xxd -b | cut -d" " -f 2-7 | tr "\n" " "
As an example :
10100010 10001101 10101110 11111000 10011001 01111011 11001010 00011010 11101001 01111101 10100111 00111011 10100110 01010110 11101110 01000011 00101011 10111000 01010110 10011101 01000011 00000010 10100001 11000110 11101100 11001011 10011100 10010001 01000111 01000010 01001011 11001101 11000111 11110111 00101011 00111011 10110000 01110101 01001111 01101000 01100000 11011101 11111111 11110001 10001011 11100001 11100110 10101100 11011001 11010100 10011010 00010001 00111001 01011010 00100101 00100100 00000101 10101010 00001011 10101101 11000001 10001111 10010111 01000111 11011000 01111011 10010110 00111100 11010000 11110000 11111011 00000110 00011011 11110110 00011011 11000111 11101100 11111001 10000110 11011101 01000000 00010000 00111111 11111011 01001101 10001001 00000010 10010000 00000001 10010101 11001011 00001101 00101110 01010101 11110101 10111011 01011100 00110111 10001001 00100100 01111001 01101101 10011011 00100001 01101101 01001111 01101000 00100001 10100011 00011000 01000001 00100100 10001101 10110110 11111000 01110111 10110111 11001000 00101000 01101000 01001100 10000001 11011000 11101110 11001010 10001101 00010011^C
If you don't want spaces between bytes (thanks @Chris):
cat /dev/urandom | xxd -b | head | cut -d" " -f 2-7 | tr -d "\n "
1000110001000101011111000010011011011111111001000000011000000100111101000001110110011011000000001101111111011000000100101001001110110001111000010100100100010110110000100111111110111011111100101000011000010010111010010001001001111000010101000110010010011011110000000011100110000000100111010001110000000011001011010101111001
tr -dc '01' < /dev/urandom is a quick and dirty way to do this.
If you're on OSX, tr can work a little weird, so you can use perl instead: perl -pe 'tr/01//dc' < /dev/urandom
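If you want a fixed number of digits rather than an endless stream, the tr approach can be combined with head (a small usage sketch):

tr -dc '01' < /dev/urandom | head -c 64; echo

This prints 64 random 0/1 characters followed by a newline.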
Just for fun --
A native-bash function to print a specified number of random bits, extracted from the smallest possible number of evaluations of $RANDOM:
randbits() {
    local x x_bits num_bits
    num_bits=$1
    while (( num_bits > 0 )); do
        x=$RANDOM
        x_bits="$(( x % 2 ))$(( x / 2 % 2 ))$(( x / 4 % 2 ))$(( x / 8 % 2 ))$(( x / 16 % 2 ))$(( x / 32 % 2 ))$(( x / 64 % 2 ))$(( x / 128 % 2 ))$(( x / 256 % 2 ))$(( x / 512 % 2 ))$(( x / 1024 % 2 ))$(( x / 2048 % 2 ))$(( x / 4096 % 2 ))$(( x / 8192 % 2 ))$(( x / 16384 % 2 ))"
        if (( ${#x_bits} < num_bits )); then
            printf '%s' "$x_bits"
            (( num_bits -= ${#x_bits} ))
        else
            printf '%s' "${x_bits:0:num_bits}"
            break
        fi
    done
    printf '\n'
}
Usage:
$ randbits 64
1011010001010011010110010110101010101010101011101100011101010010
Because this uses $RANDOM, its behavior can be made reproducible by assigning a seed value to $RANDOM before invoking it. This can be handy if you want to be able to reproduce bugs in software that uses "random" inputs.
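A minimal sketch of that reproducibility (the sequence for a given seed is stable within one bash version, though not guaranteed across versions):

RANDOM=42
randbits 16    # prints some 16-bit string
RANDOM=42
randbits 16    # prints the same string again, because the seed was reset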
Since the question asks for integers between 0 and 1, there is this extremely random and very fast method. A good one-liner for sure:
echo "0.$(printf $(date +'%N') | md5sum | tr -d '[:alpha:][:punct:]')"
This command will give you an output similar to this when thrown inside a for loop with 10 iterations:
0.97238535471032972041395
0.8642459339189067551494
0.18109959700829495487820
0.39135471514800072505703651
0.624084503017958530984255
0.41997456791539740171
0.689027289676627803
0.22698852059605560195614
0.037745437519184791498537
0.428629619193662260133
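For reference, the "for loop with 10 iterations" used to produce the sample above would be something like this (a sketch):

for i in {1..10}; do
    echo "0.$(printf $(date +'%N') | md5sum | tr -d '[:alpha:][:punct:]')"
done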
And if you need to print random strings of 1's and 0's, as others have assumed, you can make a slight change to the command like this:
printf $(date +'%N') | sha512sum | tr -d '[2-9][:alpha:][:punct:]'
This will yield output of random 0's and 1's similar to the following when thrown into a for loop with 10 iterations:
011101001110
001110011011
0010100010111111
0000001101101001111011111111
1110101100
00010110100
1100101101110010
101100110101100
1100010100
0000111101100010001001
To my knowledge, and from what I have found online, this is about the closest to true randomness we can get in bash. I have even made a game of dice (where the die has 10 sides, 0-9) to test the randomness, using this method to generate a single number from 0 to 9. Out of 100 dice throws, each side lands almost a perfect 10 times. Out of 10,000 throws, each side hits around 890-1100 times. The variation in which side lands doesn't change much after 1000 throws. So you can be fairly sure that this method is well suited, at least among bash tools for generating pseudo-random numbers, for the job.
And if you need just an absolute mind-blowingly ridiculous amount of randomness, the simple md5sum checksum command can be compounded upon itself many, many times and still be very fast. As an example:
printf $(date +'%N') | md5sum | md5sum | md5sum | tr -d '[:punct:][:space:]'
This takes a not-so-random number, obtained by printing the date command's nanosecond field, and pipes it into md5sum. That md5 hash is then piped into md5sum, and then "that" hash is sent into md5sum one last time. The output is a thoroughly scrambled hash, and you can use tools like awk, sed, grep, and tr to control what part of it gets printed.
Hope this helps.
