Read data at given address out of hex file using srec tools - shell

I'm wondering if there is a way to read data out of a .hex file at a given address using the srec tool family, like srec_cat or srec_info. I know that I could parse the file myself, but there must be a tool out there already. Has anyone already done something similar?

Today I found a solution using srec_cat to write only part of the hex file to a binary output file.
srec_cat.exe my.hex -intel -crop 0x08010000 0x08010040 -offset -0x08010000 -o out.bin -binary
The documentation points out that it is also possible to print the result to stdout. For me this is not working at all; I have no clue why.
Output filename [ format ]
This option may be used to specify the output file to be used. The special file name "-" is understood to mean the standard output. Output defaults to the standard output if this option is not used.
What do I have to write to use this functionality:
srec_cat.exe my.hex -intel -crop 0x08010000 0x08010040 -offset -0x08010000 -o - -binary
Anyway, the workaround with the intermediate file is working as expected. Avoiding the file step would be a nice addition.
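For what it's worth, if -o - really does reach standard output, raw -binary data is unreadable on a terminal anyway; redirecting or piping it makes the result visible. A minimal sketch (the xxd hex viewer is my assumption, not part of the original question; it ships with vim):
srec_cat.exe my.hex -intel -crop 0x08010000 0x08010040 -offset -0x08010000 -o - -binary | xxd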

Related

How to save ninja build output without losing the compact format of the output

I tried to duplicate the output of the ninja build system into a separate file, but I want to keep the original compact look of the ninja output.
If I tee ninja (ninja all | tee -a someFile) I get a wall of text instead of a single line being updated.
If there is a better way to duplicate the output of ninja to a file without losing its compact formatting, please let me know!
UPD: I found out that ninja updates lines with the [K escape sequence (erasing the line), and after capturing or redirecting ninja's output this behaviour vanishes. If somebody knows how to let the capture keep all of these escape sequences, that would solve my problem.
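This is not from the original question, but ninja only uses its single-line, escape-sequence status output when stdout looks like a terminal, so one common workaround is to run it under a pseudo-terminal with script (util-linux syntax assumed here):
script -qec "ninja all" build.log   # build.log is a placeholder name; the log keeps the escape sequences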

How do you enter data in PlistBuddy

I am trying to change a data value with PlistBuddy and can't figure it out.
/usr/libexec/PlistBuddy -c "Set :Kernel:Emulate:Cpuid1Mask AAAAAAAAAAAAAAACAAAAAA==" ~/Desktop/test.plist
Instead of writing the data I want, when I view the file, I get: QUFBQUFBQUFBQUFBQUFBQ0FBQUFBQT09
I have played with hex, dec, bin, everything I can think of, but it never writes correctly.
I have been searching everywhere, and there's nothing I can find that explains how to do it. Everything is about entering strings; nothing explains how to enter data, or its format.
I need to change that value back and forth from AAAAAAAAAAAAAAACAAAAAA== to AAAAAAAAAAAAAAAAAAAAAA==
I tried printing it to see the output, so I could see the format, but it comes out blank in the terminal.
Anyone know how to do it?
PlistBuddy can do it with the help of base64. First decode your incoming Base64 stream into binary data.
base64 -D <<< AAAAAAAAAAAAAAACAAAAAA== > /tmp/tmp.bin
Then use PlistBuddy's Import command.
/usr/libexec/PlistBuddy -c "Import :Kernel:Emulate:Cpuid1Mask /tmp/tmp.bin" ~/Desktop/test.plist
Delete the temporary binary file if it is not needed anymore.
rm /tmp/tmp.bin
PS: I am using this frequently to change data values in OpenCore.
I figured it out.
Neither defaults nor PlistBuddy can do it.
plutil works fine without corrupting the data string.
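A sketch of what that could look like (this exact invocation is my assumption, not part of the original answer; the -data type needs a reasonably recent plutil, and plutil keypaths use dots instead of PlistBuddy's colons):
plutil -replace Kernel.Emulate.Cpuid1Mask -data "AAAAAAAAAAAAAAACAAAAAA==" ~/Desktop/test.plist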

arm - how to check endianness of an object file

Using objdump, how do you check if an .obj is little- or big-endian?
If you run objdump -d <filename>, you should see near the top of the disassembly a line in this format:
<filename>: file format (a string that contains littlearm or bigarm)
I assume that littlearm implies little-endian and bigarm implies big-endian.
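For illustration only (foo.o is a placeholder and the exact string depends on the toolchain, but GNU objdump typically reports something like elf32-littlearm or elf32-bigarm; objdump -f prints just the file header if you don't want the full disassembly):
$ objdump -d foo.o
foo.o:     file format elf32-littlearm
...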
A simple approach is to use the file command, which will give you the result you expect.
$ file your_object_file.obj
Example output:
firmware.img: Linux jffs2 filesystem data little endian
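For an actual ELF object file the wording is different: file reports the byte order as LSB (little-endian) or MSB (big-endian). A hypothetical example (the file name and the details after "ARM" are placeholders):
$ file my_obj.o
my_obj.o: ELF 32-bit LSB relocatable, ARM, EABI5 version 1 (SYSV), not stripped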

Append to PDF file with convert in bash

I'm basically downloading some images from a website using wget and then appending them into a PDF file using the command-line program convert. But this last step does not seem to work.
I'm getting all the .jpg images and storing them in one folder with no problems, but when I try to merge them into the PDF file, it always ends up containing only the last appended image. I've read about convert's -append argument, but it still won't work.
This is what my code looks like:
for file in *.jpg
do
convert "${file}" -append "myfile.pdf"
done
But as logical as it seems, myfile.pdf always ends up containing only the last appended jpg image.
I know that using convert like:
convert img1.jpg img2.jpg img3.jpg myfile.pdf
Would do the trick. But as I don't know how many images I will have in the download directory, I cannot hardcode the arguments, so I guess a loop over each image in that directory, as I'm trying to do, would be the best solution.
Does anybody know how to achieve my goal? Any help will be much appreciated.
Thanks in advance.
bash automatically expands wildcard arguments (unless they are quoted or escaped), so even if convert does not support wildcard expansion, bash does. So you could just do
convert *.jpg myfile.pdf
Note that if there are too many files, this can fail with an "argument list too long" error. But that should be OK for several hundred files.
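If you ever do hit that limit, ImageMagick can also read the file names from a list file with its @ syntax; a minimal sketch (list.txt is just a placeholder name, and printf is a shell builtin, so it is not subject to the argument-list limit):
printf '%s\n' *.jpg > list.txt   # one file name per line
convert @list.txt myfile.pdf     # convert reads the names from list.txt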
If your file names follow a pattern like img1.jpg, img2.jpg, ..., then you may also use a bash brace-expansion range:
convert img{1..5}.jpg myfile.pdf
This will work for img1.jpg img2.jpg img3.jpg img4.jpg img5.jpg. You can change the range as per your requirement.
For converting all the jpg files, the answer is already present in the other answer by Jean-François Fabre.

Mac issue with file encoding

I have a script which reads some data from a server and stores it in a file. But the file seems somehow corrupt. I can print it to the display, but checking it with file produces
bash$ file -I filename
filename: text/plain; charset=unknown-8bit
Why is it telling me that the encoding is unknown? The first line of the file displays for me as
“The Galaxy A5 and A3 offer a beautifully crafted full metal unibody
A hex dump reveals that the first three bytes are 0xE2, 0x80, 0x9C followed by the regular ASCII text The Galaxy A5...
What's wrong? Why does file tell me the encoding is unknown, and what is it actually?
Based on the information in the question, the file is a perfectly good UTF-8 file. The first three bytes encode LEFT DOUBLE QUOTATION MARK (U+201C) aka a curly quote.
Maybe your version of file is really old.
You can use iconv to convert the file into the desired charset, e.g.
iconv --from-code=UTF-8 --to-code=YOURTARGET
To get a list of supported targets, use the --list flag.
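A concrete example (the target charset and the //TRANSLIT suffix are my additions; //TRANSLIT is a GNU iconv extension that approximates characters the target charset cannot represent, such as turning curly quotes into straight ones):
iconv --from-code=UTF-8 --to-code=ISO-8859-1//TRANSLIT < filename > filename.converted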
