How to print text during assembly in GNU assembler?

For example, I want to know how many bytes are allotted for certain instructions without running the output file.
start:
movl $0,%eax;movl $0,%ebx
end:
.print end-start

Related

How can I see what is inside of $0x27 and %ecx while using the gdb debugger in Linux?

cmp $0x27,%ecx
I am currently looking to see what the values of $0x27 and %ecx are. What command can I use to see this?
In gdb you can display the value of the ecx register with p $ecx (note gdb uses a $ instead of % since it treats $ecx like one of its internal variables). You can also use info registers to see the contents of all registers.
There's nothing "inside" $0x27 - it's a literal immediate value, not a memory address. The instruction is like a C pseudo-call compare_into_flags(ecx, 0x27); it compares the register against the constant 0x27 and sets the flags.

Record separators in text files written by a Fortran program

As a part of a larger program, written in the new Fortran standard, I am interested to write some text on a file that will be read by another program over which I have no control.
Long ago when I learned Fortran, an output record generated by a format statement would begin with an LF (linefeed) and end with a CR (carriage return). This means that consecutive output records are separated by the sequence CRLF.
To my surprise I find that this seems no longer to be true except when I compile and run my program on a Windows computer. When I compile and run my program on a Mac the output records are separated by a single LF. I know this is a Linux standard but I guess I assumed the output from a Fortran program should not depend on the operating system.
The consequence of this is that when I generate the output on Windows my output file can be read by the other program (which only exists on Windows) whereas when I generate the same output on my Mac it fails.
I have no idea how the other program reads the file but I assume it is a standard Fortran read.
I have also compared my output file from Windows and Mac using "diff" and that indicates all lines are different. However "diff -w" indicates the files are identical.
I would like to generate output that can be read by the other program regardless of whether I create the file on a Mac or on Windows. I know I can use things like #ifdef to check the OS when I compile, but I wonder if there is another way: is there some option in the Fortran write? I know there are a lot of new things like advance='no' etc. Is there any option to force a "CRLF" record separator?
I use GNU Fortran version 5.2 on Windows and what seems to be version 7.2 on the Mac.
According to the documentation, gfortran allows a peculiar specifier for the OPEN statement, called CARRIAGECONTROL, as in:
program prg
   integer :: u
   open (newunit = u, file = 'test.txt', carriagecontrol = 'fortran')
   write (u, '(A,I2)') '-', 12
   write (u, '(A,I2)') '-', 34
   close (u)
end program prg
This option is only supported when the code is compiled with the -fdec switch. The content of the output file is then:
> hexdump -C test.txt
00000000 0a 31 32 0d 0a 33 34 0d |.12..34.|
00000008
which is exactly the format that you described. The minus signs in the I/O lists in the above code are consumed by the compiler runtime library and used to redefine the output format. (Tested with gfortran 9.2.)
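If the receiving program simply expects Windows-style CRLF line endings (rather than the old LF-leading/CR-trailing carriage-control layout), an OS-independent alternative is to write the file normally and convert the line endings afterwards. A sketch with standard tools (the \r escape in the replacement is a GNU sed feature; with BSD sed you would insert a literal CR):

```shell
printf 'line one\nline two\n' > unix.txt   # stand-in for the Fortran output
sed 's/$/\r/' unix.txt > dos.txt           # append a CR before each LF
```

In portable Fortran itself you can get the same effect by opening the file with access='stream' and writing achar(13)//achar(10) between records yourself.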

MIPS 32 ori arguments

Here is a screenshot from qtspim:
I took it while running line by line. Why does ori $2, $0, 5 put 4 into register 2 (v0) instead of 5?
Thank you!
ori $2, $0, 5 loads 5 into $2, that is correct.
Like most debuggers, qtspim highlights the next instruction to be executed, the one the PC (program counter) points at. So when ori $2, $0, 5 is highlighted it has not run yet; the 4 you see in $v0 was left there by an earlier instruction.
Look at the PC in the Register window: it points to the current (next to execute) instruction, not the last one executed.

How to run a program and parse the output using KornShell

I am pretty new to KornShell (ksh). I have a project that needs to be done with ksh. The question is:
Please write a ksh script which will run the ‘bonnie’ benchmark
utility and parse the output to grab the values for block write, block
read and random seeks/s. Also consider how you might use these values
to compare to the results from previous tests. For the purpose of
this test, please limit yourself to standard GNU utilities (sed, awk,
grep, cut, etc.).
Here is the output from the ‘bonnie’ utility:
# bonnie -s 50M -d /tmp
File '/tmp/Bonnie.2001096837', size: 52428800
Writing with putc()...done
Rewriting...done
Writing intelligently...done
Reading with getc()...done
Reading intelligently...done
Seeker 1.S.e.eker 2.S.e.eker 3...start 'em...done...done...done...
-------Sequential Output-------- ---Sequential Input-- --Random--
-Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU /sec %CPU
50.0 36112 34.1 138026 1.9 179048 7.0 51361 51.1 312242 4.3 15211.4 10.3
Any suggestion on how to write this script would be really appreciated.
Thanks for reading!
Here's a simple solution to experiment with, that assumes the last line will always contain the data you want:
# -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
# Machine MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU /sec %CPU
# 50.0 36112 34.1 138026 1.9 179048 7.0 51361 51.1 312242 4.3 15211.4 10.3
# block write, block read and random seeks/s
bonnie++ \
| awk '
    { line = $0 }
    END {
        # print "#dbg: last_line=" line
        split(line, lineArr)
        printf("blkWrt=%s\tblkRd=%s\tRandSks=%s\n", lineArr[4], lineArr[8], lineArr[12])
    }' # > bonnieOutput
# ------^^ remove # to write output to file
(Note that the \ char after bonnie++ must be the last character on its line, with no spaces or tabs after it; the pipeline breaks otherwise! ;-) )
awk reads every line of input passed through the pipe, saving each one in line. When the END{} block runs, line still holds the last line read; split(line, lineArr) breaks it into fields, and the printf prints just the elements you want. lineArr[4] is the 4th field of that last line, lineArr[12] the 12th, and so on. You may have to adjust the index numbers to get the fields you actually want to display. (You'll have to figure that out! ;-)
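bonnie may not be installed where you're experimenting, so you can exercise the awk logic by piping the sample line from the question straight into it. Note that for that sample, block write, block read and random seeks/s land in fields 4, 10 and 12 (field 8 is the per-character read figure):

```shell
printf '%s\n' \
    'Machine MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU /sec %CPU' \
    '50.0 36112 34.1 138026 1.9 179048 7.0 51361 51.1 312242 4.3 15211.4 10.3' \
| awk '
    { line = $0 }
    END {
        split(line, a)
        printf("blkWrt=%s\tblkRd=%s\tRandSks=%s\n", a[4], a[10], a[12])
    }'
```

which prints blkWrt=138026, blkRd=312242, RandSks=15211.4. Once that looks right, swap the printf pipeline back in front of the real bonnie run.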
To save the data to a file, use shell redirection by removing the # char between }' and > bonnieOutput. Leave the # in place until you get the output you need, THEN you can redirect it to a file.
Needless to say, the labels in the printf like blkWrt= are mostly for debugging. Once you are sure which data you need to capture, and that it reliably appears in the same position each time, you can remove the labels and you'll have a nice clean data file that can be processed by other programs.
Keep in mind that almost all Unix toolbox utilities are line oriented: they process one line of data at a time, and there are often tricks to see what is being processed. Note the #dbg line at the top of the END{} block; remove the '#' to uncomment it and see the debug output.
There's a lot more that can be done, but if you want to learn the ksh/Unix toolbox with awk, you'll have to spend the time understanding the features. If you've read the chapter that included the question you're working on and don't understand how to even start solving this problem, maybe you'd better read the chapter again, OK? ;-)
Edit
Note that in awk the variable $0 contains the whole text of the current line (as delimited by the RS variable value, usually the Unix line-ending char \n). The numbered variables $1, $2, ... hold the first, second, ... "field" of the current line ($0).
Based on my new understanding from your comment below, you want to extract values from lines that contain the text "Latency". That is even easier to process. The basic pattern will be
bonnie++ \
| awk '
    /Latency/ {
        # print "#dbg: latency_line=" $0
        printf("blkWrt=%s\tblkRd=%s\tRandSks=%s\n", $4, $8, $12)
    }' # > bonnieOutput
So this code says: read all output from bonnie++ into awk through the pipe, and when you find a line containing the text "Latency", print the values of the 4th, 8th, and 12th fields, using a printf format string with self-describing tags like blkWrt, etc.
You'll have to change $4 etc. to match the position of each element of data in the current line. Maybe it's $5, $9, $13, or $3, $9, $24? OK?
Note that /Latency/ is case sensitive, and if there are other places in the output where the word appears, then we'll have to revise the reg-exp "rule" used to filter the output.
As a learning exercise, and as a very basic tool that any Unix person uses every day, skip awk, and just see what bonnie++| grep 'Latency' gets you.
IHTH
Just got the answer with the help of Shellter!
bonnie++ \
| awk '
    /Machine/ { f = 1; next }
    f {
        print "#dbg: line_needed=" $0
        printf("blkWrt=%s\t blkRd=%s\t RandSks=%s\n", $4, $8, $12)
        exit
    }'

How do I split a large file in unix repeatedly?

I have a situation with a failing LaCie 500GB hard drive. It stays on for only about 10 minutes, then becomes unusable. For those 10 minutes or so I do have complete control.
I can't get my main mov file (160 GB) transferred off that quickly, so I was thinking that if I split it into small chunks, I could move them all off. I tried splitting the movie file using the split command, but it of course took longer than 10 minutes. I ended up with about 14 GB of 2 GB files before the drive failed.
Is there a way I can use a split command that skips any existing file chunks, so that as I'm splitting this file it will see xaa, xab, xac and start after that point, continuing the split with xad?
Or is there a better option that can split a file in multiple stages? I looked at csplit as well, but that didn't seem like an option either.
Thanks!
-------- UPDATE ------------
Now with the help of bcat and Mark I was able to do this using the following
dd if=/Volumes/badharddrive/file.mov of=/Volumes/mainharddrive/movieparts/moviepart1 bs=1g count=4
dd if=/Volumes/badharddrive/file.mov of=/Volumes/mainharddrive/movieparts/moviepart2 bs=1g count=4 skip=4
dd if=/Volumes/badharddrive/file.mov of=/Volumes/mainharddrive/movieparts/moviepart3 bs=1g count=4 skip=8
etc
cat /Volumes/mainharddrive/movieparts/moviepart[1-3] > newmovie.mov
You can always use the dd command to copy chunks of the old file into a new location. This has the added benefit of not doing unnecessary writes to the failing drive. Using dd like this could get tedious with such a large mov file, but you should be able to write a simple shell script to automate part of the process.
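The dd runs above can be wrapped in a small restartable script: it skips chunks that already exist on the good drive, so each 10-minute session picks up where the last one stopped. A sketch (the function name and paths are made up; skip and count are in units of bs, and bs=1g is BSD dd syntax on macOS, GNU dd wants 1G). One caveat: if the drive dies mid-chunk, delete the last, partial part file before rerunning, or it will be skipped as "done".

```shell
# split_resume SRC DSTDIR BS COUNT: copy SRC into numbered chunks of
# BS*COUNT bytes each, skipping chunks that were already copied.
split_resume() {
    src=$1; dst=$2; bs=$3; count=$4
    i=1; skip=0
    while :; do
        part="$dst/part$i"
        if [ ! -f "$part" ]; then
            dd if="$src" of="$part" bs="$bs" count="$count" skip="$skip" 2>/dev/null
            # an empty chunk means we read past the end of the file: done
            if [ ! -s "$part" ]; then rm -f "$part"; break; fi
        fi
        i=$((i + 1)); skip=$((skip + count))
    done
}

# e.g.: split_resume /Volumes/badharddrive/file.mov /Volumes/mainharddrive/movieparts 1g 4
```

Reassembly is the same cat as above: cat "$dst"/part* in numeric order into the new file.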
Duh! bcat's answer is way better than mine, but since I wrote some code I figured I'd go ahead and post it.
input  = ARGV[0]
length = ARGV[1].to_i
offset = ARGV[2].to_i

# Copy `length` bytes starting at `offset` into a new, self-describing file.
File.open("#{input}-#{offset}-#{length}", 'w') do |file|
  file.write(File.read(input, length, offset))
end
Use it like this:
$ ruby test.rb input_file length offset
