I have a custom BIOS (actually an image of the SPI flash, but in my opinion that should amount to the same thing) and I would like to debug it with GDB. Is that possible? I know where the first executed instruction sits inside the file (offset 0x3FFFF0), and the file should be a sequence of assembly instructions. Is there a way to follow the flow and step through the BIOS?
First, it's incorrect to assume that the SPI image contains only the BIOS executable. It usually holds much more than that (e.g. the ME firmware, setup parameters, the flash descriptor). To investigate what is inside your image I suggest running binwalk <spi_image>; if you are lucky, it will detect the layout of your image. You can then extract the parts with binwalk -e <spi_image>.
Second, since I don't know whether we are talking about modern or obsolete hardware, I will try to be general. I don't think GDB can run a BIOS binary, because such binaries don't contain any debugging information that GDB understands. GDB needs to know what kind of executable it is dealing with, which ABI is used and what the physical architecture of the debugged target is. Even if it knew all of that, the code could only be disassembled, because the BIOS starts with low-level hardware initialization that cannot be recreated inside GDB. On top of that, the firmware of modern hardware (UEFI) is typically signed.
The best thing you can do with the extracted BIOS binary is to get the datasheet of your hardware and use a disassembler such as IDA Pro or radare2 to investigate the code flow.
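For example, a rough sketch with radare2 (the file name is a placeholder; the 16-bit x86 mode and the seek offset are assumptions based on the reset vector you mentioned):

    r2 -a x86 -b 16 spi_image.bin    # open the image in 16-bit x86 mode
    [0x00000000]> s 0x3ffff0         # seek to the reset-vector area in the file
    [0x003ffff0]> pd 8               # disassemble a few instructions from there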
I'm currently struggling to determine how I can get an emulated environment via QEMU to correctly display output on the command line. I have an environment that displays perfectly well using the virt reference board, a Cortex-A9 CPU, and the 4.1 Linux kernel cross-compiled for ARM. However, if I swap out the 4.1 kernel for 2.6 or 3.1, suddenly I can no longer see console output.
While solving this issue is my main goal, I feel like I lack a critical understanding of how Linux and the hardware initially integrate before userspace configurations via boot scripts and whatnot have a chance to execute. I am aware of the device tree, and have a loose understanding of how it works. But the issue I ran into where a different kernel version broke console availability entirely confounds me. Can someone explain how Linux initially maps console output to a hardware device on the ARM architecture?
Thank you!
The answer depends quite a bit on which kernel version, what config options are set, what hardware, and also possibly on kernel command line arguments.
For modern kernels, the answer is that it looks in the device tree blob it is passed for descriptions of devices, some of which will be serial ports, and it initializes those. The kernel config or command line will specify which of those is to be used for the console. For earlier kernels, especially if you go all the way back to 2.6, use of device tree was less universal, and for some hardware the boot loader simply said "this is a versatile express board" (for instance) and the kernel had compiled-in data structures to tell it where the devices were for each board that it supported. As the transition to device tree progressed, boards were converted one by one, and sometimes a few devices at a time, so what exactly the situation was for any specific kernel version depends on which board you're using.
The other thing that I rather suspect you're running into is that if the kernel crashes early in bootup (i.e. before it finds the serial port at all) then it will never output anything. So if the kernel is just too early to support the "virt" board properly at all, or if your kernel config is missing something important, then the chances are good that it crashes in early boot without being able to print you a useful message. (Sometimes "earlycon" or "earlyprintk" kernel arguments can assist here, but not always.)
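For illustration, a sketch of the kind of invocation I mean, mirroring your setup (the kernel image name is a placeholder, very old kernels may want earlyprintk rather than earlycon, and 0x09000000 is where QEMU's virt board places its PL011 UART):

    qemu-system-arm -M virt -cpu cortex-a9 -nographic \
        -kernel zImage \
        -append "console=ttyAMA0 earlycon=pl011,0x09000000 loglevel=8"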
What is the interrupt for user input and output in a console window?
Hello, I'm trying to learn assembly language. I know that the MS-DOS operating system supported .COM programs, which were loaded into memory at offset 100h, and that the first 1 KB of memory held the IVT (interrupt vector table), a list of interrupt handlers. Is that right? In MS-DOS, if we want to ask the user for input, we first move 01 into the AH register (MOV ah, 01) and then call the interrupt INT 21h. I don't know whether this actually works, because I have never tried a DOS virtual machine or anything similar. I don't want a DOS program; I just want a console window for asking the user for input...
I have just stepped into assembly programming and found no complete tutorials for Windows, or at least none that shows the use of interrupts. Everywhere MASM is used together with windows.inc and the C standard library, and I don't want to waste my time learning them, because after learning assembly I want to write boot programs, which are not tied to one operating system. The reason I want to use interrupts is purely for learning: this is just about learning assembly language. Currently I know only a few instructions (MOV, ADD, SUB, INC, INT and DEC) and little or nothing about registers, so with this limited knowledge I cannot write boot programs yet. For that reason I am trying to practice assembly on Windows, but with interrupts, without those predefined libraries, using only interrupts from the IVT, and later move on to BIOS interrupts for boot programs.
I heard on a forum that Microsoft hides its Windows IVT from the public because it wants its APIs to become popular. Is this true? Does this mean there is no way to use interrupts to handle I/O on Windows? If it is possible, please post an example for NASM and point me to an online or offline guide. My last request: please tell me how to convert MASM source code to NASM, i.e. what the difference between them is. In short, I have two requests and one question:
1. Q1: What is the interrupt for console I/O in Windows?
2. R1: Please share any good tutorials on assembly for Windows, especially on using interrupts, like the one on TutorialsPoint, but that one is for Linux and I want one for Windows.
And the last one...
3. R2: How is MASM source code different from NASM source code? How big is the difference, and where can I find it documented? Many tutorials say that many MASM programs won't run under NASM...
Thanks in advance!
And the first 1 KB of memory held the IVT (interrupt vector table) ...
Modern x86 CPUs have two operating modes (in the case of 64-bit CPUs even three): "real mode" and "protected mode" (plus "long mode" on 64-bit CPUs).
When the CPU starts up, and under MS-DOS, it runs in "real mode". When Windows is running, the CPU runs in "protected mode". In this mode the interrupt system works completely differently than in "real mode".
It is not possible to access the IVT directly, and it is not possible for one task to access memory belonging to another task.
What is the interrupt for console I/O in Windows?
Windows NT (32-bit) used interrupt 2Eh to enter the operating system. Other Windows versions (Windows 9x, for example) used a different method of entering the operating system.
As far as I know, 64-bit versions of Windows also do not use interrupts for this.
No Windows program (with the exception of old 16-bit Windows programs) uses interrupts directly.
hides its Windows IVT from the public because it wants its APIs to become popular. Is this true?
They hide the interrupt numbers because they are only available in some Windows versions.
The only way to write a program that works on all Windows versions currently in use is to use the Windows APIs.
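For example, a minimal sketch of console input and output going through the documented Win32 API instead of interrupts (plain C here; from assembly you would call the same exported functions):

    #include <windows.h>

    int main(void)
    {
        /* Console I/O goes through the documented Win32 API, not interrupts. */
        HANDLE out = GetStdHandle(STD_OUTPUT_HANDLE);
        HANDLE in  = GetStdHandle(STD_INPUT_HANDLE);
        DWORD written, read;
        char ch;

        WriteConsoleA(out, "Press a key: ", 13, &written, NULL);
        ReadConsoleA(in, &ch, 1, &read, NULL);   /* blocks until input arrives */
        WriteConsoleA(out, &ch, 1, &written, NULL);
        return 0;
    }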
I don't want to waste my time learning them
In this case don't waste your time writing Assembler programs for Windows! Doing so only makes sense if you are interested in compiler development.
Use some virtual machine tool (Bochs, VMware, ...) and start writing "booting programs" NOW or use an MS-DOS emulator and write Assembler programs for MS-DOS.
The only use of Assembler programs for Windows is that the 32-bit assembler is a bit easier to learn than the 16-bit assembler (which is required for MS-DOS and "booting programs") so writing Assembler programs for Windows may be good for learning purposes...
We have built a simple instruction set simulator for the SPARC V8 processor. The model consists of a V8 processor, a main memory, a character input device and a character output device. Currently I am able to run simple user-level programs on this simulator, which are built using a cross compiler and placed directly in the modeled main memory.
I am trying to get a Linux kernel to run on this simulator by building the simplest possible bootloader. (I'm considering uClinux, which is made for MMU-less systems.) The uncompressed kernel and the filesystem are both assumed to be present in main memory already, so all my bootloader has to do is pass the relevant information to the kernel and jump to the start of the kernel code. I have no experience in OS development or porting Linux.
I have the following questions :
What is this bare minimum information that a bootloader has to supply to the kernel ?
How to pass this information?
How to point the kernel to use my custom input/output devices?
There is some documentation available for porting Linux to ARM boards, and from it, it seems that the bootloader passes information such as the size of RAM to the kernel via a data structure called ATAGs. How is it done in the case of a SPARC processor? I could not find much documentation for SPARC on the internet. There is a Linux bootloader for the LEON3 implementation of SPARC V8, but I could not find the specific information I was looking for in its code.
I will be grateful for any links that explain the bare minimum information to be passed to a kernel and how to pass it.
Thanks,
-neha
What is the difference between hardware and software breakpoints?
Hardware breakpoints are said to be faster than software breakpoints. If that is true, how are they faster, and why would we need software breakpoints at all?
This article provides a good discussion of pros and cons:
http://www.nynaeve.net/?p=80
To answer your question directly: software breakpoints are more flexible, because hardware breakpoints are limited in functionality and highly architecture-dependent. One example given in the article is that x86 hardware has a limit of 4 hardware breakpoints.
Hardware breakpoints are faster because they have dedicated registers and less overhead than software breakpoints.
Hardware breakpoints are actually comparators that compare the current PC with the address stored in the comparator (when enabled). They are the preferred kind of breakpoint and are typically set via the debug probe (using JTAG, SWD, ...). The downside of hardware breakpoints is that they are limited: a CPU has only a small number of hardware breakpoint comparators, and how many depends on the CPU. ARM7/ARM9 cores have 2, modern ARM devices (Cortex-M0/M3/M4) have between 2 and 6, and x86 usually has 4.
Software breakpoints are set by replacing the instruction at the breakpoint address with a breakpoint instruction. Most CPUs have such an instruction, and it is usually as short as the shortest instruction: a single byte on x86 (0xCC, INT 3). On Cortex-M CPUs, instructions are 2 or 4 bytes, so the breakpoint instruction is a 2-byte instruction.
Software breakpoints can easily be set if the program is located in RAM (such as on a PC). A lot of embedded systems have the program located in flash memory. There it is not so easy to exchange the instruction, as the flash needs to be reprogrammed, so hardware breakpoints are used primarily. Most debug probes support only hardware breakpoints if the program is located in flash memory. However, some (such as SEGGER's J-Link) can reprogram the flash memory with the breakpoint instruction and thereby allow an unlimited number of (software) breakpoints even when debugging a program located in flash.
More info about software breakpoints in flash memory
You can go through the GDB Internals documentation; it explains HW and SW breakpoints very well.
HW breakpoints require support from the MCU. ARM controllers have special registers into which you can write an address; whenever the PC (program counter) equals that register, the CPU halts. JTAG is usually required to write into those special registers.
SW breakpoints are implemented in GDB by inserting a trap, an illegal divide, or some other instruction that will cause an exception, and then when it’s encountered, gdb will take the exception and stop the program. When the user says to continue, gdb will restore the original instruction, single-step, re-insert the trap, and continue on.
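To make that mechanism concrete, here is a rough sketch of how a debugger could plant and remove a software breakpoint on Linux/x86-64 via ptrace (the traced child is assumed to be stopped; addr is a hypothetical code address, and error handling is omitted):

    #include <stdint.h>
    #include <sys/ptrace.h>
    #include <sys/types.h>

    /* Plant an INT 3 (0xCC) at addr in the traced process `pid`,
       returning the original byte so it can be restored later. */
    static uint8_t set_sw_breakpoint(pid_t pid, uintptr_t addr)
    {
        long word = ptrace(PTRACE_PEEKTEXT, pid, (void *)addr, NULL);
        uint8_t saved = word & 0xff;            /* original opcode byte  */
        long patched = (word & ~0xffL) | 0xcc;  /* replace it with INT 3 */
        ptrace(PTRACE_POKETEXT, pid, (void *)addr, (void *)patched);
        return saved;
    }

    /* Restore the original byte; the debugger does this before
       single-stepping over the breakpoint and re-inserting it. */
    static void clear_sw_breakpoint(pid_t pid, uintptr_t addr, uint8_t saved)
    {
        long word = ptrace(PTRACE_PEEKTEXT, pid, (void *)addr, NULL);
        ptrace(PTRACE_POKETEXT, pid, (void *)addr,
               (void *)((word & ~0xffL) | saved));
    }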
There are a lot of advantages to using HW debuggers over SW debuggers, especially if you are dealing with interrupts and memory bus devices. AFAIK, interrupts cannot be debugged with software debuggers.
In addition to the answers above, it is also important to note that while software breakpoints overwrite specific instructions in the program to know where to stop, the more limited number of hardware breakpoints are actually part of the processor.
Justin Seitz in his book Gray Hat Python points out that the important difference here is that by overwriting instructions, software breakpoints actually change the CRC of the file, and so any sort of program such as a piece of malware which calculates its CRC can change its behavior in response to breakpoints being set, whereas with hardware breakpoints it is less obvious that the debugger is stopping and stepping through certain chunks of code.
In brief, hardware breakpoints make use of dedicated registers and hence are limited in number. They can be set on both volatile and non-volatile memory.
Software breakpoints are set by replacing the opcode of the instruction in RAM with a breakpoint instruction. They can be set only in RAM (flash memory is not practical to rewrite on the fly) and their number is not limited.
This article provides a good explanation of breakpoints.
Thanks and regards,
Shivakumar V W
Software breakpoints put an instruction in RAM that acts like a trap when your program reaches that address, while hardware breakpoints use a CPU register to implement the breakpoint itself. That is why hardware breakpoints are much faster, and also why we still need software breakpoints: hardware breakpoints are limited by the number of registers the processor dedicates to them.
I learned this at work today :)
Watchpoints are where it makes a huge difference
This is a case where hardware handling is much faster:
watch var
rwatch var
awatch var
When you enter those commands on GDB 7.7 x86-64 it says:
Hardware watchpoint 2: var
This hardware capability for x86 is mentioned at: http://en.wikipedia.org/wiki/X86_debug_register
It is likely possible because of the existing paging circuit, which manages every memory access.
The "software" alternative is to single step the program, which is very slow.
Compare that to regular breakpoints, where at least the software implementation injects an int3 instruction at the breaking point and lets the program run, so you only pay overhead when a breakpoint is hit.
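As a rough illustration of what happens under the hood on Linux/x86-64 (a sketch, not GDB's actual code; pid is a hypothetical traced, stopped child and var_addr the address of the watched variable), the debugger programs the DR0/DR7 debug registers through ptrace:

    #include <stddef.h>
    #include <stdint.h>
    #include <sys/ptrace.h>
    #include <sys/types.h>
    #include <sys/user.h>

    /* Set a 4-byte write watchpoint on var_addr using debug register 0.
       DR7 bits: L0 (bit 0) enables DR0, R/W0 = 01 means break on writes,
       LEN0 = 11 means a 4-byte region. */
    static void set_hw_watchpoint(pid_t pid, uintptr_t var_addr)
    {
        ptrace(PTRACE_POKEUSER, pid,
               (void *)offsetof(struct user, u_debugreg[0]), (void *)var_addr);

        unsigned long dr7 = (1UL << 0)      /* L0: enable DR0             */
                          | (0x1UL << 16)   /* R/W0 = 01: break on write  */
                          | (0x3UL << 18);  /* LEN0 = 11: 4-byte region   */
        ptrace(PTRACE_POKEUSER, pid,
               (void *)offsetof(struct user, u_debugreg[7]), (void *)dr7);
    }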
A quote from the Intel System Debugger help documentation:
Hardware vs. Software Breakpoints
The debugger can use both hardware and software breakpoints; each of these has strengths and weaknesses:
Hardware Breakpoints are implemented using the DRx architectural breakpoint registers described in the Intel SDM. They have the advantage of being usable directly at reset, being non-volatile, and being usable with flash or other read-only memory. The downside is that they are a finite resource.
Software Breakpoints require modifying system memory as they are implemented by replacing the opcode at the desired location with a special instruction. This makes them an unlimited resource, but the memory dependency means you cannot install them prior to a module being loaded in memory, and if the target software overwrites that memory then they will become invalid.
In general, any debug feature that must be enabled by the debugger does not persist after a reset, and may be impacted after other architectural mode transitions such as SMM entry/exit or VM entry/exit. Specific examples include:
CPU Reset will clear all debug features, except for reset break. This means for example that user-specified breakpoints will be invalid until the target halts once after reset. Note that this halt can be due to either a reset-break, or due to a user-initiated halt. In either case the debugger will restore the necessary debug features.
SMM Entry/exit will disable/re-enable breakpoints; this means you cannot specify a breakpoint in SMRAM while halted outside of SMRAM. If you wish to break within SMRAM, you must first halt at the SMM entry-break and manually apply the breakpoint. Alternatively you can patch the BIOS to re-enable breakpoints when entering SMM, but this requires the ability to modify the BIOS, which cannot be done in production code.
I have to test some low-level code on an ARM architecture. Experimentation is typically quite complicated on the real board, so I was thinking about QEMU.
What I'd like to get is some kind of debugging facility, like printfs or GDB. I know that this is simple with Linux, since it implements both the device driver for the QEMU Integrator board and the GDB support, but I'm not working with Linux. I also suspect that extracting this kind of functionality from the Linux kernel source code would be complicated.
I'm looking for some simple operating system that already implements one of those features. Do you have any advice?
You don't need a target OS to debug code that's running inside QEMU -- QEMU already does that for you.
Specifically, QEMU supports remote debugging from GDB -- you can run QEMU with the appropriate command-line options and it will export an interface that a copy of GDB (running on the host machine) can connect to. At that point, you can debug the program in GDB pretty much just as if you were running it on the host machine.
http://wiki.osdev.org/GDB appears to have a bit more basic information; possibly not enough to completely get you started, but at least give you the basic idea and some terms to look for in the QEMU and GDB documentation. Skip over the bit about "Implementing GDB Stubs", which doesn't apply here since QEMU has one already, and start at the section on "Using Emulator Stubs". The short form is simply that you start QEMU with the -s option (export a GDB connection on localhost:1234) and the -S option (wait for a GDB "continue" command before starting execution), and then in GDB on your host you say target remote :1234 instead of run. Also, of course, you need to be using an ARM version of GDB rather than a native-x86 one.
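A minimal sketch of that workflow (the machine name, ELF file and toolchain prefix below are placeholders; adjust them to your target):

    # start QEMU halted, with a GDB stub listening on localhost:1234
    qemu-system-arm -M versatilepb -nographic -kernel my_program.elf -s -S

    # in another terminal, using an ARM-targeted GDB
    arm-none-eabi-gdb my_program.elf
    (gdb) target remote :1234
    (gdb) break main
    (gdb) continue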
(In addition, if you're willing to pay for a commercial solution, CodeSourcery's ARM toolchain has the IDE integration to set all of this up automatically, including support for "printf" to print into the debugger console. That works on a physical board, too, if you've got a hardware debugger. Usual disclaimer about me being a CodeSourcery employee applies -- but I do find it very easy to use.)
Update, 2012: CodeSourcery's toolchain is now called Mentor Graphics Sourcery CodeBench, but all the above still applies.
I realise that I am addressing your original problem here rather than your proposed solution (perhaps that's better?), but to use GDB (or Insight/GDB) directly on the target, use a low-cost JTAG tool and OpenOCD. An example of such a set-up and how to implement it can be found here.
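For example, a sketch only (the interface and target configuration files and the ELF name depend entirely on your JTAG adapter and board):

    # terminal 1: start OpenOCD with adapter and target configs
    openocd -f interface/jlink.cfg -f target/stm32f1x.cfg

    # terminal 2: connect GDB to OpenOCD's GDB server (port 3333 by default)
    arm-none-eabi-gdb firmware.elf
    (gdb) target remote :3333
    (gdb) monitor reset halt
    (gdb) load
    (gdb) break main
    (gdb) continue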
If you have a larger budget, a more fully featured JTAG debugger may be useful, such as the Abatron BDI3000 with bdiGDB firmware which allows remote debugging and device programming over Ethernet with GDB and no special drivers or target debug agent.
Maybe a microkernel like OKL4 would suit your needs?