I need to execute a few machine instructions just before the kernel "halts".
The reason is that I need to inform a board controller that it can actually remove power.
The question is: what is the best practice to achieve this?
In an old (3.18) kernel for the same board I hacked .../arch/mips/ralink/reset.c to add some register settings in static void ralink_halt(void), but that function seems to be gone, together with static int __init mips_reboot_setup(void), so I guess the structure has changed a lot since then.
What is the correct hook to use in modern kernels?
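One candidate I'm looking at is the generic reboot-notifier chain, which kernel_halt() and kernel_power_off() both run. A minimal sketch of what I have in mind (the register address and the value written are made up for illustration):

#include <linux/module.h>
#include <linux/reboot.h>
#include <linux/io.h>

#define BOARD_PWR_CTRL 0x10000000          /* made-up controller register */

static int board_halt_notify(struct notifier_block *nb,
                             unsigned long action, void *data)
{
    if (action == SYS_HALT || action == SYS_POWER_OFF) {
        void __iomem *reg = ioremap(BOARD_PWR_CTRL, 4);
        if (reg) {
            writel(0x1, reg);   /* tell the controller it may cut power */
            iounmap(reg);
        }
    }
    return NOTIFY_DONE;
}

static struct notifier_block board_halt_nb = {
    .notifier_call = board_halt_notify,
};

static int __init board_halt_init(void)
{
    return register_reboot_notifier(&board_halt_nb);
}

static void __exit board_halt_exit(void)
{
    unregister_reboot_notifier(&board_halt_nb);
}

module_init(board_halt_init);
module_exit(board_halt_exit);
MODULE_LICENSE("GPL");

But I'm not sure the notifier chain runs late enough in the shutdown sequence, hence the question.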
What is a context register and how does it change the way Go code is compiled?
Context: I've seen uses of stub functions in some parts of GOROOT, e.g. reflect, but am not quite sure how these work.
The expression "context register" first appeared in commit b1b67a3 (Feb. 2013, Go 1.1rc2) for implementing step 3 of the Go 1.1 Function Calls
Change the reflect.MakeFunc implementation to avoid run-time code generation as well
It was picked up again in commit 4a000b9 (Feb. 2014, Go 1.3beta1) for assembly and system calls for Native Client x86-64, where sigreturn is handled as follows:
NaCl has abdicated its traditional operating system responsibility and declined to implement 'sigreturn'.
Instead the only way to return to the execution of our program is to restore the registers ourselves.
Unfortunately, that is impossible to do with strict fidelity, because there is no way to do the final update of PC that ends the sequence without either
(1) jumping to a register, in which case the register ends holding the PC value instead of its intended value, or
(2) storing the PC on the stack and using RET, which imposes the requirement that SP is valid and that it is okay to smash the word below it.
The second would normally be the lesser of the two evils, except that on NaCl, the linker must rewrite RET into "POP reg; AND $~31, reg; JMP reg", so either way we are going to lose a register as a result of the incoming signal.
Similarly, there is no way to restore EFLAGS; the usual way is to use POPFL, but NaCl rejects that instruction.
We could inspect the bits and execute a sequence of instructions designed to recreate those flag settings, but that's a lot of work.
Thankfully, Go's signal handlers never try to return directly to the executing code, so all the registers and EFLAGS are dead and can be smashed.
The only registers that matter are the ones that are being set up for the simulated call that the signal handler has created.
Today those registers are just PC and SP, but in case additional registers are relevant in the future (for example DX is the Go func context register) we restore as many registers as possible.
Much more recently (Q4 2016), for Go 1.8, we have commit d5bd797 and commit bf9c71c, for eliminating stack rescanning:
morestack writes the context pointer to gobuf.ctxt, but since morestack is written in assembly (and has to be very careful with state), it does not invoke the requisite write barrier for this write. Instead, we patch this up later, in newstack, where we invoke an explicit write barrier for ctxt.

This already requires some subtle reasoning, and it's going to get a lot hairier with the hybrid barrier.

Fix this by simplifying the whole mechanism. Instead of writing gobuf.ctxt in morestack, just pass the value of the context register to newstack and let it write it to gobuf.ctxt. This is a normal Go pointer write, so it gets the normal Go write barrier. No subtle reasoning required.
I'm working on a timing loop for the AVR platform where I'm counting down a single byte inside an ISR. Since this task is a primary function of my program, I'd like to permanently reserve a processor register so that the ISR doesn't have to hit a memory barrier when its usual code path is decrement, compare to zero, and reti.
The avr-libc docs show how to bind a variable to a register, and I got that working without a problem. However, since this variable is shared between the main program (for starting the timer countdown) and the ISR (for actually counting and signaling completion), it should also be volatile to ensure that the compiler doesn't do anything too clever in optimizing it.
In this context (reserving a register across an entire monolithic build), the combination volatile register makes sense to me semantically, as "permanently store this variable in register rX, but don't optimize away checks because the register might be modified externally". GCC doesn't like this, however, and emits a warning that it might go ahead and optimize away the variable access anyway.
The bug history of this combination in GCC suggests that the compiler team is simply unwilling to consider the type of scenario I'm describing and thinks it's pointless to provide for it. Am I missing some fundamental reason why the volatile register approach is in itself a Bad Idea, or is this a case that makes semantic sense but that the compiler team just isn't interested in handling?
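For reference, the pattern I have in mind looks roughly like this sketch (names and vector are made up, and the whole build would also use GCC's -ffixed-r3 so no other code touches that register):

#include <stdint.h>
#include <avr/interrupt.h>

/* Bound to r3 for the whole program; this is the declaration GCC
   warns about once volatile is added. Vector name depends on the MCU. */
volatile register uint8_t ticks asm("r3");

ISR(TIMER0_OVF_vect)
{
    if (ticks != 0)
        --ticks;                /* decrement, compare to zero, reti */
}

int main(void)
{
    ticks = 100;                /* start the countdown */
    sei();
    while (ticks != 0)
        ;                       /* this re-read is exactly what GCC
                                   threatens to optimize away */
    for (;;)
        ;
}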
The semantics of volatile are not exactly as you describe ("don't optimize away checks because the register might be modified externally") but are actually narrower: try to think of it as "don't cache the variable's value from RAM in a register".
Seen this way, it does not make any sense to declare a register as volatile because the register itself cannot be 'cached' and therefore cannot possibly be inconsistent with the variable's 'actual' value.
The fact that read accesses to volatile variables are usually not optimized away is merely a side effect of the above semantics, but it's not guaranteed.
I think GCC should assume by default that a value in a register is 'like volatile' but I have not verified that it actually does so.
Edit:
I just did a small test and found:
avr-gcc 4.6.2 does not treat global register variables like volatiles with respect to read accesses, and
the Naggy extension for Atmel Studio detects an error in my code: "global register variables are not supported".
Assuming that global register variables are actually considered "unsupported" I am not surprised that gcc treats them just like local variables, with the known implications.
My test code looks like this:
#include <stdint.h>

uint8_t var;
volatile uint8_t volVar;
register uint8_t regVar asm("r13");

#define NOP asm volatile ("nop\r\n":::)

int main(void)
{
    var = 1;          // <-- kept
    if ( var == 0 ) {
        NOP;          // <-- optimized away, var is not volatile
    }
    volVar = 1;       // <-- kept
    if ( volVar == 0 ) {
        NOP;          // <-- kept, volVar *is* volatile
    }
    regVar = 1;       // <-- optimized away, regVar is treated like a local variable
    if ( regVar == 0 ) {
        NOP;          // <-- optimized away consequently
    }
    for(;;){}
}
The reason you would use the volatile keyword on AVR variables is, as you said, to keep the compiler from optimizing access to the variable. The question now is: how does this happen?
A variable has two places it can reside: either in the general-purpose register file or in some location in RAM. Consider the case where the variable resides in RAM. To access the latest value of the variable, the compiler loads it from RAM using some form of the ld instruction, say lds r16, 0x000f. In this case, the variable was stored in RAM location 0x000f and the program made a copy of it in r16.

Now, here is where things get interesting if interrupts are enabled. Say that after loading the variable, inc r16 is executed, and then an interrupt triggers and its corresponding ISR is run. Within the ISR, the variable is also used. There is a problem, however: the variable now exists in two different versions, one in RAM and one in r16. Ideally, the compiler should use the version in r16, but that one is not guaranteed to exist, so it loads the variable from RAM instead, and the code no longer operates as needed.

Enter the volatile keyword. The variable is still stored in RAM; however, the compiler must ensure that the variable is updated in RAM before anything else happens, thus the following assembly may be generated:
cli
lds r16, 0x000f
inc r16
sei
sts 0x000f, r16
First, interrupts are disabled. Then, the variable is loaded into r16. The variable is increased, interrupts are enabled, and then the variable is stored. It may appear confusing for the global interrupt flag to be enabled before the variable is stored back in RAM, but from the instruction set manual:
The instruction following SEI will be executed before any pending interrupts.
This means that the sts instruction will be executed before any interrupts trigger again, and that the interrupts are disabled for the minimum amount of time possible.
Consider now the case where the variable is bound to a register. Any operations done on the variable are done directly on the register. These operations, unlike operations done to a variable in RAM, can be considered atomic, as there is no read -> modify -> write cycle to speak of. If an interrupt triggers after the variable is updated, it will get the new value of the variable, since it will read the variable from the register it was bound to.
Also, since the variable is bound to a register, any test instructions will use the register itself and will not be optimized away on the grounds that the compiler has a "hunch" it is a static value, given that registers are by their very nature volatile.
Now, from experience, when using interrupts in AVR, I have sometimes noticed that global volatile variables never hit RAM. The compiler kept them in registers all the time, bypassing the read -> modify -> write cycle altogether. This was due, however, to compiler optimizations, and it should not be relied on. Different compilers are free to generate different assembly for the same piece of code. You can generate a disassembly of your final file or any particular object file using the avr-objdump utility.
Cheers.
Reserving a register for one variable for a complete compilation unit is probably too restrictive for a compiler's code generator. That is, every C routine would have to NOT use that register.
How do you guarantee that other called routines do NOT use that register once your code goes out of scope? Even stuff like serial I/O routines would have to NOT use that reserved register. Compilers do NOT recompile their run-time libraries based on a data definition in a user program.
Is your application really so time-sensitive that the extra delay for bringing memory up from L2 or L3 can be detected? If so, then your ISR might be running so frequently that the required memory location is always available (i.e. it doesn't get paged back down through the cache) and thus does NOT hit a memory barrier (I assume by memory barrier you are referring to how memory in a CPU really operates, through caching, etc.). But for this to really be true the CPU would have to have a fairly large L1 cache and the ISR would have to run at a very high frequency.
Finally, sometimes an application's requirements make it necessary to code it in ASM, in which case you can do exactly what you are requesting!
I want to write some inline ARM assembly in my C code. For this code, I need to use a register or two more than just the ones declared as inputs and outputs to the function. I know how to use the clobber list to tell GCC that I will be using some extra registers to do my computation.
However, I am sure that GCC enjoys the freedom to shuffle around which registers are used for what when optimizing. That is, I get the feeling it is a bad idea to use a fixed register for my computations.
What is the best way to use some extra register that is neither input nor output of my inline assembly, without using a fixed register?
P.S. I was thinking that using a dummy output variable might do the trick, but I'm not sure what other weird effects that might have...
OK, I've found a source that backs up the idea of using dummy outputs instead of hard registers:
4.8 Temporary registers:
People also sometimes erroneously use clobbers for temporary registers. The right way is to make up a dummy output, and use "=r" or "=&r" depending on the permitted overlap with the inputs. GCC allocates a register for the dummy value. The difference is that GCC can pick a convenient register, so it has more flexibility.
from page 20 of this pdf.
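Concretely, that looks something like this (a sketch; the ARM instruction sequence and names are made up for illustration):

#include <stdint.h>

/* "tmp" is the dummy output: the "=&r" early-clobber constraint tells
   GCC the scratch register is written before all inputs have been
   consumed, and lets GCC pick whichever register is convenient
   instead of a hard-coded one. */
static inline uint32_t add_shifted(uint32_t a, uint32_t b)
{
    uint32_t result, tmp;
    __asm__ (
        "lsl %[t], %[x], #2\n\t"    /* t = x << 2 */
        "add %[r], %[t], %[y]\n\t"  /* r = t + y  */
        : [r] "=r" (result), [t] "=&r" (tmp)
        : [x] "r" (a), [y] "r" (b)
        /* no clobber list entry needed for the scratch register */
    );
    return result;
}

Plain "=r" suffices for the final output because it is written after all inputs are read; the scratch needs "=&r" precisely because it is written early.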
For anyone who is interested in more info on inline assembly with GCC this website turned out to be very instructive.
I was reading some old articles about debugging, and one of them mentioned the debug registers. Reading more about these registers and what they can do made me incredibly eager to have some fun with them. However, when I tried looking for more information about how to actually use them, I read that they can only be accessed from ring 0 in Windows.
I thought that was the end of it, since I'm not going to write a kernel driver just to play with a few registers. But then I thought about the memory-editing tool I used to play around with. Cheat Engine it's called, and one of the program's many options was to break on instructions/data being executed/accessed/read. That is exactly what the debug registers do. So I was wondering: is there a substitute/replacement for the debug registers in Windows? I'm sure that the program (Cheat Engine) doesn't use a kernel driver to set these values.
That's not true at all: you can set the HW debug registers from ring 3, indirectly (OllyDbg does this). For this you need to use SetThreadContext under Windows (example).
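A minimal sketch of that approach (error handling trimmed; the thread should be suspended while its context is modified):

#include <windows.h>

/* Set a hardware execute breakpoint on addr in hThread via DR0/DR7. */
BOOL set_hw_breakpoint(HANDLE hThread, void *addr)
{
    CONTEXT ctx;
    BOOL ok;

    SuspendThread(hThread);

    ctx.ContextFlags = CONTEXT_DEBUG_REGISTERS;
    ok = GetThreadContext(hThread, &ctx);
    if (ok) {
        ctx.Dr0 = (DWORD_PTR)addr;  /* breakpoint address goes in DR0      */
        ctx.Dr7 |= 1;               /* L0 bit enables DR0; R/W0 = LEN0 = 0 */
                                    /* means break on execution            */
        ok = SetThreadContext(hThread, &ctx);
    }

    ResumeThread(hThread);
    return ok;
}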
If you still want a substitute for the HW registers, you can use INT3 for code breakpoints and single-step trapping for checking whether a variable has changed (highly inefficient).
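A sketch of the INT3 route, for completeness (the saved byte must be written back when the breakpoint is handled):

#include <windows.h>

/* Software breakpoint: overwrite the first byte of the target
   instruction in another process with INT3 (0xCC). */
BYTE set_int3(HANDLE hProcess, LPVOID addr)
{
    BYTE orig = 0, int3 = 0xCC;
    SIZE_T n;
    DWORD oldProt;

    VirtualProtectEx(hProcess, addr, 1, PAGE_EXECUTE_READWRITE, &oldProt);
    ReadProcessMemory(hProcess, addr, &orig, 1, &n);
    WriteProcessMemory(hProcess, addr, &int3, 1, &n);
    FlushInstructionCache(hProcess, addr, 1);
    VirtualProtectEx(hProcess, addr, 1, oldProt, &oldProt);

    return orig;    /* caller restores this byte on breakpoint hit */
}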
A good reference is GDB and its source: http://developer.apple.com/library/mac/#documentation/DeveloperTools/gdb/gdbint/gdbint_3.html