How can I extract a floating point bit pattern? [duplicate] - c++11

#include <iostream>
int main(int argc, char * argv[])
{
int a = 0x3f800000;
std::cout << a << std::endl;
static_assert(sizeof(float) == sizeof(int), "Oops");
float f2 = *reinterpret_cast<float *>(&a);
std::cout << f2 << std::endl;
void * p = &a;
float * pf = static_cast<float *>(p);
float f3 = *pf;
std::cout << f3 << std::endl;
float f4 = *static_cast<float *>(static_cast<void *>(&a));
std::cout << f4 << std::endl;
}
I get the following info out of my trusty compiler:
me@Mint-VM ~/projects $ g++-5.3.0 -std=c++11 -o pun pun.cpp -fstrict-aliasing -Wall
pun.cpp: In function ‘int main(int, char**)’:
pun.cpp:11:45: warning: dereferencing type-punned pointer will break strict-aliasing rules [-Wstrict-aliasing]
float f2 = *reinterpret_cast<float *>(&a);
^
pun.cpp:21:61: warning: dereferencing type-punned pointer will break strict-aliasing rules [-Wstrict-aliasing]
float f4 = *static_cast<float *>(static_cast<void *>(&a));
^
me@Mint-VM ~/projects $ ./pun
1065353216
1
1
1
me@Mint-VM ~/projects $ g++-5.3.0 --version
g++-5.3.0 (GCC) 5.3.0
Copyright (C) 2015 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
I don't really understand when and why I get type-punned errors in some places and not in others.
So, strict aliasing:
Strict aliasing is an assumption, made by the C (or C++) compiler, that dereferencing pointers to objects of different types will never refer to the same memory location (i.e. alias each other.)
Line 11 claims I'm breaking strict-aliasing. I don't see a case where this can possibly hurt anything - the pointer "comes into existence", is immediately dereferenced, then thrown away. In all likelihood, this will compile down to zero instructions. This seems like absolutely no risk - I'm telling the compiler EXACTLY what I want.
Lines 15-16 proceed to NOT elicit a warning, even though the pointers to the same memory location are now here to stay. This appears to be a bug in gcc.
Line 21 elicits the warning, showing that this is NOT limited to just reinterpret_cast.
Unions are no better (emphasis mine):
...it's undefined behavior to read from the member of the union that wasn't most recently written. Many compilers implement, as a non-standard language extension, the ability to read inactive members of a union.
This link talks about using memcpy, but that seems to just hide what you're really trying to accomplish.
On some systems, it is a required operation to write a pointer into an int register, or to receive an incoming byte stream and assemble those bytes into a float or other non-integral type.
What is the correct, standard-conforming way of doing this?

Credit to Anton here please. His answer was first and his is correct.
I am posting this exposition because I know you won't believe him until you see the assembler:
Given:
#include <cstring>
#include <iostream>
// prevent the optimiser from eliding this function altogether
__attribute__((noinline))
float convert(int in)
{
static_assert(sizeof(float) == sizeof(int), "Oops");
float result;
memcpy(&result, &in, sizeof(result));
return result;
}
int main(int argc, char * argv[])
{
int a = 0x3f800000;
float f = convert(a);
std::cout << a << std::endl;
std::cout << f << std::endl;
}
result:
1065353216
1
compiled with -O2, here's the assembler output for the function convert, with some added comments for clarity:
#
# I'll give you £10 for every call to `memcpy` you can find...
#
__Z7converti: ## #_Z7converti
.cfi_startproc
## BB#0:
pushq %rbp
Ltmp0:
.cfi_def_cfa_offset 16
Ltmp1:
.cfi_offset %rbp, -16
movq %rsp, %rbp
Ltmp2:
.cfi_def_cfa_register %rbp
#
# here's the conversion - simply move the integer argument (edi)
# into the first float return register (xmm0)
#
movd %edi, %xmm0
popq %rbp
retq
.cfi_endproc
#
# did you see any memcpy's?
# nope, didn't think so.
#
Just to drive the point home, here's the same function compiled with -O2 and -fomit-frame-pointer:
__Z7converti: ## #_Z7converti
.cfi_startproc
## BB#0:
movd %edi, %xmm0
retq
.cfi_endproc
Remember, this function only exists because I added the attribute to prevent the compiler from inlining it. In reality, with optimisations enabled, the entire function will be optimised away. Those 3 lines of code in the function and the call at the call site will vanish.
Modern optimising compilers are awesome.
but what I really wanted was this std::cout << *reinterpret_cast<float *>(&a) << std::endl; and I think it expresses my intent perfectly well.
Well, yes it does. But C++ is designed with both correctness and performance in mind. Very often, the compiler would like to assume that two pointers or two references don't point to the same piece of memory. If it can do that, it can make all kinds of clever optimisations (usually involving not bothering to make reads or writes which aren't necessary to produce the required effect). However, because a write through one pointer could affect a read through the other (if they really point at the same object), then in the interests of correctness, the compiler may not assume that the two objects are distinct, and it must perform every read and write you indicated in your code - just in case one write affects a subsequent read... unless the pointers point to different types. If they point to different types, the compiler is allowed to assume that they will never point to the same memory - this is the strict aliasing rule.
When you do this: *reinterpret_cast<float *>(&a),
you're trying to read the same memory via an int pointer and a float pointer. Because the pointers are of different types, the compiler will assume that they point to different memory addresses - even though in your mind they do not.
This is the strict aliasing rule. It's there to help programs perform quickly and correctly. A reinterpret_cast like this defeats both.

Use memcpy:
memcpy(&f2, &a, sizeof(float));
If you are worried about type safety and semantics, you can easily write a wrapper:
void convert(float& x, int a) {
memcpy(&x, &a, sizeof(float));
}
And if you want, you can make this wrapper template to satisfy your needs.
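One way that template might look - a hypothetical pre-C++20 `bit_cast` built on memcpy (the name and the trait checks are my own additions, not part of the answer):

```cpp
#include <cstring>
#include <type_traits>

// Hypothetical memcpy-based bit_cast for pre-C++20 code: the sizes must
// match, and both types must be trivially copyable for memcpy to be valid.
template <typename To, typename From>
To bit_cast(const From &src)
{
    static_assert(sizeof(To) == sizeof(From), "sizes must match");
    static_assert(std::is_trivially_copyable<To>::value &&
                  std::is_trivially_copyable<From>::value,
                  "bit_cast requires trivially copyable types");
    To dst;
    std::memcpy(&dst, &src, sizeof(dst));
    return dst;
}
```

Usage, assuming IEEE-754 floats as the rest of the question does: `float f = bit_cast<float>(0x3f800000); // 1.0f`. As shown in the assembler above, the memcpy optimises away entirely.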

As you've found out, reinterpret_cast can't be used for type punning:
Is reinterpret_cast type punning actually undefined behavior?
Is reinterpret_cast mostly useless?
Since C++20 you'll have a safe way for type punning via std::bit_cast
constexpr uint32_t bit_pattern = 0x3f800000U;
constexpr auto f = std::bit_cast<float>(bit_pattern);
At the moment std::bit_cast is only supported by MSVC. While waiting for other implementations to catch up, if you're using Clang you can try __builtin_bit_cast. Just cast it like this:
float f = __builtin_bit_cast(float, bit_pattern);
See demo on Godbolt
In other compilers or older C++ standards the only way to do this is via memcpy.
However, many compilers have an implementation-specific way to do type punning, or implementation-specific behavior regarding type punning. For example, in GCC you can use __attribute__((__may_alias__)):
union Float
{
float __attribute__((__may_alias__)) f;
uint32_t __attribute__((__may_alias__)) u;
};
uint32_t getFloatBits(float v)
{
Float F;
F.f = v;
return F.u;
}
ICC and Clang also support that attribute. See demo

Related

What is causing this error: SSE register return with SSE disabled?

I'm new to kernel development, and I need to write a Linux kernel module that performs several matrix multiplications (I'm working on an x86_64 platform). I'm trying to use fixed-point values for these operations, however during compilation, the compiler encounters this error:
error: SSE register return with SSE disabled
I don't know that much about SSE or this issue in particular, but from what I've found and according to most answers to questions about this problem, it is related to the usage of floating-point (FP) arithmetic in kernel space, which is rarely a good idea (hence the use of fixed-point arithmetic). This error seems weird to me because I'm pretty sure I'm not using any FP values or operations, however it keeps popping up, and in ways that seem weird to me. For instance, I have this block of code:
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/module.h>
const int scale = 16;
#define DOUBLE_TO_FIXED(x) ((x) * (1 << scale))
#define FIXED_TO_DOUBLE(x) ((x) / (1 << scale))
#define MULT(x, y) ((((x) >> 8) * ((y) >> 8)) >> 0)
#define DIV(x, y) (((x) << 8) / (y) << 8)
#define OUTPUT_ROWS 6
#define OUTPUT_COLUMNS 2
struct matrix {
int rows;
int cols;
double *data;
};
double outputlayer_weights[OUTPUT_ROWS * OUTPUT_COLUMNS] =
{
0.7977986, -0.77172316,
-0.43078753, 0.67738613,
-1.04312621, 1.0552227 ,
-0.32619684, 0.14119884,
-0.72325027, 0.64673559,
0.58467862, -0.06229197
};
...
void matmul (struct matrix *A, struct matrix *B, struct matrix *C) {
int i, j, k, a, b, sum, fixed_prod;
if (A->cols != B->rows) {
return;
}
for (i = 0; i < A->rows; i++) {
for (j = 0; j < B->cols; j++) {
sum = 0;
for (k = 0; k < A->cols; k++) {
a = DOUBLE_TO_FIXED(A->data[i * A->rows + k]);
b = DOUBLE_TO_FIXED(B->data[k * B->rows + j]);
fixed_prod = MULT(a, b);
sum += fixed_prod;
}
/* Commented the following line, causes error */
//C->data[i * C->rows + j] = sum;
}
}
}
...
static int __init insert_matmul_init (void)
{
printk(KERN_INFO "INSERTING MATMUL");
return 0;
}
static void __exit insert_matmul_exit (void)
{
printk(KERN_INFO "REMOVING MATMUL");
}
module_init (insert_matmul_init);
module_exit (insert_matmul_exit);
which compiles with no errors (I left out code that I found irrelevant to the problem). I have made sure to comment any error-prone lines to get to a point where the program can be compiled with no errors, and I am trying to solve each of them one by one. However, when uncommenting this line:
C->data[i * C->rows + j] = sum;
I get this error message in a previous (unmodified) line of code:
error: SSE register return with SSE disabled
sum += fixed_prod;
~~~~^~~~~~~~~~~~~
From what I understand, there are no FP operations taking place, at least in this section, so I need help figuring out what might be causing this error. Maybe my fixed-point implementation is flawed (I'm no expert in that matter either), or maybe I'm missing something obvious. Just in case, I have tested the same logic in a user-space program (using Floating-Point values) and it seems to work fine. In either case, any help in solving this issue would be appreciated. Thanks in advance!
Edit: I have included the definition of matrix and an example matrix. I have been using the default kbuild command for building external modules, here is what my Makefile looks like:
obj-m = matrix_mult.o
KVERSION = $(shell uname -r)
all:
make -C /lib/modules/$(KVERSION)/build M=$(PWD) modules
Linux compiles kernel code with -mgeneral-regs-only on x86, which produces this error in functions that do anything with FP or SIMD. (Except via inline asm, because then the compiler doesn't see the FP instructions, only the assembler does.)
From what I understand, there are no FP operations taking place, at least in this section, so I need help figuring out what might be causing this error.
GCC optimizes whole functions when optimization is enabled, and you are using FP inside that function. You're doing FP multiply and truncating conversion to integer with your macro and assigning the result to an int, since the MCVE you eventually provided shows struct matrix containing double *data.
If you stop the compiler from using FP instructions (like Linux does by building with -mgeneral-regs-only), it refuses to compile your file instead of doing software floating-point.
The only odd thing is that it pins down the error to an integer +=, instead of one of the statements that compile to a mulsd and cvttsd2si.
If you disable optimization (-O0 -mgeneral-regs-only) you get a more obvious location for the same error (https://godbolt.org/z/Tv5nG6nd4):
<source>: In function 'void matmul(matrix*, matrix*, matrix*)':
<source>:9:33: error: SSE register return with SSE disabled
9 | #define DOUBLE_TO_FIXED(x) ((x) * (1 << scale))
| ~~~~~^~~~~~~~~~~~~~~
<source>:46:21: note: in expansion of macro 'DOUBLE_TO_FIXED'
46 | a = DOUBLE_TO_FIXED(A->data[i * A->rows + k]);
| ^~~~~~~~~~~~~~~
If you really want to know what's going on with the GCC internals, you could dig into it with -fdump-tree-... options, e.g. on the Godbolt compiler explorer there's a dropdown for GCC Tree / RTL output that would let you look at the GIMPLE or RTL internal representation of your function's logic after various analyzer passes.
But if you just want to know whether there's a way to make this function work: no, obviously not, unless you compile a file without -mgeneral-regs-only. All functions in a file compiled that way must only be called by callers that have used kernel_fpu_begin() before the call (and kernel_fpu_end() after).
You can't safely use kernel_fpu_begin inside a function compiled to allow it to use SSE / x87 registers; it might already have corrupted user-space FPU state before calling the function, after optimization. The symptom of getting this wrong is not a fault, it's corrupting user-space state, so don't assume that happens to work = correct. Also, depending on how GCC optimizes, the code-gen might be fine with your version, but might be broken with earlier or later GCC or clang versions. I somewhat expect that kernel_fpu_begin() at the top of this function would get called before the compiler did anything with FP instructions, but that doesn't mean it would be safe and correct.
See also Generate and optimize FP / SIMD code in the Linux Kernel on files which contains kernel_fpu_begin()?
Apparently -msse2 overrides -mgeneral-regs-only, so the latter is probably just an alias for -mno-mmx -mno-sse plus whatever option disables x87. So you might be able to use __attribute__((target("sse2"))) on a function without changing build options for it, but that would be x86-specific. Of course, so is -mgeneral-regs-only. And there isn't a -mno-general-regs-only option to override the kernel's normal CFLAGS.
I don't have a specific suggestion for the best way to set up a build option if you really do think it's worth using kernel_fpu_begin at all, here (rather than using fixed-point the whole way through).
Obviously if you do save/restore the FPU state, you might as well use it for the loop instead of using FP to convert to fixed-point and back.

update integer array elements atomically C++

Given a shared array of integer counters, I am interested to know if a thread can atomically fetch and add an array element without locking the entire array?
Here's an illustration of working model that uses mutex to lock access to the entire array.
// thread-shared class members
std::mutex count_array_mutex_;
std::vector<int> counter_array_( 100ish );
// Thread critical section
int counter_index = ... // unpredictable index
int current_count;
{
std::lock_guard<std::mutex> lock(count_array_mutex_);
current_count = counter_array_[counter_index] ++;
}
// ... do stuff using current_count.
I'd like multiple threads to be able to fetch-add separate array elements simultaneously.
So far, in my research of std::atomic<int> I'm thrown off that constructing the atomic object also constructs the protected member. (And plenty of answers explaining why you can't make a std::vector<std::atomic<int> > )
C++20 / C++2a (or whatever you want to call it) will add std::atomic_ref<T> which lets you do atomic operations on an object that wasn't atomic<T> to start with.
It's not available yet as part of the standard library for most compilers, but there is a working implementation for gcc/clang/ICC / other compilers with GNU extensions.
Previously, atomic access to "plain" data was only available with some platform-specific functions like Microsoft's LONG InterlockedExchange(LONG volatile *Target, LONG Value); or GNU C / C++
type __atomic_add_fetch (type *ptr, type val, int memorder) (the same builtins that C++ libraries for GNU compilers use to implement std::atomic<T>.)
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2018/p0019r8.html includes some intro stuff about the motivation. CPUs can easily do this, compilers can already do this, and it's been annoying that C++ didn't expose this capability portably.
So instead of having to wrestle with C++ to get all the non-atomic allocation and init done in a constructor, you can just have every access create an atomic_ref to the element you want to access. (It's free to instantiate as a local, at least when it's lock-free, on any "normal" C++ implementations).
This will even let you do things like resize the std::vector<int> after you've ensured no other threads are accessing the vector elements or the vector control block itself. And then you can signal the other threads to resume.
It's not yet implemented in libstdc++ or libc++ for gcc/clang.
#include <vector>
#include <atomic>
#define Foo std // this atomic_ref.hpp puts it in namespace Foo, not std.
// current raw url for https://github.com/ORNL/cpp-proposals-pub/blob/master/P0019/atomic_ref.hpp
#include "https://raw.githubusercontent.com/ORNL/cpp-proposals-pub/580934e3b8cf886e09accedbb25e8be2d83304ae/P0019/atomic_ref.hpp"
void inc_element(std::vector<int> &v, size_t idx)
{
v[idx]++;
}
void atomic_inc_element(std::vector<int> &v, size_t idx)
{
std::atomic_ref<int> elem(v[idx]);
static_assert(decltype(elem)::is_always_lock_free,
"performance is going to suck without lock-free atomic_ref<T>");
elem.fetch_add(1, std::memory_order_relaxed); // take your pick of memory order here
}
For x86-64, these compile exactly the way we'd hope with GCC,
using the sample implementation (for compilers implementing GNU extensions) linked in the C++ working-group proposal. https://github.com/ORNL/cpp-proposals-pub/blob/master/P0019/atomic_ref.hpp
From the Godbolt compiler explorer with g++8.2 -Wall -O3 -std=gnu++2a:
inc_element(std::vector<int, std::allocator<int> >&, unsigned long):
mov rax, QWORD PTR [rdi] # load the pointer member of std::vector
add DWORD PTR [rax+rsi*4], 1 # and index it as a memory destination
ret
atomic_inc_element(std::vector<int, std::allocator<int> >&, unsigned long):
mov rax, QWORD PTR [rdi]
lock add DWORD PTR [rax+rsi*4], 1 # same but atomic RMW
ret
The atomic version is identical except it uses a lock prefix to make the read-modify-write atomic, by making sure no other core can read or write the cache line while this core is in the middle of atomically modifying it. Just in case you were curious how atomics work in asm.
Most non-x86 ISAs like AArch64 of course require a LL/SC retry loop to implement an atomic RMW, even with relaxed memory order.
The point here is that constructing / destructing the atomic_ref doesn't cost anything. Its member pointer fully optimizes away. So this is exactly as cheap as a vector<atomic<int>>, but without the headache.
As long as you're careful not to create data-race UB by resizing the vector, or accessing an element without going through atomic_ref. (It would potentially manifest as a use-after-free on many real implementations if std::vector reallocated the memory in parallel with another thread indexing into it, and of course you'd be atomically modifying a stale copy.)
This definitely gives you rope to hang yourself if you don't carefully respect the fact that the std::vector object itself is not atomic, and also that the compiler won't stop you from doing non-atomic access to the underlying v[idx] after other threads have started using it.
One way:
// Create.
std::vector<std::atomic<int>> v(100);
// Initialize.
for(auto& e : v)
e.store(0, std::memory_order_relaxed);
// Atomically increment.
auto unpredictable_index = std::rand() % v.size();
int old = v[unpredictable_index].fetch_add(1, std::memory_order_relaxed);
Note that the std::atomic<> copy constructor is deleted, so the vector cannot be resized and needs to be initialized with the final count of elements.
Since the resize functionality of std::vector is lost anyway, instead of std::vector you may as well use std::unique_ptr<std::atomic<int>[]>, e.g.:
// Create.
unsigned const N = 100;
std::unique_ptr<std::atomic<int>[]> p(new std::atomic<int>[N]);
// Initialize.
for(unsigned i = 0; i < N; ++i)
p[i].store(0, std::memory_order_relaxed);
// Atomically increment.
auto unpredictable_index = std::rand() % N;
int old = p[unpredictable_index].fetch_add(1, std::memory_order_relaxed);

What is the role of the clobber list? [duplicate]

This function "strcpy" aims to copy the content of src to dest, and it works out just fine: it displays two lines of "Hello_src".
#include <stdio.h>
static inline char * strcpy(char * dest,const char *src)
{
int d0, d1, d2;
__asm__ __volatile__("1:\tlodsb\n\t"
"stosb\n\t"
"testb %%al,%%al\n\t"
"jne 1b"
: "=&S" (d0), "=&D" (d1), "=&a" (d2)
: "0"(src),"1"(dest)
: "memory");
return dest;
}
int main(void) {
char src_main[] = "Hello_src";
char dest_main[] = "Hello_des";
strcpy(dest_main, src_main);
puts(src_main);
puts(dest_main);
return 0;
}
I tried to change the line : "0"(src),"1"(dest) to : "S"(src),"D"(dest), and the error occurred: ‘asm’ operand has impossible constraints. I just cannot understand. I thought that "0"/"1" here specified the same constraint as the 0th/1st output variable. The constraint of the 0th output is =&S, the constraint of the 1st output is =&D. If I change 0-->S, 1-->D, there shouldn't be anything wrong. What's the matter with it?
Does the "clobbered registers" list or the earlyclobber modifier (&) have any use? I tried removing "&" or "memory"; the result in either case is the same as the original: two lines of "Hello_src" are output. So why should I use the "clobbered" things?
The earlyclobber & means that the particular output is written before the inputs are consumed. As such, the compiler may not allocate any input to the same register. Apparently using the 0/1 style overrides that behavior.
Of course the clobber list also has important use. The compiler does not parse your assembly code. It needs the clobber list to figure out which registers your code will modify. You'd better not lie, or subtle bugs may creep in. If you want to see its effect, try to trick the compiler into using a register around your asm block:
extern int foo();
int bar()
{
int x = foo();
asm("nop" ::: "eax");
return x;
}
Relevant part of the generated assembly code:
call foo
movl %eax, %edx
nop
movl %edx, %eax
Notice how the compiler had to save the return value from foo into edx because it believed that eax will be modified. Normally it would just leave it in eax, since that's where it will be needed later. Here you can imagine what would happen if your asm code did modify eax without telling the compiler: the return value would be overwritten.

How do I ask the assembler to "give me a full size register"?

I'm trying to allow the assembler to give me a register it chooses, and then use that register with inline assembly. I'm working with the program below, and it's segfaulting. The program was compiled with g++ -O1 -g2 -m64 wipe.cpp -o wipe.exe.
When I look at the crash under lldb, I believe I'm getting a 32-bit register rather than a 64-bit register. I'm trying to compute an address (base + offset) using lea, and store the result in a register the assembler chooses:
"lea (%0, %1), %2\n"
Above, I'm trying to say "use a register, and I'll refer to it as %2".
When I perform a disassembly, I see:
0x100000b29: leal (%rbx,%rsi), %edi
-> 0x100000b2c: movb $0x0, (%edi)
So it appears the code being generated calculates an address using 64-bit values (rbx and rsi), but saves it to a 32-bit register (edi) that the assembler chose.
Here are the values at the time of the crash:
(lldb) type format add --format hex register
(lldb) p $edi
(unsigned int) $3 = 1063330
(lldb) p $rbx
(unsigned long) $4 = 4296030616
(lldb) p $rsi
(unsigned long) $5 = 10
A quick note on the Input Operands below. If I drop the "r" (2), then I get a compiler error when I refer to %2 in the call to lea: invalid operand number in inline asm string.
How do I tell the assembler to "give me a full size register" and then refer to it in my program?
int main(int argc, char* argv[])
{
string s("Hello world");
cout << s << endl;
char* ptr = &s[0];
size_t size = s.length();
if(ptr && size)
{
__asm__ __volatile__
(
"%=:\n" /* generate a unique label for TOP */
"subq $1, %1\n" /* 0-based index */
"lea (%0, %1), %2\n" /* calculate ptr[idx] */
"movb $0, (%2)\n" /* 0 -> ptr[size - 1] .. ptr[0] */
"jnz %=b\n" /* Back to TOP if non-zero */
: /* no output */
: "r" (ptr), "r" (size), "r" (2)
: "0", "1", "2", "cc"
);
}
return 0;
}
Sorry about these inline assembly questions. I hope this is the last one. I'm not really thrilled with using inline assembly in GCC because of pain points like this (and my fading memory). But it's the only legal way I know to do what I want to do, given GCC's interpretation of the qualifier volatile in C.
If interested: GCC interprets C's volatile qualifier as applying to hardware-backed memory; any other use is an abuse and results in an illegal program. So the following is not legal for GCC:
volatile void* g_tame_the_optimizer = NULL;
...
unsigned char* ptr = ...
size_t size = ...;
for(size_t i = 0; i < size; i++)
ptr[i] = 0x00;
g_tame_the_optimizer = ptr;
Interestingly, Microsoft uses a more customary interpretation of volatile (what most programmers expect - namely, anything can change the memory, and not just memory mapped hardware), and the code above is acceptable.
gcc inline asm is a complicated beast. "r" (2) means allocate an int-sized register and load it with the value 2. If you just need an arbitrary scratch register, you can declare a 64-bit early-clobber dummy output, such as "=&r" (dummy) in the output section, with void *dummy declared earlier. You can consult the gcc manual for more details.
As to the final code snippet: it looks like you want a memory barrier, just as the linked email says. See the manual for an example.

Any constraints to refer to the high half of a register in gcc inline assembly?

In my C code there is some inline assembly calling a PCI BIOS service. Now the problem is that one of the results is returned in the %ah register, but I can't find a constraint to refer to that register.
What I want is to write like following:
asm("lcall *%[call_addr]" : "something here"(status) :);
and the variable status contains the value of %ah register.
If I use "=a"(status) and add a mov %%ah, %%al instruction it will work. But it seems ugly.
Any suggestions?
I don't think there is a way to specify %ah in a constraint - the GCC x86 backend knows that sub-registers contain particular parts of values, but doesn't really treat them as independent entities.
Your approach will work; another option is to shift status down outside the inline assembler, e.g. this:
unsigned int foo(void)
{
unsigned int status;
asm("movb $0x12, %%ah; movb $0x34, %%al" : "=a"(status) : );
return (status >> 8) & 0xff;
}
...implements the (status >> 8) & 0xff as a single movzbl %ah, %eax instruction.
A third option is to use a small structure:
unsigned int bar(void)
{
struct { uint8_t al; uint8_t ah; } status;
asm("movb $0x12, %%ah; movb $0x34, %%al" : "=a"(status) : );
return status.ah;
}
I'm not sure whether this is nicer or not - it seems a little more self-documenting, but the use of a small structure with a register constraint looks less obviously correct. However, it generates the same code as foo() above.
(Disclaimer: code generation tested with gcc 4.3.2 only; results may well differ on other versions.)
