Invalid operands for binary AND (&) - gcc

I have this "assembly" file (containing only directives)
// declare protected region as somewhere within the stack
.equiv prot_start, $stack_top & 0xFFFFFF00 - 0x1400
.equiv prot_end, $stack_top & 0xFFFFFF00 - 0x0C00
Combined with this linker script:
SECTIONS {
    "$stack_top" = 0x10000;
}
Assembling produces this output:
file.s: Assembler messages:
file.s: Error: invalid operands (*UND* and *ABS* sections) for `&' when setting `prot_start'
file.s: Error: invalid operands (*UND* and *ABS* sections) for `&' when setting `prot_end'
How can I make this work?

Why is it not possible?
You have linked to the GAS docs, but what is the rationale for that inability?
Answer: GAS must communicate operations to the linker through the ELF object file, and the only operations that can be conveyed this way are + and - (- is just + with a negative value). So this is a fundamental limitation of the ELF format, and not just laziness on the part of the GAS devs.
When GAS assembles to the object file, a link step will follow, and it is relocation that determines the final value of the symbol.
Question: why can + be conveyed, but not &?
Answer: because + is associative: (a + b) + c == a + (b + c). But + and & do not associate with each other: (a & b) + c != a & (b + c). For example, with a = 0xF0, b = 0x0F and c = 1, we get (a & b) + c == 1 but a & (b + c) == 0x10.
Let us see how + is conveyed through the ELF format to convince ourselves that & is not possible.
First learn what relocation is if you are not familiar with it: https://stackoverflow.com/a/30507725/895245
Let's minimize your example with another that would generate the same error:
a: .long s
b: .long s + 0x12345678
/* c: .long s & 1 */
s:
Compile and decompile:
as --32 -o main.o main.S
objdump -dzr main.o
The output contains:
00000000 <a>:
   0:   08 00                   or     %al,(%eax)
                        0: R_386_32    .text
   2:   00 00                   add    %al,(%eax)

00000004 <b>:
   4:   80 56 34 12             adcb   $0x12,0x34(%esi)
                        4: R_386_32    .text
Ignore the disassembly since this is not code, and look just at the symbols, bytes and relocations.
We have two R_386_32 relocations. From the System V ABI for IA-32 (which defines the ELF format), that type of relocation is calculated as:
S + A
where:
S: the final value of the symbol the relocation points to, here the address of the .text section, which is only known at link time.
A: the addend. IA-32 uses REL-style relocations, so the addend is not a separate field but is stored in the very bytes being relocated:
addend of a before relocation == 08 00 00 00 == 8 in little endian
addend of b before relocation == 80 56 34 12 == 0x12345680 in little endian
When relocation happens:
a will be replaced with:
address of text section + 8
There is a + 8 because s: is at offset 8 in the text section, preceded by two 4-byte longs.
b will be replaced with:
address of text section + (0x12345678 + 8)
==
address of text section + 0x12345680
Aha, so that is why 0x12345680 appeared on the object file!
So as we've just seen, it is possible to express + in the ELF file by simply folding the constant into the stored addend.
But it would not be possible to express & with this mechanism (or any other that I know of), because we don't know what the address of the text section will be until relocation happens, so we can't apply & to it ahead of time.
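To make this concrete, here is a minimal sketch (my illustration, not actual linker source) of everything the linker can do when it applies one R_386_32 entry:

#include <stdint.h>

/* Sketch: apply one REL-style R_386_32 relocation. The only operation
   available is "final symbol value plus stored addend"; there is no
   relocation type that could apply & instead of +. */
static void apply_r_386_32(uint8_t *section, uint32_t r_offset,
                           uint32_t sym_final_addr)
{
    uint32_t *loc = (uint32_t *)(section + r_offset);
    *loc = sym_final_addr + *loc;  /* S + A: the addend A was stored in place */
}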

Darn:
Infix Operators
Infix operators take two arguments, one on either side. Operators have precedence, but operations with equal precedence are performed left to right. Apart from + or -, both arguments must be absolute, and the result is absolute.

Related

undefined reference to `__floatundisf' when using hard float (PowerPC)

I'm building code for PowerPC with hard float and suddenly getting this issue.
I understand that this symbol belongs to gcc's soft-float library. What I don't understand is why it's trying to use that at all, despite my efforts to tell it to use hard float.
make flags:
CFLAGS += -mcpu=750 -mhard-float -ffast-math -fno-math-errno -fsingle-precision-constant -shared -fpic -fno-exceptions -fno-asynchronous-unwind-tables -mrelocatable -fno-builtin -G0 -O3 -I$(GCBASE) -Iinclude -Iinclude/gc -I$(BUILDDIR)
ASFLAGS += -I include -mbroadway -mregnames -mrelocatable --fatal-warnings
LDFLAGS += -nostdlib -mhard-float $(LINKSCRIPTS) -Wl,--nmagic -Wl,--just-symbols=$(GLOBALSYMS)
Code in question:
static void checkTime() {
    u64 ticks = __OSGetSystemTime();
    //note timestamp here is seconds since 2000-01-01
    float secs = ticks / 81000000.0f; //everything says this should be 162m / 4,
                                      //but I only seem to get anything sensible with 162m / 2.
    int days  = secs / 86400.0f;      //non-leap days
    int years = secs / 31556908.8f;   //approximate average
    int yDay  = days % 365;
    debugPrintf("Y %d D %d", years, yDay);
}
What more do I need to stop gcc trying to use soft float? Why has it suddenly decided to do that?
Looking at the GCC docs, __floatundisf converts an unsigned 64-bit integer to a float. If we compile your code* with -O1 and run objdump, we can see that the __floatundisf indeed comes from dividing your u64 by a float:
    u64 ticks = __OSGetSystemTime();
  20:   48 00 00 01     bl      20 <checkTime+0x20>   # Call __OSGetSystemTime
                        20: R_PPC_PLTREL24 __OSGetSystemTime
    //note timestamp here is seconds since 2000-01-01
    float secs = ticks / 81000000.0f; //everything says this should be 162m / 4,
  24:   48 00 00 01     bl      24 <checkTime+0x24>   # Call __floatundisf
                        24: R_PPC_PLTREL24 __floatundisf
  28:   81 3e 00 00     lwz     r9,0(r30)
                        2a: R_PPC_GOT16 .LC0
  2c:   c0 09 00 00     lfs     f0,0(r9)              # load the constant 1/81000000
  30:   ec 21 00 32     fmuls   f1,f1,f0              # do the multiplication ticks * 1/81000000
So you're getting it for a u64 / float calculation.
If you convert the u64 to a u32, I also see the call go away.
Why is it generated? Looking at the manual for the 750CL, which I'm hoping is largely equivalent to your chip, there's no instruction that will read an 8-byte integer from memory and convert it to a float. (It looks like there isn't one for directly converting a 32-bit integer to a float either: gcc instead inlines a confusing sequence of integer and float manipulation instructions.)
I don't know what the units for __OSGetSystemTime are, but if you can reduce it to a 32-bit integer by throwing away some lower bits, or by doing some tricks with common divisors, you could get rid of the call.
*: Lightly modified to compile on my system.
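For instance, here is a sketch of the lower-bits trick. The shift amount is arbitrary, and the 81 MHz tick rate and the u32/u64 typedefs are taken from the code above:

/* Hypothetical: drop 7 low bits so the count fits in 32 bits for longer,
   then divide by a correspondingly scaled tick rate. */
u32 ticks32 = (u32)(ticks >> 7);
float secs = ticks32 / (81000000.0f / 128.0f);  /* 128 == 1 << 7 */

This replaces the __floatundisf call with the inline u32-to-float sequence mentioned above, at the cost of a little timer resolution.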

Extending SRecord to handle crc32_mpeg2?

statement of problem:
I'm working with a Kinetis L series (ARM Cortex M0+) that has a dedicated CRC hardware module. Through trial and error and using this excellent online CRC calculator, I determined that the CRC hardware is configured to compute CRC32_MPEG2.
I'd like to use srec_input (a part of SRecord 1.64) to generate a CRC for a .srec file whose results must match the CRC_MPEG2 computed by the hardware. However, srec's built-in CRC algos (CRC32 and STM32) don't generate the same results as the CRC_MPEG2.
the question:
Is there a straightforward way to extend srec to handle CRC32_MPEG2? My current thought is to fork the srec source tree and extend it, but it seems likely that someone's already been down this path.
Alternatively, is there a way for srec to call an external program? (I didn't see one after a quick scan.) That might do the trick as well.
some details
The parameters of the hardware CRC32 algorithm are:
Input Reflected: No
Output Reflected: No
Polynomial: 0x4C11DB7
Initial Seed: 0xFFFFFFFF
Final XOR: 0x0
To test it, an input string of:
0x10 0xB5 0x06 0x4C 0x23 0x78 0x00 0x2B
0x07 0xD1 0x05 0x4B 0x00 0x2B 0x02 0xD0
should result in a CRC32 value of:
0x938F979A
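For reference, those parameters correspond to the classic bit-at-a-time loop below; this is a minimal sketch for checking values on the host, not the vendor driver:

#include <stdint.h>
#include <stddef.h>

/* CRC-32/MPEG-2: polynomial 0x04C11DB7, seed 0xFFFFFFFF,
   no input/output reflection, final XOR 0x0. */
uint32_t crc32_mpeg2(const uint8_t *data, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    while (len--) {
        crc ^= (uint32_t)*data++ << 24;  /* feed the next byte, MSB first */
        for (int bit = 0; bit < 8; bit++)
            crc = (crc & 0x80000000u) ? (crc << 1) ^ 0x04C11DB7u : (crc << 1);
    }
    return crc;  /* final XOR is 0, so return the register as-is */
}

Fed the 16 test bytes above, it should return 0x938F979A.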
what generated the CRC value in the first place?
In response to Mark Adler's well-posed question, the firmware uses the Freescale fsl_crc library to compute the CRC. The relevant code and parameters (mildly edited) follow:
void crc32_update(crc32_data_t *crc32Config, const uint8_t *src, uint32_t lengthInBytes)
{
    crc_config_t crcUserConfigPtr;
    CRC_GetDefaultConfig(&crcUserConfigPtr);

    crcUserConfigPtr.crcBits = kCrcBits32;
    crcUserConfigPtr.seed = 0xffffffff;
    crcUserConfigPtr.polynomial = 0x04c11db7U;
    crcUserConfigPtr.complementChecksum = false;
    crcUserConfigPtr.reflectIn = false;
    crcUserConfigPtr.reflectOut = false;

    CRC_Init(g_crcBase[0], &crcUserConfigPtr);
    CRC_WriteData(g_crcBase[0], src, lengthInBytes);
    crcUserConfigPtr.seed = CRC_Get32bitResult(g_crcBase[0]);

    crc32Config->currentCrc = crcUserConfigPtr.seed;
    crc32Config->byteCountCrc += lengthInBytes;
}
Peter Miller be praised...
It turns out that if you supply enough filters to srec_cat, you can make it do anything! :) In fact, the following arguments produce the correct checksum:
$ srec_cat test.srec -Bit_Reverse -CRC32LE 0x1000 -Bit_Reverse -XOR 0xff -crop 0x1000 0x1004 -Output -HEX_DUMP
00001000: 93 8F 97 9A #....
In other words, bit reverse the bits going into the CRC32 algorithm, bit reverse them on the way out, and one's complement them.
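To see why this works, here is a sketch using zlib's ordinary (reflected) crc32 that mirrors the filter chain: bit-reverse the bytes going in, bit-reverse the 32-bit result, and complement it. Compiled with -lz, it should print 938F979A for the test vector above:

#include <stdio.h>
#include <stdint.h>
#include <stddef.h>
#include <zlib.h>

static uint8_t rev8(uint8_t b)     /* reverse the bits of one byte */
{
    b = (uint8_t)((b >> 4) | (b << 4));
    b = (uint8_t)(((b & 0xCCu) >> 2) | ((b & 0x33u) << 2));
    b = (uint8_t)(((b & 0xAAu) >> 1) | ((b & 0x55u) << 1));
    return b;
}

static uint32_t rev32(uint32_t v)  /* reverse the bits of a 32-bit word */
{
    uint32_t r = 0;
    for (int i = 0; i < 32; i++, v >>= 1)
        r = (r << 1) | (v & 1u);
    return r;
}

int main(void)
{
    const uint8_t msg[] = { 0x10,0xB5,0x06,0x4C,0x23,0x78,0x00,0x2B,
                            0x07,0xD1,0x05,0x4B,0x00,0x2B,0x02,0xD0 };
    uint8_t rev[sizeof msg];
    for (size_t i = 0; i < sizeof msg; i++)
        rev[i] = rev8(msg[i]);
    /* reflect in, standard CRC-32, reflect out, complement */
    uint32_t crc = ~rev32((uint32_t)crc32(0L, rev, (uInt)sizeof rev));
    printf("%08X\n", crc);         /* expect 938F979A */
    return 0;
}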

how to find xor key/algorithm, for a given hex?

So I have this hex: B0 32 B6 B4 37
I know this hex is obfuscated with some key/algorithm.
I also know this hex is equal to: 61 64 6d 69 6e (admin)
How can I calculate the XOR key for this?
If you write out the binary representation, you can see the pattern:

encoded       decoded
10110000  ->  01100001
00110010  ->  01100100

Notice that each bit pattern has the same number of set bits before and after, which points to a rotation rather than an XOR. To decode, you just rotate one bit left: the value shifts left one place and the most significant bit wraps around to the least significant place. To encode, do the opposite.
int value, encoded_value;
encoded_value = 0xB0;
value = ((encoded_value << 1) | (encoded_value >> 7)) & 255;  // rotate left by 1
// value will be 0x61
encoded_value = ((value >> 1) | (value << 7)) & 255;          // rotate right to re-encode
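Applied to the whole buffer, a quick sketch:

#include <stdio.h>

int main(void)
{
    const unsigned char enc[] = { 0xB0, 0x32, 0xB6, 0xB4, 0x37 };
    for (int i = 0; i < 5; i++)  /* rotate each byte left by one bit */
        putchar(((enc[i] << 1) | (enc[i] >> 7)) & 255);
    putchar('\n');               /* prints "admin" */
    return 0;
}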

getelementptr has -1 as the first index operand

I'm reading the IR of nginx generated by Clang. In function ngx_event_expire_timers, there are some getelementptr instructions with i64 -1 as first index operand. For example,
%handler = getelementptr inbounds %struct.ngx_rbtree_node_s, %struct.ngx_rbtree_node_s* %node.addr.0.i, i64 -1, i32 2
I know the first index operand will be used as an offset to the first operand. But what does a negative offset mean?
The GEP instruction is perfectly fine with negative indices.
In this case you have something like:
node arr[100];
node* ptr = &arr[50];
if ( (ptr-1)->value == ptr->value )
    // then ...
A GEP with negative indices just calculates the offset from the base pointer in the other direction. There is nothing wrong with it.
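Ordinary C pointer arithmetic is enough to produce one; for example, clang compiles something like this function to a GEP with index -1 (the IR names here are illustrative):

long prev(long *p)
{
    /* roughly: %arrayidx = getelementptr inbounds i64, i64* %p, i64 -1 */
    return p[-1];
}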
Considering what the nginx source code is doing here, the semantics of this getelementptr instruction are interesting. It's the result of two lines of C source code:
ev = (ngx_event_t *) ((char *) node - offsetof(ngx_event_t, timer));
ev->handler(ev);
node is of type ngx_rbtree_node_t, which is a member of ev's type ngx_event_t. That is like:

struct ngx_event_t {
    ....
    struct ngx_rbtree_node_t timer;
    ....
};
struct ngx_event_t *ev;
struct ngx_rbtree_node_t *node;

timer is the name of the struct ngx_event_t member that node points to.
|<- ngx_rbtree_node_t ->|
|<---------------------- ngx_event_t ---------------------->|
-------------------------------------------------------------
|      (some data)      |        "timer"        | (some data)
-------------------------------------------------------------
^                       ^
ev                      node
The graph above shows the layout of an instance of ngx_event_t. The result of offsetof(ngx_event_t, timer) is 40. That means the data before timer occupies 40 bytes. And the size of ngx_rbtree_node_t is also 40 bytes, by coincidence. So the i64 -1 in the first index operand of the getelementptr instruction computes the base address of the ngx_event_t containing node, which lies 40 bytes before node.
handler is another member of ngx_event_t, located 16 bytes past the base of ngx_event_t. By (another) coincidence, the third member of ngx_rbtree_node_t is also located 16 bytes past the base address of ngx_rbtree_node_t. So the i32 2 in the getelementptr instruction adds 16 bytes to ev, giving the address of handler.
Note that the 16 bytes is computed from the layout of ngx_rbtree_node_t, not ngx_event_t. Clang must have done some computation to ensure the correctness of the getelementptr instruction. Before the value of %handler is used, a bitcast instruction casts %handler to a function pointer type.
What Clang has done diverges from the type-level transformations written in the C source code, but the result is exactly the same.
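The C idiom being compiled here is the usual container_of pattern; a minimal sketch of it, using the same names as above:

#include <stddef.h>

/* Recover a pointer to the enclosing struct from a pointer to one of its
   members; ev = container_of(node, ngx_event_t, timer) is what the two
   nginx lines above spell out by hand. */
#define container_of(ptr, type, member) \
    ((type *)((char *)(ptr) - offsetof(type, member)))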

Extracting numbers from a 32-bit integer

I'm trying to solve a riddle in a programming test.
Disclaimer: It's a test for a job, but I'm not looking for an answer. I'm just looking for an understanding of how to do this. The test requires that I come up with a set of solutions to a set of problems within 2 weeks, and it doesn't state a requirement that I arrive at the solutions in isolation.
So, the problem:
I have a 32-bit number with the bits arranged like this:
siiiiiii iiiiiiii ifffffff ffffffff
Where:
s is the sign bit (1 == negative)
i is 16 integer bits
f is 15 fraction bits
The assignment is to write something that decodes a 32-bit integer into a floating-point number. Given the following inputs, it should produce the following outputs:
input         output
0x00008000      1.0
0x80008000     -1.0
0x00010000      2.0
0x80014000     -2.5
0x000191eb      3.14
0x00327eb8    100.99
I'm having no trouble getting the sign bit or the integer part of the number. I get the sign bit like this:
boolean signed = ((value & (1 << 31)) != 0);
I get the integer and fraction parts like this:
int wholePart = ((value & 0x0FFFFFFF) >> 15);
int fractionPart = ((value & 0x0000FFFF >> 1));
The part I'm having an issue with is getting the number in the last 15 bits to match the expected values.
Instead of 3.14, I get 3.4587, etc.
If someone could give me a hint about what I'm doing wrong, I'd appreciate it. More than anything else, the fact that I haven't figured this out after hours of messing with it is kind of driving me nuts. :-)
Company's inputs aren't wrong. The fractional bits don't represent the literal digits right of the decimal point, they represent the fractional part. Don't know how else to say it without giving it away. Would it be too big a hint to say there is a divide involved?
A few things...
Why not get the fractional part as
int fractionPart = value & 0x00007FFF; // i.e. no shifting needed...
Similarly, no shifting needed for the sign
boolean signed = ((value & 0x80000000) != 0); // signed is true when negative
See Ryan's response for the effective use of the fractional part, i.e. not taking this literally as the digit values for the decimal part but rather... something involving a fraction...
Have a look at what you're anding the fraction part with prior to the shift.
Shift Right 31 gives you the sign bit (1 = Neg, 0 = Pos)

BEFORE  siiiiiii iiiiiiii ifffffff ffffffff
SHR 31  00000000 00000000 00000000 0000000s

Shift Left 1 followed by Shift Right 16 gives you the Integer bits

BEFORE  siiiiiii iiiiiiii ifffffff ffffffff
SHL 1   iiiiiiii iiiiiiii ffffffff fffffff0
SHR 16  00000000 00000000 iiiiiiii iiiiiiii

Shift Left 17 followed by Shift Right 17 gives you the Fraction bits

BEFORE  siiiiiii iiiiiiii ifffffff ffffffff
SHL 17  ffffffff fffffff0 00000000 00000000
SHR 17  00000000 00000000 0fffffff ffffffff
int wholePart = ((value & 0x7FFFFFFF) >> 15);
int fractionPart = (value & 0x00007FFF);
Key your bit-mask into Calculator in Binary mode and then flip it to Hex...
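Putting the hints together, a decode sketch (my code, not the assignment's reference solution): the 15 fraction bits form a binary fraction, so the divide involved is by 2^15 = 32768.

#include <stdint.h>

float decode_s16_15(uint32_t value)
{
    float sign     = (value & 0x80000000u) ? -1.0f : 1.0f;
    uint32_t whole = (value & 0x7FFFFFFFu) >> 15;  /* 16 integer bits  */
    uint32_t frac  =  value & 0x00007FFFu;         /* 15 fraction bits */
    return sign * ((float)whole + (float)frac / 32768.0f);
}

Checking against the table above: 0x80014000 gives -(2 + 16384/32768) = -2.5, and 0x000191eb gives 3 + 4587/32768 ≈ 3.14.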
