On Linux, feenableexcept and fedisableexcept can be used to control the generation of SIGFPE interrupts on floating point exceptions. How can I do this on Mac OS X Intel?
Inline assembly for enabling floating point interrupts is provided in http://developer.apple.com/documentation/Performance/Conceptual/Mac_OSX_Numerics/Mac_OSX_Numerics.pdf, pp. 7-15, but only for PowerPC assembly.
Exceptions for SSE can be enabled using _MM_SET_EXCEPTION_MASK from xmmintrin.h. For example, to enable invalid (NaN) exceptions, do
#include <xmmintrin.h>
...
_MM_SET_EXCEPTION_MASK(_MM_GET_EXCEPTION_MASK() & ~_MM_MASK_INVALID);
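For a fuller picture, here is a minimal sketch of a test program built around that call. It assumes the default SSE math on OS X/Linux; the handler name and the 0.0/0.0 trigger are just illustrative, and the handler exits rather than returns, since returning from a SIGFPE handler would re-execute the faulting instruction.

#include <xmmintrin.h>
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static void on_fpe(int sig) {
    (void)sig;
    write(2, "caught SIGFPE\n", 14);   // async-signal-safe report
    _exit(1);
}

int main(void) {
    signal(SIGFPE, on_fpe);
    // Unmask invalid-operation and divide-by-zero exceptions in MXCSR.
    _MM_SET_EXCEPTION_MASK(_MM_GET_EXCEPTION_MASK()
                           & ~(_MM_MASK_INVALID | _MM_MASK_DIV_ZERO));

    volatile double zero = 0.0;
    volatile double r = zero / zero;   // invalid operation -> SIGFPE
    printf("no trap: %f\n", r);        // only reached if the exception stayed masked
    return 0;
}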
On Mac OS X this is moderately complicated. OS X uses the SSE unit for all FP math by default, not the x87 FP unit. The SSE unit does not honor the interrupt options, so that means that in addition to enabling interrupts, you need to make sure to compile all your code not to use SSE math.
You can disable the math by adding "-mno-sse -mno-sse2 -mno-sse3" to your CFLAGS. Once you do that you can use some inline assembly to configure your FP exceptions, with basically the same flags as Linux.
short fpflags = 0x1332;   // x87 control word: this value unmasks invalid, zero-divide and overflow; change it however you want.
asm volatile("fnclex");                       // clear any pending x87 exceptions first
asm volatile("fldcw %0" : : "m"(fpflags));    // load the new control word
The one catch you may find is that since OS X is built entirely around SSE, there may be uncaught bugs. I know there used to be a bug with the signal handler not passing back the proper codes, but that was a few years ago; hopefully it is fixed now.
I wonder how a compiler treats intrinsics.
If one uses SSE2 intrinsics (via #include <emmintrin.h>) and compiles with the -mavx flag, what will the compiler generate? Will it generate AVX or SSE code?
If one uses AVX2 intrinsics (via #include <immintrin.h>) and compiles with the -msse2 flag, what will the compiler generate? Will it generate SSE-only or AVX code?
How do compilers treat intrinsics?
If one uses intrinsics, does it help the compiler understand the dependencies in a loop for better vectorization?
For instance, what's going on here - https://godbolt.org/z/Y4J5OA (Or https://godbolt.org/z/LZOJ2K)?
See all 3 panes.
The Context
I'm trying to build various versions of the same functions with different CPU features (SSE4 and AVX2).
I'm writing the same function once with SSE intrinsics and once with AVX intrinsics.
Let's say they are named MyFunSSE() and MyFunAVX(). Both are in the same file.
How can I make the compiler (the same method should work for MSVC, GCC and ICC) build each of them using only the respective instruction set?
GCC and clang require that you enable all extensions you use. Otherwise it's a compile-time error, like error: inlining failed in call to always_inline ‘__m256d _mm256_mask_loadu_pd(__m256d, __mmask8, const void*)’: target specific option mismatch
Using -march=native or -march=haswell or whatever is preferred over enabling specific extensions, because that also sets appropriate tuning options. And you won't forget useful ones like -mpopcnt, which lets std::bitset::count() inline a popcnt instruction, and makes all variable-count shifts more efficient with BMI2 shlx / shrx (1 uop vs. 3).
MSVC and ICC do not require this, and will let you use intrinsics to emit instructions that they couldn't auto-vectorize with.
You should definitely enable AVX if you use AVX intrinsics. Older MSVC without enabling AVX didn't always use vzeroupper automatically where needed, but that's been fixed for a few years. Still, if your whole program can assume AVX support, definitely tell the compiler about it even for MSVC.
For compilers that support GNU extensions (GCC, clang, ICC), you can use stuff like __attribute__((target("avx"))) on specific functions in a compilation unit. Or better, __attribute__((target("arch=haswell"))) to maybe also set tuning options. (That also enables AVX2 and FMA, which you might not want. And I'm not sure if target attributes do set -mtune=xx). See
https://gcc.gnu.org/onlinedocs/gcc/Common-Function-Attributes.html
__attribute__((target())) will prevent functions from inlining into functions with other target options, so if a function is too small to be worth a call, be careful to apply the attribute to the functions it will inline into instead. Use it on a function containing a loop, not on a helper function called in a loop.
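As a rough sketch of that advice (the function names here are hypothetical, and the target string assumes GCC/clang): mark both the loop function and any small helpers with the same target options, so the helpers can still inline.

#include <immintrin.h>
#include <stddef.h>

// Helper and caller share the same target options, so GCC/clang can inline
// the helper into the loop.
__attribute__((target("avx2,fma")))
static inline __m256 fma_helper(__m256 a, __m256 b, __m256 c) {
    return _mm256_fmadd_ps(a, b, c);
}

__attribute__((target("avx2,fma")))
void scale_add(float *dst, const float *a, const float *b, size_t n) {
    const __m256 ones = _mm256_set1_ps(1.0f);
    for (size_t i = 0; i + 8 <= n; i += 8) {
        __m256 va = _mm256_loadu_ps(a + i);
        __m256 vb = _mm256_loadu_ps(b + i);
        _mm256_storeu_ps(dst + i, fma_helper(va, vb, ones));   // a*b + 1.0
    }
}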
See also
https://gcc.gnu.org/wiki/FunctionMultiVersioning for using different target options on multiple definitions of the same function name, for compiler supported runtime dispatching. But I don't think there's a portable (to MSVC) way to do that.
See specify simd level of a function that compiler can use for more about doing runtime dispatch on GCC/clang.
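On GCC (and recent clang), the lowest-friction form of that is the target_clones attribute, which builds several clones of one function body plus a resolver that picks the best one at load time. A minimal sketch (the function name and clone list are just examples, and this is not portable to MSVC):

__attribute__((target_clones("avx2", "sse4.2", "default")))
float dot(const float *a, const float *b, int n) {
    float sum = 0.0f;
    for (int i = 0; i < n; i++)
        sum += a[i] * b[i];        // each clone auto-vectorizes for its ISA
    return sum;
}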
With MSVC you don't need anything, although like I said I think it's normally a bad idea to use AVX intrinsics without -arch:AVX, so you might be better off putting those in a separate file. But for AVX vs. AVX2 + FMA, or SSE2 vs. SSE4.2, you're fine without anything.
Just #define AVX2_FUNCTION to the empty string instead of __attribute__((target("avx2,fma")))
#include <immintrin.h>
#include <stddef.h>

#if defined(__GNUC__) && !defined(__INTEL_COMPILER)
// apparently ICC doesn't support target attributes, despite supporting GNU C
#define TARGET_HASWELL __attribute__((target("arch=haswell")))
#else
#define TARGET_HASWELL    // empty
// maybe warn if __AVX__ isn't defined for functions where this is used?
// if you need to make sure MSVC uses vzeroupper everywhere needed.
#endif

TARGET_HASWELL
void foo_avx(float *__restrict dst, float *__restrict src)
{
    for (size_t i = 0 ; i < 1024 ; i++) {
        __m256 v = _mm256_loadu_ps(src);
        ...
        ...
    }
}
With GCC and clang, the macro expands to the __attribute__((target)) stuff; with MSVC and ICC it doesn't.
ICC pragma:
https://software.intel.com/en-us/cpp-compiler-developer-guide-and-reference-optimization-parameter documents a pragma which you'd want to put before AVX functions to make sure vzeroupper is used properly in functions that use _mm256 intrinsics.
#pragma intel optimization_parameter target_arch=AVX
For ICC, you could #define TARGET_AVX as this, and always use it on a line by itself before the function, where you can put either an __attribute__ or a pragma. You might also want separate macros for defining vs. declaring functions, if ICC doesn't want this on declarations. And a macro to end a block of AVX functions, if you want to have non-AVX functions after them. (For non-ICC compilers, this would be empty.)
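One hedged way to fold that into the earlier macro scheme, assuming ICC accepts its pragma through the standard _Pragma operator (worth verifying against your ICC version):

#if defined(__INTEL_COMPILER)
  // assumption: ICC accepts the pragma via _Pragma; check your ICC release
  #define TARGET_AVX _Pragma("intel optimization_parameter target_arch=AVX")
#elif defined(__GNUC__)
  #define TARGET_AVX __attribute__((target("avx")))
#else
  #define TARGET_AVX    // MSVC: rely on /arch:AVX for the whole file
#endif

TARGET_AVX
void bar_avx(float *dst, const float *src);   // hypothetical AVX function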
If you compile code with -mavx2 enabled, your compiler will (usually) generate so-called "VEX encoded" instructions. In the case of _mm_loadu_ps, this will generate vmovups instead of movups, which is almost equivalent, except that the latter only modifies the lower 128 bits of the target register, whereas the former zeroes out everything above the lower 128 bits. However, it will only run on machines which support at least AVX. Details on [v]movups are here.
For other instructions like [v]addps, AVX has the additional advantage of allowing three operands (i.e., the target can be different from both sources), which in some cases can avoid copying registers. E.g.,
_mm_mul_ps(_mm_add_ps(a,b), _mm_sub_ps(a,b));
requires a register copy (movaps) when compiled for SSE, but not when compiled for AVX:
https://godbolt.org/z/YHN5OA
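If you want to reproduce that comparison yourself, here is a self-contained version of the expression (the wrapper name is just for illustration) that you can paste into a compiler pane and build once with -msse2 and once with -mavx:

#include <xmmintrin.h>

// With SSE the compiler needs an extra movaps to keep one input live;
// with AVX the three-operand VEX forms avoid the copy.
__m128 mul_sum_diff(__m128 a, __m128 b) {
    return _mm_mul_ps(_mm_add_ps(a, b), _mm_sub_ps(a, b));
}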
Regarding using AVX intrinsics but compiling without AVX enabled: compilers either fail (like gcc/clang) or silently generate the corresponding instructions, which would then fail at run time on machines without AVX support (see @PeterCordes' answer for details on that).
Addendum: If you want to implement different functions depending on the architecture (at compile-time) you can check that using #ifdef __AVX__ or #if defined(__AVX__): https://godbolt.org/z/ZVAo-7
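For example, a compile-time selection using the question's MyFunSSE/MyFunAVX names (their signature here is an assumption) might look like this; __AVX__ is defined by GCC/clang with -mavx and by MSVC with /arch:AVX.

void MyFunSSE(float *dst, const float *src, int n);   // assumed signatures
void MyFunAVX(float *dst, const float *src, int n);

// Resolved entirely by the preprocessor: whichever version matches the
// flags this translation unit was built with gets called.
void MyFun(float *dst, const float *src, int n) {
#if defined(__AVX__)
    MyFunAVX(dst, src, n);
#else
    MyFunSSE(dst, src, n);
#endif
}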
Implementing them in the same compilation unit is difficult, I think. The easiest solutions are to build different shared libraries or even different binaries, and have a small binary which detects the available CPU features and loads the corresponding library/binary. I assume there are related questions on that topic.
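If you can restrict yourself to GCC/clang, their CPU-detection builtins give a lighter-weight in-process alternative to separate libraries. A sketch, again assuming the MyFunSSE/MyFunAVX names and signature from the question:

typedef void (*myfun_ptr)(float *dst, const float *src, int n);

void MyFunSSE(float *dst, const float *src, int n);   // assumed signatures
void MyFunAVX(float *dst, const float *src, int n);

myfun_ptr select_myfun(void) {
    __builtin_cpu_init();                    // initialize the CPU feature data
    if (__builtin_cpu_supports("avx"))
        return MyFunAVX;                     // only when the CPU reports AVX
    return MyFunSSE;
}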
I have to use a Microchip PIC for a new project (needed high pin count on a TQFP60 package with 5V operation).
I have a huge problem, I might miss something (sorry for that in advance).
IDE: MPLAB X 3.51
Compiler: XC8 1.41
The issue is that if I initialize an object to anything other than 0, it is not initialized and is always zero when I reach main().
In the simulator it works, and the object's value is the proper one.
Simple example:
#include <xc.h>

static int x = 0x78;

void main(void) {
    while (x) {
        x++;
    }
    return;
}
In the simulator, x is 0x78 and while(x) is true.
BUT when I load the code onto the PIC18F67K40 using a PICkit 3, x is 0.
This happens even with a simple sprintf, which does nothing because the format string (a char array) is full of zeros.
sprintf(buf, "Number is %u", x);
I cannot initialize any object to anything other than zero.
What is going on? Any help appreciated!
Found the problem. The chip has an errata issue, and I got one of the affected revisions; strangely, Farnell still sells it. Stranger still, the compiler is not prepared for it and does not even give a warning to be careful!
Errata note:
Module: PIC18 Core
3.1 TBLRD requires NVMREG value to point to appropriate memory
The affected silicon revisions of the PIC18FXXK40 devices improperly require the NVMREG<1:0> bits in the NVMCON register to be set for TBLRD access of the various memory regions. The issue is most apparent in compiled C programs when the user defines a const type and the compiler uses TBLRD instructions to retrieve the data from program Flash memory (PFM). The issue is also apparent when the user defines an array in RAM for which the compiler creates start-up code, executed before main(), that uses TBLRD instructions to initialize RAM from PFM.
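For context, a minimal sketch of the two cases the errata describes (the names and values below are made up for illustration): a const object read from program flash through TBLRD, and a RAM object whose initial value is copied from flash by the TBLRD-based startup code. On affected silicon, without a compiler workaround, both reads come back as 0.

#include <xc.h>
#include <stdint.h>

static const uint8_t marker = 0xA5;   // lives in PFM, read via TBLRD
static uint8_t ram_copy = 0x5A;       // RAM, initialized before main() via TBLRD

// Returns 1 if both values read back as expected, 0 on affected parts
// where the TBLRD/NVMREG errata corrupts the reads.
uint8_t flash_reads_ok(void) {
    return (marker == 0xA5) && (ram_copy == 0x5A);
}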
I'd like to use Ada with an STM32F103 microcontroller, but here is the problem: there is no built-in runtime system for it in GNAT 2016. A runtime for another Cortex-M3 microcontroller, TI's, is included (zfp-lm3s), but it seems to need more substantial updates; a simple change of memory size/origin doesn't work.
So, there are some questions:
Does somebody have an RTS for the STM32F103?
Are there any good books about the low-level details of the Cortex-M3 or other ARM microcontrollers?
PS. Using zfp-lm3s raises this error when I try to run the program via GPS:
Loading section .text, size 0x140 lma 0x0
Load failed
The STM32F series is from STMicroelectronics, not TI, so the stm32f4 runtime might seem to be a better starting point.
In particular, the clock code in bsp/setup_pll.adb should need only minor tweaking; use STM’s STM32CubeMX tool (written in Java) to find the magic numbers to set up the clock properly.
You will also find that the assembler code used in bsp/start*.S needs simplifying/porting to the Cortex-M3 part.
My Cortex GNAT Run Time Systems project includes an Arduino Due version (also Cortex-M3), which has startup code written entirely in Ada. I don’t suppose the rest of the code would help a lot, being based on FreeRTOS - you’d have to be very very careful about memory usage.
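The "Load failed ... lma 0x0" in the question also hints at one concrete change: the LM3S parts map flash at address 0, whereas STM32 flash starts at 0x08000000 (with SRAM at 0x20000000). As a rough sketch, the ported runtime's linker memory map for a medium-density STM32F103 would need to look something like this (the lengths are part-specific, so check your data sheet):

/* memory map sketch for an STM32F103xB (128K flash, 20K RAM) */
MEMORY
{
  flash (rx)  : ORIGIN = 0x08000000, LENGTH = 128K
  sram  (rwx) : ORIGIN = 0x20000000, LENGTH = 20K
}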
I stumbled upon this question while looking for a zfp runtime specific to the stm32l0xx boards. It doesn't look like one exists from what I can see, but I did stumble upon this guide to creating a new runtime from AdaCore, which might help anyone stuck with the same issue:
https://blog.adacore.com/porting-the-ada-runtime-to-a-new-arm-board
I'd like to generate pseudo-random ARM instructions. Via assembler directives, I can tell gcc what mode I'm in, and it will complain if I try a set of opcodes and operands that's not legal in that mode, so it must have some internal listing of what can be done in which mode. Where does that live? Would it be easier to extract that info from LLVM?
Is this question "not even wrong"? Should I try a different approach entirely?
To answer my own question, this is actually really easy to do from arm.md and constraints.md in gcc/config/arm/. I probably spent more time asking this question and answering comments on it than I did figuring this out. Turns out I just need to look for 'TARGET_THUMB1', until I get around to implementing thumb2.
For the ARM family the buck stops at the ARM ARM (ARM Architecture Reference Manual). There is an ARM instruction set section and a Thumb instruction set section. Within both, each instruction tells you which generation it belongs to (ARMvX, where X is some number like 4 (ARM7) or 5 (ARM9 time frame), etc.). Since the opcode and pseudo-code are listed for each instruction, you should be able to figure out which are real instructions and which, if any, are syntax shortcuts for another instruction (push and pop, for example).
With the Cortex-M3 and Thumb2 in particular you also need to look at the TRM (Technical Reference Manual) as well. ARM has (I forget the name) a unified syntax they are trying to use that should work on both Thumb and ARM. For example, on ARM you have three-register instructions:
add r1,r1,r2
In Thumb there are only two-register operations:
add r1,r2
The desire is basically to meet in the middle, or more accurately to encourage ARM assemblers to parse Thumb instructions and encode them as the equivalent ARM instruction without complaining. This may have started with Thumb rather than Thumb2; I have always separated the two syntaxes in my code until recently (and I still generally use ARM syntax for ARM and Thumb for Thumb).
And then yes you have to see what the specific implementation of the assembler tool is, in your case binutils. And it sounds like you have found the binutils/gnu secret decoder ring.
How is a breakpoint implemented on PPC (On OS X, to be specific)?
For example, on x86 it's typically done with the INT 3 instruction (0xCC) -- is there an instruction comparable to this for ppc? Or is there some other way they're set/implemented?
With gdb and a function that hexdumps itself, I get 0x7fe00008. This appears to be the tw instruction:
0b01111111111000000000000000001000
011111       primary opcode 31
11111        TO (trap condition) bits: lt, gt, eq, logical lt, logical gt
00000        rA
00000        rB
0000000100   extended opcode 4 (tw)
0            reserved
i.e. compare r0 to r0 and trap on any result.
The GDB disassembly is simply the extended mnemonic trap
EDIT: I'm using "GNU gdb 6.3.50-20050815 (Apple version gdb-696) (Sat Oct 20 18:20:28 GMT 2007)"
EDIT 2: It's also possible that conditional breakpoints will use other forms of tw or twi if the required values are already in registers and the debugger doesn't need to keep track of the hit count.
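To make the mechanism concrete, here is a hedged sketch of what a debugger does with that encoding when it plants a software breakpoint. The read_word/write_word helpers are hypothetical stand-ins for the target-memory access a real debugger performs (on OS X that goes through the Mach vm_read/vm_write task APIs), and a real implementation would also flush the instruction cache for the patched address.

#include <stdint.h>

static const uint32_t PPC_TRAP = 0x7fe00008u;   // tw 31,r0,r0: trap unconditionally

typedef struct {
    uintptr_t addr;     // where the breakpoint was planted
    uint32_t  saved;    // original instruction word, restored on removal
} ppc_breakpoint;

// Hypothetical accessors for the debuggee's memory.
extern uint32_t read_word(uintptr_t addr);
extern void write_word(uintptr_t addr, uint32_t value);

void set_breakpoint(ppc_breakpoint *bp, uintptr_t addr) {
    bp->addr  = addr;
    bp->saved = read_word(addr);      // remember the real instruction
    write_word(addr, PPC_TRAP);       // execution reaching addr now traps
}

void clear_breakpoint(const ppc_breakpoint *bp) {
    write_word(bp->addr, bp->saved);  // put the original instruction back
}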
Besides software breakpoints, PPC also supports hardware breakpoints, implemented via the IABR (and possibly IABR2, depending on the core version) registers. These are instruction breakpoints, but there are also data breakpoints (implemented with DABR and, possibly, DABR2). If your core supports two sets of hardware breakpoint registers (i.e. IABR2 and DABR2 are present), you can do more than just trigger on a specific address: you can specify a whole contiguous range of addresses as a breakpoint target. For data breakpoints, you can also specify whether you want them to trigger on write, read, or any access.
Best guess is a 'tw' or 'twi' instruction.
You could dig into the source code of PPC gdb; OS X probably uses the same functionality as its FreeBSD roots.
PowerPC architectures use "traps".
http://publib.boulder.ibm.com/infocenter/aix/v6r1/index.jsp?topic=/com.ibm.aix.aixassem/doc/alangref/twi.htm
Instruction breakpoints are typically realised with the TRAP instruction or with the IABR debug hardware register.
Example implementations:
ArchLinux, Apple, Wii and Wii U.
I'm told by a reliable (but currently inebriated, so take it with a grain of salt) source that it's a zero instruction which is illegal and causes some sort of system trap.
EDIT: Made into community wiki in case my friend is so drunk that he's talking absolute rubbish :-)