I'm a newbie to kernel development and am trying to understand how core dumps work. I came across the crashkernel memory size for various platforms, e.g. for x86_64 the recommended crashkernel size is said to be 64M, and for ppc64 it's 128M. How are these sizes determined?
I have a 2019 MacBook Pro 16". It has an 8-core Intel Core i9 processor and an AMD Radeon Pro 5500M with 8 GB of GPU RAM.
I have the laptop dual-booting macOS 12.4 and Windows 11.
Running clinfo under Windows tells me, essentially, that OpenCL support is version 2.0, addressing is 64-bit, and the maximum allocatable memory is between 7 and 8 GB.
Running clinfo under macOS tells me that OpenCL support is version 1.2, addressing is 32-bit little-endian, and the maximum allocatable memory is about 2 GB.
I am guessing this means that any OpenCL code I run is then restricted to using 2 GB because of the 32-bit addressing (I thought that limit was 4 GB), but I am wondering a) is this true, and b) if it is true, is there any way to enable OpenCL under macOS to use the full amount of GPU memory?
OpenCL support on macOS is not great and has not been updated/improved for almost a decade. It always maxes out at version 1.2 regardless of hardware.
I'm not sure how clinfo determines "max allocatable memory," but if this refers to CL_DEVICE_MAX_MEM_ALLOC_SIZE, this is not necessarily a hard limit and can be overly conservative at times. 32-bit addressing may introduce a hard limit though. I'd also experiment with allocating your memory as multiple buffers rather than one giant one.
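If it helps, here's a minimal sketch (standard OpenCL 1.2 host API with names as in the spec; error handling trimmed for brevity) that prints what the driver itself reports for the two relevant limits:

```c
/* Minimal sketch: ask the first GPU device what it reports for total and
 * per-allocation memory limits.  Error handling trimmed for brevity.
 * Build on macOS (assumed):  clang demo.c -framework OpenCL -o demo
 */
#include <stdio.h>
#ifdef __APPLE__
#include <OpenCL/opencl.h>
#else
#include <CL/cl.h>
#endif

int main(void)
{
    cl_platform_id platform;
    cl_device_id device;
    cl_ulong global_mem = 0, max_alloc = 0;

    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);

    clGetDeviceInfo(device, CL_DEVICE_GLOBAL_MEM_SIZE,
                    sizeof(global_mem), &global_mem, NULL);
    clGetDeviceInfo(device, CL_DEVICE_MAX_MEM_ALLOC_SIZE,
                    sizeof(max_alloc), &max_alloc, NULL);

    printf("global memory:      %llu MB\n",
           (unsigned long long)(global_mem >> 20));
    printf("max single buffer:  %llu MB\n",
           (unsigned long long)(max_alloc >> 20));
    return 0;
}
```

If the reported max-alloc size really is the binding constraint, splitting your data across several clCreateBuffer allocations that each stay under it is worth a try.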
For serious GPU programming on macOS, it's hard to recommend OpenCL these days - tooling and feature support on Apple's own Metal API is much better, but of course not source compatible with OpenCL and only available on Apple's own platforms. (OpenCL is now also explicitly deprecated on macOS.)
Is it possible to compile a package on ARMv8 and run it on ARMv7?
I am not really experienced in the whole building thing (yet).
I came to this question because my Odroid C1+ fails to compile icinga2 due to its very limited RAM.
The C2 has 2 GB of RAM and will probably do better at this task.
But can I run a C2 (ARMv8) compiled package on my C1+ (ARMv7)?
Is it possible to compile a package on ARMv8 and run it on ARMv7?
That's called cross-compiling and is the usual way ARM code is generated – except that most build machines for ARM binaries are probably x86_64 nowadays. But if you have a compiler running on ARMv8 that targets ARMv7, I don't see a problem.
I am not really experienced in the whole building thing (yet). I came to this question because my Odroid C1+ fails to compile icinga2 due to its very limited RAM. The C2 has 2 GB of RAM and will probably do better at this task.
You know what is much, much better at compiling? A proper PC with more than 4 GB of RAM, massive RAM bandwidth and much higher storage bandwidth, with a heavily pipelined multicore CISC CPU rather than an energy-efficient ARM.
Really, software for embedded systems is usually built on non-embedded computers with cross-compilers. There are definitely different ways to cross-compile something for your C1+ on your PC; I'd generally recommend using the method your Linux distro (if you're using one) has for cross-compiling packages.
ARMv7 is a different platform from ARMv8, so compiling software for ARMv7 on ARMv8 has no advantage over compiling software for ARMv7 on x86. You'll need a cross-compiling toolchain either way.
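If it helps as a starting point, a quick way to sanity-check a cross toolchain is a trivial program like this (the arm-linux-gnueabihf- prefix is just an example of a Debian/Ubuntu-packaged toolchain; use whatever triplet your distro ships):

```c
/* hello.c -- trivial test of a cross toolchain.
 * On an x86_64 host with a Debian/Ubuntu-style toolchain (assumed), build
 * an ARMv7 binary with e.g.:
 *   arm-linux-gnueabihf-gcc -march=armv7-a -o hello hello.c
 * then copy it to the C1+ and run it, or inspect it first with:
 *   file hello        # should report a 32-bit ARM EABI executable
 */
#include <stdio.h>

int main(void)
{
    printf("hello from an ARMv7 cross-build\n");
    return 0;
}
```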
I am using u-boot-2011.12 on my OMAP3 target; the cross toolchain is CodeSourcery arm-none-linux-gnueabi. I compiled u-boot, downloaded it onto the target and booted it, and everything went fine, but I have some questions about the u-boot relocation feature. We know that this feature is based on PIC (position-independent code), and position-independent code is generated by passing the -fpic flag to gcc, but I don't find -fpic in the compile flags. Without PIC, how can u-boot implement the relocation feature?
Remember that when u-boot is running there is no OS yet. It doesn't really need the 'pic' feature used in most user applications. What I'll describe below is for the PowerPC architecture.
u-boot initially runs in NV memory (NAND or NOR). After u-boot initializes most of the peripherals (especially the RAM), it locates the top of RAM, reserves some area for the global data, then copies itself to RAM. u-boot then branches to the code in RAM and modifies the fixups. u-boot is now relocated in RAM.
Look at the start.S file for your architecture and find the relocate_code() function. Then study, study, study...
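To make the idea concrete, here is a rough, illustrative C-style sketch of what the relocation amounts to; the real code is architecture-specific assembly in start.S, and every name below is made up for illustration:

```c
/* Illustrative sketch only -- not u-boot's actual code (that lives in the
 * arch-specific start.S / relocate_code()).  The idea: the image is copied
 * from its link address to a destination near the top of RAM, and every
 * recorded fixup (a word that was patched with an absolute address at link
 * time) is adjusted by the same offset.
 */
#include <stdint.h>
#include <string.h>

void relocate_image(uintptr_t link_addr, uintptr_t dest_addr,
                    size_t image_size,
                    const uintptr_t *fixup_table, size_t num_fixups)
{
    uintptr_t offset = dest_addr - link_addr;

    /* 1. Copy the whole image to its new home in RAM. */
    memcpy((void *)dest_addr, (const void *)link_addr, image_size);

    /* 2. Walk the fixup table and add the relocation offset to every
     *    absolute address the linker recorded. */
    for (size_t i = 0; i < num_fixups; i++) {
        uintptr_t *patched_word = (uintptr_t *)(fixup_table[i] + offset);
        *patched_word += offset;
    }

    /* 3. The real code then flushes caches and branches to
     *    dest_addr + (entry_point - link_addr). */
}
```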
I found this troubling too, and banged my head against this question for a few hours.
Luckily I stumbled upon the following thread on the u-boot mailing list:
http://lists.denx.de/pipermail/u-boot/2010-October/078297.html
What this says is that, at least on ARM, using -fPIC/-fPIE at COMPILE TIME is not necessary to generate position-independent binaries. It eases the task of the runtime loader by doing as much work up-front as possible, but that's all.
Whether you use -fPIC or not, you can always use -pic / -pie at LINK TIME, which will move all position-dependent references to a relocation section. Since no processing was performed at COMPILE TIME to add helpers, expect this section to be larger than when using -fPIC.
They conclude that for their purposes using -fPIC does not have any significant advantage over a link-time only solution.
[edit] See u-boot commit 92d5ecba for reference:
arm: implement ELF relocations
http://git.denx.de/cgi-bin/gitweb.cgi?p=u-boot.git;a=commit;h=92d5ecba47feb9961c3b7525e947866c5f0d2de5
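If you want to see the compile-time difference yourself, a small experiment along these lines should work (the commands are just examples for a GNU toolchain; the file and symbol names are made up):

```c
/* reloc_demo.c -- a tiny experiment to see the difference yourself.
 * Example commands for a GNU toolchain (swap in your
 * arm-none-linux-gnueabi- prefix as needed):
 *
 *   arm-none-linux-gnueabi-gcc -c        -o plain.o reloc_demo.c
 *   arm-none-linux-gnueabi-gcc -c -fPIC  -o pic.o   reloc_demo.c
 *   arm-none-linux-gnueabi-objdump -dr plain.o pic.o
 *
 * The non-PIC object references 'counter' through an absolute address,
 * i.e. a relocation that the linker (or a later fixup pass, as u-boot
 * does) must patch; the -fPIC object goes through the GOT instead.
 */
int counter;

int bump(void)
{
    return ++counter;   /* a global access -- the interesting case */
}
```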
What are the differences between compiling a Mac app in Xcode with the Active Architecture set to i386 vs. x86_64 (chosen in the drop-down at the top left of the main window)? In the Build Settings for the project, the Architecture options are Standard (32/64-bit Universal), 32-bit Universal, and 64-bit Intel. Practically, what do these mean and how does one decide?
Assume one is targeting OS X 10.5 and above. I see in Activity Monitor that compiling for x86_64 results in an app that uses more memory than one compiled for i386. What is the advantage? I know 64-bit is "the future", but given the higher memory usage, would it ever make sense to choose 32-bit?
32/64-bit Universal -- i386, x86_64, ppc
32-bit Universal -- i386, ppc
64-bit Intel -- x86_64 only
ppc64 is no longer supported.
x86_64 binaries are faster for a number of reasons: a faster ABI, more registers, and on many machines (most, and all new machines) the kernel is 64-bit and kernel calls are faster, etc.
While 64-bit has a bit of memory overhead, related mostly to how pointer-heavy your app's data structures are, keep in mind that 32-bit applications drag in the 32-bit versions of all frameworks. If yours is the only 32-bit app on the system, it is going to incur a massive amount of overhead compared to the 64-bit version.
64-bit apps also enjoy the latest and greatest Objective-C ABI: synthesized ivars, non-fragile ivars, unified C++/ObjC exceptions, zero-cost @try blocks, etc., and there are a number of optimizations that are only possible in 64-bit, too.
iOS apps need to run on many different architectures:
armv7: Used in the oldest iOS 7-supporting devices [32-bit]
armv7s: As used in iPhone 5 and 5C [32-bit]
arm64: For the 64-bit ARM processor in iPhone 5S [64-bit]
i386: For the 32-bit simulator
x86_64: Used in the 64-bit simulator
Xcode basically emulates a 32-bit or 64-bit environment based on what is set in Valid Architectures: i386 or x86_64, respectively.
Every architecture requires a different binary, and when you build an app Xcode will build the correct architecture for whatever you’re currently working with. For instance, if you’ve asked it to run in the simulator, then it’ll only build the i386 version (or x86_64 for 64-bit).
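As a quick sanity check of which slice actually got built and run, something like this illustrative snippet works (the __x86_64__ / __i386__ / __arm__ macros are the usual compiler-predefined ones):

```c
/* arch_check.c -- which slice of a (possibly universal) binary is running?
 * Each architecture gets its own compiled copy, so the predefined macros
 * and the pointer size differ per slice.
 */
#include <stdio.h>

int main(void)
{
#if defined(__x86_64__)
    puts("built for x86_64");
#elif defined(__i386__)
    puts("built for i386");
#elif defined(__arm64__) || defined(__aarch64__)
    puts("built for arm64");
#elif defined(__arm__)
    puts("built for 32-bit ARM");
#else
    puts("built for some other architecture");
#endif
    printf("pointer size: %zu bytes\n", sizeof(void *));
    return 0;
}
```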
Unless you have a reason to compile for x86_64, I recommend just compiling for i386 (and PPC if you support that). Read Apple's stance on the matter:
Although 64-bit executables make it easier for you to manage large data sets (compared to memory mapping of large files in a 32-bit application), the use of 64-bit executables may raise other issues. Therefore you should transition your software to a 64-bit executable format only when the 64-bit environment offers a compelling advantage for your specific purposes.
When trying to get shared memory, shmget() often fails because it is unable to allocate memory. The physical size of RAM really shouldn't be the problem (4 GB should be enough, I think).
Rather, there is probably a limit on allocating shared memory set somewhere in the system properties. Does anyone know where I can find this setting?
I'm using Mac OS X version 10.6.
That depends on the OS. The PostgreSQL documentation has tips for changing the shared memory limits on various platforms.
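For what it's worth, here is a minimal sketch (macOS-flavoured: the kern.sysv.* sysctl names are the ones Mac OS X uses, and the 64 MB request is arbitrary) that prints the current shmmax and shows where shmget() actually fails:

```c
/* Minimal sketch: query the SysV shared-memory ceiling and try to grab a
 * segment, so you can see exactly where shmget() gives up.  The sysctl
 * names are the macOS ones (kern.sysv.shmmax / kern.sysv.shmall); other
 * platforms expose the limits differently.
 */
#include <stdio.h>
#include <errno.h>
#include <string.h>
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <sys/sysctl.h>

int main(void)
{
    unsigned long long shmmax = 0;
    size_t len = sizeof(shmmax);

    if (sysctlbyname("kern.sysv.shmmax", &shmmax, &len, NULL, 0) == 0)
        printf("kern.sysv.shmmax = %llu bytes\n", shmmax);

    size_t request = 64 * 1024 * 1024;          /* 64 MB, adjust as needed */
    int id = shmget(IPC_PRIVATE, request, IPC_CREAT | 0600);
    if (id == -1) {
        /* ENOMEM / EINVAL here usually means the request exceeds shmmax
         * or the system-wide shmall limit, not that physical RAM is low. */
        fprintf(stderr, "shmget(%zu) failed: %s\n", request, strerror(errno));
        return 1;
    }
    printf("got shared memory segment id %d\n", id);
    shmctl(id, IPC_RMID, NULL);                 /* clean up */
    return 0;
}
```

On Mac OS X 10.6 the limits can then be inspected and raised with sysctl (kern.sysv.shmmax, kern.sysv.shmall, and friends), which, if I remember correctly, is exactly what the PostgreSQL docs walk through for that platform.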