The NSProcessInfo class has two methods named processorCount and activeProcessorCount. The documentation is as unhelpful as possible about what the difference is between a processing core and an active processing core. Or, in other words: what counts as an inactive processing core for Cocoa?
It may be that OS X can shut down cores when the system is overloaded (to reduce temperature).
On older MacBooks, one core could shut down if the power cord was the only power source (no battery). (I can't find a link for that one, but I'm pretty sure that was the case for my 2007 white MacBook.)
Also, the hwprefs command line utility can enable/disable processor cores.
Most of the time you want activeProcessorCount, since it is what actually represents the current state of the machine.
Edit: hwprefs is gone in Lion, but you can access the same functionality with sysctl -n hw.ncpu
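If you need the same numbers outside Cocoa, here is a minimal sketch using sysctlbyname; as far as I can tell, hw.ncpu and hw.activecpu are the values that processorCount and activeProcessorCount report, but treat that mapping as an assumption:

    #include <sys/types.h>
    #include <sys/sysctl.h>
    #include <cstdio>

    int main() {
        int total = 0, active = 0;
        size_t len = sizeof(total);
        // hw.ncpu: number of logical CPUs configured (roughly processorCount)
        sysctlbyname("hw.ncpu", &total, &len, NULL, 0);
        len = sizeof(active);
        // hw.activecpu: logical CPUs currently available (roughly activeProcessorCount)
        sysctlbyname("hw.activecpu", &active, &len, NULL, 0);
        printf("total: %d, active: %d\n", total, active);
        return 0;
    }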
I would like to know if there is a proper method to track memory accesses across multiple resources at once. For example, I set up a simple dual-core CPU by extending the simple.py script from the Learning gem5 tutorial (I just added another TimingSimpleCPU and made the port connections).
I took a look at the different debug options and found, for example, the MemoryAccess flag (and others), but this only seems to show the accesses at the DRAM or at one other resource component.
What I would really like is a way to track events across the CPU, the bus and finally the memory.
Does this feature already exist?
What can I try next? Would it make sense to add my own --debug-flag, or can I work with the TraceCPU for my specific use case?
I haven't worked much with gem5 yet, so I'm not sure how to achieve this. Since I have only run in SE mode until now, would FS mode be a solution?
Finally, I also found the TraceCPUData flag among the --debug-flags, but running my config script with it created no output (like many other flags, by the way). It seems to be a debug flag for the TraceCPU; what kind of output does this flag create, and can it help me?
I have an HID device that is somewhat unfortunately designed (the Griffin Powermate), in that as you turn it, the input value for the "Rotation Axis" HID element doesn't change unless the speed of rotation changes dramatically or the direction changes. It sends many HID reports (the angular resolution appears to be about 4 degrees, in that I get ~90 reports per revolution -- not great, but whatever...), but they all report the same value (generally -1 or 1 for CCW and CW respectively; if you turn faster, it will report -2 and 2, and so on, but you have to turn much faster). As a result of this unfortunate behavior, I'm finding this thing largely useless.
It occurred to me that I might be able to write a background userspace app that seized the physical device and presented another, virtual device with some minor additions so as to cause an input value change for every report (like a wrap-around accumulator, which the HID spec has support for -- God only knows why Griffin didn't do this themselves.)
But I'm not seeing how one would go about creating the kernel side object for the virtual device from userspace, and I'm starting to think it might not be possible. I saw this question, and its indications are not good, but it's low on details.
Alternately, if there's a way for me to spoof reports on the existing device, I suppose that would do it as well, since I could set it back to zero immediately after it reports -1 or 1.
Any ideas?
First of all, you can simulate input events via Quartz Event Services but this might not suffice for your purposes, as that's mainly designed for simulating keyboard and mouse events.
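For instance, if mapping the dial to scroll-wheel ticks would be acceptable, a minimal sketch of posting one synthetic scroll event per Powermate report could look like this (the ±1 per-report delta mapping is just my assumption; CGEventCreateScrollWheelEvent itself is a public Quartz Event Services call):

    // build with: -framework ApplicationServices
    #include <ApplicationServices/ApplicationServices.h>

    // Post one synthetic scroll "tick" for each Powermate report received.
    // delta: +1 for clockwise, -1 for counter-clockwise (assumed mapping).
    static void postScrollTick(int32_t delta) {
        CGEventRef ev = CGEventCreateScrollWheelEvent(
            NULL,                        // no event source
            kCGScrollEventUnitLine,      // line-based scrolling units
            1,                           // one wheel axis
            delta);                      // wheel 1 delta
        CGEventPost(kCGHIDEventTap, ev); // inject at the HID event tap level
        CFRelease(ev);
    }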
Second, the HID driver family of the IOKit framework contains a user client on the (global) IOHIDResource service, called IOHIDResourceDeviceUserClient. It appears that this can spawn IOHIDUserDevice instances on command from user space. In particular, the userspace IOKitLib contains an IOHIDUserDeviceCreate function which appears to be intended for exactly this. The HID family source code even comes with a little demo of this which creates a virtual keyboard of sorts. Unfortunately, although I can get it to build, it fails on the IOHIDUserDeviceCreate call. (I can see in IORegistryExplorer that the IOHIDResourceDeviceUserClient instance is never created.) I haven't investigated this further due to lack of time, but it seems worth pursuing if you need its functionality.
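For reference, the call pattern looks roughly like the sketch below. Keep in mind that IOHIDUserDevice.h ships with the IOHIDFamily source rather than the public SDK, the "ReportDescriptor" property name and the tiny vendor-defined descriptor are my assumptions for illustration, and as noted above the create call failed when I tried it:

    // Sketch only: requires building against the IOHIDFamily source headers.
    #include <CoreFoundation/CoreFoundation.h>
    #include <IOKit/hid/IOHIDUserDevice.h>

    // Minimal vendor-defined report descriptor: one signed 8-bit input value.
    static const uint8_t kDescriptor[] = {
        0x06, 0x00, 0xFF,  // Usage Page (Vendor Defined 0xFF00)
        0x09, 0x01,        // Usage (1)
        0xA1, 0x01,        // Collection (Application)
        0x15, 0x81,        //   Logical Minimum (-127)
        0x25, 0x7F,        //   Logical Maximum (127)
        0x75, 0x08,        //   Report Size (8)
        0x95, 0x01,        //   Report Count (1)
        0x09, 0x01,        //   Usage (1)
        0x81, 0x02,        //   Input (Data, Variable, Absolute)
        0xC0               // End Collection
    };

    static IOHIDUserDeviceRef createVirtualDevice(void) {
        CFDataRef desc = CFDataCreate(kCFAllocatorDefault,
                                      kDescriptor, sizeof(kDescriptor));
        const void *keys[]   = { CFSTR("ReportDescriptor") }; // property name: assumption
        const void *values[] = { desc };
        CFDictionaryRef props = CFDictionaryCreate(kCFAllocatorDefault,
                                                   keys, values, 1,
                                                   &kCFTypeDictionaryKeyCallBacks,
                                                   &kCFTypeDictionaryValueCallBacks);
        IOHIDUserDeviceRef device = IOHIDUserDeviceCreate(kCFAllocatorDefault, props);
        CFRelease(props);
        CFRelease(desc);
        return device;  // NULL here matches the failure described above
    }

If creation ever succeeds, the same header also appears to declare a function for pushing reports into the virtual device (IOHIDUserDeviceHandleReport), which is where the wrap-around accumulator idea would hook in.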
I am working on my small C++ framework and have a file class which should also support asynchronous reading and writing. The only solution I have found, other than using synchronous file I/O inside some worker threads, is AIO. Anyway, I was looking around and read somewhere that on Linux, AIO is not even implemented in the kernel but rather with user threads. Is the same true for OS X? Another concern is AIO's callback mechanism, which has to spawn an extra thread for each callback, since you can't assign a certain thread or thread pool to take care of that (signals are not an option for me). So here are the questions resulting from that:
Is AIO implemented in the OS X kernel, and is it therefore most likely better than my own threaded implementation?
Can the callback system (spawning a thread for each callback) become a bottleneck in practice?
If AIO is not worth using on OS X, are there any other alternatives on Unix? In Cocoa? In Carbon?
Or should I simply emulate async i/o with my own threadpool?
What is your experience on the subject?
You can see exactly how AIO is implemented on OS X in the XNU kernel source (bsd/kern/kern_aio.c).
The implementation uses kernel threads that pop jobs off a single queue, ordered by each request's priority, and execute them in a blocking fashion (at least that's what it looks like at first glance).
You can configure the number of threads and the size of the queue with sysctl. To see these options and the default values, run sysctl -a | grep aio
kern.aiomax = 90
kern.aioprocmax = 16
kern.aiothreads = 4
In my experience, in order for it to make any sense to use AIO, these limits need to be a lot higher.
As for the callbacks in threads, I don't believe Mac OS X supports that. It only does completion notifications through signals (see source).
You could probably do just as good a job with your own thread pool. One thing you could do better than the current Darwin implementation is to sort your read jobs by physical location on the disk (see fcntl and F_LOG2PHYS), which might even give you an edge.
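A rough sketch of that lookup, assuming the struct log2phys / l2p_devoffset interface that F_LOG2PHYS fills in from <fcntl.h>, so you could sort queued reads by the returned device offset:

    #include <fcntl.h>
    #include <sys/types.h>

    // Returns the device offset backing the file's current position, or -1.
    // Queued read jobs could be sorted by this value before dispatching them.
    static off_t physicalOffset(int fd) {
        struct log2phys l2p = {};
        if (fcntl(fd, F_LOG2PHYS, &l2p) == -1)
            return -1;
        return l2p.l2p_devoffset;
    }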
@Moka: Sorry to say that you're wrong about the Linux implementation: as of kernel 2.6 there is a kernel implementation of AIO, which comes with libaio (libaio.h).
The implementation that doesn't use kernel threads, but user threads instead, is POSIX AIO, and it does it that way to be more portable, as not all Unix-based OSes support completion events at the kernel level.
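For completeness, whichever implementation ends up servicing it, the POSIX AIO interface you would call looks like this (a minimal sketch reading one file, error handling omitted):

    #include <aio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <cstring>
    #include <cstdio>

    int main() {
        int fd = open("/etc/hosts", O_RDONLY);   // any readable file for the demo
        char buf[4096];

        struct aiocb cb;
        std::memset(&cb, 0, sizeof(cb));
        cb.aio_fildes = fd;
        cb.aio_buf    = buf;
        cb.aio_nbytes = sizeof(buf);
        cb.aio_offset = 0;

        aio_read(&cb);                           // queue the asynchronous read

        const struct aiocb *list[1] = { &cb };
        aio_suspend(list, 1, NULL);              // block until it completes

        ssize_t n = aio_return(&cb);             // bytes read, or -1 on error
        printf("read %zd bytes\n", n);

        close(fd);
        return 0;
    }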
I would like to know if it is possible to identify the physical processor (core) used by a thread with a specific thread ID.
For example, I have a multithreaded application that has two (2) threads (thread-id = 10 and thread-id = 20, for instance). I run the application on a system that has a dual-core processor (core 1 and core 2). So, how do I get the number of the core used by the thread with thread-id = 20?
P.S. Windows platforms.
Thank you,
Denis.
Unless you use thread affinity, threads are not assigned to specific cores. With every time slice, the thread can be executed on a different core. This means that if there were a function to get the core of a thread, by the time you got the return value there would be a big chance that the thread was already executing on another core.
If you are using thread-affinity, you could take a look at the Windows thread-affinity functions (http://msdn.microsoft.com/en-us/library/ms684847%28v=VS.85%29.aspx).
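A minimal sketch of pinning the calling thread to one core with SetThreadAffinityMask (the zero-based core index here is just an example parameter):

    #include <windows.h>

    // Pin the calling thread to a single core so it can no longer migrate.
    // coreIndex is zero-based; returns the previous affinity mask (0 on failure).
    static DWORD_PTR pinCurrentThreadToCore(unsigned coreIndex) {
        DWORD_PTR mask = static_cast<DWORD_PTR>(1) << coreIndex;
        return SetThreadAffinityMask(GetCurrentThread(), mask);
    }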
There are functions called GetCurrentProcessorNumber (available since Server 2003 and Vista) and GetCurrentProcessorNumberEx (available since Server 2008 R2 and Windows 7).
See also this question's answers for more related options and considerations (including Windows XP; primarily this answer describing the use of the cpuid instruction).
Of course, the core number can be changed at any time by the scheduler. If you need to be reasonably sure, check the core number both before and after the short piece of code you measure or execute; if the core number is the same both times, you know on which core the intermediate code most likely executed as well.
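A sketch of that before/after check using GetCurrentProcessorNumber (Vista / Server 2003 and later, as mentioned above):

    #include <windows.h>
    #include <cstdio>

    int main() {
        DWORD before = GetCurrentProcessorNumber();

        // ... the short piece of work you want to attribute to a core ...

        DWORD after = GetCurrentProcessorNumber();
        if (before == after)
            printf("most likely ran on core %lu\n", before);
        else
            printf("migrated from core %lu to core %lu\n", before, after);
        return 0;
    }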
Is there any way to set a system-wide limit on the memory a process can use in Windows XP? I have a couple of unstable apps which work OK most of the time but can hit a bug which results in eating the whole memory in a matter of seconds (or at least I suppose that's what happens). This results in a hard reset, as Windows becomes totally unresponsive and I lose my work.
I would like to be able to do something like /etc/limits on Linux: setting M90, for instance, to cap a single user at 90% of the memory it can allocate, so the system gets the remaining 10% no matter what.
Use Windows Job Objects. Jobs are like process groups and can limit memory usage and process priority.
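A minimal sketch of capping per-process memory with a Job Object, assuming you launch (or can get a handle to) the unstable app yourself and that the concrete byte limit is up to you:

    #include <windows.h>

    // Create a job whose processes may each commit at most `limitBytes`,
    // then assign an already-running process to it.
    static bool limitProcessMemory(HANDLE process, SIZE_T limitBytes) {
        // On Windows XP the target process must not already belong to another job.
        HANDLE job = CreateJobObject(NULL, NULL);
        if (!job) return false;

        JOBOBJECT_EXTENDED_LIMIT_INFORMATION info = {};
        info.BasicLimitInformation.LimitFlags = JOB_OBJECT_LIMIT_PROCESS_MEMORY;
        info.ProcessMemoryLimit = limitBytes;   // e.g. 512 * 1024 * 1024

        if (!SetInformationJobObject(job, JobObjectExtendedLimitInformation,
                                     &info, sizeof(info)) ||
            !AssignProcessToJobObject(job, process)) {
            CloseHandle(job);
            return false;
        }
        // Keep `job` open for as long as the limit should stay in force.
        return true;
    }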
Use the Application Verifier (AppVerifier) tool from Microsoft.
In my case I needed to simulate memory no longer being available, so I did the following in the tool:
Added my application
Unchecked Basic
Checked Low Resource Simulation
Changed TimeOut to 120000 - my application will run normally for 2 minutes before anything goes into effect.
Changed HeapAlloc to 100 - 100% chance of heap allocation error
Set Stacks to true - the stack will not be able to grow any larger
Save
Start my application
After 2 minutes my program could no longer allocate new memory and I was able to see how everything was handled.
Depending on your applications, it might be easier to limit the memory the language interpreter uses. For example with Java you can set the amount of RAM the JVM will be allocated.
Otherwise it is possible to set it once for each process with the Windows API function SetProcessWorkingSetSize.
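A sketch of that call (the byte limits here are arbitrary examples, and as the next answer points out, this caps the working set rather than total allocation):

    #include <windows.h>

    // Cap the current process's working set. Minimum/maximum are in bytes.
    static bool capWorkingSet(SIZE_T minBytes, SIZE_T maxBytes) {
        return SetProcessWorkingSetSize(GetCurrentProcess(),
                                        minBytes,    // e.g. 1 * 1024 * 1024
                                        maxBytes)    // e.g. 64 * 1024 * 1024
               != 0;
    }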
No way to do this that I know of, although I'm very curious to read if anyone has a good answer. I have been thinking about adding something like this to one of the apps my company builds, but have found no good way to do it.
The one thing I can think of (although not directly on point) is that I believe you can limit the total memory usage for a COM+ application in Windows. It would require the app to be written to run in COM+, of course, but it's the closest way I know of.
The working set stuff is good (Job Objects also control working sets), but that's not total memory usage, only real memory usage (paged in) at any one time. It may work for what you want, but afaik it doesn't limit total allocated memory.
Per-process limits
From an end-user perspective, there are some helpful answers (and comments) at the superuser question “Is it possible to limit the memory usage of a particular process on Windows”, including discussions of how to set recursive quota limits on any or all of:
CPU assignment (quantity, affinity, NUMA groups),
CPU usage,
RAM usage (both ‘committed’ and ‘working set’), and
network usage,
… mostly via the built-in Windows ‘Job Objects’ system (as mentioned in @Adam Mitz’s answer and @Stephen Martin’s comment above), using:
the registry (for persistence, when desired) or
free tools, such as the open-source Process Governor.
(Note: nested Job Objects may not have been available under all earlier versions of Windows, but the un-nested version appears to date back to Windows XP.)
Per-user limits
As far as overall per-user quotas:
??
It is possible that each user session is automatically assigned to a job group itself; if true, per-user limits should be able to be applied to that job group. Update: nope; Job Objects can only be nested at the time they are created or associated with a specific process, and in some cases a child Job Object is allowed to ‘break free’ from its parent and become independent, so they can’t facilitate ‘per-user’ resource limits.
(NTFS does support per-user file-system storage quotas, though.)
Per-system limits
Besides simple BIOS or ‘energy profile’ restrictions:
VM hypervisor or Kubernetes-style container resource limit controls may be the most straightforward (in terms of end-user understandability, at least) option.
Footnotes, regarding per-process and other resource quotas / QoS for non-Windows systems:
‘Classic’ Mac OS (including ‘classic’ applications running on 2000s-era versions of Mac OS X): per-application memory limits can easily be set within the ‘Memory’ section of the Finder ‘Get Info’ window for the target program; because the system used a cooperative multitasking concurrency model, per-process CPU limits were impossible.
BSD: ? (probably has some overlap with linux and non-proprietary macOS methods?)
macOS (aka ‘Mac OS X’): no user-facing interface; system support includes, depending on version, the ‘Multiprocessing Services API’, Grand Central Dispatch, POSIX threads / pthread, ‘operation objects’, and possibly others.
Linux: ‘Resource Manager’/limits.conf, control groups/‘cgroups’, process priority/‘niceness’/renice, others?
IBM z/OS and other mainframe-style systems: resource controls / allocation was built-in from nearly the beginning