I am trying to port the fantastic ASUS XONAR-series driver for Linux, written by Clemens Ladisch, to Mac OS X.
Right now, a very rough version that compiles is available at: github.com/i3roly/CMI8788
My question is regarding the pthread.h header on OS X. By default, including pthread.h defines a mach_port_t type that is markedly different from the one pulled in through the IOKit headers. For brevity, I'll borrow from an informative GitHub comment (https://github.com/civetweb/civetweb/issues/364#issuecomment-255438891):
#include <pthread.h>
#include <sys/_types/_mach_port_t.h>
typedef __darwin_mach_port_t mach_port_t;
versus
#include <IOKit/audio/IOAudioDevice.h>
#include <IOKit/IOService.h>
#include <IOKit/IORegistryEntry.h>
#include <IOKit/IOTypes.h>
#include <IOKit/system.h>
#include <mach/mach_types.h>
#include <mach/host_info.h>
#include <mach/message.h>
#include <mach/port.h>
/*
* For kernel code that resides outside of Mach proper, we opaque the
* port structure definition.
*/
struct ipc_port;
typedef struct ipc_port *ipc_port_t;
#define IPC_PORT_NULL ((ipc_port_t) 0UL)
#define IPC_PORT_DEAD ((ipc_port_t)~0UL)
#define IPC_PORT_VALID(port) \
((port) != IPC_PORT_NULL && (port) != IPC_PORT_DEAD)
typedef ipc_port_t mach_port_t;
Now, I can get around this by doing:
#define _MACH_PORT_T
#include <pthread.h>
But I am not sure this is a safe solution, since the pthreads API shipped with Xcode seems intended only for user-land programs. Is this assumption wrong? Is using this macro to get around the redefinition problem reasonable?
Have others tried to write kernel-land drivers for OS X using pthreads and run into this issue? Any insight would be appreciated.
Thank you.
Stupid question.
I don't know why I didn't remind myself that you CANNOT USE PTHREADS IN THE KERNEL, especially when I have experience building the Linux kernel (which should have served as an easy reminder that YOU CANNOT DO THIS AND IT IS A BAD IDEA).
hits self over the head with a slipper
I have no idea why this didn't click yesterday.
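For anyone who finds this later: inside a kext you use the kernel's own primitives instead. A minimal sketch under that assumption (my_worker, start_worker, and gLock are hypothetical names; see <kern/thread.h> and <IOKit/IOLocks.h>):
#include <kern/thread.h>    // kernel_thread_start, thread_deallocate
#include <IOKit/IOLocks.h>  // IOLockAlloc, IOLockLock, IOLockUnlock
static IOLock *gLock;       // hypothetical lock protecting shared driver state
// Worker body; kernel threads take a (void *, wait_result_t) continuation.
static void my_worker(void *arg, wait_result_t wr)
{
    IOLockLock(gLock);
    /* ... touch shared driver state ... */
    IOLockUnlock(gLock);
}
static kern_return_t start_worker(void)
{
    thread_t thread;
    gLock = IOLockAlloc();
    kern_return_t kr = kernel_thread_start(my_worker, NULL, &thread);
    if (kr == KERN_SUCCESS)
        thread_deallocate(thread);  // drop our reference; the thread keeps running
    return kr;
}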
Related
I'm developing an EGT embedded Linux application on a Microchip SAM Xplained Board. EGT is primarily C++ based, similar in some respects to Qt. The application I'm building naturally contains the GUI element & the interaction with hardware connected to the board.
For speed & convenience I'd like to develop as much of the GUI as possible on a desktop (EGT will run on a desktop Linux machine); however, I'm going to run into issues when hardware interaction occurs (e.g. calls to GPIO pins, etc.).
Is there a gcc compile-time option to somehow block/redirect/override these hardware interactions so that the application can run on a desktop? If not, I think I'm looking at lots of #if arch = 'ARM' or something similar.
Thanks for looking!
Regards,
For anyone looking at this, it seems the way to go is some type of wrapper around the hardware calls (example below), as suggested by @sawdust, or using QEMU (which can be built using Yocto).
// Enable compiling on a desktop (x86-64) build: stub out the libgpiod calls
#if defined(__x86_64__) || defined(_M_X64)
#pragma GCC diagnostic ignored "-Wunused-value"
struct gpiod_chip;   // forward declarations so the stubs type-check
struct gpiod_line;
#define GPIO_CHIP_OPEN_BY_NAME(name) ((struct gpiod_chip *)1)   // dummy non-NULL handle
#define GPIO_CHIP_GET_LINE(chip, offset) ((struct gpiod_line *)1)
#define GPIO_LINE_REQUEST_OUTPUT(line, consumer, default_val) (0)  // 0 = success in libgpiod
#define GPIO_LINE_REQUEST_INPUT(line, consumer) (0)
#define GPIO_LINE_SET_VALUE(line, value) (0)
#define GPIO_LINE_GET_VALUE(line) (1)
#else
#define GPIO_CHIP_OPEN_BY_NAME(name) gpiod_chip_open_by_name(name)
#define GPIO_CHIP_GET_LINE(chip, offset) gpiod_chip_get_line(chip, offset)
#define GPIO_LINE_REQUEST_OUTPUT(line, consumer, default_val) gpiod_line_request_output(line, consumer, default_val)
#define GPIO_LINE_REQUEST_INPUT(line, consumer) gpiod_line_request_input(line, consumer)
#define GPIO_LINE_SET_VALUE(line, value) gpiod_line_set_value(line, value)
#define GPIO_LINE_GET_VALUE(line) gpiod_line_get_value(line)
#endif
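To illustrate, a hypothetical call site then compiles unchanged on both targets (the chip name "gpiochip0" and the line offset below are made up):
struct gpiod_chip *chip = GPIO_CHIP_OPEN_BY_NAME("gpiochip0");
struct gpiod_line *led  = GPIO_CHIP_GET_LINE(chip, 42);
GPIO_LINE_REQUEST_OUTPUT(led, "egt-app", 0);
GPIO_LINE_SET_VALUE(led, 1);   // drives the pin on the board, no-op on the desktop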
This is my code, which works only when run from Xcode (version 4.5):
#include <stdio.h>
#include <string.h>             // memcpy (was missing)
#include <unistd.h>             // getpid (was missing)
#include <mach/mach_init.h>
#include <mach/mach_vm.h>
#include <sys/types.h>
#include <mach/mach.h>
#include <sys/ptrace.h>
#include <sys/wait.h>
#include <Security/Authorization.h>
int main(int argc, const char * argv[]) {
    char test[14] = "Hello World! "; //0x7fff5fbff82a
    char value[14] = "Hello Hacker!";
    char test1[14];
    pointer_t buf;
    uint32_t sz;
    task_t task;
    // Get a task port for our own process (mach_init.h maps current_task() to mach_task_self() in user space).
    task_for_pid(current_task(), getpid(), &task);
    // Overwrite `test` through its (hardcoded, debugger-found) address.
    if (vm_write(current_task(), 0x7fff5fbff82a, (pointer_t)value, 14) == KERN_SUCCESS) {
        printf("%s\n", test);
        //getchar();
    }
    // Read it back out through the task port.
    if (vm_read(task, 0x7fff5fbff82a, sizeof(char) * 14, &buf, &sz) == KERN_SUCCESS) {
        memcpy(test1, (const void *)buf, sz);
        printf("%s", test1);
    }
    return 0;
}
I was also trying ptrace and other things; that's why I include the other headers.
The first problem is that this works only from Xcode: with the debugger I can find the position (memory address) of a variable (in this case test), then I overwrite that string with the one in value, and finally copy the new contents of test into test1.
I don't completely understand how vm_write works, and the same goes for task_for_pid(). The second problem is that I need to read and write another process's memory; this is only a test to see whether the functions work within the same process, and they do (only from Xcode).
How can I do that on other processes? I need to read a position (how can I find the address of "something"?); that is the first goal.
For your problems, there are solutions:
The first problem: OS X has address space layout randomization (ASLR). If you want your memory image to be fixed and predictable, you have to build with PIE disabled. PIE (Position Independent Executable) is what enables ASLR, which "slides" the memory by some random value that changes on every instance.
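For example, something like this should produce a non-PIE binary (assuming clang or gcc on OS X; -no_pie is passed through to the linker):
cc -Wl,-no_pie -o test test.c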
I actually don't understand how vm_write works (not completely) and the same for task_for_pid():
The Mach APIs operate on the lower-level abstractions of "task" and "thread", which correspond roughly to the BSD "process" and "(u)thread" (there are some exceptions, e.g. kernel_task, which does not have a PID, but let's ignore that for now). task_for_pid obtains the task port (think of it as a "handle"), and once you have the port you are free to do whatever you wish. Basically, the vm_* functions operate on any task port: you can use them on your own process (mach_task_self(), that is) or on a port obtained from task_for_pid.
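To make that concrete, here is a hedged sketch of the cross-process read (read_remote is a hypothetical helper; error handling is minimal, and the pid and address are things you'd supply, e.g. from a debugger):
#include <stdio.h>
#include <sys/types.h>
#include <mach/mach.h>
#include <mach/mach_vm.h>
// Read `len` bytes at `addr` inside the process `pid`. Returns bytes read,
// or -1 on failure. task_for_pid typically fails unless you're root (see below).
static ssize_t read_remote(pid_t pid, mach_vm_address_t addr, void *out, mach_vm_size_t len)
{
    task_t task;
    mach_vm_size_t nread = 0;
    if (task_for_pid(mach_task_self(), pid, &task) != KERN_SUCCESS)
        return -1;
    if (mach_vm_read_overwrite(task, addr, len,
                               (mach_vm_address_t)out, &nread) != KERN_SUCCESS)
        return -1;
    return (ssize_t)nread;
}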
task_for_pid actually doesn't necessarily require root (i.e. "sudo"). It requires getting past taskgated on OS X, which traditionally verified membership in the procmod or procview groups. You can configure taskgated (/System/Library/LaunchDaemons/com.apple.taskgated.plist) for debugging purposes. Ultimately, btw, getting the task port will require an entitlement (the same as it now does on iOS). That said, the easiest way, rather than mucking around with system authorizations, etc., is to simply become root.
Did you try to run your app with "sudo"?
You can't read/write another app's memory without sudo.
I have a kext that needs to know what version of OS X it is running on. CocoaDev has an article which describes how to get the OS X version info using Gestalt(), but the code requires Cocoa.
Can I call Gestalt() from a kext?
If so, what #include do I use to define it?
If not, are there any other solutions?
Background:
I'd like to use the same kexts on all versions of OS X from 10.4 through 10.7.
BUT: The kexts call cdevsw_add, which was changed in Lion in a non-backward-compatible way. Along with (apparently) changes to some kernel programs that call it, the changes mean, per the comment before the routine, that cdevsw_add should be called with a different first argument on 10.7 than on OS X 10.0 through 10.6 (-12 on Lion, -1 on earlier versions).
If the kexts can determine which version of OS X they are running on, it's easy. (If not, it will be a pain: maybe a horrible kludge like building two different versions of the kexts and having the kext-loading code pick which one to load.)
Kernel.framework provides <libkern/version.h>, which declares some extern variables like version_major, version_minor, etc. AFAIK those are exported from the libkern KPI.
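Applied to the cdevsw_add problem above, a hedged sketch might look like this (register_my_cdev is a hypothetical helper; Darwin major 11 is Lion, 10 is Snow Leopard; the slot values come from the question):
#include <sys/conf.h>           // cdevsw_add, struct cdevsw
#include <libkern/version.h>    // version_major, version_minor
// Darwin 11 == OS X 10.7 (Lion); Darwin 10 == 10.6; Darwin 8 == 10.4.
static int register_my_cdev(struct cdevsw *csw)
{
    int slot = (version_major >= 11) ? -12 : -1;  // per the comment before cdevsw_add
    return cdevsw_add(slot, csw);                 // returns the assigned major, or -1
}
Hope it helps.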
You can use sysctl to get the kernel version (scroll down to method 3 in the linked article). It allegedly also works when you develop kernel modules.
Here's an example of the method, in case the site ever goes down.
#include <sys/param.h>
#include <sys/sysctl.h>
#include <string.h>
#include <stdlib.h>
#include <stdio.h>
int main()
{
    int mib[] = {CTL_KERN, KERN_OSRELEASE};
    size_t len;
    // First call: ask only for the required buffer size.
    sysctl(mib, sizeof mib / sizeof(int), NULL, &len, NULL, 0);
    // Second call: fetch the actual release string (e.g. "11.0.0").
    char* kernelVersion = malloc(len);
    sysctl(mib, sizeof mib / sizeof(int), kernelVersion, &len, NULL, 0);
    printf("Kernel version is %s\n", kernelVersion);
    free(kernelVersion);
    return 0;
}
Of course, you'll need to figure out the Darwin kernel versions that correspond to Snow Leopard and Lion, but that shouldn't be very hard. (I can testify that the kernel version of the current Lion release is 11.0.0.)
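If you need to branch on it in code, a simple (if rough) approach is to parse the major number out of that string; Darwin 10.x is Snow Leopard and 11.x is Lion:
int darwinMajor = atoi(kernelVersion);  // "11.0.0" -> 11
printf("%s\n", (darwinMajor >= 11) ? "Lion (or later)" : "Snow Leopard or earlier");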
Hey, so I've made a text game using the PDCurses library and Microsoft operating system tools. Here are my includes; see below for further explanation:
#include <iostream>
#include <time.h>         // or <ctime>
#include <stdio.h>
#include <cstdlib>
#include <Windows.h>
#include <conio.h>
#include <curses.h>
#include <algorithm>
#include <string>
#include <vector>
#include <sstream>
#include <ctime>
#include <myStopwatch.h>  // for keeping times
#include <myMath.h>       // numb_digits() and digit_val()
myStopwatch.h and myMath.h include:
#include <stdio.h>
#include <math.h>
#include <tchar.h>
So I've tested the game (a folder containing the .exe and pdcurses.dll) on my computer running Windows 7 and it works great. However, when running it on another computer with Vista or older, my game comes up but immediately ends due to the loss of all the player's lives almost instantaneously... How could this be?
If you would like to see the full source code, go to this Link
Thanks!
In the main game loop, you are not initializing the coll variable before passing it to theScreen.check_collision(). If the player is in no danger, then that function does not update this value. Back in the main loop you don't check the return value from check_collision(), and the program is now making decisions based on whatever uninitialized value was in that variable. Welcome to the wide world of Undefined Behavior.
It is likely that the difference you're seeing on different OSes is due to the way their heap managers initialize memory pages. Even if your player survives for a while, after the first collision that memory location holds 'X', which is then never cleared, and while the result is still "undefined", on most architectures this will register a new collision on each iteration, explaining why your "lives" are vanishing so quickly.
Two things you need to do to fix this:
All code paths through check_collision must write to the 'buff' out parameter. The easiest way to do this is initialize it to 0 in the first line of the function. (Alternatively, if it's intended as an in/out param, then you need to initialize it in the main loop before calling check_collision() )
Make your decision based on the return value of check_collision(), rather than the out parameter. (Or, if that return value really is not important, change the return type of the function to void.) A sketch of both fixes is below.
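To make the two fixes concrete, here's a self-contained sketch; I'm guessing at your exact signature and types:
#include <iostream>
struct Screen {
    int check_collision(char &buff) {
        buff = 0;                 // fix 1: every path writes the out parameter
        bool hit = false;         // placeholder for your real overlap test
        if (hit) { buff = 'X'; return 1; }
        return 0;
    }
};
int main() {
    Screen theScreen;
    char coll = 0;                // initialized before the call
    if (theScreen.check_collision(coll))   // fix 2: branch on the return value
        std::cout << "collision: " << coll << '\n';
}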
Line 23 in string_lines is missing a comma at the end. Don't think this is your whole issue, but that can't be good either.
You didn't say if you recompiled it separately under each OS (Vista, etc.), and if you did recompile, whether the same version of the compiler was used.
Windows 7 shipped with the Visual C++ 2008 runtime.
Windows Vista shipped with the Visual C++ 2005 runtime.
XP shipped with the Visual C++ 6.0 runtime.
Since you compiled the application in Visual Studio 2010, more than likely it was not compiled to target older operating systems.
Try installing the latest runtimes on the machine you are testing with and if it works after doing that, you know to recompile your project to support older operating systems.
http://www.microsoft.com/download/en/details.aspx?id=5555 x86
http://www.microsoft.com/download/en/details.aspx?id=14632 x64
I would like to be able to tell how long it takes to get from power on to windows starting.
Is there a way of determining this retrospectively (i.e. once Windows has started)?
Does the BIOS/CMOS hold a last boot time?
Would it be possible to tell from RDTSC how long a machine has been running for and subtract the Windows boot time?
You might try BootTimer or BootRacer to see if either of them will do what you want.
I don't believe you can determine this after Windows is started. I'm not aware of any BIOS that stores the last boot time. But on any modern machine, if the time between power on to calling the OS boot loader (essentially the time it takes to run the POST routines) takes longer than a few seconds, something is wrong.
Are you trying to do this programmatically to get the accurate amount of time that the machine has been online and usable? The inaccuracy resulting from the few seconds that POST takes doesn't seem like it would make a significant difference. If you're timing for benchmarking or optimization purposes, either of these two utilities should work for you.
Get the time since power-on from GetTickCount(). Then get the timestamp of a file Windows touches at boot (windows\bootstat.dat, for example). Code is below. On my machine it says 16 seconds, which sounds accurate.
#include <stdio.h>
#include <windows.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <time.h>
int main()
{
    struct __stat64 st;
    _stat64("c:\\windows\\bootstat.dat", &st);
    // st.st_mtime is when Windows touched the boot file; time(NULL) minus the
    // uptime from GetTickCount() approximates when the tick counter started.
    // Note GetTickCount() wraps after ~49.7 days; GetTickCount64() avoids that
    // on Vista and later. st_mtime is 64-bit, hence the %lld format.
    long long secs = st.st_mtime - (time(NULL) - GetTickCount() / 1000);
    return printf("%lld\n", secs);
}