Best practices for passing data between processes in Cocoa

I am in the middle of solving a problem which requires me to do the following in my 64-bit Cocoa application:
Spawn a 32-bit Cocoa helper tool (a command-line tool) from within my application. This helper will open a file (a QuickTime movie, to be precise) and access information about it using 32-bit-only APIs (the QuickTime C APIs).
The data gathered by the 32-bit process needs to be passed back to the 64-bit application.
The 64-bit app should wait until the 32-bit process completes before continuing.
There are many ways to accomplish this in Cocoa, but from what I gather, these are two approaches I could take.
Option 1: NSTask with Pipes
Use NSTask to spawn the 32-bit process
Redirect the NSTask's standard output to a pipe, and read data from that pipe in the 64-bit process.
Parse the data from the pipe, which will involve converting the text from stdout into typed data (ints, floats, strings, etc.); a rough sketch of this pattern follows below.
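Here is a minimal sketch of that read-and-parse pattern. NSTask with an NSPipe on standardOutput wraps this same mechanism; the sketch below uses plain POSIX popen() so it stays self-contained, and the helper path and its key=value output format are made-up placeholders.

    #include <cstdio>
    #include <cstdlib>
    #include <string>

    int main() {
        // Hypothetical helper and output format: one "key=value" pair per line.
        FILE *helper = popen("./movie-info-helper /path/to/movie.mov", "r");
        if (!helper) { perror("popen"); return 1; }

        char line[512];
        while (fgets(line, sizeof line, helper)) {
            std::string s(line);
            std::string::size_type eq = s.find('=');
            if (eq == std::string::npos) continue;
            std::string key = s.substr(0, eq);
            double value = std::strtod(s.c_str() + eq + 1, nullptr);
            std::printf("parsed %s -> %g\n", key.c_str(), value);
        }

        // pclose() blocks until the helper exits, which also covers the
        // "wait until the 32-bit process completes" requirement.
        int status = pclose(helper);
        std::printf("helper exited with status %d\n", status);
        return 0;
    }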
Option 2: NSTask with NSDistributedNotificationCenter
Use NSTask to spawn the 32-bit process
When data is ready in the 32-bit process, post a notification to the distributed notification center, and include a dictionary of all the pertinent data as the userInfo.
In the 64-bit app, subscribe to the same notification (see the sketch after this list).
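For reference, a minimal sketch of the posting side of this option. It uses CFNotificationCenterGetDistributedCenter(), the C-level face of the same distributed notification center, so it fits in a plain command-line helper; the notification name and dictionary key are placeholders. The 64-bit app would observe the same name with NSDistributedNotificationCenter (or CFNotificationCenterAddObserver).

    // Build with: clang++ helper.cpp -framework CoreFoundation
    #include <CoreFoundation/CoreFoundation.h>

    int main() {
        // Package the gathered data as a property-list dictionary (the userInfo).
        double duration = 12.5;                        // placeholder value
        CFStringRef keys[]   = { CFSTR("durationSeconds") };
        CFNumberRef num      = CFNumberCreate(kCFAllocatorDefault,
                                              kCFNumberDoubleType, &duration);
        CFTypeRef   values[] = { num };
        CFDictionaryRef userInfo = CFDictionaryCreate(
            kCFAllocatorDefault, (const void **)keys, (const void **)values, 1,
            &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);

        // Post to the distributed center; the 64-bit app observes this name.
        CFNotificationCenterPostNotification(
            CFNotificationCenterGetDistributedCenter(),
            CFSTR("com.example.MovieInfoReady"),       // placeholder name
            NULL, userInfo, true /* deliver immediately */);

        CFRelease(userInfo);
        CFRelease(num);
        return 0;
    }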
So my question for StackOverflowers is, which option is "better"?
Which is a better practice?
Which is more efficient?
I'm leaning towards Option 2 because it seems like there will be less code involved. If these two approaches aren't great, is there a better way to do this?

You say that the subprocess will be an application. Don't use NSTask for that—it confuses Launch Services. (If you mean it's a helper tool, such that a curious expert user could run it from the command line, then NSTask is OK.)
The DNC will work either way, but if the subprocess really is an application, don't use NSTask+NSPipe—use distributed objects.

NSDistributedNotificationCenter will work okay, but keep in mind your application isn't "guaranteed" to receive distributed notifications by the OS. I haven't actually seen this in practice, but it's something to keep in mind when you're choosing a technology.
The other option you didn't mention is Distributed Objects, which was made for exactly this purpose. It handles serializing objects, or setting up proxy objects, that work between processes or over a network. The documentation is a bit lacking, it doesn't support some newer parts of Cocoa such as bindings, and it's not exactly easy to use, but I still prefer it when I have two processes that work together in a complex way.

Related

How do I defy output buffering in Windows? [duplicate]

Hi, according to this post, unbuffer connects to a command via a pseudo-terminal (pty), which makes the system treat it as an interactive process and therefore skip stdout buffering.
I would like to use this functionality on Windows. What is the equivalent of the unbuffer program on Windows? Thanks.
I spent some time on this and succeeded. I found this blog during research, and decided to return and provide my solution to save the next guy some time. I'm responding as a guest with a false email so I won't be interacting, but no further information should be required.
On Jul 18 '12 at 19:41 Harry Johnston wrote:
"In principle, if you know how much data to expect, you could use the console API functions to create a console for the application to write to, and then read the output from the console. But you can't do that from Java, you would need to write a C application to do it for you."
Thing is, there is already a utility that does this. It's written for a slightly different use, but it can be coaxed into providing the desired result. Its intended purpose is to enable a Windows console app to interact with a Linux-style tty terminal. It does this by running a hidden console and accessing the console buffer directly. If you tried to use it as-is, you'd fail. I got lucky and discovered that there are undocumented switches for this utility which allow it to provide simple unbuffered output. Without the switches it fails with the error "the output is not a tty" when trying to pipe output.
The utility is called winpty. You can get it here:
https://github.com/rprichard/winpty/releases
The undocumented switches are mentioned here:
https://github.com/rprichard/winpty/issues/103
I’m using the MSYS2 version. You’ll need the msys-2.0.dll to use it.
Simply run:
winpty.exe -Xallow-non-tty -Xplain your_program.exe | receive_unbuffered_output.exe
-Xallow-non-tty allows piped output
-Xplain removes the added Linux terminal escape codes (or whatever they're called)
Required files are:
winpty.exe
winpty-agent.exe
winpty.dll
msys-2.0.dll
winpty-debugserver.exe – Not needed
The behaviour you're describing is typical of applications using run-time libraries for I/O. By default, most runtime libraries check to see whether the handle is a character mode device such as a console, and if so, they don't do any buffering. (Ideally the run-time library would treat a pipe in the same way as a console, but it seems that most don't.)
I'm not aware of any sensible way to trick such an application into thinking it is writing to a console when it is actually writing to a pipe.
Addendum: seven years later, Windows finally supports pseudoconsoles. If you are running on Windows 10 v1809 or later, this new API should solve your problem.
On older versions of Windows, if you know how much data to expect, you could in principle use the console API functions to create a console for the application to write to, and then read the output from the console. But you can't do that from Java, you would need to write a C application to do it for you.
Similarly, in principle it should presumably be possible to write a device driver equivalent to a Unix pseudo-terminal, one that acts like a pipe but reports itself to be a character-mode device. But writing device drivers requires specific expertise, and they have to be digitally signed, so unless there is an existing product out there this approach isn't likely to be feasible.
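For what it's worth, here is a rough sketch of that pseudoconsole (ConPTY) flow, pieced together from the documented API with most error handling and cleanup omitted; "child.exe" is a placeholder. Because the child sees a console, its runtime should leave stdout unbuffered, although the captured output will contain VT escape sequences (the kind winpty's -Xplain strips).

    #define WIN32_LEAN_AND_MEAN
    #include <windows.h>
    #include <cstdio>
    #include <thread>

    int main() {
        // Pipes that back the pseudoconsole's input and output.
        HANDLE inR = NULL, inW = NULL, outR = NULL, outW = NULL;
        CreatePipe(&inR, &inW, NULL, 0);
        CreatePipe(&outR, &outW, NULL, 0);

        HPCON hPC = NULL;
        COORD size = { 80, 25 };
        if (FAILED(CreatePseudoConsole(size, inR, outW, 0, &hPC))) return 1;
        CloseHandle(inR);   // the pseudoconsole holds its own duplicates
        CloseHandle(outW);

        // Attach the pseudoconsole to the child via a proc-thread attribute.
        SIZE_T attrSize = 0;
        InitializeProcThreadAttributeList(NULL, 1, 0, &attrSize);
        STARTUPINFOEXW si = {};
        si.StartupInfo.cb = sizeof(si);
        si.lpAttributeList = (LPPROC_THREAD_ATTRIBUTE_LIST)
            HeapAlloc(GetProcessHeap(), 0, attrSize);
        InitializeProcThreadAttributeList(si.lpAttributeList, 1, 0, &attrSize);
        UpdateProcThreadAttribute(si.lpAttributeList, 0,
                                  PROC_THREAD_ATTRIBUTE_PSEUDOCONSOLE,
                                  hPC, sizeof(hPC), NULL, NULL);

        wchar_t cmd[] = L"child.exe";                  // placeholder command line
        PROCESS_INFORMATION pi = {};
        if (!CreateProcessW(NULL, cmd, NULL, NULL, FALSE,
                            EXTENDED_STARTUPINFO_PRESENT, NULL, NULL,
                            &si.StartupInfo, &pi))
            return 1;

        // Drain the child's output as it is produced (includes VT sequences).
        std::thread reader([&] {
            char buf[4096]; DWORD n = 0;
            while (ReadFile(outR, buf, sizeof(buf), &n, NULL) && n > 0)
                fwrite(buf, 1, n, stdout);
        });

        WaitForSingleObject(pi.hProcess, INFINITE);
        ClosePseudoConsole(hPC);   // breaks the pipe and unblocks the reader
        reader.join();
        return 0;
    }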
Disclaimer: My answer only deals with executables compiled using MSVC.
The buffering policy is coded inside the Microsoft C Runtime (CRT) library. You can learn the details here. That article suggests using console handles and manipulating console buffers to receive unbuffered output.
However, there's an undocumented feature inside Microsoft C Runtime to inherit file handles with some internal flags directly from its parent process using lpReserved2 and cbReserved2 fields of STARTUPINFO structure. You can find the details in the crt source code provided by Microsoft Visual Studio. Or search for something like posfhnd on GitHub.
We can exploit this undocumented feature to provide a pipe handle with the FOPEN | FDEV flags to the child process, fooling the child into treating that pipe handle the same way as a FILE_TYPE_CHAR handle.
I have a working Python3 script to demonstrate this method.
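The referenced demo is a Python 3 script; purely as an illustration, the same idea might look roughly like this in C++. Everything here is reconstructed from the CRT's lowio/ioinit source and should be treated as an assumption: the buffer layout (an int count, then one flag byte per descriptor, then one child-pointer-sized handle per descriptor), the FOPEN (0x01) and FDEV (0x40) flag values, and the premise of a 64-bit MSVC-built child ("child.exe" is a placeholder).

    #define WIN32_LEAN_AND_MEAN
    #include <windows.h>
    #include <cstdio>
    #include <vector>

    int main() {
        SECURITY_ATTRIBUTES sa = { sizeof(sa), NULL, TRUE };
        HANDLE rd = NULL, wr = NULL;
        CreatePipe(&rd, &wr, &sa, 0);
        SetHandleInformation(rd, HANDLE_FLAG_INHERIT, 0);

        const unsigned char FOPEN = 0x01, FDEV = 0x40;   // CRT-internal flags
        const int nh = 2;                                // describe fds 0 and 1
        HANDLE handles[nh] = { GetStdHandle(STD_INPUT_HANDLE), wr };

        // Layout: int count | nh flag bytes | nh pointer-sized handle slots.
        std::vector<unsigned char> blob(sizeof(int) + nh * (1 + sizeof(ULONG_PTR)));
        *(int *)blob.data() = nh;
        unsigned char *flags = blob.data() + sizeof(int);
        ULONG_PTR *slots = (ULONG_PTR *)(flags + nh);    // the CRT reads these unaligned
        for (int i = 0; i < nh; ++i) {
            flags[i] = FOPEN | FDEV;                     // "open, character device"
            slots[i] = (ULONG_PTR)handles[i];
        }

        STARTUPINFOA si = { sizeof(si) };
        si.dwFlags = STARTF_USESTDHANDLES;
        si.hStdInput  = GetStdHandle(STD_INPUT_HANDLE);
        si.hStdOutput = wr;
        si.hStdError  = wr;
        si.cbReserved2 = (WORD)blob.size();
        si.lpReserved2 = blob.data();

        PROCESS_INFORMATION pi = {};
        char cmd[] = "child.exe";                        // placeholder MSVC-built child
        if (!CreateProcessA(NULL, cmd, NULL, NULL, TRUE, 0, NULL, NULL, &si, &pi))
            return 1;
        CloseHandle(wr);

        char buf[256]; DWORD n = 0;
        while (ReadFile(rd, buf, sizeof(buf), &n, NULL) && n > 0)
            fwrite(buf, 1, n, stdout);                   // arrives as the child prints
        WaitForSingleObject(pi.hProcess, INFINITE);
        return 0;
    }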

Compiling and using a command-line C++ program under Android 2.3.5?

How can I compile a C++ program with a command-line interface and use it under Android 2.3.5 on my phone?
No - the model is completely different. Simple C++ programs are single threaded - they do whatever they have to do as quickly as they can in a single thread of execution and if they have to wait or block on something like retrieving data from the network then they just have to wait. They are given timeslices by a multitasking operating system and when they're finished they're finished.
In Android there is always one thread running which handles GUI interactions and passes the results into 'hooks' in your Activity instance. Anything that might block the GUI thread has to be farmed out to another thread, and call back on another method in your Activity. It's event-driven, and you have remarkably little control or certainty about things like object lifetime. So you need to program in a completely different way.
An emulator of some kind running as an Android app could - in principle - run C++ binaries compiled for a specific VM. But as far as I'm aware such an app doesn't exist and neither does the toolchain to produce such binaries. Google have discouraged such an approach too AFAIK. There are fully-fledged computer emulators but for obvious reasons they're mainly old 8-bit nostalgia fests :)
I'm a C++ programmer who recently got involved in Android programming and I'd recommend it. You'll think about programs in a different way from the single-threaded IFTT way you may be used to.

How to communicate between a Win32 Console application and an MFC application?

I have a Win32 console application, which will be an independent EXE, and I have a front end designed in MFC.
I want to get the results of the Win32 application to be shown on my GUI. I searched a lot and found some techniques:
Named pipe
DDE
Shared memory
Are any of these an appropriate solution to my problem? Does anyone know of any other solution(s) that might be easier than those I mentioned?
If the output of the console EXE is machine-parsable, you can use CreateProcess() with pipes for standard input and output, then parse the output and display it in your UI.
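A minimal sketch of that CreateProcess()-plus-pipes approach; "console_tool.exe --report" is a placeholder, and in an MFC app you would run this on a worker thread and hand the parsed results to the UI thread.

    #define WIN32_LEAN_AND_MEAN
    #include <windows.h>
    #include <cstdio>
    #include <string>

    int main() {
        SECURITY_ATTRIBUTES sa = { sizeof(sa), NULL, TRUE };   // inheritable handles
        HANDLE readEnd = NULL, writeEnd = NULL;
        if (!CreatePipe(&readEnd, &writeEnd, &sa, 0)) return 1;
        SetHandleInformation(readEnd, HANDLE_FLAG_INHERIT, 0); // parent keeps its end

        STARTUPINFOA si = { sizeof(si) };
        si.dwFlags = STARTF_USESTDHANDLES;
        si.hStdOutput = writeEnd;
        si.hStdError  = writeEnd;
        si.hStdInput  = GetStdHandle(STD_INPUT_HANDLE);

        PROCESS_INFORMATION pi = {};
        char cmd[] = "console_tool.exe --report";              // placeholder command
        if (!CreateProcessA(NULL, cmd, NULL, NULL, TRUE, 0, NULL, NULL, &si, &pi))
            return 1;
        CloseHandle(writeEnd);   // so ReadFile reports EOF once the child exits

        std::string output;
        char buf[4096]; DWORD n = 0;
        while (ReadFile(readEnd, buf, sizeof(buf), &n, NULL) && n > 0)
            output.append(buf, n);

        WaitForSingleObject(pi.hProcess, INFINITE);
        std::printf("child wrote %zu bytes:\n%s", output.size(), output.c_str());

        CloseHandle(readEnd); CloseHandle(pi.hProcess); CloseHandle(pi.hThread);
        return 0;
    }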
You can also send messages from one application to another; it's quite simple. Look into WM_COPYDATA:
http://msdn.microsoft.com/en-us/library/ms649011%28v=vs.85%29.aspx
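For reference, the sending side of the WM_COPYDATA route looks roughly like this; the window title is a placeholder, and the MFC application copies the bytes out of the COPYDATASTRUCT in its WM_COPYDATA handler (OnCopyData) before returning.

    #define WIN32_LEAN_AND_MEAN
    #include <windows.h>

    int main() {
        HWND target = FindWindowW(NULL, L"My MFC Frontend");   // placeholder title
        if (!target) return 1;

        const char payload[] = "progress=42;state=encoding";   // any flat data
        COPYDATASTRUCT cds = {};
        cds.dwData = 1;                       // application-defined message id
        cds.cbData = (DWORD)sizeof(payload);
        cds.lpData = (PVOID)payload;

        // SendMessage blocks until the receiver's window procedure returns,
        // so the buffer only needs to stay valid for the duration of the call.
        SendMessageW(target, WM_COPYDATA, 0, (LPARAM)&cds);
        return 0;
    }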

Creating a GUI for a Linux CLI

I am a final year computer engineering student. As my final year project, I have decided to create a multimedia encoder for linux, possibly cross platform.
My question is: How can I create a GUI for ffmpeg (i.e. how can I pass command line arguments from the GUI)?
I am trying to use Qt for cross-platform development.
Tcl/Tk was designed for embedding scripting into C programs and is probably the easiest language to do this with. It has several mechanisms for this embedding. The API makes it very easy to retrofit it to command-line C programs using argv, as it has calls for converting native Tcl data structures to and from char**. It also has a GUI toolkit called Tk that is somewhat basic but very easy to use and substantially more flexible than you might think.
In your case, the first of the two mechanisms you would probably use is embedding, where you just call main with the arguments passed from your Tcl program. The other is to fork the process with appropriate command-line args and wait for it to complete. Both are fairly easy to accomplish with Tcl.
I'm not aware of any Qt bindings for Tcl, but it is very portable, and Tk can be themed these days so it doesn't look like a 1990-vintage Motif app.
See this posting for a more in-depth discussion of the topic.
Do you want to call ffmpeg from within your application? If so, look at QProcess. You can even capture the stdout and stderr streams from the ffmpeg process and use that information to (for example) drive a progress bar or display errors.
If you actually want to embed one GUI application inside another, that's a lot harder, especially to do in a platform independent manner.
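A small sketch of the QProcess route (Qt 5 style); the ffmpeg arguments are placeholders, and in a real GUI the lambdas would update a QProgressBar or a log view instead of printing.

    #include <QCoreApplication>
    #include <QProcess>
    #include <QDebug>

    int main(int argc, char *argv[]) {
        QCoreApplication app(argc, argv);

        QProcess ffmpeg;
        QObject::connect(&ffmpeg, &QProcess::readyReadStandardError, [&] {
            // ffmpeg writes progress to stderr; parse "time=..." here to
            // drive a progress bar.
            qDebug().noquote() << ffmpeg.readAllStandardError();
        });
        QObject::connect(&ffmpeg,
                         QOverload<int, QProcess::ExitStatus>::of(&QProcess::finished),
                         [&](int code, QProcess::ExitStatus) {
                             qDebug() << "ffmpeg finished with code" << code;
                             app.quit();
                         });

        ffmpeg.start("ffmpeg",
                     { "-i", "input.avi", "-c:v", "libx264", "output.mp4" });
        return app.exec();
    }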
The Red Hat folks use Python and pyGTK to write their CLI GUIs.
Blog posting: http://www.oreillynet.com/onlamp/blog/2008/02/red_hats_emerging_technology_g.html

Low-overhead I/O monitoring on Windows

I would like a low-overhead method of monitoring the I/O of a Windows process.
I got several useful answers to Monitoring certain system calls done by a process in Windows. The most promising was about using the Windows Performance Toolkit to get a kernel event trace. All the necessary information can indeed be pulled from there, but the WPT is massive overkill for what I need, and consequently its overhead is prohibitive.
My idea was to implement an alternative approach to detecting C/C++ dependency graphs. Usually this is done by passing an option to the compiler (-M, for example). This works fine for compilers and tools which have such an option, but not all of them do, and those that do often implement it differently. So, I implemented an alternative way of doing this on Linux, using strace to detect which files are opened. Running gcc (for example) this way has roughly 50% overhead (ballpark figure), and I was hoping to figure out a way to do this on Windows with similar overhead.
The xperf set of tools have two issues which prevents me from using them in this case:
There is no way to monitor file-I/O events for a single process; I have to use the kernel event trace which traces every single process and thus generates huge amounts of data (15Mb for the time it takes to run gcc, YMMV).
As a result of having to use the kernel event trace, I have to run as administrator.
I really don't need events at the kernel level; I suppose I could manage just as well if I could just monitor, say, the Win32 API call CreateFile(), and possibly CreateProcess() if I want to catch forked processes.
Any clever ideas?
Use API hooking. Hooking NtCreateFile and a few other calls in ntdll should be enough. I've had good experience using easyhook as a framework to do the hooking itself - free and open source. Even supports managed hooking (c# etc) if you wanted to do that. It's quite easy to set up.
It's located at http://easyhook.codeplex.com
Edit: By the way, Detours does not allow 64-bit hooking (unless you buy a license for a nominal price of US$10,000).
EasyHook does not allow native hooks across a WOW64 boundary. It allows managed hooking across WOW64 boundaries though.
I used Microsoft's Detours in the past to track memory allocations by intercepting particular API calls. You could use it to track CreateFile and CreateProcess.
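For illustration, a hook DLL along those lines might look like this with Detours; it simply logs every CreateFileW call and is assumed to be injected into (or linked by) the process being traced.

    #define WIN32_LEAN_AND_MEAN
    #include <windows.h>
    #include <detours.h>
    #include <cstdio>

    // Pointer to the real CreateFileW; Detours redirects it to a trampoline.
    static HANDLE (WINAPI *TrueCreateFileW)(LPCWSTR, DWORD, DWORD,
        LPSECURITY_ATTRIBUTES, DWORD, DWORD, HANDLE) = CreateFileW;

    static HANDLE WINAPI HookedCreateFileW(LPCWSTR name, DWORD access, DWORD share,
        LPSECURITY_ATTRIBUTES sa, DWORD disp, DWORD flags, HANDLE tmpl)
    {
        // Record the file name (a real tracer would write to a log file or pipe).
        fwprintf(stderr, L"CreateFileW: %s\n", name ? name : L"(null)");
        return TrueCreateFileW(name, access, share, sa, disp, flags, tmpl);
    }

    BOOL WINAPI DllMain(HINSTANCE, DWORD reason, LPVOID)
    {
        if (reason == DLL_PROCESS_ATTACH) {
            DetourTransactionBegin();
            DetourUpdateThread(GetCurrentThread());
            DetourAttach(&(PVOID&)TrueCreateFileW, HookedCreateFileW);
            DetourTransactionCommit();
        } else if (reason == DLL_PROCESS_DETACH) {
            DetourTransactionBegin();
            DetourUpdateThread(GetCurrentThread());
            DetourDetach(&(PVOID&)TrueCreateFileW, HookedCreateFileW);
            DetourTransactionCommit();
        }
        return TRUE;
    }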
It seems like Dr. Memory's System Call Tracer for Windows is exactly what I was looking for. It is basically a strace implementation for Windows.

Resources