PonyLang Windows CreateProcess FFI

I've been trying to call Windows' CreateProcessA through Pony's FFI.
I created both a C and a Pony example. The C example works great:
#include <windows.h>
#include <stdio.h>
#include <tchar.h>

void wmain(void) {
    STARTUPINFO info = {0};
    PROCESS_INFORMATION processInfo = {0};
    BOOL status = CreateProcessA("calc.exe", 0, 0, 0, 0, 0, 0, 0, &info, &processInfo);
    if (status == 0)
        printf("%d", GetLastError()); // never hits
}
I put calc.exe in the current directory, and this works flawlessly on Windows.
However, my Pony implementation keeps returning a nonzero GetLastError:
use "lib:kernel32"
primitive _ProcessAttributes
primitive _ThreadAttributes
primitive _Inherit
primitive _Creation
primitive _Environment
primitive _CurrentDir
primitive _StartupInfo
primitive _ProcessInfo
primitive _HandleIn
primitive _HandleOut
primitive _HandleErr
primitive _Thread
primitive _Process
struct StartupInfo
var cb:I32 = 0
var lpReserved:Pointer[U8] tag= "".cstring()
var lpDesktop:Pointer[U8] tag= "".cstring()
var lpTitle:Pointer[U8] tag= "".cstring()
var dwX:I32 = 0
var dwY:I32 = 0
var dwXSize:I32=0
var dwYSize:I32=0
var dwXCountChars:I32=0
var dwYCountChars:I32=0
var dwFillAttribute:I32=0
var dwFlags:I32=0
var wShowWindow:I16=0
var cbReserved2:I16=0
var lpReserved2:Pointer[U8] tag="".cstring()
var hStdInput:Pointer[_HandleIn] = Pointer[_HandleIn]
var hStdOutput:Pointer[_HandleOut]= Pointer[_HandleOut]
var hStdError:Pointer[_HandleErr]= Pointer[_HandleErr]
struct ProcessInfo
var hProcess:Pointer[_Process] = Pointer[_Process]
var hThread:Pointer[_Thread] = Pointer[_Thread]
var dwProcessId:I32 = 0
var dwThreadId:I32 = 0
//var si:StartupInfo = StartupInfo
actor Main
new create(env: Env) =>
var si: StartupInfo = StartupInfo
var pi: ProcessInfo = ProcessInfo
var inherit:I8 = 0
var creation:I32 = 0
var one:I32 = 0
var two:I32 = 0
var three:I32 = 0
var four:I32 = 0
var z:I32 = 0
var p = #CreateProcessA[I8]("calc.exe",
z,
one,
two,
inherit,
creation,
three,
four,
addressof si,
addressof pi)
if p == 0 then
var err = #GetLastError[I32]() // hits this every time.
env.out.print("Last Error: " + err.string())
end
So the above code compiles, but GetLastError usually returns 2; sometimes it returns 123, and other times 998.
It seems odd that the error code varies between runs. Those codes (2 = ERROR_FILE_NOT_FOUND, 123 = ERROR_INVALID_NAME, 998 = ERROR_NOACCESS) all point at some issue with file or memory access.
calc.exe is in the current directory (the same directory as for the C example).
Not only is the error code different, but calc.exe actually launches and runs fine in the C version and never in the Pony version. This leads me to believe something is off with my Pony FFI setup.
Does anyone know what may be wrong?

The problem is with your use of addressof. When you create a struct object, e.g. with var si = StartupInfo, the underlying type is already a pointer to the structure (i.e. structs in Pony don't have value semantics). So when you call CreateProcessA with addressof, you're actually passing a pointer to a pointer to the function.
If your C function expects a pointer to a structure, you can simply pass the Pony object without addressof when doing the FFI call.
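For example, with the declarations from the question, the call becomes (a minimal sketch; z, one, two, etc. are the placeholder arguments from above):

var p = @CreateProcessA[I8]("calc.exe",
  z, one, two, inherit, creation, three, four,
  si,  // a Pony struct reference is already a pointer to the struct
  pi)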

Related

bpf_prog_test_run() causes unexpected packet data

I am trying to perform a test run of an XDP BPF program. The BPF program uses the bpf_xdp_adjust_meta() helper to adjust the metadata.
I tried:
to run bpf_prog_test_run()
to run bpf_prog_test_run_xattr()
1. bpf_prog_test_run()
(The first time I tried, my BPF program's debug messages told me that adjusting the data_meta field failed.) Now it can adjust data_meta, but the iph.ihl field is apparently not set to 5.
2. bpf_prog_test_run_xattr()
This always returns -1, so something failed.
The Code
packet:
struct ipv4_packet pkt_v4 = {
    .eth.h_proto = __bpf_constant_htons(ETH_P_IP),
    .iph.ihl = 5,
    .iph.daddr = __bpf_constant_htonl(33554442),
    .iph.saddr = __bpf_constant_htonl(50331658),
    .iph.protocol = IPPROTO_TCP,
    .iph.tot_len = __bpf_constant_htons(MAGIC_BYTES),
    .tcp.urg_ptr = 123,
    .tcp.doff = 5,
};
test attribute:
__u32 size, retval, duration;
char data_out[128];
struct xdp_md ctx_in, ctx_out;
struct bpf_prog_test_run_attr test_attr = {
    .prog_fd = prog_fd,
    .repeat = 100,
    .data_in = &pkt_v4,
    .data_size_in = sizeof(&pkt_v4),
    .data_out = &data_out,
    .data_size_out = sizeof(data_out),
    .ctx_in = &ctx_in,
    .ctx_size_in = sizeof(ctx_in),
    .ctx_out = &ctx_out,
    .ctx_size_out = sizeof(ctx_out),
    .retval = &retval,
    .duration = &duration,
};
test execution:
bpf_prog_test_run(main_prog_fd, 1, &pkt_v4, sizeof(pkt_v4), &data_out, &size, &retval, &duration) -> iph.ihl field is 0.
bpf_prog_test_run_xattr(&test_attr) -> returns -1.
Note
The program was already successfully attached to the hook point of a real network interface and ran as intended; I only replaced the code that attaches the program to the hook point with the test code above.

The struct ipv4_packet pkt_v4 was not packed.
When I replaced __packed with __attribute__((__packed__)), it worked.
For information on what happens without packing, see for example this question.
Basically, the compiler adds padding bytes, which puts the fields of the packet at different offsets than expected.
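For reference, the packed definition from the kernel selftests (where struct ipv4_packet comes from) looks roughly like this:

// Packed so eth, iph and tcp sit back to back, exactly as on the wire;
// without it the compiler is free to insert padding between the headers.
struct ipv4_packet {
    struct ethhdr eth;
    struct iphdr iph;
    struct tcphdr tcp;
} __attribute__((__packed__));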

Why don't I get the result of this Metal kernel

I am trying to understand how Metal compute shaders work, so I have written this code:
class AppDelegate: NSObject, NSApplicationDelegate {
    var number: Float!
    var buffer: MTLBuffer!

    func applicationDidFinishLaunching(aNotification: NSNotification) {
        let metalDevice = MTLCreateSystemDefaultDevice()!
        let library = metalDevice.newDefaultLibrary()!
        let commandQueue = metalDevice.newCommandQueue()
        let commandBuffer = commandQueue.commandBuffer()
        let commandEncoder = commandBuffer.computeCommandEncoder()
        let pointlessFunction = library.newFunctionWithName("pointless")!
        let pipelineState = try! metalDevice.newComputePipelineStateWithFunction(pointlessFunction)
        commandEncoder.setComputePipelineState(pipelineState)

        number = 12
        buffer = metalDevice.newBufferWithBytes(&number, length: sizeof(Float), options: MTLResourceOptions.StorageModeShared)
        commandEncoder.setBuffer(buffer, offset: 0, atIndex: 0)
        commandEncoder.endEncoding()
        commandBuffer.commit()
        commandBuffer.waitUntilCompleted()

        let data = NSData(bytesNoCopy: buffer.contents(), length: sizeof(Float), freeWhenDone: false)
        var newResult: Float = 0
        data.getBytes(&newResult, length: sizeof(Float))
        print(newResult)
    }
}
By creating the buffer with StorageModeShared, I want changes made to the Metal buffer to be reflected in my Swift code, but when I populate my newResult variable, the buffer still holds the same value as at the beginning (12) when it should be 125:
#include <metal_stdlib>
using namespace metal;

kernel void pointless(device float* outData [[ buffer(0) ]]) {
    *outData = 125.0;
}

What am I doing wrong?
A kernel function doesn't run unless you dispatch it. I think you're assuming that if you have a function, Metal will run it once unless you say otherwise, but that won't happen; it simply won't run at all. Add this before endEncoding and you're good to go!
let size = MTLSize(width: 1, height: 1, depth: 1)
commandEncoder.dispatchThreadgroups(size, threadsPerThreadgroup: size)
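With that, the encoding sequence from the question becomes (a sketch using the same API names as above):

commandEncoder.setComputePipelineState(pipelineState)
commandEncoder.setBuffer(buffer, offset: 0, atIndex: 0)
// Without a dispatch, the encoder records no work and the kernel never executes.
let size = MTLSize(width: 1, height: 1, depth: 1)
commandEncoder.dispatchThreadgroups(size, threadsPerThreadgroup: size)
commandEncoder.endEncoding()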

couldn't find function symbol in library

Declaring SCardStatus function causes the error: "couldn't find function symbol in library"
The Code is as follows:
Cu.import('resource://gre/modules/ctypes.jsm');

var is64bit = ctypes.voidptr_t.size == 4 ? false : true;
var ifdef_UNICODE = true;

var TYPES = {
    ABI: is64bit ? ctypes.default_abi : ctypes.winapi_abi,
    CHAR: ctypes.char,
    DWORD: ctypes.uint32_t,
    LONG: ctypes.long,
    LPCVOID: ctypes.voidptr_t,
    ULONG_PTR: is64bit ? ctypes.uint64_t : ctypes.unsigned_long,
    WCHAR: ctypes.jschar,
};

TYPES.LPSTR = TYPES.CHAR.ptr;
TYPES.LPDWORD = TYPES.DWORD.ptr;
TYPES.LPWSTR = TYPES.WCHAR.ptr;
TYPES.SCARDHANDLE = TYPES.ULONG_PTR;
TYPES.LPBYTE = TYPES.LPSTR;
TYPES.LPTSTR = ifdef_UNICODE ? TYPES.LPWSTR : TYPES.LPSTR;

var cardLib = ctypes.open('Winscard');
var SCardStatus = cardLib.declare('SCardStatus', TYPES.ABI, TYPES.LONG,
    TYPES.SCARDHANDLE, TYPES.LPTSTR, TYPES.LPDWORD, TYPES.LPDWORD,
    TYPES.LPDWORD, TYPES.LPBYTE, TYPES.LPDWORD);
I guess that TYPES.LPBYTE is not correct. According to https://msdn.microsoft.com/en-us/library/windows/desktop/aa379803%28v=vs.85%29.aspx , the pbAtr argument should be a pointer to a 32-byte buffer that receives the ATR string from the currently inserted card, if available. However, I could not fix it; I would appreciate any help.
In the Windows API, functions that take strings come in two versions, a Unicode version and an ANSI version. The docs show this function accepts characters, and the page lists SCardStatusW (Unicode) and SCardStatusA (ANSI), so you have to declare it like this: var SCardStatus = cardLib.declare(ifdef_UNICODE ? 'SCardStatusW' : 'SCardStatusA', ....
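Spelled out against the TYPES table above, the declaration would look like this (a sketch; argument order per the MSDN page linked in the question):

// Winscard.dll exports SCardStatusA and SCardStatusW; there is no plain SCardStatus symbol.
var SCardStatus = cardLib.declare(
    ifdef_UNICODE ? 'SCardStatusW' : 'SCardStatusA',
    TYPES.ABI,
    TYPES.LONG,        // return value
    TYPES.SCARDHANDLE, // hCard
    TYPES.LPTSTR,      // szReaderName
    TYPES.LPDWORD,     // pcchReaderLen
    TYPES.LPDWORD,     // pdwState
    TYPES.LPDWORD,     // pdwProtocol
    TYPES.LPBYTE,      // pbAtr
    TYPES.LPDWORD      // pcbAtrLen
);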

BITMAPV5HEADER Size?

I'm trying to use BITMAPV5HEADER with CreateDIBSection to get a uint8[] in RGBA form. I'm doing this from js-ctypes, but this isn't really a ctypes question; I'll post the code so you can see it. It's a WinAPI thing.
I double-checked my struct, function, and type declarations, but I keep getting a GetLastError of 87 (ERROR_INVALID_PARAMETER) after calling CreateDIBSection. If I set bV5Size to the size of BITMAPINFOHEADER, which is 40, it works, but the header is then treated as a BITMAPINFOHEADER and the red, green, blue, and alpha masks have no effect; I still get BGRA.
So what is the size supposed to be for BITMAPV5HEADER on 32-bit and 64-bit? For me it's 124:
"ostypes.TYPE.BITMAPINFOHEADER.size:" 40
"ostypes.TYPE.BITMAPV5HEADER.size:" 124
This is my ctypes code just to show that everything is correct:
var bmi = ostypes.TYPE.BITMAPV5HEADER();
bmi.bV5Size = ostypes.TYPE.BITMAPV5HEADER.size;
bmi.bV5Width = nWidth; //w;
bmi.bV5Height = -1 * nHeight; //-1 * h; // top-down
bmi.bV5Planes = 1;
bmi.bV5BitCount = nBPP; //32;
bmi.bV5Compression = ostypes.CONST.BI_BITFIELDS;
bmi.bV5RedMask = ostypes.TYPE.DWORD('0x00FF0000');
bmi.bV5GreenMask = ostypes.TYPE.DWORD('0x0000FF00');
bmi.bV5BlueMask = ostypes.TYPE.DWORD('0x000000FF');
bmi.bV5AlphaMask = ostypes.TYPE.DWORD('0xFF000000'); // 0x00000000 for opaque, otherwise 0xff000000
var cBmi = ctypes.cast(bmi.address(), ostypes.TYPE.BITMAPINFO.ptr); // cBmi is now a pointer so no need to pass cBmi.address() to CreateDIBSection
var pixelBuffer = ostypes.TYPE.BYTE.ptr();
var hbmp = ostypes.API('CreateDIBSection')(hdcMemoryDC, cBmi, ostypes.CONST.DIB_RGB_COLORS, pixelBuffer.address(), null, 0);
This is ctypes, so I don't have to memset bmi after creation; by default it is zero-initialized.
Thanks

GetOpenFileNameW results in FNERR_INVALIDFILENAME, or CDERR_INITIALIZATION if I call GetOpenFileNameA

Here's the code using GetOpenFileNameW:
import core.sys.windows.windows;
import std.stdio, std.string, std.utf;
pragma(lib, "comdlg32");
// Fill in some missing holes in core.sys.windows.windows.
extern (Windows) DWORD CommDlgExtendedError();
enum OFN_FILEMUSTEXIST = 0x001000;
void main()
{
    auto buf = new wchar[1024];
    OPENFILENAMEW ofn;
    ofn.lStructSize = ofn.sizeof;
    ofn.lpstrFile = buf.ptr;
    ofn.nMaxFile = buf.length;
    ofn.lpstrInitialDir = null;
    ofn.Flags = OFN_FILEMUSTEXIST;

    BOOL retval = GetOpenFileNameW(&ofn);
    if (retval == 0) {
        // Get 0x3002 for W and 0x0002 for A. ( http://msdn.microsoft.com/en-us/library/windows/desktop/ms646916(v=vs.85).aspx )
        throw new Exception(format("GetOpenFileName failure: 0x%04X.", CommDlgExtendedError()));
    }
    writeln(buf);
}
This results in FNERR_INVALIDFILENAME, but I don't see any non-optional strings that I haven't filled in. And here's the code (only the differences shown) for GetOpenFileNameA:
auto buf = new char[1024];
OPENFILENAMEA ofn;
// ...
BOOL retval = GetOpenFileNameA(&ofn);
This results in CDERR_INITIALIZATION, and the only elaboration MSDN gives me is:
The common dialog box function failed during initialization.
This error often occurs when sufficient memory is not available.
This is on Windows 7 64 bit, DMD v2.059.
buf has to be zeroed completely. The problem here is that wchar.init == wchar.max (for error-detection reasons), so your array is essentially 1024 instances of wchar.max. A simple buf[] = 0; should fix that.
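Applied to the example above, that's one extra line after the allocation:

auto buf = new wchar[1024];
buf[] = 0; // clear the wchar.init (0xFFFF) fill before handing the buffer to the API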
