OpenCL command queue creation on OS X

I'm surprised by the behaviour of clCreateCommandQueue() on my MacBook Pro running OpenCL 1.2.
I can supply the CL_QUEUE_PROFILING_ENABLE queue property without a problem.
But if I try to set the CL_QUEUE_OUT_OF_ORDER_EXEC_MODE_ENABLE property, the queue fails to be created.
I could understand if it were to fail with CL_INVALID_QUEUE_PROPERTIES, according to the API documentation. Yet it fails with CL_INVALID_VALUE, which makes no sense: it claims the property value is invalid, rather than merely unsupported by the device.
This happens on both the Iris GPU device, and the Intel CPU device.
The code:
context = clCreateContext( 0, 1, &device_id, opencl_notify, NULL, &err );
CHECK_CL
if ( !context )
{
    LOGE( "Failed to create CL context. err=0x%x", err );
    return 0;
}
cl_command_queue_properties queue_properties =
    CL_QUEUE_OUT_OF_ORDER_EXEC_MODE_ENABLE |
    CL_QUEUE_PROFILING_ENABLE |
    0;
commands = clCreateCommandQueue( context, device_id, queue_properties, &err );
CHECK_CL
The output:
Found 1 OpenCL platforms.
Platform FULL_PROFILE OpenCL 1.2 (Sep 20 2014 22:01:02) Apple Apple had 2 devices:
Intel(R) Core(TM) i5-4278U CPU @ 2.60GHz Intel(R) Core(TM) i5-4278U CPU @ 2.60GHz with [4 units]
Iris Iris with [40 units]
ERR OpenCL called back with error: [CL_INVALID_VALUE] : OpenCL Error : clCreateCommandQueue failed: Device failed to create queue (cld returned: -35).
ERR OpenCL called back with error: [CL_INVALID_VALUE] : OpenCL Error : clCreateCommandQueue failed: Device failed to create queue: -30
CL_INVALID_VALUE
ERR Failed to create a command queue. err=0xffffffe2

I believe clGetDeviceInfo() with CL_DEVICE_QUEUE_PROPERTIES on OS X will return CL_QUEUE_PROFILING_ENABLE but not CL_QUEUE_OUT_OF_ORDER_EXEC_MODE_ENABLE, so out-of-order execution is apparently not supported on these devices.
The confusing error code (CL_INVALID_VALUE rather than CL_INVALID_QUEUE_PROPERTIES) could be a bug.
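To confirm this on your own devices, you can query the property bitmask directly; here is a minimal sketch in plain C, assuming the same device_id as in the question:

/* Query which command-queue properties the device supports. */
#include <stdio.h>
#ifdef __APPLE__
#include <OpenCL/opencl.h>
#else
#include <CL/cl.h>
#endif

cl_command_queue_properties supported_queue_properties(cl_device_id device_id)
{
    cl_command_queue_properties props = 0;
    clGetDeviceInfo(device_id, CL_DEVICE_QUEUE_PROPERTIES,
                    sizeof(props), &props, NULL);
    if (props & CL_QUEUE_OUT_OF_ORDER_EXEC_MODE_ENABLE)
        printf("out-of-order execution supported\n");
    if (props & CL_QUEUE_PROFILING_ENABLE)
        printf("profiling supported\n");
    return props;
}

Masking queue_properties against the returned bitmask before calling clCreateCommandQueue() avoids the failure regardless of which error code the platform reports for unsupported properties.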

Related

Derive qsv hwdevice from D3D11VA hwdevice using FFMPEG

I'm trying to derive a QSV hwcontext from a D3D11VA device in order to encode D3D11 frames, but I'm getting an error when calling av_hwdevice_ctx_create_derived.
buffer_t ctx_buf { av_hwdevice_ctx_alloc(AV_HWDEVICE_TYPE_D3D11VA) };
auto ctx = (AVD3D11VADeviceContext *)((AVHWDeviceContext *)ctx_buf->data)->hwctx;
std::fill_n((std::uint8_t *)ctx, sizeof(AVD3D11VADeviceContext), 0);
auto device = (ID3D11Device *)hwdevice_ctx->data;
device->AddRef();
ctx->device = device;
ctx->lock_ctx = (void *)1;
ctx->lock = do_nothing;
ctx->unlock = do_nothing;
auto err = av_hwdevice_ctx_init(ctx_buf.get());
and then I call
av_hwdevice_ctx_create_derived(&derive_hw_device_ctx, AV_HWDEVICE_TYPE_QSV, ctx_buf.get(), 0);
I'm seeing this in the log:
[AVHWDeviceContext @ 000001de119a9b80] Initialize MFX session: API version is 1.35, implementation version is 1.30
[AVHWDeviceContext @ 000001de119a9b80] Error setting child device handle: -16
Please let me know if you have any idea how to fix this, or know of a different approach to encoding D3D11 frames with the QSV encoder.
Thank you.
OS: Windows 10 64-bit
CPU: Intel i5-8400
Graphics card: Nvidia GT1030 (has no hw encoder)
Adding this to the D3D11 device solved the issue:
ID3D10Multithread *pMultithread;
status = device->QueryInterface(IID_ID3D10Multithread, (void **)&pMultithread);
if (SUCCEEDED(status)) {
    pMultithread->SetMultithreadProtected(TRUE);
    pMultithread->Release();
}
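For reference, here is a minimal sketch (plain C, error handling trimmed, function name hypothetical) of the same flow when FFmpeg is allowed to create and initialize the D3D11 device itself before the QSV derivation; recent FFmpeg versions appear to enable multithread protection on devices they create this way:

#include <libavutil/buffer.h>
#include <libavutil/hwcontext.h>

int make_qsv_from_d3d11(AVBufferRef **qsv_ref)
{
    AVBufferRef *d3d11_ref = NULL;
    int err;

    /* NULL device string picks the default adapter. */
    err = av_hwdevice_ctx_create(&d3d11_ref, AV_HWDEVICE_TYPE_D3D11VA,
                                 NULL, NULL, 0);
    if (err < 0)
        return err;

    /* Derive an MFX (QSV) session that shares the D3D11 device. */
    err = av_hwdevice_ctx_create_derived(qsv_ref, AV_HWDEVICE_TYPE_QSV,
                                         d3d11_ref, 0);
    av_buffer_unref(&d3d11_ref);
    return err;
}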

Linux kernel Hardware breakpoint registration failed with EACCES

I am using the code below in my driver, but register_wide_hw_breakpoint returns an EACCES error.
hw_breakpoint_init(&attr);
attr.bp_addr = kallsyms_lookup_name(ksym_name);
attr.bp_len = HW_BREAKPOINT_LEN_4;
attr.bp_type = HW_BREAKPOINT_W | HW_BREAKPOINT_R;
sample_hbp = register_wide_hw_breakpoint(&attr, sample_hbp_handler, NULL);
if (IS_ERR((void __force *)sample_hbp)) {
    ret = PTR_ERR((void __force *)sample_hbp);
    printk(KERN_INFO "Breakpoint registration failed:%d\n", ret);
}
What could be the possible reasons for this? Am I missing a CONFIG option which would grant access to the hardware registers?
Additional info: My platform is X86_64, Linux
Please help.

Why does pcap_sendpacket fail on Thunderbolt interface?

In a multi-platform project I am using pcap to get a list of all network interfaces, open each (the user cannot select which interfaces to use), and send/receive packets (Ethernet type 0x88e1/HomePlugAV) on each. This works fine on Windows and on Mac OS X, but on Mac OS X pcap_sendpacket sometimes fails after some time on the interface that networksetup -listallhardwareports lists as "Hardware Port: Thunderbolt 1". The error is:
send: No buffer space available
When the program is run right after the machine has booted, it takes some time until the error occurs. Once the error has occurred and I stop my program, the error occurs again immediately when I restart the program without rebooting the machine.
ifconfig -v en9:
en9: flags=8963<UP,BROADCAST,SMART,RUNNING,PROMISC,SIMPLEX,MULTICAST> mtu 1500 index 8
eflags=80<TXSTART>
options=60<TSO4,TSO6>
ether b2:00:1e:94:9b:c1
media: autoselect <full-duplex>
status: inactive
type: Ethernet
scheduler: QFQ
networksetup -listallhardwareports (only the relevant parts):
Hardware Port: Thunderbolt 1
Device: en9
Ethernet Address: b2:00:1e:94:9b:c1
Tests show that on OS X 10.9 the interface is not up initially, but on OS X 10.9.2 and 10.9.3 the interface is up and running after booting.
On OS X 10.9 ifconfig initially says:
en5: flags=8822<BROADCAST,SMART,SIMPLEX,MULTICAST> mtu 1500 index 8
After ifconfig en5 up the problematic behavior is the same on OS X 10.9.
Why does pcap_sendpacket fail on the Thunderbolt adapter?
How can my program detect that this is a troubling interface before opening it? I know I could open the interface and try to send one packet, but I'd prefer to do a clean detection beforehand.
As a workaround, you can ignore the "Thunderbolt 1" interface:
#include <stdio.h>
#include <string.h>
#include <pcap/pcap.h>
#include <CoreFoundation/CoreFoundation.h>
#include <SystemConfiguration/SCNetworkConfiguration.h>

const char thunderbolt[] = "Thunderbolt 1";

// Build with -framework CoreFoundation -framework SystemConfiguration
int main(int argc, char * argv[])
{
    // See: https://opensource.apple.com/source/configd/configd-596.13/SystemConfiguration.fproj/SCNetworkInterface.c
    // get Ethernet, Firewire, Thunderbolt, and AirPort interfaces
    CFArrayRef niArrayRef = SCNetworkInterfaceCopyAll();
    // Find the Thunderbolt interface
    char thunderboltInterface[4] = "";
    if(niArrayRef) {
        CFIndex cnt = CFArrayGetCount(niArrayRef);
        for(CFIndex idx = 0; idx < cnt; ++idx) {
            SCNetworkInterfaceRef tSCNetworkInterfaceRef = (SCNetworkInterfaceRef)CFArrayGetValueAtIndex(niArrayRef, idx);
            if(tSCNetworkInterfaceRef) {
                CFStringRef BSDName = SCNetworkInterfaceGetBSDName(tSCNetworkInterfaceRef);
                const char * interfaceName = (BSDName == NULL) ? "none" : CFStringGetCStringPtr(BSDName, kCFStringEncodingUTF8);
                CFStringRef localizedDisplayName = SCNetworkInterfaceGetLocalizedDisplayName(tSCNetworkInterfaceRef);
                const char * interfaceType = (localizedDisplayName == NULL) ? "none" : CFStringGetCStringPtr(localizedDisplayName, kCFStringEncodingUTF8);
                printf("%s : %s\n", interfaceName, interfaceType);
                if(strcmp(interfaceType, thunderbolt) == 0) {
                    // Make a copy this time
                    CFStringGetCString(BSDName, thunderboltInterface, sizeof(thunderboltInterface), kCFStringEncodingUTF8);
                }
            }
        }
        CFRelease(niArrayRef);
    }
    printf("%s => %s\n", thunderbolt, thunderboltInterface);
    return 0;
}
I'm guessing from
When the program is run right after the machine has booted, it takes some time until the error occurs. Once the error has occurred and I stop my program, the error occurs again immediately when I restart the program without rebooting the machine.
that what's probably happening here is that the interface isn't active, so packets given to it to send aren't transmitted (and the mbuf(s) for them freed), and aren't discarded, but are, instead, just left in the interface's queue to be transmitted. Eventually either the queue fills up or an attempt to allocate some resource for the packet fails, and the interface's driver returns an ENOBUFS error.
This is arguably an OS X bug.
From
In a multi platform project I am using pcap to get a list of all network interfaces, open each (user cannot select which interfaces to use) and send/receive packets (Ethernet type 0x88e1/HomePlugAV) on each.
I suspect you aren't sending on all interfaces; not all interfaces have a link-layer header type that has an Ethernet type field - for example, lo0 doesn't.
If you're constructing Ethernet packets, you would only want to send on interfaces with a link-layer header type (as returned by pcap_datalink()) of DLT_EN10MB ("10MB" is a historical artifact; it refers to all Ethernet types except for the old experimental 3MB Xerox Ethernet, which had a different link-layer header).
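Here is a minimal sketch (plain C) of that check, briefly opening each device and keeping only those whose pcap_datalink() reports DLT_EN10MB:

#include <stdio.h>
#include <pcap/pcap.h>

int main(void)
{
    char errbuf[PCAP_ERRBUF_SIZE];
    pcap_if_t *devs = NULL, *d;

    if (pcap_findalldevs(&devs, errbuf) == -1) {
        fprintf(stderr, "pcap_findalldevs: %s\n", errbuf);
        return 1;
    }
    for (d = devs; d != NULL; d = d->next) {
        pcap_t *p = pcap_open_live(d->name, 65535, 0, 100, errbuf);
        if (p == NULL)
            continue;   /* could not open this interface; skip it */
        if (pcap_datalink(p) == DLT_EN10MB)
            printf("%s: Ethernet framing, OK for 0x88e1 frames\n", d->name);
        else
            printf("%s: not Ethernet, skipping\n", d->name);
        pcap_close(p);
    }
    pcap_freealldevs(devs);
    return 0;
}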
You probably also don't want to bother with interfaces that aren't "active" in some sense (some sense other than "is up"); unfortunately, there's no platform-independent API to determine that, so you're going to have to fall back on #ifdefs here. That would probably rule out interfaces where the packets would pile up unsent and eventually cause an ENOBUFS error.
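On OS X/BSD specifically, one way to approximate that "active" check (behind an #ifdef in a multi-platform build) is the SIOCGIFMEDIA ioctl that ifconfig itself uses for the "status: inactive" line shown above; a sketch, with the helper name being my own:

#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <sys/sockio.h>
#include <net/if.h>
#include <net/if_media.h>

/* Returns 1 if the interface reports a valid, active media status; 0 otherwise. */
int link_is_active(const char *ifname)
{
    struct ifmediareq ifmr;
    int active = 0;
    int s = socket(AF_INET, SOCK_DGRAM, 0);
    if (s < 0)
        return 0;
    memset(&ifmr, 0, sizeof(ifmr));
    strncpy(ifmr.ifm_name, ifname, sizeof(ifmr.ifm_name) - 1);
    if (ioctl(s, SIOCGIFMEDIA, &ifmr) == 0)
        active = (ifmr.ifm_status & IFM_AVALID) && (ifmr.ifm_status & IFM_ACTIVE);
    close(s);
    return active;
}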

Mac OS and Canon EDSDK [take picture error 36103]

I have Lazarus installed on Mac OS X 10.6.8 and I'm trying to take a picture using the Canon EDSDK.
The problem I'm facing is that after setting the parameter to save photos to the host:
saveTo := Integer(EdsSaveTo.kEdsSaveTo_Host);
err := EdsSetPropertyData(camera, kEdsPropID_SaveTo, 0, SizeOf(saveTo), @saveTo);
and setting capacity of free disk space:
capacity.numberOfFreeClusters := $7FFFFFFF;
capacity.bytesPerSector := $1000;
capacity.reset := 1;
err := EdsSetCapacity(camera, capacity);
I'm taking a picture with:
err := EdsSendCommand(camera, kEdsCameraCommand_TakePicture, 0);
and I'm getting error code 36103, which is "PC FULL" (also shown on the camera LCD).
Any advice on how to set the camera capacity in Pascal on Mac OS X?
I have an example in Objective-C (as an Xcode project, where the above works as designed):
EdsCapacity capacity = {0x7FFFFFFF, 0x1000, 1};
error = EdsSetCapacity([_model camera], capacity);
But I can't get it to work in Lazarus ;(
Any suggestions or experience?
Cheers
It seems hex(36103) => 0x8D07, which resolves to the EDSDK error EDS_ERR_TAKE_PICTURE_CARD_NG. Reported issues around this error mention the same fix you describe, so it must be a Lazarus-specific issue.
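For comparison, here is a minimal sketch of the same sequence in plain C against the EDSDK headers (session setup with EdsOpenSession etc. is assumed to have been done already; the helper name is hypothetical):

#include "EDSDK.h"

EdsError save_to_host_and_shoot(EdsCameraRef camera)
{
    EdsError err;
    EdsUInt32 saveTo = kEdsSaveTo_Host;
    /* free clusters, bytes per sector, reset flag - as in the Objective-C example */
    EdsCapacity capacity = { 0x7FFFFFFF, 0x1000, 1 };

    err = EdsSetPropertyData(camera, kEdsPropID_SaveTo, 0, sizeof(saveTo), &saveTo);
    if (err == EDS_ERR_OK)
        err = EdsSetCapacity(camera, capacity);
    if (err == EDS_ERR_OK)
        err = EdsSendCommand(camera, kEdsCameraCommand_TakePicture, 0);
    return err;
}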

Segmentation fault happens when trying to free OCIEnv structure after failure of OCI environment setup with OCI_THREADED option

Summary
Segmentation fault happens when trying to free the OCIEnv structure after a failure of OCI environment setup with the OCI_THREADED option (failure due to e.g. a misconfigured NLS_LANG environment variable).
When OCIEnvCreate is called without the OCI_THREADED option, the example code does not crash; it works as expected.
Example code
#include <oci.h>
#include <stdio.h>
#include <string.h>

int my_connect(const char *username, const char *password, const char *sid)
{
    OCIEnv *env = NULL;
    OCIError *err = NULL;
    OCISvcCtx *svc = NULL;

    if ( OCIEnvCreate(&env,
                      OCI_THREADED,
                      (dvoid *)0,
                      0,
                      0,
                      0,
                      (size_t)0,
                      (dvoid **)0) )
    {
        fprintf(stderr, "unable to initialize environment\n");
        if ( env )
        {
            printf("env:[%p]\n", env);
            OCIHandleFree(env, OCI_HTYPE_ENV); // segfault.
        }
        return -1;
    }
    printf("env:[%p]\n", env);
    if ( OCIHandleAlloc((dvoid *)env,
                        (dvoid **)&err,
                        OCI_HTYPE_ERROR,
                        (size_t)0,
                        (dvoid **)0) )
    {
        fprintf(stderr, "unable to alloc error handlers\n");
        goto error;
    }
    if ( OCIHandleAlloc((dvoid *)env,
                        (dvoid **)&svc,
                        OCI_HTYPE_SVCCTX,
                        (size_t)0,
                        (dvoid **)0) )
    {
        fprintf(stderr, "unable to allocate service handlers\n");
        goto error;
    }
    if ( OCILogon(env,
                  err,
                  &svc,
                  (CONST OraText *) username,
                  strlen(username),
                  (CONST OraText *) password,
                  strlen(password),
                  (CONST OraText *) sid,
                  strlen(sid)
                  ) )
    {
        fprintf(stderr, "login failed\n");
        goto error;
    }
    printf("logged in\n");
    if ( OCILogoff(svc, err) )
    {
        fprintf(stderr, "logoff failed\n");
        goto error;
    }
    printf("logged out\n");

error:
    if ( err )
        OCIHandleFree(err, OCI_HTYPE_ERROR);
    if ( svc )
        OCIHandleFree(svc, OCI_HTYPE_SVCCTX);
    if ( env )
        OCIHandleFree(env, OCI_HTYPE_ENV);
    return 0;
}

int main()
{
    return my_connect("test_user", "qqq123", "XE");
}
Before running:
export NLS_LANG=x
Stack trace
The problem is that __pthread_mutex_destroy is called with a NULL-pointer.
#0 __pthread_mutex_destroy (mutex=0x0) at pthread_mutex_destroy.c:28
#1 0x00007ffff585e6e0 in sltsmxd () from /lib/libclntsh.so.11.1
#2 0x00007ffff56a147c in kpufhndl0 () from /lib/libclntsh.so.11.1
#3 0x00007ffff56a0185 in kpufhndl () from /lib/libclntsh.so.11.1
#4 0x00007ffff567cac1 in OCIHandleFree () from /lib/libclntsh.so.11.1
#5 0x0000000000400a0c in my_connect (username=0x400dd1 "test_user", password=0x400dca "qqq123", sid=0x400dc7 "XE") at test2.c:24
#6 0x0000000000400c27 in main () at test2.c:84
Product details
Basic Lite Package Information
Thu Oct 4 13:00:49 UTC 2007
Client Shared Library 64-bit - 11.1.0.6.0
System name: Linux
Release: 2.6.9-34.0.1.0.11.ELsmp
Version: #1 SMP Mon Dec 4 22:20:39 UTC 2006
Machine: x86_64
OS details
Linux 3.2.0-37-generic #58-Ubuntu SMP Thu Jan 24 15:28:10 UTC 2013 x86_64 GNU/Linux
Distributor ID: Ubuntu
Description: Ubuntu 10.04.4 LTS
Release: 10.04
Codename: lucid
Question
At the moment I simply do not free that memory area, but this is not a good solution.
What do you think would be a good solution?
This looks like it might be a bug in the 'Basic Lite' instant client, though I can't see anything relevant on MOS; but if so then in OCIEnvCreate(), not OCIHandleFree() as I think you suggest.
None of the example code I've seen tries to clean up the OCIEnv when it encounters a failure of OCIEnvCreate(); it seems to always just exit, including Oracle's own example code. It looks like that function is getting as far as creating the OCIEnv structure since you get a pointer to it, but presumably hasn't allocated the internals of that. Since it's in an indeterminate state, trying to clean it up will probably be a thankless task. So it seems like it ought to be OK to just not call OCIHandleFree().
I was able to recreate the problem using the 11.2.0.3.0 Basic Lite client package for Linux x86-64 (from TechNet downloads), under Oracle Enterprise Linux 5.6. When using the Basic (non-Lite) client package the problem was not seen - no memory fault, but also no 'unable to initialize environment' message, so the OCIEnvCreate() call succeeds. It does then fail to log in though, which is probably more reasonable anyway.
That suggests that you shouldn't expect the function to fail because of the NLS_LANG value - that part looks like a bug. If it does fail for some other reason then don't try to clean up. I can't see anything suggesting the SDK isn't expected to work with the Lite package, but you're forcing a failure and then trying to clean up beyond the norm, so the combination might not have been seen often before. Even without the clean-up triggering the memory fault, though, it looks like it's failing at the wrong point.
To get past the apparent bug and get the failure at the correct point, during login, you may need to switch to the Basic (non-Lite) client package.
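Along those lines, here is a minimal sketch (plain C, helper name hypothetical) of the failure handling suggested above: treat a failed OCIEnvCreate() as fatal and do not attempt OCIHandleFree() on the half-initialized handle:

#include <oci.h>
#include <stdio.h>

OCIEnv *create_env_or_bail(void)
{
    OCIEnv *env = NULL;

    if ( OCIEnvCreate(&env, OCI_THREADED,
                      (dvoid *)0, 0, 0, 0, (size_t)0, (dvoid **)0) )
    {
        fprintf(stderr, "unable to initialize environment\n");
        /* Deliberately no OCIHandleFree(env, OCI_HTYPE_ENV) here:
         * the handle is in an indeterminate state. */
        return NULL;
    }
    return env;
}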
