Why does SDL_GetWindowWMInfo() require SDL version? - sdl-2

According to the documentation for
SDL_bool SDL_GetWindowWMInfo(SDL_Window* window,
                             SDL_SysWMinfo* info)
the SDL_version version member of the struct that info points to must be initialized with the SDL_VERSION macro at compile time before the struct is passed in.
Why does SDL_GetWindowWMInfo require the SDL version the calling code was compiled against? What would happen if SDL_GetWindowWMInfo did not check the SDL version?

It's pretty much like keltar said. This is from SDL_windowswindow.c:
SDL_bool
WIN_GetWindowWMInfo(_THIS, SDL_Window * window, SDL_SysWMinfo * info)
{
    const SDL_WindowData *data = (const SDL_WindowData *) window->driverdata;
    if (info->version.major <= SDL_MAJOR_VERSION) {
        info->subsystem = SDL_SYSWM_WINDOWS;
        info->info.win.window = data->hwnd;
        info->info.win.hdc = data->hdc;
        return SDL_TRUE;
    } else {
        SDL_SetError("Application not compiled with SDL %d.%d\n",
                     SDL_MAJOR_VERSION, SDL_MINOR_VERSION);
        return SDL_FALSE;
    }
}
This function fills in a caller-provided struct. The danger is that this struct is liable to change as the platform support in SDL changes (as opposed to actual feature/API changes, which are more apparent through a new version number).
If the struct definition has changed between versions of SDL (say, you compile against updated headers but run against an old DLL), this requirement lets SDL detect the mismatch before it stomps on your program's memory.
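For reference, the documented calling pattern looks like this (a minimal sketch; window is assumed to be a valid SDL_Window*):
#include <SDL_syswm.h>

SDL_SysWMinfo info;
SDL_VERSION(&info.version);   /* record the compile-time SDL version */
if (SDL_GetWindowWMInfo(window, &info)) {
    /* platform handles are now valid, e.g. info.info.win.window on Windows */
} else {
    /* version mismatch or unsupported platform; see SDL_GetError() */
}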


Speech Recognition does not trigger EventHandler in C++/CLI DLL

I'm creating a TS3 plugin which uses
System::Speech::Recognition for its SpeechRecognitionEngine. I create an EventHandler for the SpeechRecognized event.
My logging (and the plugin's actions: none) tells me that the event is never triggered, even when I start and stop talking with the (somewhat) valid (parts of the) grammar.
I have no idea why this happens. It's a DLL written in C++/CLI.
My theory was that DLLs do not support event handlers... Might that be possible?
void recogn_speech() {
    uint64 schid = ts3Functions.getCurrentServerConnectionHandlerID();
    SpeechRecognitionEngine^ recognizer = gcnew SpeechRecognitionEngine();
    System::Speech::Recognition::Grammar^ g = assembleGrammar();
    recognizer->LoadGrammar(g);
    recognizer->SetInputToDefaultAudioDevice();
    char pluginPath[PATH_BUFSIZE];
    ts3Functions.getPluginPath(pluginPath, PATH_BUFSIZE, pluginID);
    String^ a = gcnew String(pluginPath);
    a = a + "vctest_cpp_ts3\\signal_vc_start.wav";
    char* newPath = (char*)(void*)Marshal::StringToHGlobalAnsi(a);
    ts3Functions.playWaveFile(schid, newPath);
    Marshal::FreeHGlobal((IntPtr)newPath);
    recognizer->SpeechRecognized +=
        gcnew EventHandler<SpeechRecognizedEventArgs^>(this, &tsapi::sre_SpeechRecognized);
}
void sre_SpeechRecognized(Object^ sender, SpeechRecognizedEventArgs^ e)
{
    uint64 schid = ts3Functions.getCurrentServerConnectionHandlerID();
    String^ recognRes = e->Result->Text->ToString();
    interpretCommand(recognRes);
}
Full source code on GitHub
Your event handler looks correct. There's no issue with having events and event handlers across DLLs: it's all managed code, and the DLL boundary doesn't matter here.
However, there is one possible issue:
void recogn_speech() {
    ...
    SpeechRecognitionEngine^ recognizer = gcnew SpeechRecognitionEngine();
    ...
}
You might want to save your SpeechRecognitionEngine object somewhere so it doesn't get garbage collected.
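A minimal sketch of that, assuming tsapi is the class that already hosts these methods:
ref class tsapi {
    // A member field keeps a GC root alive, so the recognizer
    // is not collected after recogn_speech() returns.
    SpeechRecognitionEngine^ recognizer;

    void recogn_speech() {
        recognizer = gcnew SpeechRecognitionEngine();
        // ... load the grammar, select the input, and hook up the event as above ...
    }
};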
Also, you may want to switch to the more C++-style of having class definitions in .h files, and implementations in .cpp files. As your code gets more complex and the classes need to refer to each other, having everything in one file will start to become an issue.

Getting list of system printer names very slow in Windows

I need to create a very custom widget for printing. It has to be multi-platform as well. For consistency, the widget should look similar on Windows as on Linux or Mac... After studying the Qt code, which uses the Windows print dialog, I gave up trying to subclass the Qt print dialog and made my own widget.
So, now I am on step 1: populate the list of printers on the system. I added the following code, to be called on each "show()" - just in case printers on the system change during program execution - and it works, but it is extremely slow:
I create a map of index/printer, and add the default printer as index -1, to tell the widget which one it is.
QMap<int, QString> PrintController::getListOfSystemPrinters()
{
    QMap<int, QString> printerNames;
#ifdef Q_OS_WIN32
#ifdef NOT_QT_4 // I tried to use "availablePrinterNames" thinking it will be faster but it actually seems slower
    QPrinter currentPrinter;
    QString printerName = currentPrinter.printerName();
    QStringList printerNameList = QPrinterInfo::availablePrinterNames();
    int index = 0;
    foreach(QString printerName1, printerNameList)
    {
        printerNames.insert(index, printerName1);
        if(printerName == printerName1)
            printerNames.insert(-1, printerName1);
        ++index;
    }
#else
    QPrinter currentPrinter;
    QString printerName = currentPrinter.printerName();
    QList<QPrinterInfo> printers = QPrinterInfo::availablePrinters();
    int index = 0;
    foreach(QPrinterInfo printerInfo, printers)
    {
        QString printerName1 = printerInfo.printerName();
        printerNames.insert(index, printerName1);
        if(printerName == printerName1)
            printerNames.insert(-1, printerName1);
        ++index;
    }
#endif
#elif defined Q_OS_UNIX
#endif
    return printerNames;
}
This is the slowest piece of code I have! I don't see another way to get all the printer names... but I must be doing something wrong!
The Qt 5 version is slightly slower than the Qt 4 version... either way, they are both slow.
The call to create a QPrinterInfo is slow.
So... are there alternatives?
How can I get the list of printer names in Windows?
Note this must work in Qt 4.7-5.x.
Get printer list asynchronously:
class Class : public QObject {
    Q_OBJECT
    Q_SIGNAL void hasPrinters(const QList<QPrinterInfo> &);
    Q_SIGNAL void hasPrinterNames(const QStringList &);
    /// This method is thread-safe
    void getPrinters() {
#if QT_VERSION >= QT_VERSION_CHECK(5,3,0)
        emit hasPrinterNames(QPrinterInfo::availablePrinterNames());
#else
        emit hasPrinters(QPrinterInfo::availablePrinters());
#endif
    }
    void test() {
        QtConcurrent::run(this, &Class::getPrinters);
    }
};
The above compiles on Qt 4.7 & up, using either C++98 or C++11.
Connect to the hasPrinterNames/hasPrinters signal to be notified when the printer list is available, then populate your dialog.
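For example, with the Qt 4-compatible string-based connect syntax (worker and populatePrinterList are placeholder names for your object and slot):
// A queued connection delivers the result on the receiver's (GUI) thread
connect(worker, SIGNAL(hasPrinterNames(QStringList)),
        this, SLOT(populatePrinterList(QStringList)),
        Qt::QueuedConnection);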
You might be lucky and availablePrinterNames/availablePrinters will turn out to be thread-safe. I haven't checked whether they are.
On Qt 5.3 and newer, only create the QPrinterInfo for a given printer once your user has selected it, and you might want to create it concurrently as well.
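A sketch of that deferred lookup (selectedName and printer are placeholder names; QPrinterInfo::printerInfo(const QString &) is available as of Qt 5.3):
// Build the (expensive) QPrinterInfo only for the printer the user picked
QPrinterInfo chosen = QPrinterInfo::printerInfo(selectedName);
if (!chosen.isNull())
    printer.setPrinterName(chosen.printerName());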

device-tree mismatch: .probe never called

I'm having trouble understanding how the device tree works, or specifically why this driver won't init. This is in the Rockchip vendor kernel for Android, version 3.10.
drivers/watchdog/rk29_wdt.c (reduced for readability)
static const struct of_device_id of_rk29_wdt_match[] = {
    { .compatible = "rockchip,watch dog" }
};
static struct platform_driver rk29_wdt_driver = {
    .probe = rk29_wdt_probe,
    [..]
    .driver = {
        .of_match_table = of_rk29_wdt_match,
        .name = "rk29-wdt",
    },
};
static int __init watchdog_init(void)
{
    printk("watchdog_init\n");
    return platform_driver_register(&rk29_wdt_driver);
}
and this is the SoC .dtsi:
arch/arm/boot/dts/rk3288.dtsi
watchdog: wdt@2004c000 {
    compatible = "rockchip,watch dog";
    reg = <0xff800000 0x100>;
    clocks = <&pclk_pd_alive>;
    clock-names = "pclk_wdt";
    interrupts = <GIC_SPI 79 IRQ_TYPE_LEVEL_HIGH>;
    rockchip,irq = <0>;
    rockchip,timeout = <2>;
    rockchip,atboot = <1>;
    rockchip,debug = <0>;
    status = "okay";
};
However, the .probe function of the driver is never called. It is compiled in, and the __init function is called. I suspect it has something to do with the device-tree entry not matching? Maybe the space is an issue?
Or is there anything else that runs before .probe and determines whether the driver should continue?
Also, I'm not sure how a flattened tree works, so maybe this is relevant:
arch/arm/mach-rockchip/rk3288
DT_MACHINE_START(RK3288_DT, "Rockchip RK3288 (Flattened Device Tree)")
    .smp        = smp_ops(rockchip_smp_ops),
    .map_io     = rk3288_dt_map_io,
    .init_time  = rk3288_dt_init_timer,
    .dt_compat  = rk3288_dt_compat,
    .init_late  = rk3288_init_late,
    .reserve    = rk3288_reserve,
    .restart    = rk3288_restart,
MACHINE_END
There are a number of possible ways this might happen, and most of them are well away from the driver code itself. Firstly, a .dtsi fragment alone doesn't tell the whole story - the device tree syntax is hierarchical, so the properties (in particular the status) might still be overridden by the board-level .dts which includes a basic SoC .dtsi file. Secondly, the compiled DTB isn't the last word either, since the bootloader may dynamically modify it before passing it to the kernel - this is typically done for memory nodes and SMP enable methods, but could potentially affect anything.
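To illustrate the first point, a hypothetical board-level fragment like this would silently win over the SoC .dtsi:
/* board .dts, after including rk3288.dtsi */
&watchdog {
    status = "disabled";    /* overrides the status = "okay" above */
};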
This kind of debugging is often best tackled in reverse, by examining the state of the booted system, then working backwards to figure out how things got that way - the specifics of this particular question rule some of this out already, but for the sake of completeness:
If the kernel knows about the driver, and it's loaded and properly initialised, it should show up somewhere in /sys/bus/*/drivers/ - otherwise, it may be in a module which needs loading, or it may have failed to initialise due to some unmet dependency on some other driver or resource.
If the kernel knows about the device, it should show up somewhere in /sys/bus/*/devices/, and if it's correctly bound to a driver and probed then they should both have a symlink to each other.
If the device is nowhere to be found, then on a DT-based system the next place to check would be /proc/device-tree/ (dependent on CONFIG_PROC_DEVICETREE on older kernels, and canonically found in /sys/firmware/devicetree/base/ on newer ones) - this will show the view of the DT as the kernel found it, and a bit of poking around there should hopefully make clear any missing nodes or out-of-place properties, such as a disabled node causing the kernel to skip creating a device altogether. Beware that the property files themselves are just the raw data - so you probably want to go snooping with hexdump rather than cat - and that all numeric cells are in big-endian byte order.
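For example, to inspect the watchdog node's status property (the path is hypothetical, following the node above):
hexdump -C /proc/device-tree/wdt@2004c000/status
# a healthy node holds the bytes of the NUL-terminated string "okay"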
I notice that your definition is missing the so-called sentinel at the end of the array: an empty, zero-initialized struct that terminates it. The kernel's match loop relies on it to know where the table ends.
Here is an example:
static const struct of_device_id clk_ids[] = {
    { .compatible = "sirf,atlas7-clkc" },
    {},
};
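Applied to the driver above, that would be:
static const struct of_device_id of_rk29_wdt_match[] = {
    { .compatible = "rockchip,watch dog" },
    { /* sentinel */ },
};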

Strange exception in layouts

I try to call this method:
avatarIconImageView.setContentHuggingPriority(UILayoutPriorityDefaultLow, forAxis: UILayoutConstraintAxis.Horizontal)
and get this error:
Undefined symbols for architecture armv7:
  "_UILayoutPriorityDefaultLow", referenced from:
      __TFC13TCProject_iOS36TCAvatarWithCounterUniversalCellView22configureBodyCellViewsfS0_FT_T_ in TCAvatarUniversalCellView.o
ld: symbol(s) not found for architecture armv7
clang: error: linker command failed with exit code 1 (use -v to see invocation)
What does it mean?
This looks like a bug in the iOS 8 SDK. You can work around it by passing in the raw value:
UILayoutPriorityRequired = 1000
UILayoutPriorityDefaultHigh = 750
UILayoutPriorityDefaultLow = 250
In your case:
avatarIconImageView.setContentHuggingPriority(250, forAxis: UILayoutConstraintAxis.Horizontal)
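If you'd rather not scatter magic numbers around, you can mirror the constant yourself (kLayoutPriorityDefaultLow is a name of my choosing):
// Same value UIKit's header assigns to UILayoutPriorityDefaultLow
let kLayoutPriorityDefaultLow: Float = 250
avatarIconImageView.setContentHuggingPriority(kLayoutPriorityDefaultLow,
    forAxis: UILayoutConstraintAxis.Horizontal)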
This is not a bug. It is a consequence of how Objective-C code (even Apple's UIKit headers) is imported into Swift, and it is worth understanding how that import works.
UILayoutPriority is a float. In Objective-C, a couple of values have been pre-defined for us, and they appear to be an enum. We might expect the same enum to be available to us in Swift.
The documentation suggests an enum:
Declaration
SWIFT
typealias UILayoutPriority = Float
OBJECTIVE-C
enum {
    UILayoutPriorityRequired = 1000,
    UILayoutPriorityDefaultHigh = 750,
    UILayoutPriorityDefaultLow = 250,
    UILayoutPriorityFittingSizeLevel = 50,
};
typedef float UILayoutPriority;
But in Xcode, if you ask to see the definition of one of these enum values (UILayoutPriorityRequired, for example), you will see that they are actually defined in the header file as constant floats.
typedef float UILayoutPriority;
static const UILayoutPriority UILayoutPriorityRequired NS_AVAILABLE_IOS(6_0) = 1000; // A required constraint. Do not exceed this.
static const UILayoutPriority UILayoutPriorityDefaultHigh NS_AVAILABLE_IOS(6_0) = 750; // This is the priority level with which a button resists compressing its content.
static const UILayoutPriority UILayoutPriorityDefaultLow NS_AVAILABLE_IOS(6_0) = 250; // This is the priority level at which a button hugs its contents horizontally.
So although we may like to think of the pre-defined layout priorities as enum values (as the documentation suggests), they are not really defined as enums; they are defined as constant floats.
A hint for anyone who knows the C programming language: a C enum may only contain int values. The following is legal and will compile:
enum myEnum {
    JGCEnum_one = 1,
    JGCEnum_two,
    JGCEnum_three
} JGCEnum;
But we can't really define floats as values for C enums. The following will not compile:
enum myEnum {
    JGCEnum_one = 1.5, // compilation error
    JGCEnum_two,
    JGCEnum_three
} JGCEnum;
Objective-C enums are the same as C enums (Swift enums are different). It is important to know whether we are dealing with actual integers or floats, because integer values can be defined using the NS_ENUM macro, which is then imported into Swift as a Swift enum.
The iBook says
Swift imports as a Swift enumeration any C-style enumeration marked with the NS_ENUM macro. This means that the prefixes to enumeration value names are truncated when they are imported into Swift, whether they’re defined in system frameworks or in custom code.
Excerpt From: Apple Inc. “Using Swift with Cocoa and Objective-C.” iBooks. https://itun.es/us/1u3-0.l
That means that if UILayoutPriority had been defined as an integer using the NS_ENUM macro, it would have been imported into Swift as a Swift enum. This is the case for UILayoutConstraintAxis.
Declaration
SWIFT
enum UILayoutConstraintAxis : Int {
    case Horizontal
    case Vertical
}
OBJECTIVE-C
enum {
    UILayoutConstraintAxisHorizontal = 0,
    UILayoutConstraintAxisVertical = 1
};
typedef NSInteger UILayoutConstraintAxis;
Looking at the Objective-C header file confirms what the documentation says.
//
// UIView Constraint-based Layout Support
//
typedef NS_ENUM(NSInteger, UILayoutConstraintAxis) {
    UILayoutConstraintAxisHorizontal = 0,
    UILayoutConstraintAxisVertical = 1
};
So there are at least two ways to know if a pre-defined value you are used to using in Objective-C is available in Swift:
check the documentation
check the header file in Objective-C (found by right-clicking the value and then selecting "Jump to Definition")
There is one more way to see whether a typedef'd value you are used to using is a constant or an enum: in code, test whether the address of the constant exists. Constants have a memory address, while enum values do not. See the code below.
// this line will compile and run just fine.
// UILayoutPriorityDefaultHigh is a constant and has a memory address
// the value will be true if the device is running iOS 6.0 or later
// and false otherwise
BOOL predefinedValueIsAvailable = (NULL != &UILayoutPriorityDefaultHigh);
// this line will not compile
// UILayoutConstraintAxisHorizontal is an enum (NOT a constant)
// and does not have a memory address
predefinedValueIsAvailable = (NULL != &UILayoutConstraintAxisHorizontal);
References
Xcode Documentation (iOS 8.2)
Apple Inc. “Using Swift with Cocoa and Objective-C.” iBooks. https://itun.es/us/1u3-0.l

libcrypto deprecated on Mac OS X 10.7 (Lion)

I just went to use libcrypto on Lion and it looks like the entire api has been deprecated. The man pages haven't been updated since 10.6.6.
Does anybody know what the replacement for libcrypto is on Lion?
libcrypto is part of OpenSSL, which hasn't changed much. It's not going away, but Apple recommends that developers use their CDSA (Common Data Security Architecture) library rather than using OpenSSL directly.
In case you know what you're doing and you just want to get rid of these warnings, one way is to add
#pragma GCC diagnostic ignored "-Wdeprecated-declarations"
to the relevant headers – in my case /usr/include/openssl/crypto.h and /usr/include/openssl/md5.h.
Ok, answering my own question here.
10.7 introduced Transforms into Security.framework, which is tied closely to SecKey. Transforms let you do lots of things, including encoding (e.g. Base64), digests, signing/verifying, and encryption.
Here's an example of how to sign some data. All of the transforms follow the same basic pattern; if you look in the Headers for Security.framework you'll see a header for each type of transform. These are from SecTransformReadTransform.h and SecSignVerifyTransform.h. I'm omitting any error checking or cleanup code here for simplicity.
NSData *dataToBeSigned = ...; // Get this from somewhere. We set SHA1 attributes down below, so this should be a SHA1 digest
SecKeyRef key = ...;          // Get this from somewhere; keychain or SecItemImport
SecGroupTransformRef group = SecTransformCreateGroupTransform();
CFReadStreamRef readStream = NULL;
SecTransformRef readTransform = NULL;
SecTransformRef signingTransform = NULL;
// Set up our input stream as well as an input transform
readStream = CFReadStreamCreateWithBytesNoCopy(kCFAllocatorDefault,
                                               [dataToBeSigned bytes],
                                               [dataToBeSigned length],
                                               kCFAllocatorNull); // Pass the null allocator so it doesn't free NSData's bytes
readTransform = SecTransformCreateReadTransformWithReadStream(readStream);
// Set up a signing transform
signingTransform = SecSignTransformCreate(key, NULL);
SecTransformSetAttribute(signingTransform, kSecInputIsDigest, kCFBooleanTrue, NULL);
SecTransformSetAttribute(signingTransform, kSecDigestTypeAttribute, kSecDigestSHA1, NULL);
// Connect the read and signing transforms; have the reader pass its data to the signer
SecTransformConnectTransforms(readTransform, kSecTransformOutputAttributeName,
                              signingTransform, kSecTransformInputAttributeName,
                              group, NULL);
// Execute the sequence of transforms (the group);
// the last one in the connected sequence produces the return value
NSData *signature = (NSData *)SecTransformExecute(group, NULL);
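For completeness, the verify side follows the same pattern. A minimal sketch (again with no error handling), feeding the digest in directly as an attribute instead of through a read stream:
// Bind a verifier to the same key and the signature produced above
SecTransformRef verifier = SecVerifyTransformCreate(key, (CFDataRef)signature, NULL);
SecTransformSetAttribute(verifier, kSecTransformInputAttributeName,
                         (CFDataRef)dataToBeSigned, NULL);
SecTransformSetAttribute(verifier, kSecInputIsDigest, kCFBooleanTrue, NULL);
SecTransformSetAttribute(verifier, kSecDigestTypeAttribute, kSecDigestSHA1, NULL);
// SecTransformExecute returns kCFBooleanTrue when the signature checks out
CFBooleanRef valid = (CFBooleanRef)SecTransformExecute(verifier, NULL);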
