Understanding the resource file of a program (Win32) - winapi

Say I have a simple Win32 program with a menu, and all the syntax is correct. Here is the resource file:
#define IDR_MYMENU 101
#define IDI_MYICON 102
#define ID_FILE_EXIT 40001
#define ID_STUFF_GO 40002
#define ID_STUFF_GOSOMEWHEREELSE 40003
I'm a little confused about the declared constants. Could I use any old number to represent each option? Such as:
#define IDR_MYMENU 23
#define IDI_MYICON 412
#define ID_FILE_EXIT 40071
#define ID_STUFF_GO 40892
#define ID_STUFF_GOSOMEWHEREELSE 64982
Or is there something behind those specific numbers? Thanks for any help!

You can use any valid 16-bit unsigned integer value (i.e., <= 65535). The values are simply used to uniquely identify a resource.

You can use almost any number. However, I have found that on WinCE some menu item identifiers are reserved for special system actions, so it is best to avoid values below 100.
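To make that concrete, here is a minimal sketch of the receiving side (a hypothetical window procedure, assuming the .rc menu names its items with the same constants): the ID is just a number that Windows carries in WM_COMMAND and your code compares for equality, so any unique value works.
#include <windows.h>
#include "resource.h"   /* the #defines shown above; only the numbers matter */

/* Menu clicks arrive as WM_COMMAND with the item's ID in LOWORD(wParam).
   Windows never interprets the value; it only carries it from the menu
   resource to this switch, so 40001 or 40071 behave identically. */
LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch (msg) {
    case WM_COMMAND:
        switch (LOWORD(wParam)) {
        case ID_FILE_EXIT:
            PostMessage(hwnd, WM_CLOSE, 0, 0);
            return 0;
        }
        break;
    case WM_DESTROY:
        PostQuitMessage(0);
        return 0;
    }
    return DefWindowProc(hwnd, msg, wParam, lParam);
}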


Is it possible to set breakpoint at OpenGL functions?

Sometimes I need to find out which part of the code calls a certain OpenGL function, so I try this:
b glEnableVertexAttribArray
Breakpoint 3 at 0x7ffff0326c80 (2 locations)
But it doesn't work. Is there any way to make it work?
I'm using gdb on Ubuntu 18.04, and my GPU is a GeForce GTX 1050 Ti.
If you look at your GL/glew.h header, you will see that it contains lines similar to the following:
#define GLEW_GET_FUN(x) x
#define glCopyTexSubImage3D GLEW_GET_FUN(__glewCopyTexSubImage3D)
#define glDrawRangeElements GLEW_GET_FUN(__glewDrawRangeElements)
#define glTexImage3D GLEW_GET_FUN(__glewTexImage3D)
#define glTexSubImage3D GLEW_GET_FUN(__glewTexSubImage3D)
When you call glewInit, these __glew* variables are populated with pointers extracted from your OpenGL implementation.
In your case, you should set the breakpoint on the contents of such a pointer, i.e. *__glewEnableVertexAttribArray.
For GLAD you will have to put a breakpoint on *glad_glEnableVertexAttribArray. Note the * in both cases: that tells your debugger to dereference the pointer and put the breakpoint at the correct location.
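To see why the dereference is needed, consider a minimal sketch of the GLEW loading pattern (simplified names, not the actual GLEW source): the familiar gl* name is a macro for a plain function pointer variable, so the symbol gdb resolves is data, not code.
/* Simplified illustration of the GLEW pattern. */
typedef void (*PFNGLENABLEVERTEXATTRIBARRAYPROC)(unsigned int index);

/* The symbol the debugger sees is this pointer variable... */
PFNGLENABLEVERTEXATTRIBARRAYPROC __glewEnableVertexAttribArray = 0;

/* ...because the public name is only a macro expanding to it. */
#define glEnableVertexAttribArray __glewEnableVertexAttribArray

/* glewInit() fills the pointer from the driver at runtime, so
   "b glEnableVertexAttribArray" breaks at the variable's address,
   while "b *__glewEnableVertexAttribArray" breaks at the code the
   driver installed, which is what you actually want. */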

Warnings with a new Xcode project using FMDB, SqliteCipher and CocoaPods

I have installed CocoaPods and loaded the workspace as instructed.
I am getting warnings that I do not understand. Here is an example:
Pods-CipherDatabaseSync-SQLCipher
sqlite.c
/Users/admin/Code/CipherDatabaseSync/Pods/SQLCipher/sqlite3.c:24035:13: Ambiguous expansion of macro 'MAX'
I have looked around for a couple of hours and I am stumped. Can someone please point me toward something that will provide some insight?
Thanks.
In the sqlite3.c file, MIN and MAX are defined in two different places. The first is at line 214:
/* Macros for min/max. */
#ifndef MIN
#define MIN(a,b) (((a)<(b))?(a):(b))
#endif /* MIN */
#ifndef MAX
#define MAX(a,b) (((a)>(b))?(a):(b))
#endif /* MAX */
The second is at line 8519:
/*
** Macros to compute minimum and maximum of two numbers.
*/
#define MIN(A,B) ((A)<(B)?(A):(B))
#define MAX(A,B) ((A)>(B)?(A):(B))
I commented out the second definition, cleaned, and rebuilt the project, and all of the warnings went away.
Remove the MAX and MIN macro definitions from the sqlite3.c file, since they are already defined in system headers.
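If you prefer not to delete code from sqlite3.c, a less invasive sketch is to remove any earlier definition right before the second one, so the compiler never sees two competing expansions:
/* Before the second definition block (around line 8519): */
#undef MIN
#undef MAX
#define MIN(A,B) ((A)<(B)?(A):(B))
#define MAX(A,B) ((A)>(B)?(A):(B))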
Another solution: open the Xcode project's Build Settings and add "-Wno-ambiguous-macro" to "Other C Flags".

Is it possible to create a case insensitive custom locale for Windows 7 and/or 8? [duplicate]

This question already has answers here:
PostgreSQL: Case insensitive string comparison
(6 answers)
Closed last year.
Motivation: I would like to work with strings in PostgreSQL in a case-insensitive manner. I am aware of the CITEXT data type, and I am also aware of functional indexes where I can use the LOWER function.
Still, the most efficient solution seems to be a case-insensitive collation, something trivial in SQL Server. However, it seems that PostgreSQL cannot define its own custom collations; instead, it derives them from the locales found in the OS, i.e. Windows in my case.
So the question is this: is it possible to create a custom Windows locale that would treat characters in a case-insensitive manner?
The farthest I got was to install a locale builder and export the en-US locale to its XML representation (called LDML) to see what is inside. Searching for the sort keyword returns these two lines:
<msLocale:sortName type="en-US" />
<msLocale:sortGuid type="{00000001-57EE-1E5C-00B4-D0000BB1E11E}" />
The guid can be found in the Windows Registry:
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Nls\Sorting\Ids]
#="{00000001-57EE-1E5C-00B4-D0000BB1E11E}"
"mn-Mong"="{00000001-57EE-1E5C-00B4-D0000BB1E11E}"
(There are more string values under the key)
And this does not lead anywhere. I am no closer to a case insensitive custom locale than before.
It is possible that LDML can be used to describe a case insensitive locale, but I have no idea how to construct one.
Edit
Food for thought:
SQL Server:
SELECT 'Latin1_General_CS_AS' AS 'Collation',
COLLATIONPROPERTY('Latin1_General_CS_AS', 'CodePage') AS 'CodePage',
COLLATIONPROPERTY('Latin1_General_CS_AS', 'LCID') AS 'LCID',
CONVERT(VARBINARY(8), COLLATIONPROPERTY('Latin1_General_CS_AS', 'ComparisonStyle')) AS 'ComparisonStyle',
COLLATIONPROPERTY('Latin1_General_CS_AS', 'Version') AS 'Version'
UNION ALL
SELECT 'Latin1_General_CI_AS' AS 'Collation',
COLLATIONPROPERTY('Latin1_General_CI_AS', 'CodePage') AS 'CodePage',
COLLATIONPROPERTY('Latin1_General_CI_AS', 'LCID') AS 'LCID',
CONVERT(VARBINARY(8), COLLATIONPROPERTY('Latin1_General_CI_AS', 'ComparisonStyle')) AS 'ComparisonStyle',
COLLATIONPROPERTY('Latin1_General_CI_AS', 'Version') AS 'Version'
yields
Collation CodePage LCID ComparisonStyle Version
Latin1_General_CS_AS 1252 1033 0x00030000 0
Latin1_General_CI_AS 1252 1033 0x00030001 0
Win32 API:
CompareStringEx Win32 function:
int CompareStringEx(
_In_opt_ LPCWSTR lpLocaleName,
_In_ DWORD dwCmpFlags,
_In_ LPCWSTR lpString1,
_In_ int cchCount1,
_In_ LPCWSTR lpString2,
_In_ int cchCount2,
_In_opt_ LPNLSVERSIONINFO lpVersionInformation,
_In_opt_ LPVOID lpReserved,
_In_opt_ LPARAM lParam
);
The flags for the dwCmpFlags parameter can be found in C:\Program Files (x86)\Microsoft SDKs\Windows\v7.1A\Include\WinNls.h:
//
// String Flags.
//
#define NORM_IGNORECASE 0x00000001 // ignore case
#define NORM_IGNORENONSPACE 0x00000002 // ignore nonspacing chars
#define NORM_IGNORESYMBOLS 0x00000004 // ignore symbols
#define LINGUISTIC_IGNORECASE 0x00000010 // linguistically appropriate 'ignore case'
#define LINGUISTIC_IGNOREDIACRITIC 0x00000020 // linguistically appropriate 'ignore nonspace'
#define NORM_IGNOREKANATYPE 0x00010000 // ignore kanatype
#define NORM_IGNOREWIDTH 0x00020000 // ignore width
#define NORM_LINGUISTIC_CASING 0x08000000 // use linguistic rules for casing
From which I conclude (note that 0x00030000 = NORM_IGNOREKANATYPE | NORM_IGNOREWIDTH, and the CI style adds the NORM_IGNORECASE bit 0x1):
using Latin1_General_CS_AS results in CompareStringEx being invoked with the flags NORM_IGNOREKANATYPE|NORM_IGNOREWIDTH
using Latin1_General_CI_AS results in CompareStringEx being invoked with the flags NORM_IGNOREKANATYPE|NORM_IGNOREWIDTH|NORM_IGNORECASE
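For what it's worth, that flag combination can be exercised directly. A minimal sketch (en-US locale; Vista or later) that compares two strings the way Latin1_General_CI_AS appears to:
#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* Ignore kana type, width, and case, per the CI_AS conclusion above. */
    int r = CompareStringEx(L"en-US",
                            NORM_IGNOREKANATYPE | NORM_IGNOREWIDTH | NORM_IGNORECASE,
                            L"Hello", -1,
                            L"HELLO", -1,
                            NULL, NULL, 0);

    if (r == CSTR_EQUAL)
        printf("equal under the case-insensitive flags\n");
    else if (r == 0)
        printf("CompareStringEx failed: %lu\n", GetLastError());
    return 0;
}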
But what is next? How can I create my own Windows locale similar to Latin1_General_CI_AS, but usable outside the SQL Server?
In Windows, there are no case-insensitive sort rules. Every rule can sort while ignoring case, but that is controlled by a flag, not by the sort itself. There is no way to add a custom sort method.
Probably nobody has done it. I think it may be possible, but this scenario is untested and nobody knows whether there are side effects.
This issue is still on the PostgreSQL ToDo list, and Craig Ringer sent a proposal for how to implement it: http://www.postgresql.org/message-id/52C0C31C.4060804@2ndquadrant.com

__attribute__() macro and its effect on Visual Studio 2010 based projects

I have some legacy code with conditional preprocessing (#ifdef and #else) in which I found the __attribute__ macro. A quick search shows that it is specific to GNU compilers. I have to build this legacy code in Visual Studio 2010 with the MSVC10 compiler, and it complains everywhere it sees __attribute__((unused)), even though the usage is protected by #ifndef and #ifdef. An example is:
#ifdef __tone_z__
static const char *mcr_inf
#else
static char *mcr_inf
#endif
#ifndef _WINDOWS
__attribute__(( unused )) /* this is causing all the problems!! */
#endif
= "#(#) bla bla copyright bla";
#ifdef __tone_z__
static const char *mcr_info_2a353_id[2]
#else
static char *mcr_info_2a353_id[2]
#endif
__attribute__(( unused )) = { "my long copyright info!" } ;
I am really struggling to understand whether this is very poorly planned code or just my misunderstanding. How do I avoid the usual compiler and linker errors with this __attribute__() directive? I have started to get C2061 errors (missing/unknown identifiers). I have all the necessary header files and nothing is missing, except maybe the GNU compiler (which I don't want!).
Also, the end-of-line characters seem to get messed up when I take the code to Windows... argh... I mean the Unix EOL versus the Windows EOL. How can I use this code without modifying the body? I can define _WINDOWS in my property sheet, but I cannot automatically adjust the EOL recognition.
Any help is appreciated!
Thanks.
My best guess is that _WINDOWS is, in fact, not defined by your compiler, and so the use of __attribute__ is not protected.
In my opinion, the best way to protect against attributes is to define a macro like this:
#define __attribute__(A) /* do nothing */
That should simply remove all __attribute__ instances from the code.
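For example, a minimal shim (hypothetical header name) that keeps GCC behavior intact while silencing MSVC:
/* compat.h (hypothetical): include this before any legacy header. */
#if !defined(__GNUC__)
/* Non-GNU compilers such as MSVC: erase the GNU extension entirely. */
#define __attribute__(A)
#endif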
In fact, most code that has been written to be portable has this:
#ifdef __GNUC__
#define ATTR_UNUSED __attribute__((unused))
#else
#define ATTR_UNUSED
#endif
static TONE_Z_CONST char *integ_func ATTR_UNUSED = "my copyright info";
(Other __tone_z__ conditional removed for clarity only.)

Move PmodOLED from JB to JA Header - 32MX4 (MPLAB X IDE)

I am new to programming microcontrollers, and my first task is to accept WiFi connections using the PmodWiFi add-on. I read that the default pin/port for the WiFi chip is the JB header on the 32MX4. The problem is that I currently have the PmodOLED add-on installed in the JB header, so I want to move it to the JA header. In PmodOLED.h (which I have imported into my library), I changed the following lines so they correspond to the ports on the JA header instead of the JB header:
#define prtSelect IOPORT_G
#define prtDataCmd IOPORT_B
#define prtReset IOPORT_D
#define prtVbatCtrl IOPORT_D
#define prtVddCtrl IOPORT_B
#define bitSelect BIT_9
#define bitDataCmd BIT_15
#define bitReset BIT_5
#define bitVbatCtrl BIT_4
#define bitVddCtrl BIT_14
However, when I run the code with the PmodOLED attachment inserted into the JA header, it does not respond and the screen stays blank. I would greatly appreciate it if you could tell me how to modify my code so that the PmodOLED add-on can work from the JA header.
Thanks in advance for your help,
Guvvy
