I am using the gSOAP library. Generating code automatically with the wsdl2h and soapcpp2 tools works fine, but compiling that code produces a lot of warnings like:
warning: unused parameter 'soap'
on prototypes such as:
SOAP_FMAC3 void SOAP_FMAC4 soap_default_std__vectorTemplateOfPointerTodsca__FrameSequence(struct soap *soap, std::vector<dsca__FrameSequence * >*p)
and I don't know how to fix it.
Thank you very much.
Don't worry about the warning. gSOAP passes this parameter to you because it stores its runtime context in it. Sometimes you have to use it, sometimes you don't. You have to use it if you need to allocate memory for the response; for that purpose gSOAP provides the functions soap_malloc(struct soap*, size_t num_bytes) and soap_new_YourClassName(struct soap*, int num_instances).
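For illustration, here is a minimal sketch of a generated service operation that genuinely needs the parameter because it allocates its response from the soap context. The operation name ns__getGreeting, its arguments, and the soapH.h header name are assumptions, not part of your generated code:

#include "soapH.h"   // generated header (assumed name)
#include <cstring>

// Hypothetical operation: ns__getGreeting and its parameters are made-up
// names; the allocation pattern is what matters.
int ns__getGreeting(struct soap *soap, char *name, char **result)
{
    // Allocate the response from the soap context so gSOAP frees it
    // automatically once the response has been sent.
    size_t len = strlen("Hello, ") + strlen(name) + 1;
    *result = (char *)soap_malloc(soap, len);
    strcpy(*result, "Hello, ");
    strcat(*result, name);
    return SOAP_OK;   // here the otherwise "unused" soap parameter is needed
}

In operations that build no response data, the parameter simply goes unused, which is what triggers the warning.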
Is it safe to call XGetErrorText from an error handler set by XSetErrorHandler?
E.g.
#include <stdio.h>
#include <X11/Xlib.h>

#define BUFLEN 256

int errorHandler(Display *dpy, XErrorEvent *err)
{
    char buf[BUFLEN];
    XGetErrorText(dpy, err->error_code, buf, BUFLEN);
    printf("%s\n", buf);
    return 0;
}

XSetErrorHandler(errorHandler);
I'm asking because the man page says you should not call any functions (directly or indirectly) on the display that will generate protocol requests, but it does not say whether XGetErrorText does.
XGetErrorText doesn't generate any server traffic. It's not supposed to: the server doesn't know your locale, for example, and cannot supply localised messages. Xlib can, and indeed does, with a couple of local Xrm database lookups.
The source code of XGetErrorText can be viewed, e.g., here. We can see that XGetErrorText calls XGetErrorDatabaseText, and the latter function does not even use its dpy parameter.
Each X extension provides its own error-event-to-error-string translation function. This function does accept a dpy parameter, but, just like XGetErrorDatabaseText, it is not supposed to use it to generate any server traffic. By default this error-handling function is generated by the XEXT_GENERATE_ERROR_STRING macro here, which just wraps another call to XGetErrorDatabaseText.
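To make this concrete, a minimal sketch of a handler that calls XGetErrorDatabaseText directly; "XProtoError" is the database name Xlib itself uses for core protocol error codes, and the buffer sizes are arbitrary:

#include <stdio.h>
#include <X11/Xlib.h>

int verboseHandler(Display *dpy, XErrorEvent *err)
{
    char code[16], msg[256];
    snprintf(code, sizeof code, "%d", err->error_code);
    /* A purely local Xrm database lookup -- no protocol request is sent. */
    XGetErrorDatabaseText(dpy, "XProtoError", code, "unknown error",
                          msg, sizeof msg);
    printf("X error %d: %s\n", err->error_code, msg);
    return 0;
}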
I have the following map:

struct details;
map<std::string, details> myMap;
myMap.erase("key"); // Why can't I do this in C++11?

This is so simple in Java:

Map<String, Details> map = new HashMap<>();
map.remove("key");
How can I delete an entry from a std::map using a key?
Thanks
According to the online docs for std::map::erase, the third overload can be called directly with the key. Checking with my clang compiler, using either C++98 or C++14 as the standard and with both the map and string headers included, it compiles fine. I did get errors when the string header was not included, so perhaps that's why it's not working for you.
In any case, as an alternative, you can use the map's find method to get an iterator for the key and pass that to the erase method, like so (note this is only safe if the key is present, since erasing the end iterator is undefined):
myMap.erase(myMap.find("key")); // undefined behaviour if "key" is absent
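Putting both approaches together, a self-contained sketch (the details struct contents and the keys are made up for illustration):

#include <iostream>
#include <map>
#include <string>

struct details { int id; };

int main()
{
    std::map<std::string, details> myMap{ {"key", {1}}, {"other", {2}} };

    // Erase directly by key; returns how many elements were removed (0 or 1).
    std::size_t removed = myMap.erase("key");

    // Iterator form: find first, and only erase if the key was present.
    auto it = myMap.find("other");
    if (it != myMap.end())
        myMap.erase(it);

    std::cout << "erased " << removed << " by key, size now "
              << myMap.size() << '\n';
}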
I am binding the JMC Objective-C static library to a Xamarin-compatible one. I am currently trying to build my binding project (which contains ApiDefinition.cs, StructsAndEnums.cs and libJmcSDK.a), but when I build it I get the following error:
BTOUCH: error BI1017: btouch: Do not know how to make a signature for System.Int32* in method 'get_IssueTransport'.
When I search for the get_IssueTransport method there are no results, and I cannot find the System.Int32* signature anywhere in the solution either.
Any help in resolving this issue is much appreciated. Thanks.
Remove the unsafe keyword and the * from the int. That is, where the binding declares something like:

unsafe int*

change the int* (or any other pointer type) to IntPtr.
I'm trying to get a simple OpenGL program going, but I can't get it to display anything. I'm using Swift on a Mac, neither of which I am familiar with, although I have a good amount of experience with OpenGL on Windows. The program runs fine (no GL errors or anything), but nothing displays, until I add this at the end of my OpenGL initialization:
var vao: GLuint = 0; checkGlError();
glGenVertexArrays(1, &vao); checkGlError();
glBindVertexArray(vao); checkGlError();
Then it gives GL_INVALID_OPERATION as soon as I call glGenVertexArrays(), although the docs don't list that as a possible error.
I worried that I might not have a GL3 context (really I'd prefer just a GL1/2 context, but the Swift headers seem to be missing things like glBegin() and glColor3f(), so I decided to settle for GL3), so I tried to request one manually:
required init?(coder: NSCoder) {
    super.init(coder: coder);
    let attribs = [NSOpenGLPFAOpenGLProfile, NSOpenGLProfileVersion3_2Core, 0];
    pixelFormat = NSOpenGLPixelFormat(attributes: UnsafePointer<NSOpenGLPixelFormatAttribute>(attribs));
    openGLContext = NSOpenGLContext(format: pixelFormat, shareContext: nil);
    openGLContext.view = self;
    openGLContext.makeCurrentContext();
};
However, this didn't seem to affect things at all.
First, glBegin (...) is not valid in a core profile context. You are going to need a legacy (non-core) context for that.
Now, since the legacy context is limited to OpenGL 2.1, it only offers GL_APPLE_vertex_array_object. In a 2.1 context, you can call glGenVertexArraysAPPLE (...) and glBindVertexArrayAPPLE (...) and they will do the same thing.
The problem you are encountering is that on OS X you link to the same framework regardless of which context version you get. That means you can call functions at run time that are not supported by your context. Any time you try to use a GL 3.2+ function from a legacy context, you get GL_INVALID_OPERATION.
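For example, in a legacy 2.1 context the equivalent of the failing snippet is the APPLE-suffixed pair. A C-style sketch (the helper name is made up, and it assumes these entry points come in via the legacy <OpenGL/gl.h>/glext.h headers on OS X):

#include <OpenGL/gl.h>  /* legacy header; declares the APPLE VAO extension (assumed) */

/* Hypothetical helper: same idea as the core-profile snippet in the
 * question, but using GL_APPLE_vertex_array_object, which a 2.1
 * context does support. */
static GLuint createLegacyVAO(void)
{
    GLuint vao = 0;
    glGenVertexArraysAPPLE(1, &vao);
    glBindVertexArrayAPPLE(vao);
    return vao;
}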
I was able to solve this by creating a bridging header with the appropriate #include in it, and then using a GL1 context.
I am working on an old MFC application which uses Xerces 2.7 for XML parsing.
In debug mode, while trying to debug a stack corruption, I have been able to narrow down the issue to the following code:
BOOL CXMLHandler::LoadFile(CString fileName)
{
    XMLPlatformUtils::Initialize();
    SAXParser* parser = new SAXParser();
    delete parser;
    XMLPlatformUtils::Terminate();
    return TRUE;
}
While deleting the parser, I get the error:
"DAMAGE: after Normal block (#1695) at 0x0795EEA8."
The SAXParser class is from Xerces.
I cannot figure out what is wrong with this code. Can anyone help me find out what is wrong here? Could a memory leak or corruption elsewhere in the code be causing this?
If that #1695 is the same each time you run, add the following at the start of the program:
_CrtSetBreakAlloc(1695);
Allocation number 1695 is the block that was damaged. The debugger will halt when that allocation is made, so you can see what the block is used for.
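A minimal sketch of the setup (MSVC-specific; in an MFC app this would typically go at the top of InitInstance, a plain main is shown here for brevity):

#define _CRTDBG_MAP_ALLOC
#include <crtdbg.h>

int main()
{
    // Break into the debugger when allocation number 1695 is made --
    // the block named in the "DAMAGE: after Normal block (#1695)" report.
    // Only useful if the allocation sequence is deterministic run to run.
    _CrtSetBreakAlloc(1695);

    // ... run the rest of the program under the debugger; when the break
    // fires, the call stack shows what allocates the damaged block.
    return 0;
}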