What are the entry points of an OCX? Are they the same as those of a COM DLL?
Yes, the entry points are the same.
There may be other constraints on an OCX that differ from a COM DLL, though. For example, it's common (but not required) for a COM DLL to include a type library as a built-in resource. Every OCX I've ever seen has had its type library built in this way, so maybe that's a requirement for an OCX?
In general the coclasses defined in an OCX (or at least some of them) are expected to be ActiveX controls, which means they should implement certain specific COM interfaces. A plain old COM DLL has no such expectations placed upon it.
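For reference, the shared entry points in question are the four well-known COM exports. The sketch below shows their standard signatures with stubbed bodies; a real server would hand out class factories and maintain lock counts.

    #include <objbase.h>

    // The four standard exports shared by COM DLLs and OCXs.
    // Sketch only: the bodies are stubs, not a working server.
    STDAPI DllGetClassObject(REFCLSID rclsid, REFIID riid, LPVOID* ppv)
    {
        // A real server returns a class factory for the requested coclass.
        *ppv = NULL;
        return CLASS_E_CLASSNOTAVAILABLE;
    }

    STDAPI DllCanUnloadNow(void)
    {
        return S_OK; // a real server checks its outstanding object/lock counts
    }

    STDAPI DllRegisterServer(void)   { return S_OK; } // write registry entries
    STDAPI DllUnregisterServer(void) { return S_OK; } // remove registry entries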
I am learning how to implement an out-of-process COM server and came across this Code Project article, Building a LOCAL COM Server and Client: A Step by Step Example.
I can build it, and it runs fine, but where is the proxy/stub DLL? All I can see is the IDL file from which the proxy/stub code is generated during the build. But how is the DLL built, and where is it?
The short answer is that all the interfaces are marked "oleautomation", so oleaut32.dll performs the marshaling, since COM knows how to marshal all the types used in the interfaces.
If "oleautomation" were missing, or a type was specified that COM doesn't know how to marshal by default (see the list here), your nondefault marshaler and its stub would be required.
You need a proxy/stub DLL to help COM marshal your interfaces. When you create an ATL C++ project with Visual Studio, a secondary project with a PS suffix is typically created automatically; this is your proxy/stub DLL. However, you might do fine without it at all (I personally never had to build and use one, even though I did have to deal with things like custom marshaling). If you provide a type library with your project, it is registered, and certain conditions are met, COM will supply an automatic proxy/stub pair for you.
Bonus reading:
COM Proxy/Stub DLL and why do you need it
When Proxy/Stub Pair is Necessary
Consider a C++ API defined as a series of __declspec(dllexport/dllimport) classes.
Further, assume that the caller is never permitted to call the ordinary operator new(size_t) on these classes: either a static factory method does the new-ing, or there is a class-specific operator new. Ditto on the delete side as needed (frequently just a virtual destructor).
Now, if you compile and link a DLL and an import library with the tools from VS2010, can you hand that implib and DLL to a user of VS2005 and expect it to work?
MFC is not involved here at all.
I'd be particularly grateful to any reference to any relatively formal Microsoft statement on the subject.
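(For concreteness, a hypothetical class following the pattern described above might look like the sketch below; Widget and WIDGET_API are made-up names, not from the question.)

    // Hypothetical illustration: all allocation and deallocation stay inside the DLL.
    #ifdef WIDGET_EXPORTS
    #  define WIDGET_API __declspec(dllexport)
    #else
    #  define WIDGET_API __declspec(dllimport)
    #endif

    class WIDGET_API Widget
    {
    public:
        static Widget* Create();       // factory does the new-ing inside the DLL
        virtual ~Widget();             // virtual dtor: destruction runs in the DLL too
        virtual int DoWork(int input);
    protected:
        Widget();                      // callers can't construct (or new) directly
    };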
So long as the name mangling on the C++ API is identical (it is), and the API does not use STL-specific parameter types, such as basic_string or std::map, whose implementations may have changed between releases of the compiler (and they have), then it should just work.
Of course, you'll want to make sure you either compiled your DLL in /MT mode (statically linked runtimes) or include the VS2010 runtime redistributables with your supplied libraries and link targets.
EDIT: Expanding on "don't pass in types that have version-specific implementations": a partial list is most easily found by looking at the exports of MSVCP100.DLL.
    cd %VS100COMNTOOLS%\..\VC\redist\x86\Microsoft.VC100.CRT
    DUMPBIN /exports MSVCP100.DLL
The next issue will be header-only implementations of things like map or set which have changed under the hood between versions of the compiler.
This is why it's highly recommended that only scalar types be passed across boundaries between memory arenas; that way, simple tests will pass and remain reliable.
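As a hypothetical illustration of that advice, compare these two exports: the first bakes a std::string into the boundary, so it only works when the caller and the DLL share exactly the same CRT/STL, while the second sticks to stable C types.

    #include <cstddef>
    #include <string>

    // Risky: std::string's layout can change between compiler versions,
    // and memory may be allocated in one module's heap and freed in another's.
    __declspec(dllexport) void SetNameRisky(const std::string& name);

    // Safer: scalar and pointer types are stable across compiler versions.
    __declspec(dllexport) void SetNameSafe(const char* name, size_t length);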
You have not mentioned whether you used MFC to create the DLLs. If you have, regular MFC DLLs should work, but I don't think extension DLLs will, as the latter link to the MFC DLLs. I am including links for your reference.
http://www.codeguru.com/cpp/cpp/cpp_mfc/tutorials/article.php/c4017
http://www.experts-exchange.com/Programming/System/Windows__Programming/MFC/Q_20385543.html
http://msdn.microsoft.com/en-us/library/26h8x9sy%28v=VS.100%29.aspx
EDIT
If it's a normal DLL, there should not be any problem. It also depends on the linkage type.
When should one implicitly or explicitly link to a DLL, and what are common practices or pitfalls?
It is fairly rare to explicitly link a DLL. Mostly because it is painful and error prone. You need to write a function pointer declaration for the exported function and get the LoadLibrary + GetProcAddress + FreeLibrary code right. You'd do so only if you need a runtime dependency on a plug-in style DLL or want to select from a set of DLLs based on configuration. Or to deal with versioning, an API function that's only available on later versions of Windows for example. Explicit linking is the default for COM and .NET DLLs.
More background info in this MSDN Library article.
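A minimal sketch of the explicit-linking boilerplate described above; mylib.dll and Compute are hypothetical stand-ins for your plug-in DLL and its export.

    #include <windows.h>
    #include <stdio.h>

    // Function-pointer type matching the export's actual signature.
    typedef int (__cdecl *ComputeFn)(int);

    int main(void)
    {
        HMODULE hLib = LoadLibraryW(L"mylib.dll");
        if (!hLib) {
            printf("LoadLibrary failed: %lu\n", GetLastError());
            return 1;
        }

        ComputeFn compute = (ComputeFn)GetProcAddress(hLib, "Compute");
        if (compute)
            printf("Compute(21) = %d\n", compute(21));
        else
            printf("GetProcAddress failed: %lu\n", GetLastError());

        FreeLibrary(hLib);
        return 0;
    }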
I'm assuming you refer to linking using a .lib vs loading a DLL dynamically using LoadLibrary().
Loading a DLL statically by linking to its .lib is generally safer. The link stage checks at build time that all the entry points exist, so there is no chance you'll load a DLL that doesn't have the function you're expecting. It is also easier not to have to use GetProcAddress().
So generally you should use dynamic loading only when it is absolutely required.
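For contrast, the implicit-linking version of the same call (hypothetical names again) is just a declaration plus the import library; the loader resolves mylib.dll before main() runs and fails fast if the DLL or the export is missing.

    extern "C" __declspec(dllimport) int __cdecl Compute(int);
    #pragma comment(lib, "mylib.lib")  // or add mylib.lib to the linker inputs

    int main(void)
    {
        return Compute(21);
    }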
I agree with the others who already answered (Hans Passant and shoosh). I want to add only two things:
1) One common scenario where you have to use LoadLibrary and GetProcAddress is the following: you want to use a new API that exists only in newer versions of Windows, but the API is not critical to your application. So you test with LoadLibrary and GetProcAddress whether the function you need exists, and use it if it does. What your program does if the function does not exist depends entirely on your implementation.
2) There is one important option which you did not include in your question: delayed loading of DLLs. In this case the operating system loads the DLL when one of its functions is first called, not at application start. This allows you to use import libraries (.lib files) in some scenarios where, at first glance, explicit linking would seem required. Moreover, it improves the startup time of applications and is widely used by Windows itself. So this approach is also recommended.
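As a sketch of the delay-load setup (mylib again being a placeholder): you link against the import library as usual, but add the delay-load helper library and the /DELAYLOAD switch, and the operating system then maps mylib.dll only on the first call into it.

    link app.obj mylib.lib delayimp.lib /DELAYLOAD:mylib.dll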
I have a DLL that's compiled, and I don't have the source code for it anymore. The only thing I want from the DLL is the functions it provides, and how they are accessed, i.e. their signature.
How can I do this?
You can extract the following:
Metadata (all classes, methods, parameters, etc.) in the case of a .NET assembly. Use Reflector and ILDasm for this purpose.
Exported functions in the case of a native DLL. Use the dumpbin utility (see the commands below). There is also a professional tool called IDA; it's very powerful and is meant for interactive reverse-engineering.
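For the native case, the quick recipe looks like this, with mylib.dll as a placeholder: dumpbin lists the decorated export names, and undname turns a C++-mangled name such as ?Compute@@YAHH@Z back into a readable signature (int __cdecl Compute(int)). Plain C exports carry no parameter information at all, so for those you'll need documentation or a disassembler like IDA.

    dumpbin /exports mylib.dll
    undname ?Compute@@YAHH@Z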
For example, if writing a shell extension, is it necessary to register the type library with CComModule::RegisterServer? (I.e., I'm wondering whether calling it with FALSE will cause some issues.)
Hardly ever.
In theory, typelibs would be used by IDEs that want to display lists of properties and methods on an object. OleView (or whatever the tool is called) can use typelibs to generate quite a lot of important information about an ActiveX control.
In MSVC you can use #import on an ActiveX with an embedded typelib, and header files describing the interfaces and types will be generated automatically.
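A hypothetical one-liner (mycontrol.ocx is a placeholder); the compiler reads the embedded typelib and emits .tlh/.tli headers with wrapper classes for the interfaces:

    #import "mycontrol.ocx" named_guids no_namespace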
When dealing with well-known interfaces (published in the Platform SDK), or if header files are already supplied for C and C++ bindings, then typelibs are a tad useless.
Even when used from script languages, IDispatchEx now seems to be preferred as a discovery mechanism, letting the IDE or code query an IDispatch-supporting object at runtime for its methods.
In general, it's fine to pass FALSE unless you need a typelib because someone is going to be calling your IDispatch implementation. For most shell extensions I don't think you need a typelib.
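In old-style ATL that choice is literally the boolean you pass through from DllRegisterServer; a minimal sketch for a CComModule-based server:

    #include <atlbase.h>
    extern CComModule _Module;   // the usual ATL 3 global module object

    STDAPI DllRegisterServer(void)
    {
        // FALSE = skip registering the type library; TRUE registers it too.
        return _Module.RegisterServer(FALSE);
    }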