I have a workflow XML that references a function in a separate DLL. When I compile the workflow XML (VS2019), the reference to the function in the DLL actually calls the function at compile time. The function in turn invokes the constructor of a class in another DLL, and that constructor attempts to make a connection to a SignalR server on a different machine.
Obviously, this means the code will not compile unless the SignalR server is running and the constructor succeeds. In actuality, the function raises an event when it can't make the connection, which breaks the compilation.
Is there any way to stop the actual execution of the function when compiling the XML? All the compiler should need is a valid reference to the function in order to generate the code connecting the workflow to it. Why must it execute the function at compile time (and subsequently fail to make the SignalR connection)?
Ethereum/smart contracts enthusiasts,
I want to execute some code that does not modify state variables on my own Geth node, without deploying a contract. Is that somehow possible?
My current thoughts:
I have debugged Geth a little. I found that when a view function is executed, StaticCall is invoked from the EVM class. It seems that at this point I could also inject the bytecode of my own view functions without deploying them. From my understanding, a view function does not modify any state variables; it only reads and returns them. This means I could technically do that without corrupting the chain. But this way of changing the code seems rather heavyweight; is there a simpler way?
Thanks.
It is not possible. You need the EVM to execute the code. The EVM is an entirely isolated and sandboxed runtime environment, and there are some key elements of that execution environment that the code requires in order to run.
Deploying a contract creates an instance of the contract; you then interact with that contract instance.
In my project I use a third-party DLL that creates some threads, which in turn call my function. The threads in the third-party DLL are created with _beginthreadex(), and the DLL is compiled with MSVC.
My project is compiled with MINGW.
In my function I use the thread-local variable, using the __thread keyword, like the following:
__thread Env* env;

Env* getEnv() {
    if (env) return env;
    return env = createNewEnv();
}

// called by the 3rd-party thread
void myFunction() {
    Env* env = getEnv();
    // do work
}
From time to time the third-party library kills all its threads and later spawns new ones, for which I receive a notification. To prevent a memory leak I need to delete the Env instances created by the now-dead threads. The problem is that I don't know how to access those instances, because the pointers are thread-local.
I've found a suggestion to use the pthread_cleanup_push() function, which is present in the MinGW runtime. That would ideally solve my issue, but I'm not sure whether it works with threads created by _beginthreadex().
Any ideas? Maybe another workaround exists?
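For illustration, one possible workaround (a sketch with invented names, using standard C++ threading primitives rather than the exact MSVC/MinGW mix in the question) is to register every Env in a mutex-protected global set as it is created, so the code handling the "threads are gone" notification can delete the instances left behind by dead threads:

```cpp
#include <cassert>
#include <mutex>
#include <thread>
#include <unordered_set>

struct Env { int id; };  // stand-in for the real per-thread state

static std::mutex g_registryMutex;
static std::unordered_set<Env*> g_registry;  // every live Env, reachable from any thread

static thread_local Env* t_env = nullptr;

Env* getEnv() {
    if (t_env) return t_env;
    t_env = new Env{0};
    std::lock_guard<std::mutex> lock(g_registryMutex);
    g_registry.insert(t_env);  // remember the pointer outside of TLS
    return t_env;
}

// Called from the notification that the library's worker threads are gone.
// Safe only at that point, because no dead thread can touch its Env anymore.
void destroyAllEnvs() {
    std::lock_guard<std::mutex> lock(g_registryMutex);
    for (Env* e : g_registry) delete e;
    g_registry.clear();
}
```

The sweep in destroyAllEnvs() must only run after the library's notification, i.e. once no thread owning one of those Env pointers can still be running.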
In the end, the idea of using pthread_cleanup_push() turned out to be completely wrong, because it can only be used inside a thread's main function, which I don't have access to. It isn't even a real function: it's a #defined chunk of code that must be paired with pthread_cleanup_pop().
I solved the problem of cleaning up the thread-local variables by implementing a DllMain() that handles the DLL_THREAD_DETACH notification.
A very useful discussion of this topic is here:
How to release the object in a TLS-slot at thread exit on Windows?
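As an aside, on toolchains with full C++11 thread_local support (unlike the older __thread keyword, which cannot run destructors), similar cleanup can be expressed portably by holding the pointer in a thread_local wrapper whose destructor runs at thread exit. This is a sketch under that assumption, not the DllMain approach above; whether such destructors actually fire for threads created by _beginthreadex() in a foreign MSVC DLL depends on the runtime, so it is not a guaranteed fix for the original MSVC/MinGW mix:

```cpp
#include <atomic>
#include <cassert>
#include <thread>

static std::atomic<int> g_liveEnvs{0};  // instrumentation to observe cleanup

struct Env {
    Env()  { ++g_liveEnvs; }
    ~Env() { --g_liveEnvs; }
};

struct EnvHolder {
    Env* env = nullptr;
    ~EnvHolder() { delete env; }  // runs automatically at thread exit
};

static thread_local EnvHolder t_holder;

Env* getEnv() {
    if (!t_holder.env) t_holder.env = new Env;
    return t_holder.env;
}
```

With this scheme there is nothing to sweep manually: each thread frees its own Env when it exits.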
I have an application which uses an out-of-process COM server to access a COM object created by an in-proc COM server. This means the out-of-process server has to load the in-process COM DLL to create the final object, which it then returns.
For example:
// Create an object which resides in the out of process COM server
container.CoCreateInstance("HelperServerProcess");
// Grab a reference to an object which resides in an in process COM server DLL,
// hosted by the out of process COM server
object = container.GenerateResults();
// Release the object instantiated by the out of process server
container = NULL; // or return, or go out of scope, etc
// This call will fail, because the out of process server has shutdown unloading
// the inproc DLL hosting <object>
object.DoStuff();
However, once the container object is released, the last server process reference (via CoReleaseServerProcess) is released and the server shuts down. This results in an E_RPC_SERVER_UNAVAILABLE error when trying to use the result object, even though the in-proc DLL hosted in the EXE server still has outstanding objects and therefore returns S_FALSE from DllCanUnloadNow.
I don't think adding IExternalConnection to the EXE server's class factory to manually reference-count the remote references will help, because the objects created by the in-proc DLL server use the DLL's class factory and would apply IExternalConnection to that factory instead. Also, if the server spawns interactive child objects in its process space, those wouldn't trigger IExternalConnection either way.
It also isn't possible to modify the DLL's reference counting to use CoAddRefServerProcess / CoReleaseServerProcess, as the DLL doesn't have access to the container's shutdown code in case it triggers the last release, and third-party DLLs can't be changed anyhow.
The only method I can think of that might work is adding a loop after the server refcount hits zero, which calls CoFreeUnusedLibraries and then somehow enumerates all loaded COM DLLs, waiting until none are loaded and the server refcount is still zero. This would leak processes if a loaded DLL does not implement DllCanUnloadNow correctly, and involves messing around with low-level COM implementation details which I would like to avoid.
Is there any easier way to ensure that the COM objects instantiated by the class factories of in-proc servers keep the process alive, or to enumerate the class factories of DLLs loaded into the current process and query them for their reference counts?
Update: Another method which may work, but sounds very much like the sort of thing you aren't supposed to do: intercept every spawned thread in the process by registering a CoInitialize hook via CoRegisterInitializeSpy, and add a server process reference for every thread that currently has COM initialized.
The out-of-proc EXE can delegate to the DLL object rather than return it directly. Have GenerateResults() return an object that the EXE itself implements, and have that implementation use the DLL object internally as needed. That way the EXE will not be released until the caller releases the EXE's object, keeping the EXE's own refcount alive. The EXE's object can implement the same interface as the DLL object, so the caller does not know (or care) that delegation is being used.
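The lifetime coupling that delegation buys can be sketched outside COM in plain C++ (all names here are invented stand-ins; the real fix is a COM object in the EXE that implements the DLL object's interface and forwards each call):

```cpp
#include <cassert>
#include <memory>
#include <string>

// Stand-in for the out-of-process server.
struct Server {
    std::string compute() { return "results"; }
};

// Stand-in for the interface the DLL object exposes.
struct IResults {
    virtual ~IResults() = default;
    virtual std::string DoStuff() = 0;
};

// The EXE-side wrapper: it holds the server alive and forwards calls,
// so the caller's reference to the wrapper keeps the server running.
struct ResultsProxy : IResults {
    std::shared_ptr<Server> server;  // analogue of a CoAddRefServerProcess hold
    explicit ResultsProxy(std::shared_ptr<Server> s) : server(std::move(s)) {}
    std::string DoStuff() override { return server->compute(); }
};

std::shared_ptr<IResults> GenerateResults(std::shared_ptr<Server> server) {
    return std::make_shared<ResultsProxy>(std::move(server));
}
```

Releasing the container (here, dropping the caller's Server handle) no longer tears the server down while the proxy is alive, because the proxy itself holds a reference.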
I have a C# wrapper over a C library.
In the C lib I have a function which takes a callback as input.
In C# I have exposed the callback as a delegate.
Everything works fine in normal mode, but when I try to call the same function in remoting mode it throws an exception:
Could not load file or assembly.
My remoting scenario is as follows:
1) SharedLib: a C# shared library containing functions which wrap the C functions.
All functions are defined in this lib.
2) Server console application: the server's role is to get a session from SharedLib and open a port so that the client can ask the server for a session.
3) Client console application: the client connects to the port opened by the server and gets the session object from it. The client defines a function having the same signature as the delegate in SharedLib.
On the session object the client calls the method from SharedLib which takes a callback as input.
The client passes the address of its method (matching the delegate's signature) to that method.
After this I get the exception "Could not load file or assembly."
If I pass null for the callback parameter, everything works fine in remoting mode as well.
So can anybody help with using a callback in remoting mode?
Three suggestions:
1) Are the different AppDomains running at the same trust level? The second domain may not have permission to load assemblies if its trust level is lower.
2) The remote application may not have all of the required dependencies available in the directories it loads assemblies from.
3) Are both applications running under the same version of .NET? If one is .NET 4.5 and the other is .NET 3.5, and there's a .NET 4.0 assembly involved, the second process would not be able to load it.
You could also try turning on Fusion assembly-binding logging to see if there's a more detailed reason why the load fails.
I'm developing an Internet Explorer ActiveX plugin using Qt, and trying to make the installer ensure the plugin is not running before continuing. A standard approach to this is to create a named mutex in the application and try to open it in the installer.
This works fine when built as a standalone .exe, but when the plugin DLL is loaded by either idc.exe (to register the server or handle the type library) or IE itself (after adding a test against argv[0] to skip CreateMutex for the idc runs), the CreateMutex call crashes.
Here's how I'm calling it:
CreateMutex((LPSECURITY_ATTRIBUTES)MUTEX_ALL_ACCESS, FALSE, "mutex_name_here");
Is there a reason this should fail when run within the context of an ActiveX server, but work correctly when running standalone? Is there something else I'm missing here?
The first parameter to CreateMutex() is a pointer to a SECURITY_ATTRIBUTES structure (which contains a pointer to a security descriptor); it is not a set of requested access-rights bits, which is what you're passing. I'm not sure why that ever worked in the standalone application.
You probably want to pass NULL as the first parameter so the mutex is created with a default security descriptor:
CreateMutex(NULL, FALSE, "mutex_name_here");
The desired-access bits would instead be passed to OpenMutex().