I found some differences between the testnet and the devnet in how extend_from_slice on VecMapper works. It works fine on the devnet, but the same endpoint/function in the same SC breaks on the testnet. I wonder why.
Here is the function which fails: https://github.com/juliancwirko/elven-nft-minter-sc/blob/main/src/lib.rs#L273
What is strange is that on the testnet it breaks only for passed values bigger than 64.
Here are the results of the same Smart Contract, the same code, the same endpoint, and the same sent data:
testnet (works fine when 64 or less is passed as the argument): https://testnet-explorer.elrond.com/transactions/afdb120f1b807a084a56b6ecc126ff859a2f4f54dd14a11479f1a7e92929a878
testnet (fails when 65 or more is passed as the argument): https://testnet-explorer.elrond.com/transactions/868b74ce8ecb8d25221949fdee1594bb5633694ec7c47e5a41dc362f9b2965ae
devnet (works fine when 5000 is passed as the argument): https://devnet-explorer.elrond.com/transactions/dce0b5dcde35dfa159a55524949321bbd0521c62d38fdf6353d883e6c230e006
What works in both environments is not using extend_from_slice but instead pushing directly to the VecMapper, which consumes a lot more gas, over two times more.
The error data:
identifier: signalError
in topics: execution failed
Here is a response from a group where the question was also asked:
There are some limitations regarding Rust's dynamic allocation, which can cause some SC calls to fail. We recommend using managed types instead, which only allocate memory inside the VM, or using static buffers.
Related
So I've been trying to write a pintool that monitors call/ret instructions, but I've noticed that there was a significant inconsistency between the two: for example, ret instructions without a previous call.
I've run the tool on a console application, from which I got the following logs showing this inconsistency (this is an example; there are more inconsistencies like the one listed below in the other call/ret instructions):
1. Call from ntdll!LdrpCallInitRoutine+0x69, expected to return to 7ff88f00502a
2. RETURN to 7ff88f00502a
//call from ntdll!LdrpInitializeNode+0x1ac which is supposed to return at 7ff88f049385 is missing (the previous instruction)
3. RETURN to 7ff88f049385 (ntdll!LdrpInitializeNode+0x1b1)
The above are the first 3 log entries for the call/ret instructions. As one can see, the monitoring started a bit late, at the call found at ntdll!LdrpCallInitRoutine+0x69. It returned to the expected address, but then a return to 7ff88f049385 was logged without the call found in the previous instruction ever being tracked.
Any idea what could be at fault?
The program is traced with INS_AddInstrumentFunction with a callback that more or less does:
if INS_IsCall(ins) INS_InsertCall(ins,...
if INS_IsRet(ins) INS_InsertCall(ins,...
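A minimal, self-contained version of that kind of callback might look like the sketch below (a reconstruction, not the actual tool: the analysis routines and the logging are placeholders of mine, while the Pin calls themselves are standard API):

#include "pin.H"
#include <iostream>

// Analysis routine for call instructions: log where the call expects to return.
static VOID OnCall(ADDRINT ip, ADDRINT expectedReturn)
{
    std::cerr << "CALL at 0x" << std::hex << ip
              << ", expected to return to 0x" << expectedReturn << std::endl;
}

// Analysis routine for ret instructions: log the actual return target.
static VOID OnRet(ADDRINT ip, ADDRINT target)
{
    std::cerr << "RET  at 0x" << std::hex << ip
              << ", returning to 0x" << target << std::endl;
}

// Instrumentation callback registered with INS_AddInstrumentFunction.
static VOID Instruction(INS ins, VOID *)
{
    if (INS_IsCall(ins))
    {
        // The expected return address is the address of the next instruction.
        INS_InsertCall(ins, IPOINT_BEFORE, (AFUNPTR)OnCall,
                       IARG_INST_PTR,
                       IARG_ADDRINT, INS_NextAddress(ins),
                       IARG_END);
    }
    if (INS_IsRet(ins))
    {
        // IARG_BRANCH_TARGET_ADDR resolves to the runtime return target.
        INS_InsertCall(ins, IPOINT_BEFORE, (AFUNPTR)OnRet,
                       IARG_INST_PTR,
                       IARG_BRANCH_TARGET_ADDR,
                       IARG_END);
    }
}

int main(int argc, char *argv[])
{
    if (PIN_Init(argc, argv)) return 1;
    INS_AddInstrumentFunction(Instruction, 0);
    PIN_StartProgram();   // never returns
    return 0;
}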
I've tried the same program on Linux, where it worked as expected, without any mismatch.
Any idea what the reason behind this behavior might be?
I am having some issues with my virtual HBA driver on Windows Server 2016. I ran the HLK crashdump support test; the test failed 3 times out of 10. In those 3 failing runs, the crash dump hangs at 0% while taking a Complete dump, Kernel dump, or minidump.
By kernel debugging my code, I found that the call to ExAllocatePoolWithTag() for buffer allocation never actually returns.
Below is the statement which never returns.
pDeviceExtension->pcmdbuf = (struct mycmdrsp *)ExAllocatePoolWithTag(NonPagedPoolCacheAligned, pcmdqSignalSize, ((ULONG)'TA1'));
I searched the web for this; however, all of the pages I found focus on this function returning NULL, whereas in my case it never returns at all.
Any help on how to move forward would be highly appreciated.
Thanks in advance.
You can't allocate memory in crash dump mode. You're running at HIGH_LEVEL with interrupts disabled and so you're calling this API at the wrong IRQL.
The typical solution for a hardware adapter is to set the RequestedDumpBufferSize in the PORT_CONFIGURATION_INFORMATION structure during the normal HwFindAdapter call. Then when you're called again in crash dump mode you use the CrashDumpRegion field to get your dump buffer allocation. You then need to write your own "crash dump mode only" allocator to allocate buffers out of this memory region.
It's a huge pain, especially given that it's difficult/impossible to know how much memory you're ultimately going to need. I usually calculate some minimal configuration overhead (e.g. 1 channel, 8 I/O requests at a time, etc.) and then add in a registry-configurable slush. The only benefit is that the environment is stripped down, so you don't need to be in your all-singing, all-dancing configuration.
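To make the "crash dump mode only" allocator part concrete, a rough sketch might look like the following bump allocator over the preallocated dump region. The structure and helper names here are purely illustrative, and the region's base/size come from the CrashDumpRegion mechanism described above, so check storport.h for the exact field names and types; the point is simply that nothing on this path calls the pool allocator.

// Illustrative only: a trivial bump allocator over the dump buffer region.
// Assumes the usual WDK headers are included.
typedef struct _DUMP_ALLOCATOR {
    PUCHAR Base;      // start of the preallocated dump region
    SIZE_T Size;      // size requested via RequestedDumpBufferSize
    SIZE_T Offset;    // next free byte
} DUMP_ALLOCATOR;

VOID DumpAllocInit(DUMP_ALLOCATOR *Alloc, PVOID RegionBase, SIZE_T RegionSize)
{
    Alloc->Base = (PUCHAR)RegionBase;
    Alloc->Size = RegionSize;
    Alloc->Offset = 0;
}

// Used only on the crash dump path; no ExAllocatePoolWithTag, no IRQL issues.
PVOID DumpAlloc(DUMP_ALLOCATOR *Alloc, SIZE_T Bytes)
{
    SIZE_T Aligned = (Bytes + MEMORY_ALLOCATION_ALIGNMENT - 1) &
                     ~((SIZE_T)MEMORY_ALLOCATION_ALIGNMENT - 1);
    PVOID Result;

    if (Alloc->Offset + Aligned > Alloc->Size) {
        return NULL;   // region exhausted; this is why you size it generously
    }
    Result = Alloc->Base + Alloc->Offset;
    Alloc->Offset += Aligned;
    return Result;
}

// Freeing is usually unnecessary: the dump path allocates a small, fixed set
// of buffers once, and the machine is about to write the dump and stop.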
Would it be accurate to call the Heartbleed bug a stack overflow? In my understanding, this is quite a typical example. Is this technically correct?
The Heartbleed bug is not a stack overflow error but a type of buffer overrun error. A stack overflow error happens when a program runs out of stack space. This usually results in a crash, and is not directly exploitable.
A stack is a data structure with "last in, first out" as its primary characteristic. It allows a caller (a piece of a program) to "push" information onto the stack, and to "pop" off the last item pushed. For a strict stack, no other operations are allowed.
The stack is used for programs when they call subprograms (functions, methods, subroutines are all subprograms, they have different names in different contexts). When a program calls a subprogram, a bunch of information needs to be saved so that it's available when the subprogram returns. So this "execution context" is pushed onto the stack, and then retrieved on return. This operation is so vital to computers that computer hardware supports it directly; in other words, there are machine instructions to do this so that it doesn't have to be done (slower) in software.
There is usually an amount of memory in the computer dedicated to this runtime stack, and usually even a stack for each program running and a few for the operating system, etc. If subroutine calls get so "deep" that the amount of stack space allocated won't hold all the information needed for a call that occurs, that is a stack overflow error.
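For contrast, the classic way to hit an actual stack overflow is unbounded recursion, where every call pushes another frame until the dedicated stack region runs out. An illustrative C++ example (built without optimizations so the recursion isn't turned into a loop):

#include <cstdio>

// Each call pushes a new frame (return address, saved registers, locals)
// onto the runtime stack. With no base case, the frames eventually exhaust
// the stack region and the program crashes with a stack overflow.
static unsigned long long recurse(unsigned long long depth)
{
    char padding[1024];                        // make each frame noticeably large
    padding[0] = (char)depth;
    return recurse(depth + 1) + padding[0];    // not a tail call
}

int main()
{
    std::printf("%llu\n", recurse(0));         // crashes long before printing
    return 0;
}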
This was not what the Heartbleed problem was about. It allowed an external program to set an amount of buffer space to be returned to it, and returned whatever happened to be in the memory beyond the little bit of data that this external program sent.
So the real answer to the question is "no", and I cannot imagine who would have thought that this was a typical example.
Technically, yes. But not in the traditional overflow sense where you try to smash the stack and fiddle with return values and try to execute code. This was purely a "leak private data" problem.
The OpenSSL specification requires that a client send a chunk of random-ish data in its heartbeat packet. The server is required to return that data exactly as-is to the client.
The bug is that the client basically sends two bits of data:
size_of_heartbeat (16-bit integer representing the heartbeat data size)
heartbeat_data (up to 64k of data)
A malicious client can LIE about the data it's sending, and say:
size_of_heartbeat = 64k
heartbeat_data = '' (1 byte)
OpenSSL failed to verify that size_of_heartbeat == actual_size(heartbeat_data), and would trust size_of_heartbeat, so basically you'd have:
-- allocate as much memory as the client claims they sent to you
-- copy the user's heartbeat packet into the response packet.
Since the user claims they sent you 64k, OpenSSL properly allocated a 64k buffer, but then did an unbounded memcpy() and would happily copy up to 64k of RAM past where the client's heartbeat data actually ended.
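A stripped-down sketch of that pattern might look like this (an illustration of trusting the claimed length, not the actual OpenSSL code; the struct and names are invented):

#include <cstdint>
#include <cstdlib>
#include <cstring>

// Invented for illustration: what the attacker controls.
struct heartbeat_packet {
    uint16_t claimed_len;              // what the client says it sent (up to 64k)
    const unsigned char *payload;      // what it actually sent (maybe 1 byte)
    size_t actual_len;                 // bytes really received
};

unsigned char *build_response(const heartbeat_packet *pkt)
{
    // Buffer sized from the *claimed* length...
    unsigned char *resp = (unsigned char *)std::malloc(pkt->claimed_len);
    if (resp == nullptr)
        return nullptr;

    // ...and an unbounded copy that trusts the claimed length, reading up to
    // ~64k past the end of the real payload and leaking whatever happens to
    // sit in adjacent memory. The fix is to check claimed_len against
    // actual_len before copying.
    std::memcpy(resp, pkt->payload, pkt->claimed_len);
    return resp;
}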
Given enough attempts at this, you could build up a pretty complete picture of what's in the server's memory, 64k at a time, and eventually be able to extract things like the server's SSL certificates, temporary data from previous users who'd passed through the encryption layers, etc...
I have some questions regarding COM memory management:
I have a COM method:
STDMETHODIMP CWhitelistPolicy::GetWebsitesStrings(SAFEARRAY** result)
Here result is a SAFEARRAY of BSTRs. If I receive another SAFEARRAY(BSTR) from another interface method (in order to set *result), do I have to make copies of the received strings in order to pass them to *result and to the outside client? Or, considering I will not use the strings myself, can I just pass them straight to the client (passing the ownership out)?
2.
STDMETHODIMP CWhitelistPolicy::SetWebsitesStrings(SAFEARRAY* input)
Here I receive a BSTR array as input. Again, is my method responsible for the memory allocated for input?
3.
STDMETHOD(SetUsers)(SAFEARRAY* input);
Here I call a method on another interface (SetUsers) and I allocate memory for the input SAFEARRAY. After I call SetUsers, can I dispose of the SAFEARRAY? Memory is always copied when marshaling takes place, isn't it? (In my case the SetUsers method is called on an interface that is hosted as a COM DLL inside my process.)
The way I think about it to answer questions like this is to think about a COM call that crosses machines. Then it's obvious for an [out] param: I, the caller, own and have to free the memory, because the remote marshaling layer can't do it. For [in] parameters, it's obvious the marshaling layer must copy my data, and again the remote marshaling layer can't free what I passed in.
A core tenet in COM is location neutrality: the rules when calling in the same apartment are the rules when using DCOM across machines.
1. You're responsible for freeing it - you don't pass ownership when you call the next function, because it could be remote and getting a copy, not your original data.
2. No - as the callee, you don't have to free it. If it's intra-apartment, it's the memory the caller provided and the caller has to free it. If it's a remote call, the server stub allocates it and will free it when the method returns.
3. Yes, you free it - and no, it's not always copied (it might be), which is why the answer to 2 is no. If it's copied, there's a stub that allocated it, and the stub will free it.
Note my answers to your questions didn't cover the case of [in,out] parameters - see the SO question "Who owns returned BSTR?" for some more details on this case.
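To make the ownership for 1 and 3 concrete, a caller-side sketch might look like this. IWhitelistPolicy and the helper functions here are placeholders mirroring your methods, not a real interface definition:

#include <windows.h>
#include <oleauto.h>

// Placeholder interface mirroring the question's methods.
struct IWhitelistPolicy : public IUnknown
{
    virtual HRESULT STDMETHODCALLTYPE GetWebsitesStrings(SAFEARRAY **result) = 0;
    virtual HRESULT STDMETHODCALLTYPE SetUsers(SAFEARRAY *input) = 0;
};

// Question 1, seen from the caller's side: the [out] SAFEARRAY is owned by
// the caller once the method returns, so the caller frees it.
HRESULT PrintWebsites(IWhitelistPolicy *policy)
{
    SAFEARRAY *sites = nullptr;
    HRESULT hr = policy->GetWebsitesStrings(&sites);
    if (FAILED(hr))
        return hr;
    // ... read the BSTRs via SafeArrayGetElement / SafeArrayAccessData ...
    SafeArrayDestroy(sites);    // frees the array and the BSTRs it holds
    return S_OK;
}

// Question 3: the caller creates the [in] SAFEARRAY, passes it, and destroys
// it afterwards, whether the callee was in-proc or remote.
HRESULT CallSetUsers(IWhitelistPolicy *policy)
{
    SAFEARRAY *users = SafeArrayCreateVector(VT_BSTR, 0, 2);
    if (users == nullptr)
        return E_OUTOFMEMORY;

    const wchar_t *names[] = { L"alice", L"bob" };
    for (LONG i = 0; i < 2; ++i)
    {
        BSTR name = SysAllocString(names[i]);
        SafeArrayPutElement(users, &i, name);   // the array stores a copy
        SysFreeString(name);                    // so free the local BSTR
    }

    HRESULT hr = policy->SetUsers(users);       // [in] param: callee must not free

    SafeArrayDestroy(users);                    // caller still owns it
    return hr;
}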
COM allocation rules are complicated but rational. Get the book Essential COM by Don Box if you want to understand/see examples of all the cases. Still, you're going to make mistakes, so you should have a strategy for detecting them. I use gflags (part of WinDbg) and its heap-checking flags to catch when a double free occurs (a debug message is displayed and execution halted at the call with an INT 3). Visual Studio's debugger used to turn them on for you when it launched the executable (it likely still does), but you can force them on with gflags under the image options tab.
You should also know how to use UMDH (also part of WinDbg) to detect leaks. DebugDiag is the newer tool for this and seems easier to use, but sadly you can only have the 32-bit or the 64-bit version installed, not both.
The problem then is BSTRs: because they are cached, detecting double frees and leaks is tricky, since interaction with the heap is delayed. You can shut off the OLE string cache by setting the environment variable OANOCACHE to 1 or by calling the function SetOaNoCache. The function is not defined in a header file, so see the SO question "Where is SetOaNoCache defined?". Note the accepted answer shows the hard way to call it, through GetProcAddress(). The answer below the accepted one shows all you need is an extern "C" declaration, as it's in the oleaut32 export lib. Finally, see this Larry Osterman blog post for a more detailed description of the difficulties caused by the cache when hunting leaks.
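For reference, the extern "C" route looks roughly like this; SetOaNoCache is undocumented, so treat the exact signature as an assumption and double-check it against that SO answer:

// Assumed signature for the undocumented oleaut32 export; link with oleaut32.lib.
extern "C" void SetOaNoCache(void);

int main()
{
    SetOaNoCache();   // or set the environment variable OANOCACHE=1 instead
    // ... run the code you're hunting BSTR leaks / double frees in ...
    return 0;
}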
I got an ERROR_INSUFFICIENT_BUFFER error when invoking FindNextUrlCacheEntry(). I then want to retrieve the failed entry again, using an enlarged buffer. But I found that when I invoke FindNextUrlCacheEntry() again, it seems I retrieve the entry after the failed one. Is there any approach by which I can go back and retrieve the information for the entry that just failed?
I also observed the same behavior on XP. I am trying to clear the IE cache programmatically using WinInet APIs. The code at the following MSDN link works perfectly fine on Win7/Vista but deletes cache files in batches (multiple runs) on XP. On debugging I found that the FindNextUrlCacheEntry API gives different sizes for the same entry when executed multiple times.
MSDN Link: http://support.microsoft.com/kb/815718
Here is what I am doing:
First of all, I make a call to determine the size of the next URL entry:
fSuccess = FindNextUrlCacheEntry(hCacheHandle, NULL, &cacheEntryInfoBufferSizeInitial); // cacheEntryInfoBufferSizeInitial = 0 at this point
The above call returns FALSE with the error set to ERROR_INSUFFICIENT_BUFFER and with the cacheEntryInfoBufferSizeInitial parameter set to the size of the buffer, in bytes, required to retrieve the cache entry. After allocating the required size (cacheEntryInfoBufferSizeInitial), I call the same WinInet API again, expecting it to retrieve the entry successfully this time. But sometimes it fails again: even with the required buffer size (as determined by the API itself), it expects more bytes than what it reported earlier. Most of the time the difference is a few bytes, but I have also seen cases where the difference is almost 4 to 5 times.
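Roughly, the enumeration loop follows the KB sample above, like the simplified sketch below (error handling trimmed); the failure I'm describing is at the second FindNextUrlCacheEntry call:

#include <windows.h>
#include <wininet.h>
#include <cstdio>
#include <cstdlib>
// link with wininet.lib

void EnumCache()
{
    DWORD cbEntry = 0;

    // Size query for the first entry (returns NULL + ERROR_INSUFFICIENT_BUFFER).
    HANDLE hEnum = FindFirstUrlCacheEntryA(NULL, NULL, &cbEntry);
    if (hEnum != NULL || GetLastError() != ERROR_INSUFFICIENT_BUFFER)
        return;

    LPINTERNET_CACHE_ENTRY_INFOA pEntry =
        (LPINTERNET_CACHE_ENTRY_INFOA)std::malloc(cbEntry);
    if (pEntry == NULL)
        return;
    pEntry->dwStructSize = cbEntry;

    hEnum = FindFirstUrlCacheEntryA(NULL, pEntry, &cbEntry);
    if (hEnum == NULL)
    {
        std::free(pEntry);
        return;
    }

    do
    {
        std::printf("%s\n", pEntry->lpszSourceUrlName);

        // Size query for the next entry...
        cbEntry = 0;
        BOOL ok = FindNextUrlCacheEntryA(hEnum, NULL, &cbEntry);
        if (!ok && GetLastError() == ERROR_INSUFFICIENT_BUFFER)
        {
            LPINTERNET_CACHE_ENTRY_INFOA bigger =
                (LPINTERNET_CACHE_ENTRY_INFOA)std::realloc(pEntry, cbEntry);
            if (bigger == NULL)
                break;
            pEntry = bigger;
            pEntry->dwStructSize = cbEntry;

            // ...then the real call with the size the API just reported.
            // This is the call that sometimes fails again with
            // ERROR_INSUFFICIENT_BUFFER, asking for an even larger size.
            ok = FindNextUrlCacheEntryA(hEnum, pEntry, &cbEntry);
        }
        if (!ok)
            break;   // ERROR_NO_MORE_ITEMS ends the enumeration
    } while (TRUE);

    std::free(pEntry);
    FindCloseUrlCache(hEnum);
}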
For what it's worth, this seems to be solved in Vista.