I am experimenting with EAP-TLS and have configured a chain with multiple intermediate CAs (this is not used for production).
So far, authentication works when the chain size is 32 KB, but not beyond that. Is it possible to increase this limit? My tests concern fragmentation on the network; I can control the certificate size, but not the other elements of the EAP-TLS packets.
Also, from the wired-autoconfig event logs, when the chain is larger than 32 KB, I see the following failure reason:
Reason: one or more parameters passed to the function was invalid
Error code: 0x8009035D
I found some differences between the testnet and the devnet in how extend_from_slice on VecMapper works. It works fine on the devnet, but the same endpoint in the same smart contract breaks on the testnet, and I wonder why.
Here is the function which fails: https://github.com/juliancwirko/elven-nft-minter-sc/blob/main/src/lib.rs#L273
What is strange is that, on the testnet, it breaks only when the passed value is bigger than 64.
Here are the results for the same smart contract, the same code, the same endpoint, and the same sent data:
testnet (works ok for passed 64 or less as argument): https://testnet-explorer.elrond.com/transactions/afdb120f1b807a084a56b6ecc126ff859a2f4f54dd14a11479f1a7e92929a878
testnet (fails for passed 65 or more as argument): https://testnet-explorer.elrond.com/transactions/868b74ce8ecb8d25221949fdee1594bb5633694ec7c47e5a41dc362f9b2965ae
devnet (works ok for passed 5000 as argument): https://devnet-explorer.elrond.com/transactions/dce0b5dcde35dfa159a55524949321bbd0521c62d38fdf6353d883e6c230e006
What works in both environments is not using extend_from_slice but pushing directly to the VecMapper instead, which consumes a lot of gas, over two times more.
The error data:
identifier: signalError
in topics: execution failed
Here is a response from a group where the question was also asked:
There are some limitations regarding Rust's dynamic allocation, which can cause some SC calls to fail. We recommend using managed types instead, which only allocate memory inside the VM, or using static buffers.
I need a way to limit the amount of memory that a service may allocate in order to prevent the service from starving the system, similar to the way SQL Server allows you to set "Maximum server memory".
I know SetProcessWorkingSetSize doesn't do exactly what I want, but I'm trying to get it to behave the way that I believe it should. Regardless of the values that I use, my test app's working set is not limited. Further, if I call GetProcessWorkingSetSize immediately afterwards, the values returned are not what I previously specified. Here's the code used by my test app:
var
  MinWorkingSet: SIZE_T;
  MaxWorkingSet: SIZE_T;
begin
  if not SetProcessWorkingSetSize(GetCurrentProcess(), 20, 12800) then
    RaiseLastOSError();
  if GetProcessWorkingSetSize(GetCurrentProcess(), MinWorkingSet, MaxWorkingSet) then
    ShowMessage(Format('%d'#13#10'%d', [MinWorkingSet, MaxWorkingSet]));
end;
No error occurs, but both the Min and Max values returned by GetProcessWorkingSetSize are 81,920.
I tried using SetProcessWorkingSetSizeEx using QUOTA_LIMITS_HARDWS_MAX_ENABLE ($00000004) in the Flags parameter. Unfortunately, SetProcessWorkingSetSizeEx fails with "Code 87. The parameter is incorrect" if I pass anything other than $00000000 in Flags.
I've also pursued using Job Objects to accomplish the same goal. I have memory limits working with Job Objects when launching a child process. However, I need the ability for a service to set its own memory limits rather than depending on a "launching" service to do it. So far, I haven't found a way for a single process to create a job object and then add itself to the job object. This always fails with Access Denied.
Any thoughts or suggestions?
The documentation of SetProcessWorkingSetSize function says:
dwMinimumWorkingSetSize [in]
...
This parameter must be greater than zero but less than or equal to the maximum working set size. The default size is 50 pages (for example, this is 204,800 bytes on systems with a 4K page size). If the value is greater than zero but less than 20 pages, the minimum value is set to 20 pages.
With a 4K page size, the imposed minimum is 20 * 4096 = 81,920 bytes, which is exactly the value you saw.
The values are specified in bytes.
To actually limit the memory for your service process, I think it's possible to create a new job (CreateJobObject), set the memory limit (SetInformationJobObject), and assign your current process to the job (AssignProcessToJobObject) in the service's startup routine.
Unfortunately, on Windows before 8 and Server 2012, this won't work if the process already belongs to a job:
Windows 7, Windows Server 2008 R2, Windows XP with SP3, Windows Server
2008, Windows Vista and Windows Server 2003: The process must not
already be assigned to a job; if it is, the function fails with
ERROR_ACCESS_DENIED. This behavior changed starting in Windows 8 and
Windows Server 2012.
If this is your case (i.e. you get ERROR_ACCESS_DENIED on older Windows), check whether the process is already assigned to a job (in which case you're out of luck), but also make sure the process handle has the required access rights: PROCESS_SET_QUOTA and PROCESS_TERMINATE.
Would it be accurate to call the Heartbleed bug a stack overflow? In my understanding, this is quite a typical example. Is this technically correct?
The Heartbleed bug is not a stack overflow error but a type of buffer overrun error. A stack overflow error happens when a program runs out of stack space; this usually results in a crash and is not directly exploitable.
A stack is a data structure with "last in, first out" as its primary characteristic. It allows a caller (a piece of a program) to "push" information onto the stack, and to "pop" off the last item pushed. For a strict stack, no other operations are allowed.
The stack is used for programs when they call subprograms (functions, methods, subroutines are all subprograms, they have different names in different contexts). When a program calls a subprogram, a bunch of information needs to be saved so that it's available when the subprogram returns. So this "execution context" is pushed onto the stack, and then retrieved on return. This operation is so vital to computers that computer hardware supports it directly; in other words, there are machine instructions to do this so that it doesn't have to be done (slower) in software.
There is usually an amount of memory in the computer dedicated to this runtime stack, typically one stack for each program running and a few for the operating system, etc. If subroutine calls get so "deep" that the allocated stack space cannot hold all the information needed for the next call, that is a stack overflow error.
This was not what the Heartbleed problem was about. It allowed an external program to specify an amount of buffer space to be returned to it, and the server returned whatever happened to be in memory beyond the little bit of data that this external program sent.
So the real answer to the question is "no", and I cannot imagine who would have thought that this was a typical example.
Technically, yes. But not in the traditional overflow sense where you try to smash the stack and fiddle with return values and try to execute code. This was purely a "leak private data" problem.
The OpenSSL specification requires that a client send a chunk of random-ish data in its heartbeat packet. The server is required to return that data exactly as is to the client.
The bug is that the client basically sends two pieces of data:
size_of_heartbeat (16-bit integer representing the heartbeat data size)
heartbeat_data (up to 64 KB of data)
A malicious client can LIE about the data it's sending and say:
size_of_heartbeat = 64 KB
heartbeat_data = '' (1 byte)
OpenSSL failed to verify that size_of_heartbeat == actual_size(heartbeat_data) and would trust size_of_heartbeat, so basically it would:
-- allocate as much memory as the client claims they sent to you
-- copy the user's heartbeat packet into the response packet.
Since the client claims they sent you 64 KB, OpenSSL duly allocated a 64 KB buffer, but then did an unbounded memcpy() and would happily copy up to 64 KB of RAM past where the client's heartbeat data actually ended.
Given enough attempts at this, you could build up a fairly complete picture of what's in the server's memory, 64 KB at a time, and eventually extract things like the server's SSL certificates, temporary data from previous users who'd passed through the encryption layers, and so on.
I am working on a problem in SNMP extension agent in windows, which is passing traps to snmp.exe via SnmpExtensionTrap callback.
We added a couple of fields to the agent recently, and I am starting to see that some traps get lost. When I intercept the call in a debugger and reduce the length of some strings, the same traps that would otherwise have been lost go through.
I cannot seem to find any references to size limit or anything on the data passed via SnmpExtensionTrap. Does anyone know of one?
I would expect the trap size to be limited by the UDP packet size, since SNMP runs over the datagram-oriented UDP protocol.
The maximum size of a UDP datagram is 65,535 bytes, but you'll have to account for the UDP/IP and SNMP overhead, plus any limitations of the transport you're running over (e.g. Ethernet).
I got an ERROR_INSUFFICIENT_BUFFER error when invoking FindNextUrlCacheEntry(). I then want to retrieve the failed entry again, using an enlarged buffer, but when I invoke FindNextUrlCacheEntry() again, I seem to retrieve the entry after the one that failed. Is there any way to go back and retrieve the information of the entry that just failed?
I also observed the same behavior on XP. I am trying to clear the IE cache programmatically using WinInet APIs. The code at the following MSDN link works perfectly fine on Win7/Vista but deletes cache files in batches (multiple runs) on XP. While debugging I found that the API FindNextUrlCacheEntry gives different sizes for the same entry when executed multiple times.
MSDN Link: http://support.microsoft.com/kb/815718
Here is what I am doing:
First of all I make a call to determine the size of the next URL entry
fSuccess = FindNextUrlCacheEntry(hCacheHandle, NULL, &cacheEntryInfoBufferSizeInitial); // cacheEntryInfoBufferSizeInitial = 0 at this point
The above call returns FALSE with the error set to ERROR_INSUFFICIENT_BUFFER and the cacheEntryInfoBufferSizeInitial parameter set to the size of the buffer required to retrieve the cache entry, in bytes. After allocating the required size (cacheEntryInfoBufferSizeInitial) I call the same WinInet API again, expecting it to retrieve the entry successfully this time. But sometimes it fails again: the API expects more bytes than the size it reported earlier. Most of the time the difference is a few bytes, but I have also seen cases where the difference is almost 4 to 5 times.
For what it's worth this seems to be solved in Vista.