The GetModuleFileNameW function is reported as a buffer overflow (we believe a false positive), since it accepts its second argument as a buffer, of fixed size in our case.
But looking through its documentation: http://msdn.microsoft.com/en-us/library/ms683197%28v=vs.85%29.aspx
Quote: If the buffer is too small to hold the module name, the string is truncated to nSize characters including the terminating null character, the function returns nSize, and the function sets the last error to ERROR_INSUFFICIENT_BUFFER.
Can somebody, as a trusted third party, confirm or reject this issue as a false positive? Thanks for your help!
===
HMODULE applicationModule = GetModuleHandleW(NULL);
WCHAR processName[MAX_PATH];
memset(processName, 0, sizeof(processName));
GetModuleFileNameW(applicationModule, processName, sizeof(processName));
===
The problematic line is the one calling GetModuleFileNameW.
The scan was performed by the Veracode static analyzer.
Your problem is that you are passing an incorrect value for nSize. You are passing the number of bytes but you should be passing the number of characters, MAX_PATH. These values differ because a wide character has a size of 2 bytes.
So, yes, there is an error in your code. If the module name is sufficiently long, Windows will attempt to write up to 520 characters to a buffer that only has room for 260.
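For what it's worth, here is a minimal corrected sketch of the snippet from the question; the only change is passing the capacity in characters, and the truncation check is an optional addition based on the ERROR_INSUFFICIENT_BUFFER behavior quoted above:

#include <windows.h>

int main()
{
    HMODULE applicationModule = GetModuleHandleW(NULL);
    WCHAR processName[MAX_PATH] = { 0 };

    // Pass the capacity in characters (260), not in bytes (520):
    DWORD length = GetModuleFileNameW(applicationModule, processName,
                                      sizeof(processName) / sizeof(processName[0]));

    // Per the documentation quoted above, truncation is signalled by a return
    // value equal to nSize together with ERROR_INSUFFICIENT_BUFFER as the last error.
    if (length == sizeof(processName) / sizeof(processName[0]) &&
        GetLastError() == ERROR_INSUFFICIENT_BUFFER)
    {
        // Truncated: retry with a larger buffer if the full path is needed.
    }
    return 0;
}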
Related
The value 1,921,222 is too large to be stored in a short, so numeric overflow occurs and it becomes 20,678.
Can anyone demonstrate the process of 1,921,222 becoming 20,678?
How does it "wrap around" to the next lowest value and count up from there to get 20,678?
Thank you in advance
In C, the short type occupies 2 bytes. Every integer literal is treated by the compiler as a 32-bit (4-byte) int (this can vary depending on the compiler).
short s = 1921222;
In this statement you are losing 2 bytes of data:

                  Information that remains in the variable (2 bytes)
                  v        v
00000000 00011101 01010000 11000110 -> total data (4 bytes, 32 bits)
^        ^
Information discarded when you put this value into a short type.
In other words, you "crop" the data, leaving only the part that fits in the specified type:
01010000 11000110
"01010000 11000110" is 20678.
This site can help you to understand better how this process works:
https://hexed.it/
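A small C++ demonstration of the same truncation (note that converting an out-of-range value to a signed type was implementation-defined before C++20, but on ordinary two's-complement machines it keeps exactly the low 16 bits):

#include <cstdio>

int main()
{
    int value = 1921222;                 // 0x001D50C6
    short s = (short)value;              // keeps only the low 16 bits: 0x50C6

    std::printf("%d\n", s);              // prints 20678
    std::printf("%d\n", value & 0xFFFF); // the low 16 bits directly: 20678
}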
The documentation for the OUTPUT_DEBUG_STRING_INFO structure doesn't explain how to determine the length (or size) of the string value it points to. Specifically, the documentation for nDebugStringLength is confusing:
The lower 16 bits of the length of the string in bytes. As nDebugStringLength is of type WORD, this does not always contain the full length of the string in bytes.
For example, if the original output string is longer than 65536 bytes, this field will contain a value that is less than the actual string length in bytes.
As I understand it, the true size can be any value that's a solution to the equation:
size = nDebugStringLength + (n * 65536)
for some n in [0..65536).
Question:
How do I determine the correct size of the string? Unless I'm overlooking something, the documentation appears to be insufficient in this regard.
Initially, the debug event arrives in the form of a DBGUI_WAIT_STATE_CHANGE structure. If you use the WaitForDebugEvent[Ex] API, it internally converts the DBGUI_WAIT_STATE_CHANGE to a DEBUG_EVENT using DbgUiConvertStateChangeStructure[Ex].
A DbgExceptionStateChange event (in NewState) with DBG_PRINTEXCEPTION_WIDE_C or DBG_PRINTEXCEPTION_C (in ExceptionCode) is converted to OUTPUT_DEBUG_STRING_INFO. The nDebugStringLength value is taken from Exception.ExceptionRecord.ExceptionInformation[0], or from ExceptionInformation[3] (in the case of DBG_PRINTEXCEPTION_C and the API version without Ex). But because nDebugStringLength is only 16 bits wide, a 32/64-bit original value is truncated: only the low 16 bits of ExceptionInformation[0] (or [3]) are used.
Note that ExceptionInformation[0] (and [3] in the case of DBG_PRINTEXCEPTION_WIDE_C) contains the string length in characters, including the terminating zero, whereas nDebugStringLength is in bytes. So if we use WaitForDebugEventEx and get a DBG_PRINTEXCEPTION_WIDE_C exception, nDebugStringLength = (WORD)(ExceptionInformation[0] * sizeof(WCHAR)).
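For illustration, a hedged sketch of reading the string announced by an OUTPUT_DEBUG_STRING_EVENT; hProcess (the debuggee handle) and dbgEvent (filled in by WaitForDebugEventEx) are assumptions standing in for a surrounding debug loop. It can only recover the truncated length: the true size in bytes is nDebugStringLength + n * 65536 for some unknown n >= 0.

#include <windows.h>
#include <vector>

// Sketch only: hProcess and dbgEvent are assumed to come from a debug loop
// (CreateProcess with DEBUG_PROCESS, then WaitForDebugEventEx).
void OnOutputDebugString(HANDLE hProcess, const DEBUG_EVENT& dbgEvent)
{
    const OUTPUT_DEBUG_STRING_INFO& info = dbgEvent.u.DebugString;
    if (info.nDebugStringLength == 0)
        return;

    // nDebugStringLength is in bytes but holds only the low 16 bits of the
    // real length, so a string longer than 64 KB cannot be read in full here.
    std::vector<char> buffer(info.nDebugStringLength);
    SIZE_T bytesRead = 0;
    if (!ReadProcessMemory(hProcess, info.lpDebugStringData,
                           buffer.data(), buffer.size(), &bytesRead))
        return;

    if (info.fUnicode)
    {
        // WaitForDebugEventEx path (DBG_PRINTEXCEPTION_WIDE_C): the buffer
        // holds WCHARs; the character count is bytesRead / sizeof(WCHAR),
        // including the terminating zero.
        const WCHAR* text = reinterpret_cast<const WCHAR*>(buffer.data());
        (void)text; // ... use text here
    }
    else
    {
        // ANSI path (DBG_PRINTEXCEPTION_C): bytesRead characters incl. the zero.
        const char* text = buffer.data();
        (void)text; // ... use text here
    }
}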
I have this line in Fortran and I'm getting the compiler error in the title. dFeV is a 1-d array of reals.
dFeV(x)=R1*5**(15) * (a**2) * EXP(-(VmigFe)/kbt)
For the record, the variable names are inherited and not my fault. I think this is an issue with not having the memory space to compute the value on the right before storing it on the left as a real (which would have enough room), but I don't know how to allocate more space for that computation.
The problem arises because one part of your computation is done using integer arithmetic of type integer(4).
That type has an upper limit of 2^31-1 = 2147483647, whereas your intermediate result 5^15 = 30517578125 is considerably larger (thanks to @evets' comment).
As pointed out in your question, you save the result in a real variable.
Therefore, you could just compute that exponentiation using real data types: 5.0**15.
Your formula will then look like the following:
dFeV(x)= R1 * (5.0**15) * (a**2) * exp(-(VmigFe)/kbt)
Note that integer(4) need not have the same implementation on every processor (thanks @IanBush), which just means that on some specific machines the upper limit might differ from 2^31-1 = 2147483647.
As indicated in the comment, the value of 5**15 exceeds the range of 4-byte signed integers, which are the typical default integer type. So you need to instruct the compiler to use a larger type for these constants. This program example shows one method. The ISO_FORTRAN_ENV module provides the int64 type. UPDATE: corrected to what I meant, as pointed out in comments.
program test_program
   use ISO_FORTRAN_ENV
   implicit none
   integer (int64) :: i
   i = 5_int64 ** 15_int64
   write (*, *) i
end program
Although there does seem to be an additional point here that may be specific to gfortran:
integer(kind = 8) :: result
result = 5**15
print *, result
gives: Error: Result of exponentiation at (1) exceeds the range of INTEGER(4)
while
integer(kind = 8) :: result
result = 5**7 * 5**8
print *, result
gives: 30517578125
i.e. the exponentiation function seems to have an integer(4) limit even if the variable to which the answer is being assigned has a larger capacity.
According to https://blog.golang.org/strings and my testing, it looks like when we range over a string, the characters we get are of type rune, but if we access one with str[index], it is of type byte. Why is that?
To the first level, the why is because that's how the language is defined. The String types section of the spec tells us that:
A string value is a (possibly empty) sequence of bytes. The number of bytes is called the length of the string and is never negative. Strings are immutable: once created, it is impossible to change the contents of a string.
and:
A string's bytes can be accessed by integer indices 0 through len(s)-1.
Meanwhile, range is a clause you can insert into a for statement, and the specification says:
The expression on the right in the "range" clause is called the range expression, which may be ... [a] string ...
and:
For a string value, the "range" clause iterates over the Unicode code points in the string starting at byte index 0. On successive iterations, the index value will be the index of the first byte of successive UTF-8-encoded code points in the string, and the second value, of type rune, will be the value of the corresponding code point. If the iteration encounters an invalid UTF-8 sequence, the second value will be 0xFFFD, the Unicode replacement character, and the next iteration will advance a single byte in the string.
If you want to know why the language is defined that way, you really have to ask the definers themselves. However, note that if for ranged only over the bytes, you'd need to construct your own fancier loops to range over the runes. Given that for ... range does work through the runes, if you want to work through the bytes in string s instead, you can write:
for i := 0; i < len(s); i++ {
...
}
and easily access s[i] inside the loop. You can also write:
for i, b := range []byte(s) {
}
and access both index i and byte b inside the loop. (Conversion from string to []byte, or vice versa, can require a copy since []byte can be modified. In this case, though, the range does not modify it and the compiler can optimize away the copy. See icza's comment below or this answer to golang: []byte(string) vs []byte(*string).) So you have not lost any ability, just perhaps a smidgen of concision.
Just a quick and simple answer on why the language is defined this way.
Think about what a rune is. A rune represents a Unicode code point, which can be composed of multiple bytes and can also have different representations depending on the encoding.
Now think what doing mystring[i] would mean if that returned a rune and not a byte. Since you cannot know the length of each rune without scanning the string, that operation would require scanning the whole string every single time, thus making array-like access take O(n) instead of O(1).
It would be very counter-intuitive for the users of the language if mystring[i] scanned the whole string every time, and also more complex for the language developers. This is why most programming languages (like Go, Rust, Python) differentiate between Unicode characters and bytes, and sometimes only support indexing on bytes.
Accessing a string one rune at a time is instead much simpler when iterating from the beginning of the string, for example using range: consecutive bytes can be scanned and grouped together until they form a valid Unicode character that can be returned as a rune, moving on to the next one.
Just letting you know: if you want to iterate over a string with a classic for loop and use the [] operator to get each rune, you can do:
{
    rstr := []rune(MyString)
    for idx := 0; idx < len(rstr); idx++ {
        // code before...
        currentRune := rstr[idx]
        _ = currentRune // to avoid unused error
        // code after...
    }
}
Maybe it's stupid or obvious, but I couldn't google any answer. What character ends a null-terminated string in C++11? NULL (which is in fact 0) or the new nullptr? On the one hand, nullptr is supposed to replace NULL. On the other, though, I'm not sure whether nullptr is a character at all, or can be interpreted as one.
NULL and nullptr have little to do with null-terminated strings. Both NULL and nullptr are used to denote a pointer which points to nothing, i.e. null.
The null termination of C-style strings is still (and has always been) denoted by a CharT having the integral value 0; or, as it's most often written, through a character literal: '\0'.
Remember that character types are nothing more than integral types with some special meaning.
Comparing a char to an int (which is the type of the literal 0) is allowed, and it's also allowed to assign the value 0 to a char; as stated, a character type is an integral type, and integral types hold integral values.
Why this confusion?
Back in the days when we didn't have nullptr, we instead had the macro NULL to denote that a certain pointer didn't have anything to point towards. The value of NULL is, and was, implementation-specific, but the behaviour was well-defined: it shall not compare equal to any pointer value that is actually pointing to something.
As a result of how the behaviour of NULL was described, plenty of compilers used #define NULL 0, or a similar construct, resulting in a "feature" where one could easily compare NULL to any integral type (including char) to test its relation to the value zero.
With the previously stated in mind, you'd often stumble upon code such as the below, where the for-condition is equivalent to *ptr != 0.
char const * str = "hello world";
for (char const * ptr = str; *ptr != NULL; ++ptr) {
...
}
Lesson learned: Just because something works doesn't mean that it is correct...
NULL and nullptr are completely separate concepts from the "null terminator". They have nothing more in common than the word "null". The null terminator is a character with value 0. It has nothing to do with null pointers.
You can use 0 or '\0' etc.
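To make that concrete, a tiny self-contained example (a sketch, nothing more): the terminator is just a char whose integral value is 0, so testing against 0 and against '\0' is the same test:

#include <cstdio>
#include <cstring>

int main()
{
    const char* s = "hi"; // three chars in memory: 'h', 'i', '\0'

    // The terminator is a char with integral value 0; both tests are identical:
    std::printf("%d %d\n", s[2] == '\0', s[2] == 0); // prints: 1 1

    // Scanning for the terminator by hand, equivalent to strlen:
    std::size_t n = 0;
    while (s[n] != '\0')
        ++n;
    std::printf("%zu %zu\n", n, std::strlen(s));     // prints: 2 2
}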