I am confused about lambdas in C++. Is this behavior related to the compiler?
The following code runs correctly on Ubuntu with g++ 4.6.3 and g++ 5.2, but when I run it on CentOS (g++ 4.8.5) the result is an error.
class ScopeGuard
{
public:
    explicit ScopeGuard(std::function<void ()> onExitScope)
        : onExitScope_(onExitScope)
    {
    }
    ~ScopeGuard()
    {
        onExitScope_();
    }
private:
    std::function<void ()> onExitScope_;
};
And there is a function to uncompress the data.
...
int dstLen = 10 * 1024 * 1024;
char *dstBuf = new char[dstLen];
// When I comment out this line, err is Z_OK; otherwise it is Z_BUF_ERROR.
ScopeGuard guard([&](){ if (dstBuf) delete[] dstBuf; dstBuf = NULL; });
// zlib function.
int err = uncompress((Bytef *)dstBuf, (uLongf*)&dstLen, (Bytef*)src, fileLen);
if (err != Z_OK)
{
    cout << "uncompress error..." << err << endl;
    return false;
}
This is most likely because of this: (uLongf*)&dstLen.
dstLen is an int, which is 32 bits on all current typical systems. uLongf, however, is an alias for unsigned long, which is 32 bits on Windows and on 32-bit *nix systems, but 64 bits on 64-bit *nix systems.
Casting an int* to a uLongf* is not safe, and is likely to do the wrong thing.
The solution is to make dstLen a uLongf and remove the cast.
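For illustration, a minimal sketch of the corrected call (reusing src, fileLen and the ScopeGuard class from the question):

// dstLen must have zlib's uLongf type so uncompress() can write the
// decompressed size back through the pointer without stepping past a
// too-small int on LP64 systems.
uLongf dstLen = 10 * 1024 * 1024;
char *dstBuf = new char[dstLen];
ScopeGuard guard([&]() { delete[] dstBuf; dstBuf = NULL; });
int err = uncompress((Bytef *)dstBuf, &dstLen, (const Bytef *)src, fileLen);
if (err != Z_OK)
{
    cout << "uncompress error..." << err << endl;
    return false;
}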
I know this is not an adequate Stack Overflow question, but ...
This is a function in scripts/dtc/libfdt/fdt_ro.c of u-boot v2021.10.
const void *fdt_getprop_namelen(const void *fdt, int nodeoffset,
                                const char *name, int namelen, int *lenp)
{
    int poffset;
    const struct fdt_property *prop;

    printf("uuu0 nodeoffset = 0x%x, name = %s, namelen = %d\n", nodeoffset, name, namelen);
    prop = fdt_get_property_namelen_(fdt, nodeoffset, name, namelen, lenp,
                                     &poffset);
    //printf("uuu1 prop = 0x%lx, *lenp = 0x%x, poffset = 0x%x\n", prop, *lenp, poffset);
    if (!prop)
        return NULL;

    /* Handle realignment */
    if (fdt_chk_version() && fdt_version(fdt) < 0x10 &&
        (poffset + sizeof(*prop)) % 8 && fdt32_to_cpu(prop->len) >= 8)
        return prop->data + 4;
    return prop->data;
}
When I build the program with the second printf uncommented, the compiler segfaults.
I have no idea why. Is it purely a compiler problem (I think so; at the very least it should never crash)? Or could it be linked to a mistake of mine somewhere else in the code? Is there any way to find out the cause of the segfault? (Probably not.)
If you're getting a segmentation fault when running the compiler itself, the compiler has a bug. There are some errors in your code, but those should result in compile-time diagnostics (warnings or error messages), never a compile-time crash.
The code in your question is incomplete (missing declarations for fdt_get_property_namelen_, printf, NULL, etc.). Reproduce the problem with a complete self-contained source file and submit a bug report: https://gcc.gnu.org/bugzilla/
printf("uuu1 prop = 0x%lx, *lenp = 0x%x, poffset = 0x%x\n", prop, *lenp, poffset);
prop is a pointer, so I'd use %p instead of %lx.
lenp is a pointer, so I'd make sure it points to valid memory before dereferencing it; see the corrected line below.
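For illustration, a corrected version of the debug line might look like this (a sketch to drop into fdt_getprop_namelen; it prints the pointer with %p and only dereferences lenp when the caller supplied one, since libfdt allows passing NULL for the length):

    /* %p prints the pointer portably; dereference lenp only if it was given. */
    if (lenp)
        printf("uuu1 prop = %p, *lenp = 0x%x, poffset = 0x%x\n",
               (const void *)prop, *lenp, poffset);
    else
        printf("uuu1 prop = %p, poffset = 0x%x\n",
               (const void *)prop, poffset);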
I started a bare-metal (Cortex-M) project some years ago. At project setup we decided to use the gcc toolchain with C++11 / C++14 etc. enabled, and even to use C++ exceptions and RTTI.
We are currently using gcc 4.9 from launchpad.net/gcc-arm-embedded (an issue currently prevents us from updating to a more recent gcc version).
For example, I wrote a base class and a derived class like this (see also the running example here):
class OutStream {
public:
    explicit OutStream() {}
    virtual ~OutStream() {}
    OutStream& operator << (const char* s) {
        write(s, strlen(s));
        return *this;
    }
    virtual void write(const void* buffer, size_t size) = 0;
};
class FixedMemoryStream: public OutStream {
public:
    explicit FixedMemoryStream(void* memBuffer, size_t memBufferSize): memBuffer(memBuffer), memBufferSize(memBufferSize) {}
    virtual ~FixedMemoryStream() {}
    const void* getBuffer() const { return memBuffer; }
    size_t getBufferSize() const { return memBufferSize; }
    const char* getText() const { return reinterpret_cast<const char*>(memBuffer); } ///< returns content as zero terminated C-string
    size_t getSize() const { return index; } ///< number of bytes really written to the buffer (max = buffersize-1)
    bool isOverflow() const { return overflow; }
    virtual void write(const void* buffer, size_t size) override { /* ... */ }
private:
    void* memBuffer = nullptr;   ///< buffer
    size_t memBufferSize = 0;    ///< buffer size
    size_t index = 0;            ///< current write index
    bool overflow = false;       ///< flag if we are overflown
};
The customers of my class are now able to write, e.g.:
char buffer[10];
FixedMemoryStream ms1(buffer, sizeof(buffer));
ms1 << "Hello World";
Now I wanted to make the usage of the class a bit more comfortable, and introduced the following template:
template<size_t bufferSize> class FixedMemoryStreamWithBuffer: public FixedMemoryStream {
public:
    explicit FixedMemoryStreamWithBuffer(): FixedMemoryStream(buffer, bufferSize) {}
private:
    uint8_t buffer[bufferSize];
};
From now on, my customers can write:
FixedMemoryStreamWithBuffer<10> ms2;
ms2 << "Hello World";
But since then, I have observed an increasing size of my executable binary. It seems that gcc adds symbol information for each distinct template instantiation of FixedMemoryStreamWithBuffer (because we are using RTTI for some reason).
Might there be a way to get rid of symbol information only for some specific classes / templates / template instantiations?
It's OK to get a non-portable, gcc-only solution for this.
For some reason we decided to prefer templates over preprocessor macros, so I want to avoid a preprocessor solution.
First of all, keep in mind that the compiler also generates a separate v-table (as well as RTTI information) for every FixedMemoryStreamWithBuffer<> instantiation, and for every class in the inheritance chain.
In order to resolve the problem I'd recommend using containment instead of inheritance, with a conversion function and/or operator inside:
template<size_t bufferSize>
class FixedMemoryStreamWithBuffer
{
    uint8_t buffer[bufferSize];
    FixedMemoryStream m_stream;
public:
    explicit FixedMemoryStreamWithBuffer() : m_stream(buffer, bufferSize) {}
    operator FixedMemoryStream&() { return m_stream; }
    FixedMemoryStream& toStream() { return m_stream; }
};
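With that sketch, usage would look roughly like this, either through the implicit conversion or the toStream() accessor defined above:

FixedMemoryStreamWithBuffer<10> ms2;
ms2.toStream() << "Hello World";   // or: static_cast<FixedMemoryStream&>(ms2) << "Hello World";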
Yes, there's a way to bring the necessary symbols almost down to zero: use the standard library. Your OutStream class is a simplified version of std::basic_ostream, and your OutStream::write is really just std::basic_ostream::write, and so on. Take a look at it here. Overflow is handled in much the same way, though for completeness' sake the standard class also deals with underflow, i.e. the need for data retrieval; you may leave that undefined (it's virtual too).
Similarly, your FixedMemoryStream is a std::basic_streambuf<T> with a fixed-size (a std::array<T>) get/put area.
So just make your classes inherit from the standard ones and you'll cut down on binary size, since you're reusing already declared symbols.
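As an illustration only, a minimal sketch in that direction (the names here are made up, it assumes a caller-provided fixed buffer, and it reuses std::streambuf's put-area bookkeeping instead of hand-rolled overflow tracking):

#include <cstddef>
#include <ostream>
#include <streambuf>

// A streambuf that writes into caller-provided fixed memory. The default
// overflow() returns eof, so writes past the end simply fail and set the
// stream's error state.
class FixedBuf : public std::streambuf {
public:
    FixedBuf(char* mem, std::size_t size) { setp(mem, mem + size); }
    std::size_t written() const { return static_cast<std::size_t>(pptr() - pbase()); }
};

// Usage:
//   char storage[10];
//   FixedBuf buf(storage, sizeof(storage));
//   std::ostream os(&buf);
//   os << "Hello World";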
Now, regarding template<size_t bufferSize> class FixedMemoryStreamWithBuffer: this class is very similar to std::array<std::uint8_t, bufferSize> in the way memory is specified and acquired. You can't optimize much there: each instantiation is a distinct type, with all that that implies. The compiler cannot "merge" them or do anything magic about them: each instantiation must have its own type.
So either fall back on std::vector, or have some fixed-size specialized chunks, like 32, 128 etc., and for any value in between choose the next larger one; this can be achieved entirely at compile time, so there is no runtime cost, as the sketch below shows.
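A minimal sketch of that compile-time bucketing, reusing FixedMemoryStreamWithBuffer from the question (the bucket sizes here are arbitrary):

#include <cstddef>

// Round a requested size up to one of a few fixed buckets at compile time,
// so e.g. every request between 33 and 128 bytes shares the
// FixedMemoryStreamWithBuffer<128> instantiation and its symbols.
constexpr std::size_t bucketSize(std::size_t requested) {
    return requested <= 32  ? 32
         : requested <= 128 ? 128
         : requested <= 512 ? 512
         : 4096;
}

template<std::size_t requested>
using SizedStream = FixedMemoryStreamWithBuffer<bucketSize(requested)>;

// SizedStream<10> and SizedStream<20> are now the same type, so only one
// instantiation (and one set of RTTI / v-table symbols) is emitted.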
I am running out of ideas here. I have a piece of code adapted from http://thetechnofreak.com/technofreak/keylogger-visual-c/ to convert keycodes to Unicode chars. It works fine in all situations except when you try to run the 32-bit version on 64-bit Windows. For some reason pKbd->pVkToWcharTable keeps returning NULL.
I have tried __ptr64 as well as explicitly specifying the SysWOW64 and System32 paths for the kbd DLL. I have found several items across the internet referring to this exact or a very similar problem, but I cannot seem to get any of the solutions to work (see: KbdLayerDescriptor returns NULL at 64bit architecture).
The following is my test code, which was compiled with mingw-32 on Windows XP (gcc -std=c99 Wow64Test.c) and then executed on 64-bit Windows 7. On Windows XP I get a valid pointer, but on Windows 7 I get NULL.
Update: It looks like the problem I am having is due to MinGW not implementing __ptr64 correctly, as the sizeof operation gives 4 bytes instead of the 8 bytes given by Visual Studio. So the real solution would be figuring out a way to make the size of KBD_LONG_POINTER dynamic, or at least 64 bits, but I am not sure if that's possible. Any ideas?
#include <windows.h>
#include <stdio.h>
#define KBD_LONG_POINTER __ptr64
//#define KBD_LONG_POINTER
typedef struct {
    BYTE Vk;
    BYTE ModBits;
} VK_TO_BIT, *KBD_LONG_POINTER PVK_TO_BIT;

typedef struct {
    PVK_TO_BIT pVkToBit;
    WORD wMaxModBits;
    BYTE ModNumber[];
} MODIFIERS, *KBD_LONG_POINTER PMODIFIERS;

typedef struct _VK_TO_WCHARS1 {
    BYTE VirtualKey;
    BYTE Attributes;
    WCHAR wch[1];
} VK_TO_WCHARS1, *KBD_LONG_POINTER PVK_TO_WCHARS1;

typedef struct _VK_TO_WCHAR_TABLE {
    PVK_TO_WCHARS1 pVkToWchars;
    BYTE nModifications;
    BYTE cbSize;
} VK_TO_WCHAR_TABLE, *KBD_LONG_POINTER PVK_TO_WCHAR_TABLE;

typedef struct {
    DWORD dwBoth;
    WCHAR wchComposed;
    USHORT uFlags;
} DEADKEY, *KBD_LONG_POINTER PDEADKEY;

typedef struct {
    BYTE vsc;
    WCHAR *KBD_LONG_POINTER pwsz;
} VSC_LPWSTR, *KBD_LONG_POINTER PVSC_LPWSTR;

typedef struct _VSC_VK {
    BYTE Vsc;
    USHORT Vk;
} VSC_VK, *KBD_LONG_POINTER PVSC_VK;

typedef struct _LIGATURE1 {
    BYTE VirtualKey;
    WORD ModificationNumber;
    WCHAR wch[1];
} LIGATURE1, *KBD_LONG_POINTER PLIGATURE1;

typedef struct tagKbdLayer {
    PMODIFIERS pCharModifiers;
    PVK_TO_WCHAR_TABLE pVkToWcharTable;
    PDEADKEY pDeadKey;
    PVSC_LPWSTR pKeyNames;
    PVSC_LPWSTR pKeyNamesExt;
    WCHAR *KBD_LONG_POINTER *KBD_LONG_POINTER pKeyNamesDead;
    USHORT *KBD_LONG_POINTER pusVSCtoVK;
    BYTE bMaxVSCtoVK;
    PVSC_VK pVSCtoVK_E0;
    PVSC_VK pVSCtoVK_E1;
    DWORD fLocaleFlags;
    BYTE nLgMax;
    BYTE cbLgEntry;
    PLIGATURE1 pLigature;
    DWORD dwType;
    DWORD dwSubType;
} KBDTABLES, *KBD_LONG_POINTER PKBDTABLES;

typedef PKBDTABLES(CALLBACK *KbdLayerDescriptor) (VOID);
int main() {
    PKBDTABLES pKbd;
    HINSTANCE kbdLibrary = NULL;
    //"C:\\WINDOWS\\SysWOW64\\KBDUS.DLL"
    //"C:\\WINDOWS\\System32\\KBDUS.DLL"
    kbdLibrary = LoadLibrary("C:\\WINDOWS\\SysWOW64\\KBDUS.DLL");
    KbdLayerDescriptor pKbdLayerDescriptor = (KbdLayerDescriptor) GetProcAddress(kbdLibrary, "KbdLayerDescriptor");
    if(pKbdLayerDescriptor != NULL) {
        pKbd = pKbdLayerDescriptor();
        printf("Is Null? %d 0x%X\n", sizeof(pKbd->pVkToWcharTable), pKbd->pVkToWcharTable);
    }
    FreeLibrary(kbdLibrary);
    kbdLibrary = NULL;
}
It might be too late for you, but here is a solution for anyone having the same problem. This demo and its (incomplete) explanation help, but only work in Visual Studio:
http://www.codeproject.com/Articles/439275/Loading-keyboard-layout-KbdLayerDescriptor-in-32-6
The pointers in the structures in kbd.h all carry the KBD_LONG_POINTER macro, which is defined as __ptr64 on 64-bit operating systems. In Visual Studio, this makes the pointers take up 8 bytes instead of the usual 4 of 32-bit programs. Unfortunately, in MinGW __ptr64 is defined to do nothing.
As written in the linked explanation, the KbdLayerDescriptor function returns pointers differently on 32-bit and 64-bit Windows. The size of the pointers seems to depend on the operating system, not on the running program. Actually, the pointers are still 4 bytes in a 32-bit program on a 64-bit operating system, but in VS the __ptr64 keyword makes them look as if they were not.
For example some structures look like this in kbd.h:
typedef struct {
BYTE Vk;
BYTE ModBits;
} VK_TO_BIT, *KBD_LONG_POINTER PVK_TO_BIT;
typedef struct {
PVK_TO_BIT pVkToBit;
WORD wMaxModBits;
BYTE ModNumber[];
} MODIFIERS, *KBD_LONG_POINTER PMODIFIERS;
This can't work in either MinGW or VS for 32-bit programs on 64-bit Windows, because the pVkToBit member in MODIFIERS is only 4 bytes without __ptr64. The solution is to forget about KBD_LONG_POINTER (you could even remove it everywhere) and define structures similar to the above, i.e.:
struct VK_TO_BIT64
{
    BYTE Vk;
    BYTE ModBits;
};

struct MODIFIERS64
{
    VK_TO_BIT64 *pVkToBit;
    int _align1;
    WORD wMaxModBits;
    BYTE ModNumber[];
};
(You could use VK_TO_BIT and not define your own VK_TO_BIT64, as they are the same, but having separate definitions helps in understanding what's going on.)
The member pVkToBit still takes up 4 bytes, but on a 64-bit OS the layout KbdLayerDescriptor returns uses 8 bytes per pointer, so we have to insert some padding (int _align1).
You have to do the same thing for the other structures in kbd.h. For example this will replace KBDTABLES:
struct WCHARARRAY64
{
    WCHAR *str;
    int _align1;
};

struct KBDTABLES64
{
    MODIFIERS64 *pCharModifiers;
    int _align1;
    VK_TO_WCHAR_TABLE64 *pVkToWcharTable;
    int _align2;
    DEADKEY64 *pDeadKey;
    int _align3;
    VSC_LPWSTR64 *pKeyNames;
    int _align4;
    VSC_LPWSTR64 *pKeyNamesExt;
    int _align5;
    WCHARARRAY64 *pKeyNamesDead;
    int _align6;
    USHORT *pusVSCtoVK;
    int _align7;
    BYTE bMaxVSCtoVK;
    int _align8;
    VSC_VK64 *pVSCtoVK_E0;
    int _align9;
    VSC_VK64 *pVSCtoVK_E1;
    int _align10;
    DWORD fLocaleFlags;
    BYTE nLgMax;
    BYTE cbLgEntry;
    LIGATURE64_1 *pLigature;
    int _align11;
    DWORD dwType;
    DWORD dwSubType;
};
(Notice that the _align8 member does not come after a pointer.)
To use all of this, you have to check whether you are running on 64-bit Windows, with this: http://msdn.microsoft.com/en-us/library/ms684139%28v=vs.85%29.aspx
If not, use the original structures from kbd.h, because the pointers behave correctly there: they take up 4 bytes. If the program is running on a 64-bit OS, use the structures you created. You can achieve this with:
typedef __int64 (CALLBACK *LayerDescriptor64)(); // Result should be cast to KBDTABLES64.
typedef PKBDTABLES (CALLBACK *LayerDescriptor)(); // This is used on 32 bit OS.
static PKBDTABLES kbdtables = NULL;
static KBDTABLES64 *kbdtables64 = NULL;
And in some initialization function:
if (WindowsIs64Bit()) // Your function that checks the OS version.
{
    LayerDescriptor64 KbdLayerDescriptor = (LayerDescriptor64)GetProcAddress(kbdLibrary, "KbdLayerDescriptor");
    if (KbdLayerDescriptor != NULL)
        kbdtables64 = (KBDTABLES64*)KbdLayerDescriptor();
    else
        kbdtables64 = NULL;
}
else
{
    LayerDescriptor KbdLayerDescriptor = (LayerDescriptor)GetProcAddress(kbdLibrary, "KbdLayerDescriptor");
    if (KbdLayerDescriptor != NULL)
        kbdtables = KbdLayerDescriptor();
    else
        kbdtables = NULL;
}
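For completeness, WindowsIs64Bit() above is not spelled out in this answer; one possible sketch for a 32-bit process (assuming windows.h is already included, as in the question's test program) is a small wrapper around IsWow64Process:

// Returns nonzero when a 32-bit build is running under WOW64, i.e. on
// 64-bit Windows. IsWow64Process is resolved dynamically because very old
// Windows versions do not export it.
static BOOL WindowsIs64Bit(void)
{
    typedef BOOL (WINAPI *IsWow64ProcessFn)(HANDLE, PBOOL);
    BOOL isWow64 = FALSE;
    IsWow64ProcessFn fn = (IsWow64ProcessFn)GetProcAddress(
        GetModuleHandle(TEXT("kernel32")), "IsWow64Process");
    if (fn != NULL)
        fn(GetCurrentProcess(), &isWow64);
    return isWow64;
}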
This solution does not use __ptr64 at all, and works both in VS and MinGW. The things you have to watch out for are:
The structures should be aligned on 8 byte boundaries. (This is the default in current VS or MinGW, at least for C++.)
Don't define KBD_LONG_POINTER to __ptr64, or remove it from everywhere. Although you are better off not changing kbd.h.
Understand how alignment of structure members works. (I compiled this as C++, not C; I'm not sure whether the alignment rules would be any different for C.) A compile-time check like the one sketched after this list can help catch layout mistakes.
Use the correct variable (either kbdtables or kbdtables64) depending on the OS.
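As a hedged illustration of the alignment point, and assuming the 64-bit structures above have been completed for the 32-bit build described here, a compile-time check can verify the layout (offset 8 assumes one 4-byte pointer plus one int of padding before pVkToWcharTable):

#include <cstddef>

// If a padding member is missing, every following pointer shifts by 4 bytes
// and this fails at compile time instead of misreading the tables at run time.
static_assert(offsetof(KBDTABLES64, pVkToWcharTable) == 8,
              "KBDTABLES64 layout does not match the 64-bit kbd DLL");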
This solution is obviously not needed when compiling a 64 bit program.
I have been looking for some time but have not found anywhere near sufficient documentation or examples on how to use the crypto API that comes with Linux when writing syscalls / in kernel land.
If anyone knows of a good source, please let me know. I would like to know how to do SHA1 / MD5 and Blowfish / AES within kernel space only.
#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/module.h>
#include <linux/crypto.h>
#include <linux/err.h>
#include <linux/scatterlist.h>
#define SHA1_LENGTH 20
static int __init sha1_init(void)
{
    struct scatterlist sg;
    struct crypto_hash *tfm;
    struct hash_desc desc;
    unsigned char output[SHA1_LENGTH];
    unsigned char buf[10];
    int i;

    printk(KERN_INFO "sha1: %s\n", __FUNCTION__);

    memset(buf, 'A', 10);
    memset(output, 0x00, SHA1_LENGTH);

    tfm = crypto_alloc_hash("sha1", 0, CRYPTO_ALG_ASYNC);

    desc.tfm = tfm;
    desc.flags = 0;

    sg_init_one(&sg, buf, 10);
    crypto_hash_init(&desc);
    crypto_hash_update(&desc, &sg, 10);
    crypto_hash_final(&desc, output);

    for (i = 0; i < 20; i++) {
        printk(KERN_ERR "%d-%d\n", output[i], i);
    }

    crypto_free_hash(tfm);
    return 0;
}

static void __exit sha1_exit(void)
{
    printk(KERN_INFO "sha1: %s\n", __FUNCTION__);
}
module_init(sha1_init);
module_exit(sha1_exit);
MODULE_LICENSE("Dual MIT/GPL");
MODULE_AUTHOR("Me");
There are a couple of places in the kernel which use the crypto module: the eCryptfs file system (linux/fs/ecryptfs/) and the 802.11 wireless stack (linux/drivers/staging/rtl8187se/ieee80211/). Both of these use AES, but you may be able to extrapolate what you find there to MD5.
Another good example is from the 2.6.18 kernel source in security/seclvl.c
Note: You can change CRYPTO_TFM_REQ_MAY_SLEEP if needed
static int
plaintext_to_sha1(unsigned char *hash, const char *plaintext, unsigned int len)
{
    struct crypto_tfm *tfm;
    struct scatterlist sg;

    if (len > PAGE_SIZE) {
        seclvl_printk(0, KERN_ERR, "Plaintext password too large (%d "
                      "characters). Largest possible is %lu "
                      "bytes.\n", len, PAGE_SIZE);
        return -EINVAL;
    }
    tfm = crypto_alloc_tfm("sha1", CRYPTO_TFM_REQ_MAY_SLEEP);
    if (tfm == NULL) {
        seclvl_printk(0, KERN_ERR,
                      "Failed to load transform for SHA1\n");
        return -EINVAL;
    }
    sg_init_one(&sg, (u8 *)plaintext, len);
    crypto_digest_init(tfm);
    crypto_digest_update(tfm, &sg, 1);
    crypto_digest_final(tfm, hash);
    crypto_free_tfm(tfm);
    return 0;
}
Cryptodev-linux
https://github.com/cryptodev-linux/cryptodev-linux
It is a kernel module that exposes the kernel crypto API to userspace through /dev/crypto.
SHA calculation example: https://github.com/cryptodev-linux/cryptodev-linux/blob/da730106c2558c8e0c8e1b1b1812d32ef9574ab7/examples/sha.c
As others have mentioned, the kernel does not seem to expose the crypto API to userspace itself, which is a shame since the kernel can already use native hardware accelerated crypto functions internally.
Crypto operations cryptodev supports: https://github.com/nmav/cryptodev-linux/blob/383922cabeea7dca354415e8c590f8e932f4d7a8/crypto/cryptodev.h
Crypto operations Linux x86 supports: https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/arch/x86/crypto?id=refs/tags/v4.0
The best place to start is Documentation/crypto in the kernel sources. dm-crypt is one of the many components that uses the kernel crypto API, and you can refer to it to get an idea about usage.
how to do SHA1 / MD5 and Blowfish / AES within the kernel space only.
Example of hashing data using a two-element scatterlist:
struct crypto_hash *tfm = crypto_alloc_hash("sha1", 0, CRYPTO_ALG_ASYNC);
if (tfm == NULL)
    fail;

char *output_buf = kmalloc(crypto_hash_digestsize(tfm), GFP_KERNEL);
if (output_buf == NULL)
    fail;

struct scatterlist sg[2];
struct hash_desc desc = { .tfm = tfm };
int ret;

ret = crypto_hash_init(&desc);
if (ret != 0)
    fail;

sg_init_table(sg, ARRAY_SIZE(sg));
sg_set_buf(&sg[0], "Hello", 5);
sg_set_buf(&sg[1], " World", 6);

ret = crypto_hash_digest(&desc, sg, 11, output_buf);
if (ret != 0)
    fail;
One critical note:
Never compare the return value of crypto_alloc_hash() to NULL to detect failure.
Always use IS_ERR() for this purpose. Comparing against NULL does not catch the error, so you get segmentation faults later on.
If IS_ERR() reports a failure, you are probably missing the crypto algorithm in your kernel image (either built in or as a module). Make sure you have selected the appropriate crypto algorithm in make menuconfig. A sketch of the check follows below.
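For illustration, a minimal sketch of that check inside an init function like the ones above, using the same legacy crypto_hash API (IS_ERR and PTR_ERR come from linux/err.h):

struct crypto_hash *tfm;

tfm = crypto_alloc_hash("sha1", 0, CRYPTO_ALG_ASYNC);
if (IS_ERR(tfm)) {
    /* On failure crypto_alloc_hash() returns ERR_PTR(-errno), not NULL,
     * so a NULL comparison would wrongly report success. */
    printk(KERN_ERR "sha1 allocation failed: %ld\n", PTR_ERR(tfm));
    return PTR_ERR(tfm);
}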
I work on a project written for MSVC / Windows that I have to port to GCC / Linux. The project has its own string class, which stores its data in a QString from Qt. For conversion to wchar_t* there was originally this method (for Windows):
const wchar_t* String::c_str() const
{
    if (length() > 0)
    {
        return (const wchar_t*)QString::unicode();
    }
    else
    {
        return &s_nullString;
    }
}
Because unicode() returns a pointer to QChar (which is 16 bits wide), this worked under Windows, where wchar_t is 16 bits; but with GCC wchar_t is 32 bits wide, so this doesn't work anymore. I've tried to solve that using this:
const wchar_t* String::c_str() const
{
    if ( isEmpty() )
    {
        return &s_nullString;
    }
    else
    {
        return toStdWString().c_str();
    }
}
The problem with this is that the returned std::wstring is a temporary that no longer exists when the function returns, so this doesn't work either.
I think the only ways to solve this issue are to either:
not use String::c_str() and call .toStdWString().c_str() directly, or
make GCC treat wchar_t as a 16-bit type.
Possibility one would mean several hours of needless work for me, and I don't know if possibility two is even possible. My question is: how do I solve this issue best?
I'd appreciate any useful suggestion. Thank you.
In my opinion, there are two ways:
convert the QString to wchar_t* when it is needed (see the sketch below), or
let QString store wchar_t-compatible (UCS-4) characters and return QString::unicode() directly.
These two functions can convert a QString to std::string and std::wstring
QString::toStdWString
QString::toStdString
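If the conversion is done on demand (the first option), the result has to be kept alive past the return. A minimal sketch, assuming a mutable caching member can be added to String (m_wideCache here is purely hypothetical):

const wchar_t* String::c_str() const
{
    if (isEmpty())
        return &s_nullString;

    // Cache the converted string in the object so the pointer returned
    // below stays valid after the call (until the String changes again).
    m_wideCache = toStdWString();   // m_wideCache: 'mutable std::wstring' member (hypothetical)
    return m_wideCache.c_str();
}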
To build QString as UCS-4:
#define QT_QSTRING_UCS_4
#include "qstring.h"
This can be used in Qt 3 (qstring.h). I can't find the equivalent in the Qt 4 sources.