When does json-c's json_object_to_json_string release the memory? - json-c

I am using the json-c library to send a JSON object to a client, and I notice there is no native function to release the memory that json_object_to_json_string allocates. Does the library release it automatically, or do I have to call free(str) to avoid a memory leak?
I tried to read the library's source code, but it just left me more confused. Does anybody know?

It seems that you don't need to free it manually.
I see that this buffer comes from within the json_object (see the last line of this function):
const char* json_object_to_json_string_ext(struct json_object *jso, int flags)
{
    if (!jso)
        return "null";

    if ((!jso->_pb) && !(jso->_pb = printbuf_new()))
        return NULL;

    printbuf_reset(jso->_pb);

    if (jso->_to_json_string(jso, jso->_pb, 0, flags) < 0)
        return NULL;

    return jso->_pb->buf;
}
The delete function frees this buffer:
static void json_object_generic_delete(struct json_object* jso)
{
#ifdef REFCOUNT_DEBUG
    MC_DEBUG("json_object_delete_%s: %p\n",
             json_type_to_name(jso->o_type), jso);
    lh_table_delete(json_object_table, jso);
#endif /* REFCOUNT_DEBUG */
    printbuf_free(jso->_pb);
    free(jso);
}
It is important to understand that this buffer is only valid while the object is valid. When the object's reference count drops to 0, the string is freed along with it, and using the string after that point gives unpredictable results.
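To make the lifetime rule concrete, here is a minimal usage sketch (not from the original post; the key and value are just placeholders):

#include <json-c/json.h>
#include <stdio.h>

int main(void)
{
    struct json_object *obj = json_object_new_object();
    json_object_object_add(obj, "greeting", json_object_new_string("hello"));

    /* The returned string is owned by obj; do not free() it yourself. */
    const char *str = json_object_to_json_string(obj);
    printf("%s\n", str);    /* valid while obj is alive */

    json_object_put(obj);   /* drops the last reference; the string is freed with it */
    /* printf("%s\n", str);    <-- would now be a use-after-free */
    return 0;
}

If you need the text to outlive the object, copy it (e.g. with strdup) before calling json_object_put.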

Related

Copy structure with included user pointers from user space to kernel space (copy_from_user)

I want to transfer a transaction structure, which contains a user-space pointer to an array, to the kernel using copy_from_user.
The goal is to get access to the array elements in kernel space.
User space side:
I allocate an array of _sg_param structures in user space and put the address of this array into a transaction structure (line (*)).
Then I transfer the transaction structure to the kernel via ioctl().
Kernel space side:
On executing this ioctl, the complete transaction structure is copied to kernel space (line (**)). Kernel memory is then allocated to hold the array (line (***)). Then I try to copy the array from user space into the newly allocated kernel memory (line (****)), and here my problems start:
The kernel is corrupted during execution of this copy. dmesg shows following output:
[ 54.443106] Unhandled fault: page domain fault (0x01b) at 0xb6f09738
[ 54.448067] pgd = ee5ec000
[ 54.449465] [b6f09738] *pgd=2e9d7831, *pte=2d56875f, *ppte=2d568c7f
[ 54.454411] Internal error: : 1b [#1] PREEMPT SMP ARM
Any ideas?
Following is a simplified extract of my code:
// structure declaration
typedef struct _sg_param {
    void *seg_buf;
    int seg_len;
    int received;
} sg_param_t;

struct transaction {
    ...
    int num_of_elements;
    sg_param_t *pbuf_list;   // Array of sg_param structures
    ...
} trans;
// user space side:
if ((pParam = (sg_param_t *) malloc(NR_OF_STRUCTS * sizeof(sg_param_t))) == NULL) {
    return -ENOMEM;
}
else {
    trans.num_of_elements = NR_OF_STRUCTS;
    trans.pbuf_list = pParam;   // (*)
}

rc = ioctl(dev->fd, MY_CMD, &trans);
if (rc < 0) {
    return rc;
}
// kernel space side
static long ioctl(struct file *file, unsigned int cmd, unsigned long arg)
{
    arg_ptr = (void __user *)arg;

    // Perform the specified command
    switch (cmd) {
    case MY_CMD:
    {
        struct transaction *__user user_trans;
        user_trans = (struct transaction *__user)arg_ptr;
        if (copy_from_user(&trans, arg_ptr, sizeof(trans)) != 0) {   // (**)
            k_err("Unable to copy transfer info from userspace for "
                  "AXIDMA_DMA_START_DMA.\n");
            return -EFAULT;
        }
        int size = trans.num_of_elements * sizeof(sg_param_t);
        if (trans.pbuf_list != NULL) {
            // Allocate kernel memory for buf_list
            trans.pbuf_list = (sg_param_t *) kmalloc(size, GFP_KERNEL);   // (***)
            if (trans.pbuf_list == NULL) {
                k_err("Unable to allocate array for buffers.\n");
                return -ENOMEM;
            }
            // Now copy pbuf_list from user space to kernel space
            if (copy_from_user(trans.pbuf_list, user_trans->pbuf_list, size) != 0) {   // (****)
                kfree(trans.pbuf_list);
                return -EFAULT;
            }
        }
        break;
    }
    }
You're directly accessing userspace data (user_trans->pbuf_list). You should use the copy that you've already made in the kernel (trans.pbuf_list).
Code for this would normally be something like:
sg_param_t *local_copy = kmalloc(size, ...);
// TODO check it succeeded
if (copy_from_user(local_copy, trans.pbuf_list, size) ...)
trans.pbuf_list = local_copy;
// use trans.pbuf_list
Note that you also need to check that trans.num_of_elements is valid (0 would make kmalloc return ZERO_SIZE_PTR, and too big a value could be a way to DoS the system).
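Putting both points together, the kernel side might look roughly like this (only a sketch, reusing the names from the question; MAX_NR_OF_STRUCTS is a hypothetical upper bound you would define for your driver):

case MY_CMD:
{
    sg_param_t *local_copy;
    size_t size;

    if (copy_from_user(&trans, arg_ptr, sizeof(trans)) != 0)
        return -EFAULT;

    /* Validate the element count before trusting it. */
    if (trans.num_of_elements <= 0 || trans.num_of_elements > MAX_NR_OF_STRUCTS)
        return -EINVAL;

    size = trans.num_of_elements * sizeof(sg_param_t);

    if (trans.pbuf_list != NULL) {
        local_copy = kmalloc(size, GFP_KERNEL);
        if (local_copy == NULL)
            return -ENOMEM;

        /* Copy via the user pointer already held in trans.pbuf_list;
         * never dereference user_trans directly from kernel code. */
        if (copy_from_user(local_copy, trans.pbuf_list, size) != 0) {
            kfree(local_copy);
            return -EFAULT;
        }
        trans.pbuf_list = local_copy;   /* now points at the kernel copy of the array */
    }

    /* ... use trans.pbuf_list here, then kfree() it when finished ... */
    break;
}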

boost beast async_write increases memory footprint dramatically

I am currently experimenting with the Boost Beast library and am very surprised by its memory footprint. I've found that, using three different response types (string, file, dynamic), the program size grows to about 6 MB.
To get closer to the cause, I took the small server example from the library and reduced it to the following steps:
class http_connection : public std::enable_shared_from_this<http_connection>
{
public:
    http_connection(tcp::socket socket) : socket_(std::move(socket)) { }

    void start() {
        read_request();
    }

private:
    tcp::socket socket_;
    beast::flat_buffer buffer_{8192};
    http::request<http::dynamic_body> request_;

    void read_request() {
        auto self = shared_from_this();
        http::async_read(
            socket_, buffer_, request_,
            [self](beast::error_code ec,
                   std::size_t bytes_transferred)
            {
                self->write_response(std::make_shared<http::response<http::dynamic_body>>());
                self->write_response(std::make_shared<http::response<http::file_body>>());
                self->write_response(std::make_shared<http::response<http::string_body>>(), true);
            });
    }

    template <class T>
    void write_response(std::shared_ptr<T> response, bool dostop = false) {
        auto self = shared_from_this();
        http::async_write(
            socket_,
            *response,
            [self, response, dostop](beast::error_code ec, std::size_t)
            {
                if (dostop)
                    self->socket_.shutdown(tcp::socket::shutdown_send, ec);
            });
    }
};
When I comment out the three self->write_response lines, compile the program, and run the size command on the result, I get:
   text    data    bss     dec    hex filename
 343474    1680   7408  352562  56132 small
When I uncomment the first write, I get:
   text    data    bss     dec    hex filename
 864740    1714   7408  873862  d5586 small
After uncommenting all of them, the final size becomes:
   text    data    bss     dec     hex filename
1333510    1730   7408 1342648  147cb8 small
4,8M Feb 16 22:13 small*
The question now is:
Am I doing something wrong?
Is there a way to reduce the size?
UPDATE
The real process_request looks like this:
void process_request() {
    auto it = router.find(request.method(), request.target());
    if (it != router.end()) {
        auto response = it->getHandler()(doc_root_, request);
        if (boost::apply_visitor(dsa::type::handler(), response) == TypeCode::dynamic_r) {
            auto r = boost::get<std::shared_ptr<dynamic_response>>(response);
            send(r);
            return;
        }
        if (boost::apply_visitor(dsa::type::handler(), response) == TypeCode::file_r) {
            auto r = boost::get<std::shared_ptr<file_response>>(response);
            send(r);
            return;
        }
        if (boost::apply_visitor(dsa::type::handler(), response) == TypeCode::string_r) {
            auto r = boost::get<std::shared_ptr<string_response>>(response);
            send(r);
            return;
        }
    }
    send(boost::get<std::shared_ptr<string_response>>(send_bad_response(
        http::status::bad_request,
        "Invalid request-method '" + std::string(req.method_string()) + "'\r\n")));
}
Thanks in advance
If you aren't actually leaking memory, then there is nothing wrong. Whatever memory is allocated by the system will either be reused for your program or eventually given back. It can be very difficult to measure the true memory usage of a program, especially under Linux, because of the virtual memory system. Unless you see an actual leak or real problem, I would ignore those memory reports and simply continue implementing your business logic. Beast itself contains no memory leaks (tested extensively per-commit on Travis and Appveyor under valgrind, asan, and ubsan).
Try using malloc_trim(0), e.g. in the destructor of http_connection.
From the man page:
malloc_trim - release free memory from the top of the heap.
The malloc_trim() function attempts to release free memory at the top of the heap (by calling sbrk(2) with a suitable argument).
The pad argument specifies the amount of free space to leave untrimmed at the top of the heap.
If this argument is 0, only the minimum amount of memory is maintained at the top of the heap (i.e., one page or less). A nonzero argument can be used to maintain some trailing space at the top of the heap in order to allow future allocations to be made without having to extend the heap with sbrk(2).
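As a standalone illustration of the call (not Beast-specific; malloc_trim is a glibc extension declared in <malloc.h>):

#include <malloc.h>   /* glibc-specific */
#include <stdlib.h>

int main(void)
{
    enum { N = 100000 };
    static void *blocks[N];

    for (int i = 0; i < N; ++i)
        blocks[i] = malloc(256);   /* grow the heap with many small blocks */
    for (int i = 0; i < N; ++i)
        free(blocks[i]);           /* freed, but glibc may keep the pages cached */

    /* Ask glibc to give freed memory at the top of the heap back to the kernel.
     * Returns 1 if some memory was released, 0 otherwise. */
    malloc_trim(0);
    return 0;
}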

Which is the correct way to use the GlobalLock() / GlobalUnlock() pair?

Documentation about GlobalLock says:
Return value
If the function succeeds, the return value is a pointer to the first byte of the memory block.
If the function fails, the return value is NULL. To get extended error information, call GetLastError.
Remarks
Each successful call that a process makes to GlobalLock for an object must be matched by a corresponding call to GlobalUnlock.
....
If the specified memory block has been discarded or if the memory block has a zero-byte size, this function returns NULL.
So, as we see, GlobalLock() can return NULL either on error or when the memory block has a zero-byte size.
On the other hand, GlobalUnlock() should be called ONLY if GlobalLock() was successful. So how do we correctly determine when GlobalUnlock() should be called? Which of the following variants is correct, and why?
Variant 0:
HGLOBAL hMem = /*some handle on global memory block*/;
// lock block
auto pMem = static_cast<LPBYTE>(::GlobalLock(hMem));
if (pMem != nullptr)
{
    // ... work with pMem
}
// call unlock in any case
::GlobalUnlock(hMem);
Variant 1:
HGLOBAL hMem = /*some handle on global memory block*/;
// lock block
auto pMem = static_cast<LPBYTE>(::GlobalLock(hMem));
if (pMem != nullptr)
{
    // ... work with pMem
    // unlock block
    ::GlobalUnlock(hMem);
}
Variant 2:
HGLOBAL hMem = /*some handle on global memory block*/;
// lock block
auto pMem = static_cast<LPBYTE>(::GlobalLock(hMem));
auto isMemLocked = (pMem != nullptr);
if (isMemLocked)
{
    // ... work with pMem
}
else
{
    // is it a real error?
    isMemLocked = ::GetLastError() == NOERROR;
}
if (isMemLocked)
{
    // unlock block
    ::GlobalUnlock(hMem);
}
Update:
We assume that hMem is valid (handle is not NULL).
P.S.: Many thanks for your answers.
From the GlobalLock documentation:
Each successful call that a process makes to GlobalLock for an
object must be matched by a corresponding call to GlobalUnlock.
and
If the function succeeds, the return value is a pointer to the first
byte of the memory block.
If the function fails, the return value is NULL
So we need to call GlobalUnlock only if the previous call to GlobalLock returned a non-NULL pointer.
The pattern is:
if (PVOID pv = GlobalLock(hg))
{
    //...
    GlobalUnlock(hg);
}
If we call GlobalLock on a memory block that has a zero-byte size, we always get NULL back and ERROR_DISCARDED. We do not need to call GlobalUnlock in this case; it simply fails with ERROR_NOT_LOCKED.
Looked at from a C++ perspective, GlobalAlloc with the GMEM_MOVEABLE flag returns roughly the equivalent of a weak_ptr; the HGLOBAL in this case refers to the block the way a weak_ptr does. GlobalLock(hg) is the analog of weak_ptr::lock, which returns a shared_ptr (a direct pointer to the actual memory block), and GlobalUnlock is the analog of releasing that shared_ptr. After a call to GlobalDiscard on the HGLOBAL hg, the shared_ptr (the real memory block) is destroyed, but the HGLOBAL hg (the weak_ptr) is still valid; every GlobalLock(hg) (weak_ptr::lock) call on it simply fails with ERROR_DISCARDED. Finally, GlobalFree deletes this weak_ptr. Demo code:
if (HGLOBAL hg = GlobalAlloc(GMEM_MOVEABLE, 8))
{
    if (PVOID pv = GlobalLock(hg))
    {
        ASSERT(!GlobalDiscard(hg));
        GlobalUnlock(hg);
    }
    ASSERT(GlobalDiscard(hg));
    ASSERT(!GlobalLock(hg));
    ASSERT(GetLastError() == ERROR_DISCARDED);
    ASSERT(!GlobalUnlock(hg));
    ASSERT(GetLastError() == ERROR_NOT_LOCKED);
    GlobalFree(hg);
}

if (HGLOBAL hg = GlobalAlloc(GMEM_MOVEABLE, 0))
{
    ASSERT(!GlobalLock(hg));
    ASSERT(GetLastError() == ERROR_DISCARDED);
    ASSERT(!GlobalUnlock(hg));
    ASSERT(GetLastError() == ERROR_NOT_LOCKED);
    GlobalFree(hg);
}
if (PVOID p = GlobalLock(hGlob))
{
    ...
    GlobalUnlock(hGlob);
}
is the correct pattern, as answered by RbMm, but variant 0 is also accepted by Windows because GlobalUnlock(NULL) returns TRUE without doing anything else. This is of course an undocumented implementation detail; I have only verified it on Windows NT 4 and Windows 8, but I assume everything in between behaves the same.
This happens because Windows uses certain tag bits and alignment to tell whether the HGLOBAL is fixed or moveable memory, and NULL obviously has no tag bits set, so GlobalUnlock just returns.
There is no reason to use this alternative pattern because:
1) You would be relying on implementation details.
2) You cannot omit the GlobalLock return value check unless you know that the HGLOBAL is fixed memory, and in that case you can omit all the locking/unlocking anyway, because it is pointless overhead if you are only using fixed memory.

I/O to device from kernel module fails with EFAULT

I have created a block device in a kernel module. When some I/O happens, I read/write all data from/to another existing device (let's say /dev/sdb).
The device opens OK, but read/write operations return error 14 (EFAULT, Bad Address). After some research I found that I need to map an address to user space (probably the buffer or filp variables), but the copy_to_user function does not help. I also looked at the mmap() and remap_pfn_range() functions, but I cannot figure out how to use them in my code, especially where to get the correct vm_area_struct structure. All the examples I found used character devices and the file_operations structure, not block devices.
Any hints? Thanks for the help.
Here is my code for reading:
mm_segment_t old_fs;
old_fs = get_fs();
set_fs(KERNEL_DS);

filp = filp_open("/dev/sdb", O_RDONLY | O_DIRECT | O_SYNC, 00644);
if (IS_ERR(filp))
{
    set_fs(old_fs);
    int err = PTR_ERR(filp);
    printk(KERN_ALERT "Can not open file - %d", err);
    return;
}
else
{
    bytesRead = vfs_read(filp, buffer, nbytes, &offset);   // This returns error 14
    filp_close(filp, NULL);
}
set_fs(old_fs);
I found a better way to do I/O to a block device from a kernel module: the bio structure. I hope this information saves somebody from a headache.
1) If you want to redirect I/O from your block device to an existing block device, you have to provide your own make_request function. For that, use the blk_alloc_queue function to create the queue for your block device, like this:
device->queue = blk_alloc_queue(GFP_KERNEL);
blk_queue_make_request(device->queue, own_make_request);
Then, inside the own_make_request function, change the bi_bdev member of the bio structure to the device to which you are redirecting the I/O, and call generic_make_request:
bio->bi_bdev = device_in_which_redirect;
generic_make_request(bio);
More information is in chapter 16 here. If the link is broken for some reason, the name of the book is "Linux Device Drivers, Third Edition".
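For completeness, the redirecting make_request function itself can be very small. A sketch only (the exact prototype of a make_request function has changed across kernel versions; this assumes the 3.x-era void-returning form, matching the bi_bdev/bi_sector fields used elsewhere in this answer):

static void own_make_request(struct request_queue *q, struct bio *bio)
{
    /* Point the bio at the underlying device and hand it back to the
     * block layer, which routes it to that device's queue. */
    bio->bi_bdev = device_in_which_redirect;
    generic_make_request(bio);
}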
2) If you want to read or write your own data to an existing block device from a kernel module, use the submit_bio function.
Code for writing to a specific sector (you also need to implement the writeComplete completion callback):
void writePage(struct block_device *device,
               sector_t sector, int size, struct page *page)
{
    struct bio *bio = bio_alloc(GFP_NOIO, 1);

    bio->bi_bdev = device;
    bio->bi_sector = sector;
    bio_add_page(bio, page, size, 0);
    bio->bi_end_io = writeComplete;
    submit_bio(WRITE_FLUSH_FUA, bio);
}
Code for reading from a specific sector (you also need to implement the readComplete completion callback):
int readPage(struct block_device *device, sector_t sector, int size,
             struct page *page)
{
    int ret;
    struct completion event;
    struct bio *bio = bio_alloc(GFP_NOIO, 1);

    bio->bi_bdev = device;
    bio->bi_sector = sector;
    bio_add_page(bio, page, size, 0);

    init_completion(&event);
    bio->bi_private = &event;
    bio->bi_end_io = readComplete;
    submit_bio(READ | REQ_SYNC, bio);
    wait_for_completion(&event);

    ret = test_bit(BIO_UPTODATE, &bio->bi_flags);
    bio_put(bio);
    return ret;
}
The page can be allocated with alloc_page(GFP_KERNEL). To change the data in the page, use page_address(page); it returns a void *, so you can interpret that pointer however you want.
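As an illustration, writing a page of data with the helpers above might look roughly like this (a sketch only; bdev stands for a struct block_device * you have obtained elsewhere, and error handling is omitted):

struct page *page = alloc_page(GFP_KERNEL);
void *data = page_address(page);          /* kernel virtual address of the page */

memset(data, 0, PAGE_SIZE);
memcpy(data, "hello", 5);                 /* fill the page with whatever payload you need */

/* Write one page of data to sector 0 of the target device. */
writePage(bdev, 0, PAGE_SIZE, page);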

How do I use Loki's Small Object Allocator in Lua successfully?

I've read somewhere on here that someone recommended using Loki's Small Object Allocator for Lua to help improve allocation performance. I read through the section in 'Modern C++ Design' and I think I have a good enough understanding of using Loki for this, with the exception of not using SmallObject itself: Lua just wants raw memory, so I took a first stab at using SmallObjAllocator directly.
The allocations seem to be working, but everything fails completely once I try to load a script (either using lua_load() with my own custom reader, or using luaL_loadfile() to read the file directly).
Here's my implementation of the SmallObjAllocator class:
class MySmallAllocator : public Loki::SmallObjAllocator
{
public:
    MySmallAllocator( std::size_t pageSize,
                      std::size_t maxObjectSize,
                      std::size_t objectAlignSize )
        : Loki::SmallObjAllocator( pageSize, maxObjectSize, objectAlignSize )
    {
    }

    virtual ~MySmallAllocator()
    {
    }
};
static MySmallAllocator alloc_(4096,64,4);
And when I create the Lua state, I give it the allocation function that uses this new allocator:
masterState_ = lua_newstate(customAlloc_, &heap_);
void* customAlloc_( void* ud, void* ptr, size_t osize, size_t nsize )
{
    // If the new size is zero, we're destroying a block
    if (nsize == 0)
    {
        alloc_.Deallocate( ptr );
        ptr = NULL;
    }
    // If the original size is zero, then we're creating one
    else if (0 != nsize && 0 == osize)
    {
        ptr = alloc_.Allocate( nsize, false );
    }
    else
    {
        alloc_.Deallocate( ptr );
        ptr = alloc_.Allocate( nsize, false );
    }
    return ptr;
}
And here I go to load the file:
int result = luaL_loadfile( masterState_, "Global.lua" );
If I have a simple for loop in Global.lua, the system never returns from the call to luaL_loadfile():
for i = 1, 100 do
    local test = { }
end
What is wrong, how should I diagnose this, and how do I fix it?
The issue that leaps out at me is that your custom allocator needs to behave like C's realloc() function. This is critical in the case where osize != nsize and both are non-zero. The key property of realloc() in this case is that it preserves the values of the first min(osize,nsize) bytes of the old block as the beginning of the new block.
You have:
else
{
    alloc_.Deallocate( ptr );
    ptr = alloc_.Allocate( nsize, false );
}
which abandons all the content of the old allocation.
This is specified in the documentation for lua_Alloc:
The allocator function must provide a functionality similar to realloc, but not exactly the same.
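For comparison, the default allocator that ships with Lua (l_alloc in lauxlib.c) satisfies this contract with nothing more than realloc and free, since realloc already preserves the old contents; it is essentially:

static void *l_alloc (void *ud, void *ptr, size_t osize, size_t nsize)
{
    (void)ud;  (void)osize;   /* not used */
    if (nsize == 0)
    {
        free(ptr);            /* nsize == 0 means "free this block" */
        return NULL;
    }
    else
        return realloc(ptr, nsize);   /* grow, shrink, or allocate, preserving contents */
}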
Good call! I really didn't understand what realloc() did, so you set me on the right track. I replaced the reallocation part with the code below and everything works now, but my performance is actually a bit worse than with the HeapAlloc/HeapReAlloc/HeapFree implementation I had before.
void* replacementPtr = alloc_.Allocate( nsize, true );
memcpy( replacementPtr, ptr, min(osize, nsize) );
alloc_.Deallocate( ptr );
ptr = replacementPtr;
I suspect this is because Loki uses malloc/free for each Chunk, as well as whenever the size is > GetMaxObjectSize()...
