We are streaming binary data from our server to web browsers via WebSockets.
The browser receives the data just fine; we can manipulate it with JavaScript.
The code is very basic:
function WebSocketTest()
{
    if ("WebSocket" in window)
    {
        var ws = new WebSocket("ws://localhost:3456/someresource");
        // ......
        ws.onmessage = function (evt)
        {
            //var received_msg = evt.data;
            //var reader = new window.FileReader();
            //...........
        };
        // ..............................................
        // ..............................................
The problem is that even in this simplest form (the onmessage handler is empty, so we do nothing with evt.data, which is a binary Blob), the memory usage of Chrome keeps growing, as if the garbage collector never frees the Blob. It grows until Chrome closes the WebSocket connection or crashes. We have tried to release evt.data manually with various methods, but nothing helps. You would expect the code above not to leak memory in Chrome, correct? How can we release the Blob referenced by evt.data?
Hey, I have found the reason for the "memory leak": it is not a memory leak but my own programming error. Posting it here in case anyone else makes the same mistake. What happened is that for messages larger than 64 KB, you need to encode the data length as a 64-bit integer in network byte order. I was not converting my Windows __int64 to network byte order correctly, so the browser thought it was downloading a huge message (10 GB or something like that) and kept downloading until it ran out of memory.
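For reference, here is a minimal sketch (not my actual server code) of building a correct header for a large binary frame. Per RFC 6455, payloads over 65,535 bytes use a length marker of 127 followed by the length as an unsigned 64-bit integer in network byte order:
#include <cstdint>
#include <vector>

// Sketch: build an unmasked server-to-client binary frame header.
// Writing the 8 length bytes most-significant-first, one byte at a
// time, sidesteps any platform-specific htonll/byte-order issues.
std::vector<unsigned char> makeBinaryFrameHeader(uint64_t payloadLen)
{
    std::vector<unsigned char> hdr;
    hdr.push_back(0x82); // FIN = 1, opcode = 0x2 (binary)
    hdr.push_back(127);  // "64-bit extended payload length follows"
    for (int shift = 56; shift >= 0; shift -= 8)
        hdr.push_back(static_cast<unsigned char>((payloadLen >> shift) & 0xFF));
    return hdr;
}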
There is a good SO Q/A session on the general use of WM_COPYDATA messages here, and a 'discussion' about whether or not this will work between apps of different 32/64-bitness here. However, the latter seems to be focused on possible misuse of the 'data pointer' being passed. So, I'm raising a new question here.
I am working on getting two Windows apps to communicate/synchronize with each other and, as a first-round approach, I'm using Windows Messaging to implement this. Everything seems OK for now … but I'm using the WM_COPYDATA message to pass info between the apps.
My question: Is this approach guaranteed to be safe when the two apps have different (32/64) bitness? I've done some tests using the code below with all four possible combinations of 32 vs 64 builds between 'client' and 'server', and all work as expected; but is this just because I'm getting 'lucky' results (from possible undefined behaviour), or does the WOW64 system (especially when server is 64-bit and client is 32) take care of all the necessary marshalling?
If anyone can confirm that it is guaranteed to work, I would very much appreciate an 'official' link/reference confirming that.
Shared header file:
static const WPARAM nmIdFilePath = 0x00001100;
struct nmLockInfoType {
char filePathID[1024];
// More elements will be added later!
};
static const nmLockInfoType nmLockInfoDefault = {
"<<<Uninitialised Image Data Path>>>",
//...
};
extern nmLockInfoType nmLockInfo; // MUST be provided by each app!
///nmLockInfoType nmLockInfo = nmLockInfoDefault; // Use this code to instantiate it (once, somewhere)!
Server program code (inside the handler for a RegisterWindowMessage(L"HANDSHAKE"); message):
//...
COPYDATASTRUCT cds;
cds.dwData = nmIdFilePath; // Pre-defined ID
cds.cbData = sizeof(nmLockInfoType);
cds.lpData = &nmLockInfo; // Pre-defined structure (see above)
//...
// Send a copy of the "Welcome Pack" data structure to the client app ...
::SendMessage(clientLock, WM_COPYDATA, WPARAM(m_hWnd), LPARAM(&cds)); // clientLock is the HWND of the client app's main window
Client Program code:
BOOL MyFrame::OnCopyData(CWnd* pWnd, COPYDATASTRUCT* pCopyDataStruct)
{
switch (pCopyDataStruct->dwData)
{
case nmIdFilePath:
memcpy(&nmLockInfo, pCopyDataStruct->lpData, pCopyDataStruct->cbData);
return nmsSucceeded; // This is non-zero, so it evaluates to TRUE
// Other cases here ...
}
return CMDIFrameWnd::OnCopyData(pWnd, pCopyDataStruct);
}
I'm particularly concerned about the case when the client is 32-bit but the server is 64-bit; in such a case, it would be sending a 64-bit data address to a 32-bit app (albeit, a WOW64 app). Does the in-built 'marshalling' handle this in WOW64 situations?
It's safe only if we follow the rules for how to use it. Please refer to the Remarks section of the WM_COPYDATA documentation, quoted below:
The data being passed must not contain pointers or other references to objects not accessible to the application receiving the data.
While this message is being sent, the referenced data must not be changed by another thread of the sending process.
The receiving application should consider the data read-only. The lParam parameter is valid only during the processing of the message. The receiving application should not free the memory referenced by lParam. If the receiving application must access the data after SendMessage returns, it must copy the data into a local buffer.
For example, if we try to pass a field of type ULONG_PTR, the data copy may not work correctly when passing it from a 64-bit application to a 32-bit application, because ULONG_PTR is 32 bits wide in a 32-bit application and 64 bits wide in a 64-bit application.
You can test this by modifying the structure as below:
struct nmLockInfoType {
char filePathID[1024];
ULONG_PTR point64_32;
// More elements will be added later!
};
The scenario mentioned above should be safe, as your test results show. Feel free to let me know if you still have concerns.
In addition, below is a helpful document about developing 64-bit applications for your reference:
Common Visual C++ 64-bit Migration Issues
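To make the struct above safe regardless of bitness, a minimal sketch (the field name is kept from the test code, purely for illustration) is to use fixed-width integer types from <cstdint> instead of pointer-sized ones:
#include <cstdint>

// Bitness-safe variant: uint64_t is 8 bytes in both 32-bit and 64-bit
// builds, unlike ULONG_PTR, so both sides agree on size and layout.
struct nmLockInfoTypeSafe {
    char     filePathID[1024];
    uint64_t point64_32; // fixed width everywhere
};

// 1024 is a multiple of 8, so no padding differences are expected;
// the assert documents that both builds must see the same size.
static_assert(sizeof(nmLockInfoTypeSafe) == 1032, "layout must match across builds");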
Dear programmers, I wrote a program which targets the Windows Mobile platform (.NET CF 3.5).
My program has an answer-checking method that shows dynamically created PictureBoxes, TextBoxes, and images in a new form. Here is the method's logic:
private void ShowAnswer()
{
    PictureBox pictureBox = new PictureBox();
    pictureBox.BackColor = Color.Red;
    pictureBox.Location = new Point(x, y);
    pictureBox.Name = "Name";
    pictureBox.Size = new Size(w, h);
    pictureBox.Image = new Bitmap("\\Image01.jpg");
}
My problem is memory leaks or something like that. If the user works with the program for approximately 30 minutes and runs the ShowAnswer() method several times, an OutOfMemoryException appears. I know that the reason may be the memory allocation for the bitmaps, but I even handle the ShowAnswer form's closing event and manually try to release all controls' resources and force the garbage collector:
foreach (Control cntrl in this.Controls)
{
    cntrl.Dispose();
    GC.Collect();
}
It seems like everything collects and disposes well. Every time I check the Task Manager on my Windows Mobile device during tests, I see that the memory was released and the child form was closed properly. But on each ShowAnswer() call and close, I see a different amount of memory in the device Task Manager (sometimes it uses 7.5 MB, sometimes 11.5, sometimes 9.5); every time it is different, and sometimes when the method starts to run, memory is not allocated as usual and the OutOfMemoryException appears. Please advise me how to solve my problem. Maybe I should use other Dispose methods, or I should set the bitmap another way. Thank you in advance!
Depending on how you're handling the form generation, you might need to dispose of the old Image before loading a new one.
private void ShowAnswer()
{
    PictureBox pictureBox = new PictureBox();
    pictureBox.BackColor = Color.Red;
    pictureBox.Location = new Point(x, y);
    pictureBox.Name = "Name";
    pictureBox.Size = new Size(w, h);
    if (pictureBox.Image != null) // depending on how you construct the form
        pictureBox.Image.Dispose();
    pictureBox.Image = new Bitmap("\\Image01.jpg");
}
However, you should also check before you load the image that it's not so obscenely large that it munches up all of your device's memory.
Edit: I don't just mean the size of the compressed image in memory; I also mean the physical size of the image (height and width). The Bitmap will create an uncompressed image that takes up much, much more memory than the file occupies in storage (height * width * 4 bytes). For a more in-depth explanation, check out the following SO question:
OutOfMemoryException loading big image to Bitmap object with the Compact Framework
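To put the height * width * 4 figure in perspective: a fairly ordinary 2048x1536 photo, perhaps only a few hundred kilobytes as a JPEG on storage, decompresses to roughly 2048 * 1536 * 4, which is about 12 MB of bitmap data, a lot for a memory-constrained Windows Mobile device.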
On Windows XP, when I call WSASend repeatedly on a non-blocking socket, it fails with WSAENOBUFS.
I have two cases here:
Case 1:
On a non-blocking socket I am calling WSASend. Here is pseudo-code:
while (1)
{
    result = WSASend(...); // Buffer size 1024 bytes
    if (result == -1)
    {
        if (WSAGetLastError() == WSAENOBUFS)
        {
            // Wait for some time before calling WSASend again
            Sleep(1000);
        }
    }
}
In this case, WSASend returns successfully around 88,000 times. Then it fails with WSAENOBUFS and never recovers, even when retried after some time as shown in the code.
Case 2:
In order to solve this problem, I referred to this and, as suggested there, just before the above code I called setsockopt with SO_SNDBUF and set the buffer size to 0 (zero).
In this case, WSASend returns successfully around 2,600 times, then it fails. But after waiting, it succeeds again around 2,600 times and then fails.
Now I have these questions for both cases:
Case 1:
What factors decide the number 88,000 here?
If the failure was because the TCP buffer was full, why didn't it recover after some time?
Case 2:
Again, what factors decide the number 2,600 here?
As described in the Microsoft KB article, if it sends directly from the application buffer instead of from internal TCP buffers, why would it fail with WSAENOBUFS?
EDIT:
In the case of asynchronous sockets (on Windows XP), the behavior is even stranger. If I ignore WSAENOBUFS and continue writing to the socket, I eventually get a disconnection, WSAECONNRESET, and at the moment I am not sure why that happens.
The values are undocumented and depend on what's installed on your machine that may sit between your application and the network driver. They're likely linked to the amount of memory in the machine. The limits (most probably non-paged pool memory and the I/O page lock limit) are likely MUCH higher on Vista and above.
The best way to deal with the problem is to add application-level flow control to your protocol so that you don't assume you can just send at whatever rate you feel like. See this blog posting for details of how non-blocking and async I/O can cause resource usage to balloon and how you have no control over it unless you have your own flow control.
In summary, never assume that you can write data to the wire as fast as you like using non-blocking/async APIs. Remember that, due to how TCP/IP's internal flow control works, you COULD be using an uncontrollable amount of local machine resources, and the client is the only thing that has any control over how fast those resources are released back to the O/S on the server machine.
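As a rough illustration of the kind of application-level flow control meant here (the class and its names below are made up for the example, not from any library): track how many bytes have been submitted but not yet completed, and stop submitting once a cap is reached.
#include <cstddef>

// Sketch: cap bytes queued to the socket and refuse to submit more
// until earlier sends complete, instead of letting repeated WSASend
// calls consume non-paged pool until WSAENOBUFS.
class SendThrottle {
public:
    explicit SendThrottle(std::size_t maxOutstanding)
        : maxOutstanding_(maxOutstanding), outstanding_(0) {}

    // Call before WSASend; if false, queue the data at application level.
    bool canSend(std::size_t bytes) const {
        return outstanding_ + bytes <= maxOutstanding_;
    }
    void onSubmitted(std::size_t bytes) { outstanding_ += bytes; }
    void onCompleted(std::size_t bytes) { outstanding_ -= bytes; } // from the completion handler

private:
    std::size_t maxOutstanding_;
    std::size_t outstanding_;
};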
A question for Windows network programming experts.
When I use pseudo-code like this:
reconnect:
    s = socket(...);
    // more code...
read_reply:
    recv(...);
    // merge received data
    if (high_level_protocol_error) {
        // whoops, there was a deviation from the protocol, like an overflow
        // need to reset the connection and discard the data right now!
        closesocket(s);
        goto reconnect;
    }
Does the kernel unassociate and free all data "physically" received from the NIC (it must already be there in kernel memory, waiting for user level to read it with recv()) when I call closesocket()? Logically it should, since the data is no longer associated with any internal object, right?
I don't really want to waste an unknown amount of time on a clean shutdown like "call recv() until it returns an error". That makes no sense: what if it never returns an error, say, because the server misbehaves and keeps sending data forever without closing the connection?
I'm wondering about this because I don't want my application to cause memory leaks anywhere. Is this way of forcibly resetting a connection that is still expected to send an unknown amount of data correct?
Optional addition to the question: if this method is considered correct for Windows, can it be considered correct (with closesocket() changed to close()) for UNIX-compliant OSes?
Kernel drivers in Windows (or any OS really), including tcpip.sys, are supposed to avoid memory leaks in all circumstances, regardless of what you do in user mode. I would think that the developers have charted the possible states, including error states, to make sure that resources aren't leaked. As for user mode, I'm not exactly sure but I wouldn't think that resources are leaked in your process either.
Sockets are just file objects in Windows. When you close the last handle to a file, the IO manager sends an IRP_MJ_CLEANUP request to the driver that owns the file to clean up resources associated with it. The receive buffers associated with the socket would be freed along with the file object.
It does say in the closesocket documentation that pending operations are canceled but that async operations may complete after the function returns. It sounds like closing the socket while in use is a supported scenario and wouldn't lead to a memory leak.
There will be no leak and you are under no obligation to read the stream to EOS before closing. If the sender is still sending after you close it will eventually get a 'connection reset'.
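If the intent is explicitly an abortive reset rather than a graceful shutdown, it can also be requested directly; a minimal Winsock sketch (error handling omitted):
#include <winsock2.h>

// Sketch: SO_LINGER with a zero timeout makes closesocket() send an RST
// immediately, discarding any queued data in both directions.
void abortiveClose(SOCKET s)
{
    linger lg;
    lg.l_onoff  = 1; // enable linger
    lg.l_linger = 0; // zero timeout => abortive close (RST)
    setsockopt(s, SOL_SOCKET, SO_LINGER,
               reinterpret_cast<const char*>(&lg), sizeof(lg));
    closesocket(s);
}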
I am pretty sure I am suffering from a memory leak, but I haven't 100% nailed down how it's happening.
The application I've written downloads two images from a URL and puts each set of images, called a transaction, into a queue to be popped off by the user interface and displayed. The images are pretty big, averaging about 2.5 MB, so as a way of speeding up the user interface and making it more responsive, I preload each transaction's images into wxImage objects and store them.
When the user pops off another transaction, I feed the preloaded image into a window object that converts the wxImage into a bitmap and blits it to the window with a DC. The window object is then displayed on a panel.
When the transaction is finished by the user, I destroy the window object (presumably the window goes away, as does the bitmap) and the transaction data structure is overwritten with None.
However, depending on how many images I've preloaded, whether the queue size is set large and it's done all at once, or whether I let a small queue sit over time, it eventually crashes. I really can't let this happen.. :)
Does anyone see any obvious logical errors in what I'm doing? Does Python garbage-collect? I don't have much experience with having to deal with memory issues.
[edit] Here is the code ;) This is the code related to the thread that downloads the images; it is instantiated in the main thread that runs the GUI, and the download thread's main function is the fill_queue function:
def fill_queue(self):
    while True:
        if self.len() < self.maxqueuesize:
            try:
                trx_data = self.download_transaction_data(self.get_url)
                for trx in trx_data:
                    self.download_transaction_images(trx)
                    if self.valid_images([trx['image_name_1'], trx['image_name_2']]):
                        trx = self.pre_load_images(trx)
                        self.append(trx)
            except IOError, error:
                print "Received IOError while trying to download transactions or images"
                print "Error Received: ", error
            except Exception, ex:
                print "Caught general exception while trying to download transactions or images"
                print "Error Received: ", ex
        else:
            time.sleep(1)

def download_transaction_images(self, data):
    """ Method will download all the available images for the provided transaction """
    for (a, b) in data.items():
        if b and (a == "image_name_1" or a == "image_name_2"):
            modified_url = self.images_url + self.path_from_filename(b)
            download_url = modified_url + b
            local_filepath = self.cache_dir + b
            urllib.urlretrieve(download_url, local_filepath)
            urllib.urlcleanup()

def download_transaction_data(self, trx_location):
    """ Method will download transaction data and return a parsed list of hash structures """
    page = urllib.urlopen(trx_location)
    data = page.readlines()
    page.close()
    trx_list = []
    trx_data = {}
    for line in data:
        line = line.rstrip('|!\n')
        if re.search('id=', line):
            fields = re.split('\|', line)
            for jnd in fields:
                pairs = jnd.split('=')
                trx_data[pairs[0]] = pairs[1]
            trx_list.append(trx_data)
    return trx_list

def pre_load_images(self, trx):
    """ Method will create a wxImage and load it into memory to speed the image display """
    path1 = self.cache_dir + trx['image_name_1']
    path2 = self.cache_dir + trx['image_name_2']
    image1 = wx.Image(path1)
    image2 = wx.Image(path2)
    trx['loaded_image_1'] = image1
    trx['loaded_image_2'] = image2
    return trx

def valid_images(self, images):
    """ Method verifies that the image path is valid and the image is readable """
    retval = True
    for i in images:
        if re.search('jpg', i) or re.search('jpeg', i):
            imagepath = self.cache_dir + i
            if not os.path.exists(imagepath) or not wx.Image.CanRead(imagepath):
                retval = False
        else:
            retval = False
    return retval
Also, I'd like to add that sometimes, just before the crash, I get peculiar errors in my console. They look like corrupt-image errors, but the images are not corrupted, and the errors have happened at all stages on all images.
Application transferred too few scanlines
[2009-09-08 11:12:03] Error: JPEG: Couldn't load - file is probably corrupted.
[2009-09-08 11:12:11] Debug: ....\src\msw\dib.cpp(134): 'CreateDIBSection' failed with error 0x00000000 (the operation completed successfully.).
These errors can happen a la carte, or all together. What I think is happening is that at some point the memory becomes corrupted, and whatever happens next, whether I load a new transaction, or an image, or do a cropping operation, takes a dive.
So, unfortunately, after trying out the suggestion of moving the call that preloads into wxImage to the main GUI thread, I am still getting the error. Again, it occurs after too many images have been loaded into memory or after they sit in memory for too long. Then, when I attempt to crop an image, I get a memory error: something is corrupting, whether (in the former case) because I am using too much memory or don't have enough (which makes no sense, because I've increased my paging file size to astronomical proportions), or (in the latter case) because the length of time is causing a leak or corruption.
The only way I can think to go at this point is to use a debugger. Are there any easy ways to debug a wxPython application? I would like to see the memory usage in particular.
The main reason I think I need to preload the images is that if I call wx.Image on each image (I show two at a time) each time I load a 'transaction', the interface from one transaction to the next is very slow and clunky. If I load them into memory it's very fast, but then I get my memory error.
Two thoughts:
You do not mention whether the downloading runs in a separate thread (actually, now I see that it does; I should read more closely). I'm pretty sure that wx.Image is not thread-safe, so if you are instantiating wx.Images in a non-GUI thread, that could lead to trouble like this. (This is almost certainly the issue; most wx classes/objects/functions are not thread-safe.)
I've been bitten by nasty IncRef/DecRef bugs in wxPython (due to the underlying C++ bindings) before (mostly associated with wx.Grid and associated classes). While I don't know of any with wx.Image, it wouldn't surprise me to find out you may be required to manually manage memory like you have to in wx.Grid sometimes.
Edit
You need to instantiate the wx.Image in the GUI thread, not the downloading thread (from the code above, you are currently instantiating it in the non-GUI thread). In general, this is almost always going to cause lots of problems in any GUI toolkit; you can search the wxPython mailing list for lots of emails where this is the case. Personally, I would do this:
Queue for download URLs.
Thread to download images.
Have the downloading thread place a disk location (watch out for race conditions!) in a separate queue and post a custom wx.Event (thread-safe with the wx.PostEvent function) to the App thread.
Have the GUI thread pop the file locations and instantiate wx.Image ----> wx.Bitmap (maybe with wx.CallAfter to process when the App is idle).
Display (Blit) as needed.