Firefox: inaccessible spectrum causes sound object crash

I'm having an issue with the SoundManager2 API.
I use the whileplaying parameter to call a function in which I obtain the spectrum of the sound (created through the API) and draw a wave.
The problem arises when I open another Flash object that uses the sound spectrum; the API then throws the following errors in the console:
"(Flash): getWaveformData() (waveform data) SecurityError: Error #2122"
"(Flash): computeSpectrum() (EQ data) SecurityError: Error #2122"
"sound: Data error: data unavailable: SecurityError: Error #2122"
After that I'm not able to call the sound object again. This only happens in Firefox.
Is there a solution for this?

Well, I did a few things here; it isn't a complete fix, but it works at least:
computeSpectrum() tries to access the sound card output, but when that output isn't available it throws an error (handled in SoundManager2 by the ondataerror event).
I added an ExternalInterface callback from Flash that returns SoundMixer.areSoundsInaccessible(). When the ondataerror event fires, I stop the music and start a loop that waits until the sound output is accessible again, then restart the music (including the whileplaying event).
(I modified the Flash file and some parts of the code.)
Hope this helps someone, but it isn't the answer I was looking for.

Related

MFC: Conflicting information on how to clean up after using COleDataSource::DoDragDrop()

Can someone clear up all the conflicting information?
The documentation for COleDataSource::CacheData() says:
After the call to CacheData the ptd member of lpFormatEtc and the
contents of lpStgMedium are owned by the data object, not by the
caller.
The documentation for COleDataSource::CacheGlobalData() doesn't say that.
You find code examples of how to use COleDataSource::DoDragDrop() in places like Code Project that call COleDataSource::CacheGlobalData(), then check the DoDragDrop() result and free the memory if the drop didn't happen:
DROPEFFECT dweffect = datasrc->DoDragDrop(DROPEFFECT_COPY);
// They say if the operation wasn't accepted, or was canceled, we
// should call GlobalFree() to clean up.
if (dweffect == DROPEFFECT_NONE) {
    GlobalFree(hgdrop);
}
There is also Q182219 (for some unknown reason MS has broken all the good old MSKB links, so you can't find that information anymore; links all over the Internet are dead). Q182219 says something about DoDragDrop() returning DROPEFFECT_NONE after a successful move operation on NT-based OSes (e.g. Windows 10), so you have to check whether things were really moved.
So the questions are:
1) Should you really free the memory if the drop operation returns DROPEFFECT_NONE, or does MFC handle all that?
2) Does it still return DROPEFFECT_NONE after a successful move operation, or does MFC handle that internally (or has it been fixed in some Windows version)?
3) If it does return DROPEFFECT_NONE after a move, wouldn't logic like the above double-free the memory when the operation was a move?
Extra:
You also find examples where people declare COleDataSource mydatasource; on the stack, which is WRONG. You have to allocate it on the heap, like COleDataSource *mydatasource = new COleDataSource(), and at the end call mydatasource->ExternalRelease() to release it. (ExternalRelease() handles calling InternalRelease() if the object isn't using aggregation, i.e. a derived class that acts as a wrapper.)
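For concreteness, here is a minimal sketch of the pattern in question (heap-allocated data source, a hypothetical CF_TEXT payload, error handling trimmed); whether the GlobalFree() on DROPEFFECT_NONE is right is exactly what questions 1-3 ask:

#include <afxole.h>
#include <cstring>

void StartTextDrag(const CStringA& text)
{
    // Copy the payload into a movable global memory block.
    HGLOBAL hgdrop = ::GlobalAlloc(GMEM_MOVEABLE, text.GetLength() + 1);
    if (hgdrop == NULL)
        return;
    memcpy(::GlobalLock(hgdrop), (LPCSTR)text, text.GetLength() + 1);
    ::GlobalUnlock(hgdrop);

    // Allocate the data source on the heap, never on the stack.
    COleDataSource* pSource = new COleDataSource();
    pSource->CacheGlobalData(CF_TEXT, hgdrop);

    DROPEFFECT dweffect = pSource->DoDragDrop(DROPEFFECT_COPY | DROPEFFECT_MOVE);
    if (dweffect == DROPEFFECT_NONE)
    {
        // The Code Project-style examples free the memory here because the
        // drop was refused or cancelled and the target never took ownership.
        ::GlobalFree(hgdrop);
    }

    // Release the COM reference instead of deleting the object directly.
    pSource->ExternalRelease();
}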
TIA!!

Custom windows credential provider crashes with Exception code: 0xc0000374

I have developed a custom credential provider. This credential provider uses 1) the camera and 2) a facial-recognition SDK to match the user. Once the user is matched, the account name is populated and the CredentialsChanged signal is triggered. I have customized samplehardwareeventcredentialprovider to achieve this functionality.
This works fine on a few machines (all Windows 10). When I try to run it on another machine (a different brand), I randomly get the exception in the title (0xc0000374), which makes the screen go black and the login screen unstable. All the dependencies are in place, but it is not stable at all.
I have turned off the WinBio service and disabled many of the default credential providers, but I still face the same issue.
My Flow:
I initiate the facial identification flow in the CSampleCredential::Initialize API and, once the user is identified, update the value of rgFieldStrings[SFI_USERNAME].
In the following method, after CSampleCredential::Initialize completes, I use the CSampleProvider::OnConnectStatusChanged method to trigger the login window. If everything works as expected, it launches the login window with the user name auto-populated. The entire flow works fine, but it is not stable on a few machines.
HRESULT CSampleProvider::SetUsageScenario(
    __in CREDENTIAL_PROVIDER_USAGE_SCENARIO cpus,
    __in DWORD dwFlags
    )
Am I doing something fundamentally wrong here?
Any pointers will be helpful! Thanks
I generated a local dump by following "Steps to Catch a Simple 'Crash Dump' of a Crashing Process".
Analyzing the dump made it evident that there was heap corruption. By mistake, a malloc allocation was done with a size of 4 bytes when it should actually have been 260 bytes. When memory beyond that size was accessed, it triggered the random crash, depending on the input data.
Original code with bug:
uint8_t* data = (uint8_t*)malloc(sizeof(MAX_PATH));          // sizeof(MAX_PATH) is sizeof(int) == 4 bytes
Fixed code:
uint8_t* data = (uint8_t*)malloc(MAX_PATH*sizeof(uint8_t));  // 260 bytes
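To make the bug concrete: MAX_PATH expands to the integer literal 260, so sizeof(MAX_PATH) is just the size of an int, as a tiny test program shows:

#include <cstdio>
#include <windows.h>  // defines MAX_PATH as 260

int main()
{
    // sizeof applied to the literal 260 yields sizeof(int), i.e. 4 on Windows,
    // which is why the buggy line allocated only 4 bytes instead of 260.
    printf("sizeof(MAX_PATH) = %zu\n", sizeof(MAX_PATH)); // prints 4
    printf("MAX_PATH         = %d\n", MAX_PATH);          // prints 260
    return 0;
}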

op_ici_install() function call within RTI callback causing OPNET to crash

I use OPNET in conjunction with other simulators for co-simulation under the High Level Architecture (HLA).
Upon receiving co-simulation messages from other simulators (interaction receive / attribute update), the callback routine attempts to schedule a remote interrupt with an ICI installed.
However, the op_ici_install() call within the callback routine always results in a fatal crash with an Access Violation exception, so I suspect that op_ici_install() cannot be used from within an RTI callback.
Please suggest probable causes and workarounds.
Further experiments confirmed that op_ici_install() does not work inside an RTI callback. The workaround is to use another inter-process communication mechanism, op_ev_state_install(), which achieves pretty much the same thing.
Yes, that's right: don't use an ICI with the interrupt. Event state will provide what you want to do.

USBHIDManager HID, getReport() and setReport() On Mac Environment

We are trying to communicate with a USB HID device. This device works fine on Windows, where we can send a report and get a report back using WriteFile() and ReadFile().
On the Mac, we are trying to interface with the device using setReport() and getReport(), but getReport() does not return any data, only an error.
What is wrong in the application?
In order to make use of asynchronous behavior, the event source obtained using getAsyncEventSource must be added to a run loop.
The above note is part of the documentation comment for setReport(). You might need to learn the run loop mechanism of macOS first.
It's impossible to explain the whole mechanism here, but the following functions, in roughly this order, might help with your coding once you are familiar with run loops (try searching for "CFRunLoop" on Google):
CFRunLoopGetCurrent();
CFRunLoopAddSource(CFRunLoopRef rl, CFRunLoopSourceRef source, CFStringRef mode);
CFRunLoopRun();
CFRunLoopStop(CFRunLoopRef rl); // I usually call this in the callback method
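A rough sketch of how those calls fit together (the eventSource parameter is assumed to be the CFRunLoopSourceRef you obtained from the device interface's getAsyncEventSource; names and placement here are illustrative):

#include <CoreFoundation/CoreFoundation.h>

// Block on the current thread's run loop until the HID report arrives.
void WaitForReport(CFRunLoopSourceRef eventSource)
{
    // 1. Attach the HID event source to this thread's run loop.
    CFRunLoopAddSource(CFRunLoopGetCurrent(), eventSource, kCFRunLoopDefaultMode);

    // 2. Issue the asynchronous setReport()/getReport() call here,
    //    registering a completion callback.

    // 3. Run the loop; it blocks until CFRunLoopStop() is called.
    CFRunLoopRun();

    // 4. Detach the source once the transfer is finished.
    CFRunLoopRemoveSource(CFRunLoopGetCurrent(), eventSource, kCFRunLoopDefaultMode);
}

// Inside the report-completion callback, stop the loop so WaitForReport()
// can return:
//     CFRunLoopStop(CFRunLoopGetCurrent());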

Windows: TCP/IP: force close connection: avoid memleaks in kernel/user-level

A question for Windows network programming experts.
When I use pseudo-code like this:
reconnect:
    s = socket(...);
    // more code...

read_reply:
    recv(...);
    // merge received data
    if (high_level_protocol_error) {
        // whoops, there was a deviation from protocol, like overflow
        // need to reset connection and discard data right now!
        closesocket(s);
        goto reconnect;
    }
When I call closesocket(), does the kernel unassociate and free all the data "physically" received from the NIC (it must already be there in kernel memory, waiting for user level to read it with recv())? Logically it should, since the data is no longer associated with any internal object, right?
I don't really want to waste an unknown amount of time on a clean shutdown like "call recv() until it returns an error". That makes no sense: what if it never returns an error, say the server keeps sending data forever and never closes the connection, however bad that behaviour would be?
I'm asking because I don't want my application to cause memory leaks anywhere. Is this way of forcibly resetting a connection that is still expected to deliver an unknown amount of data correct?
Optional addition to the question: if this method is considered correct for Windows, can it also be considered correct (with closesocket() changed to close()) for a UNIX-compliant OS?
Kernel drivers in Windows (or any OS really), including tcpip.sys, are supposed to avoid memory leaks in all circumstances, regardless of what you do in user mode. I would think that the developers have charted the possible states, including error states, to make sure that resources aren't leaked. As for user mode, I'm not exactly sure but I wouldn't think that resources are leaked in your process either.
Sockets are just file objects in Windows. When you close the last handle to a file, the I/O manager sends an IRP_MJ_CLEANUP request to the driver that owns the file to clean up the resources associated with it. The receive buffers associated with the socket would be freed along with the file object.
It does say in the closesocket documentation that pending operations are canceled but that async operations may complete after the function returns. It sounds like closing the socket while in use is a supported scenario and wouldn't lead to a memory leak.
There will be no leak, and you are under no obligation to read the stream to EOS before closing. If the sender is still sending after you close, it will eventually get a 'connection reset'.
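If you also want the close to abort the connection immediately with an RST rather than a graceful FIN (a related technique, not something the answers above require), the usual approach is SO_LINGER with a zero timeout before closesocket(); a minimal Winsock sketch:

#include <winsock2.h>
#pragma comment(lib, "ws2_32.lib")

// Abortive close: discard unread data and send an RST to the peer.
void abort_connection(SOCKET s)
{
    struct linger lng;
    lng.l_onoff  = 1;   // enable linger
    lng.l_linger = 0;   // zero timeout => hard reset on closesocket()
    setsockopt(s, SOL_SOCKET, SO_LINGER, (const char*)&lng, sizeof(lng));
    closesocket(s);
}

The same SO_LINGER option exists on POSIX sockets, so the equivalent applies with close() on a UNIX-compliant OS.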
