I have some old code that sends binary data to the browser through the Response object's BinaryWrite() method (Classic ASP). It passes the data to BinaryWrite() in 4MB chunks, but now I'm wondering whether that ever worked and whether BinaryWrite() is even designed to handle successive chunks of data (or whether it should be called, at most, once per page request).
I found this link that describes how the "Response Buffering Limit" should be increased, and increasing it seems to have solved the issues I was seeing (without using my chunking code at all).
https://learn.microsoft.com/en-us/troubleshoot/iis/http-500-response-binarywrite
This is the old code in question:
HRESULT STDMETHODCALLTYPE CQVMActiveHost::WriteData (const VOID* pcvData, DWORD cbData, __out DWORD* pcbWritten)
{
    HRESULT hr;
    DISPPARAMS dispParams = {0};
    VARIANT vWrite = {0}, vResult = {0};

    Check(LoadResponseObject());

    dispParams.cArgs = 1;
    dispParams.rgvarg = &vWrite;

    if(m_fSupportsBinary)
    {
        SAFEARRAYBOUND Bound;
        DWORD cbRemaining = cbData;

        Bound.lLbound = 0;
        vWrite.vt = VT_ARRAY | VT_UI1;

        while(0 < cbRemaining)
        {
            PVOID pbPtr;

            Bound.cElements = min(cbRemaining, 4 * 1024 * 1024);
            vWrite.parray = SafeArrayCreate(VT_UI1, 1, &Bound);
            CheckAlloc(vWrite.parray);

            SafeArrayAccessData(vWrite.parray, &pbPtr);
            CopyMemory(pbPtr, pcvData, Bound.cElements);
            SafeArrayUnaccessData(vWrite.parray);

            VariantClear(&vResult);
            Check(m_pResponse->Invoke(m_dispidBinaryWrite, IID_NULL, LOCALE_SYSTEM_DEFAULT, DISPATCH_METHOD, &dispParams, &vResult, NULL, NULL));

            SafeArrayDestroy(vWrite.parray);
            vWrite.parray = NULL;

            pcvData = reinterpret_cast<const BYTE*>(pcvData) + Bound.cElements;
            cbRemaining -= Bound.cElements;
        }

        vWrite.vt = VT_EMPTY;
    }
    else
        // (the non-binary fallback path and the rest of the function are omitted here)
I've seen a couple different behaviors with the old code. In some tests, the first call to BinaryWrite() succeeded, but subsequent calls failed with the "exception occurred" HRESULT. In other tests, the calls seemed to succeed, but the browser didn't receive any data.
Are there any scenarios where it would make sense to make multiple calls to BinaryWrite() with chunked data?
Or should I always increase the "Response Buffering Limit" value to more than 4MB and just make a single call to BinaryWrite() with the full data?
Thanks!
I have to wonder whether the Response.Buffer property was false when I originally wrote the code above. What I've found is the following:
The Response.BinaryWrite() method MAY be called multiple times per page request.
If a large amount of data is to be returned to the client, then split the data into multiple calls to Response.BinaryWrite().
The "Response Buffering Limit" value in IIS (for ASP) is 4MB by default.
If Response.Buffer is true, then multiple calls to Response.BinaryWrite() may be made until the total data reaches the "Response Buffering Limit" value. At that point, Response.Flush() MUST be called. Otherwise, attempting to send more data results in error 0x80020009.
If Response.Buffer is false, then do NOT call Response.Flush(), but do split the data into multiple (smaller) calls to Response.BinaryWrite().
As an example, I was trying to send a 12MB file using multiple calls to Response.BinaryWrite() with each chunk being 4MB. Buffering was enabled, so the first call succeeded, but the next call failed. Raising the "Response Buffering Limit" to 16MB "solved" the issue but increased the buffering allocation for ASP.
Ultimately, I've modified my chunking code to first query the Response.Buffer property. The data is always sent in smaller fragments to Response.BinaryWrite(), but Response.Flush() is also called if buffering is enabled.
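For illustration, here is a rough sketch of that chunk-and-flush flow against the Response dispatch interface. This is a sketch only, not the actual code: the helper name, the 256 KB chunk size, and the dispidBuffer/dispidFlush DISPIDs are assumptions (they would be resolved with GetIDsOfNames just like m_dispidBinaryWrite above).

#include <windows.h>

// Sketch: send the payload in chunks that stay well under the ASP buffering limit,
// and call Response.Flush() between chunks when Response.Buffer is true.
HRESULT WriteChunkedWithFlush(IDispatch* pResponse, DISPID dispidBinaryWrite,
                              DISPID dispidBuffer, DISPID dispidFlush,
                              const BYTE* pbData, DWORD cbData)
{
    DISPPARAMS noArgs = {0};
    VARIANT vBuffer;
    VariantInit(&vBuffer);

    // Query the Response.Buffer property to decide whether Flush() is required.
    HRESULT hr = pResponse->Invoke(dispidBuffer, IID_NULL, LOCALE_SYSTEM_DEFAULT,
                                   DISPATCH_PROPERTYGET, &noArgs, &vBuffer, NULL, NULL);
    BOOL fBuffered = SUCCEEDED(hr) && vBuffer.vt == VT_BOOL && vBuffer.boolVal == VARIANT_TRUE;

    const DWORD cbChunk = 256 * 1024;   // well under the default 4MB buffering limit
    DWORD cbRemaining = cbData;
    hr = S_OK;

    while (SUCCEEDED(hr) && 0 < cbRemaining)
    {
        DWORD cb = (cbRemaining < cbChunk) ? cbRemaining : cbChunk;

        // Wrap the next chunk in a SAFEARRAY of VT_UI1, as BinaryWrite expects.
        SAFEARRAYBOUND bound = { cb, 0 };
        VARIANT vWrite;
        VariantInit(&vWrite);
        vWrite.vt = VT_ARRAY | VT_UI1;
        vWrite.parray = SafeArrayCreate(VT_UI1, 1, &bound);
        if (vWrite.parray == NULL)
            return E_OUTOFMEMORY;

        PVOID pv;
        SafeArrayAccessData(vWrite.parray, &pv);
        CopyMemory(pv, pbData, cb);
        SafeArrayUnaccessData(vWrite.parray);

        DISPPARAMS writeParams = { &vWrite, NULL, 1, 0 };
        VARIANT vResult;
        VariantInit(&vResult);
        hr = pResponse->Invoke(dispidBinaryWrite, IID_NULL, LOCALE_SYSTEM_DEFAULT,
                               DISPATCH_METHOD, &writeParams, &vResult, NULL, NULL);
        VariantClear(&vResult);
        VariantClear(&vWrite);   // frees the SAFEARRAY

        // With buffering on, flush each chunk so the buffer never hits the limit.
        if (SUCCEEDED(hr) && fBuffered)
        {
            VariantInit(&vResult);
            hr = pResponse->Invoke(dispidFlush, IID_NULL, LOCALE_SYSTEM_DEFAULT,
                                   DISPATCH_METHOD, &noArgs, &vResult, NULL, NULL);
            VariantClear(&vResult);
        }

        pbData += cb;
        cbRemaining -= cb;
    }
    return hr;
}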
Finally, don't set the Content-Length header. The browser may not know how many bytes will be downloaded, but it will receive the file correctly without manually setting that header. Setting that header breaks the download.
And the final ASP script:
function GoDownloadFile (strPath)
{
    var idxName = strrchr(strPath, '/');
    var strFolder = left(strPath, idxName + 1);
    var strFile = right(strPath, len(strPath) - (idxName + 1));

    var oFS = Security.GetChannel().OpenFileSystem();
    oFS.Folder = strFolder;

    Response.ContentType = Host.GetContentType(strFile);
    Response.AddHeader("Content-Disposition", "attachment; FileName=\"" + strFile + "\"");

    Host.BinaryWrite(oFS.ReadFile(strFile));
}
When the code executes ReadFile, the call never returns. Previously the handle was not being saved and passed from function to function correctly. I changed how the handle is stored, and it now works with all of the other calls in the program except the read. I have looked at and compared all of the variables used and they check out.
Here is the code. instance->hMaster holds the handle from the create call. I added a GetCommState call before the read, and it executes correctly.
ATCA_STATUS swi_silab_receive_byte(ATCASWIMaster_t* instance, uint8_t* data)
{
    uint8_t retries = 3;
    DWORD NoBytesRead = 0;      // Bytes read by ReadFile()
    uint8_t SerialBuffer;       // Buffer to send and receive data
    DWORD Byte_count = (DWORD)sizeof(SerialBuffer);
    BOOL status = FALSE;

    while ((retries > 0) && (NoBytesRead < 1))
    {
        // Read data and store it in the buffer
        status = GetCommState(instance->hMaster, &instance->dcbMaster);
        if (status == 0)
            printf(" init GetCommState failed\n");

        status = ReadFile(instance->hMaster, &SerialBuffer, Byte_count, &NoBytesRead, NULL);
        retries--;
    }

    if (status == FALSE)
    {
        printf_s("\nError! in ReadFile()\n\n");
        return ATCA_TIMEOUT;
    }
    else
    {
        printf("Read Success Serial Buffer = %x\n", SerialBuffer);
        *data = SerialBuffer;
        //printf("Read Success Data = %x\n", *data);
        return ATCA_SUCCESS;
    }
}
I am happy to report that there is no issue in the code. Thanks to Zhu Song, who suggested reading the ReadFile remarks: if there is no data to read, ReadFile will simply wait. A check with a logic analyzer showed that the write executes but doesn't actually put anything on the wire, hence the read has nothing to read.
Thanks to everyone who commented.
According to ReadFile:
The ReadFile function returns when one of the following conditions occur:
The number of bytes requested is read.
A write operation completes on the write end of the pipe.
An asynchronous handle is being used and the read is occurring asynchronously.
An error occurs.
To cancel all pending asynchronous I/O operations, use either:
CancelIo—this function only cancels operations issued by the calling thread for the specified file handle.
CancelIoEx—this function cancels all operations issued by the threads for the specified file handle.
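Separately, for a serial handle like the one in the question, read timeouts can keep ReadFile from waiting forever when no byte arrives. This is a sketch, not part of the original code, using the standard SetCommTimeouts call; the timeout values are arbitrary examples:

#include <windows.h>

// Configure the serial handle so ReadFile returns (with zero bytes read)
// instead of blocking indefinitely when the other side sends nothing.
BOOL ConfigureSerialReadTimeouts(HANDLE hMaster)
{
    COMMTIMEOUTS timeouts = {0};
    timeouts.ReadIntervalTimeout         = 50;    // max ms allowed between two received bytes
    timeouts.ReadTotalTimeoutConstant    = 100;   // fixed ms added to each read's total budget
    timeouts.ReadTotalTimeoutMultiplier  = 10;    // ms added per requested byte
    timeouts.WriteTotalTimeoutConstant   = 100;
    timeouts.WriteTotalTimeoutMultiplier = 10;
    return SetCommTimeouts(hMaster, &timeouts);
}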
I am using the win32 function GetRegionData(...) to extract the exact rectangles which make up the invalidated paint region in response to a WM_PAINT message.
The following code works correctly and the second call to GetRegionData succeeds.
DWORD uRegionSize = GetRegionData(hRgn, sizeof(RGNDATA), NULL);  // Send NULL request to get the storage size
RGNDATA* pData = (RGNDATA*)(new char[uRegionSize]);              // Allocate space for the region data
pData->rdh.dwSize = uRegionSize;

DWORD uSizeCheck = GetRegionData(hRgn, uRegionSize, pData);
if (uSizeCheck != uRegionSize) {
    // FAIL!
    delete[] pData;
    return;
}

// ... do stuff with rectangles ...
But when I moved the data buffer to a preallocated member array (instead of allocating it on the heap), GetRegionData fails every time, returning 0.
In my header:
char UpdateRegionData[LOTS_MORE_BYTES_THAN_NEEDED];
In my cpp:
DWORD uRegionSize = GetRegionData(hRgn, sizeof(RGNDATA), NULL);  // Send NULL request to get the storage size
RGNDATA* pData2 = (RGNDATA*)UpdateRegionData;
pData2->rdh.dwSize = uRegionSize;

DWORD uSizeCheck = GetRegionData(hRgn, uRegionSize, pData2);
if (uSizeCheck != uRegionSize) {
    // FAIL!
    return;
}
The only difference between the two versions is how the memory is allocated, yet the second one fails. GetLastError() returns code 183, which is ERROR_ALREADY_EXISTS and doesn't seem to make much sense here.
Thanks to Raymond for pointing out the size error; that was indeed an error, but it was not the cause of the issue. The actual cause was byte alignment. The project I am working on has its structure packing set to 1 byte by default. Once I specified 4-byte alignment for the buffer using __declspec(align(4)), the problem was solved.
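A rough sketch of the combined fix (illustrative only; the LOTS_MORE_BYTES_THAN_NEEDED size, the static buffer, and the wrapper function are placeholders, not code from the project):

#include <windows.h>

#define LOTS_MORE_BYTES_THAN_NEEDED (64 * 1024)   // placeholder size from the question

// Force at least 4-byte alignment on the buffer (the project packs structures to 1 byte),
// and set dwSize to the header size rather than the whole buffer size.
__declspec(align(4)) static char UpdateRegionData[LOTS_MORE_BYTES_THAN_NEEDED];

void ProcessUpdateRegion(HRGN hRgn)
{
    DWORD uRegionSize = GetRegionData(hRgn, 0, NULL);   // NULL buffer: ask for the required size
    if (uRegionSize == 0 || uRegionSize > sizeof(UpdateRegionData))
        return;

    RGNDATA* pData = reinterpret_cast<RGNDATA*>(UpdateRegionData);
    pData->rdh.dwSize = sizeof(RGNDATAHEADER);

    if (GetRegionData(hRgn, uRegionSize, pData) == 0)
        return;   // FAIL!

    const RECT* pRects = reinterpret_cast<const RECT*>(pData->Buffer);
    for (DWORD i = 0; i < pData->rdh.nCount; ++i)
    {
        // ... do stuff with pRects[i] ...
    }
}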
I have a page blob containing effectively log data. Everything works fine until the log fills up past 2 MB.
When Reading, I'm using the OpenReadAsync method to get a stream from which I read data out of. Prior to calling OpenReadAsync, I set StreamMinimumReadSizeInBytes to 2MB (2 * 1024 * 1024).
After opening the stream, I use the following method to read data out.
public IEnumerable<object> Read(Stream pageAlignedEventStream, long? maxBytes = null)
{
    while (pageAlignedEventStream.Position < (maxBytes ?? pageAlignedEventStream.Length))
    {
        byte[] bytesToReadBuffer = new byte[LongZero.Length];
        pageAlignedEventStream.Read(bytesToReadBuffer, 0, LongZero.Length);
        long bytesToRead = BitConverter.ToInt64(bytesToReadBuffer, 0);

        if (bytesToRead == 0)
        {
            yield break;
        }

        if (bytesToRead < 0)
        {
            throw new InvalidOperationException("Invalid size specification. Stream may be corrupted.");
        }

        if (bytesToRead > Int32.MaxValue)
        {
            throw new InvalidOperationException("Payload size is too large.");
        }

        byte[] payload = new byte[bytesToRead];
        int read = pageAlignedEventStream.Read(payload, 0, (int)bytesToRead);
        if (read != bytesToRead)
        {
            // when it fails: read == 503, bytesToRead == 3575, position == 2MB (2*1024*1024)
            throw new InvalidOperationException("Did not read expected number of bytes.");
        }

        yield return this.EventSerializer.DeserializeFromStream(new MemoryStream(payload, false));

        var paddedSpaceToSkip = PagesRequired(bytesToRead) * PageSizeBytes - bytesToRead - LongZero.Length;
        pageAlignedEventStream.Position += paddedSpaceToSkip;
    }

    yield break;
}
As noted in the comments in the code, the failure happens when the position reaches the 2MB I specified. The read fails to pull additional bytes before returning and only reads 503 bytes instead of the expected 3575 bytes.
My expectation was that as I read past the buffer size, it would download more data.
I found a similar issue on Azure Feedback, but the linked issue involves a non-power-of-2 buffer size, and 2MB is definitely a power of 2.
I could fetch all of the data (3 MB) stored in a page blob even though I had set the StreamMinimumReadSizeInBytes property of CloudPageBlob to 2 MB.
CloudStorageAccount storageAccount = CloudStorageAccount.Parse(
CloudConfigurationManager.GetSetting("StorageConnectionString"));
CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();
CloudBlobContainer container = blobClient.GetContainerReference("mycontainername");
container.CreateIfNotExists();
CloudPageBlob pageBlob = container.GetPageBlobReference("mypageblob");
pageBlob.StreamMinimumReadSizeInBytes = 2 * 1024 * 1024;
Task<Stream> pageAlignedEventStream = pageBlob.OpenReadAsync();
The read fails to pull additional bytes before returning and only reads 503 bytes instead of the expected 3575 bytes.
The return value of Stream.Read can be less than the number of bytes requested if that many bytes are not currently available, or zero if the end of the stream has been reached. Please debug your code to trace how the paddedSpaceToSkip variable changes and check whether your logic behaves as expected.
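In other words, Read() must be called in a loop until the full payload has arrived. As a language-agnostic illustration of that pattern (a sketch of the general idea, not the Azure SDK API, shown here in C++):

#include <cstddef>
#include <istream>

// Keep calling read() until 'count' bytes have arrived or the stream ends;
// a single read() call is allowed to return fewer bytes than requested.
bool ReadExactly(std::istream& stream, char* dest, std::size_t count)
{
    std::size_t total = 0;
    while (total < count)
    {
        stream.read(dest + total, static_cast<std::streamsize>(count - total));
        std::size_t got = static_cast<std::size_t>(stream.gcount());
        if (got == 0)
            return false;   // end of stream (or error) before the payload was complete
        total += got;
    }
    return true;
}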
I need to play raw PCM data (16-bit signed) using CoreAudio on OS X. I receive it over the network using a UDP socket (on the sender side the data is captured from a microphone).
The problem is that all I hear is a short crackling noise and then only silence.
I'm trying to play the data using an AudioQueue. I set it up like this:
// Set up stream format fields
AudioStreamBasicDescription streamFormat;
streamFormat.mSampleRate = 44100;
streamFormat.mFormatID = kAudioFormatLinearPCM;
streamFormat.mFormatFlags = kLinearPCMFormatFlagIsBigEndian | kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
streamFormat.mBitsPerChannel = 16;
streamFormat.mChannelsPerFrame = 1;
streamFormat.mBytesPerPacket = 2 * streamFormat.mChannelsPerFrame;
streamFormat.mBytesPerFrame = 2 * streamFormat.mChannelsPerFrame;
streamFormat.mFramesPerPacket = 1;
streamFormat.mReserved = 0;
OSStatus err = noErr;

// create the audio queue
err = AudioQueueNewOutput(&streamFormat, MyAudioQueueOutputCallback, myData, NULL, NULL, 0, &myData->audioQueue);
if (err) {
    PRINTERROR("AudioQueueNewOutput");
    myData->failed = true;
    result = false;
}

// allocate audio queue buffers
for (unsigned int i = 0; i < kNumAQBufs; ++i) {
    err = AudioQueueAllocateBuffer(myData->audioQueue, kAQBufSize, &myData->audioQueueBuffer[i]);
    if (err) {
        PRINTERROR("AudioQueueAllocateBuffer");
        myData->failed = true;
        result = false;
        break;
    }
}

// listen for kAudioQueueProperty_IsRunning
err = AudioQueueAddPropertyListener(myData->audioQueue, kAudioQueueProperty_IsRunning, MyAudioQueueIsRunningCallback, myData);
if (err) {
    PRINTERROR("AudioQueueAddPropertyListener");
    myData->failed = true;
    result = false;
}
MyAudioQueueOutputCallback is:
void MyAudioQueueOutputCallback(void* inClientData,
                                AudioQueueRef inAQ,
                                AudioQueueBufferRef inBuffer)
{
    // this is called by the audio queue when it has finished decoding our data.
    // The buffer is now free to be reused.
    MyData* myData = (MyData*)inClientData;
    unsigned int bufIndex = MyFindQueueBuffer(myData, inBuffer);

    // signal waiting thread that the buffer is free.
    pthread_mutex_lock(&myData->mutex);
    myData->inuse[bufIndex] = false;
    pthread_cond_signal(&myData->cond);
    pthread_mutex_unlock(&myData->mutex);
}
MyAudioQueueIsRunningCallback is:
void MyAudioQueueIsRunningCallback(void* inClientData,
                                   AudioQueueRef inAQ,
                                   AudioQueuePropertyID inID)
{
    MyData* myData = (MyData*)inClientData;

    UInt32 running = 0;
    UInt32 size = sizeof(running);   // must be initialized to the buffer size before the call
    OSStatus err = AudioQueueGetProperty(inAQ, kAudioQueueProperty_IsRunning, &running, &size);
    if (err) {
        PRINTERROR("get kAudioQueueProperty_IsRunning");
        return;
    }

    if (!running) {
        pthread_mutex_lock(&myData->mutex);
        pthread_cond_signal(&myData->done);
        pthread_mutex_unlock(&myData->mutex);
    }
}
and MyData is:
struct MyData
{
    AudioQueueRef audioQueue;                                     // the audio queue
    AudioQueueBufferRef audioQueueBuffer[kNumAQBufs];             // audio queue buffers
    AudioStreamPacketDescription packetDescs[kAQMaxPacketDescs];  // packet descriptions for enqueuing audio

    unsigned int fillBufferIndex;   // the index of the audioQueueBuffer that is being filled
    size_t bytesFilled;             // how many bytes have been filled
    size_t packetsFilled;           // how many packets have been filled

    bool inuse[kNumAQBufs];         // flags to indicate that a buffer is still in use
    bool started;                   // flag to indicate that the queue has been started
    bool failed;                    // flag to indicate an error occurred
    bool finished;                  // flag to indicate that termination is requested

    pthread_mutex_t mutex;          // a mutex to protect the inuse flags
    pthread_mutex_t mutex2;         // a mutex to protect the AudioQueue buffer
    pthread_cond_t cond;            // a condition variable for handling the inuse flags
    pthread_cond_t done;            // a condition variable signaled when the queue stops running
};
I'm sorry if I posted too much code; I hope it helps you understand exactly what I'm doing.
My code is mostly based on this code, which is a version of the AudioFileStreamExample from the Mac Developer Library adapted to work with CBR data.
I also looked at this post, tried the AudioStreamBasicDescription described there, and tried changing my flags to little- or big-endian. It didn't work.
I looked at several other posts here and elsewhere describing similar problems, and checked things like the byte order of my PCM data. (I just can't post more than two links.)
Please help me understand what I'm doing wrong! Maybe I should abandon this approach and use Audio Units right away? I'm a complete newcomer to CoreAudio and hoped that the mid-level CoreAudio API would be enough to solve this problem.
P.S. Sorry for my English; I did my best.
I hope you've solved this one on your own already, but for the benefit of other people who are having this problem, I'll post up an answer.
The problem is most likely that once an audio queue is started, time keeps moving forward even if you stop enqueueing buffers. When you enqueue a buffer, it is scheduled right after the previously enqueued buffer. This means that if you don't stay ahead of where the audio queue is playing, you will end up enqueueing buffers with timestamps in the past; the audio queue then goes silent while the isRunning property remains true.
To work around this, you have a couple of options. The simplest in theory is to never fall behind on submitting buffers, but since you are using UDP, there is no guarantee that you will always have data to submit.
Another option is to keep track of which sample you should be playing and submit a buffer of silence whenever you need to fill a gap. This works well if your source data carries timestamps you can use to calculate how much silence you need, but ideally you wouldn't need to do this.
Instead, you should calculate the timestamp for each buffer yourself. Rather than AudioQueueEnqueueBuffer, use AudioQueueEnqueueBufferWithParameters and make sure the timestamp you pass is ahead of where the queue currently is. You'll also have to keep track of the system time when you started the queue so you can calculate the correct timestamp for each buffer you submit. If your source data has timestamp values, you can use them to calculate the buffer timestamps as well.
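One variant of that idea, sketched below, schedules by sample time rather than system time; it is not code from the answer, and the helper name and the running sample counter are assumptions:

#include <AudioToolbox/AudioToolbox.h>

// Assumed state: the next sample time to play, reset to 0 when the queue starts
// and advanced by every buffer we enqueue (including any silence we insert).
static Float64 gNextSampleTime = 0;

OSStatus EnqueueAtSampleTime(AudioQueueRef queue,
                             AudioQueueBufferRef buffer,
                             UInt32 framesInBuffer)
{
    AudioTimeStamp startTime = {0};
    startTime.mSampleTime = gNextSampleTime;
    startTime.mFlags = kAudioTimeStampSampleTimeValid;

    OSStatus err = AudioQueueEnqueueBufferWithParameters(
        queue, buffer,
        0, NULL,        // no packet descriptions (constant-bitrate PCM)
        0, 0,           // no frames trimmed at the start or end
        0, NULL,        // no parameter events
        &startTime,     // play this buffer at an explicit sample time
        NULL);          // actual start time not needed here

    if (err == noErr)
        gNextSampleTime += framesInBuffer;  // keep the timeline moving even across gaps
    return err;
}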
I am trying to understand how NPN_RequestRead should be used when writing an NPAPI plugin. The documentation looked pretty clear at first, but I still cannot get the plugin to work.
Here is my goal: implement a JPEG 2000 plugin using NPAPI. For a proper implementation I need random access to the JPEG 2000 stream. In my case the images are huge (100000x100000 RGB) but can be displayed efficiently using only the first few bytes (thanks to multiresolution!).
As far as I can tell, I cannot make the plugin stop the GET. I cannot use local file access in Firefox since it appears to be broken, but I can use a local Apache 2 installation and have the plugin be called with NPP_NewStream( ... seekable=true ) mode:
$ HEAD http://localhost/test.jp2 | grep Accept-Ranges
Accept-Ranges: bytes
Since seekable is set to true, I create the plugin with *stype = NP_SEEK. It seems that from this point I should be able to stop the GET with:
NPError NPP_NewStream(NPP instance, NPMIMEType type, NPStream* stream, NPBool seekable, uint16_t* stype)
[...]
NPByteRange range;
range.offset = 0;
range.length = 0;
range.next = NULL;
NPError e = s_pBrowserFunctions->requestread(stream, &range);
However, requestread returns an error. I've had a little more luck with:
int32_t NPP_Write(NPP instance, NPStream* stream, int32_t offset, int32_t len, void* buffer)
[...]
NPByteRange range;
range.offset = 0;
range.length = 0;
range.next = NULL;
NPError e = s_pBrowserFunctions->requestread(stream, &range);
But still, from the network console I can see that the entire stream has been downloaded.
Does anyone have a minimal example of a working NPAPI plugin using the NPN_RequestRead API?
You're requesting 0 bytes (.length = 0).
Firefox will therefore skip the range. Since there are no other valid ranges, there are no actual requests and hence Firefox returns an error.
From nsPluginStreamListenerPeer.cpp, unrelated parts stripped:
int32_t requestCnt = 0;
for (NPByteRange * range = aRangeList; range != nullptr; range = range->next) {
  // XXX zero length?
  if (!range->length)
    continue;

  // ...

  requestCnt++;
}

// ...

*numRequests = requestCnt;

// ...

if (numRequests == 0)
  return NS_ERROR_FAILURE;
So, you'll need to actually request something!
(Admittedly, the implementation looks kinda broken/limited; e.g., you cannot request bytes=0- with it.)
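For completeness, a minimal sketch of a non-empty request (the 4096-byte length is an arbitrary example, not a value from the question):

// Request the first 4 KB of the stream; the length must be non-zero or the
// browser will skip the range entirely (see the Firefox source above).
NPByteRange range;
range.offset = 0;
range.length = 4096;   // arbitrary non-zero length
range.next   = NULL;

NPError e = s_pBrowserFunctions->requestread(stream, &range);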