NPAPI: Basic usage of NPN_RequestRead - Firefox

I am trying to understand how NPN_RequestRead should be used when writing an NPAPI plugin. The documentation looked pretty clear at first, but I still cannot make the plugin work.
Here is my goal: implement a JPEG 2000 plugin using NPAPI. For a proper implementation I need random access to the JPEG 2000 stream. In my case the images are huge (100000x100000 RGB), but they can be displayed efficiently using just the first few bytes (thanks to multiresolution!).
As far as I can tell, I cannot make the plugin stop the GET. I cannot use local file access in Firefox since it appears to be broken. However, I can use a local apache2 installation and have the plugin be called with NPP_NewStream( ... seekable=true ) mode:
$ HEAD http://localhost/test.jp2 | grep Accept-Ranges
Accept-Ranges: bytes
Since seekable is set to true, I create the plugin with *stype = NP_SEEK. It seems that from this point I should be able to stop the GET with:
NPError NPP_NewStream(NPP instance, NPMIMEType type, NPStream* stream, NPBool seekable, uint16_t* stype)
[...]
NPByteRange range;
range.offset = 0;
range.length = 0;
range.next = NULL;
NPError e = s_pBrowserFunctions->requestread(stream, &range);
However, requestread returns an error. I've had a little more luck with:
int32_t NPP_Write(NPP instance, NPStream* stream, int32_t offset, int32_t len, void* buffer)
[...]
NPByteRange range;
range.offset = 0;
range.length = 0;
range.next = NULL;
NPError e = s_pBrowserFunctions->requestread(stream, &range);
But still, from the network console I can see that the entire stream has been downloaded.
Does anyone have a minimal example of a working NPAPI plugin using the NPN_RequestRead API?

You're requesting 0 bytes (.length = 0).
Firefox will therefore skip the range. Since there are no other valid ranges, there are no actual requests and hence Firefox returns an error.
From nsPluginStreamListenerPeer.cpp, unrelated parts stripped:
int32_t requestCnt = 0;
for (NPByteRange * range = aRangeList; range != nullptr; range = range->next) {
    // XXX zero length?
    if (!range->length)
        continue;
    // ...
    requestCnt++;
}
// ...
*numRequests = requestCnt;
// ...
if (numRequests == 0)
    return NS_ERROR_FAILURE;
So, you'll need to actually request something!
(Admittedly, the implementation looks kinda broken/limited, e.g. you cannot request bytes=0- with it.)
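For illustration, a minimal fix to the snippet from the question (hedged: the 1024 is an arbitrary nonzero byte count chosen for this sketch, not anything the API mandates):
// Sketch: request a nonzero range so Firefox doesn't drop it.
// s_pBrowserFunctions and stream are taken from the question's code.
NPByteRange range;
range.offset = 0;
range.length = 1024; // any nonzero length; 0 makes Firefox skip the range
range.next = NULL;
NPError e = s_pBrowserFunctions->requestread(stream, &range);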


Is there a way to read the NET_BUFFER at once?

I made an NDIS 6 network filter driver and am reading packets.
When I use an Intel I350 NIC, MmGetMdlByteCount returns 9014 bytes.
This value is the same as the MTU size, so I can read the data at once.
However, when using the X540 NIC, MmGetMdlByteCount returns 2048 bytes.
So I have to read the MDL over and over again. Why is this happening?
Is there a way to read the data at once on the X540 NIC?
I want to reduce the repetition, because I think it will take longer if I fetch the data in several passes.
Below is a part of my source code.
PVOID vpByTmpData = NULL;
for( pNbMdl = NET_BUFFER_CURRENT_MDL( pNetBuffer );
     pNbMdl != NULL && ulDataLength > 0;
     pNbMdl = NDIS_MDL_LINKAGE( pNbMdl ) )
{
    ulBytesToCopy = MmGetMdlByteCount( pNbMdl );
    if( ulBytesToCopy == 0 )
        continue;
    vpByTmpData = MmGetSystemAddressForMdlSafe( pNbMdl, NormalPagePriority );
    if( !vpByTmpData )
    {
        bRet = FALSE;
        __leave;
    }
    if( ulBytesToCopy > ulDataLength )
        ulBytesToCopy = ulDataLength;
    NdisMoveMemory( &baImage[ulMemIdxOffset], (PBYTE)(vpByTmpData), ulBytesToCopy );
    ulMemIdxOffset += ulBytesToCopy;
}
Please help me.
What you're seeing is a result of how the NIC hardware physically works. Different hardware will use different buffer layout strategies. NDIS does not attempt to force every NIC to use the same strategy, since that would reduce performance on some NICs. Unfortunately for you, that means the complexity of dealing with different buffers gets pushed upwards into NDIS filter & protocol drivers.
You can use NdisGetDataBuffer to do some of this work for you. Internally, NdisGetDataBuffer works like this:
if MmGetSystemAddressForMdl fails:
    return NULL;
else if the payload is already contiguous in memory:
    return a pointer to that directly;
else if you provided your own buffer:
    copy the payload into your buffer;
    return a pointer to your buffer;
else:
    return NULL;
So you can use NdisGetDataBuffer to obtain a contiguous view of the payload. The simplest way to use it is this:
UCHAR ScratchBuffer[MAX_MTU_SIZE];
UCHAR *Payload = NdisGetDataBuffer(NetBuffer, NetBuffer->DataLength, ScratchBuffer, 1, 0);
if (!Payload) {
    return NDIS_STATUS_RESOURCES; // very unlikely: MmGetSystemAddressForMdl failed
}
memcpy(baImage, Payload, NetBuffer->DataLength);
But this can have a double-copy in some cases. (Exercise to test your understanding: when would there be a double-copy?) For slightly better performance, you can avoid the double-copy with this trick:
UCHAR *Payload = NdisGetDataBuffer(NetBuffer, NetBuffer->DataLength, baImage, 1, 0);
if (!Payload) {
    return NDIS_STATUS_RESOURCES; // very unlikely: MmGetSystemAddressForMdl failed
}
// Did NdisGetDataBuffer already copy the payload into my flat buffer?
if (Payload != baImage) {
    // If not, copy from the MDL to my flat buffer now.
    memcpy(baImage, Payload, NetBuffer->DataLength);
}
You haven't included a complete code sample, but I suspect there may be some bugs in your code. I don't see any attempt to handle NetBuffer->CurrentMdlOffset. While this is usually zero, it's not always zero, so your code would not always be correct.
Similarly, it looks like the copy isn't correctly constrained by ulDataLength. You would need a ulDataLength -= ulBytesToCopy in there somewhere to fix this.
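Putting both fixes into your loop, a sketch (untested, keeping your variable names, and assuming ulDataLength starts as the total number of bytes to copy) might look like this:
// Sketch: same loop with CurrentMdlOffset handled and ulDataLength decremented.
ULONG ulMdlOffset = NET_BUFFER_CURRENT_MDL_OFFSET( pNetBuffer );
for( pNbMdl = NET_BUFFER_CURRENT_MDL( pNetBuffer );
     pNbMdl != NULL && ulDataLength > 0;
     pNbMdl = NDIS_MDL_LINKAGE( pNbMdl ) )
{
    ulBytesToCopy = MmGetMdlByteCount( pNbMdl );
    if( ulBytesToCopy <= ulMdlOffset )
    {
        ulMdlOffset -= ulBytesToCopy; // this MDL is entirely consumed by the offset
        continue;
    }
    vpByTmpData = MmGetSystemAddressForMdlSafe( pNbMdl, NormalPagePriority );
    if( !vpByTmpData )
    {
        bRet = FALSE;
        __leave;
    }
    ulBytesToCopy -= ulMdlOffset;               // usable bytes in this MDL
    if( ulBytesToCopy > ulDataLength )
        ulBytesToCopy = ulDataLength;
    NdisMoveMemory( &baImage[ulMemIdxOffset],
                    (PBYTE)vpByTmpData + ulMdlOffset,
                    ulBytesToCopy );
    ulMdlOffset = 0;                            // offset only applies to the first MDL
    ulMemIdxOffset += ulBytesToCopy;
    ulDataLength -= ulBytesToCopy;              // the missing decrement
}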
I'm very sympathetic to how tricky it is to navigate NBLs, NBs, and MDLs -- my first NIC driver included a nasty bug in calculating MDL offsets. I have some MDL-handling code internally -- I will try to clean it up a bit and publish it at https://github.com/microsoft/ndis-driver-library/ in the next few days. I'll update this post if I get it published. I think there's clearly a need for some nice, reusable and well-documented sample code to just copy (subsets of) an MDL chain into a flat buffer, or vice-versa.
Update: Refer to MdlCopyMdlChainAtOffsetToFlatBuffer in mdl.h

Playback raw PCM from network using AudioQueue in CoreAudio

I need to play raw PCM data (16 bit signed) using CoreAudio on OS X. I get it from the network using a UDP socket (on the sender side the data is captured from a microphone).
The problem is that all I hear now is a short crackling noise and then only silence.
I'm trying to play the data using an AudioQueue. I set it up like this:
// Set up stream format fields
AudioStreamBasicDescription streamFormat;
streamFormat.mSampleRate = 44100;
streamFormat.mFormatID = kAudioFormatLinearPCM;
streamFormat.mFormatFlags = kLinearPCMFormatFlagIsBigEndian | kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
streamFormat.mBitsPerChannel = 16;
streamFormat.mChannelsPerFrame = 1;
streamFormat.mBytesPerPacket = 2 * streamFormat.mChannelsPerFrame;
streamFormat.mBytesPerFrame = 2 * streamFormat.mChannelsPerFrame;
streamFormat.mFramesPerPacket = 1;
streamFormat.mReserved = 0;
OSStatus err = noErr;
// create the audio queue
err = AudioQueueNewOutput(&streamFormat, MyAudioQueueOutputCallback, myData, NULL, NULL, 0, &myData->audioQueue);
if (err)
{ PRINTERROR("AudioQueueNewOutput"); myData->failed = true; result = false; }
// allocate audio queue buffers
for (unsigned int i = 0; i < kNumAQBufs; ++i) {
    err = AudioQueueAllocateBuffer(myData->audioQueue, kAQBufSize, &myData->audioQueueBuffer[i]);
    if (err)
    { PRINTERROR("AudioQueueAllocateBuffer"); myData->failed = true; break; result = false; }
}
// listen for kAudioQueueProperty_IsRunning
err = AudioQueueAddPropertyListener(myData->audioQueue, kAudioQueueProperty_IsRunning, MyAudioQueueIsRunningCallback, myData);
if (err)
{ PRINTERROR("AudioQueueAddPropertyListener"); myData->failed = true; result = false; }
MyAudioQueueOutputCallback is:
void MyAudioQueueOutputCallback(void* inClientData,
                                AudioQueueRef inAQ,
                                AudioQueueBufferRef inBuffer)
{
    // this is called by the audio queue when it has finished decoding our data.
    // The buffer is now free to be reused.
    MyData* myData = (MyData*)inClientData;
    unsigned int bufIndex = MyFindQueueBuffer(myData, inBuffer);
    // signal waiting thread that the buffer is free.
    pthread_mutex_lock(&myData->mutex);
    myData->inuse[bufIndex] = false;
    pthread_cond_signal(&myData->cond);
    pthread_mutex_unlock(&myData->mutex);
}
MyAudioQueueIsRunningCallback is:
void MyAudioQueueIsRunningCallback(void* inClientData,
                                   AudioQueueRef inAQ,
                                   AudioQueuePropertyID inID)
{
    MyData* myData = (MyData*)inClientData;
    UInt32 running;
    UInt32 size;
    OSStatus err = AudioQueueGetProperty(inAQ, kAudioQueueProperty_IsRunning, &running, &size);
    if (err) { PRINTERROR("get kAudioQueueProperty_IsRunning"); return; }
    if (!running) {
        pthread_mutex_lock(&myData->mutex);
        pthread_cond_signal(&myData->done);
        pthread_mutex_unlock(&myData->mutex);
    }
}
and MyData is:
struct MyData
{
    AudioQueueRef audioQueue;                                    // the audio queue
    AudioQueueBufferRef audioQueueBuffer[kNumAQBufs];            // audio queue buffers
    AudioStreamPacketDescription packetDescs[kAQMaxPacketDescs]; // packet descriptions for enqueuing audio
    unsigned int fillBufferIndex; // the index of the audioQueueBuffer that is being filled
    size_t bytesFilled;           // how many bytes have been filled
    size_t packetsFilled;         // how many packets have been filled
    bool inuse[kNumAQBufs];       // flags to indicate that a buffer is still in use
    bool started;                 // flag to indicate that the queue has been started
    bool failed;                  // flag to indicate an error occurred
    bool finished;                // flag to indicate that termination is requested
    pthread_mutex_t mutex;        // a mutex to protect the inuse flags
    pthread_mutex_t mutex2;       // a mutex to protect the AudioQueue buffer
    pthread_cond_t cond;          // a condition variable for handling the inuse flags
    pthread_cond_t done;          // a condition variable signalled when playback is done
};
I'm sorry if I posted too much code - I hope it helps to understand exactly what I do.
My code is mostly based on this code, which is a version of the AudioFileStreamExample from the Mac Developer Library adapted to work with CBR data.
I also looked at this post and tried the AudioStreamBasicDescription described there, and I tried changing my flags to little or big endian. It didn't work.
I looked at some other posts here and elsewhere about similar problems; I checked the byte order of my PCM data, for example. I just can't post more than two links.
Please, anyone, help me understand what I'm doing wrong! Maybe I should abandon this approach and use Audio Units right away? I'm just a newbie in CoreAudio and hoped that the mid-level of CoreAudio would help me solve this problem.
P.S. Sorry for my English, I tried as well as I can.
I hope you've solved this one on your own already, but for the benefit of other people who are having this problem, I'll post up an answer.
The problem is most likely that once an Audio Queue is started, time continues moving forward, even if you stop enqueueing buffers. But when you enqueue a buffer, it is enqueued with a timestamp right after the previously enqueued buffer. This means that if you don't stay ahead of where the audio queue is playing, you will end up enqueueing buffers with timestamps in the past, so the audio queue will go silent while the isRunning property is still true.
To work around this, you have a couple of options. The simplest in theory would be to never fall behind on submitting buffers. But since you are using UDP, there is no guarantee that you will always have data to submit.
Another option is to keep track of which sample you should be playing and submit an empty buffer of silence whenever you need to fill a gap. This option works well if your source data has timestamps that you can use to calculate how much silence you need. But ideally, you wouldn't need to do this.
Instead, you should be calculating the timestamp for each buffer using the system time. Instead of AudioQueueEnqueueBuffer, you'll need to use AudioQueueEnqueueBufferWithParameters. You just need to make sure the timestamp is ahead of where the queue currently is. You'll also have to keep track of what the system time was when you started the queue, so you can calculate the correct timestamp for each buffer you are submitting. If you have timestamp values on your source data, you should be able to use them to calculate the buffer timestamps as well.
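For illustration, a sketch of what that call might look like (hedged: samplesEnqueued is a hypothetical running count of frames submitted since AudioQueueStart, and buf stands for the buffer being enqueued; neither is in the question's code):
// Hypothetical sketch: enqueue at an explicit sample time so a network
// stall can't leave us enqueueing buffers with timestamps in the past.
AudioTimeStamp startTime = {0};
startTime.mFlags = kAudioTimeStampSampleTimeValid;
startTime.mSampleTime = samplesEnqueued;   // frames since the queue started
OSStatus err = AudioQueueEnqueueBufferWithParameters(
    myData->audioQueue, buf,
    0, NULL,                               // no packet descriptions for CBR PCM
    0, 0,                                  // no trimming
    0, NULL,                               // no parameter events
    &startTime, NULL);
if (err == noErr)
    samplesEnqueued += buf->mAudioDataByteSize / streamFormat.mBytesPerFrame;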

Cannot get OpenAL to play sound

I've searched the net, I've searched here. I've found code that I could compile, and it works fine, but for some reason my code won't produce any sound. I'm porting an old game to the PC (Windows), and I'm trying to make it as authentic as possible, so I want to use generated wave forms. I've pretty much copied and pasted the working code (only adding in multiple voices), and it still won't work (even though the exact same code for a single voice works fine). I know I'm missing something obvious, but I just cannot figure out what. Any help would be appreciated, thank you.
First some notes... I was looking for something that would let me keep the original methodology. The original system used paired bytes for music (sound effects - only 2 - were handled in code): a time byte that counted down every time the routine was called, and a note byte that was played until the time reached zero. This was done by patching into the interrupt vector; Windows doesn't allow that, so I set up a timer routine that accomplishes the same thing. The timer kicks in, updates the display, and then runs the music sequence. I set this up with a defined time so that I only have one place to adjust the timing (to get it as close as possible to the original sequence). The music is a generated wave form (I've double-checked the math, and even examined the generated data in debug mode), and it looks good. The sequence looks good, but doesn't actually produce sound.
I tried SDL2 first, and its method of only playing one sound doesn't work for me; also, unless I make the sample duration extremely short (and the sound produced this way is awful), I can't match the timing (it plays the entire sample through its own interrupt without letting me make adjustments). Also, blending the 3 voices together (when they all run with different timings) is a mess. Most of the other engines I examined work in much the same way; they want to use their own callback interrupt and won't let me tweak it appropriately.
This is why I started working with OpenAL. It allows multiple voices (sources) and lets me set the timings myself. On advice from several forums, I set it up so that the sample lengths are all multiples of full cycles.
Anyway, here's the code.
int main(int argc, char* argv[])
{
    FreeConsole(); //Get rid of the DOS console, don't need it
    if (InitLog() < 0) return -1; //Start logging
    UINT_PTR tim = NULL;
    SDL_Event event;
    InitVideo(false); //Set to window for now, will put options in later
    curmusic = 5;
    InitAudio();
    SetTimer(NULL, tim, _FREQ_, TimerProc);
    SDL_PollEvent(&event);
    while (event.type != SDL_KEYDOWN) SDL_PollEvent(&event);
    SDL_Quit();
    return 0;
}
void CALLBACK TimerProc(HWND hWind, UINT Msg, UINT_PTR idEvent, DWORD dwTime)
{
    RenderOutput();
    PlayMusic();
    //UpdateTimer();
    //RotateGate();
    return;
}
void InitAudio(void)
{
    ALCdevice *dev;
    ALCcontext *cxt;
    Log("Initializing OpenAL Audio\r\n");
    dev = alcOpenDevice(NULL);
    if (!dev) {
        Log("Failed to open an audio device\r\n");
        exit(-1);
    }
    cxt = alcCreateContext(dev, NULL);
    alcMakeContextCurrent(cxt);
    if (!cxt) {
        Log("Failed to create audio context\r\n");
        exit(-1);
    }
    alGenBuffers(4, Buffer);
    if (alGetError() != AL_NO_ERROR) {
        Log("Error during buffer creation\r\n");
        exit(-1);
    }
    alGenSources(4, Source);
    if (alGetError() != AL_NO_ERROR) {
        Log("Error during source creation\r\n");
        exit(-1);
    }
    return;
}
void PlayMusic()
{
    static int oldsong, ofset, mtime[4];
    double freq;
    ALuint srate = 44100;
    ALuint voice, i, note, len, hold;
    short buf[4][_BUFFSIZE_];
    bool test[4] = {false, false, false, false};
    if (curmusic != oldsong) {
        oldsong = (int)curmusic;
        if (curmusic > 0)
            ofset = moffset[(curmusic - 1)];
        for (voice = 1; voice < 4; voice++)
            alSourceStop(Source[voice]);
        mtime[voice] = 0;
        return;
    }
    if (curmusic == 0) return;
    //Only 3 voices for music, but have
    for (voice = 0; voice < 3; voice++) { // 4 set aside for eventual sound effects
        if (mtime[voice] == 0) { //is note finished
            alSourceStop(Source[voice]); //It is, so stop the channel (source)
            mtime[voice] = music[ofset++]; //Get the next duration
            if (mtime[voice] == 0) { oldsong = 0; return; } //zero marks end, so restart
            note = music[ofset++]; //Get the next note
            if (note > 127) { //Old HW data could only
                if (note == 255) note = 127; //use values 128 - 255 (255 = 127)
                freq = (15980 / (voice + (int)(voice / 3))) / (256 - note); //freq of note
                len = (ALuint)(srate / freq); //A single cycle of that freq.
                hold = len;
                while (len < (srate / (1000 / _FREQ_))) len += hold; //Multiply till 1 interrupt cycle
                while (len > _BUFFSIZE_) len -= hold; //Don't overload buffer
                if (len == 0) len = _BUFFSIZE_; //Just to be safe
                for (i = 0; i < len; i++) //calculate sine wave and put in buffer
                    buf[voice][i] = (short)((32760 * sin((2 * M_PI * i * freq) / srate)));
                alBufferData(Buffer[voice], AL_FORMAT_MONO16, buf[voice], len, srate);
                alSourcei(openAL.Source[i], AL_LOOPING, AL_TRUE);
                alSourcei(Source[i], AL_BUFFER, Buffer[i]);
                alSourcePlay(Source[voice]);
            }
        } else --mtime[voice];
    }
}
Well, it turns out there were 3 problems with my code. First, you have to link the built wave buffer to the AL generated buffer "before" you link the buffer to the source:
alBufferData(buffer, AL_FORMAT_MONO16, &wave_sample, sample_length * sizeof(short), frequency);
alSourcei(source, AL_BUFFER, buffer);
Also, in the above example, I multiplied sample_length by the number of bytes in each sample (in this case, sizeof(short)).
The final problem was that you need to un-link the buffer from the source before you change the buffer data:
alSourcei(source,AL_BUFFER,NULL);
The music would play, but not correctly until I added that line to the note change code.
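Putting the three fixes together, the per-note sequence becomes something like this (a sketch built from the question's arrays, untested):
// Detach, refill, re-attach, then play.
alSourceStop(Source[voice]);
alSourcei(Source[voice], AL_BUFFER, 0);                // un-link the old buffer first
alBufferData(Buffer[voice], AL_FORMAT_MONO16,
             buf[voice], len * sizeof(short), srate);  // size is in bytes, not samples
alSourcei(Source[voice], AL_LOOPING, AL_TRUE);
alSourcei(Source[voice], AL_BUFFER, Buffer[voice]);    // link data to buffer, then buffer to source
alSourcePlay(Source[voice]);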

AudioQueue callback is not firing uniformly to play back audio

In my app, I receive audio data over a socket in linear PCM format, at uniform intervals of approximately 50 ms.
I am using an AudioQueue to play it. I referred mostly to the AudioQueue SpeakHere example; the only difference is that I need to run it on Mac OS.
The following is the relevant piece of code.
Set up the AudioStreamBasicDescription format:
FillOutASBDForLPCM(sRecordFormat,
                   16000,
                   1,
                   16,
                   16,
                   false,
                   false);
Allocate buffers to hold and play the data:
for (int i = 0; i < kNumberBuffersPLyer; ++i) {
    XThrowIfError(AudioQueueAllocateBuffer(mQueue, bufferByteSize, &mBuffers[i]),
                  "AudioQueueAllocateBuffer failed");
}
where bufferByteSize is 640 and the number of buffers is 3.
To start the queue:
OSStatus errorCode = AudioQueueStart(mQueue,NULL);
Now, the thing is, I was expecting it to hit the callback automatically when it had played a buffer, but that wasn't happening.
So, as and when I get a buffer, I enqueue it; this is the code:
void AudioStream::startQueueIfNeeded() {
    SetLooping(true);
    // prime the queue with some data before starting
    for (int i = 0; i < kNumberBuffersPLyer; ++i)
    {
        AQBufferCallback(this, mQueue, mBuffers[0]);
        //enQueueBuffer(this, mQueue, mBuffers[i]);
    }
    // AudioSessionSetActive( true );
    OSStatus errorCode = AudioQueueStart(mQueue, NULL);
    mIsDone = false;
    mIsStarted = true;
}
I feel the buffer is proper, but I can't hear any sound. Can anyone tell me what I'm doing wrong?
Thanks in advance
Thanks for having a look at it. The problem was that I was running the AudioQueue in a C++-based thread class, and the thread was being terminated; when I took the queue out of the thread class, it worked fine.
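For anyone else who hits this: the queue dies with the thread that owns it. A hedged sketch of one way to keep the owning thread alive until playback finishes (audioThreadMain and the polling loop are illustrative, not from the original code):
// Hypothetical sketch: don't let the owning thread return while the
// queue is still playing.
static void* audioThreadMain(void* arg)
{
    AudioStream* stream = (AudioStream*)arg; // the question's class
    stream->startQueueIfNeeded();
    while (!stream->mIsDone)                 // assumes mIsDone is accessible here
        usleep(10 * 1000);                   // keep the thread (and queue) alive
    return NULL;
}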

Logging pointer values in Android native code

I'm developing an Android application with native code.
I don't know how to debug a shared library, so I decided to log the values from a pointer to LogCat.
I have this C++ code:
extern GLfloat* vertPos;
// More code here:
...
int i = 0;
for (i = 0; i < numVertices; i++)
{
    __android_log_print(ANDROID_LOG_VERBOSE, "initRendering-Vertices", "%d, %f", i, vertPos[i]);
}
numVertices is equal to 2472.
I'm getting something like this in LogCat:
12-11 08:17:35.354: VERBOSE/initRendering-Vertices(900): i = 614, value = 3.246999
12-11 08:17:35.354: VERBOSE/initRendering-Vertices(900): i = 924, value = -8.000200
I've lost 310 elements.
Is there any other way to see all of the pointer's elements?
Thanks.
The Android log implementation uses a 64KB circular buffer in the kernel. If you manage to write data to the log faster than "logcat" can read it out, you will start losing lines. Since logcat has to wait on adb to send output over USB to your workstation, it's not that hard to outrun it.
Use fopen to create a file called /sdcard/debug.txt, change the __android_log_print() to fprintf(), and then "adb pull" the output off after the run completes. That will ensure nothing is dropped.
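A hedged sketch of that workaround, reusing the loop from the question (the path and the minimal error handling are illustrative):
#include <stdio.h>

// Write every element to a file instead of the rate-limited log buffer.
FILE* fp = fopen("/sdcard/debug.txt", "w");
if (fp != NULL)
{
    for (int i = 0; i < numVertices; i++)
        fprintf(fp, "i = %d, value = %f\n", i, vertPos[i]);
    fclose(fp); // afterwards: adb pull /sdcard/debug.txt
}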
