I'm trying to understand Qt's serial port module and I'm not too familiar with how Qt handles asynchronous I/O. On Windows, the QSerialPort::writeData method places the data to be written in a ring buffer and then starts a single-shot QTimer to actually perform the write when its timeout signal fires:
qint64 QSerialPortPrivate::writeData(const char *data, qint64 maxSize)
{
    Q_Q(QSerialPort);
    writeBuffer.append(data, maxSize);
    if (!writeBuffer.isEmpty() && !writeStarted) {
        if (!startAsyncWriteTimer) {
            startAsyncWriteTimer = new QTimer(q);
            QObjectPrivate::connect(startAsyncWriteTimer, &QTimer::timeout, this, &QSerialPortPrivate::_q_startAsyncWrite);
            startAsyncWriteTimer->setSingleShot(true);
        }
        if (!startAsyncWriteTimer->isActive())
            startAsyncWriteTimer->start();
    }
    return maxSize;
}
The readData method doesn't use a timer in this way, instead calling ReadFileEx directly.
What does the single-shot timer accomplish versus just calling WriteFileEx?
There is a special case for a QTimer with an interval of 0: it fires once control returns to the event loop. The implementation on Unix/Linux does something similar, but instead of a QTimer it uses a QSocketNotifier subclass that gets called when the port becomes writable. Both implementations mean that the data is buffered and written out once control gets back to the main event loop.
There are two reasons that I can think of for doing this:
1. There is something different between the POSIX and Win32 serial APIs that requires the code to be structured this way. As far as I am aware, this is not the case.
2. What @Mike said in a comment: this allows data to be buffered before it is written.
The buffering seems like the most likely reason, as doing a syscall for each piece of data that you want to write would be a rather expensive operation.
Apple's documentation says:
Because your MIDIReadProc callback is invoked from a separate thread,
be aware of the synchronization issues when using data provided by
this callback.
Does this mean I should use @synchronized to do thread blocking for safety?
Or does this literally mean that synchronization/timing issues may happen?
I am currently trying to read a MIDI file and use a MIDIReadProc to trigger the note-on / note-off of a software synth based on MIDI events. I need this to be extremely reliable and perfectly in time. Right now, I am noticing that when I consume these MIDI events and write the audio to a buffer (all done from the MIDIReadProc), the timing is extremely sloppy and not sounding right at all. So I would like to know: what is the "proper" way to consume MIDI events from a MIDIReadProc?
Also, is a MIDIReadProc the only option for consuming MIDI events from a MIDI file?
Is there another option as far as setting up a virtual endpoint that could be directly consumed by my synthesizer? If so, how does that work exactly?
If you presume a function of this format to be the midiReadProc,
void midiReadProc(const MIDIPacketList *packetList,
                  void *readProcRefCon,
                  void *srcConnRefCon)
{
    MIDIPacket *packet = (MIDIPacket *)packetList->packet;
    int count = packetList->numPackets;
    for (int k = 0; k < count; k++) {
        Byte midiStatus  = packet->data[0];
        Byte midiChannel = midiStatus & 0x0F;
        Byte midiCommand = midiStatus >> 4;
        // parse MIDI messages, extract relevant information and pass it to the controller
        // controller must be visible from the midiReadProc
        packet = MIDIPacketNext(packet);   // advance once per packet, inside the loop
    }
}
then the MIDI client has to be declared in the controller; interpreted MIDI events get stored into the controller from the MIDI callback and are read by the audioRenderCallback() on each audio render cycle. This way you can minimize timing imprecision to the length of the audio buffer, which you can negotiate during AudioUnit setup to be as short as the system allows.
A controller can be an @interface myMidiSynthController : NSViewController you define, consisting of a matrix of MIDI channels and a pre-determined maximum polyphony per channel, among other relevant data such as interface elements, phase accumulators for each active voice, the AudioComponentInstance, etc. It would be wrong to resize the controller based on the midiReadProc() input. RAM is cheap nowadays.
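For illustration, here is a minimal sketch in C of such a hand-off, assuming a single-producer (MIDIReadProc) / single-consumer (render callback) lock-free ring buffer owned by the controller; the MidiEvent/MidiRing names and the size are hypothetical, not part of CoreMIDI:

#include <stdatomic.h>
#include <stdint.h>

/* Hypothetical event record and queue: one producer (the MIDI thread),
 * one consumer (the audio render callback), no locks. */
typedef struct {
    uint8_t  status, data1, data2;
    uint64_t timeStamp;             /* packet->timeStamp, for sample-accurate scheduling */
} MidiEvent;

#define RING_SIZE 1024              /* must be a power of two */

typedef struct {
    MidiEvent        slots[RING_SIZE];
    _Atomic uint32_t head;          /* advanced by the MIDIReadProc only */
    _Atomic uint32_t tail;          /* advanced by the render callback only */
} MidiRing;

/* Called from the MIDI thread (inside the MIDIReadProc). */
static int ring_push(MidiRing *r, MidiEvent ev)
{
    uint32_t head = atomic_load_explicit(&r->head, memory_order_relaxed);
    uint32_t tail = atomic_load_explicit(&r->tail, memory_order_acquire);
    if (head - tail == RING_SIZE)
        return 0;                   /* full: drop rather than block */
    r->slots[head & (RING_SIZE - 1)] = ev;
    atomic_store_explicit(&r->head, head + 1, memory_order_release);
    return 1;
}

/* Called from the audio render thread, once per render cycle. */
static int ring_pop(MidiRing *r, MidiEvent *out)
{
    uint32_t tail = atomic_load_explicit(&r->tail, memory_order_relaxed);
    uint32_t head = atomic_load_explicit(&r->head, memory_order_acquire);
    if (tail == head)
        return 0;                   /* empty */
    *out = r->slots[tail & (RING_SIZE - 1)];
    atomic_store_explicit(&r->tail, tail + 1, memory_order_release);
    return 1;
}

The render callback drains the queue at the start of each cycle and schedules the note-on/note-off events against the buffer it is about to produce.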
I'm using such MIDI callbacks for processing live input from MIDI devices. Concerning playback of MIDI files: if you want to process streams or files of arbitrary complexity, you may also run into surprises. The MIDI standard itself has timing features, which work as well as MIDI hardware allows. Once you read an entire file into memory, you can translate your data into whatever you want and use your own code for controlling sound synthesis.
Please take care not to use any code that would block the audio render thread (i.e. inside audioRenderCallback()) or do memory management on it.
You could use AVAudioEngine.musicSequence and prepare your audio unit graph, then use the MusicSequence API to load your GM file. That way you don't need to handle the timing yourself. Note that I have not done this myself so far, but I understand that in theory it should work like this.
After you instantiate your synthesizer audio unit, you attach and connect it to the AVAudioEngine graph.
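I have not verified this either, but for reference the plain C AudioToolbox route (MusicSequence plus MusicPlayer, without AVAudioEngine) looks roughly like the sketch below; the graph argument and the file path are placeholders:

#include <AudioToolbox/AudioToolbox.h>
#include <CoreFoundation/CoreFoundation.h>

/* Rough sketch: load a MIDI file into a MusicSequence, point it at an AUGraph
 * containing your synth unit, and let MusicPlayer handle the timing. */
static void playMidiFile(AUGraph graph)
{
    MusicSequence seq;
    MusicPlayer   player;
    CFURLRef url = CFURLCreateWithFileSystemPath(kCFAllocatorDefault,
                       CFSTR("/path/to/file.mid"), kCFURLPOSIXPathStyle, false);

    NewMusicSequence(&seq);
    MusicSequenceFileLoad(seq, url, kMusicSequenceFile_MIDIType, 0);
    MusicSequenceSetAUGraph(seq, graph);   /* route the tracks to the graph's synth */

    NewMusicPlayer(&player);
    MusicPlayerSetSequence(player, seq);
    MusicPlayerPreroll(player);
    MusicPlayerStart(player);

    CFRelease(url);
}

Error checking of the OSStatus return values is omitted for brevity.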
Does this mean I should use @synchronized to do thread blocking for safety?
The opposite of what you've said is true: you should certainly not lock in a realtime thread. The @synchronized directive will block if the resource is already locked. Consider using lock-free queues for realtime threads instead. See also Four common mistakes in audio development.
If you have to use CoreMIDI and MIDIReadProc, you can send MIDI commands to the synthesizer audio unit by calling MusicDeviceMIDIEvent right from your callback.
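A rough sketch of what that can look like from inside the read proc; the synthUnit handle and the forwardPacket helper are assumptions for illustration (e.g. something you hand in via readProcRefCon):

#include <AudioToolbox/AudioToolbox.h>
#include <CoreMIDI/CoreMIDI.h>

/* Forward one MIDI packet to a synthesizer audio unit.
 * Simplification: assumes a single channel-voice message per packet. */
static void forwardPacket(AudioUnit synthUnit, const MIDIPacket *packet)
{
    UInt32 status = packet->data[0];
    UInt32 data1  = packet->length > 1 ? packet->data[1] : 0;
    UInt32 data2  = packet->length > 2 ? packet->data[2] : 0;

    /* Offset 0 = play as soon as possible within the current render cycle. */
    MusicDeviceMIDIEvent(synthUnit, status, data1, data2, 0);
}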
I am aware that I certainly can't use msleep or usleep or any such function to introduce delays in a kernel ISR routine.
I have a kernel driver which has certain ISRs defined in it. In one of the ISR blocks I have to insert a delay on the order of milliseconds. Let's say:
{
    //A
    //here I need sleep
    //B
}
Can I use something like this:
{
    //A
    for (i = 0; i < 1000; i++);
    //B
}
Let's say my processor is running at 1 GHz; will the above for loop give me a delay of 1000 usecs, i.e. 1 ms?
You must not sleep inside an interrupt handler.
Furthermore, you should not wait for a long time inside an interrupt handler; this would block all processes and all other interrupts on the same CPU.
If your driver needs to do two things at different times, it should use a second interrupt or a timer to do the second thing, as in the sketch below.
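For example, a sketch of deferring the second part with a kernel timer could look like this; the handler and timer names are made up, and timer_setup() is the modern API (older kernels used init_timer()/setup_timer() instead):

#include <linux/interrupt.h>
#include <linux/timer.h>
#include <linux/jiffies.h>

static struct timer_list deferred_timer;            /* hypothetical names throughout */

/* Runs later, in timer (softirq) context, about 1 ms after the ISR armed it. */
static void do_part_b(struct timer_list *t)
{
    /* part B goes here */
}

static irqreturn_t my_irq_handler(int irq, void *dev_id)
{
    /* part A: acknowledge the hardware, capture whatever state part B needs */
    mod_timer(&deferred_timer, jiffies + msecs_to_jiffies(1));
    return IRQ_HANDLED;
}

/* in probe/init: timer_setup(&deferred_timer, do_part_b, 0); */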
I would be interested to hear about the reasons for having an intentional delay in an ISR. Generally speaking, it's a no-no. If you think you need one, then most probably it means that you need to rethink your code design.
As for introducing microscopic delays, one thing that I have used is cpu_relax(). This function is also used in the kernel to implement udelay() and ndelay() for some CPU architectures. I would advise you to take a look and see where and how this function is used in the Linux kernel; that might give you some ideas for your specific situation.
The udelay and ndelay functions implement busy-waiting delays, so you may use them in an ISR, as suggested by Tsyvarev.
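A minimal sketch of what that looks like inside an ISR (the handler name is hypothetical; note that the delay still burns the CPU for its full duration, so keep it as short as possible):

#include <linux/delay.h>        /* udelay(), mdelay() */
#include <linux/interrupt.h>

static irqreturn_t my_irq_handler(int irq, void *dev_id)
{
    /* A */
    mdelay(1);                  /* busy-waits ~1 ms; safe in atomic context, unlike msleep() */
    /* use udelay() for microsecond-scale waits; prefer mdelay() at the millisecond scale */
    /* B */
    return IRQ_HANDLED;
}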
I've used UMLet to draw some UML diagrams describing various entity relationships for each of the chapters of Linux Device Drivers 3Ed (LDD3), by Corbet, Rubini, Kroah-Hartman. The latest version of the diagrams can be found here:
Linux Device Drivers 3Ed UML Diagrams
I would like to ask for help understanding a scheduling problem which is documented in the Non-Blocking File IO Sequence Diagram(s) at the above link, in LDD3 on P156-158, and in particular in this code snippet from scull_getwritespace() (also see P156, but this code has been updated to use a mutex rather than a semaphore):
/* Wait for space for writing; caller must hold device semaphore. On
 * error the semaphore will be released before returning. */
static int scull_getwritespace(struct scull_pipe *dev, struct file *filp)
{
    while (spacefree(dev) == 0) { /* full */
        DEFINE_WAIT(wait);

        mutex_unlock(&dev->mutex);
        if (filp->f_flags & O_NONBLOCK)
            return -EAGAIN;
        PDEBUG("\"%s\" writing: going to sleep\n", current->comm);
        prepare_to_wait(&dev->outq, &wait, TASK_INTERRUPTIBLE);
        if (spacefree(dev) == 0)
            schedule();
        finish_wait(&dev->outq, &wait);
        if (signal_pending(current))
            return -ERESTARTSYS; /* signal: tell the fs layer to handle it */
        if (mutex_lock_interruptible(&dev->mutex))
            return -ERESTARTSYS;
    }
    return 0;
}
and in particular:
if (spacefree(dev) == 0)
    schedule();
The case of interest is this:
1. spacefree(dev) == 0 is true and the write process is about to call schedule().
2. Before schedule() is called, the read process, having consumed all the buffer data, issues wake_up_interruptible(&dev->outq), so the write process should be woken up and can produce more data. This sets the write process state back to TASK_RUNNING.
3. The write process calls schedule() and potentially goes to sleep.
The above is done to avoid race conditions.
Here are my questions:
1. Is it possible to modify the code/operation of the kernel so that I can guarantee that schedule() doesn't go to sleep but returns immediately from the call? I don't understand schedule() in sufficient detail to answer this question and any help would be appreciated. I think the answer is no, because the scheduler gets to choose what happens next and there may be software interrupts, tasklets or signals to process.
2. Is it possible to modify the code so that I can guarantee the write process runs before the read process is re-entered? Again, I think the answer might be no, but perhaps there are some possibilities with thread prioritisation.
I find the pictorial representations of the linux kernel entities very helpful in understanding the patterns in the kernel which greatly improves my coding productivity, but they are very tedious to generate by hand. In the interests of saving time has anybody else done something similar with specific reference to LDD3?
Thanks.
Not sure if you have had a chance to look into the kernel map: http://www.makelinux.net/kernel_map/
I'm handling a non-standard modem via serial port in an overlapped manner. Besides reading from and writing to the telecommunication line, I have to check the control lines like CTS and DSR using the WaitCommEvent() function.
DWORD EvtMask;
/// (some scopes/levels omitted)
const BOOL syncChange = WaitCommEvent(hFile, &EvtMask, &overlapped);
if (!syncChange) {
    assert(GetLastError() == ERROR_IO_PENDING);
    /// *background activity* probably writing into EvtMask
    /// until overlapped.hEvent gets signalled
}
In practically all cases the function call indicates *background activity*, so I have to wait on overlapped.hEvent. Since I'm also waiting for events from alternative sources (like IPC caused by user input, or program termination), I use the WaitForMultipleObjects() function. But if the blocking wait finishes for reasons other than control-line changes, how can I stop the background activity on EvtMask? The code I'm basing this on currently uses SetCommMask(hFile, 0), but I did not find a reliable reference for this being appropriate.
I also observe cases where changes to control lines are not supported properly (driver?, VM?), so I have to do a sliced wait with in-between checking.
What must be done to safely leave the scope where the variable EvtMask is declared?
The code you have is correct, and fully supported by the documentation, which clearly says:
If a process attempts to change the device handle's event mask by using the SetCommMask function while an overlapped WaitCommEvent operation is in progress, WaitCommEvent returns immediately.
I've used this fact on both "real" serial ports, and USB virtual serial port emulations, and it works reliably.
(In my particular case, I was watching for EV_TXEMPTY so that I could guarantee a minimal separation between certain transmissions on the wire)
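For what it's worth, a sketch of that pattern might look like the following; hQuitEvent, the chosen event mask and the helper name are assumptions for illustration, and hFile must have been opened with FILE_FLAG_OVERLAPPED:

#include <windows.h>
#include <assert.h>

/* Wait for a control-line change or an external quit request, then make sure the
 * overlapped WaitCommEvent is finished before EvtMask/ov leave scope. */
static void waitForCommOrQuit(HANDLE hFile, HANDLE hQuitEvent)
{
    DWORD EvtMask = 0;
    OVERLAPPED ov = {0};
    ov.hEvent = CreateEvent(NULL, TRUE, FALSE, NULL);   /* manual-reset, unsignalled */

    SetCommMask(hFile, EV_CTS | EV_DSR);

    if (!WaitCommEvent(hFile, &EvtMask, &ov)) {
        assert(GetLastError() == ERROR_IO_PENDING);

        HANDLE handles[2] = { ov.hEvent, hQuitEvent };
        DWORD which = WaitForMultipleObjects(2, handles, FALSE, INFINITE);

        if (which != WAIT_OBJECT_0) {
            /* Leaving for another reason: clearing the mask makes the pending
             * WaitCommEvent complete immediately, per the documentation quoted above. */
            SetCommMask(hFile, 0);
        }
        /* Reap the operation so the driver stops writing into EvtMask. */
        DWORD unused;
        GetOverlappedResult(hFile, &ov, &unused, TRUE);
    }

    CloseHandle(ov.hEvent);
}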
Under what circumstances can the uart_ops.start_tx() operation be called twice in rapid succession in a Linux 2.6 serial driver?
There should be no issue calling it in rapid succession any number of times. If done by competing processors, start_tx() does a spinlock on port->lock. If done sequentially, the uart-specific driver checks to see if it has already been started. (From linux-2.6.27.8/drivers/mmc/card/sdio_uart.c):
if (!(port->ier & UART_IER_THRI)) {
    port->ier |= UART_IER_THRI;
    sdio_out(port, UART_IER, port->ier);
}
From a higher-level perspective, the serial core checks to see if the transmitter is already started, as well as for appropriateness of starting it (linux-2.6.27.8/drivers/serial/serial_core.c):
static void __uart_start(struct tty_struct *tty)
{
    struct uart_state *state = tty->driver_data;
    struct uart_port *port = state->port;

    if (!uart_circ_empty(&state->info->xmit) && state->info->xmit.buf &&
        !tty->stopped && !tty->hw_stopped)
        port->ops->start_tx(port);
}
I am working in this area on an older kernel, 2.6.10. I too have seen two (or more) calls to the driver's start_tx function for one supposed 'write' from user space. Via stty, I turned off any 'opost' processing in the tty layer. After that, I saw only a single start_tx for each write. I suspect that the line discipline layer is adding calls to start_tx.
Anecdotal I know, but thought it might help.