Why do we need to design a state machine to implement a playback app?

Can't we just write some play(), stop(), and pause() functions instead?
What's the advantage of using a state machine to do this?

The main advantage in your case is that each state determines what actions are valid and how the state machine reacts to them.
In a very simple model, your state machine would have the states:
Playing
Pausing
Not Playing
Each of these states defines which actions are valid. It makes no sense to allow a call to Play() in state Playing.
It also makes no sense to allow Stop() in state Not Playing.
The state machine helps you figure out which actions are valid in which state.
By the way: Every program is kind of a state machine, but not every program is modeled after one. By creating the model your code automatically becomes more structured and readability increases.
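As a minimal C++ sketch of that three-state model (the Player class and the choice of rejecting invalid calls with an exception are illustrative, not prescribed by the answer):

#include <stdexcept>

enum class State { NotPlaying, Playing, Pausing };

class Player {
public:
    void play() {
        if (state_ == State::Playing)
            throw std::logic_error("Play() is not valid in state Playing");
        state_ = State::Playing;        // start or resume the audio pipeline here
    }
    void pause() {
        if (state_ != State::Playing)
            throw std::logic_error("Pause() is only valid in state Playing");
        state_ = State::Pausing;
    }
    void stop() {
        if (state_ == State::NotPlaying)
            throw std::logic_error("Stop() is not valid in state Not Playing");
        state_ = State::NotPlaying;
    }
private:
    State state_ = State::NotPlaying;   // each state defines which actions are valid
};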

Related

How to cleanly tell a task to die in FreeRTOS

I'm making a light with an ESP32 and the HomeKit library I chose uses FreeRTOS and esp-idf, which I'm not familiar with.
Currently, I have a function that's called whenever the colour of the light should be changed, which just changes it in a step. I'd like to have it fade between colours instead, which will require a function that runs for a second or two. Having this block the main execution of the program would obviously make it quite unresponsive, so I need to have it run as a task.
The issue I'm facing is that I only want one copy of the fading function to be running at a time, and if it's called a second time before it's finished, the first copy should exit (without waiting for the full fade time) before starting the second copy.
I found vTaskDelete, but if I were to just kill the fade function at an arbitrary point, some variables and the LEDs themselves will be in an unknown state. To get around this, I thought of using a 'kill flag' global variable which the fading function will check on each of its loops.
Here's the pseudocode I'm thinking of:
#include <stdbool.h>
#include "freertos/FreeRTOS.h"
#include "freertos/task.h"

static volatile bool kill_flag = false;

static void fade(void *param);

void update_light(void) {
    kill_flag = true;
    // wait_for_fade_to_die -- what should go here?
    xTaskCreate(fade, "fade", 2048, NULL, 1, NULL);
}

static void fade(void *param) {
    kill_flag = false;
    for (int i = 0; i < 1000; i++) {
        // (fading code involving local and global variables)
        // ...
        if (kill_flag) {
            vTaskDelete(NULL);
        }
        vTaskDelay(2 / portTICK_RATE_MS);
    }
    vTaskDelete(NULL);   // a FreeRTOS task must never return
}
My main questions are:
Is this the best way to do this or is there a better option?
If this is ok, what is the equivalent of my wait_for_fade_to_die? I haven't been able to find anything from a brief look around, but I'm new to FreeRTOS.
I'm sorry to say that I have the impression that you are pretty much on the wrong track trying to solve your concrete problem.
You are writing that you aren't familiar with FreeRTOS and esp-idf, so I would suggest you first familiarize yourself with FreeRTOS (or with the idea of an RTOS in general, or with any other RTOS, transferring that knowledge to FreeRTOS, ...).
In doing so, you will notice that (apart from some specific examples) a task is something completely different from a function that has been written for sequential "batch" processing of a single job.
Model and Theory
Usually, the most helpful model to think of when designing a good RTOS task inside an embedded system is that of a state machine that receives events to which it reacts, possibly changing its state and/or executing some actions whose starting points and payload depend on the event the state machine received as well as the state it was in when the event is detected.
While there is no event, the task shall not idle but block at some barrier created by the RTOS function which is supposed to deliver the next relevant event.
Implementing such a task means programming a task function that consists of a short initialisation block followed by an infinite loop that first calls the RTOS library to get the next logical event (see right below...) and then runs the code to process that logical event.
Now, the logical event doesn't have to be represented by an RTOS event (though this can work in simple cases); it can also be implemented by an RTOS queue, mailbox or other mechanism.
In such a design pattern, the tasks of your RTOS-based software exist "forever", waiting for the next job to perform.
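A bare-bones sketch of that task shape, assuming the logical events arrive through a FreeRTOS queue (the LogicalEvent struct, sizes and names are illustrative):

#include "freertos/FreeRTOS.h"
#include "freertos/queue.h"
#include "freertos/task.h"

struct LogicalEvent { int kind; int payload; };   // whatever your events carry

static QueueHandle_t event_queue;

static void worker_task(void *)
{
    // short initialisation block goes here
    for (;;) {
        LogicalEvent ev;
        // block at the RTOS barrier until the next logical event is delivered
        if (xQueueReceive(event_queue, &ev, portMAX_DELAY) == pdTRUE) {
            // react: possibly change state and/or start an action
        }
    }
}

void start_worker(void)
{
    event_queue = xQueueCreate(8, sizeof(LogicalEvent));
    xTaskCreate(worker_task, "worker", 2048, nullptr, 5, nullptr);
}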
How to apply the theory to your problem
You have to check how to decompose your programming problem into different tasks.
Currently, I have a function that's called whenever the colour of the light should be changed, which just changes it in a step. I'd like to have it fade between colours instead, which will require a function that runs for a second or two. Having this block the main execution of the program would obviously make it quite unresponsive, so I need to have it run as a task.
I hope that I understood the goal of your application correctly:
The system is driving multiple light sources of different colours, and some "request source" is selecting the next colour to be displayed.
When a different colour is requested, the change shall not be performed instantaneously but there shall be some "fading" over a certain period of time.
The system (and its request source) shall remain responsive even while a fade takes place, possibly changing the direction of the fade in the middle.
I think you didn't say where the colour requests are coming from.
Therefore, I am guessing that this request source could be some button(s), a serial interface or a complex algorithm (or random number generator?) running in the background. It doesn't really matter now.
The issue I'm facing is that I only want one copy of the fading function to be running at a time, and if it's called a second time before it's finished, the first copy should exit (without waiting for the full fade time) before starting the second copy.
What you are essentially looking for is how to change the state (here: the target colour of the light fading) at any time, so that an old, ongoing fade procedure becomes obsolete but the output (= light) behaviour does not change in a discontinuous way.
I suggest you set up the following tasks:
One (or more) task(s) to generate the colour-changing requests from ...whatever you need here.
One task to evaluate which colour blend shall be output currently (a sketch of this task follows the list). That task shall be ready to receive:
a new-colour request (changing the "target colour" state without changing the current colour blend value)
a periodical tick event (e.g., from a hardware or software timer) that causes the colour blend value to be updated in the direction of the current target colour
Zero, one or multiple tasks to implement the colour blend value by driving the output features of the system (e.g., configuring GPIOs or PWMs, or transmitting information through a serial connection...we don't know).
If adjusting the output part is just assigning some registers, the "Zero" is the right thing for you here. Otherwise, try "one or multiple".
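Here is a hedged C++ sketch of the blend-evaluation task from the list above, assuming a single FreeRTOS queue carries both kinds of events; the ColourEvent type, the one-step-per-tick fade and all names are illustrative choices, not requirements:

#include <cstdint>
#include "freertos/FreeRTOS.h"
#include "freertos/queue.h"
#include "freertos/task.h"

// Two kinds of logical events delivered through one queue (names are illustrative).
enum class EventType { NewTargetColour, FadeTick };
struct ColourEvent { EventType type; uint8_t r, g, b; };

static QueueHandle_t colour_queue;   // created elsewhere with xQueueCreate()

static uint8_t stepTowards(uint8_t current, uint8_t target) {
    if (current < target) return current + 1;
    if (current > target) return current - 1;
    return current;
}

static void blend_task(void *) {
    uint8_t cur_r = 0, cur_g = 0, cur_b = 0;   // current blend value (state)
    uint8_t tgt_r = 0, tgt_g = 0, tgt_b = 0;   // target colour (state)
    for (;;) {
        ColourEvent ev;
        if (xQueueReceive(colour_queue, &ev, portMAX_DELAY) != pdTRUE) continue;
        switch (ev.type) {
        case EventType::NewTargetColour:
            // only the target changes; an ongoing fade simply bends towards it,
            // so there is never anything to kill
            tgt_r = ev.r; tgt_g = ev.g; tgt_b = ev.b;
            break;
        case EventType::FadeTick:
            // periodic tick (e.g. from a timer): move one step towards the target
            cur_r = stepTowards(cur_r, tgt_r);
            cur_g = stepTowards(cur_g, tgt_g);
            cur_b = stepTowards(cur_b, tgt_b);
            // drive the output here (GPIO/PWM/LED driver), or hand off to an output task
            break;
        }
    }
}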
What to do now
I found vTaskDelete, but if I were to just kill the fade function at an arbitrary point, some variables and the LEDs themselves will be in an unknown state. To get around this, I thought of using a 'kill flag' global variable which the fading function will check on each of its loops.
Just don't do that.
Killing a task that didn't prepare from the inside for being killed causes such a trail of follow-up requirements to manage and clean up output state in your software that you will end up wondering why you even started using an RTOS.
I do know that starting to design and program this way when you have never done so before is a huge endeavour, like a jump into cold water.
Please trust me, this way you will learn the basics of how to design and implement great embedded systems.
Professional education companies offer courses about RTOS integration, responsive programming and state machine design for several thousands of $/€/£, which is a good indicator of the value of this kind of working knowledge.
Good luck!
Along that way, you'll come across a lot of detail questions which you are welcome to post to this board (or find earlier answers on).

How does an OS or a systems program wait for user input?

I come from the world of web programming, where usually the server sets a superglobal variable through the specified method (get, post, etc.) that makes available the data a user inputs into a field. Another way is to use AJAX to register a callback method for an event that the AJAX XMLHttpRequest object will initiate once notified by the browser (I'm assuming...). So I guess my question would be: is there some sort of dispatch interface that a systems programmer's code must interact with vicariously to execute in response to user input, or does the programmer control the "waiting" process directly? And if there is a dispatch, is there a loop structure in the OS that waits for a particular event to occur?
I was prompted to ask this question here because I'm in a basic programming logic class and the professor won't answer such a "sophisticated" question as this one. My book gives a vague pseudocode example like:
//start
sentinel_val = 'stop';
get user_input;
while (user_input not equal to sentinel_val)
{
// do something.
get user_input;
}
//stop
This example leads me to believe that if no input is received from the user, the loop will continue to repeat the "do something" sequence with the old (or no) input until the new input magically appears, and then it will repeat again with that or a null value. It seems the book has tried to use the example of priming and reading from a file to convey how a program would get data from event-driven input, no?
I'm confused :(
At the lowest level, input to the computer is asynchronous: it happens via "interrupts", which is basically something external to the CPU (a keyboard controller, say) sending a signal to the CPU that says "stop what you're doing and accept this data". (It's more complex than that, but this is the general idea.) So the CPU stops, grabs the keystroke, puts it in a buffer to be read, and then continues doing what it was doing before the interrupt.
Very similar things happen with inbound network traffic, and the results of reading from a disk, etc.
At a higher level, it gets more dependent on the operating system or framework that you're using.
With keyboard input, there might be a process (application, basically) that is blocked, waiting for user input. That "block" doesn't mean the computer just sits there waiting; it lets other processes run instead. But when the keyboard result comes in, it will wake up the process that was waiting for it.
From the point of view of that waiting process, it called some function get_next_character() and that function returned with the character. Etc.
Frankly, how all this stuff ties together is super interesting and useful to understand. :)
An OS is driven by hardware events (called interrupts). An OS does not busy-wait for an interrupt; instead, it executes a special instruction to put the CPU to sleep in a loop. When a hardware event occurs, the corresponding interrupt handler is invoked.
It seems the book has tried to use the example of priming and reading from a file to convey how a program would get data from event driven input, no?
Yes, that is what the book is doing. In fact, the Unix operating system is built on the idea of abstracting all input and output of any device to look like this.
In reality, most operating systems and hardware make use of interrupts that jump to what we can call a subroutine to perform the low-level data read and then return control back to the operating system.
Also, on most systems many of the devices work independently of the rest of the operating system and present a high-level API to the operating system. For example, a keyboard port (or maybe a better example is a network card) on a computer processes interrupts itself, and then the keyboard driver presents the operating system with a different API. You can look at the standards for these devices to see what the APIs are. If you want to know the API the keyboard port presents, for example, you could look at the source code for the keyboard driver in a Linux distro.
A basic explanation based on my understanding...
Your get user_input pseudo-function is often something like readLine. That means the function will block until the data read contains a newline character.
Below this, the OS will use interrupts (this means it's not dealing with the keyboard unnecessarily, but only when required) to allow it to respond when the user hits some keys. The keyboard interrupt will cause execution to jump to a special routine which will fill an input buffer with data from the keyboard. The OS will then allow the appropriate process - generally the active one - to use readLine functions to access this data.
There's a bunch more complexity in there but that's a simple view. If someone offers a better explanation I'll willingly bow to superior knowledge.
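As a concrete illustration, here is the book's sentinel loop written as a runnable C++ program (the sentinel value comes from the book; the prompts are placeholders); std::getline simply does not return until the OS has delivered a complete line, so the loop body never runs with old or missing input:

#include <iostream>
#include <string>

int main()
{
    const std::string sentinel_val = "stop";
    std::string user_input;

    std::cout << "Enter a value (\"stop\" to quit): ";
    std::getline(std::cin, user_input);            // blocks until a full line arrives

    while (user_input != sentinel_val) {
        std::cout << "You entered: " << user_input << "\n";   // "do something"
        std::cout << "Enter a value (\"stop\" to quit): ";
        std::getline(std::cin, user_input);        // blocks again; no busy-waiting
    }
    return 0;
}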

How to deal with a second event-loop with message-dispatch?

I am working on a program which is essentially single-threaded, and its only thread is the main event-loop thread. Consequently, all its data structures are basically not protected by anything like a critical region.
Things worked fine until the program recently integrated some new functions based on the DirectShow API. Some DirectShow APIs open a second event loop and, within that second loop, dispatch messages (i.e. invoke other event-handling callbacks unpredictably). So when a second event-handling function is invoked, it might damage the data structures being accessed by the function that invoked the DirectShow API.
I have some experience in kernel programming, and what comes to mind is that, for a single-threaded program, how it should deal with its data structures is very much like how a kernel deals with per-CPU data structures. In the kernel, when a function accesses per-CPU data, it must disable interrupts (very much like the message dispatching in a second event loop). However, I find there is no easy way either to avoid invoking the DirectShow APIs or to prevent the creation of a second event loop within them. Is there any way?
mutexes. semaphores. locking. whatever name you want to call it, that's what you need.
There are several possible solutions that come to mind, depending on exactly what's going wrong and how your code is structured:
Make sure your data structures are in a consistent state before calling any APIs that run a modal loop.
If that's not possible, you can use a simple boolean variable to protect the structure: if it's set, then simply abort any attempt to update it, or queue the update for later (see the sketch after this list). Another option is to abort the previous operation.
If the problem is user generated events, then disable the problematic menus or buttons while the operation is in progress. Alternatively, you could display a modal dialog.
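A hedged C++ sketch of that boolean-guard idea; the GuardedState class, the deferred-update queue and all names are illustrative, not part of any DirectShow API:

#include <functional>
#include <queue>

// Illustrative guard for a single-threaded program whose API calls may spin a
// nested message loop and re-enter the handlers that touch shared structures.
class GuardedState {
public:
    void update(std::function<void()> change)
    {
        if (busy_) {                              // re-entered from a nested loop:
            pending_.push(std::move(change));     // queue the update for later
            return;
        }
        run(std::move(change));
        while (!pending_.empty()) {               // apply updates deferred meanwhile
            auto next = std::move(pending_.front());
            pending_.pop();
            run(std::move(next));
        }
    }

private:
    void run(std::function<void()> change)
    {
        busy_ = true;
        change();    // may call a DirectShow API that dispatches messages and
                     // re-enters update(); those calls land in pending_ instead
        busy_ = false;
    }

    bool busy_ = false;
    std::queue<std::function<void()>> pending_;
};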

Animation and logic

I have a question not necessarily specific to any platform or API but more specific to interactions in code between animations.
A game is a good example. Let's say the player dies, and there's a death animation that must finish before the object is removed. This is typical of many cases where some animation has to finish before continuing with whatever action would normally follow. How would you go about doing this?
My question is about the control and logic of animation. How would you design a system which is capable of driving the animation but at the same time implement custom behavior?
The problem that typically arises is that the game logic and animation data become codependent. That is, the animation has to call back into code or somehow contain metadata for the duration of animation sequences. What's typically even more of a problem is when an animation has to trigger some other code, say spawning a custom sprite after 1.13 s; this tends to result in deep nesting of code and animation. A bomb with a timer would be an example of both logic and animation, where the two interact, but I want to keep them as separate as possible.
But what would you do to keep animation and code two separate things?
Recently I've been trying out mgrammar, and I'm thinking, a DSL might be the way to go. That would allow the animation or animator, to express certain things in a presumably safe manner which would then go into the content pipeline...
The solution depends on the gameplay you're going for. If the gameplay is 100% code-driven, the anim is driven by the entity's state (state-driven animation). If it's graphics/animation-driven, the anim length determines how long the entity's in that state (anim-driven state).
The latter is typically more flexible in a commercial production environment as the designer can just say "we need that death anim to be shorter" and it gets negotiated. But when you have very precise rules in mind or a simulated system like physics, state-driven animation may be preferable. There is no 100% orthogonal and clean solution for the general case.
One thing that helps to keep it from getting too messy is to consider game AI patterns:
Game AI is typically implemented as some form of finite state machine, possibly multiple state machines or layered in some way (the most common division being a high-level scripting format with low-level actions/transitions).
At the low level you can say things like "in the hitreact state, playback my hitreact anim until it finishes, then find out from the high-level logic what state to continue from." At the high level there are a multitude of ways to go about defining the logic, but simple repeating loops like "approach/attack/retreat" are a good way to start.
This helps to keep the two kinds of behaviors - the planned activities, and the reactions to new events - from being too intermingled. Again, it doesn't always work out that way in practice, for the same reasons that sometimes you want code to drive data or the other way around. But that's AI for you. No general solutions here!
I think you should separate the rendering from the game logic.
You have at least two different kind of objects:
An entity that holds the unit's data (hit points, position, speed, strength, etc.) and logic (how it should move, what happens if it runs out of hit points, ...).
Its representation, that is the sprite, colors, particles attached to it, sounds, whatever. The representation can access the entity data, so it knows the position, hit points, etc.
Maybe a controller if the entity can be directly controlled by a human or an AI (like a car in a car simulation).
Yes that sounds like the Model-View-Controller architecture. There are lots of resources about this, see this article from deWiTTERS or The Guerrilla Guide to Game Code by Jorrit Rouwé, for game-specific examples.
About your animation problems: for a dying unit, when the Model updates its entity and figures out it has no more hit points, it could set a flag to say it's dead and remove the entity from the game (and from memory). Then later, when the View updates, it reads the flag and starts the dying animation. But it can be difficult to decide where to store this flag since the entity object should disappear.
There's a better way, IMHO. When your entity dies, you could send an event to all listeners that are registered to the UnitDiedEvent belonging to this specific entity, then remove the entity from the game. The entity representation object is listening for that event, and its handler starts the dying animation. When the animation is over, the entity representation can finally be removed.
The observer design pattern can be useful here.
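A hedged C++ sketch of that event-driven approach; only the UnitDiedEvent name comes from the answer above, while the subscribe/fire mechanics and the Entity/EntityView classes are illustrative assumptions:

#include <functional>
#include <vector>

// Illustrative event/observer wiring.
class UnitDiedEvent {
public:
    void subscribe(std::function<void()> listener) { listeners_.push_back(std::move(listener)); }
    void fire() const { for (const auto& l : listeners_) l(); }
private:
    std::vector<std::function<void()>> listeners_;
};

// Model: game logic only, no rendering.
class Entity {
public:
    UnitDiedEvent onDied;
    void takeDamage(int amount) {
        hitPoints_ -= amount;
        if (hitPoints_ <= 0) onDied.fire();   // announce death; the game can then remove the entity
    }
private:
    int hitPoints_ = 10;
};

// View: the representation listens for the death event and owns the dying animation.
class EntityView {
public:
    explicit EntityView(Entity& e) {
        e.onDied.subscribe([this] { startDyingAnimation(); });
    }
    void startDyingAnimation() {
        // play the death animation; remove this view once the animation has finished
    }
};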
Your enemy needs to have multiple states. Alive and dead are not enough. Alive, dying and dead might be. Your enemy processing loop should check its state and perform different operations:
if(state == alive and hp == 0)
{
state = dying
currentanimation = animation(enemy_death)
currentanimation.play()
}
elseif(state == dying and currentanimation.isPlaying == true)
{
// do nothing until the animation is finished.
// more efficiently implemented as a callback.
}
elseif(state == dying and currentanimation.isPlaying == false)
{
state = dead
// at this point either the game engine cleans up the object, or the object deletes itself.
}
I guess I don't really see the problem.
If you had some nest of callbacks I could see why things might be hard to follow, but if you only have one callback for all animation events and one update function which starts animations, it's pretty easy to follow the code.
So what do you gain by separating them?
The animation can call a callback function that you supply, or send a generic event back to the code. It doesn't need anything more than that, which keeps all logic in the code. Just inject the callback or connect the event when the animation is created.
I try as much as possible to keep callbacks out of child animations. Animations should indicate that they are complete; the actions taken on an animation's completion should be invoked from the controller level of the application.
In ActionScript this is the beauty of event dispatching/listening - the controller object can create the animation and then assign a handler for an event which the animation dispatches when it is complete.
I've used the pattern for several things in Flash projects and it helps keep code independent far better than callbacks.
This is especially true if you write custom event objects which extend Event to carry the kind of information you need, such as the MouseEvent that carries localX, localY, stageX and stageY. I use a custom event I've named NumberEvent to broadcast any kind of numerical information around my applications.
In the ActionScript controller object:
var animObj:AwsomeAnim = new AwsomeAnim();
animObj.addEventListener(AwsomeAnim.COMPLETE, _onAnimFinish); // listen before starting
animObj.start();

function _onAnimFinish(e:Event):void
{
    // actions to take when animation is complete here
}
In JavaScript, where custom events do not exist, I just have a boolean variable in the animation object and check it on a timer from the controller.
In the JavaScript controller object:
var animObj = new AnimObj(); // among other things, the constructor must set this.isComplete = false
animObj.start();

function checkAnimComplete()
{
    if (animObj.isComplete == true)
    {
        animCompleteActions();
    } else {
        setTimeout(checkAnimComplete, 300);
    }
}
checkAnimComplete();

function animCompleteActions()
{
    // anim complete actions here
}
For a couple of games I made, I created two animation classes to solve this problem:
asyncAnimation - for fire-and-forget type animations
syncAnimation - for when I wanted to wait for the animation to resolve before returning control
As games usually have a main loop, it looked something like this C#-style pseudocode:
while(isRunning)
{
renderStaticItems();
renderAsyncAnimations();
if (list_SyncAnimations.Count() > 0)
{
syncAnimation = list_SyncAnimations.First();
syncAnimation.render();
if (syncAnimation.hasFinished())
{
list_SyncAnimations.removeAt(0);
// May want some logic here saying if
// the sync animation was 'player dying' update some variable
// so that we know the animation has ended and the
// game can prompt for the 'try again' screen
}
}
else
{
renderInput();
handleOtherLogic(); // Like is player dead, add sync animation if required.
}
}
So what the code does is maintain a list of sync animations that need to be resolved before continuing with the game - if you need to wait for several animations, just stack them up.
Also, it might be a good idea to look into the command pattern, or to provide a callback for when the sync animation has finished, to handle your logic - it's really up to how you want to do it.
As for your "spawn at sec 1.13", perhaps the SyncAnimation class should have an overridable .OnUpdate() method which can do some custom logic (or call a script).
It depends on what your requirements are.

How can I implement a blocking process in a single slot without freezing the GUI?

Let's say I have an event and the corresponding function is called. This function interacts with the outside world and so can sometimes have long delays. If the function waits or hangs, then my UI will freeze, and this is not desirable. On the other hand, having to break up my function into many parts and re-emit signals is tedious and can fragment the code a lot, which makes it harder to debug, less readable, and slows down the development process. Is there a special feature in event-driven programming which would enable me to just write the process in one function call and let the main thread do its job while it's waiting? For example, the compiler could recognize a keyword, then implement a return and re-emit signals connected to new slots automatically. Why do I think this would be a great idea? ;) I'm working with Qt.
Your two options are threading, or breaking your function up somehow.
With threading, it sounds like your ideal solution would be QtConcurrent. If all of your processing is already in one function, and the function is pretty self-contained (doesn't reference member variables of the class), this would be easy to do. If not, things might get a little more complicated.
For breaking your function up, you can either do it as you suggested and break it into different functions, with the different parts being called one after another, or you can do it in a looser way, by scattering calls that allow other processing inside your function. I believe calling processEvents() would do what you want, but I haven't come across its use in a long time. Of course, you can run into other problems with that unless you understand that it might cause other parts of your class to run once more (in response to other events), so you have to treat it almost as multi-threaded code, protecting variables that are in an indeterminate state while you are computing.
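For the first option, here is a minimal sketch assuming Qt 5 or 6 with the widgets and concurrent modules; slowWork stands in for your blocking function, and the QFutureWatcher wiring is just one reasonable way to get the result back to the GUI thread:

#include <QApplication>
#include <QPushButton>
#include <QFutureWatcher>
#include <QtConcurrent/QtConcurrent>
#include <QThread>

int slowWork()                       // stands in for the long-running call
{
    QThread::sleep(3);               // e.g. waiting on the outside world
    return 42;
}

int main(int argc, char **argv)
{
    QApplication app(argc, argv);
    QPushButton button("Start slow work");

    QObject::connect(&button, &QPushButton::clicked, [&button]() {
        auto *watcher = new QFutureWatcher<int>(&button);
        QObject::connect(watcher, &QFutureWatcher<int>::finished, [watcher, &button]() {
            button.setText(QString("Result: %1").arg(watcher->result()));  // back on the GUI thread
            watcher->deleteLater();
        });
        watcher->setFuture(QtConcurrent::run(slowWork));  // runs in the thread pool; GUI stays responsive
    });

    button.show();
    return app.exec();
}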
"Is there a special feature in event driven programming which would enable me to just write the process in one function call and be able to let the mainThread do its job when its waiting?"
That would be a non-blocking process.
But your original query was, "How can I implement a blocking process in a single slot without freezing the GUI?"
Perhaps what you're looking for is a way to stop other processing when some - any - process decides it's time to block? There are typically ways to do this, yes, by calling a method on one of the parent objects, which, of course, will depend on the specific objects you are using (e.g. a frame).
Look to the parent objects and see what methods they have that you'd like to use. You may need to override one of them to get exactly the results you want.
If you want to handle a GUI event by beginning a long-running task, and don't want the GUI to wait for the task to finish, you need to do it concurrently, by creating either a thread or a new process to perform the task.
You may be able to avoid creating a thread or process if the task is I/O-bound and occasional callbacks to handle I/O would suffice. I'm not familiar with Qt's main loop, but I know that GTK's supports adding event sources that can integrate into a select() or poll()-style loop, running handlers after either a timeout or when a file descriptor becomes ready. If that's the sort of task you have, you could make your event handler add such an event source to the application's main loop.

Resources