I'm trying to measure time in an Assembler block in AnyLogic. For the Service blocks I can use the TimeMeasureStart/TimeMeasureEnd blocks and get the distribution graphs, but when I try to time the Assembler with the following code — "On enter 3": `Time = time();` and "On exit": `timeDS.add(time() - Time);` — the mean doesn't make sense to me. At this stage I really don't know which element in the software is giving realistic information.
I'm testing with a 10-week delay time in each block and no delays in the queues, even removing the selectOutput block. The arrival rate is 1 per month, each arrival triggering a call to inject one element, but I get this nonsensical mean every time. Is there any code to use in this particular situation, or how do I get the distributions right?
Thanks!
What are you trying to measure: the time it takes for the assembly to happen, or the time from the moment the first part of the assembly arrives until the assembly happens? This answer covers only the delay time.
Time seems to be a global variable, so if many assemblies are happening at the same time, your result will be distorted, because the Time variable changes every time an assembly starts...
Also, "on enter 3" is not what defines the moment the assembly process starts. Use the "on enter delay" action instead.
You need to define an agent type for the output of the Assembler and give it a time variable. Then in "on enter delay" you can do `agent.startTime = time();` and in "on at exit" you can do `data.add(time() - agent.startTime);`.
I'm trying to measure the time a piece spends in an Assembler in AnyLogic. I cannot use TimeMeasureStart/TimeMeasureEnd because the agent/part becomes an assembled part.
I've been using these two pieces of code without the "agent." prefix. I can see some results, but they don't make sense to me; it's as if the delay time I'm using in the Assembler is not being considered. I have placed the code at the start device and in the sink, but I get the same results.
I also tried adding an agent name, but then the error changes to "the variable cannot be resolved or is not a field".
This is the code I'm using:
`timeOnGch = time();` (in the start element)
`timeGchDist.add(time() - timeOnGch);` (at the sink)
Following the suggestion, this is the complete code I'm using.
I'm making a light with an ESP32 and the HomeKit library I chose uses FreeRTOS and esp-idf, which I'm not familiar with.
Currently, I have a function that's called whenever the colour of the light should be changed, which just changes it in a step. I'd like to have it fade between colours instead, which will require a function that runs for a second or two. Having this block the main execution of the program would obviously make it quite unresponsive, so I need to have it run as a task.
The issue I'm facing is that I only want one copy of the fading function to be running at a time, and if it's called a second time before it's finished, the first copy should exit (without waiting for the full fade time) before starting the second copy.
I found vTaskDelete, but if I were to just kill the fade function at an arbitrary point, some variables and the LEDs themselves will be in an unknown state. To get around this, I thought of using a 'kill flag' global variable which the fading function will check on each of its loops.
Here's the pseudocode I'm thinking of:
update_light {
    kill_flag = true
    wait_for_fade_to_die
    xTaskCreate fade
}

fade {
    kill_flag = false
    loop_1000_times {
        (fading code involving local and global variables)
        ...
        if kill_flag, vTaskDelete(NULL)
        vTaskDelay(2 / portTICK_RATE_MS)
    }
}
My main questions are:
Is this the best way to do this or is there a better option?
If this is ok, what is the equivalent of my wait_for_fade_to_die? I haven't been able to find anything from a brief look around, but I'm new to FreeRTOS.
I'm sorry to say that I have the impression that you are pretty much on the wrong track trying to solve your concrete problem.
You write that you aren't familiar with FreeRTOS and esp-idf, so I would suggest you first familiarize yourself with FreeRTOS (or with the idea of an RTOS in general, or with any other RTOS, transferring that knowledge to FreeRTOS afterwards).
In doing so, you will notice that (apart from some specific examples) a task is something completely different from a function written for sequential "batch" processing of a single job.
Model and Theory
Usually, the most helpful model when designing a good RTOS task inside an embedded system is that of a state machine which receives events and reacts to them, possibly changing its state and/or executing some actions whose starting points and payloads depend on the event the state machine received as well as the state it was in when the event was detected.
While there is no event, the task shall not idle but block at some barrier created by the RTOS function which is supposed to deliver the next relevant event.
Implementing such a task means programming a task function that consists of a short initialisation block followed by an infinite loop that first calls the RTOS library to get the next logical event (see right below...) and then the code to process that logical event.
Now, the logical event doesn't have to be represented by an RTOS event (while this can happen in simple cases), but can also be implemented by an RTOS queue, mailbox or other.
In such a design pattern, the tasks of your RTOS-based software exist "forever", waiting for the next job to perform.
How to apply the theory to your problem
You have to check how to decompose your programming problem into different tasks.
Currently, I have a function that's called whenever the colour of the light should be changed, which just changes it in a step. I'd like to have it fade between colours instead, which will require a function that runs for a second or two. Having this block the main execution of the program would obviously make it quite unresponsive, so I need to have it run as a task.
I hope that I understood the goal of your application correctly:
The system is driving multiple light sources of different colours, and some "request source" is selecting the next colour to be displayed.
When a different colour is requested, the change shall not be performed instantaneously but there shall be some "fading" over a certain period of time.
The system (and its request source) shall remain responsive even while a fade takes place, possibly changing the direction of the fade in the middle.
You didn't say where the colour requests come from, so I am guessing that this request source could be some button(s), a serial interface, or a complex algorithm (or random number generator?) running in the background. It doesn't really matter now.
The issue I'm facing is that I only want one copy of the fading function to be running at a time, and if it's called a second time before it's finished, the first copy should exit (without waiting for the full fade time) before starting the second copy.
What you are essentially looking for is how to change the state (here: the target colour of the light fade) at any time, so that an old, ongoing fade procedure becomes obsolete but the output (= light) behaviour does not change in a discontinuous way.
I suggest you set up the following tasks:
One (or more) task(s) to generate the colour changing requests from ...whatever you need here.
One task to evaluate which colour blend shall be output currently.
That task shall be ready to receive:
a new-colour request (changing the "target colour" state without changing the current colour blend value), and
a periodic tick event (e.g., from a hardware or software timer) that causes the colour blend value to be updated in the direction of the current target colour.
Zero, one or multiple tasks to implement the colour blend value by driving the output features of the system (e.g., configuring GPIOs or PWMs, or transmitting information through a serial connection...we don't know).
If adjusting the output part is just assigning some registers, the "Zero" is the right thing for you here. Otherwise, try "one or multiple".
What to do now
I found vTaskDelete, but if I were to just kill the fade function at an arbitrary point, some variables and the LEDs themselves will be in an unknown state. To get around this, I thought of using a 'kill flag' global variable which the fading function will check on each of its loops.
Just don't do that.
Killing a task that isn't fully prepared to die at an arbitrary point creates such a trail of follow-up requirements for managing and cleaning up state and outputs in your software that you will end up wondering why you even started using an RTOS.
I do know that starting to design and program this way, when you never have before, is a huge endeavour, like a jump into cold water.
But please trust me: this way you will learn the basics of how to design and implement great embedded systems.
Professional training companies offer courses on RTOS integration, responsive programming and state machine design for several thousand $/€/£, which is a good indicator of the value of this kind of working knowledge.
Good luck!
Along that way, you'll come across a lot of detail questions which you are welcome to post to this board (or find earlier answers on).
I am solving some algorithm problems in C# and running them as console applications.
To check for the efficiency of the applications I would like to see what their run times are.
Currently I am printing the time at the start of the program and at the end and calculating the difference, but is there a way to reduce the observer effect?
Some inbuilt tool/plugin that I am not aware of ?
You should use the Stopwatch class, which is specifically designed to do that.
To avoid measuring the JIT time, you should also run each algorithm at least once before measuring anything so that the JIT has time to run.
When measuring the algorithms, you should run each one hundreds of times and take the average runtime.
The most important source of delay due to the observer effect is the printing itself. Another potential source of delay is formatting the debug message text. So I suggest the following:

If you can anticipate the number of loops and the number of stages per loop, create an array to store the timing information. If not, use a dynamic list.

During execution, store the time and additional information in that array or list.

If possible, don't store messages along with the time but codes, for example 1 = Stage 1, 2 = Stage 2, etc.

At the end of the execution, dump all the information to screen or file, and format the messages as needed.
Use the Stopwatch class. Look at the method below for an example.
using System.Diagnostics;
public void yourAlgorithm()
{
Stopwatch timePerParse = Stopwatch.StartNew();
/** -- TODO --**/
timePerParse.Stop();
Console.WriteLine(timePerParse.ElapsedMilliseconds);
}
In my automated test I have an area that occasionally shows up (and needs to be clicked on when it does show up). This is the perfect place to use an OptionalStep prefix, to prevent the step from failing if the optional area never shows up.
Thing is, I would like the OptionalStep to only wait a second or two before moving on to the rest of the test. Just as I can have object.Exist(2) only wait for 2 seconds, is there a way to have OptionalStep wait for only a couple of seconds?
Some other caveats:

I'd like to keep this as one small line. I know I could create a multi-line logic test that uses object.Exist(2) inside an If/Then statement, but I'd rather have the code be small and trim.

I don't want to change the global 20-second timeout just for this one step.

Since this optional step only shows up in one specific area, it seems like Recovery Scenarios would not be a good choice to have running throughout the entire test.
Vitaly's comment would be a good solution, as you are possibly overcomplicating your test unnecessarily.
Also, such a long global timeout is not recommended; it should be as low as possible. I usually set it at around 3 seconds and deal with synchronisation in the code.
Anything that takes a long period of time should be known about upfront and dealt with in the code. Having a global timeout for everything will cause your test to run unnecessarily slowly whenever "object cannot be found" errors occur.
I read somewhere that somebody could access a config value during run time but not during design time. What's the difference between run time and design time in this context?
Design time is when somebody signs off our word documents and our UML diagrams with a cheery "That looks fine!" Run time is when we execute our code and it fails with a horrible crash and burn.
The advantage of a technique like TDD is that it compresses the gap between design time and run time to the point where they are the same thing. This means we get instant feedback on how our design actually works when translated into code, which should result in a better design and fewer embarrassments when our code goes live. YMMV.
Design time is when you are creating a design based on the requirements, or creating some UML diagrams.
Run time is when you are implementing your design and running the code.
Are you talking about .NET applications? In that case design time probably means something more specific - when your GUI is presented within the Visual Studio designer. This gives you a working view of your application, but it is running in a design time environment. Many .NET controls have a DesignMode property that allows you tell whether the control is running in design time view or not.
Design time is when you design some code.
Run time is when you execute the code you designed.
Run time is when your program runs. Design time is when your program is designed.
Design time refers to processes that take place during development, Runtime refers to processes that take place while the application is running.
For instance, constants that are hardcoded in your application are set at design time, such as...
// you need to recompile your solution to change this,
// hence it is said that its value is set at design time.
const string value = "this is set at design time";
Whereas configuration values that are pulled from a config file would be said to be set at runtime. Such as...
// You do not need to recompile your solution to change this,
// hence the value is said to be set at runtime.
string value = ConfigurationManager.GetValue("section", "key");
As a developer, you must aim for the ideal equilibrium between design time (let's take it to mean 'the time you spend designing and developing the app', though it's a bit incorrect) and run time, which I take to mean 'the time the user stands looking at the hourglass waiting for his important report to be rendered'.
Too much focus on 'design time' and you might run out of the scheduled programming time, and your client will pull out of the contract, he'll badmouth you, and kittens will die. Too little, and your program will, as they say, suck. Remember that 'shipping is a feature, one your program should have'.
Unless what they meant by "run time" is "runtime" and that means something else entirely.