I'm trying to measure the time a part spends in an Assembler in AnyLogic. I cannot use Time Measure Start/End blocks because the agent/part becomes an assembled part.
I've been using the two pieces of code below, without the "agent." prefix. I can see some results, but they do not make sense to me; it is as if the delay time I set in the Assembler is not being considered. I have placed the code in the start element and in the sink, but I get the same results.
I also tried adding an agent name, but then the error changes to "the variable cannot be resolved or is not a field".
These are the snippets I'm using:
timeOnGch = time();                      // in the start element
timeGchDist.add(time() - timeOnGch);     // at the sink
I have profiled my Rust code and see one processor-intensive function that takes a large portion of the time. Since I cannot break the function into smaller parts, I hope I can see which line in the function takes what portion of time. Currently I have tried CLion's Rust profiler, but it does not have that feature.
It would be best if the tool ran on macOS, since I do not have a Windows/Linux machine (except via virtualization).
P.S. Visual Studio seems to have this feature, but I am using Rust. https://learn.microsoft.com/en-us/visualstudio/profiling/how-to-collect-line-level-sampling-data?view=vs-2017 says:
Line-level sampling is the ability of the profiler to determine where in the code of a processor-intensive function, such as a function that has high exclusive samples, the processor has to spend most of its time.
Thanks for any suggestions!
EDIT: With C++, I do see source-code line-level information. For example, the following toy example shows that the for loop takes most of the time within the big function. But I am using Rust...
To get source code annotation in perf annotate or perf report, you need to compile with debug = 2 in your Cargo.toml.
If you also want source annotations for standard library functions you additionally need to pass -Zbuild-std to cargo (requires nightly).
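For reference, this is roughly what the profile entry looks like (a minimal sketch; debug = 2 is equivalent to debug = true):

[profile.release]
# Keep full debug info in optimised builds so perf can map samples
# back to source lines.
debug = 2

Then build and record as usual; for standard-library annotations, the build command looks something like cargo +nightly build -Z build-std --target <your-triple> --release (illustrative; as noted above, the flag requires nightly, and build-std needs an explicit target).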
Once compiled, "lines" of Rust do not exist. The optimiser does its job by completely reorganising the code you wrote and finding the minimal machine code that behaves the same as what you intended.
Functions are often inlined, so even measuring the time spent in a function can give incorrect results - or else change the performance characteristics of your program if you prevent it from being inlined to do so.
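For illustration, here is a hypothetical marker (not from the question's code) that keeps a function out of inlining so a sampling profiler can attribute samples to it, with the caveat just mentioned that this can itself change what you are measuring:

// Hypothetical example: #[inline(never)] keeps the symbol in the binary
// so a sampling profiler can attribute time to it, at the cost of
// possibly changing the performance being measured.
#[inline(never)]
fn hot_inner_loop(data: &[u64]) -> u64 {
    data.iter().fold(0u64, |acc, &x| acc.wrapping_add(x * x))
}

fn main() {
    let v: Vec<u64> = (0..1_000_000u64).collect();
    println!("{}", hot_inner_loop(&v));
}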
I'm trying to get the time spent in an Assembler in AnyLogic. I can use the Measure Time Start/End features for the services and get the distribution graphs; however, when I try to get the time for the Assembler with the following code: "On enter 3": Time = time(); and "On exit": timeDS.add(time() - Time); the mean doesn't make sense to me. At this stage I really don't know which element is providing realistic information in the software.
I'm testing with a 10-week delay time in each block and no delays in the queues, even after removing the selectOutput function. The arrival rate is 1 per month, each arrival triggering a call to inject one element, but I get this nonsensical mean every time. Is there any code to use in this particular situation, or how do I get the distributions right?
Thanks!
What are you trying to measure: the time it takes for the assembly to happen, or the time from the moment the first part of the assembly arrives until the assembly happens? This answer only covers the delay time.
Time seems to be a global variable, so if many assemblies are happening at the same time, your result will be distorted, because the Time variable changes every time an assembly starts...
Also, "on enter 3" is not what defines the moment at which the assembly process starts. Use the "on enter delay" action instead.
You need to define an agent type that will be at the output of the assembler and give it a time variable. Then in "on enter delay" you can do agent.startTime = time(), and in "on at exit" you can do data.add(time() - agent.startTime).
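Concretely, assuming the agent type at the Assembler's output defines a double variable named startTime and data is a dataset/histogram data object (names taken from the suggestion above):

// "On enter delay" action of the Assembler block:
agent.startTime = time();

// "On at exit" action of the Assembler block:
data.add(time() - agent.startTime);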
I'm making a light with an ESP32 and the HomeKit library I chose uses FreeRTOS and esp-idf, which I'm not familiar with.
Currently, I have a function that's called whenever the colour of the light should be changed, which just changes it in a step. I'd like to have it fade between colours instead, which will require a function that runs for a second or two. Having this block the main execution of the program would obviously make it quite unresponsive, so I need to have it run as a task.
The issue I'm facing is that I only want one copy of the fading function to be running at a time, and if it's called a second time before it's finished, the first copy should exit (without waiting for the full fade time) before the second copy starts.
I found vTaskDelete, but if I were to just kill the fade function at an arbitrary point, some variables and the LEDs themselves would be in an unknown state. To get around this, I thought of using a 'kill flag' global variable which the fading function checks on each of its loops.
Here's a sketch of the code I'm thinking of:
#include <stdbool.h>
#include "freertos/FreeRTOS.h"
#include "freertos/task.h"

volatile bool kill_flag = false;

void fade(void *pvParameters);

void update_light(void) {
    kill_flag = true;
    // wait_for_fade_to_die -- see question 2 below
    xTaskCreate(fade, "fade", 2048, NULL, 1, NULL);
}

void fade(void *pvParameters) {
    kill_flag = false;
    for (int i = 0; i < 1000; i++) {
        // (fading code involving local and global variables)
        if (kill_flag) {
            vTaskDelete(NULL);
        }
        vTaskDelay(2 / portTICK_PERIOD_MS);
    }
    vTaskDelete(NULL);  /* a task must not return; delete itself when done */
}
My main questions are:
Is this the best way to do this or is there a better option?
If this is ok, what is the equivalent of my wait_for_fade_to_die? I haven't been able to find anything from a brief look around, but I'm new to FreeRTOS.
I'm sorry to say that I have the impression that you are pretty much on the wrong track trying to solve your concrete problem.
You are writing that you aren't familiar with FreeRTOS and esp-idf, so I would suggest you first familiarize yourself with FreeRTOS (or with the idea of an RTOS in general, or with any other RTOS, transferring that knowledge to FreeRTOS later).
In doing so, you will notice that (apart from some specific examples) a task is something completely different from a function written for sequential "batch" processing of a single job.
Model and Theory
Usually, the most helpful model when designing a good RTOS task inside an embedded system is that of a state machine that receives events and reacts to them, possibly changing its state and/or executing actions whose starting points and payload depend on the event received as well as the state the machine was in when the event was detected.
While there is no event, the task shall not idle but block at some barrier created by the RTOS function that is supposed to deliver the next relevant event.
Implementing such a task means programming a task function that consists of a short initialisation block followed by an infinite loop that first calls the RTOS library to get the next logical event (see right below...) and then the code to process that logical event.
Now, the logical event doesn't have to be represented by an RTOS event (while this can happen in simple cases), but can also be implemented by an RTOS queue, mailbox or other.
In such a design pattern, the tasks of your RTOS-based software exist "forever", waiting for the next job to perform.
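A minimal sketch of that pattern (FreeRTOS as used by ESP-IDF; all names here are illustrative, not from any concrete project):

#include "freertos/FreeRTOS.h"
#include "freertos/queue.h"

typedef struct {
    int type;      /* which logical event occurred */
    int payload;   /* optional event data */
} event_t;

static QueueHandle_t event_queue;   /* created elsewhere with xQueueCreate() */

void state_machine_task(void *arg)
{
    event_t ev;
    /* short initialisation block goes here */
    for (;;) {
        /* block at the RTOS barrier until the next logical event arrives */
        if (xQueueReceive(event_queue, &ev, portMAX_DELAY) == pdTRUE) {
            /* react to ev: possibly change state and/or start actions */
        }
    }
}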
How to apply the theory to your problem
You have to check how to decompose your programming problem into different tasks.
Currently, I have a function that's called whenever the colour of the light should be changed, which just changes it in a step. I'd like to have it fade between colours instead, which will require a function that runs for a second or two. Having this block the main execution of the program would obviously make it quite unresponsive, so I need to have it run as a task.
I hope that I understood the goal of your application correctly:
The system is driving multiple light sources of different colours, and some "request source" is selecting the next colour to be displayed.
When a different colour is requested, the change shall not be performed instantaneously but there shall be some "fading" over a certain period of time.
The system (and its request source) shall remain responsive even while a fade takes place, possibly changing the direction of the fade in the middle.
I think you didn't say where the colour requests are coming from.
Therefore, I am guessing that this request source could be some button(s), a serial interface, or a complex algorithm (or random number generator?) running in the background. It doesn't really matter now.
The issue I'm facing is that I only want one copy of the fading function to be running at a time, and if it's called a second time before it's finished, the first copy should exit (without waiting for the full fade time) before starting the second copy.
What you are essentially looking for is how to change the state (here: the target colour of the light fade) at any time, so that an old, ongoing fade procedure becomes obsolete but the output (= light) behaviour does not change in a discontinuous way.
I suggest you set up the following tasks:
One (or more) task(s) to generate the colour changing requests from ...whatever you need here.
One task to evaluate which colour blend shall be output currently (see the sketch after this list). That task shall be ready to receive:
- a new-colour request (changing the "target colour" state without changing the current colour blend value), and
- a periodical tick event (e.g., from a hardware or software timer) that causes the colour blend value to be updated in the direction of the current target colour.
Zero, one or multiple tasks to implement the colour blend value by driving the output features of the system (e.g., configuring GPIOs or PWMs, or transmitting information through a serial connection...we don't know).
If adjusting the output is just a matter of assigning a few registers, "zero" is the right choice here. Otherwise, try one or multiple.
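Here is a hedged sketch of the middle task, assuming the requests arrive on a FreeRTOS queue and using the receive timeout as the periodic tick (a hardware or software timer posting tick events would work just as well; all names are illustrative):

#include <stdint.h>
#include "freertos/FreeRTOS.h"
#include "freertos/queue.h"

typedef struct { uint8_t r, g, b; } colour_t;

static QueueHandle_t colour_queue;  /* xQueueCreate(4, sizeof(colour_t)) */

static uint8_t step_towards(uint8_t cur, uint8_t tgt)
{
    if (cur < tgt) return cur + 1;
    if (cur > tgt) return cur - 1;
    return cur;
}

void colour_blend_task(void *arg)
{
    colour_t current = {0, 0, 0};
    colour_t target  = {0, 0, 0};
    colour_t request;

    for (;;) {
        /* a timed-out receive doubles as the periodic tick event */
        if (xQueueReceive(colour_queue, &request, pdMS_TO_TICKS(2)) == pdTRUE) {
            target = request;  /* new target; current blend value unchanged */
        }
        current.r = step_towards(current.r, target.r);
        current.g = step_towards(current.g, target.g);
        current.b = step_towards(current.b, target.b);
        /* drive GPIO/PWM (or notify an output task) with `current` here */
    }
}

A new request simply redirects the fade at its next step, so no task is ever killed and there is nothing to clean up.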
What to do now
I found vTaskDelete, but if I were to just kill the fade function at an arbitrary point, some variables and the LEDs themselves will be in an unknown state. To get around this, I thought of using a 'kill flag' global variable which the fading function will check on each of its loops.
Just don't do that.
Killing a task, even one that didn't prepare for being killed, triggers such a cascade of requirements to manage and clean up state and outputs in your software that you will end up wondering why you even started using an RTOS.
I do know that starting to design and program this way when you never have before is a huge endeavour, like a jump into cold water.
Please trust me: this way you will learn the basics of how to design and implement great embedded systems.
Professional education companies offer courses about RTOS integration, responsive programming and state machine design for several thousands of $/€/£, which is a good indicator of the value of this kind of working knowledge.
Good luck!
Along that way, you'll come across a lot of detail questions which you are welcome to post to this board (or find earlier answers on).
I'm open to posting the code here to work through the optimization, but it's a bit lengthy and complex, so instead I'm hoping somebody can assist me with a few debugging questions I have. My goal is to find out what is causing my Apex CPU Time Limit Exceeded issue.
When using the Debug Log in its basic or normal layout, I receive the message:
Maximum CPU Time: 15062 out of 10,000 ** Close to Limit
I've optimized and rewritten various loops and queries several times now, and in each case this number ends up around there, which leads me to believe it is lying to me and that my actual usage far exceeds that number. So on my journey I switched the Log Panels of the Developer Console to Analysis, in hopes of isolating exactly which loop, method, or area of the code is giving me a headache.
This leads me to my main question and problem.
Execution Tree, Performance Tree & Executed Units
All show me that my durations are UNDER the 10,000 ms allowance. My largest consumer is 3,556.19 ms, used by the constructor of a wrapper class I created, where a fair amount of logic constructs a fairly complicated wrapper spanning 5-7 custom objects. Still, even with those 3,000 ms, the remainder of the process shows negligible times, bringing my total to around 4,000 ms. Again, my question is: why am I unable to see or find what is consuming all my time?
Incorrect Iteration Data
In addition to this, the Performance tree has a column showing the number of iterations for each method. I know that my production org has 81 objects that would each call the constructor for my custom wrapper object, i.e. my constructor SHOULD be called 81 times, but instead it is called 32 times. So my other question is: can I rely on the iteration data in that column? Or does it stop counting at a certain point because it was iterating so many times? It's possible that one of my objects is corrupted or causing an infinite loop somehow, but I don't want to dig through all the data in search of that conclusion if it's a known issue that the iteration data is inaccurate anyway.
System.Debug in the Production org
The last question is why my System.debug() lines are not displaying in the Developer Console on the production org. I've added several breadcrumbs throughout the code that would help me isolate which objects make it through and which do not; however, I cannot, in any layout, view System.debug messages outside of my sandbox.
Sorry for the wealth of questions, but I did want to make an honest effort to better understand the debugging process in Salesforce. If this is a lost cause I'm happy to start sharing some code as well, but hopefully some debugging tips can get me to the solution.
It's likely your debug log got truncated, see "Each debug log must be 20 MB or smaller. If it exceeds this amount, you won’t see everything you need." in https://trailhead.salesforce.com/en/content/learn/modules/apex_basics_dotnet/debugging_diagnostics
Download the log and search for text similar to "skipped 123456 bytes of detailed log" to confirm; some System.debug statements will simply not show up.
You might have to fine-tune the log levels (don't log validation rules and workflows? don't log every single variable assignment at the "FINE" level? etc.). You might have to set all flags to NONE and then track only the one class/trigger that you suspect (see https://help.salesforce.com/articleView?id=code_debug_log_classes.htm&type=5 and https://salesforce.stackexchange.com/questions/214380/how-are-we-supposed-to-use-debug-logs-for-a-specific-apex-class-only).
If it's truncated, it's possible the analysis tools give up. (I've had mixed luck with the console, to be honest; sometimes https://apextimeline.herokuapp.com/ is great for an overview - but it will also fail to parse a 20 MB log.)
When all else fails, you can load the log into Notepad++ (or any editor of your choice), find the lines related to method entry/exit (you might need a regular-expression search), take these filtered lines to Excel, play with "text to columns", and just look at the timing manually to see if there's a record that causes the spike. It could be record #10 that's the problem; the fact that limits are exhausted on #32 of 81 doesn't mean much. A search like [METHOD_ENTRY|METHOD_EXIT]MyTriggerHandler.onBeforeUpdate could be a good start. But the first thing is to make sure the log is not truncated. (A small filtering script is sketched below.)
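If you prefer scripting that step, something like this can do the filtering and "text to columns" for you. A hedged sketch: the pipe-delimited METHOD_ENTRY/METHOD_EXIT format with the elapsed-nanosecond counter in parentheses matches typical Apex debug logs, but verify against your own log, and the file names are placeholders:

import re

# Matches lines like: 16:08:41.0 (1035398)|METHOD_ENTRY|[5]|...|MyClass.myMethod()
pattern = re.compile(r"\((\d+)\)\|(METHOD_ENTRY|METHOD_EXIT)\|(.*)")

with open("apex.log") as log, open("methods.csv", "w") as out:
    out.write("elapsed_ns,event,detail\n")
    for line in log:
        m = pattern.search(line)
        if m:
            out.write(f"{m.group(1)},{m.group(2)},{m.group(3).strip()}\n")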
I am solving some algorithm problems in c# and running them as a console application.
To check the efficiency of these programs, I would like to see their run times.
Currently I am printing the time at the start of the program and at the end and calculating the difference, but is there a way to reduce the observer effect?
Is there some built-in tool or plugin that I am not aware of?
You should use the Stopwatch class, which is specifically designed to do that.
To avoid measuring the JIT time, you should also run each algorithm at least once before measuring anything so that the JIT has time to run.
When measuring the algorithms, you should run each one hundreds of times and take the average runtime.
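Something like this (a sketch of the warm-up-then-average pattern; RunAlgorithm is a placeholder for the code under test):

using System;
using System.Diagnostics;

class Benchmark
{
    // Placeholder for the algorithm under test.
    static void RunAlgorithm()
    {
        for (int i = 0; i < 1000; i++) { _ = Math.Sqrt(i); }
    }

    static void Main()
    {
        RunAlgorithm();  // warm-up: JIT compilation happens here, unmeasured

        const int iterations = 500;
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
            RunAlgorithm();
        sw.Stop();

        Console.WriteLine($"Average over {iterations} runs: " +
                          $"{sw.Elapsed.TotalMilliseconds / iterations:F4} ms");
    }
}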
The most important source of delay due to the observer effect is the printing itself. Another potential source is formatting the debug message text. So I suggest the following:
- If you can anticipate the number of loops and the number of stages per loop, create an array to store the timing information. If not, use a dynamic list.
- During execution, store the time and any additional information in that array or list.
- If possible, store codes rather than messages along with the time, for example 1 = Stage 1, 2 = Stage 2, etc.
- At the end of the execution, dump all the information to screen or file and format the messages as needed. (A sketch of this idea follows.)
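A sketch of that idea (all names are illustrative): record a stage code and the raw Stopwatch tick count during the run, and only build and print strings afterwards.

using System;
using System.Collections.Generic;
using System.Diagnostics;

class TimingLog
{
    private readonly List<(int Stage, long Ticks)> _events = new List<(int, long)>();
    private readonly Stopwatch _sw = Stopwatch.StartNew();

    // Record a stage code and the raw tick count; no strings, no I/O.
    public void Mark(int stage) => _events.Add((stage, _sw.ElapsedTicks));

    // Format and print everything only after the timed run is over.
    public void Dump()
    {
        foreach (var (stage, ticks) in _events)
            Console.WriteLine($"Stage {stage}: {ticks * 1000.0 / Stopwatch.Frequency:F3} ms");
    }

    static void Main()
    {
        var log = new TimingLog();
        log.Mark(1);   // e.g., after stage 1 of the algorithm
        log.Mark(2);   // e.g., after stage 2
        log.Dump();    // all printing happens here, outside the timed section
    }
}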
Use the Stopwatch class. Look at the method below for an example.
using System;
using System.Diagnostics;

public void YourAlgorithm()
{
    // Start timing immediately before the work under test.
    Stopwatch timePerParse = Stopwatch.StartNew();

    /* -- TODO: the algorithm under test -- */

    timePerParse.Stop();
    Console.WriteLine(timePerParse.ElapsedMilliseconds);
}