time scale in NetLogo / remove turtle - time

I would like to build a system with a time scale of minutes and hours.
There are a number of turtles which stay in specific patches for different lengths of time, depending on their type, and then leave.
I use date-and-time as a turtle property to record the arrival time and then calculate the leaving time,
but I don't know how to make a turtle leave once it reaches that leaving time (after 4 to 6 hours).
I am also confused about handling an hours-and-minutes scale, since the turtles arrive at any time between morning and midday, stay for a period of time based on their type, and then leave.
Thanks.

I'd suggest equating one tick with one minute, and doing all your calculations in minutes.
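A minimal sketch of that approach, assuming one tick = one minute (the names leave-time and record-arrival are illustrative, not from your model): record the leaving tick when a turtle arrives, then check it every tick.

turtles-own [ leave-time ]   ;; tick (= minute) at which this turtle should leave

to record-arrival            ;; run by a turtle when it arrives at its patch
  ;; stay 4 to 6 hours = 240 to 360 minutes; in your model this range
  ;; would depend on the turtle's type
  set leave-time ticks + 240 + random 121
end

to go
  ask turtles with [ ticks >= leave-time ] [ die ]   ;; or have them move away instead of dying
  tick
end

Arrival times between morning and midday can be handled the same way: if tick 0 corresponds to, say, 06:00, then any arrival tick between 0 and 360 falls inside that window.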

Related

Are the decimal components of Unix timestamps and UTC time synced?

Conventional time is meant to stay in sync with the rotation of the earth, and so is shifted with leap years and leap seconds, while Unix time is meant to measure the number of seconds since midnight Jan 1 1970. As such, the two drift apart over time.
But what about the decimals? It seems to me that if you took just the decimal portion of UTC, Unix time, and frankly any other time zone, they should line up except during the exact time a leap second or leap smear is taking place.
Are the decimal components of Unix timestamps and UTC time synced (except during such events)?
The reason leap seconds are issued is that we have two different definitions of the second:
As 1/86400 of one rotation of the Earth (a day)
A more stable definition from the SI standard: https://en.wikipedia.org/wiki/Second
These two seconds are not of equal length. In science and computing we prefer something very exact, while for clocks we prefer the second to be 1/86400 of a day.
To make clocks on computers match up with our expectation of the rotation-based clock, we add or remove seconds in the form of leap seconds.
What's really going on is that the 'length' of these two second definitions is different and keeps changing (compared to each other). Once that difference has caused the clocks to drift far enough apart, we just add a second to our computers to match the other definition.
But this drift is not instant. It happens over time. This means that both clocks slowly drift apart.
The suggestion that the 'decimals' are the same doesn't really make that much sense, then. The difference between these decimals grows and grows until we have to add or remove a second to bring them closer together again. The Earth's rotation isn't suddenly an extra second faster one day.
So when you ask whether they are synced, you are really asking whether the rotation of the Earth is synced. We don't yet have the power to make the Earth spin slower or faster ;)

Google Sheets Countdown Timer for Aircraft Flight Plan "Time Remaining"

I'm struggling with a formula in Google Sheets that will display a countdown for how much time remains until an aircraft is overdue, based on a filed flight plan.
What I have: 2 key pieces of information about a flight plan:
how much estimated time it will take to make the flight (ETE: Estimated Time Enroute)
what time the aircraft departed (ATD: Actual Time of Departure)
Constraints: (mandated by company policy)
The ETE must be entered in decimal format, in numbers of hours. A 1hr 30min flight must have an ETE of 1.5, and a 20 minute flight must have an ETE of .3 (rounded to the nearest 10th).
The ATD must be entered in 4-digit 24hr time, but without the colon. 1:30pm must be entered as "1330".
The countdown timer must be displayed in minutes, rounded to the nearest whole number. 1hr 28min must be listed as "88".
The countdown should be "live" (this is solved by spreadsheet settings to update "on update or every minute").
The countdown should easily indicate aircraft that have become "overdue" (this will be solved with conditional formatting to highlight negative numbers).
My pseudo-formula is essentially just Now() - (ETE + ATD), but I'm stuck on how to get around the constraints, specifically the three different time formats (decimal hours for ETE, 4-digit 24hr time for ATD, and remaining time in minutes).
I've set up a dummy sheet here:
https://docs.google.com/spreadsheets/d/165mXKRquI4aBEEap8PIHVrFpAraaapykGqjkDg22qeU/edit?usp=sharing
*I've looked through this Q&A, but it's a GAS (Google Apps Script) solution. I'd much prefer to just have a formula, preferably an array formula, so that it copies down to however many rows there might end up being.
**Possibly a secondary concern down the road: at the moment, we do not conduct overnight flights, but it's possible in the future. Starting a 3hr flight at 10pm will result in the arrival time being the next day. Hopefully, there is a solution for this.
I suggest some testing before use, but this should be worth trying:
=if(now()>today()+1*(left(A2,2)&":"&right(A2,2)),round(24*60*(today()+1*(left(A2,2)&":"&right(A2,2))-now())+B2*60,0),"")
where the ATD value is in A2 and the ETE in B2.
It could be simplified, but the longer form might be easier to adapt for overnight flights, if required.
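To illustrate the arithmetic with made-up values: with 1330 in A2 and 1.5 in B2, the formula rebuilds today's 13:30 as a time value, subtracts now(), converts the difference to minutes, and adds 1.5 * 60 = 90 minutes of ETE. At 14:00 that gives round(-30 + 90) = 60 minutes remaining; once the result drops below zero, the flight is overdue and the conditional formatting can highlight it.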

Repeating the "go" cycle from scratch every 1000 ticks

I am working on a simplified model of the stock market, and I am still learning to manage time in NetLogo. In my model a day is made of 1000 ticks. In a day several things happen: turtles sell and buy stocks, at some point during the day they set their strategies, various logs are written and then erased at the end of the day.
I would like the model to start again after 1000 ticks, i.e. at the end of the day the model does not stop but starts over, thus simulating more than a single day.
What do you suggest?
Why not just use if ticks mod 1000 = 0 [setup-locations]?
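A minimal sketch of where that check could sit, assuming setup-locations (or whatever your daily reset procedure is called) clears the logs and re-initialises whatever should restart, and with trade-stocks standing in for the within-day behaviour:

to go
  ;; start a new "day" every 1000 ticks (the ticks > 0 guard skips the very first pass)
  if ticks > 0 and ticks mod 1000 = 0 [ setup-locations ]
  trade-stocks   ;; placeholder for selling/buying, strategy updates, logging, etc.
  tick
end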

Density of time events

I am working on an assignment where I am supposed to compute the density of an event. If a certain event happens 5 times within a few seconds, it has a higher density than if it happens 5 times over several hours.
I have the times at which the events happen.
I was first thinking about computing the elapsed time between each pair of successive events and then working with the mean and standard deviation of these values.
My problem is that I do not know how to accurately represent this notion of density through mathematics. Let's say that I have 5 events happening really close to each other, and then a long break, and then again 5 events happening really close to each other. I would like to be able to represent this as high density. How should I go about it?
In the last example, I understand that my mean won't be truly representative but that my standard deviation will show that. However, how could I have a single density value (let's say between 0 and 1) with which I could rank different events?
Thank you for your help!
I would try the harmonic mean, which captures the rate at which your events happen while still giving you an averaged time value. For n interval lengths x_1, ..., x_n it is defined by:
H = n / (1/x_1 + 1/x_2 + ... + 1/x_n)
I think its behaviour is close to what you expect: it measures what you want, although not on a 0-to-1 scale and with inverse tendencies (small values mean dense, large values mean sparse). Let us go through a few of your examples:
~5 events in an hour. Let us suppose for simplicity there are 10 minutes between each event; counting the gaps before the first event and after the last one as well, that gives 6 intervals of 10 minutes. Then we have H = 6 / (6 * 1/10) = 10.
~5 events in 10 minutes, then nothing until the end of the hour (50 minutes). Let us suppose all short intervals are 2.5 minutes, then H = 6 / (5/2.5 + 1/50) = 6 * 50 / 101 = 2.97
~5 events in 10 minutes, but this cycle restarts every half hour thus we have 20 minutes as the last interval instead of 50. Then we get H = 6 / (5/2.5 + 1/20) = 6 * 20 / 41 = 2.92
As you can see, the effect of the longer, rarer intervals is diminished by the fact that we use inverses, so the "in between bursts" behaviour gets less weight. You can also compare behaviours with the same "burst density" that do not happen at the same frequency: you will get numbers that are close, but whose ordering still reflects the difference.
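A minimal sketch of the computation, written in NetLogo for consistency with the other threads here (the reporter name is just illustrative):

;; harmonic mean of a list of inter-event intervals (e.g. in minutes)
to-report harmonic-mean [ intervals ]
  report (length intervals) / sum map [ x -> 1 / x ] intervals
end

For the second example above, harmonic-mean [2.5 2.5 2.5 2.5 2.5 50] reports roughly 2.97.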
For density to make sense you need to define 2 things:
the range where you look at it,
and the unit of time
After that you can say, for example, that from 12:00 to 12:10 the density of the event was on average 10 per minute.
What makes sense in your case obviously depends on what your input data is. If your measurement lasts for 1 hour and you have millions of entries, then seconds or milliseconds are probably the better choice of unit. If you measure for a week and have only a few entries, then a day is a better unit.
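A small sketch of that idea, again in NetLogo (the names are illustrative): count the events that fall inside the chosen window and divide by the window length in your chosen unit.

;; event-times: list of event timestamps; window bounds in the same unit (e.g. minutes)
to-report density-in-window [ event-times window-start window-end ]
  let n length filter [ t -> t >= window-start and t < window-end ] event-times
  report n / (window-end - window-start)   ;; events per unit of time
end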

Meaning of axis of figures of simulation or performance modeling papers

I am reading some papers on simulation and performance modeling. The Y axis in some figures is labeled "Seconds per Simulation Day". I am not sure what it actually means. It spans from 0 to 120 in steps of 20.
Another label is "Simulation years per day". I guess it means the guest OS inside the simulation environment thinks several years have passed while only a day has passed in the real world? But since simulation should slow down execution, I would have expected the opposite: only a few hours pass inside the simulation environment while a day passes in the real world.
Thanks.
Without seeing the paper, I assume they are trying to compare the CPU time it takes to get to some physical time in a simulation.
So "Seconds per Simulation Day" is likely the walltime it took to get 24 hours in the simulation.
Likewise, "Simulation Years per Day" is physical time of simulation/real life day.
Of course, without seeing the paper it's impossible to know for sure. It's also possible they are looking at CPU-seconds or CPU-days, which would be nCPUs*walltime.
Simulations typically run in discrete time units, called time steps. If you'd like to simulate a certain process that spans a certain time in the simulation, you have to perform a certain number of time steps. If the length of a time step is fixed, the number of steps is just the simulated time divided by the length of the time step. The calculations in each time step take a certain amount of time, and the total run time for the simulation equals the number of time steps times the time it takes to perform one time step:
(1) total_time = (simulation_time / timestep_length) * run_time_per_timestep
Now several benchmark parameters can be obtained by placing different parameters on the left-hand side. E.g. if you fix simulation_time = 1 day, then total_time gives you the run time needed per simulated day, i.e.
(2) seconds_per_sim_day = (1 day / timestep_length) * run_time_per_timestep
Large values of seconds_per_sim_day could mean:
it takes too much time to compute a single time step, i.e. run_time_per_timestep is too high -> the computation algorithm should be optimised for speed;
the time step is too short -> search for better algorithms that can accept larger time steps and still produce (almost) the same result.
On the other hand, if you solve (1) for simulation_time and fix total_time = 1 day, you get the number of time steps that can be performed per day times the length of the time step, or the total simulation time that can be achieved per day of computation:
(3) simulation_time_per_day = (1 day / run_time_per_timestep) * timestep_length
Now one can observe that:
larger time steps lead to larger values of simulation_time_per_day, i.e. longer simulations can be computed;
if it takes too much time to compute a time step, the value of simulation_time_per_day would go down.
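A quick worked example with made-up numbers: suppose one time step covers 60 simulated seconds (timestep_length = 60 s) and takes 0.05 s of wall time to compute (run_time_per_timestep = 0.05 s). Then (2) gives seconds_per_sim_day = (86400 / 60) * 0.05 = 72 seconds, while (3) gives simulation_time_per_day = (86400 / 0.05) * 60 s of simulated time, i.e. about 1200 simulated days, or roughly 3.3 simulation years, per day of computation.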
Usually those figures are used when making decisions about buying CPU time at some computing centre. For example, if you would like to simulate 100 years, just divide that by the number of simulation years per day and you get how many compute days you would have to pay (or wait) for. Larger values of simulation_time_per_day definitely benefit you in this case. If, on the other hand, you only have 10 compute days at your disposal, you can compute how long a simulation could be run and make some decisions, e.g. more short simulations with many different parameters vs. fewer but longer simulations with the parameters you predict to be optimal.
In real life things are much more complicated. Computing each time step usually takes a different amount of time (although there are cases where each time step takes exactly the same amount of time as all the others), and it strongly depends on the simulation size, configuration, etc. That's why standardised tests exist and usually some averaged value is reported.
Just to summarise: given that all test parameters are kept equal,
faster computers would give less "seconds per simulation day" and more "simulation years per day"
slower computers would give more "seconds per simulation day" and less "simulation years per day"
By the way, the two quantities are reciprocally related by this simple equation:
simulation_years_per_day = 236.55 / seconds_per_simulation_day
(that is, "simulation years per day" equals 86400 divided by "seconds per simulation day" /which gives you the simulation days per day/ and then divided by 365.25 to convert the result into years)
So it doesn't really matter whether "simulation years per day" or "seconds per simulation day" is presented. One just has to choose the representation which most clearly shows how much better the newer system is than the previous/older/existing one :)
