Google Sheets Countdown Timer for Aircraft Flight Plan "Time Remaining"

I'm struggling with a formula in Google Sheets that will display a countdown for how much time remains until an aircraft is overdue, based on a filed flight plan.
What I have: two key pieces of information about a flight plan:
how much estimated time the flight will take (ETE: Estimated Time Enroute)
what time the aircraft departed (ATD: Actual Time of Departure)
Constraints: (mandated by company policy)
The ETE must be entered in decimal format, as a number of hours. A 1hr 30min flight must have an ETE of 1.5, and a 20-minute flight an ETE of 0.3 (rounded to the nearest tenth).
The ATD must be entered as 4-digit 24hr time without the colon; 1:30pm must be entered as "1330".
The countdown timer must be displayed in minutes, rounded to the nearest whole number; 1hr 28min must be listed as "88".
The countdown should be "live" (this is solved by the spreadsheet's recalculation setting, "On change and every minute").
The countdown should easily indicate aircraft that have become "overdue" (this will be solved with conditional formatting to highlight negative numbers).
My pseudo formula is essentially just (ATD + ETE) - Now(), but I'm stuck on how to get around the constraints, specifically the three different time formats (decimal-hours ETE, 4-digit 24hr ATD, and remaining time in minutes).
I've set up a dummy sheet here:
https://docs.google.com/spreadsheets/d/165mXKRquI4aBEEap8PIHVrFpAraaapykGqjkDg22qeU/edit?usp=sharing
*I've looked through this Q&A, but it's a GAS solution. I'd much prefer to just have a formula. Preferably an array formula, so that it copies down to however many rows there might end up being.
**Possibly a secondary concern down the road: at the moment, we do not conduct overnight flights, but it's possible in the future. Starting a 3hr flight at 10pm will result in the arrival time being the next day. Hopefully, there is a solution for this.

I suggest some testing before use, but this should be worth trying:
=if(now()>today()+1*(left(A2,2)&":"&right(A2,2)),round(24*60*(today()+1*(left(A2,2)&":"&right(A2,2))-now())+B2*60,0),"")
where the ATD value is in A2 and the ETE in B2.
It could be simplified, but the longer form might be easier to adapt for overnight flights, if required.
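
For anyone who wants the underlying arithmetic spelled out, here is the same logic as a Python sketch (not a Sheets formula; the function name and the overnight rule are my own assumptions):

from datetime import datetime, timedelta

def minutes_remaining(atd_hhmm, ete_hours, now=None):
    """Minutes until the aircraft is overdue; negative once it is overdue.

    atd_hhmm  - departure time as 4-digit 24hr text, e.g. "1330"
    ete_hours - estimated time enroute in decimal hours, e.g. 1.5
    """
    now = now or datetime.now()
    departure = now.replace(hour=int(atd_hhmm[:2]), minute=int(atd_hhmm[2:]),
                            second=0, microsecond=0)
    if departure > now:
        # overnight assumption: a departure time "in the future" means
        # the aircraft actually departed yesterday
        departure -= timedelta(days=1)
    eta = departure + timedelta(hours=ete_hours)
    return round((eta - now).total_seconds() / 60)

# departed 13:30, ETE 1.5 h, it is now 14:02 -> 58 minutes remain
print(minutes_remaining("1330", 1.5, datetime(2024, 5, 1, 14, 2)))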


Summing times in Google sheets

I have a sheet where I record my working hours (this is more for me to remind me to stop working than anything else). For every day, I have three possible shifts - early, normal & late, and I have a formula which will sum up any times put into these columns and give me the daily total hours.
To summarise the duration of time spent working in a day, I use the following formula: =(C41-B41)+(E41-D41)+12+(G41-F41) which is:
early end time minus early start time
normal end time minus normal start time PLUS 12 hours
late end time minus late start time
Which gives me output like this:
What I cannot seem to achieve is, the ability to sum the daily totals into something which shows me the total hours worked over 1-week. If I attempt to sum the daily totals together for the example image shown, I get some wild figure such as 1487:25:00 when formatting as 'Duration' or 23:25:00 when formatted as 'Time'!
All my cells where I record the hours worked are formatted as 'Time'
When using arithmetic operations on date and time values in Google Sheets, it's important to remember that the internal representation is numeric: one unit equals one day (Sheets counts days from an epoch of December 30, 1899).
What follows from that is that if you want to add 12 hours to a time duration, you should not write "+12", because that in fact adds 12 days. Instead add "+12/24". In other words, try the following formula instead of the one you are using now:
=(C41-B41)+(E41-D41)+(12/24+G41-F41)
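
A quick way to see why "+12" misbehaves is to model the serial values directly (a small Python sketch of the arithmetic, not Sheets itself):

# Sheets-style serial values: 1.0 equals one day, so 12 hours is 12/24 = 0.5
half_past_one = 13.5 / 24        # 13:30 expressed as a serial value
print(half_past_one + 12)        # adds 12 *days*  -> 12.5625
print(half_past_one + 12 / 24)   # adds 12 hours   ->  1.0625 (01:30 the next day)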

Even prize distribution

I'm currently facing an interesting algorithm problem and am looking for ideas or possible solutions. The topic seems common, so maybe it's already known and solved, but I'm unable to find it.
So let's assume that I'm running a shop and
I'm running a lottery for paying customers. Each time they buy something they can win a prize.
Prizes are given to customers instantly after the purchase.
I have X prizes and
I will be running the lottery for Y days
Every paying customer (each act of buying, i.e. each transaction) should have an equal chance to win a prize
Prizes should be distributed up to the last day (on the last day there should still be some prizes left to give away)
No prizes can be left over at the end
I do not have historical data on transactions per day (no data from before the lottery) to estimate the average number of transactions (and the lottery itself could change the number of transactions)
I can gather data while the lottery is running
If this is not solvable, what is the closest solution?
Instant prize distribution has to stay.
Possible Solution #1
Based on m69's comment.
Let's say there are 6 prizes (total prizes) and 2 days of lottery.
Let's define Prizes By Day as PBD (to satisfy the requirement of having prizes until the last day).
PBD = total prizes / days
We randomly choose PBD moments (events) every day. The first transaction after each such moment is a winning transaction (see the sketch below).
This can be optimized by not using the last hour of the last day of the lottery, to guarantee giving away all of the prizes.
Pluses
Random. A simple, elegant solution.
Minuses
It seems that users do not have an equal chance to win.
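
A minimal sketch of this scheme, assuming prizes are drawn as random moments within each day (the names and the 23-hour cutoff are illustrative, not part of the original idea):

import random
from datetime import datetime, timedelta

TOTAL_PRIZES, DAYS = 6, 2
PBD = TOTAL_PRIZES // DAYS

def draw_moments(day_start, hours=23):
    """Choose PBD random winning moments, leaving the last hour free so
    every moment can still be reached by a later transaction."""
    seconds = hours * 3600
    return sorted(day_start + timedelta(seconds=random.uniform(0, seconds))
                  for _ in range(PBD))

def is_winner(tx_time, moments):
    """A transaction wins if it is the first one past a pending moment."""
    if moments and tx_time >= moments[0]:
        moments.pop(0)          # consume the moment: only the first tx wins
        return True
    return False

# Usage: at the start of each day
moments = draw_moments(datetime(2024, 5, 1))
print(is_winner(datetime(2024, 5, 1, 12, 0), moments))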
Possible Solution #2
Based on Sorin's answer.
We start by analyzing the first time frame (for example, 1 hour), and we calculate the chance to win as:
chance = Δprizes / Δframes
where:
Δprizes = prizes left,
Δframes = frames left
What you're trying to do is impossible. Once you've given away the last prize you can't make any guarantee about the number of customers still to come, so not all customers will have an equal chance to win a prize.
You can do something that approximates it fairly well. You can try to estimate the number of customers you will have, assume that they are evenly distributed, and then spread the prizes over the period while the contest is running. This gives you a ratio that you can use to decide whether a given customer is a winner. Then, as the contest progresses, change the estimates to match what you actually see and what prizes are left. Run this update every x (hours/minutes, or even customer transactions) to make sure the rate isn't too low, and every q prizes to make sure the rate isn't too high. Don't run the update too often when prizes are given away, or the algorithm might react too strongly if there's a period with low traffic (say overnight).
Let me give you an example. Say you figure out that you're going to see 100 customers per hour and you should give a prize every 200 customers, so roughly 1 every 2 hours. After 3 hours you come back and see that you've actually had 300 customers per hour and have already given out 4 prizes. So you can now adjust the expectation to 300 customers per hour and adjust the distribution rate to match what is left.
This will work even if your initial estimate is too low or too high.
This will break badly if your estimate is too far off AND your updates are far apart (say you only check after a day but you've already given away all the prizes).
This can leave prizes on the table. If you don't want that, you can reduce the amount of time the algorithm considers the contest to be running, so that it finishes handing out the prizes before the end of the contest. You can limit the number of prizes awarded in a given day to make the distribution more uniform (don't set the limit to exactly X/Y, but to something like X/Y * 1.25 so that there's some room for variation), and update the limit at the end of the day to account for variation in the awards given.
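
A rough sketch of this adaptive scheme in Python; the class name, the hourly update, and the blended re-estimate are my own assumptions, not part of the answer:

import random

class PrizeLottery:
    def __init__(self, prizes, total_hours, est_customers_per_hour):
        self.prizes_left = prizes
        self.hours_left = total_hours
        self.rate = est_customers_per_hour     # current traffic estimate
        self.seen_this_hour = 0

    def win_probability(self):
        expected_remaining = max(self.rate * self.hours_left, 1.0)
        return min(self.prizes_left / expected_remaining, 1.0)

    def on_transaction(self):
        """Decide instantly whether this transaction wins a prize."""
        self.seen_this_hour += 1
        if self.prizes_left > 0 and random.random() < self.win_probability():
            self.prizes_left -= 1
            return True
        return False

    def end_of_hour(self):
        """Re-estimate traffic from what was actually observed, then move on."""
        self.rate = 0.5 * self.rate + 0.5 * self.seen_this_hour
        self.seen_this_hour = 0
        self.hours_left = max(self.hours_left - 1, 1)

# Usage: 6 prizes, a 48-hour contest, initial guess of 100 customers/hour
lottery = PrizeLottery(6, 48, 100)
print(lottery.on_transaction())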

Best practices for overall rating calculation

I have a LAMP-based business application, SugarCRM to be more precise. There are 120+ active users at the moment. Every day each user generates some records that are used in a complex calculation to get a so-called "individual rating".
It takes about 6 seconds to calculate one "individual rating" value. That was not a big problem before: each user hits the link provided to start the "individual rating" calculation, waits for 6-7 seconds, and gets the value displayed.
But now I need to implement an "overall rating" calculation. That means that in addition to the "individual rating" I have to calculate and display to the user:
the minimum individual rating among ALL the users of the application
the maximum individual rating among ALL the users of the application
the current user's position among all individual ratings.
Say the current user has an individual rating of 220 points, the minimum rating is 80, the maximum is 235, and he is in 23rd position among all the users.
What are (imho) the main problems to be solved?
If one calculation takes 6 seconds, the overall calculation will take more than 10 minutes. I think it's no good to make the application almost inaccessible for that period. And what if the number of users grows 2-3 times in the near future?
Those calculations could be done as a nightly job, but the users are in different timezones. In Russia the difference between the extreme timezones is 9 hours, so people in the western part of Russia are still working "today" while people in the eastern part are waking up to work "tomorrow". So what is the best time for a nightly job in this case?
Are there any best practices, approaches or algorithms for building such a rating system?
Given only the information provided, the only options I see are:
The obvious one - reduce the time taken for a rating calculation (6 seconds to calculate 1 user's rating seems like a lot)
If possible, have intermediate values of which you only recalculate some, as required (for example, have 10 values that make up the rating, all based on different data; when some of the data changes, flag the appropriate values for recalculation - see the sketch after this list). Either do this recalculation:
During your daily recalculation or
When the update happens
Partial batch calculation - only recalculate x of the users' ratings at chosen intervals (where x is some chosen value) - has the disadvantage that, at all times, some of the ratings can be out of date
Calculate if not busy - either continuously recalculate ratings or only do so at a chosen interval, but instead of locking the system, have it run as a background process, only doing work if the system is idle
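
As an illustration of the second option, a minimal sketch of cached rating components with dirty flags (the component names and the dummy _compute are invented for the example):

class UserRating:
    COMPONENTS = ("sales", "activity", "quality")   # parts that make up the rating

    def __init__(self, user_id):
        self.user_id = user_id
        self.values = {}                    # cached per-component values
        self.dirty = set(self.COMPONENTS)   # everything needs a first calculation

    def mark_dirty(self, component):
        """Call this from whatever code path changes the underlying data."""
        self.dirty.add(component)

    def rating(self):
        # recalculate only the components whose data actually changed
        for component in list(self.dirty):
            self.values[component] = self._compute(component)
            self.dirty.discard(component)
        return sum(self.values.values())

    def _compute(self, component):
        return 0.0   # stand-in for the expensive per-component query/logic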
(Sorry, I didn't manage to fit this into a "long" comment, so I decided to post it as an answer.)
@Dukeling
The SQL query that takes almost all of the calculation time mentioned above is just a replication of business logic that would otherwise be executed in PHP code. The logic was moved into SQL in the hope of reducing calculation time. OK, I'll try both optimizing the SQL query and running the logic in PHP code.
Suppose that after optimization the application calculates an individual rating in just 1 second. Great! But even in this case the first user to log into the system would have to wait 120 seconds (120+ users * 1 sec = 120 sec) for the overall rating to be calculated before getting their position in it.
I’m thinking of implementing the following approach:
Let’s have 2 “overall ratings” – “today” and “yesterday”.
For display purposes we'll use the "yesterday" overall rating, represented as a huge, already-sorted PHP array.
When a user hits the calculation link he starts a "today" calculation, but the application displays the "yesterday" value to him. Thus we have a quickly accessible "yesterday" rating, and each user incidentally launches the rating calculation that will be displayed tomorrow.
The user list is partitioned by timezone. Each hour a cron job starts and checks whether there are any users in the selected timezone who don't yet have a "today" individual rating calculated (e.g. a user who didn't log into the application). If so, the application calculates their individual rating and puts the value into the "today" (still invisible) overall rating array. Thus we have a cron job that runs nightly for each timezone-specific user group and fills the probable gaps in case users didn't log into the system.
After all users in all timezones have been processed, the application
sorts the "today" array,
drops the "yesterday" one,
renames "today" to "yesterday", and
initializes a new "today".
What do you think of it? Is it reasonable enough or not?
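
For what it's worth, the "yesterday"/"today" rotation described above might look roughly like this as a data-flow sketch (Python instead of PHP, and all names are illustrative only):

class OverallRating:
    """Sketch of the "yesterday"/"today" two-array rotation."""

    def __init__(self):
        self.yesterday = []   # sorted list of (rating, user_id), what users see
        self.today = {}       # user_id -> rating, filled in during the day

    def record(self, user_id, rating):
        # called from a user's own visit or from the hourly/timezone cron job
        self.today[user_id] = rating

    def position_of(self, user_id):
        """1-based position in yesterday's ranking (what is displayed)."""
        for pos, (_, uid) in enumerate(self.yesterday, start=1):
            if uid == user_id:
                return pos
        return None

    def rollover(self):
        """Run once all timezones are done: sort 'today' and rotate the arrays."""
        self.yesterday = sorted(((r, uid) for uid, r in self.today.items()),
                                reverse=True)
        self.today = {}

# Usage
overall = OverallRating()
overall.record(1, 220.0)
overall.record(2, 235.0)
overall.rollover()
print(overall.position_of(1))   # -> 2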

Meaning of axis of figures of simulation or performance modeling papers

I am reading some papers on simulation and performance modeling. The Y axis in some figures is labeled "Seconds per Simulation Day". I am not sure what it actually means. It spans from 0, 20, 40 up to 120.
Another label is "Simulation years per day". I guess it means the guest OS inside the simulation environment thinks several years have passed while only a day has actually passed in the real world? But since simulation should slow down execution, I would expect the opposite: only a few hours pass inside the simulation environment while a whole day passes in the real world.
Thanks.
Without seeing the paper, I assume they are trying to compare the CPU time it takes to get to some physical time in a simulation.
So "Seconds per Simulation Day" is likely the walltime it took to get 24 hours in the simulation.
Likewise, "Simulation Years per Day" is physical time of simulation/real life day.
Of course, without seeing the paper it's impossible to know for sure. It's also possible they are looking at CPU-seconds or CPU-days, which would be nCPUs*walltime.
Simulations typically run in discrete time units, called time steps. If you'd like to simulate a certain process that spans a certain time in the simulation, you would have to perform a certain number of time steps. If the length of a time step is fixed, the number of steps is then just the simulated time divided by the length of the time step. Calculations in each time step take a certain amount of time, and the total run time for the simulation equals the number of time steps times the time it takes to perform one time step:
(1) total_time = (simulation_time / timestep_length) * run_time_per_timestep
Now several benchmark parameters can be obtained by placing different parameters on the left hand side. E.g. if you fix simulation_time = 1 day then total_time would give you the total simulation run time, i.e.
(2) seconds_per_sim_day = (1 day / timestep_length) * run_time_per_timestep
Large values of seconds_per_sim_day could mean:
it takes too much time to compute a single time step, i.e. run_time_per_timestep is too high -> the computation algorithm should be optimised for speed;
the time step is too short -> search for better algorithms that can accept larger time steps and still produce (almost) the same result.
On the other hand, if you solve (1) for simulation_time and fix total_time = 1 day, you get the number of time steps that can be performed per day times the length of the time step, or the total simulation time that can be achieved per day of computation:
(3) simulation_time_per_day = (1 day / run_time_per_step) * timestep_length
Now one can observe that:
larger time steps lead to larger values of simulation_time_per_day, i.e. longer simulations can be computed;
if it takes too much time to compute a time step, the value of simulation_time_per_day would go down.
Usually those figures are used when making decisions about buying CPU time at some computing centre. For example, if you would like to simulate 100 years, just divide that by the number of simulation years per day and you get how many compute days you would have to pay (or wait) for. Larger values of simulation_time_per_day definitely benefit you in this case. If, on the other hand, you only have 10 compute days at your disposal, then you can compute how long a simulation could be run and make some decisions, e.g. more short simulations with many different parameters vs. fewer but longer simulations with the parameters that you have predicted to be the optimal ones.
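
For concreteness, here is the same arithmetic as a short Python sketch; all the numbers are invented for illustration:

DAY = 86400.0                     # seconds in a day
timestep_length = 60.0            # simulated seconds advanced per time step (made up)
run_time_per_timestep = 0.05      # wall-clock seconds needed per time step (made up)

# (2) wall-clock seconds needed to simulate one day
seconds_per_sim_day = (DAY / timestep_length) * run_time_per_timestep    # 72 s

# (3) simulated time achieved per wall-clock day, converted to years
sim_seconds_per_day = (DAY / run_time_per_timestep) * timestep_length
sim_years_per_day = sim_seconds_per_day / DAY / 365.25                   # ~3.29

# planning: wall-clock days needed to simulate 100 years
print(100 / sim_years_per_day)                       # ~30.4 compute days

# consistency check against the reciprocal relation given below
print(86400 / 365.25 / seconds_per_sim_day)          # same ~3.29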
In real life things are much more complicated. Usually computing each time step can take a different amount of time (although there are cases where each time step takes exactly the same amount of time as all the other time steps), and it strongly depends on the simulation size, configuration, etc. That's why standardised tests exist and usually some averaged value is reported.
Just to summarise: given that all test parameters are kept equal,
faster computers would give less "seconds per simulation day" and more "simulation years per day"
slower computers would give more "seconds per simulation day" and less "simulation years per day"
By the way, the two quantities are reciprocal and related by this simple equation:
simulation_years_per_day = 236.55 / seconds_per_simulation_day
(that is, "simulation years per day" equals 86400 divided by "seconds per simulation day", which gives you the simulation days per day, and then divided by 365.25 to convert the result into years)
So it doesn't really matter whether "simulation years per day" or "seconds per simulation day" is presented. One just has to choose the representation which most clearly shows how much better the newer system is than the previous/older/existing one :)

Best approach: transfer daily values from one year to another

I will try to explain what I want to accomplish. I am looking for an algorithm or approach, not the actual implementation in my specific system.
I have a table with actuals (incoming customer requests) on a daily basis. These actuals need to be "copied" into the next year, where they will be used as a basis for planning the amount of requests in the future.
The smallest timespan for planning, on a technical basis, is a "period", which consists of at least one day. A period always changes after a week or after a month. This means that if a week falls partly in May and partly in June, it will be split into two periods.
Here's an example:
2010-05-24 - 2010-05-30 Week 21 | Period_Id 123
2010-05-31 - 2010-05-31 Week 22 | Period_Id 124
2010-06-01 - 2010-06-06 Week 22 | Period_Id 125
We did this to reduce the amount of data, because we have a few thousand items that each have 365 daily values. For planning, this is reduced to "a few thousand x 65" (or whatever the period count is per year). I can aggregate a month, or a week, by combining all periods that belong to that month or week. The important thing is that I could still use daily values, then find the corresponding period and add them there if necessary.
What I need is an approach for aggregating the actuals for every (working) day, week or month into next year's equivalent period. My requirements are not fixed here. The actuals have a certain distribution, because there are certain deadlines and habits that are reflected in the data. I would like to preserve this as far as possible, but planning is never completely accurate, so I can make a compromise here.
Don't know if this is what you're looking for, but this is a strategy for calculating the forecasts using flexible periods:
First define a mapping for each day in next year to the corresponding day in this year. Then when you need a forecast for period x you take all days in that period and sum the actuals for the matching days.
With this you can precalculate every week/month but create new forecasts if the contents of periods change.
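
A small sketch of this day-mapping strategy, assuming the "corresponding day" is the same weekday 52 weeks earlier and that actuals are keyed by date (both assumptions, not part of the answer):

from datetime import date, timedelta

def map_to_previous_year(day):
    """Map a day in the planning year to the 'same' day one year earlier,
    keeping the weekday aligned by going back exactly 52 weeks (364 days)."""
    return day - timedelta(weeks=52)

def forecast_period(period_days, actuals):
    """Forecast for a period = sum of the actuals on the matching days last year."""
    return sum(actuals.get(map_to_previous_year(d), 0.0) for d in period_days)

# Usage: forecast Period_Id 124 (2010-05-31 only) from 2009 actuals
actuals_2009 = {date(2009, 6, 1): 42.0}                      # made-up actual value
print(forecast_period([date(2010, 5, 31)], actuals_2009))    # -> 42.0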
Map weeks to weeks. The first full week of this year to the first full week of the next. Don't worry about "periods" and aggregation; they are irrelevant.
Where a missing holiday leaves a hole in the data, just take the values for the same day of the previous week or the next week, and do the same at the beginning/end of the year.
Now for each day of the week, combine the results for the year and look for events more than, say, two standard deviations from the mean (if you don't know what that means then skip this step), and look for correlations with known events like holidays. If a holiday doesn't show an effect in this test then ignore it. If you find an effect, shift it to compensate for the different date next year. Don't worry about higher-order effects, you don't have enough data to pin them down.
Now draw in periods wherever you like and aggregate all you want.
Don't make any promises about the accuracy of these predictions; there's no way to know it. Don't worry about whether this is the best possible way; it isn't, but it's as good as any you're likely to find. You can spend as much more time and effort fine-tuning this as you wish; it might raise expectations, but it's not likely to make the results much more accurate, and it's about as likely to make them worse.
There is no a priori way to answer that question. You have to look at your data and decide what the important parameters are (day of week, week number, month, season, temperature outside?) using the results.
For example, if many of your customers are Jewish/Muslim, then the Gregorian calendar, ISO week numbers and all that won't help you much, because Jewish/Muslim holidays (and hence user behaviour) are determined using other calendars.
Another example: trying to predict iPhone search volume from last year's searches doesn't sound like a good idea. It seems that the important timescales are much longer than a year (the technology becoming mainstream over the years) and much shorter than a year (specific events that affect us for days or weeks).
