I'm building a calendar scheduling application for, let's say a plumbing company. The company has one or more plumbers, who each have a schedule of appointments at different times throughout the day. So Josh's schedule on May 30th might include a 30-minute appointment at 10 AM, a 45-minute appointment at 1 PM, and an hour-long appointment at 3 PM, while Maria has a completely different schedule that day. Now say a customer wants to book an appointment with this company, and my program has already calculated the time this new appointment will take. I want my program to return a list of possible appointment times for any plumber(s). Is there a standard algorithm for this type of problem?
I'd prefer language-agnostic, general steps just to be more helpful to anyone who might be in a similar situation with a different language, though I'm using PHP and PostgreSQL if there's a specific language feature suited to this.
Here's what I've tried so far:
Get all available shifts for every plumber on the requested day
Get all appointments already made on that day
Do a sort of boolean subtraction to cut the appointments out of the shifts, leaving gaps in each plumber's schedule
Get rid of all schedule gaps that are smaller than the requested appointment length (I also calculate drive times here so I know how far appointments need to be from one another)
Return those gaps to the customer, trimmed to the appointment length, as appointment possibilities
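For reference, here is a minimal sketch of steps 3 and 4 (Python used as pseudocode; the function names and the shift hours are made up, and drive-time padding is left out for brevity):

```python
from datetime import datetime, timedelta

def subtract_appointments(shift, appointments):
    """Cut booked appointments out of one shift, returning the free gaps.

    `shift` is a (start, end) pair of datetimes; `appointments` is a list of
    (start, end) pairs that fall inside the shift.
    """
    gaps = []
    cursor = shift[0]
    for appt_start, appt_end in sorted(appointments):
        if appt_start > cursor:
            gaps.append((cursor, appt_start))
        cursor = max(cursor, appt_end)
    if cursor < shift[1]:
        gaps.append((cursor, shift[1]))
    return gaps

def big_enough(gaps, duration):
    """Keep only the gaps long enough for the requested appointment (step 4)."""
    return [(start, end) for start, end in gaps if end - start >= duration]

# Josh's schedule from the example, assuming a made-up 08:00-17:00 shift:
day = datetime(2024, 5, 30)
shift = (day.replace(hour=8), day.replace(hour=17))
booked = [
    (day.replace(hour=10), day.replace(hour=10, minute=30)),
    (day.replace(hour=13), day.replace(hour=13, minute=45)),
    (day.replace(hour=15), day.replace(hour=16)),
]
print(big_enough(subtract_appointments(shift, booked), timedelta(hours=1)))
# four gaps: 08:00-10:00, 10:30-13:00, 13:45-15:00, 16:00-17:00
```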
I've learned that the problem with this approach is that it doesn't understand what to do with a gap much larger than the requested appointment. If you have a gap from 10 AM to 6 PM but you want an hour-long appointment, it will only suggest 10 AM to 11 AM. This approach doesn't allow for time-of-day choices, either. In that same scenario, what if the customer wants a morning appointment? Then it should only suggest 10-11 and 11-12. Or if they want an evening appointment, it should only suggest 5-6 PM. This approach also doesn't consider two plumbers working together. If we assume that two workers = half the time, then maybe the algorithm should look for the same 30 minutes available in both Josh and Maria's schedules along with 60-minute gaps in either plumber's schedule. Lastly, this algorithm feels very inefficient.
By the way, I've looked at several other questions here and around the Internet about how to solve similar situations, but I'm finding that most (if not all) of those questions involve optimizing a schedule. That might be valuable for other parts of this program, but for now, let's assume that the existing appointments are fixed and unchangeable. We're just looking to fit a new appointment into an existing schedule. I know this is possible because applications like Calendly have similar inputs and outputs.
In short, is there a better way of meeting these goals:
Suggest available gaps in one plumber's schedule given a time interval
If possible, only return appointment possibilities in the given time of day (morning = 4-12, afternoon = 12-5, evening = 5-10, night = 10-4, or any), and if not possible, continue with the algorithm as if no time of day had been specified
Suggest smaller gaps where n plumbers might do the job in 1/n time (there aren't that many plumbers, so setting a limit on this isn't necessary; see the sketch after this list). This isn't as important as the other criteria, so if this isn't possible or would make the algorithm far more complex, then don't worry about it.
Split big appointment gaps into smaller gaps so we can suggest, say, four hour-long gaps between 10 AM and 2 PM. Obviously we can't suggest all possible hour-long segments of that gap, because they'd be infinite.
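To make goal 3 concrete, one possible approach under the "two workers = half the time" assumption is to intersect two plumbers' gap lists and then search the intersection for slots of half the requested length (a sketch only; the function name is mine):

```python
def intersect_gaps(gaps_a, gaps_b):
    """Return the time ranges in which both plumbers are free at the same time."""
    common = []
    for a_start, a_end in gaps_a:
        for b_start, b_end in gaps_b:
            start, end = max(a_start, b_start), min(a_end, b_end)
            if start < end:
                common.append((start, end))
    return common

# A 60-minute job could then be offered either as a 60-minute slot in one
# plumber's gaps or as a 30-minute slot found in intersect_gaps(josh, maria).
```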
Thank you in advance.
There is no need for any sophisticated algorithm. There is only a small number of possible appointment times throughout a day, say every 30 minutes or so. Iterate over all possible times: 06:00, 06:30, 07:00, ... 20:00. For each time, check whether it matches the requirements; that check can either return a yes/no result or a score saying how good a match that time is. You end up with a list of possible appointment times; pick the best one, or return all of them.
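A minimal sketch of that idea, building on the per-plumber gap lists from the question (Python as pseudocode; the 30-minute step, the time-of-day window encoding, and the function names are my assumptions, not part of the answer):

```python
from datetime import datetime, timedelta

STEP = timedelta(minutes=30)

def candidate_slots(gaps, duration, window=None):
    """Walk each free gap in fixed steps and yield every start time that fits.

    `gaps` is a list of (start, end) datetimes, `duration` a timedelta, and
    `window` an optional (earliest_start_hour, latest_start_hour) filter.
    """
    for gap_start, gap_end in gaps:
        slot = gap_start
        while slot + duration <= gap_end:
            if window is None or window[0] <= slot.hour < window[1]:
                yield (slot, slot + duration)
            slot += STEP

# Hour-long slots in a 10:00-18:00 gap, restricted to morning starts (before noon):
gaps = [(datetime(2024, 5, 30, 10), datetime(2024, 5, 30, 18))]
for start, end in candidate_slots(gaps, timedelta(hours=1), window=(4, 12)):
    print(start.strftime("%H:%M"), "-", end.strftime("%H:%M"))
# 10:00 - 11:00, 10:30 - 11:30, 11:00 - 12:00, 11:30 - 12:30
```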
In TFS, the Remaining Work field is a Double.
How can I set minutes for it?
For quarters of hours this is "easy" (or less difficult):
1h30 = 1.5
1h45 = 1.75
But, for example:
10min = 0.17
20min = 0.33
It's hard!
Suggestions? Or am I overreacting?
Why would you need such a level of precision? I personally only deal in hours, although I know people who do deal in half hours.
I rarely have any task in development that will take less than 30 minutes, and if I did, I would just round it up. I would not expect anyone to have a lot of tasks taking under 30 minutes; if you do, perhaps your tasks are too granular. My tasks are often 2-3 hours in size, and I just change them to 0 when complete, or update the remaining hours at the end of the day or when I realise I have underestimated and need to add more on. I do not perform periodic updates through the day because they don't really benefit anyone.
Remaining Work in TFS is expected to be used as part of the burn-down chart to show the estimated number of hours remaining, so as long as you are tracking along the line it doesn't really matter.
It all comes down to how you have to work: if you are free to adopt agile principles, then just stick with hours/half-hours; if you have management that requires remaining work to the minute, then you will have to go with the approach in your question.
I agree with Dave, no need to track at the minute granularity.
And if for some reason you really do need to track at that level, there is nothing saying that Remaining Work has to be entered in hours. It's just a numeric field, you could enter hours, minutes, days, whatever, so long as everybody consistently uses the same unit (I think there may be a couple reports that show it as hours, but as far as TFS is concerned it doesn't have to be hours if you don't want it to be).
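For what it's worth, if you do keep entering decimal hours, the conversion the question is doing by hand is trivial to automate; a minimal sketch (this is plain arithmetic, not a TFS API):

```python
def minutes_to_remaining_work(minutes, precision=2):
    """Convert a duration in minutes to the decimal-hours value the field expects."""
    return round(minutes / 60, precision)

# minutes_to_remaining_work(90) -> 1.5
# minutes_to_remaining_work(10) -> 0.17
# minutes_to_remaining_work(20) -> 0.33
```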
I'm writing a media player and want to mark media files as listened in order to filter them.
However, I'm lacking a good idea for when to mark a song/video as listened/watched.
Movies tend to create the largest problem. You might not watch the credits in the last two minutes, and you might skip around.
I guess I could track the total number of seconds played, but this causes problems if the first half is watched twice for some reason. Keeping track of which parts of the movie have been played seems like a huge mess.
One of the best solutions I have come up with is to mark the movie/song as listened/watched if the user has played more than X seconds in the last 10% of the file. Then it would be reasonable to assume they have listened to most of it and/or listened to/watched what they wanted.
However, all the solutions above are bad, and I would really like some input.
What about another approach?
If the user gets through more than half of the media without hitting the next/prev/random button or closing the player, then mark that file as listened/watched. You may need to track the time watched and handle overlapping ranges (watching the first two minutes and then re-watching the first minute doesn't mean the user watched 3 minutes of your file).
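For the overlap bookkeeping, a minimal sketch (assuming playback positions are reported as seconds; the merge step collapses overlapping watched ranges so replays aren't double-counted):

```python
def merge_intervals(intervals):
    """Collapse overlapping or touching (start, end) second ranges into disjoint ones."""
    merged = []
    for start, end in sorted(intervals):
        if merged and start <= merged[-1][1]:
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged

def fraction_watched(intervals, duration):
    """Fraction of the file covered at least once, ignoring repeat viewings."""
    covered = sum(end - start for start, end in merge_intervals(intervals))
    return covered / duration

# Watching 0-120 s and then re-watching 0-60 s covers 120 s, not 180 s:
# fraction_watched([(0, 120), (0, 60)], 7200) == 120 / 7200
```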
In my opinion, I'd play more with the skipping/closing behaviour rather than checking whether the user played more than X seconds in the last 10%; you could simply seek to the last 10% and the media would be marked as viewed anyway.
However, my solution is not as accurate as it should be, and maybe a perfect one doesn't exist. Maybe you should also ask on the UX site.
A delay will always occur between a user action and an application response.
It is well known that the lower the response delay, the greater the feeling of the application responding instantaneously. It is also commonly known that a delay of up to 100ms is generally not perceivable. But what about a delay of 110ms?
What is the shortest application response delay that can be perceived?
I'm interested in any solid evidence, general thoughts and opinions.
The 100 ms threshold was established over 30 years ago. See:
Card, S. K., Robertson, G. G., and Mackinlay, J. D. (1991). The information visualizer: An information workspace. Proc. ACM CHI'91 Conf. (New Orleans, LA, 28 April-2 May), 181-188.
Miller, R. B. (1968). Response time in man-computer conversational transactions. Proc. AFIPS Fall Joint Computer Conference Vol. 33, 267-277.
Myers, B. A. (1985). The importance of percent-done progress indicators for computer-human interfaces. Proc. ACM CHI'85 Conf. (San Francisco, CA, 14-18 April), 11-17.
What I remember learning was that any latency of more than 1/10th of a second (100ms) for the appearance of letters after typing them begins to negatively impact productivity (you instinctively slow down, less sure you have typed correctly, for example), but that below that level of latency productivity is essentially flat.
Given that description, it's possible that a latency of less than 100ms might be perceivable as not being instantaneous (for example, trained baseball umpires can probably resolve the order of two events even closer together than 100ms), but it is fast enough to be considered an immediate response for feedback, as far as effects on productivity. A latency of 100ms and greater is definitely perceivable, even if it's still reasonably fast.
That's for visual feedback that a specific input has been received. Then there'd be a standard of responsiveness in a requested operation. If you click on a form button, getting visual feedback of that click (eg. the button displays a "depressed" look) within 100ms is still ideal, but after that you expect something else to happen. If nothing happens within a second or two, as others have said, you really wonder if it took the click or ignored it, thus the standard of displaying some sort of "working..." indicator when an operation might take more than a second before showing a clear effect (eg. waiting for a new window to pop up).
New research as of January, 2014:
http://newsoffice.mit.edu/2014/in-the-blink-of-an-eye-0116
...a team of neuroscientists from MIT has found that the human brain can process entire images that the eye sees for as little as 13 milliseconds... That speed is far faster than the 100 milliseconds suggested by previous studies...
At the San Francisco Opera house, we routinely set up precise delay settings for each of our speakers. We can detect 5 millisecond changes in delay times to our speakers. When you make such subtle changes, you change where the sound sources from. Oftentimes we want sound to sound as if it's coming from someplace other than where the speakers are. Precise delay adjustments make this possible. Sound delays of 15 milliseconds are very obvious even to untrained ears, because they radically shift where the sound sources from. A simple test to prove this is to play sound through multiple speakers, and have the subject close their eyes and point to where the sound is coming from. Now make a slight change in the delay time to one of the speakers of just a few milliseconds, and have the person point again to where the sound is coming from. Making changes in delay times is acoustically very similar to moving the actual speakers.
I don't think anecdotes or opinions are really valid for answers here. This question touches on the psychology of user experience and the sub-conscious mind. The human brain is powerful and fast and mere milliseconds do count and are registered. I am no expert but I know there is much science behind e.g. what Matt Jacobsen mentioned. Check out Google's study here http://services.google.com/fh/files/blogs/google_delayexp.pdf for an idea of how much it can affect site traffic.
Here's another study by Akamai - 2 second response time
http://www.akamai.com/html/about/press/releases/2009/press_091409.html (From https://ux.stackexchange.com/questions/5529/once-apon-a-time-there-was-a-10-seconds-to-load-a-page-rule-what-is-it-nowa )
Does anyone have any other studies to share?
Persistence of vision is around 100 ms, so it should be a reasonable visual feedback delay. 110 ms should make no difference, as it is an approximate value. In practice you won't notice a delay below 200 ms.
From memory, studies have shown that users lose patience and retry an operation after around 2 s of inactivity (in the absence of feedback), e.g. after clicking a confirm or action button. So plan on showing some kind of animation if the action takes longer than 1 s.
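As a rough sketch of that guideline (the helper names are mine, not from any particular UI toolkit, and in a real GUI the callbacks would have to be marshalled onto the UI thread): start a timer when the action begins and only reveal the busy indicator if the action is still running after the threshold.

```python
import threading

def run_with_late_spinner(action, show_spinner, hide_spinner, threshold=1.0):
    """Run action(); show a busy indicator only if it is still running after
    `threshold` seconds, then hide it again once the action completes."""
    timer = threading.Timer(threshold, show_spinner)
    timer.start()
    try:
        return action()
    finally:
        timer.cancel()    # no-op if the spinner already appeared
        hide_spinner()    # safe even if the spinner was never shown
```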
I worked on an application that had a explicit business goal of being blindingly fast, and we had a max allowed server time of 150ms for processing a full web page.
No solid evidence but for our own application, we allow a maximum of one second between a user action and feedback. If it does take longer, a "waiting box" should be shown.
A user should see "something" happening within a second of causing an action.
Use the dual of the test for visual spatial resolution (two parallel black bars, of equal width and with an equal gap between them; reduce the angular subtense until they appear to be one line, i.e. scale down or simply move away. The point at which they seem to merge into one line shows the threshold).
Use a function generator to blink an LED on for an interval, then off, then on, then off -- the same delay for each interval, but repeat the pattern while gradually decreasing that delay. It's the same as above, but with time in place of space.
Imagine an oscilloscope image like so:
_________/^d^\_d_/^d^\_________
I note that at a 41 ms interval I perceive only one longer blink, but at 42 ms I perceive it as an extremely rapid double blink. Thus, the threshold is ~42 ms. It probably varies depending on person, age, condition, etc.
This is close to 24 fps, which is probably why cinema works at that presentation rate.
Reaction time to see something and then decide to react, say by clicking the mouse, is much longer again. Thus, it's not surprising that experiments requiring a reaction response to measure yield a longer time, but that longer delay wasn't what you were asking about, and the above experiment is easy and illuminating!
But note also -- smoothly moving animations require the visual cortex to work harder, delaying visual comprehension. This delay is 'hidden' from perception, so longer delays (several hundred ms) can be 'hidden' by just providing something that's difficult to see clearly because it is moving.
The effect that hides it is called Chronostasis. Basically, glancing somewhere 'new' requires the visual cortex to work harder to 'de-render' / 'recognise' the scene. This takes a remarkably long time, during which your consciousness is essentially 'paused'.
Once looking at a mostly-constant scene, only changes need this processing, so smaller/faster changes are possible and your perceptual experience resumes, and faster/smaller movements are detectable.
The detection of changes visually is processed basically on your retina. Your eyes also have a natural 'bandpass' response -- stare unblinkingly at anything for sufficient time, and at sufficient distance for saccades to be unable to change the image much, and you will find your visual feed fading out to 'grey'. This is what gives us our 'white balance', and is somewhat similar to the automatic gain control on analogue radio/tv.
The point is that your eyes themselves have a time constant to respond, but this is actually dependent on the strength of the stimulus (the brightness of the LED, in our case).
Too bright, and the ability of your retinal cells to 'relax' back from the brightness, ie, respond to the 'sudden dark', is compromised.
The effect that keeps you seeing bright things after the light has stopped is called 'persistence of vision', and old cathode-ray picture tubes depended heavily on it to work at all.
This is the one that's usually 100 ms or so, but it's not a 'sharp' interval -- it's more like an exponential roll-off, and again, its duration changes depending on how bright the stimulus is relative to how dark-adjusted (i.e., sensitive) the eye is at that moment.
For duller, faster changes, especially changes outside your fovea, you will perceive even higher rates easily. Eg, flickering lights. Those outer parts of your retina (most of the area, actually) are adapted to detecting movement, and bringing it to your attention. So it makes sense that although lacking spatial resolution, they have greater time resolution / shorter response rate.
But this also means animating things usually requires even finer time steps, otherwise 'jumpiness' is perceptible, mostly due to that faster response.
Note all the scaling/sliding full screen animations iOS uses -- these essentially exploit chronostasis to hide technically unavoidable loading delays, giving the perception that those products respond instantly and smoothly at all times.
So, show something different within 42 ms -> instant response.
Keep animating otherwise useless hard-to-see-properly visuals continuously at high frame rates, then stop suddenly when done -> hides the delay so long as enough is visually busy, and the delay isn't too long. (probably 250ms is pushing the friendship).
This also seems to tie in with others' perceptions of input lag, for example: http://danluu.com/input-lag/
100ms is totally wrong. You can prove this yourself using your fingers, a desk, and a watch with visible seconds. Synchronising to the watch's seconds, drum out beats on the desk continuously such that 16 beats are drummed out every second. I chose 16 because it is natural to drum out multiples of two, so it's like four strong beats with three weak beats in between. Adjacent beats are clearly discernible by their sound. The beats are separated by about 60ms, so even 60 ms is actually still too high. Therefore the threshold is way below 100ms, especially if sound is involved.
For instance, a drum app or a keyboard app needs a delay of more like 30ms, or else it gets really annoying, because you hear the sound coming from the physical button / pad / key well before the sound comes out of the speakers. Software like ASIO and jack were made specifically to deal with this issue, so no excuses. If your drum app has a 100ms delay, I will hate you.
The situation for VoIP and high powered gaming is actually worse, because you need to react to events in real time, and in music, at least you get to plan ahead at least a little. For an average human reaction time of 200ms, a further 100ms delay is an enormous penalty. It noticeably changes the conversational flow of VoIP. In gaming, 200ms reaction time is generous, especially if the players have a lot of practice.
For a reasonably current scholarly article, try How Much Faster is Fast Enough? User Perception of Latency & Latency Improvements in Direct and Indirect Touch (PDF). While the main focus was on the JND (Just Noticeable Difference) of delay, there is some good background on absolute delay perception, and they also acknowledge and account for 60 Hz monitors (16.7 ms repaint times) in their second experiment.
I am a cognitive neuroscientist who studies visual perception and cognition.
The paper by Mary Potter mentioned above regards the minimum time required to categorize a visual stimulus. However, understand that this is under laboratory conditions in the absence of any other visual stimuli, which certainly would not be the case in the real world user experience.
The typical benchmark for a stimulus-response / input-response interaction, that is, the average amount of time for an individual's minimum reaction speed or input-response detection, is ~200 ms. To be certain there is no detectable difference, this threshold could be lowered to around 100 ms. Below this threshold, the temporal dynamics of your cognitive processes take longer to compute the event than the event itself, so there is nearly no chance of being able to detect or differentiate it. You could go lower, to say 50 ms, but it really wouldn't be necessary. At 10 ms you've gone into the territory of overkill.
For web applications, 200 ms is considered an unnoticeable delay, while 500 ms is acceptable.