How to add a time tracking feature to an iBeacon project?

How do you add a time tracking feature to an iBeacon project? Additionally, are there any projects with available code that have already implemented this?
I want to implement time tracking for a small project. I want to place a beacon scanner on my door and check at what time I leave my house and when I arrive back at my house.

This is a little more complex than it sounds. The two simplest approaches do not work:
1. Every time your phone detects the beacon, assume you are going out or coming back.
2. Every time your phone stops detecting the beacon, assume you are gone.
Approach #1 only works in a very large building where you are always outside radio range unless you are at the door. This is almost never true, so it fails. It also fails if you simply go to the door without exiting or entering.
Approach #2 is more reliable, but only works in a very small building where the beacon is always within radio range. This is unlikely in anything larger than a single room, though you can improve coverage by deploying many beacons.
No solution will ever be perfect, but you can combine the two approaches to make a reasonable guess at when the phone entered and exited the building, based on the time of last detection and how long someone could plausibly remain "inside" the building without the beacon being detected. The best algorithm depends on the specifics of the building and the use case.
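To make that concrete, here is a minimal sketch of the combined approach in C++ (all names and the five-minute grace period are my own illustrative choices, not part of any beacon SDK; tune the timeout to the building):

    #include <chrono>
    #include <optional>

    using Clock = std::chrono::steady_clock;

    // Treat the phone as "inside" while the beacon has been seen recently;
    // log an exit only after a grace period passes with no detections.
    class PresenceTracker {
        std::optional<Clock::time_point> lastSeen_;
        bool inside_ = false;
        static constexpr auto kGracePeriod = std::chrono::minutes(5); // arbitrary

    public:
        // Call whenever the phone detects the beacon.
        void onBeaconDetected(Clock::time_point now) {
            if (!inside_) {
                inside_ = true;
                // record arrival at `now`
            }
            lastSeen_ = now;
        }

        // Call periodically (e.g. once a minute) to check for exits.
        void onTick(Clock::time_point now) {
            if (inside_ && lastSeen_ && now - *lastSeen_ > kGracePeriod) {
                inside_ = false;
                // The best guess for the exit time is *lastSeen_, not `now`:
                // the phone left range at some point after that last reading.
            }
        }

        bool inside() const { return inside_; }
    };

The grace period is the knob described above: large enough that a brief radio dropout inside the building doesn't register as an exit, small enough that the inferred exit time stays useful.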

Related

A C++11-based signals/slots implementation with ordering

I may be a bit in over my head here, but if you never try new things, you'll never learn, I suppose. I'm working with some multi-touch stuff and have built myself a small but functional GUI library. Up until now I've been using Boost's Signals2 library to distribute my detected gestures to all active GUI elements (whether on screen or not). I'm a big fan of avoiding premature optimization, so things have been hunky-dory until now.
I've used VS2013's profiler to find out that when the user goes touch-crazy (the device supports up to 41 simultaneous touches), my system grinds to a halt, and Signals2 is the culprit. Keep in mind that each touch can trigger a number of Gestures, which are all communicated to every GUI element that has registered to interact with that type of Gesture.
Now there are a number of ways to deal with this bottleneck:
1. Have GUI elements work more cleverly and disconnect them when they're off-screen.
2. Optimize the signals/slots system so the calls are resolved faster.
3. Prioritize events.
I'm not a big fan of ever having to deal with #3, if avoidable, as it'll directly impact the responsiveness of my application. #1 should probably be implemented, but I'm more interested in getting to #2 first.
I don't really need any big fancy stuff. The signals/slots system I need really only has to do the core emission work, along with these two features:
Slots must be able to return a value that ends the emission, effectively cancelling any subsequent handling of a signal.
Slots must be orderable, and fairly efficiently so. GUI elements that are interacted with will pop up above others, so this type of reordering is bound to happen quite often.
I stumbled across this really interesting implementation
https://testbit.eu/2013/cpp11-signal-system-performance/
which seems to have everything except the ordering I need. I've only looked over the code once, and it does seem a little intimidating. If I were to try to add ordering capabilities, I'd prefer not to change too much around if possible. Does anyone have experience with this stuff? I'm fairly certain that a linked list is not optimal for frequent removal and insertion, but then again, the structure probably needs to be optimized most for frequent emission calls.
Any thoughts are most welcome!
--- Update ---
I've spent a little time adding the features I needed to the public-domain code above and pasted the complete (and somewhat hacky) version here:
SimpleSignal with added features
In short, I've added:
Blockable connections (implemented via a simple if statement)
Depth/order parameter (linear-search insertion)
With those additions, keep in mind it has these current issues:
Blocked connections are simply skipped, not actively removed from the data structure, so having a lot of blocked connections will affect run-time performance.
Depth is only maintained during insertion, so if you'd like to change the depth you'll have to disconnect and reconnect your slot.
Since the SignalLink interface has become exposed (as a result of my block implementation), it's less safe from a user perspective. It's much easier to shoot yourself in the foot with this version by messing with existing references and pointers.
This implementation hasn't been as thoroughly tested as the original, I'm sure. I did try out the new functionality a bit. User beware.
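For anyone who wants the shape of those additions without reading the full paste, here is a stripped-down sketch (my own illustration, not the SimpleSignal code itself) of depth-ordered slots, the blocking if statement, and cancellable emission:

    #include <algorithm>
    #include <cstddef>
    #include <functional>
    #include <vector>

    template <typename... Args>
    class OrderedSignal {
        struct Slot {
            int depth;                       // lower depth runs first
            bool blocked;
            std::function<bool(Args...)> fn; // return true to cancel the emission
        };
        std::vector<Slot> slots_;

    public:
        // Linear-search insertion keeps slots_ sorted by depth; changing a
        // slot's depth means disconnecting and reconnecting, as noted above.
        void connect(int depth, std::function<bool(Args...)> fn) {
            auto pos = std::find_if(slots_.begin(), slots_.end(),
                                    [&](const Slot& s) { return s.depth > depth; });
            slots_.insert(pos, Slot{depth, false, std::move(fn)});
        }

        void setBlocked(std::size_t index, bool blocked) {
            slots_.at(index).blocked = blocked;
        }

        // Calls slots in depth order; a slot returning true ends the emission.
        void emit(Args... args) {
            for (const Slot& s : slots_) {
                if (s.blocked) continue;   // skipped, not removed: the run-time
                                           // cost of blocked slots noted above
                if (s.fn(args...)) return; // emission cancelled by this slot
            }
        }
    };

A vector keeps emission (the hot path) cache-friendly at the cost of O(n) insertion, which matches the trade-off discussed in the question.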

Calculating Project Programming Times [closed]

As a lead developer I often get handed specifications for a new project, and get asked how long it'll take to complete the programming side of the work involved, in terms of hours.
I was just wondering how other developers calculate these times accurately?
Thanks!
Oh, and I hope this isn't considered an argumentative question; I'm just interested in finding the best technique!
Estimation is often considered a black art, but it's actually much more manageable than people think.
At Inntec, we do contract software development, most of which involves working against a fixed cost. If our estimates were constantly way off, we would be out of business in no time.
But we've been in business for 15 years and we're profitable, so clearly this whole estimation thing is solvable.
Getting Started
Most people who insist that estimation is impossible are making wild guesses. That can work sporadically for the smallest projects, but it definitely does not scale. To get consistent accuracy, you need a systematic approach.
Years ago, my mentor told me what worked for him. It's a lot like Joel Spolsky's old estimation method, which you can read about here: Joel on Estimation. This is a simple, low-tech approach, and it works great for small teams. It may break down or require modification for larger teams, where communication and process overhead start to take up a significant percent of each developer's time.
In a nutshell, I use a spreadsheet to break the project down into small (less than 8 hour) chunks, taking into account everything from testing to communication to documentation. At the end I add in a 20% multiplier for unexpected items and bugs (which we have to fix for free for 30 days).
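As a toy illustration of that spreadsheet arithmetic (the task hours here are invented, not from any real project):

    #include <cstdio>
    #include <numeric>
    #include <vector>

    int main() {
        // Each chunk is under 8 hours, per the approach described above.
        std::vector<double> taskHours = {6, 4, 8, 3, 7, 5};
        double base = std::accumulate(taskHours.begin(), taskHours.end(), 0.0);
        double padded = base * 1.20; // 20% contingency for bugs and surprises
        std::printf("base: %.1f h, with contingency: %.1f h\n", base, padded);
    }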
It is very hard to hold someone to an estimate that they had no part in devising. Some people like to have the whole team estimate each item and go with the highest number. I would say that at the very least, you should make pessimistic estimates and give your team a chance to speak up if they think you're off.
Learning and Improving
You need feedback to improve. That means tracking the actual hours you spend so that you can make a comparison and tune your estimation sense.
Right now at Inntec, before we start work on a big project, the spreadsheet line items become sticky notes on our kanban board, and the project manager tracks progress on them every day. Any time we go over or have an item we didn't consider, that goes up as a tiny red sticky, and it also goes into our burn-down report. Those two tools together provide invaluable feedback to the whole team.
Here's a pic of a typical kanban board, about halfway through a small project.
You might not be able to read the column headers, but they say Backlog, Brian, Keith, and Done. The backlog is broken down by groups (admin area, etc), and the developers have a column that shows the item(s) they're working on.
If you could look closely, all those notes have the estimated number of hours on them, and the ones in my column, if you were to add them up, should equal around 8, since that's how many hours are in my work day. It's unusual to have four in one day. Keith's column is empty, so he was probably out on this day.
If you have no idea what I'm talking about re: stand-up meetings, scrum, burn-down reports, etc then take a look at the scrum methodology. We don't follow it to the letter, but it has some great ideas not only for doing estimations, but for learning how to predict when your project will ship as new work is added and estimates are missed or met or exceeded (it does happen). You can look at this awesome tool called a burn-down report and say: we can indeed ship in one month, and let's look at our burn-down report to decide which features we're cutting.
FogBugz has something called Evidence-Based Scheduling which might be an easier, more automated way of getting the benefits I described above. Right now I am trying it out on a small project that starts in a few weeks. It has a built-in burn down report and it adapts to your scheduling inaccuracies, so that could be quite powerful.
Update: Just a quick note. A few years have passed, but so far I think everything in this post still holds up today. I updated it to use the word kanban, since the image above is actually a kanban board.
There is no general technique. You will have to rely on your (and your developers') experience. You will have to take into account all the environment and development process variables as well. And even if you cope with all this, there is a big chance you will miss something.
I do not see any point in estimating the programming time only. The development process is so interconnected that estimating one side of it alone won't produce any valuable result. The whole thing should be estimated, including programming, testing, deploying, developing the architecture, writing docs (tech docs and the user manual), creating and managing tickets in an issue tracker, meetings, vacations, sick leaves (sometimes it is better to wait for the guy than to assign the task to someone else), planning sessions, coffee breaks.
Here is an example: it takes only 3 minutes for an egg to fry once you drop it into the frying pan. But if you say that it takes 3 minutes to make a fried egg, you are wrong. You missed out:
getting the frying pan (do you have one ready? Do you have to go and buy one? Do you have to wait in a queue for this frying pan?)
making the fire (do you have an oven? Will you have to get logs to build a fireplace?)
getting the oil (have any? have to buy some?)
getting an egg
frying it
serving it on a plate (have any ready? clean? wash it? buy it? wait for the dishwasher to finish?)
cleaning up after cooking (you won't leave the frying pan dirty, will you?)
Here is a good starting book on project estimation:
http://www.amazon.com/Software-Estimation-Demystifying-Practices-Microsoft/dp/0735605351
It has a good description of several estimation techniques and can get you up to speed in a couple of hours of reading.
Good estimation comes with experience, or sometimes not at all.
At my current job, my two co-workers (who apparently had a lot more experience than me) usually underestimated times by a factor of 8 (yes, EIGHT). I, OTOH, have only once in the last 18 months gone over an original estimate.
Why does it happen? Neither of them appeared to actually know what they were doing, code-wise, so they were literally thumb-sucking (i.e., guessing).
Bottom line:
Never underestimate; it is much safer to overestimate. With the latter you can always "speed up" development if needed. If you are already on a tight timeline, there is not much you can do.
If you're using FogBugz, then its Evidence Based Scheduling makes estimating a completion date very easy.
You may not be, but perhaps you could apply the principles of EBS to come up with an estimation.
This is probably one of the big mysteries of the IT business. Countless failed software projects have shown that there is no perfect solution to this yet, but the closest thing to solving it I have found so far is to use the adaptive estimation mechanism built into FogBugz.
Basically, you break your milestones into small tasks and guess how long each will take you to complete. No task should be longer than about 8 hours. Then you enter all these tasks as planned features into FogBugz. When completing the tasks, you track your time with FogBugz.
FogBugz then evaluates your past estimates and actual time consumption, and uses that information to predict a window (with probabilities) in which you will have fulfilled your next few milestones.
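For intuition, the published description of Evidence-Based Scheduling boils down to a Monte Carlo simulation over your own track record. Here is a rough sketch of that idea (my reading of the technique, with invented numbers; not FogBugz's actual code):

    #include <algorithm>
    #include <cstdio>
    #include <random>
    #include <vector>

    int main() {
        // velocity = estimated / actual, one entry per completed past task
        std::vector<double> velocities = {1.0, 0.5, 0.8, 1.2, 0.4};
        std::vector<double> remaining  = {6, 8, 4, 7}; // estimated hours left

        std::mt19937 rng(42);
        std::uniform_int_distribution<std::size_t> pick(0, velocities.size() - 1);

        std::vector<double> totals;
        for (int run = 0; run < 1000; ++run) {
            double total = 0;
            for (double est : remaining)
                total += est / velocities[pick(rng)]; // rescale by a sampled velocity
            totals.push_back(total);
        }
        std::sort(totals.begin(), totals.end());
        // Read off percentiles: "we're 50%/90% confident we'll be done within..."
        std::printf("50%%: %.1f h  90%%: %.1f h\n",
                    totals[totals.size() / 2], totals[totals.size() * 9 / 10]);
    }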
Overestimation is rather better than underestimation. That's because we don't know the "unknown", and (in most cases) specifications do change during the software development lifecycle.
In my experience, we use iterative steps (just like in agile methodologies) to determine our timeline. We break projects down into components and overestimate on those components. We take the sum of these estimates and add extra time for regression testing and deployment, and all the good work...
I also think you have to look back at your past projects and learn from those mistakes to see how you can estimate wisely.

What are the advantages of splitting a developer's time between two projects? [closed]

I have two projects, with identical priorities and work hours demand, and a single developer. Two possible approaches:
1. Deliver one project first.
2. Split the developer's time and deliver both later.
I can't see any reason why people would choose the second approach. But they do. Can you explain why?
It seems to me that this decision often comes down to office politics. One business group doesn't want to feel any less important than another, especially with identical priorities set at the top. Regardless of how many different ways you explain why doing both at the same time is a bad idea, the politics seem to get in the way.
To get the best product to the users, you need to prevent developer thrashing. When the developers are thrashing, the risk of defects and length of delivery times begin to increase exponentially.
Also, if you can put your business hat on, you can try to explain to them that right now, nobody is getting any value from what the completed products will deliver. It makes more sense for the business to get the best ROI product out the door first to begin recouping the investment ASAP, while the other project will start as soon as the first is finished.
Sometimes you need to just step away from the code you have been writing for 11 hours in order to stay maximally productive. After you have been staring at the minutiae of a system you have been implementing for a long time it can become difficult to see the forest for the trees, and that is when you start to make mistakes that are hard to un-make.
I think it is best to have 2-3 current projects; one main one and 1-2 other projects that aren't on such a strict timeline.
If both projects have the same priority for the company, one obvious reason is for project managers to give higher management the illusion that both of the projects are taken care of.
Consider that the two projects could belong to different customers (or be requested by different people from higher management).
No customer wants to be told to wait while a different customer's project is given priority.
"We'll leave the other one for later" is, a lot of times, not an acceptable answer, even though this leads to delays for both projects.
I believe this is related to the notion of "Perceived Responsiveness" in a software program. Even if something takes more time to do, it looks faster when it appears to be doing something, instead of idly waiting for some other stuff to complete.
It depends on the dependencies involved. If you have another dependency on the project that can be fulfilled when the project is not 100% complete, then it may make sense to split the developer's time. For example, if the task is large, it may make sense to have the primary developer do a design, then move on to a second task while a team member reviews the design the primary developer came up with.
Furthermore, taking developers off a single long-running task can help to alleviate boredom. Yes, there is potentially significant loss in the context switch, but if it helps keep the dev sharp, it's worth it.
If you go by what's in the great and holy book "Peopleware", you should keep your programmer on one project at a time.
The main reason for this is that divided attention will reduce productivity.
Unfortunately, because so many operational managers are good businessmen rather than good managers, they may think that multitasking or working on both projects somehow means more things are getting done (which is impossible; a person can only physically exist in one stream of the space-time continuum at one time).
hope that helps :)
LM
I think the number one reason, from a management standpoint, is perceived progress. If you work on more than one project at the same time, stakeholders are able to see progress on both immediately. If you hold one project off, the stakeholders of that project may not like that nothing is being worked on.
Working on more than one project also minimizes risk somewhat. For example, if you work on one project first and it takes longer than expected, you could run into issues with the second project. Stakeholders also most likely want their project done now; holding one off because of another can make them reconsider going ahead with it at all.
Depending on what the projects are, you might be able to leverage work done in one for the other. If they are similar, doing both at the same time could be beneficial. If you do them in sequence, only the subsequent projects can benefit from the previous ones.
Most often, projects are not a constant stream of work. Sometimes developers are busy and sometimes not. If you only work on one project at a time, a developer and other team members would likely be doing nothing while the more "administrative" tasks are taking place. Managing time across more than one project allows teams to get more done in a shorter timeframe.
As a developer I prefer working on multiple projects as long as the timelines are reasonable. As long as I'm not being asked to do both at the same time with no change in the schedule I am fine. Often if I'm stuck on one project I can work on the other. It depends on the projects though.
I'd personally prefer the former but management might want to see progress in both projects. You might also recognise inaccurate estimates earlier if you are doing some work on both, enabling you to inform the customer earlier.
So from a development perspective option 1 is best, but from a customer service point of view option 2 is probably better.
It's about managing your clients' expectations. If you can tell both clients you are working on their projects, just a little more slowly because of other commitments, that is better than saying you are putting their project off until you finish another client's project; a client told that is going to jump ship and find someone who can start working on their project now.
It's a placebo effect: splitting a developer between two projects in the manner you've described gives people/"the business" the impression that work is being completed on both projects (at the same rate/cost/time), whilst in reality it's probably a lot less efficient, since context switching and other considerations carry a cost (in time and effort).
On one hand, it can get the ball rolling on things like requirement clarifications and similar tasks (so the developer can switch to the alternate project when they are blocked) and it can also lead to early input from other business units/stakeholders etc.
Ultimately though, if you have one resource then you have a natural bottleneck.
The best thing you can do for that lone developer is to intercept people (to keep them from distracting that person), and try to carry some of the burden around requirements, chasing clarifications, and handling user feedback.
The only time I'd ever purposely pull a developer off their main project is if they would be an asset to the second project, and the second project was stalled for some reason. If allowing a developer to split a portion of their time could help jump-start a stalled project, I'd do that. This has happened to me with "expert" developers - the ones who have a lot more experience/specialized skills/etc.
That being said, I would try to keep the developer on two projects for as little time as possible, and bring them back to their "main" project. I prefer to allow people to focus on one task at a time. I feel that my job as a manager is to balance and shift people's priorities and focus - and developers should just develop as much as possible.
There are three real-life advantages of splitting developers' time between projects that cannot be ignored:
Specialisation: doing or consulting on work that requires similar specialised knowledge in both projects.
Consistency and knowledge sharing: bringing consistency into the way two separate products are built and work, spreading knowledge across the company.
Better team utilisation: on a rare occasion when one of the projects is temporarily on hold waiting for some further input.
Splitting time between several projects is beneficial when it does not involve a significant change in context.
Having a developer work single-handedly on multiple software development projects negates the benefits of specialisation (there isn't any in that case), consistency and knowledge sharing.
It leaves just the advantage of time utilisation; however, if contexts differ significantly and there is no considerable overlap between projects, the overhead of switching will very likely exceed any time saved.
Context switching is a very interesting beast: contrary to its name, which implies a discrete change, the process is always gradual. There are various degrees of having context information in one's head: 10% context (shallow), 90% (deep). It takes less time to shallow-switch than to fully switch; however, there is a direct correlation between the amount of context loaded (concentration on the task) and output quality.
It’s possible to fill your time entirely working on multiple distinct projects relying on shallow-switching (to reduce the lead time), but the output quality will inevitably suffer. At some point it’s not only “non-functional” aspects of quality (system security, usability, performance) that will degrade, but also functional (system failing to accomplish its job, functional failures).
By splitting the time between two projects, you can reduce the risk of delaying one project because of another.
Let's assume the estimate for both projects is 3 months each. By doing it serially, one after the other, you should be able to deliver the first project after 3 months, the second project 3 months later (i.e. after 6 months). But, as things go in software development, chances are that the first project encounters some problems so it takes 12 months instead. Or, even worse, goes into the "in use, but never quite finished" purgatory. The second project starts late or even never!
By splitting resources, you avoid this problem. If everything goes well with the second project, you are able to deliver it after 6 months, no matter how well the first project does.
The real-life situation where working on multiple projects can be an advantage is when the spec is unclear (every time) and the customer is often unavailable for clarification. In those cases you can just switch to the other project.
This will cause some task switching and should be avoided in a perfect world, but then again...
This is basically my professional life in a nutshell :-)

Coming up with time/staff required estimates for converting a desktop app into a client-server app

My boss is bidding on a project to convert a desktop application into one that runs online as a client-server application. The original app has a little more than a quarter of a million lines of C++ (MFC) code that's not cleanly divided between engine and front-end.
I need to come up with estimates of how long it will take and how many people would be needed for such a project. We don't have anyone on staff capable of doing this project, and we don't have any Windows machines, so we'd have to subcontract/outsource.
Alternatively, what arguments can I use to convince my boss that this project is a bad idea?
It would take a very long time, as most likely you would have to change languages. It would basically be a complete rewrite. You may even be better off just defining what the old system did and rewriting it from scratch, only referring to the old code when you need to duplicate business logic.
Without a clear division between the engine and the front-end, it's a bad idea. Giving over the code to a contractor or such would require a large amount of training just to understand the code, including parts that are meant to be rewritten as web apps.
You would first need to estimate how long it would take to clearly define that boundary, and then you can work out how long the rest of the project would take. That way you can train the contractor etc. on the parts that relate to the presentation tier and the API for the engine without explaining how the engine works (which should be irrelevant to the presentation components).
Give the info to qualified consultants/outsourcers (send it to me, for example) and consider the ROI based on their estimates.
It definitely sounds like you don't have the ability to do this efficiently in-house, but that doesn't mean it's a bad idea; it depends on the ROI.
For example, if your boss thinks this thing could generate $1M in revenue in the first year and a contractor could do the conversion in 6 months for $250K, the ROI (300% in 18 months) is probably good enough to do it.
If the numbers are reversed, then it is obviously a Bad Idea ;-)
This sounds like a job for COCOMO II. COCOMO has been used successfully to estimate size, cost, and staff size, but it can be a pain: it requires a lot of details and time. It does, however, answer the question of how to go about estimating the size of the project, the number of people required, and the time it will take.
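For a sense of the model's shape, the core COCOMO II effort and schedule equations are easy to sketch. The 2.94/0.91/3.67/0.28 constants are the published COCOMO II.2000 calibration; the scale-factor sum and effort multiplier below are invented placeholders, whereas the real model derives them from rated cost drivers:

    #include <cmath>
    #include <cstdio>

    int main() {
        double ksloc = 250.0;          // ~250 KLOC, as in the question
        double A = 2.94, B = 0.91;     // published calibration constants
        double scaleFactorSum = 20.0;  // invented: sum of the 5 scale factors
        double effortMultiplier = 1.2; // invented: product of the 17 cost drivers

        double E = B + 0.01 * scaleFactorSum;
        double personMonths = A * std::pow(ksloc, E) * effortMultiplier;
        double months = 3.67 * std::pow(personMonths, 0.28 + 0.2 * (E - B));

        std::printf("effort: %.0f person-months, schedule: %.1f months, avg staff: %.1f\n",
                    personMonths, months, personMonths / months);
    }

Treat the output as an order-of-magnitude cross-check on a bid, not a quote.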
The only way to properly estimate this is to give it to someone who 1. will be working on it; and 2. understands the language it's in as well as the language you are converting to.
Regardless, it is ALWAYS a BAD IDEA to bid on projects for which you have no experience. Every single time I've seen someone do that they badly underbid it. Not just in terms of the amount of work required, but also in the learning curve necessary to even begin to understand the underlying processes.
Especially beware of bidding a project you have no experience for then explicitly telling a subcontractor what their budget is to get a project done. I would never work under those conditions, again, because 100% of the time the person doing the bidding is going to get it way wrong.
One more thing: if it's a desktop app with a backend, then it's already client-server. Do you mean to take an existing client/server app and convert it to a web app?

How do "spikes" figure in the schedule / estimation game? [closed]

Might be subjective and/or discussion.. but here goes.
I've been asked to estimate a feature for the next big thing at work. I break it down, use story points, and come up with an estimate. The feature, however, calls for interfacing with GoDiagrams, a third-party diagramming component, in addition to various other company initiatives (a whole set of 2008_Limited_Edition frameworks/services :). I've been tracking myself using a burn-up chart, and I find that I'm unable to sustain my pace, primarily due to "spikes" (Definition).
I estimate for 2 points a week, and then I find myself working weekends (well, trying to... I end up neither here nor there) because I can't figure out where to hook in so that I can preview user actions, show a context menu, etc. In the end I spend time making spikes that throw my schedule off track... which decreases its value... it doesn't give the right picture.
Spikes are needed to drive nails through the planks of ignorance. But how are they factored into the estimation equation? Doing all required spikes before the feature seems wrong (they might turn out to be YAGNI). Doing them in between disrupts my flow. Right now it's during pre-iteration planning, but this is pushing the touchline out on a weekly basis.
I guess you are constantly underestimating
what you do already know about the 3rd party component
how long it takes you to create usable/helpful spikes for unknown areas
1. Get better at estimating those two things.
So, it's all about experience. No matter what methodology you use, they will help you to use your experience better, not replace it.
2. Try not to lose track when working on those spikes.
They should be short, time boxed sessions. They are not about playing around with all the possible features listed on the marketing slides.
Give them focus, two or three options to explore. Expect them to deliver one concrete result.
Update (Gishu): To summarize:
Spikes need to be explicit tasks, defined in the iteration planning step.
If a spike exceeds its timebox period, stop working on it. Shelve the associated task. Complete the other tasks in the current iteration bucket. Return to the shelved task, or add a more elaborate/broken-down spike to the next iteration along with the associated task. Tag a more conservative estimate onto the generation-1 spike next time.
If you run out of time in your timeboxed spike, you should still stop and complete your other committed work. You should then add another spike to your next iteration to complete the necessary work you need to complete in order to accurately estimate the task resulting from the spike.
If there is a concern over spiking things for too long and this becoming a problem - this is one reason I like 1 week iterations. :-)
#pointernil..
It's more a case of no estimation coupled with an Indy-Jones, head-first approach to tackling a story. I estimate stories by their content; currently I don't take into account the time required to find the right incantation to make the control library play nice. That sometimes takes more time than my application logic. So to rephrase the original question: should spikes be separate tasks in the iteration plan, added on a JIT basis before you start working on a particular story?
My spikes are extremely focussed... I just can't wait to get back to the "real" problems, e.g. "How do I show a context menu from this control?" I may be guilty of not reading the entire 150+ page manual or code samples, but then time is scarce. The first solution that solves the problem gets the nod and I move on. But when you're unable to find that elusive event or the NIH pattern of notification used by the component, spikes can be time-consuming. How do I timebox something that is unknown? E.g. my timebox has elapsed and I still have no clue how to plug in my custom context menu. How do I proceed? Keep hacking away?
Maybe this comes under the "Buffering Uncertainty" scheme of things... I'll see if I can find something useful in Mike Cohn's book.
I agree with pointernil. The only issue is that your estimates are incorrect. Which is no big drama, unless you've just blown out a 3-million-dollar project, of course :-)
If it happens once, it's a learning experience. If it happens again and the result is better, then you've got another learning experience under your belt. If you are constantly underestimating and your percentages are getting worse, you need to wise up a bit. No methodology will get you out of this.
Spikes just need to be given the time they need. The one thing I've seen happen repeatedly in my experience is that people expect to be able to nail a technology within a couple of hours, or a day. That just doesn't happen in real life. The simplest issue, even a bug caused by a typo, can have a developer pulling their hair out for huge chunks of time. Be honest about how competent you or your staff really are, and put it in the budget.
