How can I keep track of my CircleCI build minutes usage?

I'm currently trialling the free tier of CircleCI on their Linux build machines. They claim to offer:
1,500 build minutes per month
1 container
1 concurrent build
I've added some repos and done some builds, but there does not seem to be any way to keep track of my minutes usage. In the short term I doubt I will go over this generous limit, but it might be possible in the future, and I'd like to keep an eye on that before I put too much effort into building around this CI system.
The only thing that looks relevant is the Settings > Plan > Overview screen.
It says my usage data will be updated within 30 days. Does that mean I have to wait for the end of the billing period every month to see what my usage was? I want to see my usage data in real time.

Aha, I found an old answer in a customer support forum, from a CircleCI employee, dated October 2016. It seems that, at least previously, this feature was not supported:
From time to time, we look at the logs and send emails to organizations that do not have paid plans and whose aggregate builds are more than 1,500 minutes in a given month. To date, we haven't had the need (or desire) to shut anyone down for going over. Most folks either upgrade or realize that they unintentionally left a webhook running on a project that builds constantly (which uses AWS resources that we still incur costs against!)
As an organization, CircleCI cares that users get value out of their CI system. If you are getting value and using us for work that matters to you, hopefully you'll consider our paid options (you will get more containers for concurrency AND parallelism, you'll get engineer support, and features like Insights!).
Finally - we do have plans to show linux minutes in-app in the future but it's not on the current quarter's roadmap.
However, a day or so after I took the screenshot in the question, the data has now populated.
I will keep an eye on this to see if it is indeed real-time, but it looks correct to me.
Update
The usage facility has been non-operational for two weeks, and others have reported problems as well.
While this may be fixed in due course, it may be worth bearing in mind that it seems to be fragile, so open your Network tab if you're having problems!

Related

Getting hours worked during a sprint in Team Services

I lead a team of developers whose primary responsibility is the sustainment of our production applications. Once a month we work on a support sprint for an app, fixing any reported bugs and adding new enhancements to that application. My leadership has asked for a report of the hours worked by each developer during the sprint, which I know I can get by looking at each task, but is there a way to generate some kind of report from Team Services that would have that information in it?
The work item experience in VSTS isn't intended to be a time tracking mechanism, it's intended to be a tool for teams to manage work in an Agile fashion. Agile isn't about hours worked, it's about delivering value and meeting commitments.
You'll need to evaluate a third-party extension, such as TimeTracker. I have no experience with it, so I can't vouch for how good it is (or isn't!).
There is no built-in report that can generate what you want. Since TFS/VSTS is an effort tracking system and not a time tracking system, it is a losing battle to try to track billable time as part of TFS/VSTS.
Agile projects don't focus on how long individual tasks take -- they focus on how much value the development team is providing over the course of a set period of time. One thing might be estimated low, one task might be estimated high, but it ultimately doesn't matter as long as the team delivers what they committed to deliver.
Regarding Scrum, I suggest you take a look at this related question: Reporting on completed sprint
If you insist on this, you could use a third-party extension such as Imaginet Time Sheet or Timetracker. Timetracker, for example, is fully integrated into TFS: it allows tracking directly against work items, offers data export, and can automatically fill the fields mentioned above. With the help of the extension, a project manager can have all times, based on rules, automatically reported back into MS Project.

Frequently our GitLab is getting slow

For the last few days, our GitLab (CE) instance has frequently been running slowly. We have a hook for CI with Jenkins, and we installed GitLab with OmniAuth. I don't have any further ideas about the cause, because we didn't change anything on our instances.
We are new to the GitLab environment. We have been working with GitLab since December 2016 and have never faced this kind of issue before. I hope to fix this problem with your help. Kindly help me fix the issue.
(Our GitLab version and hardware details were posted as an image.)
How can I overcome this issue?
These are just some as-is suggestions offered without warranty, but they may help guide you to solving the problem.
Occam's Razor
You mentioned that these issues appear to have just started most recently. This means that the VERY FIRST place to look is what may have changed around the time that these issues occurred. If you have change control for your infrastructure, start there. Make absolutely sure nobody has changed anything around the time these issues started happening. Check your logs for any warnings that may have started showing up. If your OS has a security log or logs configuration changes, check those. If you don't have good visibility/audit-ability into your environment, this may be hard, but if you can identify something that changed around the same time as these issues started occurring, that is most often going to be your problem.
Specificity
It may be helpful for you to describe what you mean by it getting slow. Is it a specific operation that is slow? Or is it all activity? If it's something specific, like triggering a Jenkins job, then you can start to isolate your search there.
It can also help to run top on your server to get a picture of what might be causing the issues. There might be a specific process running on the machine that is dominating everything else and eating all of the resources.
Hardware
First thing I would check is to make sure your hardware configuration matches the 'Hardware requirements' guidelines on gitlab's website:
https://docs.gitlab.com/ce/install/requirements.html#requirements
Based on what you've posted, the CPU and memory on your system seem adequate for several thousand users, so I'm going to assume this isn't a problem, but in case you do have thousands of users, I will add some brief information on this. Your disk configuration (other than size) is not presented in the information above, so we don't know if that is sufficient or not.
I would recommend running vmstat on the server (since it's GitLab, I am assuming this is running on Linux, since they do not recommend Windows installations) to get some basic information about what is going on. The vmstat command will give you several columns of information. To the very left there should be a column 'r'. This is the 'run queue', or the number of processes that are waiting to be run on a CPU. If the value in that column is large compared to the number of cores the system has, you probably have a CPU bottleneck. The next column, 'b', is processes that are blocked. If this is large, you probably don't have a CPU bottleneck. To the right, there are CPU columns: us, sy, id, or something along these lines. These columns are a breakdown of where the CPU is spending its time, either in the application code (us), in the OS code (sy), or waiting (id). High percentage numbers in us generally indicate that you either are running healthily or have a CPU bottleneck. High percentage numbers in sy are usually going to indicate some kind of contention, possibly a configuration issue like having too many worker threads configured for the number of CPUs you have. A high percentage number in id usually indicates that the system either isn't doing much, or can't do much because it's waiting on something like disk or an external database.
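To make those rules of thumb easier to apply, here is a minimal sketch (in Python, purely for illustration) that shells out to vmstat, takes the last sample, and prints the kinds of hints described above. The thresholds are arbitrary assumptions on my part, not GitLab-specific guidance.

```python
#!/usr/bin/env python3
"""Rough vmstat interpretation helper (illustrative sketch only)."""
import os
import subprocess

def sample_vmstat(intervals=5):
    # `vmstat 1 N` prints N one-second samples; the first sample is an
    # average since boot, so the last line is the freshest picture.
    out = subprocess.run(["vmstat", "1", str(intervals)],
                         capture_output=True, text=True, check=True).stdout
    lines = [l for l in out.splitlines() if l.strip()]
    header = lines[1].split()   # r b swpd free ... us sy id wa st
    values = lines[-1].split()
    return dict(zip(header, map(int, values)))

def interpret(stats):
    cores = os.cpu_count() or 1
    notes = []
    if stats["r"] > cores:
        notes.append(f"run queue {stats['r']} > {cores} cores: possible CPU bottleneck")
    if stats["b"] > 0:
        notes.append(f"{stats['b']} blocked processes: possible I/O wait")
    if stats["sy"] > 30:
        notes.append("high system CPU: contention or misconfiguration (e.g. too many workers)")
    if stats["id"] > 80 and stats["r"] <= cores:
        notes.append("mostly idle: look for external bottlenecks (disk, NFS, auth provider)")
    return notes or ["nothing obviously wrong in this sample"]

if __name__ == "__main__":
    for note in interpret(sample_vmstat()):
        print(note)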
So if the 'b' and/or 'id' columns in your vmstat output have high numbers, we may want to consider the possibility of there being an I/O bottleneck. Here are a couple introductory articles on evaluating Linux IO for bottlenecks that might help you determine if this is the case:
https://bartsjerps.wordpress.com/2011/03/04/io-bottleneck-linux/
http://www.linux-mag.com/id/2001/
These articles should get you pointed in the right direction to help you decide if your disks aren't fast enough.
One thing to note: if you're seeing what appears to be a CPU bottleneck (high r values, high us values), make sure that situation makes sense for the number of users you have. The CPU bottleneck may be caused by a virtualization issue, or some OS issue causing the CPU to perform poorly, not just by the CPU hardware itself being insufficient.
Topology
One thing mentioned in the gitlab requirements I linked to above is that it is not recommended to run GitLab runner on the same box as GitLab itself. This is something I would say is true for any CI software working with GitLab. If you're running GitLab Runner or Jenkins on the same box as GitLab itself, you should consider moving those to their own hardware.
If you have thousands of users, you may wish to get in contact with GitLab themselves and have consulting on how to get an enterprise-grade cluster stood up and what that looks like. There are people who are experts in the specific hardware configurations that make sense for a very large GitLab installation, and I am not one of them. However, if you don't have a large number of users, the hardware you have is probably not the issue.
Software
If you're running things like vmstat and iostat and you're not finding any specific hardware bottleneck, there may be a configuration issue. Make sure you have a good number of Unicorn Workers configured, so that the box can properly utilize your hardware.
External bottlenecks
Make sure things like network speed on the server are sufficient for its needs. Make sure users trying to reach the server aren't being bottlenecked by a misconfigured network. If you're using OmniAuth, make sure the provider is performing correctly. For example, if you're using some external authentication, and that isn't scaling/performing well, you'll get bad performance in GitLab as well. These are especially important to look at if you're not seeing much hardware utilization using the methods above.
Two aspects of the latest version (GitLab 12.10, April 2020) can help accelerate GitLab:
the application server, which switches back to Puma
the caching of Git info/refs
The last point is:
When fetching changes from a Git repository, the server advertises a list of all the branches and tags in the repository, known as refs.
In some instances, we have observed up to 75% of all requests to the GitLab web server are requests for the refs.
In the best case, when all the refs are packed, this is an inexpensive operation.
However, when there are unpacked refs, Git must iterate over the unpacked refs. This causes additional disk I/O, which is slow when using high latency storage like NFS.
In GitLab 12.10, info/refs are cached to improve the performance of ref advertisement and decrease the pressure on Gitaly in situations where refs are fetched very frequently.
In testing this feature on GitLab.com, we observed read operations outnumber write operations 10 to 1, and saw median latency decrease by 70%.
For GitLab instances using NFS for Git storage, we expect even greater improvements.
See Documentation and Issue.

What’s the ROI of Continuous Integration?

Currently, our organization does not practice Continuous Integration.
In order for us to get a CI server up and running, I will need to produce a document demonstrating the return on the investment.
Aside from cost savings by finding and fixing bugs early, I'm curious about other benefits/savings that I could stick into this document.
My #1 reason for liking CI is that it helps prevent developers from checking in broken code, which can sometimes cripple an entire team. Imagine I make a significant check-in involving some db schema changes right before I leave for vacation. Sure, everything works fine on my dev box, but I forget to check in the db schema changescript, which may or may not be trivial. Well, now there are complex changes referring to new/changed fields in the database, but nobody who is in the office the next day actually has that new schema, so now the entire team is down while somebody looks into reproducing the work I already did and just forgot to check in.
And yes, I used a particularly nasty example with db changes but it could be anything, really. Perhaps a partial check-in with some emailing code that then causes all of your devs to spam your actual end-users? You name it...
So in my opinion, avoiding a single one of these situations will make the ROI of such an endeavor pay off VERY quickly.
If you're talking to a standard program manager, they may find continuous integration to be a little hard to understand in terms of simple ROI: it's not immediately obvious what physical product that they'll get in exchange for a given dollar investment.
Here's how I've learned to explain it: "Continuous Integration eliminates whole classes of risk from your project."
Risk management is a real problem for program managers that is outside the normal ken of software engineering types who spend more time writing code than worrying about how the dollars get spent. Part of working with these sorts of people effectively is learning to express what we know to be a good thing in terms that they can understand.
Here are some of the risks that I trot out in conversations like these. Note, with sensible program managers, I've already won the argument after the first point:
Integration risk: in a continuous integration-based build system, integration issues like "he forgot to check in a file before he went home for a long weekend" are much less likely to cause an entire development team to lose a whole Friday's worth of work. Savings to the project avoiding one such incident = number of people on the team (minus one due to the villain who forgot to check in) * 8 hours per work day * hourly rate per engineer. Around here, that amounts to thousands of dollars that won't be charged to the project. ROI Win!
Risk of regression: with a unit test / automatic test suite that runs after every build, you reduce the risk that a change to the code breaks something that used to work. This is much more vague and less assured. However, you are at least providing a framework wherein some of the most boring and time-consuming (i.e., expensive) human testing is replaced with automation.
Technology risk: continuous integration also gives you an opportunity to try new technology components. For example, we recently found that Java 1.6 update 18 was crashing in the garbage collection algorithm during a deployment to a remote site. Due to continuous integration, we had high confidence that backing down to update 17 had a high likelihood of working where update 18 did not. This sort of thing is much harder to phrase in terms of a cash value but you can still use the risk argument: certain failure of the project = bad. Graceful downgrade = much better.
CI assists with issue discovery. Measure the amount of time currently that it takes to discover broken builds or major bugs in the code. Multiply that by the cost to the company for each developer using that code during that time frame. Multiply that by the number of times breakages occur during the year.
There's your number.
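As a worked example of that formula, here is a back-of-the-envelope sketch; every input is a placeholder you would replace with your own figures.

```python
# Back-of-the-envelope CI ROI estimate following the formula above.
# All inputs are placeholder assumptions -- substitute your own numbers.

hours_to_discover_breakage = 4      # how long a broken build goes unnoticed today
developers_affected = 8             # people blocked or slowed while it is broken
hourly_rate = 75                    # fully loaded cost per developer-hour
breakages_per_year = 25             # how often this happens per year

annual_cost_of_late_discovery = (hours_to_discover_breakage
                                 * developers_affected
                                 * hourly_rate
                                 * breakages_per_year)

# A CI server that flags the break within minutes removes most of that cost.
ci_server_and_maintenance = 5_000   # rough yearly cost of the build box and its care

print(f"Estimated annual cost of late discovery: ${annual_cost_of_late_discovery:,}")
print(f"Estimated annual saving after CI costs: ${annual_cost_of_late_discovery - ci_server_and_maintenance:,}")
```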
Every successful build is a release candidate - so you can deliver updates and bug fixes much faster.
If part of your build process generates an installer, this allows a fast deployment cycle as well.
From Wikipedia:
when unit tests fail or a bug emerges, developers might revert the codebase back to a bug-free state, without wasting time debugging
developers detect and fix integration problems continuously - avoiding last-minute chaos at release dates (when everyone tries to check in their slightly incompatible versions)
early warning of broken/incompatible code
early warning of conflicting changes
immediate unit testing of all changes
constant availability of a "current" build for testing, demo, or release purposes
immediate feedback to developers on the quality, functionality, or system-wide impact of code they are writing
frequent code check-in pushes developers to create modular, less complex code
metrics generated from automated testing and CI (such as metrics for code coverage, code complexity, and features complete) focus developers on developing functional, quality code, and help develop momentum in a team
well-developed test-suite required for best utility
We use CI (two builds a day) and it saves us a lot of time keeping working code available for test and release.
From a developer's point of view, CI can be intimidating when the automatic build result, sent by email to the whole world (developers, project managers, etc.), says:
"Error in loading DLL Build of 'XYZ.dll' failed." and you are Mr. XYZ and they know who you are :)!
Here's my example from my own experiences...
Our system has multiple platforms and configurations with over 70 engineers working on the same code base. We suffered from somewhere around 60% build success for the less commonly used configs and 85% for the most commonly used. There was a constant flood of e-mails on a daily basis about compile errors or other failures.
I did some rough calculations and estimated that we lost an average of an hour a day per programmer to bad builds, which totals nearly 10 man-days of work every day. That doesn't factor in the costs that occur in iteration time when programmers refuse to sync to the latest code because they don't know if it's stable, which costs us even more.
After deploying a rack of build servers managed by TeamCity, we now see an average success rate of 98% on all configs, the average compile error stays in the system for minutes rather than hours, and most of our engineers are now comfortable staying at the latest revision of the code.
In general I would say that a conservative estimate on our overall savings was around 6 man months of time over the last three months of the project compared with the three months prior to deploying CI. This argument has helped us secure resources to expand our build servers and focus more engineer time on additional automated testing.
Our biggest gain is from always having a nightly build for QA. Under our old system, each product, at least once a week, would find out at 2 AM that someone had checked in bad code. This meant there was no nightly build for QA to test with; the remedy was to send release engineering an email. They would diagnose the problem and contact a dev. Sometimes it took as long as noon before QA actually had something to work with. Now, in addition to having a good installer every single night, we actually install it on VMs of all the different supported configurations every night. So when QA comes in, they can start testing within a few minutes. Under the old way, QA came in, grabbed the installer, fired up all the VMs, installed it, and then started testing. We save QA probably 15 minutes per configuration to test on, per QA person.
There are free CI servers available, and free build tools like NAnt. You can implement it on your dev box to discover the benefits.
If you're using source control, and a bug-tracking system, I imagine that consistently being the first to report bugs (within minutes after every check-in) will be pretty compelling. Add to that the decrease in your own bug-rate, and you'll probably have a sale.
The ROI is really the ability to provide what the customer wants. This is of course very subjective, but when implemented with the involvement of the end customer, you will see that customers start appreciating what they are getting, and hence you tend to see fewer issues during User Acceptance.
Would it save cost? Maybe not.
Would the project fail during UAT? Definitely no.
Would the project be closed partway through? There is a high possibility of that when the customers find that it would not deliver the expected result.
Would you get real-time, real data about the project? Yes.
So it helps you fail faster, which helps mitigate risks earlier.

Implementing a 30 day time trial [closed]

Question for indie Mac developers out there:
How do I implement a 30-day time trial in a non-evil fashion? Putting a counter in the prefs is not an option, since wiping prefs once a month is not a problem for an average user. Putting the counter in a hidden file somewhere sounds a bit dodgy - as a user I hate when apps sprinkle my hard drive with random files. Any ideas?
This issue comes up repeatedly on the cocoa-dev mailing list and the consensus answer is always do the simplest thing possible. Determined hackers will break all but the most over-engineered solution. And they're unlikely to pay for the software anyways. Go for the 80/20 solution: the easy solution that gets 80% effect for 20% effort. In this case, putting something in ~/Library/Application Support/your.app.com/. You might name the file something innocent if you want to obfuscate things just a bit. Using the user defaults is easy too.
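As a rough sketch of that "simplest thing possible" approach, here is the logic in Python, used purely for illustration (a real Cocoa app would do the same with NSFileManager and NSDate); the bundle identifier and the deliberately boring file name are invented for the example.

```python
"""Sketch of the 'simplest thing possible' trial check."""
import os
import time

TRIAL_DAYS = 30
SUPPORT_DIR = os.path.expanduser("~/Library/Application Support/com.example.myapp")
MARKER = os.path.join(SUPPORT_DIR, "cache.db")   # deliberately innocent-looking name

def days_used():
    os.makedirs(SUPPORT_DIR, exist_ok=True)
    if not os.path.exists(MARKER):
        # First launch: record the start of the trial.
        with open(MARKER, "w") as f:
            f.write(str(time.time()))
    with open(MARKER) as f:
        started = float(f.read().strip())
    return (time.time() - started) / 86400

if __name__ == "__main__":
    remaining = TRIAL_DAYS - days_used()
    print("Trial expired" if remaining <= 0 else f"{remaining:.0f} trial days left")
```

A determined user can still delete the file, which is exactly the point of the answer: accept the 80/20 trade-off and don't over-engineer it.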
Whatever you do, don't use the MAC address or any other hardware ID. Users with a network home directory (e.g. in a shared lab setting) will hate you. Using hardware IDs is just evil.
If someone is in love with your program so much that they're willing to break your trial limits, let them. The free software costs you nothing and their good will (and maybe recommendation to others) is worth a lot.
Finally, write software that people want to use and price it for its value. If your price is a good value and people want to use it, most people will pay for it.
I would suggest implementing a few things which are less intrusive and may lead a normal user to either uninstall or buy at the end of the one-month period.
Use a special series of trial serial numbers which store the expiry date inside them. You can use encryption to embed the expiry date within the serial number.
Now create a configuration file which stores data in encrypted form and contains the serial number.
Additionally, implement these things in the config file:
Make a note of the time/date every time the user starts the application.
Note the duration of time the application was open.
By logging these timestamps you can defeat these workarounds:
If the user sets the computer date back, you will know that the app was already run on that day. Say the user ran the app on the 1st and 3rd of the month. Now, after 30 days, they set the date back to the 2nd of the month. From the config file you will know that the app already ran on the 1st and 3rd, so the user has tampered with the dates on the computer.
Let's say the user always starts your app after first setting the date to the 5th of the month. By logging your application's running time, you will see that the total hours for that day exceed 24, so the user is fooling around.
Ensure that your app doesn't run without the config file. Essentially, you ship the encrypted serial number in a file, or create the file when the serial number is entered. Since the serial number already contains the expiry date, the user can't reuse the serial number either.
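Here is a rough sketch of the clock-rollback check described above, in Python for clarity; the plain JSON log stands in for the encrypted config file this answer recommends, and the path and layout are made up.

```python
"""Sketch of the clock-rollback check: log every run, flag impossible dates."""
import json
import os
import time

LOG_PATH = os.path.expanduser("~/Library/Application Support/com.example.myapp/usage.json")

def record_launch_and_check(expiry_epoch):
    """Call at startup; returns a status string."""
    os.makedirs(os.path.dirname(LOG_PATH), exist_ok=True)
    log = []
    if os.path.exists(LOG_PATH):
        with open(LOG_PATH) as f:
            log = json.load(f)

    now = time.time()
    if log and now < max(log):
        # The clock is earlier than a run we have already seen.
        return "tampering suspected: system date was set back"
    if now > expiry_epoch:
        return "trial expired"

    log.append(now)
    with open(LOG_PATH, "w") as f:
        json.dump(log, f)
    return "trial active"
```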
I would not suggest the internet approach, because people get pissed off when an app tries to connect to a server every time it runs. Plus, users may get suspicious that you are trying to send their personal data to your servers.
One thing I would like to say: no matter how strong the anti-piracy technique you use, someone is bound to break it. You are not making your app for those guys; you are making your app for people who like your software and will buy it happily. So keep the anti-piracy measures within limits, and don't lose genuine customers by making your application too intrusive during the trial period. There is also the view that if your software is getting cracked, it is getting popular. Opinions may differ, and I don't want to digress on these issues.
Consider this. How many potential users of your software are out there, just itching to use it solidly for the next 30 days?
I suspect the far more normal case is this: users encounter a new software package that solves a problem they've had, on a site like lifehacker.com. The software gets downloaded, played with briefly, then put aside. Perhaps it's MP3-ripping software and they don't have any CDs to rip at that time. Or they're just busy that day, but they'll get around to reviewing that software 'soon'.
30 days pass. Probably more. Only then do they buy a CD, encounter some sort of 'problem' and remember: 'aha, there's that trial version I downloaded! Where did I put it again?'
It doesn't matter. Without ever being used, the 'trial' has timed out.
I can't count the number of software tools that have fallen into that bucket for me. The day a piece of software is recommended to me, the day I see a positive review on lifehacker, is NEVER the day I actually have a need - or even the time - to use / analyse the program I've downloaded and installed.
Having the software expire after 30 calendar days is bad because what if someone downloads it, runs it once, and then decides they'll evaluate it a month later? Next time they launch it, a month later, it'll say it's expired.
I'd go with having it limited to 14 launches, or something like 120 minutes of use.
As for implementation, a file (hidden or not) in the user's Preferences folder, with an obfuscated name, seems like the best way to go. The file isn't randomly placed on the hard drive, but the user can't easily figure out which file to delete.
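Here is one way the launches-or-minutes bookkeeping could look, again sketched in Python only to show the logic; the obfuscated file name and the limits are arbitrary examples, and a real app would also obfuscate or sign the file contents.

```python
"""Sketch of a launches-or-minutes trial limit."""
import json
import os
import time

PREFS = os.path.expanduser("~/Library/Preferences/com.example.helperagent.state")
MAX_LAUNCHES = 14
MAX_MINUTES = 120

def _load():
    if os.path.exists(PREFS):
        with open(PREFS) as f:
            return json.load(f)
    return {"launches": 0, "minutes": 0.0}

def _save(state):
    os.makedirs(os.path.dirname(PREFS), exist_ok=True)
    with open(PREFS, "w") as f:
        json.dump(state, f)

def start_session():
    """Call once at launch; returns True if the trial is still valid."""
    state = _load()
    state["launches"] += 1
    _save(state)
    return state["launches"] <= MAX_LAUNCHES and state["minutes"] < MAX_MINUTES

def end_session(started_at):
    """Call on quit to accumulate minutes of use."""
    state = _load()
    state["minutes"] += (time.time() - started_at) / 60
    _save(state)
```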
The least evil way is to just ask the user to delete the program after one month or pay for it ;)
We did this for one of our client applications. Granted, it was done in .NET for Windows, but the same principles can be applied on the Mac.
Like eckesickle mentioned, if your users have access to the internet (or should have), then you can have a web service that registers some unique ID from the host computer along with the trial start date (the MAC address is a good one). With this, the user cannot really cheat the program unless he changes his network card every month.
Now, if the user doesn't have access to the internet for some reason, you can either disable the program until he connects, or use a grace period backed by a local file. This file records the last time the app was opened. When the internet is not accessible, we stop writing the time (we still write something to the file so the user doesn't notice that it is not being updated).
Should a user notice that this file contains the information and delete it (or change it using a copy he kept), then you need a way to counter that. You can have some other value in another config file (always encrypted) and check for consistency. What you do if you discover that the user is trying to cheat is up to you, but we force the user to connect to the internet for it to work.
It might be overkill for a program, but it definitely works.
At the time of download, provide them with a trial serial number. When they enter the serial number, have the app connect to your server and get expiration information (stored and encrypted locally to prevent any additional "phone home" calls).
By doing it this way, you make it fairly hard for them to get around your 30-day window, as the expiration date is permanently stored on the server. You could set it up so deleting the key and re-entering it would cause your application to connect to your server again and download the same expiration date as before.
Or you can do it like WinZip does (or used to do it?): Provide a 30-day trial and just pop-up a screen at every load that shows how long you've been using it and links to purchase.
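A sketch of the first option (phone home once, then reuse a local cache), in Python for illustration; the activation URL, response format, and cache path are all hypothetical, and a real service would sign its response so the cached expiry cannot simply be edited.

```python
"""Sketch of 'activate once against a server, cache the expiry locally'."""
import json
import os
import time
import urllib.request

CACHE = os.path.expanduser("~/Library/Application Support/com.example.myapp/license.json")
ACTIVATION_URL = "https://licensing.example.com/activate"   # hypothetical endpoint

def activate(serial):
    """Fetch the expiry for this serial once, then reuse the local cache."""
    if os.path.exists(CACHE):
        with open(CACHE) as f:
            return json.load(f)["expires_at"]

    req = urllib.request.Request(
        ACTIVATION_URL,
        data=json.dumps({"serial": serial}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # Epoch seconds, per our made-up API contract.
        expires_at = json.load(resp)["expires_at"]

    os.makedirs(os.path.dirname(CACHE), exist_ok=True)
    with open(CACHE, "w") as f:
        json.dump({"serial": serial, "expires_at": expires_at}, f)
    return expires_at

def trial_active(serial):
    return time.time() < activate(serial)
```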
I used to offer a 30-day lite edition of my iOS app that embedded the install date and various record dates in the export data file that the user could download to his/her computer.
If the user was a cheapskate and just reinstalled the lite edition and tried to re-import the data, logic would notice that at least one of the dates was older than 30 days, and the app would set its install date to the earliest such date from the file, rendering it expired again.
In the full paid edition, this logic didn't exist and the data file could be imported easily.
It was a pain supporting people in this data migration (since apps are completely sandboxed from one another) and some other users felt the lite edition was enough for them so they never upgraded.
I've since stopped offering my lite edition and just reduced the price of the full edition. Now potential customers just have to pay a small amount or go find some competing software.
All in all, that was the best strategy for getting paying users.
Read a UUID from some hardware component and, upon program launch, check it against your web service to see if your software has already been installed for 30 days?

How to effectively measure developer's work hours? [closed]

I have a few software developers working on my projects and I would like to provide them with a way to register the time they spend on real development.
There is goodwill to register development hours (no forcing), but we try to avoid techniques like registering in Excel sheets because that is so uncomfortable.
I can track svn commits, but this is unreliable. Developers also help support different projects during the day, so assuming they work on one project for a whole day is not accurate.
I've seen utilities that pop up a message every hour to confirm the project you're working on, but this is annoying.
Some kind of active-window-title analyzer might help (you can get the solution name from there in the case of Visual Studio), but I have no experience with such an idea.
If you have any experience with registering programmers'/designers' work hours, please share it with me.
Thanks
It's a good question, and the very best way to measure hours spent on a development project is not to measure hours spent at all.
You say there is goodwill to register hours, but I have my doubts. Realistically, from a developer's point of view, excessive time management is a distraction for most (perhaps ALL) developers on the planet.
I can understand why time is measured so extensively on oDesk. There is a good reason for this: project time is paid up front by the client to oDesk, and the developer needs to prove the hours worked to oDesk. Payment is also guaranteed, and it's unlikely oDesk providers and developers ever meet in real life, so there is no trust.
Since it's unlikely you're paying your developers up front, and it's more likely you have better trust established, you should perhaps switch your attention from a management strategy that's kludgy and annoying to something far more useful.
Yes, ladies and gentlemen, I am talking about Scrum. Throw any notion of keeping hour-by-hour tabs on your developers out the window (they'll love you for it), and instead introduce Scrum management into the scenario.
Create some sprints (milestones) for your product development, and within them have a list of iterations (batched deliverables); try to keep your iterations down to a weekly period.
Create a product backlog, and make sure you're aware of who exactly is working on what. Find someone on the team to act as a scrum master on your behalf, or take on this responsibility yourself. Make sure you have daily feedback meetings; keep them short and focused, to identify any risks that might get in the way of deliverables. Let the developers more or less drive the timing process, and get realistic estimates for tasks.
Read a book or two on Scrum, and get others and team members involved in the learning curve. Tweak the base Scrum methodology to best suit your particular style of management, and I assure you, you will have a very happy team.
Measure your time in man-days, and try to avoid getting on the back of a developer for hour-by-hour progress...
There are various time-tracking software tools, as you have probably already seen from doing Google searches. But in all honesty you are asking for the Holy Grail of time tracking. For example, do you consider a developer staring at her code and thinking to be development time? She might stare at the screen for 1 hour and only type for 10 minutes. In this case it looks like she didn't work much, when really she worked for 1 hour and 10 minutes. I don't mean to say that what you're asking for isn't a valid thing to want; it just seems to be one of those problems that doesn't have a perfect solution.
Good luck.
I think you're asking the wrong question and are headed for a slippery slope. There's so much that goes into development that has nothing to do with actually coding.
I think a better solution if you're deadset on tracking something is to track the time spent on activities NOT related to development. Of course there's some grey area there too. For example, a meeting to discuss user requirements should probably be counted towards development even though no coding will get done.
You need something like this dashboard to measure time on task. The only way to know the real software development time is to have the developers track it themselves. That way, when they switch tasks they stop the clock, so to speak. I think the hardest thing would be getting the developers to use it as a measure of how long they work on a certain project or even code module, etc. If you can then use those metrics to reduce distractions and other time sucks, you might at least be able to get a decent swag at how much time they spend coding versus email versus talking to other developers, etc.
If you're trying to measure what the developer has as their active window, you have to assume goodwill, because any decent developer can sneak around that if you're trying to turn the screws on them. I spend about a third of my "development time" in Firefox looking at references, for example.
Maybe ask the developers just to keep a log, so you know where their time is going? While that's not ideal, you're never going to do much better than that.
If you are trying to measure the time spent on distractions, disturbances, and other tasks, would it not be in your developers' interest to give you this information willingly?
You said somewhere that you are implementing scrum.
If you really have to, either take it up at the daily scrum, making it a part of the ritual, or add a very short daily meeting at the end of the day. Let the developers guesstimate how much of the day was spent on distractions, disturbances, and other tasks. To me that feels like it will be as close to "correct" as any other way of measuring, considering the difficulties involved.
So, instead of having the developers note down the time, make it the scrum master's job to sum it up, and make this as painless as possible for the devs. Make sure that the developers gain something tangible from doing it as well; otherwise it is going to end up on the impediment list awfully quickly.
As Dean J implied, you have to trust the developers anyway.
It depends on your IDE. If you are using Eclipse, then I recommend the Mylyn plugin. You can measure the time spent on each task. Tasks can be fetched from many well-known task repositories, e.g. Tuleap. Details here.
The user only needs to put the task in Active mode, and deactivate it when the task is finished, to stop the timer. I think Mylyn will support such a process: if the status of a task changes, this would trigger active mode (if it is closed, the task is deactivated).
Sometimes development involves using a browser or a terminal. Eclipse can be used as a browser and as a terminal as well, so the developer does not need to leave Eclipse, and almost every activity related to a task can be measured.
Try DotProject or XPlanner
An active window analyzer won't give you reliable results, because your developers will swap between programs (Outlook, File Explorer, version control, internet browser, etc.). Your proposed analyzer won't log that time, although it will very likely be part of the development time that the developer puts into the project (software development is not just coding in VS all the time).
Trying to measure a developer's work hours is the wrong notion. A proper question to ask is what is a programmer's effectiveness. That can't be measured by coding time, time spent sitting in front of a computer, or the like.
As Joel Spolsky put it well in a blog on software craftsmanship, software development "...is not a manufacturing process."
A related but somewhat different discussion appears in this SO article on an invasive programmer productivity measurement tool.
I definitely would not recommend using any time-measuring software that developers are forced to use. It is a massive distraction to developers' concentration.
Instead, the following simple techniques can be used:
Spreadsheet: For smaller projects or team of developers
There's probably nothing easier than to create and share a spreadsheet online, add project tasks to it, assign tasks to a developer, let the developers estimate hours for their tasks, let them update their task status(es) (a very rough value between 0% and 100%) as they go, and let them record the time (hours) it really took to complete each task.
So in the spreadsheet you may have columns:
task name, assigned to, estimated hours, real hours, done (%)
A Google Drive spreadsheet may be the answer.
This is a very simple and fast method which distracts developer(s) minimally.
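For concreteness, here is the suggested layout as a tiny CSV sketch; the sample rows are invented, and in practice this would live in the shared spreadsheet rather than a script.

```python
"""Tiny sketch of the suggested spreadsheet layout, written as a CSV."""
import csv

COLUMNS = ["task name", "assigned to", "estimated hours", "real hours", "done (%)"]
ROWS = [
    # Invented sample data for illustration only.
    ["Login form validation", "Alice", 6, 8, 100],
    ["Export report to PDF", "Bob", 10, 4, 40],
]

with open("timesheet.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(COLUMNS)
    writer.writerows(ROWS)

# Quick sanity check: compare estimates with hours logged so far.
estimated = sum(row[2] for row in ROWS)
actual = sum(row[3] for row in ROWS)
print(f"Estimated {estimated}h, logged {actual}h so far")
```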
Scrum: For medium to large projects or team of developers
Tasks and the Scrum information are recorded and kept on a board in the office, and/or a dedicated Scrum application can be used. A good web-based Scrum application is Pivotal Tracker, which I would recommend for any size of project or team.
More about Scrum:
http://en.wikipedia.org/wiki/Scrum_(development)
In both cases, the product owners (clients or those who deal with clients), project managers, and all developers can more quickly and effectively:
communicate
estimate
see progress of the team and all individuals involved
