As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, visit the help center for guidance.
Closed 10 years ago.
Do you use a tool, or do you just make them manually?
The Google Charts API/server can make one fairly easily.
You specify everything in the URL so it's easy to update:
http://chart.apis.google.com/chart?
chs=600x250& // the size of the chart
chtt=Burndown& // Title
cht=lc& // The chart type - "lc" means a line chart that only needs Y values
chdl=estimated|actual& // The two legends
chco=FF0000,00FF00& // The colours in hex of the two lines
chxr=0,0,30,2|1,0,40,2& // The data range for the x,y (index,min,max,interval)
chds=0,40 // The min and max values for the data. i.e. amount of features
chd=t:40,36,32,28,24,20,16,12,8,4,0|40,39,38,37,36,35,30,25,23,21,18,14,12,9,1 // Data
The URL above plots in intervals of 2 - so work every 2 days. You'll need a bigger size chart for every day. To do this, make the data have 30 values for estimated and actual, and change "chxr" so the interval is 1, not 2.
You can plot only the days done more clearly with the "lxy" chart type (the first image). This requires you to enter the X data values too (as a vector); use -1 for unknown.
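As a small sketch, the URL above can also be assembled programmatically. Note that Google has since retired the Image Charts API, so this is purely illustrative; the parameters mirror the ones explained above.

```python
# Sketch: assemble the burndown chart URL from raw data. The parameter
# meanings follow the annotated URL above; the chart service itself has
# since been retired by Google.
from urllib.parse import urlencode

def burndown_chart_url(estimated, actual, max_work):
    params = {
        "chs": "600x250",                 # chart size
        "chtt": "Burndown",               # title
        "cht": "lc",                      # line chart, Y values only
        "chdl": "estimated|actual",       # legends
        "chco": "FF0000,00FF00",          # line colours
        # axis ranges: index,min,max,interval for x then y
        "chxr": f"0,0,{len(estimated) - 1},2|1,0,{max_work},2",
        "chds": f"0,{max_work}",          # data scaling min/max
        "chd": "t:" + ",".join(map(str, estimated))
               + "|" + ",".join(map(str, actual)),
    }
    return "http://chart.apis.google.com/chart?" + urlencode(params)
```

Rebuilding the URL each morning from the current numbers keeps the chart trivially up to date.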
We tend to just use a simple shared Excel sheet with a graph on one tab and a pivot table on another.
I have not used it myself, but http://apps.vanpuffelen.net/charts/burndown.jsp presents an API that is even simpler than the Google Charts API. Example:
http://apps.vanpuffelen.net/charts/burndown.jsp?days=17,18,19,22,23,24,25,26,29,30&work=125,112,104,99,95
gives the following graph:
I did use TargetProcess but I now prefer a more tactile method so I draw it manually on a whiteboard.
VersionOne generates the burndown charts nicely.
As answered in this post: What tools provide burndown charts to Bugzilla or Mylyn?
www.in-sight.io (previously burndowncharts) is a great SCRUM analytics tool.
Here are some dashboard examples:
Sprint metrics
Team metrics
We use something locally based on http://opentcdb.org/ that does scrum tracking and draws pretty graphs.
We use the community edition of RallyDev and it makes nice burndown charts. The problem is that our team has not yet been able to do a solid job of entering in data to keep the burndown information meaningful.
We also use the community edition of RallyDev, which has nice charts. I find it to be an excellent tool once you work out which bits of it you really want to use. There is a huge number of fields and functionality that most people wouldn't use, which could be confusing for bigger teams.
We used to use the tools at rallydev.com, but in time we found the tool to simply be too cumbersome for what we wanted.
In time, I moved to just a simple Excel spreadsheet. Every morning before stand up I counted the hours remaining and added the trend line next to an "ideal" burndown line on the chart. I posted it on the wall where we held our morning standups.
We use Team Foundation Server with Conchango's Scrum templates, using the built-in burndown through this nice little scrum dashboard:
http://www.codeplex.com/scrumdashboard
I used Google Docs and Excel. Templates for both are available at the bottom of this article (and they include a number of nice features, like automatic calculation of the efficiency factor):
Burn Down Chart Tutorial: Simple Agile Project Tracking
I'm working in a team that has been working in an agile approach consistently and fairly successfully, and this has worked great for the current project so far, as we incrementally build the product.
We're now moving into the next phase of this, though, and management is keen for us to set some specific deadlines ourselves, on the order of months, for when we'll be in a position to demo and sell this to real customers.
We have a fairly well organised large backlog for each of the elements of functionality we'd like to include, and a good sense of the prioritisation of these individual bits of functionality.
The naive solution is to get the minimum list of stories that would provide a demo-able product, estimate all of those individually, and add them up and combine with our velocity to get a date, and announce we'll be demoing from then. That leaves no leeway though, and seems likely to result in a mad crunch as we get up to deadline time, which I desperately want to avoid.
As an improvement, I'd like to add in some ratio of more optional stories to act as either contingency or bonus improvements, depending on how we progress, but we don't have any idea what ratio would be sensible, or whether this is the standard approach.
I'm also concerned by having to estimate the whole of our backlog all in one go up-front, as that seems very time consuming, and it seems likely that we'll discover more information in the months before we get to that story, which will affect our estimates.
Are there recommended approaches to dealing with setting deadlines to allow for an agile development process? Most of the information I've seen seems to be around handling the situation once you've got a fixed deadline to hit instead. I'd also be interested in any relevant literature or interesting blog posts that cover this issue.
Regarding literature: the best book I know on estimation in software is "Software Estimation: Demystifying the Black Art" by Steve McConnell. It covers your case. Plus, it describes the difference between an estimation and a commitment (a set deadline, in other words) and explains how to derive the second from the first reliably.
The naive solution is to get the minimum list of stories that would provide a demo-able product, estimate all of those individually, and add them up and combine with our velocity to get a date, and announce we'll be demoing from then. That leaves no leeway though, and seems likely to result in a mad crunch as we get up to deadline time, which I desperately want to avoid.
This is the solution I have used in the past. Your initial estimate is going to be off a bit so add some slack via a couple of additional sprints before setting your release date. If you get behind you can make it up in the slack. If not, your product backlog gives you additional features that you can include in the release if you so choose. This will be dependent on your velocity metric for the team though. Adjust your slack based on how accurate you feel this metric is for the current team. Once you have a target release you can circle back to see if you have any known resource constraints that might affect that release.
The approach you describe is likely to be correct. You may want to estimate for all desirable features, and prioritise UI elements (because investors and customers basically just see the shiny UI), and then your deadline will be that estimated date for completion; then add on some slack in the form of scaling your estimates. Use the ratio between current productivity and your worst period to create a pessimistic estimate. You can use that same ratio to scale shorter estimates (e.g. for your estimate to the minimum feature set).
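The velocity-plus-slack arithmetic described in the answers above can be sketched as follows. All the numbers here (sprint length, velocity, slack count) are made-up illustration values, not recommendations:

```python
# Sketch of the "estimate + slack sprints" release-date arithmetic.
# Inputs are illustrative assumptions: adjust slack_sprints based on how
# much you trust your velocity metric.
import math
from datetime import date, timedelta

def release_date(start, backlog_points, velocity, sprint_days, slack_sprints=2):
    """Sprints needed to burn the backlog at the given velocity, plus slack."""
    sprints = math.ceil(backlog_points / velocity) + slack_sprints
    return start + timedelta(days=sprints * sprint_days)
```

For example, a 100-point minimum feature set at 20 points per two-week sprint, with two slack sprints, puts the demo date seven sprints out. If you catch up, the slack sprints become bonus stories from the backlog instead.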
I hope this does not get closed, because it is related to algorithms that I have not been able to figure out (it's also pretty long because I'm so confused about how it's being done). Basically, many years back I used to work at a mutual fund, and we used different tools to select optimized portfolios as well as hedge existing ones. We would take these results, make our own modifications, and then sell them to clients. After my company downsized, I decided I wanted to give it a try (to create the software and include my customizations), but I have no clue how the combinations are actually generated for the software.
After 6 months of trying, I'm accepting that my approach is impossible. I was trying to use combination algorithms like the ones from Knuth's book, as well as doing bit combinations, to try to find every possible portfolio (I limited it to 30 stocks) on the NYSE (5,000+ stocks). But as per everyone I have spoken to, this will take me billions of billions of years to get just one day's results (for me, on a GPU, I stopped it after 2 days of straight processing).
So what am I missing? We would enter our risk tolerance and view of the market (stock market growth expectations, inflation expectations, fed funds expectations, etc.) and it would give us the ideal portfolio (in theory...) within a few seconds or minutes. With thousands of stocks and quadrillions of possible combinations of weights, how are they able to calculate results so quickly (or even at all)?
As the admin of the system, I know we downloaded a file every day (less than 100 MB, loaded into an MSSQL database, probably just market data; so it's not like we had every possibility precomputed. Using my approach above I would get a 5 GB file in a minute of running my version of Knuth's combination algorithm). The applications worked offline, so it must have been doing the work locally on the desktop/laptop CPU, not on a massive supercomputer somewhere, and it took a minute or two to run (15 minutes was the longest, for a global fund which includes every stock in the world). It's confusing because their work required correlation across the entire fund (I don't think they were just sending pre-calculated top stocks, because everyone got different results).
So if I wanted a 30-stock fund that gave me 2% returns, had a negative correlation with the market, and was 60% hedged, how could the software generate that portfolio out of billions of possibilities so quickly? Note, I'm not asking about the math or the finance part; I'm asking how it was able to generate 30 stocks from the entire market that gave 2% returns, when in order to do that it would need to know the returns of every 30-stock portfolio (that alone would make it run for billions of years, right? The other restrictions make it even more complex).
So how is this being done programmatically? I'm starting to believe they are not using Knuth's combination algorithm to generate every possibility, yet their results don't seem randomly selected, and selecting the stocks individually seems to miss the correlation part. How can so many investment software packages do things like this?
Such algorithms almost certainly don't generate every possibility - as you rightly observe, that would be impractical.
Portfolio selection is however very easy to do with other techniques that will give you a very good answer. The two most likely are:
If you make simplifying assumptions around risk / return you can solve for an optimal portfolio mathematically (see http://en.wikipedia.org/wiki/Capital_asset_pricing_model for some of the maths)
A genetic algorithm which does mutation / crossover operations on randomised sample portfolios will find a very good solution pretty fast. You can combine this with Monte-Carlo style modelling approaches to understand the range of possible outcomes.
Personally, I'd probably suggest the genetic algorithm approach - although it's not as mathematically pure, it will give you good answers and should be able to handle any constraints you want to throw at it quite easily (e.g. a max number of stocks in a portfolio).
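A minimal sketch of the genetic-algorithm idea. It assumes expected returns per ticker as the only input and uses plain average expected return as a stand-in fitness function; a real objective would be risk-adjusted and would use the covariance structure:

```python
# Hedged sketch: evolutionary search over k-stock portfolios.
# `returns` maps ticker -> expected return; fitness is average expected
# return, standing in for a real risk-adjusted objective.
import random

def evolve_portfolio(returns, k, generations=200, pop_size=30, seed=0):
    rng = random.Random(seed)
    tickers = list(returns)

    def fitness(portfolio):
        return sum(returns[t] for t in portfolio) / k

    # start from random k-stock portfolios
    pop = [rng.sample(tickers, k) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            pool = list(dict.fromkeys(a + b))     # crossover: merge parents
            child = rng.sample(pool, k)
            if rng.random() < 0.3:                # mutation: swap in a new stock
                outside = [t for t in tickers if t not in child]
                if outside:
                    child[rng.randrange(k)] = rng.choice(outside)
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)
```

This never enumerates the combination space; it only ever holds a small population of candidate portfolios, which is why runtimes stay in seconds rather than years.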
Modern Portfolio theory is a subject in its own right, with books such as "Modern Portfolio Theory and Investment Analysis", and an introduction at http://en.wikipedia.org/wiki/Modern_portfolio_theory.
One way to get problems you can actually solve is to treat it as a mathematical optimization problem. If you have a vector which gives you the amount of each stock you buy, then - under various assumptions - the return is a linear function of this vector, and the risk is a quadratic function of this vector. Maximising the return for given risk, or minimising the risk for given return, is a well-understood mathematical problem, even for very large numbers of stocks - http://en.wikipedia.org/wiki/Quadratic_programming.
One practical problem with this is that the answer you get will probably tell you buy some fraction of almost all the stocks on the market. My guess is that real life programs use some "secret sauce" heuristic that doesn't guarantee the perfect answer, subject to a constraint on the number of stocks you are actually prepared to buy, but works pretty well in practice. Returning the perfect answer appears to be a hard problem - see e.g. http://arxiv.org/ftp/arxiv/papers/1105/1105.3594.pdf
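As an illustration of the quadratic-programming view: with no constraints at all, the minimum-variance portfolio even has a closed form, w = C⁻¹1 / (1ᵀC⁻¹1). A sketch with NumPy (real solvers layer on constraints such as no short selling or a cap on the number of holdings, which is where the hard part lives):

```python
# Illustration: unconstrained minimum-variance portfolio weights from a
# covariance matrix, w = C^-1 1 / (1' C^-1 1). Constrained versions need
# a proper quadratic-programming solver.
import numpy as np

def min_variance_weights(cov):
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)   # C^-1 1
    return w / w.sum()               # normalise so weights sum to 1
```

Note how this runs in polynomial time in the number of stocks, with no enumeration of portfolios at all.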
For a project we are using the Telerik RadChart control to display a graph on a website. At the moment the X-axis follows a normal scale but we would like to change that to a logarithmic scale. As far as we can tell the control does not allow that.
Does anyone know of an alternative that would support this?
TIA,
David
Example of what we would like to achieve.
"Unfortunately the current version of RadChart does not support logarithmic X-Axis. We have already logged such a request in our public issue tracking system, you can find it here, however, taking a decision for implementing a certain feature depends on multiple conditions including complexity, existing features impact, demand rate, etc. It is not yet in our immediate plans, still, I would encourage you to use the above link to vote and track the issue."
Regards,
Nikolay
the Telerik team
Taken from here, as it was posted this month.
I was working on a project for fun, but it eventually led me to a difficult, unrelated problem that I can't solve. So this is a two-part question:
Are there any formal methodologies for solving difficult algorithmic problems (how to rule out different ideas, etc) that can be applied to this problem?
Any suggestions on a solution for this problem?
Most algorithms that I run into in daily life usually seem straightforward and just involve problem breakdown/simplification, and can almost always be solved with brute force in the worst-case scenario. This problem is different.
A.
- Acme company manufactures all different kinds of components (everything from basic circuit boards to complete computers)
- Each component is defined to be made of parts or other components
- Acme receives parts from suppliers at irregular intervals
- Each component has a priority (maybe it is more important to build motherboards than monitors)
- Priorities can change based on constraints (such as if there are less than 3 monitors in stock, wait for parts to build monitors)
Q: What should Acme build?
I think a possible way to solve this would be to have a tree of all the different components organized such that the highest priority items are at the top. Traverse each node based on priority and at each node, check if we need to wait for parts for this component or if we can build the next lower priority item. I could probably even move components up and down the priority tree based on what has been built. Fiddling around with that idea, I think I could get something to work. There could be many valid solutions to the problem but I only need to return one valid set.
However, if I want to make the problem a bit more real world, I have additional features
B.
Each component requires parts + tools + human resources to build.
I think this can be solved similarly to the first case with tools and human resources just being consumable parts. But now it gets hard
C.
- Each part, tool, or human resource has some sort of quality associated with it
- Lower quality takes more time to produce
- Time is defined based on some relation between parts, tools, and human resources... so if you have good human resources and good tools, time will be less than if you have bad resources and bad tools
- There may be a limited number of tools/people, so they need to be scheduled
The problem can be adjusted to "what should we build over the next month", or something similar.
How would seasoned algorithm developers go about solving this?
It sounds like each item has an absolute cost associated with it, which includes costs of other items, resources and priority. Being able to calculate this cost of each item and choosing the item with the lowest or highest cost (depending on how you look at it) would yield the building order for items. However, I'm afraid that a formula to calculate such a "master" cost varies from case to case and only knowing the full specifications would help in building the formula. Even so, you might need to adjust it on the fly to optimize efficiency.
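A minimal sketch of the priority-driven selection for part A: walk the components in priority order and build whatever the current inventory allows, consuming parts as you go. The data shapes here are made up for illustration, and tools, people, and quality from parts B and C are ignored:

```python
# Greedy sketch for part A: build in priority order, consuming inventory.
# components: list of (name, priority, parts_needed) tuples, where
# parts_needed maps part name -> quantity required; higher priority first.
def choose_builds(components, inventory):
    plan = []
    stock = dict(inventory)
    for name, _priority, needs in sorted(components, key=lambda c: -c[1]):
        if all(stock.get(part, 0) >= qty for part, qty in needs.items()):
            for part, qty in needs.items():
                stock[part] -= qty
            plan.append(name)
    return plan
```

Part B then amounts to adding tools and people to the `needs` dictionaries as consumable entries, which is exactly the reduction the question proposes; the scheduling constraints in part C are where a greedy walk stops being sufficient.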
Like, let's say I had a tree structure; then I would naturally use a tree control, since that GUI element maps perfectly to the structure.
But what I have is a graph, potentially too wide to fit in one web page. I can't think of examples of GUIs that really match the structure. Some ideas I have that don't quite fit are, the web itself, with hyperlinks, the browser back button, and the forward button. But that just shows you one node at a time. I would like to display as many nodes as I can, and allow navigation to a new area of the graph. Something like Google maps might be a good model, in that you have full freedom to scroll in any direction.
Facebook used to do this, way back in the day, to visualize your friends. They drew nodes as little boxes, with lines connecting them, as expected. They drew the graphs into an SVG image, so you could easily zoom in and out.
Another option might be to draw into a <canvas> tag and scale that somehow. I imagine that's possible, but I don't know much about <canvas>.
Another option would be to draw it into an inline frame or other box that allows the user to scroll horizontally and vertically.
Basically, the best things I've seen for this sort of thing have been either Flash or Java applets that let you drag the nodes around, and auto-stretch, move, and expand based on tension values on the edges.
Brief googlage exposes this. I tried the Java application version, seems to work on a basic level, but perhaps overkill for your needs. :) Check out the AJAX version, maybe?
Perhaps check out ways to drag with jQuery.
For a nice dynamic example of a huge graph displayed by small regions, you can take a look at: Visuwords. They present themselves as a graphical dictionary which shows multiple links between words.
I don't think there is an existing widget which will fit your need like a tree control would for a tree structure.
That said, I definitely recommend one of the yWorks products.
Get a feel for what it can do by trying out yEd, then check the documentation, which seems very good. They also offer an AJAX and a Flex frontend for their library, which will give you a web-compatible representation, including Google-Maps-like drag and drop (see the graph viewer demo: http://live.yworks.com/yfiles-ajax/Examples/Graph_Viewer.html).
I don't work for them, I am not even a paying customer, but I was very impressed by yEd.
You could try prefuse. It is an open source interactive data visualization toolkit. It has a very nice collection of layouts that can be used for displaying node graphs along with other visualization effects. The toolkit comes in two different flavours, one in Java and another in Actionscript, which can be used in Flash/Flex.
GraphViz is the best I've seen. Works on the desktop and on the web, with many examples of both here.