Vowpal Wabbit: limits on the size of the action set for Contextual Bandits? - vowpalwabbit

For Vowpal Wabbit's contextual bandit framework, are there any limits on how large the action set can be? I'm assuming there is currently no support for problems with an infinite action set (e.g. an l2 ball in R^n). But are there any limits on how large a finite set of actions can be, or is that limited only by the hardware the library runs on?
The potential problems/concerns I can think of are floating-point errors (for example when predicting the PMF over the set of actions), slow predictions/updates, and specific exploration policies or policy-evaluation approaches not playing well with a large action space.
Edit: the number of actions I'm considering is in the range of 1,000-100,000.

I'm assuming that currently there is no support for problems with an infinity-sized action set
Correct, this isn't supported at the moment.
But are there any limits on how large a finite set of actions can be? Or is that limited only by the hardware the library runs on?
I don't believe there are concrete/artificial limits on action set size, so hardware is probably the limit. Internally, the action ID is a 32-bit number, so there is definitely a hard limit at 2^32. As for the other issues, if you run into anything like that, please feel free to open an issue and we can work with you to get it sorted out. It's definitely the kind of thing that should be fixed.
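For concreteness, here is a rough sketch of a large-but-finite action set driven through the Python bindings. The flag and label format follow the documented --cb_explore interface, but the constructor name has varied between releases (pyvw.vw vs. Workspace), so treat the details as assumptions to adapt to your installed version:

```python
# Rough sketch only: exercises --cb_explore with a large discrete action set.
# The constructor name differs across vowpalwabbit releases (pyvw.vw / Workspace).
from vowpalwabbit import pyvw

NUM_ACTIONS = 10_000  # far below the internal 32-bit action-ID ceiling

vw = pyvw.vw(f"--cb_explore {NUM_ACTIONS} --epsilon 0.05 --quiet")

# CB label format: chosen_action:cost:probability | context features
vw.learn("7:0.5:0.1 | x:0.3 y:0.7")

# With --cb_explore, predict() returns a PMF over all NUM_ACTIONS arms --
# this is where floating-point concerns about very large sets would surface.
pmf = vw.predict("| x:0.3 y:0.7")
print(len(pmf), sum(pmf))

vw.finish()
```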

Related

Which distributions can be used to produce starting times of jobs if there is no observation real state?

I need to generate data containing the starting times of each job (30 jobs in total). I have no way to obtain real data, so how can I generate data that resembles a realistic distribution? Which distribution would be a good choice here?
A common technique used in simulation models where you don't have any data yet (e.g., data is very expensive, or it's a prospective system that does not even exist yet so where would you get the data from?) is to use a triangular distribution parameterized by subject matter experts (or your own best guesses) about the smallest, largest, and most common value you might see.
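As a minimal sketch of that approach (the min/mode/max values below are stand-ins for whatever your subject-matter experts supply):

```python
# Generate 30 job start times from a triangular distribution whose
# parameters (smallest, most likely, largest) come from expert guesses, not data.
import numpy as np

rng = np.random.default_rng(42)

min_start, mode_start, max_start = 0.0, 15.0, 60.0  # e.g. minutes after shift start
start_times = rng.triangular(min_start, mode_start, max_start, size=30)

print(np.sort(start_times))
```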
A relatively new, but quite powerful extension to this would be to vary the parameter choices in a designed set of experiments to see how much it matters if your guesstimates are off. A well-designed experiment would allow you to assess and characterize how much your results change as a function of the parameter values.
A more comprehensive variant would be to incorporate the distribution choice itself (triangular vs. exponential vs. anything else you think is plausible) into the design, to see whether it makes much of a difference. In the happy event that it doesn't, you can freely use a simple and convenient choice such as the triangular. If it makes a big difference, you now know for certain that you should get your hands on real data ASAP, because without that data you are operating in garbage-in-garbage-out mode. This assumes that you control for, say, the first two moments as you switch between distributions, so that your experiments test the shape of the distribution rather than the effect of its mean and variance.
If designed experiments tell you it doesn't much matter, that's wonderful news. If it does matter, you now know more about the system than you did before and know where to focus your efforts going forward.
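To make the moment-matching point concrete, here is a small sketch of such a shape-sensitivity check (the toy statistic and all parameter values are illustrative assumptions): fit a gamma distribution to the triangle's mean and variance and compare what each shape does to your output of interest.

```python
# Sketch of a shape-sensitivity experiment: a triangular distribution vs. a
# gamma distribution matched on mean and variance, both feeding the same toy
# downstream statistic.  All numbers here are illustrative guesses.
import numpy as np

rng = np.random.default_rng(0)
t_min, t_mode, t_max = 0.0, 15.0, 60.0

# First two moments of the triangular distribution.
tri_mean = (t_min + t_mode + t_max) / 3.0
tri_var = (t_min**2 + t_mode**2 + t_max**2
           - t_min*t_mode - t_min*t_max - t_mode*t_max) / 18.0

# Gamma(shape=k, scale=theta) with the same mean and variance.
theta = tri_var / tri_mean
k = tri_mean**2 / tri_var

def toy_statistic(start_times):
    """Stand-in for the real simulation output, e.g. when the last job starts."""
    return start_times.max()

n_reps, n_jobs = 2000, 30
tri_out = [toy_statistic(rng.triangular(t_min, t_mode, t_max, n_jobs)) for _ in range(n_reps)]
gam_out = [toy_statistic(rng.gamma(k, theta, n_jobs)) for _ in range(n_reps)]

print("triangular:", round(float(np.mean(tri_out)), 1),
      "gamma:", round(float(np.mean(gam_out)), 1))
```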

Evaluating a specific Information retrieval system with P#1

I am working on an information retrieval system which aims to select the first result and link it to another database. Our system takes a keyword description of a video and tries to interlink the video with a DBpedia entity that has the same meaning as the description. In the evaluation step, I noticed that most evaluations set the minimum precision cut-off to 5, which is not suitable for our system. I am thinking of using the interval [1,5]: (P#1, ..., P#5). Would that be acceptable?
Please share your suggestions and any references. Thanks.
You can definitely calculate P#1 for a retrieval system, if you have truth labels. (In this case, it sounds like they would be [Video, DBPedia] matching pairs generated by humans).
People generally look at this measure for things like question answering or recommendation systems. The only caveat is that you typically wouldn't use it to train a learning-to-rank system or any other learning system -- it's not "continuous" enough: a near miss (best result at rank 2) and a total miss (best result at rank 4 million) get equivalent scores, so it can be hard to smoothly improve a system by tuning weights against it.
For those kinds of tasks, Mean Reciprocal Rank is pretty common if you need something tunable. NDCG tends to be okay too, since it has an exponential discounting factor.
But there's nothing in the definition of precision that prevents you from calculating it at rank 1. It may be more accurate to describe it as a "success#1" measure, since your only possible scores are 0/1 and 1/1.
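For concreteness, here is a small sketch of both P#1 (really a success measure) and MRR over hypothetical per-query results; the ranks below are made up purely for illustration:

```python
# Precision-at-1 (really "success@1") and Mean Reciprocal Rank over a set of
# queries, given the rank at which the first correct DBpedia entity appears.
# A rank of None means the correct entity was never retrieved.
def success_at_1(ranks):
    return sum(1 for r in ranks if r == 1) / len(ranks)

def mean_reciprocal_rank(ranks):
    return sum(1.0 / r for r in ranks if r is not None) / len(ranks)

# Hypothetical evaluation results for five videos.
first_correct_rank = [1, 3, 1, None, 2]

print("P#1:", success_at_1(first_correct_rank))           # 2 of 5 hit at rank 1
print("MRR:", mean_reciprocal_rank(first_correct_rank))   # (1 + 1/3 + 1 + 0 + 1/2) / 5
```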

Ramp up/down algorithm for user load testing

I'm working on a user load testing application for web servers and I'm trying to implement a feature for automatically ramping up the maximum number of "users" that a server can handle. I want to spawn test users until some threshold for the average response time and/or http request failure ratio is met, and then I want to kill/spawn users until a stable state just below the thresholds is found.
Essentially, I want to find the maximum stable number of concurrent users that still meets the requirements, as fast as possible.
I can of course figure out an algorithm for this myself, but I'm thinking that there might be existing ramp-up/ramp-down algorithms that I could use. If anyone has knowledge of this, I would love it if you could point me in the right direction!
Thanks!
This depends a lot on what's going on: whether the system degrades gradually or there is a discrete drop in performance (e.g. "healthy" -> "dead").
In the second case, there's no feedback to indicate whether you're approaching the boundary, so you will need to first find a point that exceeds the threshold and then bisect between that and the largest value that doesn't exceed it. You could speed this up with 2 (or more) separate servers: splitting in the middle is pretty much the fastest feasible approach with one server, but if you have 10 servers you could divide the remaining range into 10 steps at each iteration.
If you get some feedback, then you're looking for a method that incorporates it. You may find that the Nelder-Mead algorithm is suitable. It's fairly easy to implement, and you'll likely find existing implementations in any language of interest.
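In the no-feedback case, the jump-between-bounds idea is essentially a bisection over the user count. A minimal sketch, where passes_thresholds(n) is a hypothetical hook that spawns n users, lets the system settle, and reports whether the response-time and failure-ratio thresholds still hold:

```python
# Bisection over the concurrent-user count: grow exponentially until the
# thresholds are violated, then binary-search between the last good and
# first bad counts.  passes_thresholds() is a placeholder for your own
# measurement step (spawn users, let things stabilise, check metrics).
def find_max_stable_users(passes_thresholds, start=10, limit=1_000_000):
    lo, hi = 0, start
    # Phase 1: exponential ramp-up to bracket the breaking point.
    while hi <= limit and passes_thresholds(hi):
        lo, hi = hi, hi * 2
    if hi > limit:
        return limit  # never broke within the allowed range
    # Phase 2: binary search between the largest good and smallest bad counts.
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if passes_thresholds(mid):
            lo = mid
        else:
            hi = mid
    return lo

# Example with a fake system that degrades at 3500 concurrent users.
print(find_max_stable_users(lambda n: n < 3500))  # -> 3499
```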

Initial Genetic Programming Parameters

I did a little GP (note: very little) work in college and have been playing around with it recently. My question is in regards to the initial run settings (population size, number of generations, min/max depth of trees, min/max depth of initial trees, percentages to use for different reproduction operations, etc.). What is the normal practice for setting these parameters? What papers/sites do people use as a good guide?
You'll find that this depends very much on your problem domain - in particular the nature of the fitness function, your implementation DSL etc.
Some personal experience:
Large population sizes seem to work better when you have a noisy fitness function; I think this is because the growth of sub-groups in the population over successive generations acts to give more sampling of the fitness function. I typically use 100 for less noisy/deterministic functions, 1000+ for noisy ones.
For the number of generations, it is best to measure improvements in the fitness function and stop when it meets your target criteria. I normally run a few hundred generations and see what kind of answers are coming out; if it is showing no improvement then you probably have an issue elsewhere.
Tree depth requirements are really dependent on your DSL. I sometimes try to do an implementation without explicit limits but penalise or eliminate programs that run too long (which is probably what you really care about....). I've also found total node counts of ~1000 to be quite useful hard limits.
Percentages for different mutation/recombination operators don't seem to matter all that much. As long as you have a comprehensive set of mutations, any reasonably balanced distribution will usually work. I think the reason for this is that you are basically doing a search for favourable improvements, so the main objective is just to make sure the trial improvements are reasonably well distributed across all the possibilities.
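To make those rules of thumb concrete, here is the kind of starting configuration they suggest, written as a plain Python dictionary; the names and numbers are illustrative defaults to tune against your own problem, not settings from any particular GP library:

```python
# Illustrative GP starting parameters distilled from the advice above.
# None of these names correspond to a specific library's API.
gp_params = {
    "population_size": 1000,        # ~100 for deterministic fitness, 1000+ if noisy
    "max_generations": 300,         # but stop early once the target fitness is met
    "target_fitness": 0.95,
    "max_nodes_per_program": 1000,  # hard cap instead of a strict depth limit
    "runtime_penalty": True,        # penalise/eliminate programs that run too long
    # A reasonably balanced operator mix; exact percentages matter little.
    "operator_rates": {
        "crossover": 0.60,
        "subtree_mutation": 0.25,
        "point_mutation": 0.10,
        "reproduction": 0.05,
    },
}

assert abs(sum(gp_params["operator_rates"].values()) - 1.0) < 1e-9
```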
Why don't you try using a genetic algorithm to optimise these parameters for you? :)
Any problem in computer science can be solved with another layer of indirection (except for too many layers of indirection).
-- David J. Wheeler
When I started looking into Genetic Algorithms I had the same question.
I wanted to collect data while varying the parameters on a very simple problem, and relate particular operators and parameter values (such as mutation rates) to particular results as a function of population size, etc.
Once I started getting into GA a bit more I then realized that given the enormous number of variables this is a huge task, and generalization is extremely difficult.
Speaking from my (limited) experience: if you simplify the problem by fixing the crossover and selection implementations and only play with population size and mutation rate (implemented in a given way), trying to come up with general results, you will soon realize that too many variables are still in play. At the end of the day, the number of generations after which you can statistically expect a decent result (however you define "decent") still depends primarily on the problem you are solving, and consequently on the genome size; representing the same problem in different ways will obviously lead to different results in terms of the effect of any given GA parameter.
It is certainly possible to draft a set of guidelines - as the (rare but good) literature proves - but you will be able to generalize the results effectively in statistical terms only when the problem at hand can be encoded in the exact same way and the fitness is evaluated in an equivalent way (which more often than not means you're dealing with a very similar problem).
Take a look at Koza's voluminous tomes on these matters.
There are very different schools of thought even within the GP community -
Some regard populations in the (low) thousands as sufficient, whereas Koza and others often don't deem it worthwhile to start a GP run with fewer than a million individuals in the GP population ;-)
As mentioned before, it depends on your personal taste and experience, your resources, and probably the GP system used!
Cheers,
Jan

How to detect anomalous resource consumption reliably?

This question is about a whole class of similar problems, but I'll ask it as a concrete example.
I have a server with a file system whose contents fluctuate. I need to monitor the available space on this file system to ensure that it doesn't fill up. For the sake of argument, let's suppose that if it fills up, the server goes down.
It doesn't really matter what it is -- it might, for example, be a queue of "work".
During "normal" operation, the available space varies within "normal" limits, but there may be pathologies:
Some other (possibly external) component that adds work may run out of control
Some component that removes work seizes up, but remains undetected
The statistical characteristics of the process are basically unknown.
What I'm looking for is an algorithm that takes, as input, timed periodic measurements of the available space (alternative suggestions for input are welcome) and produces, as output, an alarm when things are "abnormal" and the file system is "likely to fill up". It is obviously important to avoid false negatives, but almost as important to avoid false positives, so as not to numb the brain of the sysadmin who gets the alarm.
I appreciate that there are alternative solutions like throwing more storage space at the underlying problem, but I have actually experienced instances where 1000 times wasn't enough.
Algorithms which consider stored historical measurements are fine, although on-the-fly algorithms which minimise the amount of historic data are preferred.
I have accepted Frank's answer, and am now going back to the drawing-board to study his references in depth.
There are three cases, I think, of interest, not in order:
The "Harrods' Sale has just started" scenario: a peak of activity that at one-second resolution is "off the dial", but doesn't represent a real danger of resource depletion;
The "Global Warming" scenario: needing to plan for (relatively) stable growth; and
The "Google is sending me an unsolicited copy of The Index" scenario: this will deplete all my resources in relatively short order unless I do something to stop it.
It's the last one that's (I think) the most interesting and challenging from a sysadmin's point of view.
If it is actually related to a queue of work, then queueing theory may be the best route to an answer.
For the general case you could perhaps attempt a (multiple?) linear regression on the historical data, to detect whether there is a statistically significant rising trend in resource usage that is likely to lead to problems if it continues. With this technique you may also be able to predict how long the trend must continue before it leads to problems: just set a threshold for 'problem' and use the slope of the trend to estimate how long it will take to reach it. You would have to play around with this, and with the variables you collect, to see whether there is any statistically significant relationship you can discover in the first place.
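A rough sketch of that trend-plus-extrapolation idea, assuming timestamped measurements of used space and a known capacity (the capacity, significance level and fake data below are all illustrative assumptions):

```python
# Fit a linear trend to recent usage measurements and, if the slope is
# significantly positive, estimate how long until the capacity is reached.
import numpy as np
from scipy import stats

def hours_until_full(timestamps_h, used_gb, capacity_gb, alpha=0.01):
    slope, intercept, r, p_value, stderr = stats.linregress(timestamps_h, used_gb)
    if slope <= 0 or p_value > alpha:
        return None  # no statistically significant rising trend
    now = timestamps_h[-1]
    current_est = intercept + slope * now
    return (capacity_gb - current_est) / slope  # hours of headroom at this rate

# Fake data: ~2 GB/hour of growth plus noise on a 500 GB filesystem.
t = np.arange(0, 48, 1.0)
usage = 300 + 2.0 * t + np.random.default_rng(1).normal(0, 3, t.size)
print(hours_until_full(t, usage, capacity_gb=500))
```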
Although it covers a completely different topic (global warming), I've found tamino's blog (tamino.wordpress.com) to be a very good resource on statistical analysis of data that is full of knowns and unknowns. For example, see this post.
Edit: as per my comment, I think the problem is somewhat analogous to the GW problem: you have short-term bursts of activity which average out to zero, with long-term trends superimposed on them that you are interested in. There is probably also more than one long-term trend, and it changes from time to time. Tamino describes a technique which may be suitable for this, but unfortunately I cannot find the post I'm thinking of. It involves sliding regressions along the data (imagine multiple lines fitted to noisy data) and letting the data pick the inflection points. If you could do this, you could perhaps identify a significant change in the trend. Unfortunately it may only be identifiable after the fact, as you may need to accumulate a lot of data to reach significance, but it might still be in time to head off resource depletion. At least it may give you a robust way to determine what kind of safety margin and reserve resources you need in future.
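And a rough sketch of the sliding-regression idea as I understand it (this is only an approximation of the technique Tamino describes; the window length and the n-sigma rule are arbitrary choices of mine): fit a short regression inside a moving window and flag the point where the local slope rises well above its earlier history.

```python
# Rolling local slopes: fit a least-squares line inside each window and
# flag the latest window if its slope is far above the earlier slope history.
import numpy as np

def rolling_slopes(values, window=24):
    slopes = []
    x = np.arange(window)
    for i in range(len(values) - window + 1):
        slope, _ = np.polyfit(x, values[i:i + window], 1)
        slopes.append(slope)
    return np.array(slopes)

def trend_change_alarm(values, window=24, n_sigma=3.0):
    s = rolling_slopes(values, window)
    ref = s[: len(s) // 2]                  # treat the first half as "normal" history
    baseline, spread = np.median(ref), np.std(ref)
    return s[-1] > baseline + n_sigma * max(spread, 1e-9)

# Fake trace: flat-ish usage, then a component "runs out of control".
rng = np.random.default_rng(2)
usage = np.concatenate([100 + rng.normal(0, 1, 200),
                        100 + np.cumsum(np.full(50, 0.8))])
print(trend_change_alarm(usage))  # True once the runaway growth dominates the window
```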
