How to scale an algorithm/service/system with multiple machines?

I had some interviews recently, and it's quite normal to be asked scaling problems.
For example: you have a long list of words (a dict) and a list of characters as the inputs; design an algorithm to find the shortest word in the dict that contains all the characters in the char list. Then the interviewer asked how to scale your algorithm onto multiple machines.
Another example: you have designed a traffic light control system for an intersection in a city. How do you scale this control system to the whole city, which has many intersections?
I never have any idea how to approach these "scaling" problems; any suggestions and comments are welcome.

Your first question is completely different from your second question. In fact, the control of traffic lights in cities is a local operation: there are boxes nearby that you can tune, and optical sensors on top of the lights that detect waiting cars. I guess if you need to optimize for some objective function of flow, you can route information to a server process, and then the question can become how to scale that server process over multiple machines.
I am no expert in the design of distributed algorithms, which is a whole field of research. But the questions in undergrad interviews usually are not that specialized. After all, they are not interviewing a graduate student specializing in those fields. Take your first question as an example; it is actually quite generic.
Normally these questions involve multiple data structures (several lists and hashtables) interacting (joining, iterating, etc.) to solve a problem. Once you have worked out a basic solution, scaling is basically copying that solution onto many machines and running them with partitions of the input at the same time. (Of course, in many cases this is difficult if not impossible, but interview questions won't be that hard.)
That is, you have many identical workers splitting the input workload and working at the same time, but those workers are processes on different machines. That brings in problems of communication protocols and network latency, etc., but we will ignore those to get to the basics.
The most common way to scale is to let the workers hold copies of the smaller data structures and split the larger data structures among them as the workload. In your example (the first question), the list of characters is small, so you would give each worker a copy of the list and a portion of the dictionary to work on with that list. Notice that the other way around won't work, because having each worker hold a copy of the dictionary would consume a large amount of memory in total, and scaling up wouldn't save you anything.
If your problem gets larger, then you may need more layers of splitting, which also implies you need a way of combining the outputs from the workers that take in the split input. This is the general concept of, and motivation for, the MapReduce framework and its derivatives.
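To make that concrete, here is a minimal single-machine sketch in Python of the same idea, where each worker process stands in for a machine holding a copy of the char list and one slice of the dictionary (the function names are invented for the example):

from multiprocessing import Pool

def shortest_in_partition(args):
    # One worker: a full copy of the (small) char list plus a slice
    # of the (large) dictionary; returns its local shortest match.
    chars, words = args
    needed = set(chars)
    best = None
    for w in words:
        if needed.issubset(w) and (best is None or len(w) < len(best)):
            best = w
    return best

def shortest_word(dictionary, chars, n_workers=4):
    # Split the dictionary into n_workers slices, run the workers in
    # parallel, then combine their partial results.
    slices = [dictionary[i::n_workers] for i in range(n_workers)]
    with Pool(n_workers) as pool:
        partial = pool.map(shortest_in_partition, [(chars, s) for s in slices])
    found = [w for w in partial if w is not None]
    return min(found, key=len) if found else None

if __name__ == "__main__":
    print(shortest_word(["attack", "tack", "cat", "act"], ["a", "c", "t"]))  # cat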
Hope it helps...

For the first question, here is how to search for words that contain all the characters in the char list, in parallel on different machines (not yet the shortest). I would do it with MapReduce as the base.
First, this problem can actually run on different machines at the same time, because each word in the database can be checked on a different machine (to check one word, you don't have to wait for the previous or the next word; you can literally send each word to a different computer to be checked).
Using MapReduce, you can map each word as a value and then check whether it contains every character in the char list.
Map(word, keyOut, valueOut) {
    // word comes from the database; keyOut & valueOut are the inputs for Reduce
    if (containsAllChars(word, charList)) {
        sharedOutput(key, word); // send the word to a shared output file,
                                 // managed by a framework such as Hadoop
    }
}
After this Map runs, all the words you want from the database are located in the shared file. As for the Reduce step, you can use a simple step to reduce them based on their length. And tada, you get the shortest one.
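In the same spirit, the reduce step described above could be as simple as this sketch (no framework here; matching_words stands for the contents of the shared output file):

def reduce_shortest(matching_words):
    # Every mapper already emitted only words containing all required
    # characters, so the reducer just keeps the shortest one it sees.
    shortest = None
    for word in matching_words:
        if shortest is None or len(word) < len(shortest):
            shortest = word
    return shortest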
As for the second question, multithreading comes to mind. The intersections are problems that don't relate to each other; each intersection has its own timer, right? So to be able to handle tons of intersections, you could use multithreading.
In simple terms, use each core of the processor to control an intersection, rather than looping through all intersections one by one. You can allocate them across cores so the processing is faster.
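A rough sketch of that idea in Python, with one thread per intersection, each running its own independent timer (the Intersection class and timings are made up for illustration; a real system would pin work to cores or processes rather than Python threads):

import threading
import time

class Intersection:
    def __init__(self, name, green_seconds=30):
        self.name = name
        self.green_seconds = green_seconds
        self.state = "NS_GREEN"

    def run(self, stop_event):
        # Each intersection toggles on its own timer, independent of the rest.
        while not stop_event.is_set():
            time.sleep(self.green_seconds)
            self.state = "EW_GREEN" if self.state == "NS_GREEN" else "NS_GREEN"

stop = threading.Event()
for i in range(4):
    crossing = Intersection("intersection-%d" % i)
    threading.Thread(target=crossing.run, args=(stop,), daemon=True).start()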

Related

Creating the N most accurate sparklines for M sets of data

I recently constructed a simple name popularity tool (http://names.yafla.com) that allows users to select names and investigate their popularity over time and by state. This is just a fun project and has no commercial or professional value, but it scratched a curiosity itch.
One improvement I would like to add is the display of simple sparklines beside each name in the select list, showing the normalized national popularity trends since 1910.
Doing an image request for every single name -- where hypothetically I've preconstructed the sparklines for every possible variant -- would slow the interface too much and yield a lot of unnecessary traffic as users quickly scroll and filter past hundreds of names they aren't interested in. Building sprites with sparklines for sets of names is a possibility, but again, with tens of thousands of names, the user's cache would end up burdened with a lot of unnecessary information.
My goal is absolutely tuned minimalism.
Which got me contemplating the interesting challenge of taking M sets of data (occurrences over time) and distilling that to the most proximal N representative sparklines. For this purpose they don't have to be exact, but should be a general match, and where I could tune N to yield a certain accuracy number.
Essentially a form of sparkline lossy compression.
I feel like this most certainly is a solved problem, but can't find or resolve the heuristics that would yield the algorithms that would shorten the path.
What you describe seems to be cluster analysis - e.g. shoving that into Wikipedia will give you a starting point. Particular methods for cluster analysis include k-means and single linkage. A related topic is Latent Class Analysis.
If you do this, another option is to look at the clusters that come out, give them descriptive names, and then display the cluster names rather than inaccurate sparklines - or I guess you could draw not just a single line in the sparkline, but two or more lines showing the range of popularities seen within that cluster.
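A rough sketch of the k-means route, assuming scikit-learn is available and each name's history has already been normalized to a fixed-length vector (names and parameters here are placeholders):

import numpy as np
from sklearn.cluster import KMeans

def representative_sparklines(series, n_clusters=50):
    # series: array of shape (M, T) - one normalized popularity curve per name.
    # Returns the cluster index for each name plus the N cluster-centre
    # curves, which are the sparklines actually rendered and cached.
    X = np.asarray(series, dtype=float)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X)
    return km.labels_, km.cluster_centers_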

Neural network and algorithm(s), predicting future outcome from past

I was working on an algorithm where I am given some inputs and the corresponding outputs, and given the outputs for 3 months (give or take), I need a way to find/calculate what the future output might be.
Now, this problem can be related to the stock exchange: we are given certain constraints and certain outcomes, and we need to find the next one.
I stumbled upon neural network stock market prediction, you can Google it, or you can read about it here, here and here.
To get started on the algorithm, I couldn't figure out what the structure of the layers should be.
The given constraint are:
The output would always be an integer.
The output would always be between 1 and 100.
There is no exact input per se; just like the stock market, we only know that the price fluctuates between 1 and 100, so we might (or might not?) consider this as the only input.
We have records for the last 3 months (or more).
Now, my first question is, how many nodes do I take for input?
The output is just one, fine. But as I said, should I take 100 nodes for the input layer (given that the stock price would always be an integer and would always be between 1 and 100)?
What about the hidden layer? How many nodes there? Say, if I take 100 nodes there too, I don't think that would train the network much, because I think that for each input we need to take all previous inputs into account as well.
Say we are calculating the output for the 1st day of the 4th month; we should have 90 nodes in the hidden/middle layer (imagining each month is 30 days for simplicity). Now there are two cases:
Our prediction was correct and outcome was same as we predicted.
Our prediction failed, and the outcome was different than what we predicted.
Whatever the case, when we are calculating the output for the 2nd day of the 4th month, we need not only those 90 inputs but also that last actual result (not the prediction, even if they happen to be the same), so we now have 91 nodes in our middle/hidden layer.
And so on, it would keep increasing the number of nodes each day, AFAICT.
So, my other question is: how do I define/set the number of nodes in the hidden/middle layer if it's dynamically changing?
My last question is, is there any other particular algorithm out there (for this kinda thing/stuff) that I am not aware of? That I should be using instead of messing around with this neural networking stuff?
Lastly, is there anything I might be missing that could cause me (rather, the algorithm I am making) to mispredict the output? I mean any caveats, or anything that might make it go wrong, that I might be missing?
There is much to say in answer to your question. In fact, your question addresses the problem of time series forecasting in general, and the application of neural networks to this task. I'm writing here only the few most important points, but after reading this you should dig into Google's results for the query "time series prediction neural network". There are a lot of works where the principles are covered in detail. A variety of software implementations (with source code) also exist (here is just one example, with code in C++).
1) I must say that the problem is 99% about data preprocessing and choosing correct input/output factors, and only 1% about the concrete instrument to use, whether neural networks or something else. Just as a side note, neural networks can internally implement most other data analysis methods. For example, you can use a neural network for Principal Component Analysis (PCA), which is closely related to SVD, mentioned in another answer.
2) It's very rare that input/output values strictly fit a specific region. Real-life data can be considered unbounded in absolute value (even if its movements seem to form a channel, that channel can break down at any moment), but a neural network can only operate under stable conditions. This is why the data is normally converted into increments first (by calculating deltas between the i-th point and the (i-1)-th, or by taking the log of their ratio). I suggest you do this with your data anyway, though you declare it stays inside the [0, 100] region. If you don't, the neural network will most likely degenerate into a so-called naive predictor, which produces a forecast where each next value equals the previous one.
The data is then normalized into [0, 1] or [-1, +1]. The second is appropriate for time series prediction, where +1 denotes a move up and -1 a move down. Use the hyperbolic tangent (tanh) activation function for the neurons in your net.
3) You should feed the NN with input data obtained from a sliding window of dates. For example, if you have data for a year and every point is a day, you should choose the size of the window - say, a month - and slide it day by day, from the past to the future. The day just at the right bound of the window is the target output for the NN. This is a very simple approach (there are much more complicated ones); I mention it just because you ask how to handle data which arrives continuously. The answer is: you don't need to change/enlarge your NN every day. Just use a constant structure with a fixed window size and "forget" (do not provide to the NN) the oldest point. It's important that you do not treat all the data you have as a single input, but divide it into many small vectors and train the NN on them, so the net can generalize the data and find regularities.
4) The size of the sliding window is your NN input size. The output size is 1. You should play with the hidden layer size to find better performance. Start with a value somewhere between the input and output sizes, for example sqrt(in*out).
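A minimal sketch of points 2)-4) together - converting the series to deltas, normalizing into [-1, +1], and cutting it into sliding-window training pairs (window size and scaling are placeholders):

import numpy as np

def make_training_set(prices, window=30):
    # 1) convert the raw series to increments (deltas)
    deltas = np.diff(np.asarray(prices, dtype=float))
    # 2) normalize into [-1, +1]
    scale = np.max(np.abs(deltas))
    if scale > 0:
        deltas = deltas / scale
    # 3) slide a fixed-size window over the series: each window is one
    #    input vector, and the delta just after it is the target output
    X, y = [], []
    for i in range(len(deltas) - window):
        X.append(deltas[i:i + window])
        y.append(deltas[i + window])
    return np.array(X), np.array(y)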
According to the latest research, Recurrent Neural Networks seem to perform better for time series forecasting tasks.
I agree with Stan when he says
1) I must say that the problem is 99% about data preprocessing
I've applied Neural Networks for 25+ years to various aerospace applications including helicopter flight control - setting up the input/output data set is everything - all else is secondary.
I'm amazed by smirkman's comment that Neural Networks were quickly dropped "as they produced nothing worthwhile" - that tells me that whoever was working with Neural Networks had little experience with them.
Given that the topic discusses neural network stock market prediction - I'll say that I've made it work. Test results are downloadable from my website at www.nwtai.com.
I don't give away how it was done but there's enough interesting data that should make you want to explore using Neural Networks more seriously.
This kind of problem was particularly well researched by the thousands of people who wanted to win the $1M Netflix Prize.
Earlier submissions were often based on K Nearest Neighbours. Later submissions were made using Singular Value Decomposition, Support Vector Machines and Stochastic Gradient Descent. The winner used a blend of several techniques.
Reading the excellent Community forums will give you many insights about the best methods to predict the future from the past. You'll also find loads of source code for the different methods.
Amusingly, neural networks were quickly dropped, as they produced nothing worthwhile (and I personally have yet to see a non-trivial NN produce anything of value).
If you are starting out, I'd suggest SVD as a first path; it's quite easy to make and often produces surprising insights into data.
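For what it's worth, a first SVD pass over a user-item (or time-series) matrix is only a couple of lines with numpy; a sketch of a truncated, low-rank reconstruction, which is where the structure in the data tends to show up:

import numpy as np

def low_rank_approximation(M, k=10):
    # Keep only the k strongest singular values/vectors.
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]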
Good luck!

What is the difference between an on-line and off-line algorithm?

These terms were used in my data structures textbook, but the explanation was very terse and unclear. I think it has something to do with how much knowledge the algorithm has at each stage of computation.
(Please, don't link to the Wikipedia page: I've already read it and I am still looking for a clarification. An explanation as if I'm twelve years old and/or an example would be much more helpful.)
Wikipedia
The Wikipedia page is quite clear:
In computer science, an online algorithm is one that can process its input piece-by-piece in a serial fashion, i.e., in the order that the input is fed to the algorithm, without having the entire input available from the start. In contrast, an offline algorithm is given the whole problem data from the beginning and is required to output an answer which solves the problem at hand. (For example, selection sort requires that the entire list be given before it can sort it, while insertion sort doesn't.)
Let me expand on the above:
An offline algorithm requires all information BEFORE the algorithm starts. In the Wikipedia example, selection sort is offline because step 1 is Find the minimum value in the list. To do this, you need to have the entire list available - otherwise, how could you possibly know what the minimum value is? You cannot.
Insertion sort, by contrast, is online because it does not need to know anything about what values it will sort and the information is requested WHILE the algorithm is running. Simply put, it can grab new values at every iteration.
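A tiny sketch of that difference: an insertion-sort-style routine can keep its output sorted while values are still arriving, without ever knowing how many are left.

import bisect

sorted_so_far = []

def accept_next_value(x):
    # Online: place each value as it arrives; no knowledge of future input.
    bisect.insort(sorted_so_far, x)

for v in [5, 2, 9, 1]:
    accept_next_value(v)
print(sorted_so_far)  # [1, 2, 5, 9]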
Still not clear?
Think of the following examples (for four year olds!). David is asking you to solve two problems.
In the first problem, he says:
"I'm, going to give you two balls of different masses and you need to
drop them at the same time from a tower.. just to make sure Galileo
was right. You can't use a watch, we'll just eyeball it."
If I gave you only one ball, you'd probably look at me and wonder what you're supposed to be doing. After all, the instructions were pretty clear. You need both balls at the beginning of the problem. This is an offline algorithm.
For the second problem, David says
"Okay, that went pretty well, but now I need you to go ahead and kick
a couple of balls across a field."
I go ahead and give you the first ball. You kick it. Then I give you the second ball and you kick that. I could also give you a third and fourth ball (without you even knowing that I was going to give them to you). This is an example of an online algorithm. As a matter of fact, you could be kicking balls all day.
I hope this was clear :D
An online algorithm processes the input only piece by piece and doesn't know about the actual input size at the beginning of the algorithm.
An often used example is scheduling: you have a set of machines and an unknown workload. Each machine has a specific speed. You want to clear the workload as fast as possible. But since you don't know all inputs from the beginning (you can often only see the next one in the queue), you can only estimate which machine is best for the current input. This can result in a non-optimal distribution of your workload, since you cannot make any assumptions about your input data.
An offline algorithm on the other hand works only with complete input data. All workload must be known before the algorithm starts processing the data.
Example:
Workload:
1. Unit (Weight: 1)
2. Unit (Weight: 1)
3. Unit (Weight: 3)
Machines:
1. Machine (1 weight/hour)
2. Machine (2 weights/hour)
Possible result (Online):
1. Unit -> 2. Machine // 2. Machine has now a workload of 30 minutes
2. Unit -> 2. Machine // 2. Machine has now a workload of one hour
either
3. Unit -> 1. Machine // 1. Machine has now a workload of three hours
or
3. Unit -> 2. Machine // 2. Machine has now a workload of 2.5 hours
==> the work is done after 2.5 hours
Possible result (Offline):
1. Unit -> 1. Machine // 1. Machine has now a workload of one hour
2. Unit -> 1. Machine // 1. Machine has now a workload of two hours
3. Unit -> 2. Machine // 2. Machine has now a workload of 1.5 hours
==> the work is done after 2 hours
Note that the better result in the offline algorithm is only possible since we don't use the better machine from the start. We know already that there will be a heavy unit (unit 3), so this unit should be processed by the fastest machine.
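The example can be written out directly; a small sketch of the greedy online rule ("put each unit on the machine that finishes it soonest, ties going to the machine listed first") next to the offline assignment that already knows unit 3 is coming:

machines = {"M2": 2.0, "M1": 1.0}   # weights processed per hour (M2 is faster)
units = [1, 1, 3]                   # unit weights, arriving in this order

def online_makespan(units, machines):
    # Assign each unit, as it arrives, to the machine that would finish
    # it earliest given the work already queued there.
    load = {m: 0.0 for m in machines}
    for w in units:
        best = min(machines, key=lambda m: load[m] + w / machines[m])
        load[best] += w / machines[best]
    return max(load.values())

print(online_makespan(units, machines))    # 2.5 hours, as in the online result
print(max(1 / 1.0 + 1 / 1.0, 3 / 2.0))     # 2.0 hours, the offline result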
An offline algorithm knows all about its input data the moment it is invoked. An online algorithm, on the other hand, can get parts or all of its input data while it is running.
An algorithm is said to be online if it does not know the data it will be executing on beforehand. An offline algorithm may see all of the data in advance.
An on-line algorithm is one that receives a sequence of requests and performs an immediate action in response to each request.
In contrast, an off-line algorithm performs its actions only after all the requests have been received.
This paper by Richard Karp gives more insight into on-line and off-line algorithms.
We can differentiate offline and online algorithms based on the availability of the inputs prior to the processing of the algorithm.
Offline Algorithm: All input information is available to the algorithm and processed simultaneously by it. With the complete set of input information, the algorithm finds a way to efficiently process the inputs and obtain an optimal solution.
Online Algorithm: Inputs come on the fly, i.e. all input information is not available to the algorithm simultaneously but rather part by part, as a sequence or over time. Upon the arrival of an input, the algorithm has to make an immediate decision without any knowledge of future input information. In this process, the algorithm produces a sequence of decisions that will have an impact on the final quality of its overall performance.
E.g., routing in a communication network:
Data packets from different sources arrive at the nearest router. More than one communication link is connected to the router. When a new data packet arrives at the router, the router has to decide immediately which link the data packet is to be sent to. (Assume all links route to the destination, all link bandwidths are the same, and all links are part of the shortest path to the destination.) Here, the objective is to assign each incoming data packet to one of the links, without knowing the future data packets, in such a way that the load on each link is balanced. No link should be overloaded. This is a load balancing problem.
Here, the scheduler implemented in the router has no idea about the future data packets, but it has to make a scheduling decision for each incoming data packet.
In contrast, an offline scheduler has full knowledge of all incoming data packets; it can then efficiently assign the data packets to different links and optimally balance the load among them.
Cache miss problem: In a computer system, the cache is a memory unit used to bridge the speed mismatch between the faster processor and the slower primary memory. The objective of using a cache is to minimize the average access time by keeping some frequently accessed pages in the cache, under the assumption that these pages may be requested by the processor in the near future. Generally, when a page request is made by the processor, the page is fetched from primary or secondary memory and a copy is stored in the cache. Suppose the cache is full; then the algorithm implemented in the cache has to make an immediate decision about which cache block to replace, without knowledge of future page requests. The question arises: which cache block should be replaced? (In the worst case, you may replace a cache block and, the very next moment, the processor requests the block you just replaced.)
So, the algorithm must be designed in such a way that it makes an immediate decision upon the arrival of an incoming request, without advance knowledge of the entire request sequence. This type of algorithm is known as an ONLINE ALGORITHM.
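One well-known online policy for exactly this situation is LRU ("evict the least recently used block"); a tiny sketch, purely as an illustration of making the decision per request:

from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()            # page -> data, oldest first

    def access(self, page, load_page):
        # The decision is made now, with no knowledge of future requests.
        if page in self.blocks:
            self.blocks.move_to_end(page)      # mark as recently used
        else:
            if len(self.blocks) >= self.capacity:
                self.blocks.popitem(last=False)  # evict least recently used
            self.blocks[page] = load_page(page)  # fetch from primary memory
        return self.blocks[page]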

Database for brute force solving board games

A few years back, researchers announced that they had completed a brute-force comprehensive solution to checkers.
I have been interested in another similar game that should have fewer states, but is still quite impractical to run a complete solver on in any reasonable time frame. I would still like to make an attempt, as even a partial solution could give valuable information.
Conceptually I would like to have a database of game states that has every known position, as well as its succeeding positions. One or more clients can grab unexplored states from the database, calculate possible moves, and insert the new states into the database. Once an endgame state is found, all states leading up to it can be updated with the minimax information to build a decision tree. If intelligent decisions are made to pick probable branches to explore, I can build information for the most important branches, and then gradually build up to completion over time.
Ignoring the merits of this idea, or its feasibility, what is the best way to implement such a database? I made a quick prototype in SQL Server that stored a string representation of each state. It worked, but my solver client ran very, very slowly, as it pulled out one state at a time and calculated all moves. I feel like I need to do larger chunks in memory, but the search space is definitely too large to store it all in memory at once.
Is there a database system better suited to this kind of job? I will be doing many many inserts, a lot of reads (to check if states (or equivalent states) already exist), and very few updates.
Also, how can I parallelize it so that many clients can work on solving different branches without duplicating too much work. I'm thinking something along the lines of a program that checks out an assignment, generates a few million states, and submits it back to be integrated into the main database. I'm just not sure if something like that will work well, or if there is prior work on methods to do that kind of thing as well.
In order to solve a game, what you really need to know per state in your database is its game-theoretic value, i.e. whether it's a win for the player whose turn it is to move, a loss, or a forced draw. You need two bits to encode this information per state.
You then find as compact an encoding as possible for the set of game states for which you want to build your end-game database; let's say your encoding takes 20 bits. It's then enough to have an array of 2^21 bits on your hard disk, i.e. 2^18 bytes. When you analyze an end-game position, you first check whether the corresponding value is already set in the database; if not, calculate all its successors, calculate their game-theoretic values recursively, and then calculate the game-theoretic value of the original node using min/max and store it in the database. (Note: if you store win/loss/draw data in two bits, you have one bit pattern left over to denote 'not known'; e.g. 00 = not known, 11 = draw, 10 = player to move wins, 01 = player to move loses.)
For example, consider tic-tac-toe. There are nine squares; each one can be empty, "X" or "O". This naive analysis gives you 3^9 = 2^14.26 states, i.e. 15 bits per state, so you would have an array of 2^16 bits.
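A sketch of that two-bits-per-state table, using the bit patterns suggested above (the compact state-to-index encoding itself is left as a placeholder):

UNKNOWN, LOSS, WIN, DRAW = 0b00, 0b01, 0b10, 0b11   # 2 bits per state

class GameValueTable:
    def __init__(self, encoding_bits):
        # 2 bits per state, packed four states per byte:
        # a 20-bit encoding -> 2^20 states -> 2^21 bits -> 2^18 bytes.
        self.table = bytearray(2 ** encoding_bits // 4)

    def get(self, state_index):
        byte, slot = divmod(state_index, 4)
        return (self.table[byte] >> (2 * slot)) & 0b11

    def set(self, state_index, value):
        byte, slot = divmod(state_index, 4)
        self.table[byte] &= 0xFF ^ (0b11 << (2 * slot))   # clear the 2 bits
        self.table[byte] |= value << (2 * slot)           # write the new value

values = GameValueTable(20)   # ~256 KB of storage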
You undoubtedly want a task queue service of some sort, such as RabbitMQ - probably in conjunction with a database which can store the data once you've calculated it. Alternately, you could use a hosted service like Amazon's SQS. The client would consume an item from the queue, generate the successors, and enqueue those, as well as adding the outcome of the item it just consumed to the queue. If the state is an end-state, it can propagate scoring information up to parent elements by consulting the database.
Two caveats to bear in mind:
The number of items in the queue will likely grow exponentially as you explore the tree, with each work item causing several more to be enqueued. Be prepared for a very long queue.
Depending on your game, it may be possible for there to be multiple paths to the same game state. You'll need to check for and eliminate duplicates, and your database will need to be structured so that it's a graph (possibly with cycles!), not a tree.
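The consume/expand loop described above might look roughly like this; a plain in-process queue keeps the sketch self-contained (a real deployment would swap in RabbitMQ or SQS, and successors, is_end_state and store_value are placeholders for your game logic and database layer):

import queue

work = queue.Queue()

def worker(successors, is_end_state, store_value):
    while True:
        state = work.get()
        if is_end_state(state):
            store_value(state)              # record the outcome in the database
        else:
            for nxt in successors(state):   # expand the state and enqueue children
                work.put(nxt)
        work.task_done()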
The first thing that popped into my mind is the Linda style of shared 'whiteboard', where different processes can consume 'problems' off the whiteboard, add new problems to the whiteboard, and add 'solutions' to the whiteboard.
Perhaps the Cassandra project is the more modern version of Linda.
There have been many attempts to parallelize problems across distributed computer systems; Folding@home provides a framework that executes binary blob 'cores' to solve protein folding problems. Distributed.net might have started the modern incarnation of distributed problem solving, and might have clients that you could start from.

How to detect anomalous resource consumption reliably?

This question is about a whole class of similar problems, but I'll ask it as a concrete example.
I have a server with a file system whose contents fluctuate. I need to monitor the available space on this file system to ensure that it doesn't fill up. For the sake of argument, let's suppose that if it fills up, the server goes down.
It doesn't really matter what it is -- it might, for example, be a queue of "work".
During "normal" operation, the available space varies within "normal" limits, but there may be pathologies:
Some other (possibly external) component that adds work may run out of control
Some component that removes work seizes up, but remains undetected
The statistical characteristics of the process are basically unknown.
What I'm looking for is an algorithm that takes, as input, timed periodic measurements of the available space (alternative suggestions for input are welcome), and produces as output, an alarm when things are "abnormal" and the file system is "likely to fill up". It is obviously important to avoid false negatives, but almost as important to avoid false positives, to avoid numbing the brain of the sysadmin who gets the alarm.
I appreciate that there are alternative solutions like throwing more storage space at the underlying problem, but I have actually experienced instances where 1000 times wasn't enough.
Algorithms which consider stored historical measurements are fine, although on-the-fly algorithms which minimise the amount of historic data are preferred.
I have accepted Frank's answer, and am now going back to the drawing-board to study his references in depth.
There are three cases, I think, of interest, not in order:
The "Harrods' Sale has just started" scenario: a peak of activity that at one-second resolution is "off the dial", but doesn't represent a real danger of resource depletion;
The "Global Warming" scenario: needing to plan for (relatively) stable growth; and
The "Google is sending me an unsolicited copy of The Index" scenario: this will deplete all my resources in relatively short order unless I do something to stop it.
It's the last one that's (I think) most interesting, and challenging, from a sysadmin's point of view.
If it is actually related to a queue of work, then queueing theory may be the best route to an answer.
For the general case you could perhaps attempt a (multiple?) linear regression on the historical data, to detect if there is a statistically significant rising trend in the resource usage that is likely to lead to problems if it continues (you may also be able to predict how long it must continue to lead to problems with this technique - just set a threshold for 'problem' and use the slope of the trend to determine how long it will take). You would have to play around with this and with the variables you collect though, to see if there is any statistically significant relationship that you can discover in the first place.
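A minimal version of that regression idea, assuming evenly spaced measurements of free space (numpy's polyfit stands in for a proper regression with a significance test on the slope):

import numpy as np

def hours_until_full(free_space, interval_hours=1.0):
    # free_space: recent measurements of available space, oldest first.
    # Returns the estimated hours until it hits zero, or None if the
    # free space is not shrinking at all.
    t = np.arange(len(free_space)) * interval_hours
    slope, intercept = np.polyfit(t, free_space, 1)   # least-squares trend line
    if slope >= 0:
        return None
    current = intercept + slope * t[-1]               # fitted value right now
    return -current / slope                           # time left at this rate

# e.g. raise the alarm when the projection drops below 24 hours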
Although it covers a completely different topic (global warming), I've found tamino's blog (tamino.wordpress.com) to be a very good resource on statistical analysis of data that is full of knowns and unknowns. For example, see this post.
edit: as per my comment I think the problem is somewhat analogous to the GW problem. You have short term bursts of activity which average out to zero, and long term trends superimposed that you are interested in. Also there is probably more than one long term trend, and it changes from time to time. Tamino describes a technique which may be suitable for this, but unfortunately I cannot find the post I'm thinking of. It involves sliding regressions along the data (imagine multiple lines fitted to noisy data), and letting the data pick the inflection points. If you could do this then you could perhaps identify a significant change in the trend. Unfortunately it may only be identifiable after the fact, as you may need to accumulate a lot of data to get significance. But it might still be in time to head off resource depletion. At least it may give you a robust way to determine what kind of safety margin and resources in reserve you need in future.
