What is the difference between an on-line and off-line algorithm?

These terms were used in my data structures textbook, but the explanation was very terse and unclear. I think it has something to do with how much knowledge the algorithm has at each stage of computation.
(Please, don't link to the Wikipedia page: I've already read it and I am still looking for a clarification. An explanation as if I'm twelve years old and/or an example would be much more helpful.)

The Wikipedia page is quite clear:
In computer science, an online algorithm is one that can process its input piece-by-piece in a serial fashion, i.e., in the order that the input is fed to the algorithm, without having the entire input available from the start. In contrast, an offline algorithm is given the whole problem data from the beginning and is required to output an answer which solves the problem at hand. (For example, selection sort requires that the entire list be given before it can sort it, while insertion sort doesn't.)
Let me expand on the above:
An offline algorithm requires all information BEFORE the algorithm starts. In the Wikipedia example, selection sort is offline because step 1 is Find the minimum value in the list. To do this, you need to have the entire list available - otherwise, how could you possibly know what the minimum value is? You cannot.
Insertion sort, by contrast, is online because it does not need to know anything about what values it will sort and the information is requested WHILE the algorithm is running. Simply put, it can grab new values at every iteration.
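To make that concrete, here is a minimal Python sketch (my own illustration, not from the textbook) of insertion sort driven by a stream of values; it never needs the whole input up front:

# Minimal sketch: insertion sort written as an online algorithm.
# It consumes values one at a time from any iterator and keeps the
# values seen so far sorted -- it never needs to know how many are coming.
def online_insertion_sort(stream):
    sorted_so_far = []
    for value in stream:            # a new value "arrives"
        i = len(sorted_so_far)
        sorted_so_far.append(value)
        # shift larger elements right until `value` is in place
        while i > 0 and sorted_so_far[i - 1] > value:
            sorted_so_far[i] = sorted_so_far[i - 1]
            i -= 1
        sorted_so_far[i] = value
        print(sorted_so_far)        # the prefix is sorted after every arrival

online_insertion_sort(iter([5, 2, 9, 1]))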
Still not clear?
Think of the following examples (for four-year-olds!). David is asking you to solve two problems.
In the first problem, he says:
"I'm, going to give you two balls of different masses and you need to
drop them at the same time from a tower.. just to make sure Galileo
was right. You can't use a watch, we'll just eyeball it."
If I gave you only one ball, you'd probably look at me and wonder what you're supposed to be doing. After all, the instructions were pretty clear. You need both balls at the beginning of the problem. This is an offline algorithm.
For the second problem, David says:
"Okay, that went pretty well, but now I need you to go ahead and kick a couple of balls across a field."
He hands you the first ball. You kick it. Then he gives you the second ball and you kick that. He could also give you a third and fourth ball (without you even knowing that he was going to give them to you). This is an example of an online algorithm. As a matter of fact, you could be kicking balls all day.
I hope this was clear :D

An online algorithm processes the input only piece by piece and doesn't know about the actual input size at the beginning of the algorithm.
An often-used example is scheduling: you have a set of machines and an unknown workload. Each machine has a specific speed. You want to clear the workload as fast as possible. But since you don't know all the inputs from the beginning (you can often see only the next one in the queue), you can only estimate which machine is best for the current input. This can result in a non-optimal distribution of your workload, since you cannot make any assumptions about your input data.
An offline algorithm on the other hand works only with complete input data. All workload must be known before the algorithm starts processing the data.
Example:
Workload:
1. Unit (Weight: 1)
2. Unit (Weight: 1)
3. Unit (Weight: 3)
Machines:
1. Machine (1 weight/hour)
2. Machine (2 weights/hour)
Possible result (Online):
1. Unit -> 2. Machine // 2. Machine has now a workload of 30 minutes
2. Unit -> 2. Machine // 2. Machine has now a workload of one hour
either
3. Unit -> 1. Machine // 1. Machine has now a workload of three hours
or
3. Unit -> 2. Machine // 2. Machine has now a workload of 2.5 hours
==> the work is done after 2.5 hours at best (3 hours if the first choice is made)
Possible result (Offline):
1. Unit -> 1. Machine // 1. Machine has now a workload of one hour
2. Unit -> 1. Machine // 1. Machine has now a workload of two hours
3. Unit -> 2. Machine // 2. Machine has now a workload of 1.5 hours
==> the work is done after 2 hours
Note that the better result in the offline algorithm is only possible since we don't use the better machine from the start. We know already that there will be a heavy unit (unit 3), so this unit should be processed by the fastest machine.
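To make the contrast concrete, here is a small Python sketch (mine, not part of the answer above) of a greedy online scheduler next to a brute-force offline scheduler on the workload above. The greedy rule is "send each unit to the machine that would finish it first", with ties broken toward the faster machine, matching the trace:

from itertools import product

units = [1, 1, 3]               # workload weights
speeds = [1, 2]                 # machine speeds in weight/hour

def online_greedy(units, speeds):
    """Assign each unit as it arrives to the machine that would finish it earliest."""
    load = [0.0] * len(speeds)  # hours of work already queued per machine
    for w in units:
        # ties broken toward the faster machine, as in the trace above
        m = min(range(len(speeds)), key=lambda i: (load[i] + w / speeds[i], -speeds[i]))
        load[m] += w / speeds[m]
    return max(load)

def offline_optimal(units, speeds):
    """With the whole workload known, try every assignment and keep the best."""
    best = float("inf")
    for assignment in product(range(len(speeds)), repeat=len(units)):
        load = [0.0] * len(speeds)
        for w, m in zip(units, assignment):
            load[m] += w / speeds[m]
        best = min(best, max(load))
    return best

print(online_greedy(units, speeds))    # 2.5 hours
print(offline_optimal(units, speeds))  # 2.0 hours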

An offline algorithm knows all about its input data the moment it is invoked. An online algorithm, on the other hand, can get parts or all of its input data while it is running.

An algorithm is said to be online if it does not know the data it will be executing on beforehand. An offline algorithm may see all of the data in advance.

An on-line algorithm is one that receives a sequence of requests and performs an immediate action in response to each request.
In contrast, an off-line algorithm performs its action only after all the requests have been received.
This paper by Richard Karp gives more insight into on-line and off-line algorithms.

We can differentiate offline and online algorithms based on whether the inputs are available before the algorithm starts processing.
Offline Algorithm: All input information is available to the algorithm and processed simultaneously by the algorithm. With the complete set of input information, the algorithm finds a way to efficiently process the inputs and obtain an optimal solution.
Online Algorithm: Inputs arrive on the fly, i.e. the input information is not available to the algorithm all at once but rather part by part, as a sequence or over time. Upon the arrival of an input, the algorithm has to make an immediate decision without any knowledge of future inputs. In this process, the algorithm produces a sequence of decisions that will have an impact on the final quality of its overall performance.
E.g. routing in a communication network:
Data packets from different sources arrive at the nearest router. More than one communication link is connected to the router. When a new data packet arrives at the router, the router has to decide immediately which link the packet should be sent on. (Assume all links are routed to the destination, all links have the same bandwidth, and all links are part of the shortest path to the destination.) Here, the objective is to assign each incoming data packet to one of the links, without knowing the future data packets, in such a way that the load on each link is balanced. No link should be overloaded. This is a load-balancing problem.
Here, the scheduler implemented in the router has no idea about future data packets, but it has to make a scheduling decision for each incoming data packet.
In contrast, an offline scheduler has full knowledge of all incoming data packets, so it can efficiently assign the data packets to different links and optimally balance the load among them.

Cache Miss problem: In a computer system, the cache is a memory unit used to bridge the speed mismatch between the faster processor and the slower primary memory. The objective of using a cache is to minimize the average access time by keeping some frequently accessed pages in the cache, the assumption being that these pages may be requested by the processor in the near future. Generally, when a page request is made by the processor, the page is fetched from primary or secondary memory and a copy of it is stored in the cache. Suppose the cache is full; then the algorithm implemented in the cache has to make an immediate decision about replacing a cache block without knowledge of future page requests. The question arises: which cache block should be replaced? (In the worst case, it may happen that you replace a cache block and the very next moment the processor requests the replaced block.)
So, the algorithm must be designed in such a way that it makes an immediate decision upon the arrival of each incoming request, without advance knowledge of the entire request sequence. Algorithms of this type are known as ONLINE ALGORITHMS.
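A classic online answer to that question is LRU: always evict the least recently used block. A tiny Python sketch (my own illustration, not part of the text above) using OrderedDict:

from collections import OrderedDict

class LRUCache:
    """Online cache replacement: evict the least recently used page.
    The policy decides on every request without knowing future requests."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()   # page -> value, oldest access first

    def access(self, page):
        if page in self.pages:
            self.pages.move_to_end(page)      # hit: mark as most recently used
            return "hit"
        if len(self.pages) >= self.capacity:  # full: evict the oldest entry
            self.pages.popitem(last=False)
        self.pages[page] = True               # miss: load the page
        return "miss"

cache = LRUCache(capacity=2)
for p in ["A", "B", "A", "C", "B"]:
    print(p, cache.access(p))
# A miss, B miss, A hit, C miss (evicts B), B miss (evicts A)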

Related

Is it possible to apply Machine Learning algorithm to predict Failure in large HPC systems based on a years of systematic data collection?

The provided CSV dataset categories look like the following:
DATE | Hardware Identifier | What Failed | Description of Failure | Action Taken
The complete data can be easily downloaded from the Dropbox service using this link: data.csv
The data is very systematic, the input is very consistent and nicely structured. This data comes from a Computer Failure Data Repository. Additional details can be found on this link at USENIX: PNNL
About the data:
There are a little over 2800 entries of single failure events that were collected over 4 years. Each event is described by the exact date and time when the event took place, which node in the system failed, and what hardware component of that node failed.
About the system:
The system consists of 980 nodes performing heavy calculations for the Molecular Science Computing Facility. Each node is designated by its own unique ID.
My question:
Is it possible to perform any meaningful Machine Learning technique on such dataset, that would, in the end, be capable of predicting future failures in the system?
For example, would it be possible to train the ML algorithm on the provided dataset in order to predict either:
What node might fail soon (based on Hardware Identifier field)
What (node-piece of hardware) combination might fail soon (based on Hardware Identifier and either What failed or Description of Failure field)
What kind of failure might occur next anywhere in the system (based on What Failed field)
To me, this sounds like a huge classification problem. For example, in the case of (node, piece of hardware that failed), there are several thousand different possibilities (classes). Bearing in mind that there are only a little over 2800 single failure events described in the table, I don't feel like this would work.
Also, I am confused about how I should feed the data into the algorithm. Should the only input to the algorithm be the DATE field (converted to a numeric, linearly growing time)? That doesn't seem right. Is it possible to somehow feed the algorithm the time variable combined with some history of recent failure events? Should I restructure the data to feed the algorithm the time variable plus a failure history (limited, for example, to the last 30 days, or perhaps the whole failure history of the system)?
May I hear your opinion? Is it possible to train an algorithm on this dataset that could predict any of the above-mentioned failure events (e.g. what node will fail next) given some input about the system (I can only think of time as an input for now, but that sounds wrong)?
Since I am just starting to get involved with the ML algorithms, my thinking on the topic is probably very narrow and limited, so please feel free to suggest if you feel I should take a completely different approach on this.
Before we go on, remember that these failures are generally considered fairly random, so any results you get will likely be fairly unreliable.
The main problem to consider is that you have very little data compared to the number of nodes (slightly fewer than 3 failures per node on average), which means you have to use incredibly simple models, ones that would not give you much advantage over a random guess, for you to have any certainty in your variables (a per-node mean time between failures would not have a determinable error, if it is even calculable). Given that, I would probably treat each node as a separate data point and train a tree-based algorithm to try to predict when the last failure in the node's sequence of failures is, but that also means it would only be applicable to a subset of the database. This might be able to vaguely predict whether a node will fail in the near future and what the failure type would most likely be, but it will probably end up fairly close to the estimate of mean time to failure and the most common failure across all nodes.
If you want some meaningful results, you will need to have some attributes of the nodes that you can do the machine learning on, such as hardware components and when they were installed, and then have those as inputs to the classification. Since the problem will likely behave fairly randomly, you would get more information from trying to solve the regression problem instead of the classification problem, since you can still get good precision on a probabilistic model, even though the classification itself would be highly uncertain.
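As one concrete way to restructure the data along those lines (a sketch only, assuming the CSV really has the column names listed in the question), you could turn the raw event log into per-node features such as the failure count in a trailing window and the time since the last failure:

import pandas as pd

# Sketch only: assumes data.csv uses the columns listed in the question.
df = pd.read_csv("data.csv", parse_dates=["DATE"])
df = df.sort_values("DATE")

def node_features(events, as_of, window_days=30):
    """Summarize one node's failure history as seen at time `as_of`."""
    past = events[events["DATE"] < as_of]
    recent = past[past["DATE"] >= as_of - pd.Timedelta(days=window_days)]
    return {
        "total_failures": len(past),
        "failures_last_30d": len(recent),
        "days_since_last_failure": (
            (as_of - past["DATE"].max()).days if len(past) else None
        ),
        "most_common_failure": (
            past["What Failed"].mode().iloc[0] if len(past) else None
        ),
    }

# Example: features for one node just before its final recorded failure,
# which could serve as a training row (the final failure being the label).
node = df[df["Hardware Identifier"] == df["Hardware Identifier"].iloc[0]]
print(node_features(node, node["DATE"].iloc[-1]))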

How to scale an algorithm/service/system with multiple machines?

I had some interviews recently and it's quite normal to be asked some scale problems.
For example: you have a long list of words (a dictionary) and a list of characters as the inputs; design an algorithm to find the shortest word in the dictionary that contains all the characters in the character list. Then the interviewer asked how to scale your algorithm to multiple machines.
Another example: you have designed a traffic-light control system for an intersection in a city. How do you scale this control system to the whole city, which has many intersections?
I never have any idea how to approach these kinds of "scaling" problems; any suggestions and comments are welcome.
Your first question is completely different from your second question. In fact, the control of traffic lights in cities is a local operation: there are boxes nearby that you can tune and optical sensors on top of the lights that detect waiting cars. I guess if you need to optimize for some objective function of traffic flow, you can route information to a server process, and then it becomes a question of how to scale that server process over multiple machines.
I am no expert in the design of distributed algorithms, which spans a whole field of research. But the questions in undergrad interviews usually are not that specialized; after all, they are not interviewing a graduate student specializing in those fields. Take your first question as an example: it is quite generic indeed.
Normally these questions involve multiple data structures (several lists and hashtables) interacting (joining, iterating, etc) to solve a problem. Once you have worked out a basic solution, scaling is basically copying that solution on many machines and running them with partitions of the input at the same time. (Of course, in many cases this is difficult if not impossible, but interview questions won't be that hard)
That is, you have many identical workers splitting the input workload and work at the same time, but those workers are processes in different machines. That brings the problem of communication protocol and network latency etc, but we will ignore these to get to the basics.
The most common way to scale is let the workers hold copies of smaller data structures and have them split the larger data structures as workload. In your example (first question), the list of characters is small in size, so you would give each worker a copy of the list, and a portion of the dictionary to work on with the list. Notice that the other way around won't work, because each worker holding a dictionary will consume a large amount of memory in total, and it won't save you anything scaling up.
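For example, here is a small sketch of exactly that split, with Python's multiprocessing standing in for separate machines (the word list and character set are made up for illustration): every worker holds the full character list and scans only its slice of the dictionary, and the partial winners are combined at the end.

from multiprocessing import Pool

# Copy the small structure, partition the big one.
chars = set("aeg")
dictionary = ["algebra", "garage", "sausage", "beverage", "age", "cat"]

def shortest_match(words):
    """Find the shortest word in this partition containing all required chars."""
    matches = [w for w in words if chars <= set(w)]
    return min(matches, key=len) if matches else None

if __name__ == "__main__":
    n_workers = 3
    partitions = [dictionary[i::n_workers] for i in range(n_workers)]
    with Pool(n_workers) as pool:
        partial = pool.map(shortest_match, partitions)   # map: per-partition winners
    winners = [w for w in partial if w is not None]
    print(min(winners, key=len))                         # reduce: overall shortest -> "age"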
If your problem gets larger, you may need more layers of splitting, which also implies you need a way of combining the outputs from the workers taking in the split input. This is the general concept behind, and motivation for, the MapReduce framework and its derivatives.
Hope it helps...
For the first question - how to search for words that contain all the chars in the char list, running at the same time on different machines (not yet finding the shortest) - I would use map-reduce as the base.
First, this problem can actually run on different machines at the same time. This is because each word in the database can be checked on a different machine (to check one word, you don't have to wait for the previous word or the next word; you can literally send each word to a different computer to be checked).
Using map-reduce, you can map each word as a value and then check whether it contains every char in the char list.
// Pseudocode for the map step. Each word from the database is checked
// independently, so these map calls can run on different machines.
Map(Word) {
    // whatever is emitted here becomes the input of the Reduce step
    if (Word contains every char in the char list) {
        emit(Key, Word)   // send the word to a shared output file,
                          // managed by a framework such as Hadoop
    }
}
After this map step runs, all the words you want from the database end up in the shared file. For the reduce step, you can use a simple comparison based on word length. And tada, you get the shortest one.
As for the second question, multithreading comes to my mind. The intersections are actually problems that don't relate to each other - each intersection has its own timer, right? So to be able to handle tons of intersections, you could use multithreading.
In simple terms: use each core of the processor to control intersections, rather than looping through all intersections one by one. You can allocate intersections across cores so that the processing is faster.

How p2p search engines could prevent corruption of distributed index by malicious peers?

As a hobby I'm writing a simple and primitive distributed web search engine, and it occurred to me that it currently has no protection against malicious peers trying to skew search results.
The current architecture of the project stores the inverted index and ranking factors in a Kademlia DHT, with peers updating this inverted index as they crawl the web.
I've used Google Scholar in an attempt to find a solution, but it seems most authors of proposed p2p web search systems ignore the above-mentioned problem.
I think I need some kind of reputation system or trust metric, but my knowledge in this domain is quite lacking and I would very much appreciate a few pointers.
One way you could avoid this is to only use reliable nodes for storing and retrieving values. The reliability of a node will have to be computed by known-good nodes, and it could be something like the similarity of a node's last few computed ranking factors compared to the same ranking factors computed by known-good nodes (i.e. compare the node's scores for google.com to known-good scores for google.com). Using this approach, you'll need to avoid the "rogue reliable node" problem (for example, by using random checks or reducing all reliability scores randomly).
Another way you could approach this is to duplicate computation of ranking factors across multiple nodes, fetch all of the values at search time, and rank them on the client side (using variance, for example). You could also limit searches to sites that only have >10 duplicate values computed, so that there is some time before new sites are ranked. Additionally, any nodes with values outside of the normal range could be reported by the client in the background, and their reliability scores could be computed this way. This approach is time-consuming for the end user (unless you replicate known-good results to known-good nodes for faster lookups).
Also, take a look at this paper which describes a sybil-proof weak-trust system (which, as the authors explain, is more robust than the impossible sybil-proof strong-trust system): http://www.eecs.harvard.edu/econcs/pubs/Seuken_aamas14.pdf
The problem you are describing is the Byzantine Generals problem, or Byzantine Fault Tolerance. You can read more about it on Wikipedia, and there are plenty of papers written about it.
I don't remember the exact algorithm, but basically it's mathematically proven that for t traitors (malicious peers) you will need 3t + 1 peers in total in order to tolerate the traitors.
My general thought would be that this is a huge overhead in implementation and resource waste on the indexing side, and while there is enough research to be done in distributed indexing and distributed search, not many people are tackling it yet. Also, the problem has basically been solved with the Byzantine Generals work; it "just" needs to be implemented on top of an existing (and working) distributed search engine.
If you don't mind having a time delay on index updates, you could opt for a block-chain algorithm similar to what bitcoin uses to secure funds.
Changes to the index (deltas only!) can be represented in a text or binary file format, and crunched by peers who accept a given block of deltas. A malicious peer would have to out-compute the rest of the network for a period of time in order to skew the index in their favor.
I believe the bitcoin hashing algorithm (SHA-256) to be flawed in that custom hardware renders the common user's hardware useless. A block chain using Litecoin's algorithm (scrypt) would work well, because CPUs and GPUs are effective tools in the computation.
You would adjust the difficulty accordingly, so that new blocks are produced on a fairly regular schedule -- maybe every 2-5 minutes. A user of the search engine could possibly choose to use an index at least 30 minutes old, to guarantee that enough users in the network vouch for its contents.
more info:
https://en.bitcoin.it/wiki/Block_chain
https://en.bitcoin.it/wiki/Block_hashing_algorithm
https://litecoin.info/block_hashing_algorithm
https://www.coinpursuit.com/pages/bitcoin-altcoin-SHA-256-scrypt-mining-algorithms/

Neural network and algorithm(s), predicting future outcome from past

I am working on an algorithm where I am given some inputs and the outputs for them, and, given the outputs for roughly the last 3 months, I need a way to find/calculate what the future output might be.
This problem can be related to the stock exchange: we are given certain constraints and certain outcomes, and we need to predict the next one.
I stumbled upon neural-network stock market prediction; you can Google it, or you can read about it here, here and here.
To get started on the algorithm, I couldn't figure out what the structure of the layers should be.
The given constraint are:
The output would always be integer.
The output would always be between 1 and 100.
There is no exact input per se; just like the stock market, we just know that the price fluctuates between 1 and 100, so we might (or might not?) consider this as the only input.
We have record for last 3 months (or more).
Now, my first question is: how many nodes do I take for the input?
The output is just one, fine. But as I said, should I take 100 nodes for the input layer (given that the price will always be an integer between 1 and 100)?
What about the hidden layer? How many nodes there? Say I take 100 nodes there too; I don't think that would train the network much, because I think that for each input we need to take into account all previous inputs as well.
Say we are calculating the output for the 1st day of the 4th month; we should have 90 nodes in the hidden/middle layer (imagining each month is 30 days, for simplicity). Now there are two cases:
Our prediction was correct and outcome was same as we predicted.
Our prediction failed, and the outcome was different than what we predicted.
Whatever the case may be, when we calculate the output for the 2nd day of the 4th month, we need not only those 90 inputs but also the last actual result (not the prediction, even if they happen to be the same), so we now have 91 nodes in our middle/hidden layer.
And so on; it would keep increasing the number of nodes each day, AFAICT.
So, my other question is: how do I define/set the number of nodes in the hidden/middle layer if it is changing dynamically?
My last question is: is there any other particular algorithm out there (for this kind of thing) that I am not aware of, and that I should be using instead of messing around with this neural network stuff?
Lastly, is there anything I might be missing that might cause me (or rather the algorithm I am making) to fail at predicting the output - any caveats, or anything that might make it go wrong?
There is much to say in answer to your question. In fact, your question addresses the problem of time series forecasting in general, and the application of neural networks to this task. I'm writing here only the most important points, but after reading this you should dig into Google's results for the query time series prediction neural network. There are a lot of works where the principles are covered in detail. A variety of software implementations (with source code) also exist (here is just one example with code in C++).
1) I must say that the problem is 99% about data preprocessing and choosing correct input/output factors, and only 1% about the concrete instrument to use, whether neural networks or something else. Just as a side note, neural networks can internally implement most other data analysis methods. For example, you can use a neural network for Principal Component Analysis (PCA), which is closely related to SVD, mentioned in another answer.
2) It's very rare that input/output values strictly fit a specific region. Real-life data can be considered unbounded in absolute value (even if its movements seem to form a channel, it can break out at any moment), but a neural network can operate only under stable conditions. This is why the data is normally converted into increments first (by calculating deltas between the i-th point and the (i-1)-th, or by taking the log of their ratio). I suggest you do this with your data anyway, even though you declare that it stays inside the [0, 100] region. If you don't, the neural network will most likely degenerate into a so-called naive predictor, which produces a forecast where each next value equals the previous one.
The data is then normalized into [0, 1] or [-1, +1]. The second is appropriate for time series prediction, where +1 denotes a move up and -1 a move down. Use the hyperbolic tangent (tanh) activation function for the neurons in your net.
3) You should feed the NN with input data obtained from a sliding window over the dates. For example, if you have data for a year and every point is a day, you choose the size of the window - say, a month - and slide it day by day, from the past to the future. The day just at the right bound of the window is the target output for the NN. This is a very simple approach (there are much more complicated ones); I mention it just because you ask how to handle data that arrives continuously. The answer is: you don't need to change/enlarge your NN every day. Just use a constant structure with a fixed window size and "forget" (do not provide to the NN) the oldest point. It's important that you do not treat all the data you have as a single input, but divide it into many small vectors and train the NN on them, so the net can generalize the data and find regularities.
4) The size of the sliding window is your NN input size. The output size is 1. You should play with the hidden layer size to find the best performance. Start with a value somewhere between the input and output sizes, for example sqrt(in*out).
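As a tiny illustration of points 2) through 4) (a sketch with made-up numbers, independent of any particular NN library), the preprocessing could look like this:

import numpy as np

# Convert the raw series to increments, squash them into [-1, +1],
# then cut sliding windows so each window predicts the very next value.
prices = np.array([42.0, 43.5, 43.0, 45.0, 44.0, 46.5, 47.0, 46.0])

increments = np.diff(prices)                      # deltas between consecutive points
scale = np.max(np.abs(increments))
normalized = increments / scale                   # now inside [-1, +1], suits tanh units

window = 3                                        # NN input size; play with this
X, y = [], []
for i in range(len(normalized) - window):
    X.append(normalized[i:i + window])            # the window is the NN input...
    y.append(normalized[i + window])              # ...the next increment is the target
X, y = np.array(X), np.array(y)

print(X.shape, y.shape)   # (4, 3) (4,) -- many small training vectors, not one big input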
According to the latest research, Recurrent Neural Networks seem to perform better for time series forecasting tasks.
I agree with Stan when he says
1) I must say that the problem is 99% about data preprocessing
I've applied Neural Networks for 25+ years to various aerospace applications including helicopter flight control - setting up the input/output data set is everything - all else is secondary.
I'm amazed by smirkman's comment that Neural Networks were quickly dropped "as they produced nothing worthwhile" - that tells me that whoever was working with Neural Networks had little experience with them.
Given that the topic discusses neural network stock market prediction - I'll say that I've made it work. Test results are downloadable from my website at www.nwtai.com.
I don't give away how it was done but there's enough interesting data that should make you want to explore using Neural Networks more seriously.
This kind of problem was particularly well researched by the thousands of people who wanted to win the $1M Netflix Prize.
Earlier submissions were often based on K Nearest Neighbours. Later submissions were made using Singular Value Decomposition, Support Vector Machines and Stochastic Gradient Descent. The winner used a blend of several techniques.
Reading the excellent Community forums will give you many insights about the best methods to predict the future from the past. You'll also find loads of source code for the different methods.
Amusingly, neural networks were quickly dropped, as they produced nothing worthwhile (and I personally have yet to see a non-trivial NN produce anything of value).
If you are starting out, I'd suggest SVD as a first path; it's quite easy to make and often produces surprising insights into data.
Good luck!

Database for brute force solving board games

A few years back, researchers announced that they had completed a brute-force comprehensive solution to checkers.
I have been interested in another similar game that should have fewer states, but is still quite impractical to run a complete solver on in any reasonable time frame. I would still like to make an attempt, as even a partial solution could give valuable information.
Conceptually, I would like to have a database of game states that has every known position, as well as its succeeding positions. One or more clients can grab unexplored states from the database, calculate possible moves, and insert the new states into the database. Once an endgame state is found, all states leading up to it can be updated with the minimax information to build a decision tree. If intelligent decisions are made about which probable branches to explore, I can build information for the most important branches first, and then gradually build up to completion over time.
Ignoring the merits of this idea, or its feasibility, what is the best way to implement such a database? I made a quick prototype in SQL Server that stored a string representation of each state. It worked, but my solver client ran very, very slowly, as it pulled out one state at a time and calculated all moves. I feel like I need to do larger chunks in memory, but the search space is definitely too large to store it all in memory at once.
Is there a database system better suited to this kind of job? I will be doing many many inserts, a lot of reads (to check if states (or equivalent states) already exist), and very few updates.
Also, how can I parallelize it so that many clients can work on solving different branches without duplicating too much work. I'm thinking something along the lines of a program that checks out an assignment, generates a few million states, and submits it back to be integrated into the main database. I'm just not sure if something like that will work well, or if there is prior work on methods to do that kind of thing as well.
In order to solve a game, what you really need to know per state in your database is its game-theoretic value, i.e. whether it is a win for the player whose turn it is to move, a loss, or a forced draw. You need two bits to encode this information per state.
You then find as compact an encoding as possible for the set of game states for which you want to build your end-game database; let's say your encoding takes 20 bits. It's then enough to have an array of 2^21 bits on your hard disk, i.e. 2^18 bytes. When you analyze an end-game position, you first check if the corresponding value is already set in the database; if not, calculate all its successors, calculate their game-theoretic values recursively, and then calculate, using min/max, the game-theoretic value of the original node and store it in the database. (Note: if you store win/loss/draw data in two bits, you have one bit pattern left to denote 'not known'; e.g. 00 = not known, 11 = draw, 10 = player to move wins, 01 = player to move loses.)
For example, consider tic-tac-toe. There are nine squares; each one can be empty, "X" or "O". This naive analysis gives you 3^9 = 2^14.26, i.e. 15 bits per state, so you would have an array of 2^16 bits.
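In code, the packed 2-bits-per-state table might look like the following Python sketch (using tic-tac-toe's naive 3^9 encoding; the game rules and the recursive min/max fill are only indicated in comments):

import array

# Flat end-game table with 2 bits per state.
# Codes: 0 = not known, 1 = player to move loses, 2 = player to move wins, 3 = draw.
NUM_STATES = 3 ** 9                      # naive tic-tac-toe encoding: 9 ternary squares
table = array.array("B", bytes((NUM_STATES * 2 + 7) // 8))

def get_value(state):
    byte, shift = divmod(state * 2, 8)
    return (table[byte] >> shift) & 0b11

def set_value(state, value):
    byte, shift = divmod(state * 2, 8)
    table[byte] = (table[byte] & ~(0b11 << shift) & 0xFF) | (value << shift)

# The solver would then be: if get_value(state) is 0, generate the successors,
# recurse on each, combine their values with min/max, and set_value the result.
set_value(12345, 2)                          # mark state 12345 as a win for the mover
print(get_value(12345), get_value(12346))    # -> 2 0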
You undoubtedly want a task queue service of some sort, such as RabbitMQ - probably in conjunction with a database which can store the data once you've calculated it. Alternately, you could use a hosted service like Amazon's SQS. The client would consume an item from the queue, generate the successors, and enqueue those, as well as adding the outcome of the item it just consumed to the queue. If the state is an end-state, it can propagate scoring information up to parent elements by consulting the database.
Two caveats to bear in mind:
The number of items in the queue will likely grow exponentially as you explore the tree, with each work item causing several more to be enqueued. Be prepared for a very long queue.
Depending on your game, it may be possible for there to be multiple paths to the same game state. You'll need to check for and eliminate duplicates, and your database will need to be structured so that it's a graph (possibly with cycles!), not a tree.
The first thing that popped into my mind is a Linda-style shared 'whiteboard', where different processes can consume 'problems' off the whiteboard, add new problems to the whiteboard, and add 'solutions' to the whiteboard.
Perhaps the Cassandra project is the more modern version of Linda.
There have been many attempts to parallelize problems across distributed computer systems; Folding@Home provides a framework that executes binary blob 'cores' to solve protein-folding problems. Distributed.net might have started the modern incarnation of distributed problem solving, and might have clients that you can start from.
