Hadoop for password cracking - hadoop

Hi, I came across this article and it made me wonder how easy it would be for a hacker to crack passwords. What do you guys think?

If you want to try out several permutations in a brute-force manner, I don't think that using Hadoop would give you any benefit. Hadoop does not fit every use case and will not always perform well.
Computing permutations can be done in batch: just set different start and end parameters for each machine. The overhead involved in setting up a job, moving data across nodes, and cleaning up the job can then be avoided entirely. I have seen that running separate processes over 5 nodes, pre-dividing the load equally, performed quite well compared to map-reduce. Of course, I don't mean that map-reduce is bad; it's just that this scenario wasn't the right fit for it.
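To illustrate the "different start and end parameters" idea, here is a minimal single-file sketch of splitting one large enumeration of candidate strings into per-worker index ranges, so each machine processes only its own slice. The alphabet, length, and worker count are made-up values, not anything from the question:

```python
# Sketch only: split the index space of all length-n strings over an alphabet
# into equal ranges, one per worker. Each worker enumerates only its slice.
from itertools import product, islice

ALPHABET = "abc"      # made-up alphabet
LENGTH = 4            # made-up candidate length
WORKERS = 5
TOTAL = len(ALPHABET) ** LENGTH

def worker_range(worker_id, workers=WORKERS, total=TOTAL):
    """Return the [start, end) index range handled by one worker."""
    chunk = (total + workers - 1) // workers
    start = worker_id * chunk
    return start, min(start + chunk, total)

def candidates(start, end):
    """Yield the candidate strings with indices in [start, end)."""
    gen = ("".join(p) for p in product(ALPHABET, repeat=LENGTH))
    # islice simply skips ahead; a real implementation would decode the
    # start index directly instead of iterating past it.
    return islice(gen, start, end)

# Example: the slice that worker 2 would process on its own machine.
start, end = worker_range(2)
for s in candidates(start, end):
    pass  # each worker applies its own check here
```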

I found this Recursive Algorithm on Distributed Systems article an interesting way to run recursive algorithms on a distributed system. Permutation and combination algorithms can then be used to do some interesting things.

Related

Need advice about an algorithm to solve a quite specific Job Shop Scheduling Problem

Job Shop Scheduling Problem (JSSP): I have jobs that consist of tasks, and I have machines that can perform these tasks.
I should be able to add new jobs dynamically. E.g. I have a schedule for the first 5 jobs, and when the 6th arrives, I need to be able to fit it into the schedule in the best way. It is possible to adjust the existing schedule within the given flexibility constraints.
Look at the picture below.
Jobs consist of tasks, and each task is the same type of action. Think about painting some objects with a paint spray. All the machines are the same (paint sprays), and all of the tasks are the same.
Constraint 1. Jobs have a preferred deadline for completion, but the deadline is flexible to some extent.
Edit after #tucuxi answer: A flexible deadline means that the time of completion can be extended by some delta if necessary.
Constraint 2. Between the jobs there is a resting phase. Think about the paint drying. The resting phase has a minimal required duration. The resting phase can be longer or shorter if necessary.
Edit after #tucuxi answer: So there is a planned rest time Tp, which is a desired but flexible value that can be increased or decreased if this allows for better scheduling, and there is a minimal rest time Tm. So Tp - Tadjustment >= Tm.
The machine is occupied by the job from the start to the completion.
Here goes parts that make this problem very distinct from what I have read about.
Jobs arrive in batches of several jobs. For example, a batch can contain 10 jobs of type Job_1 and 5 of Job_2. Different batches can contain different types of jobs. All the jobs from a batch should be finished as close to each other as possible. Not necessarily at the same time, but we need to minimize the delay between the completion of the first and last jobs of the batch.
Constraint 3. Machines are grouped. In each group, only M machines can work simultaneously. Think about paint sprays that are connected to a common pressurizer with limited capacity.
The goal.
Given this description of the problem, it should be possible to solve the JSSP. It should also be possible to add new jobs to an existing schedule.
Edit after #tucuxi answer: This is not a task that must be solved immediately: it is not a time-critical system. But it shouldn't take so long that it irritates the human who feeds new tasks into the algorithm.
Question
Which of the many JSSP algorithms can help me solve this? I can implement an algorithm myself, if there is one. The closest I found is this: the Resource-Constrained Project Scheduling Problem. But I was not able to work out how to connect it to a JSSP-solving algorithm.
Edit after #tucuxi answer: No, I haven't tried it yet.
Are there any libraries that can be used to solve this problem? Python or C# are the preferred languages, but in the end it doesn't really matter.
I appreciate any help: keyword to search for, link, reference to a book, reference to a library.
Thank you.
I doubt that there is a pre-made algorithm that solves your exact problem.
If I had to solve it, I would first:
compile datasets of inputs that I can feed into candidate solvers.
think of a metric to rank outputs, so that I can compare the candidates to see which is better.
A baseline solver could be a brute-force search: test and rate all possible job schedulings for small sample problems. This is of course infeasible for large inputs, but for small inputs it allows you to compare the outputs of more efficient solvers to a known-best answer.
Your link is to localsolver.com, which appears to provide a library for specifying problem constraints and then solving them. It is not freely available and requires a license to use, but it would seem that your problem can be readily modeled in it. Have you tried to do so? They appear to support both C++ and Python. Free options also exist, including OptaPlanner (2.8k stars on GitHub) and python-constraint (I have not looked into other languages).
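As a rough illustration of what modeling in python-constraint looks like, here is a feasibility-only toy sketch: it assigns each job a (machine, start) pair so that jobs on the same machine do not overlap and each job meets its deadline. The jobs, durations, and deadlines are made up, and it does not capture your batching, rest, or pressurizer constraints:

```python
# Toy python-constraint model: one variable per job, whose value is a
# (machine, start_slot) pair; pairwise constraints forbid overlaps.
from constraint import Problem

durations = {"J1": 2, "J2": 3, "J3": 2}   # made-up durations (time slots)
deadlines = {"J1": 6, "J2": 8, "J3": 8}   # made-up latest finish slots
machines = ["M1", "M2"]
horizon = 8

problem = Problem()
for job, dur in durations.items():
    domain = [(m, s) for m in machines
              for s in range(horizon - dur + 1)
              if s + dur <= deadlines[job]]
    problem.addVariable(job, domain)

def no_overlap(da, db):
    # Build a pairwise constraint: same machine implies disjoint intervals.
    def check(a, b):
        (ma, sa), (mb, sb) = a, b
        return ma != mb or sa + da <= sb or sb + db <= sa
    return check

names = list(durations)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        problem.addConstraint(no_overlap(durations[names[i]], durations[names[j]]),
                              (names[i], names[j]))

print(problem.getSolution())  # one feasible assignment, or None
```

A real model would add an objective (see the cost-function sketch further down) rather than stopping at feasibility.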
Note that a good metric is crucial to choosing a good algorithm: unless you have a clear cost function to minimize, choosing "a good algorithm" is impossible. In your description of the problem, I see several places where the cost is unclear (listed below; a sketch of a possible cost function follows the list):
job deadlines are flexible
minimal required rest times... which may be shortened
jobs from a batch should be finished as close together as possible
(not from specification): how long can you wait for an optimal vs a less-optimal-but-faster solution?
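To make that concrete, here is one possible shape for such a cost function. The weights and field names are made up; you would have to pick values that reflect your real priorities:

```python
# Sketch of a cost function combining the three "soft" terms above.
# All weights are placeholders; tune them to reflect real priorities.
W_DEADLINE = 10.0   # penalty per time unit a job finishes past its deadline
W_REST     = 50.0   # penalty per time unit of rest shortened below the plan
W_BATCH    = 5.0    # penalty per time unit between first and last job of a batch

def schedule_cost(jobs, batches):
    """jobs: list of dicts with finish, deadline, planned_rest, actual_rest.
    batches: list of lists of indices into jobs."""
    cost = 0.0
    for j in jobs:
        cost += W_DEADLINE * max(0, j["finish"] - j["deadline"])
        cost += W_REST * max(0, j["planned_rest"] - j["actual_rest"])
    for batch in batches:
        finishes = [jobs[i]["finish"] for i in batch]
        cost += W_BATCH * (max(finishes) - min(finishes))
    return cost  # lower is better; lets you rank candidate schedules
```

With a function like this in hand, the brute-force baseline above becomes "enumerate small schedules, keep the one with the lowest cost", and any solver's output can be scored the same way.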

Why we say map-reduce solves "Paper reference" problems better than traditional methods?

It's said that when we wish to do statistics on paper references, map-reduce can do much better than traditional approaches, as the traditional approaches involve a lot of memory/disk switching. I don't quite see why the traditional approaches are not good.
Suppose I run map-reduce on just one machine (no cluster): does it still solve some problems better than traditional approaches?
Or, in other words, does the "map-reduce" algorithm paradigm itself have some advantages in solving problems, from an algorithmic point of view?
Thanks.
At best, M/R allows re-applying the same algorithms as the advanced stats packages. But more typically some sacrifices are made in the algorithms used in order to allow running in a distributed fashion. Map/Reduce provides no "magic" in terms of, say, providing a more uniformly randomized distribution during cross-fold sampling (or any other sampling methodology).
For a small dataset that fits in memory, M/R is usually worse than your traditional packages, due to compromises made in the algorithm for scalability. You start to see an advantage to M/R when using large datasets that are prohibitive to fully sample on a single machine. Using R / Matlab / SAS would typically require down-sampling, possibly by orders of magnitude.
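To make the paradigm itself concrete, here is a single-machine toy sketch of the map/reduce pattern applied to counting paper references. The data and names are made up; the point is only that the per-record map work and the per-key reduce work are independent, which is what makes them easy to spread across machines:

```python
# Toy map/reduce on one machine: count how often each paper is cited.
from collections import defaultdict

# Made-up input: (citing_paper, cited_paper) pairs.
citations = [("A", "C"), ("B", "C"), ("B", "D"), ("E", "C")]

def mapper(record):
    citing, cited = record
    yield cited, 1                     # emit (key, value) pairs

def reducer(key, values):
    return key, sum(values)            # combine all values for one key

# "Shuffle": group mapper output by key (a real framework does this for you).
grouped = defaultdict(list)
for record in citations:
    for key, value in mapper(record):
        grouped[key].append(value)

print(dict(reducer(k, vs) for k, vs in grouped.items()))
# {'C': 3, 'D': 1}
```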

Distributed algorithm design

I've been reading Introduction to Algorithms and started to get a few ideas and questions popping up in my head. The one that's baffled me most is how you would approach designing an algorithm to schedule items/messages in a queue that is distributed.
My thoughts have led me to browse Wikipedia on topics such as sorting, message queues, scheduling, and distributed hash tables, to name a few.
The scenario:
Say you wanted to have a system that queued messages (strings or some serialized objects, for example). A key feature of this system is to avoid any single point of failure. The system has to be distributed across multiple nodes within some cluster and has to balance the workload of each node within the cluster as evenly as possible to avoid hotspots.
You want to avoid the use of a master/slave design for replication and scaling (no single point of failure). The system totally avoids writing to disk and maintains in-memory data structures.
Since this is meant to be a queue of some sort, the system should be able to use varying scheduling algorithms (FIFO, earliest deadline, round robin, etc.) to determine which message should be returned on the next request, regardless of which node in the cluster the request is made to.
My initial thoughts
I can imagine how this would work on a single machine, but when I start thinking about how you'd distribute something like this, questions arise such as:
How would I hash each message?
How would I know which node a message was sent to?
How would I schedule each item so that I can determine which message and from which node should be returned next?
I started reading about distributed hash tables and how projects like Apache Cassandra use some sort of consistent hashing to distribute data, but then I thought: since the query won't supply a key, I need to know where the next item is and just supply it...
This led me into reading about peer-to-peer protocols and how they approach the synchronization problem across nodes.
So my question is, how would you approach a problem like the one described above, or is this too far-fetched and simply a stupid idea?
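For reference, the consistent hashing mentioned above can be sketched in a few lines. This is a deliberately simplified ring (no virtual nodes, no replication), and the node names are made up:

```python
# Minimal consistent-hash ring: map a message key to the node "clockwise"
# from its hash. Adding or removing a node only remaps neighbouring keys.
import bisect
import hashlib

def _hash(value):
    return int(hashlib.md5(value.encode()).hexdigest(), 16)

class HashRing:
    def __init__(self, nodes):
        self._ring = sorted((_hash(n), n) for n in nodes)

    def node_for(self, key):
        keys = [h for h, _ in self._ring]
        i = bisect.bisect(keys, _hash(key)) % len(self._ring)
        return self._ring[i][1]

ring = HashRing(["node-1", "node-2", "node-3"])
print(ring.node_for("message-42"))   # the node responsible for this message
```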
Just an overview, pointers,different approaches, pitfalls and benefits of each. The technologies/concepts/design/theory that may be appropriate. Basically anything that could be of use in understanding how something like this may work.
And if you're wondering, no, I'm not intending to implement anything like this; it just popped into my head while reading (it happens, I get distracted by wild ideas when I read a good book).
UPDATE
Another interesting point that would become an issue is distributed deletes. I know systems like Cassandra have tackled this by implementing hinted handoff, read repair and anti-entropy, and it seems to work well, but are there any other (viable and efficient) means of tackling this?
Overview, as you wanted
There are some popular techniques for distributed algorithms, e.g. using clocks, waves or general purpose routing algorithms.
You can find these in the great distributed algorithm books Introduction to Distributed Algorithms by Tel and Distributed Algorithms by Lynch.
Reductions are particularly useful, since general distributed algorithms can become quite complex. You might be able to use a reduction to a simpler, more specific case.
If, for instance, you want to avoid having a single point of failure, but a symmetric distributed algorithm is too complex, you can use the standard distributed algorithm of (leader) election and afterwards use a simpler asymmetric algorithm, i.e. one which can make use of a master.
Similarly, you can use synchronizers to transform a synchronous network model to an asynchronous one.
You can use snapshots to be able to analyze offline instead of having to deal with varying online process states.
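As a small illustration of the election reduction mentioned above, here is a single-process simulation of a Chang-Roberts-style ring election. Real implementations exchange these messages over the network; the identifiers here are made up:

```python
# Simulated Chang-Roberts election on a unidirectional ring: the node with
# the highest id becomes leader; afterwards a simpler master-based algorithm
# can take over (the "reduction" described above).
def ring_election(ids):
    n = len(ids)
    messages = list(ids)              # each node starts by sending its own id
    leader = None
    while leader is None:
        next_messages = [None] * n
        for i, msg in enumerate(messages):
            if msg is None:
                continue
            receiver = (i + 1) % n    # node i sends clockwise to node i+1
            if msg == ids[receiver]:
                leader = msg          # my own id came back: I am the leader
            elif msg > ids[receiver]:
                next_messages[receiver] = msg   # forward larger ids
            # smaller ids are discarded
        messages = next_messages
    return leader

print(ring_election([17, 42, 5, 23]))   # -> 42
```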

What type of problems can mapreduce solve?

Is there a theoretical analysis available which describes what kind of problems mapreduce can solve?
In Map-Reduce for Machine Learning on Multicore Chu et al describe "algorithms that fit the Statistical Query model can be written in a certain “summation form,” which allows them to be easily parallelized on multicore computers." They specifically implement 10 algorithms including e.g. weighted linear regression, k-Means, Naive Bayes, and SVM, using a map-reduce framework.
The Apache Mahout project has released a recent Hadoop (Java) implementation of some methods based on the ideas from this paper.
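For intuition about the "summation form" from that paper, here is a toy sketch in the spirit of their linear-regression example (plain least squares with made-up data): each mapper computes partial sums of X^T X and X^T y over its chunk, and the reducer adds them up and solves, which is why the computation parallelizes so cleanly:

```python
# Toy "summation form": the sufficient statistics X^T X and X^T y decompose
# into per-chunk sums, so mappers work independently and the reducer just adds.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(scale=0.1, size=1000)

def map_chunk(X_chunk, y_chunk):
    return X_chunk.T @ X_chunk, X_chunk.T @ y_chunk   # partial sums

def reduce_stats(partials):
    xtx = sum(p[0] for p in partials)
    xty = sum(p[1] for p in partials)
    return np.linalg.solve(xtx, xty)                  # final coefficients

chunks = np.array_split(np.arange(len(y)), 4)         # pretend these are 4 machines
partials = [map_chunk(X[idx], y[idx]) for idx in chunks]
print(reduce_stats(partials))   # approximately [2, -1, 0.5]
```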
For problems requiring processing and generating large data sets. Say, running an interest-calculation query over all accounts a bank holds, or processing audit data for all transactions that happened in the past year at a bank. The best-known use case is from Google: generating the search index for the Google search engine.
Many problems that are "Embarrassingly Parallel" (great phrase!) can use MapReduce. http://en.wikipedia.org/wiki/Embarrassingly_parallel
From this article....
http://www.businessweek.com/magazine/content/07_52/b4064048925836.htm
...
Doug Cutting, founder of Hadoop (an open source implementation of MapReduce) says...
“Facebook uses Hadoop to analyze user behavior and the effectiveness of ads on the site"
and... “the tech team at The New York Times rented computing power on Amazon’s cloud and used Hadoop to convert 11 million archived articles, dating back to 1851, to digital and searchable documents. They turned around in a single day a job that otherwise would have taken months.”
Anything that involves doing operations on a large set of data, where the problem can be broken down into smaller independent sub-problems whose results can then be aggregated to produce the answer to the larger problem.
A trivial example would be calculating the sum of a huge set of numbers. You split the set into smaller sets, calculate the sums of those smaller sets in parallel (which can involve splitting those into yet smaller sets), then sum those results to reach the final answer.
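In code, that trivial example might look like the following sketch (run here on one machine, with the "parallel" partial sums just done in a loop):

```python
# Sketch of the trivial distributed-sum example: partial sums per chunk,
# then one final reduction over the partial results.
numbers = list(range(1, 1_000_001))
chunk = 100_000
partial_sums = [sum(numbers[i:i + chunk]) for i in range(0, len(numbers), chunk)]
print(sum(partial_sums))   # same result as sum(numbers)
```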
The answer really lies in the name of the algorithm. MapReduce is not a general-purpose parallel programming or batch execution framework, as some of the answers suggest. MapReduce is really useful when you have large data sets that need to be processed to derive certain attributes (the map phase), and those derived attributes then need to be summarized (the reduce phase).
You can also watch the videos at Google; I'm watching them myself and I find them very educational.
Sort of a hello world introduction to MapReduce
http://blog.diskodev.com/parallel-processing-using-the-map-reduce-prog
This question was asked before its time. Since 2009 there has actually been a theoretical analysis of MapReduce computations. This 2010 paper by Howard Karloff et al. formalizes MapReduce as a complexity class in the same way that theoreticians study P and NP. They prove some relationships between MapReduce and a class called NC (which can be thought of either as shared-memory parallel machines or as a certain class of restricted circuits). But the main piece of work is their formal definitions.

What is this algorithm called?

I'm trying to look up this problem but I don't know what it's called. The premise is this:
Given m machines and j jobs, where each job can only be assigned to machines i through j, I need to assign the jobs to machines so that I maximize busy machines at one time. I am only concerned with how they are assigned at time 0. I am not concerned with how I would schedule remaining jobs after a job is completed.
Once a job and a machine are assigned to each other, no other job or machine can act on either member.
Scheduling algorithm
As others said, what you described is a problem, not an algorithm. There are many techniques you could use to solve your problem. Which one you should choose depends on your needs. If you need the optimal solution, you must use a technique called integer programming. If you want a very good solution, not necessarily the optimal one, there are many heuristics you could use.
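Since the question only asks about maximizing the number of busy machines at time 0, one hedged sketch (not necessarily what the asker needs, but a standard technique) is to treat it as maximum bipartite matching between jobs and the machines each job is allowed to use. The toy instance below is made up:

```python
# Maximum bipartite matching between jobs and allowed machines via augmenting
# paths (Kuhn's algorithm). The matching size is the number of machines that
# can be busy at time 0.
def max_busy_machines(allowed, num_machines):
    """allowed[j] = list of machine indices job j may run on."""
    match = [-1] * num_machines          # machine -> job, or -1 if free

    def try_assign(job, seen):
        for m in allowed[job]:
            if not seen[m]:
                seen[m] = True
                if match[m] == -1 or try_assign(match[m], seen):
                    match[m] = job
                    return True
        return False

    busy = 0
    for job in range(len(allowed)):
        if try_assign(job, [False] * num_machines):
            busy += 1
    return busy, match

# Toy instance: 4 jobs, 3 machines; allowed[j] is each job's machine range.
busy, assignment = max_busy_machines([[0, 1], [1], [1, 2], [2]], 3)
print(busy, assignment)   # 3 machines busy, machine -> job assignment
```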
As others have said, you are basically writing a 'scheduler'.
As your 'j' jobs seem to have equal priority, maybe you are looking at a round-robin (time-sliced) scheduling algorithm.
The problem is a variant of the bin packing problem, which has a wider variety of literature than processor scheduling.
Typical real-world OS multi-processor scheduling algorithms don't operate with knowledge of how long the jobs will take, and have to account for other issues such as memory affinity and trading off the scheduler's complexity against the benefit of better scheduling.
I have encountered this kind of problem in modular avionics systems, where you are apportioning jobs to nodes and you do know the expected timing and memory requirements for the jobs prior to their execution.
Sounds like a scheduler.
As other have said, it's a scheduler.
It's also a classic problem used to demonstrate object-oriented development, and in particular it used to be a very common sample application for Smalltalk programming.
