Binary tree level order traversal, left and right view - binary-tree

I am working on an assignment that seems very difficult for me; if any expert programmer from Stack Overflow can help, I will be thankful. (The assignment is from the University of California, Davis.)
https://www.chegg.com/homework-help/questions-and-answers/datastructurehpp-change-file-include-include-include-using-namespace-std-typedef-struct-tr-q68704743

Related

Does it have a mature algorithm or theory for a scenario about best distribution? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 6 years ago.
Scenario:
We have 10 kinds of toys, and every kind includes 10 toys.
We will distribute the toys to 100 children. Every child has a different degree of satisfaction for each of the 10 kinds. Tip: in the real project we will have 300,000+ children's records in my database.
My question is: how do we measure and define the best solution for the distribution?
And how do we get the result? Please give me a hint.
Some friends suggested I try the KM (Kuhn–Munkres) algorithm, but I'm not sure it will work for me.
Thanks.
This problem is hard because you haven't decided what you want to optimize, and because many optimization methods will be expensive to run if you have 300K children - or customers - to worry about.
What do you want to optimize? If you try to optimize the sum of some set of per-child satisfaction scores, can you really compare the subjective satisfaction of two different children, let alone add them up to produce anything sensible? If you decide on such a system, can you prove that it cannot be distorted by children who decide to lie about their satisfactions, for instance saying that they will be devastated if they don't get one particular toy?
What if somebody decides that the sum of satisfaction scores isn't the right metric, but instead that you should minimize the dis-satisfaction of the most dis-satisfied child?
What if somebody decides that inequality is the real problem, so if there is one very happy child, you should take away their toy and give it to somebody else to minimize the difference in satisfaction between the most and least satisfied child?
What if somebody decides that some children count more than other children, because of something their great-grandparents did, or didn't do?
Just to not be completely negative, here is a cheap scheme, and an attempt to prove a property about it. Put the children in random order and allocate the toys as if each child were to choose according to their preferences in this order - so each child would get the toy they most preferred according to the toys left when they came to choose.
One property you might want for a method of choosing is that, after the toys were distributed, children wouldn't find that they could trade toys amongst themselves to produce a better distribution, making you look silly (i.e., not a Pareto-optimal solution). Suppose that such a pattern of trades was possible among the children in this scheme. Consider the trading child who came first among these children in the initial randomization. They chose the toy they wanted most from all those available, so there is in fact nothing the other trading children could offer them that they would prefer. So this scheme is at least not vulnerable to later trades.
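The random-order scheme described above can be sketched as follows. The data shapes and names here are my own illustration, not from the question; it assumes each child supplies a preference-ordered list of toy kinds.

```python
import random

def serial_dictatorship(prefs, supply, seed=None):
    """Random serial dictatorship: shuffle the children, then let each
    in turn take their most-preferred toy kind that is still in stock.

    prefs:  {child: [toy kinds, most preferred first]}
    supply: {toy kind: count available}
    Returns {child: toy kind}, with None if nothing acceptable is left.
    """
    supply = dict(supply)                 # don't mutate the caller's copy
    order = list(prefs)
    random.Random(seed).shuffle(order)    # the random order is the whole trick
    allocation = {}
    for child in order:
        choice = next((t for t in prefs[child] if supply.get(t, 0) > 0), None)
        allocation[child] = choice
        if choice is not None:
            supply[choice] -= 1
    return allocation
```

Because each child takes a strict best-available choice, no later trade can improve the earliest-trading child's outcome, which is the property argued above.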

Recommendations for using graphs theory in machine learning? [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 6 years ago.
I have been learning a lot about using graphs for machine learning by watching Christopher Bishop's videos ( http://videolectures.net/mlss04_bishop_gmvm/ ). I find it very interesting and have watched a few others in the same category (machine learning/graphs), but I was wondering if anyone had recommendations for ways of learning more?
My problem is, although the videos gave a great high-level understanding, I don't have much practical skill in it yet. I've read Bishop's book on machine learning/pattern recognition as well as Norvig's AI book, but neither seems to touch much on specifically using graphs. With the emergence of search engines and social networking, I would think machine learning on graphs would be popular.
If possible, can anyone suggest a resource to learn from? (I'm new to this field and development is a hobby for me, so I'm sorry in advance if there's a super obvious resource to learn from; I tried Google and university sites.)
Thanks in advance!
First, I would strongly recommend the book Social Network Analysis for Startups by Maksim Tsvetovat and Alexander Kouznetsov. A book like this is a godsend for programmers who need to quickly acquire a basic fluency in a specific discipline (in this case, graph theory) so that they can begin writing code to solve problems in this domain. Both authors are academically trained graph theoreticians, but the intended audience of their book is programmers. Nearly all of the numerous examples presented in the book are in Python, using the networkx library.
Second, for the projects you have in mind, two kinds of libraries are very helpful if not indispensable:
graph analysis: e.g., the excellent networkx (Python) or igraph (Python, R, et al.) are two that I can recommend highly; and
graph rendering: the excellent Graphviz, which can be used stand-alone from the command line, but more likely you will want to use it as a library; there are Graphviz bindings in all major languages (e.g., for Python there are at least three I know of, though pygraphviz is my preference; for R there is Rgraphviz, which is part of the Bioconductor package suite). Rgraphviz has excellent documentation (see in particular the vignette included with the package).
It is very easy to install and begin experimenting with these libraries, and in particular to use them:
to learn the essential graph-theoretic lexicon and units of analysis (e.g., degree sequence distribution, node traversal, graph operators);
to distinguish critical nodes in a graph (e.g., degree centrality, eigenvector centrality, assortativity); and
to identify prototypical graph substructures (e.g., bipartite structure, triangles, cycles, cliques, clusters, communities, and cores).
The value of using a graph-analysis library to quickly understand these essential elements of graph theory is that, for the most part, there is a 1:1 mapping between the concepts I just mentioned and functions in the (networkx or igraph) library.
So, e.g., you can quickly generate two random graphs of equal size (number of nodes), render and then view them, then easily calculate, for instance, the average degree or the betweenness centrality for both, and observe first-hand how changes in those parameters affect the structure of a graph.
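That experiment can be sketched in plain Python as below; with networkx installed, `nx.gnp_random_graph(n, p)` and `G.degree` would do the same in one call each (the edge-set representation here is my own shortcut, not the library's):

```python
import random

def gnp_random_graph(n, p, seed=None):
    """Erdos-Renyi G(n, p): each of the n*(n-1)/2 possible edges is
    present independently with probability p."""
    rng = random.Random(seed)
    return {(i, j) for i in range(n) for j in range(i + 1, n)
            if rng.random() < p}

def degree_sequence(n, edges):
    """Sorted degree sequence of the graph (each edge adds 1 to both ends)."""
    deg = [0] * n
    for i, j in edges:
        deg[i] += 1
        deg[j] += 1
    return sorted(deg, reverse=True)

# two random graphs of equal size but different edge density
sparse = gnp_random_graph(20, 0.1, seed=1)
dense = gnp_random_graph(20, 0.5, seed=1)
avg = lambda seq: sum(seq) / len(seq)
```

Comparing `avg(degree_sequence(20, sparse))` against the dense graph shows first-hand how the density parameter reshapes the degree sequence.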
W/r/t the combination of ML and graph-theoretic techniques, here's my limited personal experience. I use ML in my day-to-day work, graph theory less often, and rarely the two together. This is just an empirical observation limited to my personal experience; I simply haven't found a problem in which it seemed natural to combine techniques from these two domains. Most often, graph-theoretic analysis is useful in ML's blind spot: the availability of a substantial amount of labeled training data, on which supervised ML techniques depend heavily.
One class of problems that illustrates this point is online fraud detection/prediction. It's almost never possible to gather data (e.g., sets of online transactions attributed to a particular user) that you can, with reasonable certainty, separate and label as "fraudulent account." If the fraudsters were particularly clever and effective, you will mislabel their accounts as "legitimate"; and for accounts where fraud was merely suspected, the first-level measures (e.g., additional ID verification or an increased waiting period before cash-out) are often enough to cause them to cease further activity, which would otherwise have allowed a definite classification. Finally, even if you somehow manage to gather a reasonably noise-free data set for training your ML algorithm, it will certainly be seriously unbalanced (i.e., many more "legitimate" than "fraud" data points); this problem can be managed with statistical pre-processing (resampling) and algorithm tuning (weighting), but it is still likely to degrade the quality of your results.
So while I have never been able to successfully use ML techniques for these types of problems, in at least two instances I have used graph theory with some success; in the most recent instance, by applying a model adapted from a project by a group at Carnegie Mellon initially directed at detecting online auction fraud on eBay.
MacArthur Genius Grant recipient and Stanford Professor Daphne Koller co-authored a definitive textbook on Bayesian networks entitled Probabilistic Graphical Models, which contains a rigorous introduction to graph theory as applied to AI. It may not exactly match what you're looking for, but in its field it is very highly regarded.
You can attend free online classes at Stanford for machine learning and artificial intelligence:
https://www.ai-class.com/
http://www.ml-class.org/
The classes are not simply focused on graph theory, but include a broader introduction in the field and they will give you a good idea of how and when you should apply which algorithm. I understand that you've read the introductory books on AI and ML, but I think that the online classes will provide you with a lot of exercises that you can try.
Although this is not an exact match to what you are looking for, textgraphs is a workshop that focuses on the link between graph theory and natural language processing. Here is a link. I believe the workshop also generated this book.

Al Zimmermann's Son of Darts

There's about 2 months left in Al Zimmermann's Son of Darts programming contest, and I'd like to improve my standing (currently in the 60s) to something more respectable. I'd like to get some ideas from the great community of stackoverflow on how best to approach this problem.
The contest problem is known as the Global Postage Stamp Problem in the literature. I don't have much experience with optimization algorithms (I know of hill climbing and simulated annealing in concept only, from college), and in fact the program that I have right now is basically sheer brute force, which of course isn't feasible for the larger search spaces.
Here are some papers on the subject:
A Postage Stamp Problem (Alter & Barnett, 1980)
Algorithms for Computing the h-Range of the Postage Stamp Problem (Mossige, 1981)
A Postage Stamp Problem (Lunnon, 1986)
Two New Techniques for Computing Extremal h-bases Ak (Challis, 1992)
Any hints and suggestions are welcome. Also, feel free to direct me to the proper site if stackoverflow isn't it.
I am not familiar with the problem. But maybe you could do things like branch & bound to get rid of the brute force aspect.
http://en.wikipedia.org/wiki/Branch_and_bound
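To make the branch-and-bound suggestion concrete, here is a minimal sketch for the postage stamp problem: find k denominations (always including 1) that maximize the h-range, i.e., the largest n such that every value 1..n is a sum of at most h stamps. The pruning bound used (the next denomination can be at most the current range + 1, or a gap appears) is the natural one, not taken from the papers cited above, and this toy search is nowhere near contest-competitive.

```python
def h_range(denoms, h):
    """Largest n such that every value 1..n is a sum of at most h stamps
    drawn, with repetition, from denoms."""
    cap = h * max(denoms)
    INF = float("inf")
    best = [INF] * (cap + 1)        # fewest stamps needed for each value
    best[0] = 0
    for v in range(1, cap + 1):
        for d in denoms:
            if d <= v and best[v - d] + 1 < best[v]:
                best[v] = best[v - d] + 1
    n = 0
    while n < cap and best[n + 1] <= h:
        n += 1
    return n

def best_basis(k, h):
    """Search all k-denomination bases, pruning branches that would leave
    a value unreachable. Returns (range, denominations)."""
    best = (0, None)
    def search(denoms):
        nonlocal best
        r = h_range(denoms, h)
        if len(denoms) == k:
            if r > best[0]:
                best = (r, tuple(denoms))
            return
        # bound: any denomination beyond r+1 leaves value r+1 unreachable
        for d in range(denoms[-1] + 1, r + 2):
            search(denoms + [d])
    search([1])                     # 1 must always be a denomination
    return best
```

For example, with k=2 denominations and h=3 stamps, the basis {1, 3} covers every value 1..7, which no other pair beats.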

being able to solve google code jam problem sets [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 1 year ago.
This is not a homework question, but rather my attempt to know if this is what it takes to learn programming. I keep logging in to TopCoder, not to actually participate but to get a basic understanding of how the problems are solved. But honestly, I don't understand what the problem is or how to translate it into an algorithm that can solve it.
Just now I happened to look at the ACM ICPC 2010 World Finals, which is being held in China. The teams were given problem sets, and one of them is this:
Given at most 100 points on a plane with distinct x-coordinates, find the shortest cycle that passes through each point exactly once, goes from the leftmost point always to the right until it reaches the rightmost point, then goes always to the left until it gets back to the leftmost point. Additionally, two points are given such that the path from left to right contains the first point, and the path from right to left contains the second point.
This seems to be a very simple DP: after processing the last k points, with the first path ending in point a and the second path ending in point b, what is the smallest total length to achieve that? This is O(n^2) states, with transitions in O(n). We deal with the two special points by forcing the first path to contain the first one, and the second path to contain the second one.
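Ignoring the two forced points (handling them only adds bookkeeping to the states), the DP the quote describes is the classic "bitonic tour" recurrence, which might be sketched like this:

```python
import math

def bitonic_tour(points):
    """Shortest cycle that visits every point, going strictly left-to-right
    and then strictly right-to-left. Requires distinct x-coordinates."""
    pts = sorted(points)                       # sort by x-coordinate
    n = len(pts)
    d = lambda a, b: math.dist(pts[a], pts[b])
    if n < 3:
        return 2 * d(0, n - 1) if n == 2 else 0.0
    # B[i][j], i < j: the two paths together cover points 0..j
    # and currently end at points i and j.
    B = [[0.0] * n for _ in range(n)]
    B[0][1] = d(0, 1)
    for j in range(2, n):
        for i in range(j - 1):
            # point j-1 was the previous right endpoint: extend that path
            B[i][j] = B[i][j - 1] + d(j - 1, j)
        # point j is reached directly from some earlier endpoint i
        B[j - 1][j] = min(B[i][j - 1] + d(i, j) for i in range(j - 1))
    # close the cycle by joining the two final endpoints
    return B[n - 2][n - 1] + d(n - 2, n - 1)
```

This is the O(n^2)-state, O(n)-transition structure the quote mentions; forcing each special point onto a particular path would add a flag per state.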
Now I have no idea what I am supposed to solve after reading the problem set.
And there's another one from Google Code Jam:
Problem
In a big, square room there are two point light sources:
one is red and the other is green. There are also n circular pillars.
Light travels in straight lines and is absorbed by walls and pillars.
The pillars therefore cast shadows: they do not let light through.
There are places in the room where no light reaches (black), where only
one of the two light sources reaches (red or green), and places where
both lights reach (yellow). Compute the total area of each of the four
colors in the room. Do not include the area of the pillars.
Input
* One line containing the number of test cases, T.
Each test case contains, in order:
* One line containing the coordinates x, y of the red light source.
* One line containing the coordinates x, y of the green light source.
* One line containing the number of pillars n.
* n lines describing the pillars. Each contains 3 numbers x, y, r.
The pillar is a disk with the center (x, y) and radius r.
The room is the square described by 0 ≤ x, y ≤ 100. Pillars, room
walls and light sources are all disjoint, they do not overlap or touch.
Output
For each test case, output:
Case #X:
black area
red area
green area
yellow area
Is it required that people who program should be able to solve these types of problems?
I would appreciate it if anyone can help me interpret the Google Code Jam problem set, as I wish to participate in this year's Code Jam to see if I can do anything or not.
Thanks.
It is a big mistake to start from hard problems. Many World Finals problems are too hard for lots of experienced programmers, so it is no surprise that it is also too hard for someone new.
As others have said, start with much easier problems. I am assuming you know the basics of programming and can write code in at least one programming language. Try problems from Division-2 problemsets on TopCoder, and Regional/Qualifying rounds of ACM ICPC. Find out the easy problems from sites like SPOJ, UVa and Project Euler (there are lists of easy problems available online) and solve them. As you solve, also read up on the basics of algorithms and computer science. TopCoder is a great resource since they have lots of tutorials and articles and also allow you to view other people's solutions.
IMHO, becoming a better programmer in general takes a lot of practice and study. There is no shortcut. You cannot assume that you are some sort of hero programmer who can just jump in and solve everything. You just have to accept that there is a long way to go, and start at the beginning.
You should start with much easier problems. Try looking for regionals, or even get the problems from local school contests.
You need to have a broad view of general programming, from data structures to algorithms. Master a basic programming language, one in which answers are accepted.
You cannot start learning programming by doing competitions. If you don't know any programming at all, the first programs you will write are things like "hello world", fibonacci and ackermann. Starting with things like TopCoder is like learning to drive using a formula one car. It doesn't work that way.
In short, you have to know some basic techniques that are used to solve these kinds of problems. Knowing dynamic programming, backtracking, searching, etc., helps you a lot when solving them.
This one from Google Code Jam is actually pretty hard, and involves computational geometry algorithms. How to solve it is detailed here: http://code.google.com/codejam/contest/dashboard?c=311101#s=a&a=5
I've been working Google Code Jam problems for the past week or so, and I think they are great exercises. The key is to find problems that stretch your abilities a little, but aren't the ones that make you want to give up. Google Code Jam problems range widely in difficulty!
I recommend starting with the ones under "Where should I start" here:
http://code.google.com/codejam/contests.html
And then explore all of the competition round 1 problems. If those are too easy move up to the other rounds.
The thing I really like about code jam is that you can use pretty much any language you want, and you can get feedback from their automated judge. If you run out of Code Jam problems check out some of the other sites that others have mentioned.
You are very right, JesperE. JPRO, go back to the basics and get settled from there. Your ability to compete in and win programming contests depends on just two things:
Your deep understanding of the language you are using and
your mental flexibility with the language.
Read Problem-Solving Through Problems by Larson.
It's for mathematics, but I find it extremely useful for solving algorithm problems.

What problems can be solved, or tackled more easily, using graphs and trees? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 9 years ago.
What are the most common problems that can be solved with both these data structures?
It would be good for me to have also recommendations on books that:
Implement the structures
Implement and explain the reasoning of the algorithms that use them
The first thing I think about when I read this question is: what types of things use graphs/trees? Then I think backwards to how I could use them.
For example, take two common uses of a tree:
The DOM
File systems
The DOM, and XML for that matter, resemble tree structures.
It makes sense, too. It makes sense because of how this data needs to be arranged. A file system, too. On a UNIX system there's a root node, and branching down below. When you mount a new device, you're attaching it onto the tree.
You should also be asking yourself: does the data fall into this type of structure? Create data structures that make sense to the problem and the rest will follow.
As far as being easier, I think that's relative. Are you good with recursive functions to traverse a tree/graph? What if you need to balance the tree?
Think about a program that solves a word-search puzzle. You could map out all the letters of the word search into a graph and check surrounding nodes to see if that string matches any of the words. But couldn't you just do the same with a single array? All you really need to do is move an index by one to check the letters to the left and right, and by the grid width to check the letters above and below. Solving this problem with a graph isn't difficult, but it can create a lot of extra work and difficulty if you're not comfortable using them; of course, that shouldn't discourage you from doing it, especially if you are learning about them.
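The flat-array idea can be sketched as follows: the grid is one string, and moving by ±1 or ±width is the only "graph traversal" needed (the bounds check also prevents words from wrapping across rows).

```python
def find_word(flat, width, word):
    """Search a grid stored as a flat string for `word` in the four
    axis-aligned directions, using only index arithmetic.
    Returns the starting index of a match, or -1."""
    height = len(flat) // width
    L = len(word)
    for idx in range(len(flat)):
        r, c = divmod(idx, width)
        for dr, dc in ((0, 1), (0, -1), (1, 0), (-1, 0)):
            end_r, end_c = r + dr * (L - 1), c + dc * (L - 1)
            if not (0 <= end_r < height and 0 <= end_c < width):
                continue  # word would run off the grid or wrap a row
            if all(flat[(r + dr * k) * width + (c + dc * k)] == word[k]
                   for k in range(L)):
                return idx
    return -1
```

For example, in the 3x3 grid "cab" / "axx" / "txx" stored as "cabaxxtxx", the word "cat" is found reading downward from index 0.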
I hope that helps you think about these structures. As for a book recommendation, I'd have to go with Introduction to Algorithms.
Circuit diagrams.
Compilation (Directed Acyclic graphs)
Maps. Very compact as graphs.
Network flow problems.
Decision trees for expert systems (sic)
Fishbone diagrams for fault finding, process improvement, and safety analysis. For bonus points, implement your error-recovery code as objects that form the fishbone diagram.
Just about every problem can be re-written in terms of graph theory. I'm not kidding, look at any book on NP complete problems, there are some pretty wacky problems that get turned into graph theory because we have good tools for working with graphs...
The Algorithm Design Manual contains some interesting case studies with creative use of graphs. Despite its name, the book is very readable and even entertaining at times.
There's a course for such things at my university: CSE 326. I didn't think the book was too useful, but the projects are fun and teach you a fair bit about implementing some of the simpler structures.
As for examples, one of the most common problems (by number of people using it) that's solved with trees is that of cell phone text entry. You can use trees, not necessarily binary, to represent the space of possible words that can come out of any given list of numbers that a user punches in very quickly.
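A minimal sketch of such a (non-binary) tree is a trie keyed by keypad digits, with nested dicts as nodes; the word list and `'$'` end-marker here are illustrative choices, not from the answer above.

```python
# standard phone-keypad letter groups, keys 2-9
GROUPS = ["abc", "def", "ghi", "jkl", "mno", "pqrs", "tuv", "wxyz"]
DIGIT = {ch: str(i + 2) for i, g in enumerate(GROUPS) for ch in g}

def build_t9(words):
    """Trie keyed by digit sequences; '$' holds the words ending at a node."""
    root = {}
    for w in words:
        node = root
        for ch in w:
            node = node.setdefault(DIGIT[ch], {})
        node.setdefault('$', []).append(w)
    return root

def lookup(root, digits):
    """All dictionary words whose key sequence starts with `digits`."""
    node = root
    for d in digits:
        if d not in node:
            return []
        node = node[d]
    out, stack = [], [node]
    while stack:                       # collect every word in the subtree
        n = stack.pop()
        out.extend(n.get('$', []))
        stack.extend(v for k, v in n.items() if k != '$')
    return sorted(out)
```

The classic ambiguity shows up immediately: "good", "gone", "home", and "hood" all share the key sequence 4663, so the tree returns all four as candidates.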
Algorithms for Java: Part 5 by Robert Sedgewick is all about graph algorithms and datastructures. This would be a good first book to work through if you want to implement some graph algorithms.
Scene graphs for drawing graphics in games and multimedia applications make heavy use of trees and graphs. Nodes represent objects to be rendered, transformations, controls, groups, and so on.
Scene graphs usually have multiple layers and attributes, which means that you can draw only some nodes of a graph (attributes) in a specified order (layers). Depending on the kind of scene graph you have, it can have two parallel structures: declarations and instantiations.
#DavidJoiner / all:
FWIW: a new version of The Algorithm Design Manual is due out any day now.
The entire course that Prof. Skiena developed this book for is also available on the web:
http://www.cs.sunysb.edu/~algorith/video-lectures/2007-1.html
Trees are used a lot more in functional programming languages because of their recursive nature.
Also, graphs and trees are a good way to model a lot of AI problems.
Games often use graphs to facilitate finding paths across the game world. The graph representation of the world can be searched with algorithms such as breadth-first search or A* in order to find a route across it.
They also often use trees to represent entities within the world. If you have thousands of entities and need to find one at a certain position then iterating linearly through a list can be inefficient, especially if you need to do it often. Therefore the area can be subdivided into a tree to allow it to be searched more quickly. Just as a linear space can be efficiently searched with a binary search (and thus divided into a binary tree), 2D space can be divided into a quadtree and 3D space into an octree.
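The 2D subdivision described above can be sketched as a minimal point quadtree; the capacity-4 bucket and half-open square bounds are illustrative choices, not part of any fixed API.

```python
class Quad:
    """Point quadtree over the half-open square [x, x+size) x [y, y+size)."""
    def __init__(self, x, y, size, cap=4):
        self.x, self.y, self.size, self.cap = x, y, size, cap
        self.points = []      # bucket used until the node splits
        self.kids = None      # four child quadrants after splitting

    def insert(self, p):
        px, py = p
        if not (self.x <= px < self.x + self.size
                and self.y <= py < self.y + self.size):
            return False      # point lies outside this node
        if self.kids is None and len(self.points) < self.cap:
            self.points.append(p)
            return True
        if self.kids is None:
            self._split()
        return any(k.insert(p) for k in self.kids)

    def _split(self):
        h = self.size / 2
        self.kids = [Quad(self.x, self.y, h, self.cap),
                     Quad(self.x + h, self.y, h, self.cap),
                     Quad(self.x, self.y + h, h, self.cap),
                     Quad(self.x + h, self.y + h, h, self.cap)]
        for p in self.points:          # push existing points down a level
            any(k.insert(p) for k in self.kids)
        self.points = []

    def query(self, qx, qy, qs):
        """Points inside the query square [qx, qx+qs) x [qy, qy+qs);
        whole subtrees that miss the square are skipped."""
        if (qx >= self.x + self.size or qx + qs <= self.x
                or qy >= self.y + self.size or qy + qs <= self.y):
            return []
        out = [p for p in self.points
               if qx <= p[0] < qx + qs and qy <= p[1] < qy + qs]
        if self.kids:
            for k in self.kids:
                out += k.query(qx, qy, qs)
        return out
```

The payoff is in `query`: a region test at each node prunes entire quadrants, so a lookup touches only the part of the world near the query square instead of every entity in a flat list.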
