Trouble understanding exon-chaining problem - algorithm

I'm currently trying to build a music generator. In order to improve how I deal with patterns in music, I have read this article, which states that "This algorithm (exon-chaining algorithm) can be modified to accommodate the pattern selection problem by replacing the weight of an interval with its duration" (page 9).
However, I'm having trouble understanding the meaning of the exon-chaining problem. I have looked for this problem in many different presentations and articles but still couldn't find satisfying information. I would really appreciate it if someone could explain it to me.
Thanks in advance.
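For what it's worth, the exon chaining problem is usually stated like this: given a set of intervals, each with a weight (in the bioinformatics setting, candidate exons with similarity scores), find a subset of mutually non-overlapping intervals whose total weight is maximum. The modification quoted above just replaces each weight with the interval's duration. Below is a minimal dynamic-programming sketch of that formulation (essentially weighted interval scheduling); it is my own illustration, not code from the article.

    from bisect import bisect_right

    def chain_intervals(intervals):
        """intervals: list of (start, end, weight) tuples. Returns the maximum
        total weight of a set of pairwise non-overlapping intervals."""
        # Sort by right endpoint so we can scan left to right.
        intervals = sorted(intervals, key=lambda iv: iv[1])
        ends = [iv[1] for iv in intervals]
        # best[i] = best total weight using only the first i intervals.
        best = [0] * (len(intervals) + 1)
        for i, (start, end, weight) in enumerate(intervals, 1):
            # Last interval (among the first i-1) that ends at or before `start`.
            j = bisect_right(ends, start, 0, i - 1)
            # Either skip this interval, or take it plus the best compatible prefix.
            best[i] = max(best[i - 1], best[j] + weight)
        return best[-1]

    # For the pattern-selection variant mentioned in the article, the weight
    # would simply be the interval's duration.
    print(chain_intervals([(1, 5, 5), (3, 8, 6), (6, 10, 4)]))  # -> 9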

Related

Ideas of procedural algorithms for generating a grid-based town

Hope you're alright.
For the last two weeks, I've been trying to find an algorithm for generating procedural cities/towns on a grid system. I'm more focused on the road network than on buildings, nature, etc.
I've tried L-systems (following this tutorial series on YouTube: https://www.youtube.com/playlist?list=PLcRSafycjWFcbaI8Dzab9sTy5cAQzLHoy) and some modifications of maze algorithms (explained here: http://weblog.jamisbuck.org/2011/2/7/maze-generation-algorithm-recap), but haven't managed to achieve what I want.
Searching further, I found this old thread, Generating a city/town on a grid (Simply my approach), where the author created an algorithm that does exactly what I want. Now I'm trying to understand (and maybe simplify) that algorithm so I can rewrite it in the Unity game engine.
So now I want to know if anyone knows of other algorithms that produce something close to this: a city/road network based on a grid.
Thank you for your help and for sharing your knowledge! This project is part of my graduate thesis and, if you can help me, I will be very thankful!
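Not the algorithm from the linked thread, but one very simple agent-based idea for a grid road network, sketched in Python (the parameters and names are just illustrative): road "walkers" start from a seed cell, move in straight lines, and occasionally turn or branch, marking the cells they pass through as road.

    import random

    def generate_roads(width, height, steps=200, turn_chance=0.1, branch_chance=0.05, seed=None):
        """Return a 2D grid of booleans where True marks a road cell."""
        rng = random.Random(seed)
        grid = [[False] * width for _ in range(height)]
        directions = [(1, 0), (-1, 0), (0, 1), (0, -1)]
        # Each walker is (x, y, direction); start one in the middle of the map.
        walkers = [(width // 2, height // 2, rng.choice(directions))]
        for _ in range(steps):
            next_walkers = []
            for x, y, d in walkers:
                grid[y][x] = True
                if rng.random() < turn_chance:       # occasionally turn 90 degrees
                    d = rng.choice([(d[1], d[0]), (-d[1], -d[0])])
                if rng.random() < branch_chance:     # occasionally spawn a side road
                    next_walkers.append((x, y, rng.choice([(d[1], d[0]), (-d[1], -d[0])])))
                nx, ny = x + d[0], y + d[1]
                if 0 <= nx < width and 0 <= ny < height:
                    next_walkers.append((nx, ny, d))
            walkers = next_walkers or [(width // 2, height // 2, rng.choice(directions))]
        return grid

    grid = generate_roads(40, 20, seed=1)
    print("\n".join("".join("#" if cell else "." for cell in row) for row in grid))

Buildings could then be placed in empty cells adjacent to roads. Porting this to C# for Unity is straightforward, and the turn/branch probabilities are the knobs that change the feel of the network.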

What algorithms are available for TSP with a time constraint?

Problem: visit as many places as possible within a given time and come back to the starting point.
I searched the internet and could not find any tutorial or implementation of an algorithm for this problem; mostly research papers came up.
So I'm hoping people can point out useful sources; then I could pick one and solve my problem.
Thanks.
I found out that the "Clarke-Wright algorithm" solves the VRP, and that the TSP is a special case of the VRP.
Maybe that is what I need.
Can anyone correct me if I am wrong?
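In the operations-research literature this is usually called the orienteering problem (a prize-collecting variant of the TSP), which may explain why mostly papers turn up. Clarke-Wright targets the VRP, so it may be more machinery than you need. As a starting point, here is a hedged sketch of a very simple greedy heuristic (not Clarke-Wright): repeatedly go to the nearest unvisited place, but only if there is still enough time left to get back to the start. Euclidean distance stands in for travel time purely for illustration.

    import math

    def greedy_time_budget_tour(points, start, budget):
        """points: dict name -> (x, y); start: name of the depot.
        Returns a tour that begins and ends at `start` and respects the
        time budget. Greedy and not optimal, just a baseline."""
        dist = lambda a, b: math.dist(points[a], points[b])
        tour, spent, current = [start], 0.0, start
        unvisited = set(points) - {start}
        while unvisited:
            nxt = min(unvisited, key=lambda p: dist(current, p))
            # Only go there if we can still afford the trip home afterwards.
            if spent + dist(current, nxt) + dist(nxt, start) > budget:
                break
            spent += dist(current, nxt)
            current = nxt
            tour.append(nxt)
            unvisited.remove(nxt)
        tour.append(start)
        return tour, spent + dist(current, start)

    places = {"home": (0, 0), "a": (1, 0), "b": (2, 1), "c": (5, 5), "d": (1, 2)}
    print(greedy_time_budget_tour(places, "home", budget=8))

Better solutions usually come from following a construction like this with local search (2-opt moves, or swapping an included place for an excluded one), which is roughly what the research papers on the orienteering problem do.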

Understanding figures in the algorithm design manual

I want to start learning about algorithms, so I began reading The Algorithm Design Manual by Steven Skiena because it is recommended in some threads I read on SO. However, I just stopped here
and can't understand most of it, because many explanations are represented in figures or images, which my screen reader recognizes but can't read.
For example,
"The nearest neighbor rule is very efficient, for it looks at each pair of points
tex2html_wrap_inline23349 //That's how my screen reader reads it; I assume it's an image.
at most twice, once when adding
tex2html_wrap_inline23351 //image
to the tour, the other when adding
tex2html_wrap_inline23353 //another image
Against all these positives there is only one problem. This algorithm is completely wrong."
This is really frustrating for me because I'm beginning to enjoy the book, though I can understand why those images help a lot of readers.
So is there a way to understand this without seeing the figures? Or should I read another book?
Thanks in advance and happy new year everyone.
Considering that these algorithms deal with geometric analysis, I am afraid it would be difficult to understand them without the images, and even more difficult to replace the images with an equivalent textual description.
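That said, the passage quoted above is describing the nearest-neighbor heuristic for building a tour, and that particular idea survives a textual treatment. Here is a minimal sketch of the rule as I read it (my own code, not Skiena's): start somewhere, then repeatedly hop to the closest point not yet visited.

    import math

    def nearest_neighbor_tour(points):
        """Greedy heuristic: always move to the closest unvisited point.
        Fast and simple but, as the book says, it can produce very bad tours."""
        tour = [points[0]]
        unvisited = set(range(1, len(points)))
        while unvisited:
            last = tour[-1]
            nearest = min(unvisited, key=lambda i: math.dist(last, points[i]))
            tour.append(points[nearest])
            unvisited.remove(nearest)
        return tour

    # The book's bad case puts the points on a single line around the start,
    # so greedily hopping to the closest one forces long back-and-forth trips.
    print(nearest_neighbor_tour([(0, 0), (1, 0), (-1, 0), (3, 0), (-5, 0), (11, 0)]))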

How to design an approximate solution algorithm

I want to write an algorithm that can take parts of a picture and match them to another picture of the same object.
For example, if I gave the computer a picture of a vase and a picture of a scene with the vase in it, I'd expect it to determine where in the image the vase is.
How would I begin to develop an algorithm like this?
The final use for this algorithm will be an application that, given a picture of somebody's face, could tell whether that person is in a crowd of people. This algorithm would eventually be applied to video streams.
Edit: I'm not expecting an actual solution to this problem, as I don't expect to solve it anytime soon. The real question is how you define something like this to a computer so that you can make an algorithm to do it.
Thanks
A former teacher of mine wrote his doctoral thesis on a similar sort of problem, except his input was a detailed 3D model of something, which he would use to find that object in 2D images. This is a VERY non-trivial problem; there is no single 'answer', and certainly nothing that would fit the Stack Overflow format.
My best answer: gather a ton of money and hire a very experienced programmer.
Best of luck to you.
The first problem you describe and the second are quite different.
A major part of each is solved by the numerous machine vision libraries available. You may need a combination of techniques to achieve any success at either task.
For the first one, you would need something that generically recognizes objects. I'd probably use a number of algorithms in concert to identify the foreground object in the model image and then do some kind of weighted comparison against partitions of the target image.
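For the 'find this vase in that scene' case, one concrete (and hedged) starting point is local feature matching with OpenCV. The sketch below uses ORB keypoints with brute-force matching; it is only one of many possible techniques, it assumes reasonably textured objects, and the file names are placeholders.

    import cv2

    def match_object_in_scene(object_path, scene_path, max_matches=30):
        """Detect ORB keypoints in both images and return the scene coordinates
        of the strongest matches; a tight cluster suggests where the object is."""
        obj = cv2.imread(object_path, cv2.IMREAD_GRAYSCALE)
        scene = cv2.imread(scene_path, cv2.IMREAD_GRAYSCALE)
        orb = cv2.ORB_create()
        kp_obj, des_obj = orb.detectAndCompute(obj, None)
        kp_scene, des_scene = orb.detectAndCompute(scene, None)
        # Hamming distance is the appropriate metric for ORB's binary descriptors.
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(des_obj, des_scene), key=lambda m: m.distance)
        return [kp_scene[m.trainIdx].pt for m in matches[:max_matches]]

    # Example usage (image files are placeholders):
    # print(match_object_in_scene("vase.jpg", "room.jpg"))

From there you would typically estimate a homography over the matched points (cv2.findHomography with RANSAC) to get an actual bounding region rather than a cloud of coordinates.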
The second case, examining faces, is a much more difficult problem than the general recognizer above. Faces all look the same, or nearly so, so the things a general recognizer would notice aren't likely to be good for differentiating faces. You need an algorithm already tuned to facial recognition. Fortunately this is a rapidly maturing field, and you can probably do this as well as the first case, but with a different set of functions.
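To illustrate how the face case usually starts (detection first, recognition afterwards), here is a minimal OpenCV sketch using a pre-trained Haar cascade. It only finds face rectangles and says nothing about whose face it is, and the single-frame capture is just an assumption about how the video-stream part might begin.

    import cv2

    # Pre-trained frontal-face detector that ships with OpenCV.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def detect_faces(frame):
        """Return (x, y, w, h) rectangles around detected faces in a BGR frame."""
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    # Rough idea for the video use case: detect faces frame by frame, then hand
    # each face crop to a separate recognition model for the "who is it" step.
    cap = cv2.VideoCapture(0)   # 0 = default camera; a file path also works
    ok, frame = cap.read()
    if ok:
        for (x, y, w, h) in detect_faces(frame):
            print("face at", x, y, w, h)
    cap.release()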
The simple answer is: find a mathematical way to describe faces that can account for angles and partially missing data, then refine and train it.
Apparently Apple has done something like this; however, it still makes mistakes and has to be taught as it moves forward.
I expect it will be more about the math than about the programming.
I think you will find this to be quite a challenge. This is an extremely difficult problem and one of the many areas of computing that fall under the domain of artificial intelligence (AI). Facial recognition is certainly the most popular variant of this problem, and in spite of what you may read in the media, any claimed successes are not what they are made out to be. I think the closest solutions involve neural nets, and they usually require very clear, carefully selected images.
You could try reading here though. Good luck!

How to implement a "related" degree measure algorithm?

I was going to Ask a Question earlier today when I was presented with a surprising piece of functionality on Stack Overflow. When I wrote my question title, Stack Overflow suggested several related questions, and I found out that there were already two similar questions. That was stunning!
Then I started thinking about how I would implement such a function, that is, how I would order questions by relatedness:
Questions that have a higher number of word matches with the new question rank first.
If the number of matches is the same, the order of the words is considered.
Words that appear in the title have higher relevancy.
Would that be a simple workflow or a complex scoring algorithm?
Some stemming to increase the recall, maybe?
Is there some library that implements this function?
What other aspects would you consider?
Maybe Jeff could answer this himself! How did you implement this on Stack Overflow? :)
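On the library question: one off-the-shelf way to get a relatedness score (not necessarily how Stack Overflow does it) is TF-IDF plus cosine similarity, for example with scikit-learn. The vectorizer's stop-word list covers the noise-word issue, and stemming could be layered on top if recall is too low; this sketch and its sample data are just illustrative.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    def related_questions(new_question, existing_questions, top_n=5):
        """Rank existing question titles by TF-IDF cosine similarity to the new one."""
        vectorizer = TfidfVectorizer(stop_words="english")
        matrix = vectorizer.fit_transform(existing_questions + [new_question])
        scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
        ranked = sorted(zip(scores, existing_questions), reverse=True)
        return ranked[:top_n]

    existing = [
        "How do I sort a Python dictionary by value?",
        "What is the fastest sorting algorithm?",
        "How to parse JSON in JavaScript?",
    ]
    print(related_questions("Sorting a Python dictionary by its values", existing, top_n=2))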
One way to implement such an algorithm would be to rank the questions using a heuristic function that assigns a 'relevance' weight, computed with the following steps:
Apply a noise filter to the 'New' question to remove words that are common across a large number of objects such as: 'the', 'and', 'or', etc.
Get the number of words in the 'New' question that match words in the set of questions already posted on the website. [A]
Get the number of tag matches between the words in the 'New' question and the available tags. [B]
Compute the 'relevance weight' from [A] and [B] as 'x[A] + y[B]', where x and y are weight multipliers (assign a higher multiplier to [B], as tagging is more relevant than a simple word search).
Get the top 5 questions which have the highest 'relevance weight'.
The heuristic might require tweaking to get optimal results, but it should work (a rough sketch of this scoring is given below).
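A rough translation of those steps into code might look like the following; the stop-word list, the weights x and y, and the question representation are all placeholder assumptions.

    STOP_WORDS = {"the", "and", "or", "a", "an", "of", "to", "in", "is", "how"}

    def relevance(new_title, new_tags, question, x=1.0, y=3.0):
        """Score an existing question against the new one as x*[A] + y*[B], where
        [A] counts shared non-stop-words and [B] counts shared tags."""
        new_words = {w for w in new_title.lower().split() if w not in STOP_WORDS}
        old_words = {w for w in question["title"].lower().split() if w not in STOP_WORDS}
        word_matches = len(new_words & old_words)                  # [A]
        tag_matches = len(set(new_tags) & set(question["tags"]))   # [B]
        return x * word_matches + y * tag_matches

    def top_related(new_title, new_tags, questions, n=5):
        return sorted(questions, key=lambda q: relevance(new_title, new_tags, q),
                      reverse=True)[:n]

    questions = [
        {"title": "Ranking related questions by word overlap", "tags": ["algorithm", "ranking"]},
        {"title": "How to parse JSON in JavaScript?", "tags": ["javascript", "json"]},
    ]
    print(top_related("related question ranking algorithm", ["ranking", "algorithm"], questions, n=1))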
Your question seems similar to this one, which has some additional answers.
#marcio
Sorry, I am not aware of any direct API reference that I could suggest here and I have never worked with Lucene.
However, I am aware that Google Desktop uses a Query API to rank and suggest the relevant search results. More information on the API can be found here.
Perhaps others could chime in and guide you.
Isn't StackOverflow going to be open sourced at some point? If so, you can always find out how they did it there.
Update: It appears that they say they might open source it. I hope they do.

Resources