Event Detection Evaluation [closed] - algorithm

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about programming within the scope defined in the help center.
Closed 7 years ago.
I developed an algorithm to detect events in the time domain and I want to evaluate how well the algorithm performs.
The problem is related to the time duration of the data.
Each file has data with a time duration of hundreds of minutes and I have dozens of files.
Instead of calculating the specificity and the sensitivity of this algorithm for the entire data set, I was thinking of choosing random samples.
My question is:
What is the correct approach to have a valid statistical analysis?
Update:
I am studying sounds in the time domain. Usually the signal has a low-amplitude profile, just noise. Sometimes a sound is generated and there is an increase in the signal amplitude.
The goal of the algorithm is to detect this increase in the signal amplitude. Unfortunately, the generation of a sound can be interpreted as random. It is possible that a low amplitude profile lasts for minutes or even hours without a single sound being generated. On the other hand, it is possible that there is a sequence of sounds for several minutes with a time difference between the sound n+1 and the sound n of just a few seconds.
I am concerned with the quality of the detection, i.e. sensitivity and specificity. That means I want to know whether a generated sound is detected or missed, and whether a sound is "detected" when no sound was actually generated.
I do not have any prior information, only the output produced by the algorithm.
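For illustration, here is a minimal sketch of how sensitivity and specificity could be estimated on randomly sampled time windows rather than on the whole recordings, assuming you can manually label the sampled windows with ground-truth event times (the function, window length, and variable names are hypothetical, not from the question):

```python
import random

def evaluate_on_random_windows(true_events, detections, total_seconds,
                               window_len=60.0, n_windows=100, seed=0):
    """Estimate sensitivity/specificity from randomly sampled time windows.

    true_events, detections: lists of event times in seconds.
    A window counts as positive if it contains at least one true event.
    """
    rng = random.Random(seed)
    tp = fp = tn = fn = 0
    for _ in range(n_windows):
        start = rng.uniform(0.0, total_seconds - window_len)
        end = start + window_len
        has_event = any(start <= t < end for t in true_events)
        has_detection = any(start <= t < end for t in detections)
        if has_event and has_detection:
            tp += 1
        elif has_event:
            fn += 1
        elif has_detection:
            fp += 1
        else:
            tn += 1
    sensitivity = tp / (tp + fn) if (tp + fn) else float("nan")
    specificity = tn / (tn + fp) if (tn + fp) else float("nan")
    return sensitivity, specificity
```

Sampling whole windows rather than individual samples keeps each labelled segment short enough to verify by hand while still covering recordings that last for hundreds of minutes.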

Related

What's a good selective pressure to use in tournament selection in a genetic algorithm? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 7 years ago.
What is the optimal and usual value of selective pressure in tournament selection? What percent of the best members of the current generation should propagate to the next generation?
Unfortunately, there isn't a great answer to this question. The optimal parameters will vary from problem to problem, and people use a wide range of them. Selecting the right tournament selection parameters is currently more of an art than a science. Stronger selective pressure (a larger tournament) will generally result in the population converging on a solution faster, at the cost of that solution potentially not being as good. This is called the exploration vs. exploitation tradeoff, and it underlies most algorithms for searching a large space of possible solutions - you're not going to get away from it.
I know that's not very helpful, though - you want a starting place, and that's completely reasonable. So here's the best one I know of (and I know a number of others who use it as a go-to default tournament configuration as well): a tournament size of two. Basically, this means you just keep picking random pairs of solutions, choosing the best one, and sending it to the next generation (with mutation and crossover as desired), until the next generation is the desired size. This has the nice property that any member of the population besides the absolute worst has a chance of getting to the next generation, but better ones have a better chance.
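As a concrete illustration of that go-to configuration, here is a minimal sketch of tournament selection with a tournament size of two (the function and variable names are mine, not from any specific library):

```python
import random

def tournament_select(population, fitness, next_size, tournament_size=2, rng=None):
    """Pick parents by repeated tournaments of `tournament_size` random individuals."""
    rng = rng or random.Random()
    selected = []
    while len(selected) < next_size:
        contestants = rng.sample(population, tournament_size)
        winner = max(contestants, key=fitness)   # best of the pair wins
        selected.append(winner)                  # mutation/crossover would be applied here
    return selected

# Toy usage: maximise the numeric value itself.
pop = [random.uniform(0, 10) for _ in range(20)]
next_gen = tournament_select(pop, fitness=lambda x: x, next_size=len(pop))
```

With a tournament of two, every individual except the single worst one has a nonzero chance of winning some pairing, which is exactly the mild selective pressure described above.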

Understanding the use of “before” in a programming task [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about programming within the scope defined in the help center.
Closed 9 years ago.
I am working on task http://poj.org/problem?id=1884. I didn't understand just one part of it:
Adjacent stations on each route are far enough apart to allow a train to accelerate to its maximum velocity before beginning to decelerate.
Does it mean that the train accelerates and then travels at a constant speed/velocity before it starts to decelerate, or that it keeps accelerating and, as soon as it reaches maximum velocity, immediately starts to decelerate?
The first one, yes: the train accelerates and then travels at a constant speed/velocity before it starts to decelerate.
Basically the train accelerates as much as it can, then maintains maximum speed for the journey, then brakes as it approaches the station.
What they meant was: if the stations were too close, trains would have to accelerate and then brake before reaching peak speed in order to stop at the next station, but you don't have this problem.
From the text: "It remains at that maximum velocity until it begins to decelerate (at the same constant rate) as it approaches the next station."
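For illustration, here is a small sketch of the travel time this implies for one hop between stations, assuming a constant acceleration/deceleration rate and a distance large enough to reach top speed (the function name, units, and sample values are mine, not from the problem statement):

```python
def travel_time(distance, v_max, accel):
    """Time to cover `distance` when the train accelerates at `accel` up to `v_max`,
    cruises, then decelerates at the same rate (distance assumed large enough)."""
    ramp_time = v_max / accel                 # time to accelerate (same time to brake)
    ramp_dist = v_max ** 2 / (2.0 * accel)    # distance covered while accelerating
    cruise_dist = distance - 2.0 * ramp_dist  # >= 0 by the problem's guarantee
    return 2.0 * ramp_time + cruise_dist / v_max

# e.g. 2000 m between stations, 20 m/s top speed, 1 m/s^2 acceleration:
print(travel_time(2000.0, 20.0, 1.0))  # 20 + 20 + (2000 - 400) / 20 = 120 s
```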

What is an output-sensitive algorithm? [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 9 years ago.
Which of these algorithms is output sensitive (considering their base algorithm)?
ray tracing
gpu rendering
splatting
How can we use acceleration methods to make them more likely to be output sensitive?
I think ray tracing and GPU rendering are not output sensitive.
http://en.wikipedia.org/wiki/Output-sensitive_algorithm
For the folks who didn't understand the question, in computer science, an output-sensitive algorithm is an algorithm whose running time depends on the size of the output, instead of or in addition to the size of the input.
Ray tracing is output sensitive; in fact, many ray tracing programs can generate smaller images or movies in less time.
GPU rendering is output sensitive: the fact that the GPU can parallelise the task speeds things up, but far fewer computations are required to render a smaller image than a bigger one.
Texture splatting is also output sensitive: since textures are typically repeated, you can generate a huge image by joining many of them, which requires more CPU power (and memory).
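To make the definition above concrete, here is a small, self-contained example of an output-sensitive algorithm, range reporting on a sorted array (this example is mine, not taken from the question):

```python
import bisect

def range_report(sorted_values, lo, hi):
    """Report all values in [lo, hi].

    Runs in O(log n + k), where k is the number of reported items --
    the classic shape of an output-sensitive running time.
    """
    out = []
    i = bisect.bisect_left(sorted_values, lo)  # binary search for the first candidate
    while i < len(sorted_values) and sorted_values[i] <= hi:
        out.append(sorted_values[i])           # each reported item costs O(1)
        i += 1
    return out

print(range_report([1, 3, 4, 7, 9, 12, 15], 4, 12))  # [4, 7, 9, 12]
```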

Why does the inverse of a sound wave sound exactly like the original? [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
I have an audio source (I am working on a project with another member on SO who has also been asking questions).
In the time domain, we have 44100 samples of signed 4-byte integers, and we negate each sample.
In the frequency domain, as another user helped point out to us, we shifted the phase by 180 degrees by negating the real and imaginary parts of each frequency value.
In both cases, the resulting audio wave sounds exactly like the original source. Is this expected (maybe because the wave form is essentially the same, just inverted)?
Steve
Yes this is the expected behavior. The ear detects frequency and in both cases, though the phase may have changed, the frequency is the same.
However, you can combine the original wave and the phase-shifted wave to get interesting sound cancellation.
Yes, you would expect the phase-shifted signal to sound exactly the same. The point of your related question is that if you took a signal and added it to a copy that was 180° phase-shifted, the result would be zero due to destructive interference.
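A small numpy sketch (not the questioner's code; the signal is a made-up sine burst) that illustrates both answers: negating the samples equals negating the spectrum, the magnitude spectrum is unchanged, and summing the two waves cancels:

```python
import numpy as np

# A short sine burst standing in for the audio samples in the question.
fs = 44100
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 440.0 * t)

# Time-domain negation ...
x_neg = -x

# ... is the same as a 180-degree phase shift of every frequency bin.
x_shifted = np.fft.irfft(-np.fft.rfft(x), n=len(x))

print(np.allclose(x_neg, x_shifted))            # True: identical waveforms
print(np.allclose(np.abs(np.fft.rfft(x)),       # magnitude spectrum unchanged,
                  np.abs(np.fft.rfft(x_neg))))  # which is why it sounds the same
print(np.max(np.abs(x + x_neg)))                # 0.0: summing the two cancels
```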

Algorithms for City Simulation? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 3 years ago.
I want to create a city filled with virtual creatures.
Say like Sim City, where each creature walks around, doing its own tasks.
I'd prefer the city to not 'explode' or do weird things -- like the population dies off, or the population leaves, or any other unexpected crap.
Is there a set of basic rules I can encode each agent with so that the city will be 'stable'? (Much like how for physics simulations, we have some basic rules that govern everything; is there a set of rules that governs how a simulation of a virtual city will be stable?)
I'm new to this area and have no idea what algorithms/books to look into. Insights deeply appreciated.
Thanks!
I would start with the game of Life.
Here is the original SimCity source code:
http://www.donhopkins.com/home/micropolis/micropolis-activity-source.tgz
It may be hard to find any general resources on the subject, because it is quite a specific area.
I have implemented some population dynamics and I know that it is not easy to get all the behavior right so that the population neither dies off nor overgrows. It is relatively easy if you implement a simple scenario like a predator-prey model, but it tends to get tricky as the number of factors increases.
Some advice:
Try to make the behavior of the agents parametrized
Optimize the behavior parameters using some soft method such as a neural network, a genetic algorithm, or a simple hill-climbing algorithm, driven by a single score of the simulation (like the time before the whole population dies off, combined with the average growth factor); see the sketch below
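Here is a minimal sketch of that last piece of advice, assuming a deliberately toy population model with one behavior parameter and a stability score (everything here is hypothetical, made up for illustration):

```python
import random

def simulate(birth_rate, steps=200, start_pop=100, capacity=1000, seed=0):
    """Toy population model; the score is the fraction of steps the population
    spends in a 'stable' band (neither died off nor far below capacity)."""
    rng = random.Random(seed)
    pop = float(start_pop)
    stable_steps = 0
    for _ in range(steps):
        pop += birth_rate * pop * (1.0 - pop / capacity) + rng.gauss(0.0, 5.0)
        if pop < 1.0:
            break                                 # population died off
        if 0.5 * capacity <= pop <= capacity:
            stable_steps += 1
    return stable_steps / steps

def hill_climb(param=0.05, step_size=0.05, iterations=100, seed=1):
    """Simple hill climbing on a single behavior parameter of the simulation."""
    rng = random.Random(seed)
    best = simulate(param)
    for _ in range(iterations):
        candidate = max(0.0, param + rng.uniform(-step_size, step_size))
        score = simulate(candidate)
        if score >= best:
            param, best = candidate, score
    return param, best

print(hill_climb())   # a growth rate that keeps the toy population stable
```

The same loop structure works if you swap the hill climber for a genetic algorithm or swap the toy model for your agent-based city; only the scoring function needs to encode what "stable" means for you.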
Here is a pointer to some research on the topic, but be advised -- the population in this research study all died off.
http://www.nsf.gov/news/news_summ.jsp?cntn_id=104261
