I've assembled a carpet with 8 pressure sensors inside. You can see the sensor arrangement in the picture. The entire carpet is 80x80 cm. Each sensor outputs a digital signal (0 or 1) when it is pressed. The microcontroller reads all the sensors every 100 ms and outputs a payload byte, where each bit contains the state of a single triangle. I'm storing all these bytes in a 100-byte array. From this array I need to calculate the gait (the direction, the angle the user is heading). The user is simply marching in place; the feet are raised and lowered alternately. Do you know of any algorithm I could use for this kind of analysis? Should I use machine learning / neural networks? Language doesn't matter; I just need to figure out the right way to analyse this byte array. Thanks!
sensors inside the carpet
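For what it's worth, one non-ML starting point: since the feet alternate in place, the pressure centroid oscillates along the left-right axis, and the facing direction is roughly perpendicular to that axis. A minimal sketch of that idea (the sensor coordinates below are made up; substitute the real triangle centres from the picture):

```python
import math

# Hypothetical (x, y) centres in cm of the 8 triangular sensors on the
# 80x80 cm carpet -- replace with the real layout from the picture.
SENSOR_POS = [(20, 20), (40, 10), (60, 20), (70, 40),
              (60, 60), (40, 70), (20, 60), (10, 40)]

def centroid(byte):
    """Centroid of the sensors whose bit is set in one 8-bit sample."""
    pts = [SENSOR_POS[i] for i in range(8) if byte & (1 << i)]
    if not pts:
        return None
    return (sum(x for x, _ in pts) / len(pts),
            sum(y for _, y in pts) / len(pts))

def heading_degrees(samples):
    """Estimate the facing direction from the 100 ms samples.

    Alternating footfalls make the pressure centroid oscillate along the
    left-right axis; the principal axis of the centroid cloud is that
    left-right axis, and the heading is perpendicular to it (ambiguous by
    180 degrees without extra information such as toe-vs-heel timing).
    """
    cs = [c for c in map(centroid, samples) if c is not None]
    if len(cs) < 2:
        return None
    mx = sum(x for x, _ in cs) / len(cs)
    my = sum(y for _, y in cs) / len(cs)
    # 2x2 covariance of the centroid cloud.
    sxx = sum((x - mx) ** 2 for x, _ in cs)
    syy = sum((y - my) ** 2 for _, y in cs)
    sxy = sum((x - mx) * (y - my) for x, y in cs)
    # Angle of the principal axis, then rotate 90 degrees for the heading.
    axis = 0.5 * math.atan2(2 * sxy, sxx - syy)
    return (math.degrees(axis) + 90.0) % 180.0
```

The remaining 180-degree ambiguity could be resolved from within-step detail, e.g. which triangles (toe vs. heel) fire first during a footfall.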
I want to develop a game where the universe is a maximum 65536 x 65536 grid. I do not want the universe to be random, what I want is for it to be procedurally generated according to location. What it should generate is a number from 0 to 15.
0: empty space. Most of the universe (probably 50-80%) is empty space.
1-9: a planet of that technology level
10-15: various anomalies (black hole, star, etc.)
Given an address from 0x8000-0xFFFF, or 0, or 1-0x7FFF for the X address, and the same range for the Y address, the function returns a number from 0 to 15. Presumably it would make planets nearer to (0,0) more plentiful than those farther away.
The idea being, the function is called with the two values and returns the planet number. I used to have a function that did this, but it has been lost over various moves.
While the board could be that big, considering how easy it would be to get lost, I'll probably cut the size to 1200 in both directions, -600 to +600. Even that would be huge.
I've tried a number of times, but I've come to the conclusion that I lack the math skills to do this. It's probably no more than 10 lines. As it is intended to be multiplayer, it'll probably be either a PHP application on the back end or a desktop application connecting to a server.
Any help would be appreciated. I can probably read any commonly used programming language you might use.
Paul Robinson
See How to draw sky chart? for the planetary position math. Pay special attention to the image with equations; you can use it to compute the period of your planet based on its distance and its mass relative to the system's central mass. For a simple circular orbit, just match the centripetal force with gravity, as I did here:
Is it possible to make realistic n-body solar system simulation in matter of size and mass?
So for example:
G   = 6.67384e-11;                           // gravitational constant [m^3 kg^-1 s^-2]
v   = sqrt(G*M/a);                           // orbital speed for a circular orbit of radius a
T   = sqrt((4.0*M_PI*M_PI*a*a*a)/(G*(m+M))); // orbital period (Kepler's third law)
pos = (a,0,0);                               // start position
vel = (0,sqrt(G*M/a),0);                     // start velocity (perpendicular to pos)
The distribution of planets and their sizes complies with specific (empirically obtained) rules (that is one of the reasons why we are still looking for a 10th planet). I can't remember the name of the rule, but from a quick look on Google, the image from here can be used too:
Distribution of the planets in the solar system according to their mass and their distances from the Sun. The distances (X-axis) are in AU and the masses (Y-axis) in yotta (10^24).
Jupiter's mass is M = 1.898e27 kg, so the mass units are 10^24 kg.
So just match your PRNG generation to such a curve and be done with it.
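To tie this back to the original 0-15 grid question, here's a minimal hash-based sketch of a deterministic location-to-cell function. Everything tunable here is made up, not recovered from the lost original: the 0.35 object density at the origin, the 600-cell falloff, and the 90/10 planet/anomaly split are all knobs.

```python
import hashlib
import math

def cell(x, y, seed=0):
    """Deterministic cell value in 0..15 for signed 16-bit coordinates.

    The same (x, y, seed) always yields the same value, so the universe is
    procedural, not random: no storage needed.
    """
    h = hashlib.sha256(f"{seed}:{x}:{y}".encode()).digest()
    r = int.from_bytes(h[:4], "big") / 2**32        # uniform in [0, 1)
    dist = math.hypot(x, y)
    # Object probability falls off with distance from (0, 0); tune the 600.
    p_object = 0.35 * math.exp(-dist / 600.0)
    if r >= p_object:
        return 0                                    # empty space
    k = int.from_bytes(h[4:8], "big")
    if k % 10 < 9:                                  # ~90% of objects are planets
        return 1 + k % 9                            # tech level 1..9
    return 10 + k % 6                               # anomaly 10..15
```

The exponential falloff gives the "plentiful near (0,0)" behaviour; swapping in a mass/distance curve like the one in the image above would just mean replacing `p_object` with a lookup into that curve.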
In class the lecturer asked us a question: during a concert the conductor hears a false note (noise); translating this into signal fundamentals, how does the conductor detect this noise?
My guess is that it may be related to the Fourier transform, but I'm not sure I'm even close to the answer.
Check out a spectrogram. It's a 3D representation of the frequency domain over time.
As a general routine:
divide your time-domain recording of a musical piece into suitably sized time segments, small enough that they can capture the shortest musical note you want to represent (preferably smaller than that, since you want more granular measurements of when a note starts and stops).
take the Fourier transform of each time segment, and represent this information in a spectrogram (X-axis for time, Y-axis for frequency, and Z-axis (colour) for signal power).
do appropriate filtering on each time segment to keep only frequencies with significant signal power.
compare this against your sheet music. Sheet music is essentially a spectrogram, telling you which notes (frequencies) should be played at which times (using the BPM or time signature of the music). If you have a note present in the spectrogram but not in the sheet music, it's spurious or accidental (or the result of a badly formed spectrogram).
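The routine above might be sketched like this with numpy (a toy version; real work would use `scipy.signal.spectrogram` with overlapping windows and proper peak picking):

```python
import numpy as np

def spectrogram(signal, fs, seg_len):
    """Magnitude spectrogram: rows = time segments, cols = frequency bins."""
    n_seg = len(signal) // seg_len
    segs = np.reshape(signal[:n_seg * seg_len], (n_seg, seg_len))
    # A Hann window reduces spectral leakage at the segment boundaries.
    win = np.hanning(seg_len)
    spec = np.abs(np.fft.rfft(segs * win, axis=1))
    freqs = np.fft.rfftfreq(seg_len, d=1.0 / fs)
    return spec, freqs

def dominant_notes(spec, freqs, threshold_ratio=0.5):
    """Per segment, keep frequencies whose power exceeds a fraction of the peak."""
    out = []
    for row in spec:
        out.append(freqs[row >= threshold_ratio * row.max()])
    return out
```

With an A440 sine sampled at 8 kHz and 100 ms segments, the peak bin of every segment lands at 440 Hz; the final "compare against the sheet music" step would then check each segment's surviving frequencies against the expected notes.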
So here's the thing: consider a detector made of several vertical strings, each hosting 60 sensors distributed equidistantly, as shown in the following picture (dark dots are sensors):
Particles then flow through and produce pulses on each sensor that you can use. The information includes the time of passing, coordinates (string number + position on the string, or Cartesian coordinates), total charge, etc.
The final goal is to reconstruct the angles of incoming particles and their energies, though we first consider a simple classification problem: finding out whether the particle came in from the north or the south hemisphere (i.e. from the top or the bottom of the detector).
As input values we use the timestamp of each sensor; their exact positions don't have to be passed, as they are encoded by the position of the timestamp in the input list.
The problem we have is with sensors that didn't detect any particle. Is it clever to insert their timestamp as float("Inf")? Leaving their time as zero is another option, but it would then be ambiguous with the sensor that triggered first (timestamps being normalised to [0, 1]).
I'd be glad to hear any hints from people who have already worked with timestamps in neural networks. Also, if you have any good ideas for developing NNs for our future tasks, please share them! The planned approach is convolutional (deep) neural networks, but we still have to think about how to encode sensor positions in that irregular hexagonal shape.
A few points to consider:
- How about -1?
- How about NaN?
What language are you programming in?
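A third option, whatever the language: feed the network an explicit mask channel next to the timestamps, so "no hit" is distinguishable from "hit at t = 0" without any sentinel value. A minimal sketch (the function name and input layout are made up):

```python
import numpy as np

def encode_hits(timestamps, n_sensors):
    """Encode per-sensor first-hit times as a (values, mask) vector.

    `timestamps` maps sensor index -> normalised hit time in [0, 1].
    Sensors with no hit get value 0.0 plus mask 0.0, so the network can
    tell 'no hit' apart from 'hit at t = 0' (which has mask 1.0).
    """
    values = np.zeros(n_sensors, dtype=np.float32)
    mask = np.zeros(n_sensors, dtype=np.float32)
    for i, t in timestamps.items():
        values[i] = t
        mask[i] = 1.0
    return np.concatenate([values, mask])
```

Compared with -1 or NaN, the mask doubles the input size but keeps all values in a well-behaved range, which tends to matter for gradient-based training (NaN in particular will poison the gradients outright).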
Current situation:
I have implemented a particle filter for an indoor localisation system. It uses fingerprints of the magnetic field. The implementation of the particle filter is pretty straightforward:
1. I create all particles uniformly distributed over the entire area
2. Each particle gets a velocity (Gaussian distributed with the mean of a 'normal' walking speed) and a direction (uniformly distributed over all directions)
3. Change velocity and direction (both Gaussian distributed)
4. Move all particles in the given direction by velocity multiplied by the time difference between the last and the current measurement
5. Find the closest fingerprint for each particle
6. Calculate the new weight of each particle by comparing its closest fingerprint with the given measurement
7. Normalize
8. Resample
9. Repeat #3 to #9 for every measurement
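The loop above might be sketched as follows (a toy numpy version, not your implementation: the fingerprint model, noise levels, and plain multinomial resampling are all simplifying assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def init_particles(n, w, h, mean_speed=1.4):
    """Steps 1-2: uniform positions, Gaussian speed, uniform heading."""
    p = np.empty((n, 4))
    p[:, 0] = rng.uniform(0, w, n)              # x [m]
    p[:, 1] = rng.uniform(0, h, n)              # y [m]
    p[:, 2] = rng.normal(mean_speed, 0.3, n)    # speed [m/s]
    p[:, 3] = rng.uniform(0, 2 * np.pi, n)      # heading [rad]
    return p

def predict(p, dt):
    """Steps 3-4: perturb speed and heading, then move for dt seconds."""
    p[:, 2] += rng.normal(0, 0.1, len(p))
    p[:, 3] += rng.normal(0, 0.2, len(p))
    p[:, 0] += p[:, 2] * np.cos(p[:, 3]) * dt
    p[:, 1] += p[:, 2] * np.sin(p[:, 3]) * dt
    return p

def weight(p, fp_pos, fp_val, z, sigma=1.0):
    """Steps 5-7: nearest fingerprint, Gaussian likelihood of z, normalise."""
    d = np.linalg.norm(p[:, None, :2] - fp_pos[None, :, :], axis=2)
    nearest = fp_val[np.argmin(d, axis=1)]
    w = np.exp(-0.5 * ((nearest - z) / sigma) ** 2)
    return w / w.sum()

def resample(p, w):
    """Step 8: multinomial resampling by weight."""
    return p[rng.choice(len(p), size=len(p), p=w)]
```

The relevance to the question below: a second sensor would simply contribute a second likelihood factor inside `weight`, i.e. `w = w_magnetic * w_wifi`, whenever both measurements are available at the same step.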
The problem:
Now I would like to do basically the same but add another sensor to the system (namely WiFi measurements). If the measurements arrived at the same time there wouldn't be a problem: I would just calculate the probability for the first sensor and multiply it by the probability for the second sensor to get my weight for the particle at #6.
But the magnetic field sensor has a very high sample rate (about 100 Hz) and the WiFi measurement appears roughly every second.
I don't know what would be the best way to handle the problem.
Possible solutions:
1. I could throw away (or average) all the magnetic field measurements until a WiFi measurement appears, and use the last magnetic field measurement (or the average) and the WiFi signal together. So basically I would reduce the sample rate of the magnetic field sensor to the rate of the WiFi sensor.
2. For every magnetic field measurement I use the last seen WiFi measurement.
3. I use the sensors separately. That means if I get a measurement from one sensor, I do all the steps #3 to #9 without using any measurement data from the other sensor.
4. Any other solution I haven't thought about ;)
I'm not sure which would be the best solution; none of them seems good.
With #1 I would say I'm losing information, although I'm not sure it makes sense to use a sample rate of about 100 Hz for a particle filter anyway.
With #2 I have to assume that the WiFi signal doesn't change quickly, which I can't prove.
If I use the sensors separately (#3), the magnetic field measurements become more important than the WiFi measurements, since all the steps will have happened 100 times with the magnetic data by the time one WiFi measurement appears.
Do you know a good paper which is dealing with this problem?
Is there already a standard solution for handling multiple sensors with different sample rates in a particle filter?
Does a sample rate of 100 Hz make sense? Or what would be a proper time difference for one step of the particle filter?
Thank you very much for any kind of hint or solution :)
In #2, instead of using sample-and-hold, you could delay the filter by 1 s and interpolate between WiFi measurements in order to up-sample, so you have both signals at 100 Hz.
If you know more about the WiFi behavior you could use something more advanced than linear interpolation to model the WiFi behavior between updates. These folks use a more advanced asynchronous hold to up-sample the slower sensor signal, but something like a Kalman filter might also work.
With regard to update speed, I think 100 Hz sounds high for your application (assuming you are doing positioning of a human walking indoors), since you are likely to take a lot of noise into account; lowering the sampling frequency is a cheap way to filter out high-frequency noise.
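The delayed-interpolation idea from #2 is essentially a one-liner with numpy (a sketch; `upsample_wifi` is a made-up name, and it assumes the filter has been delayed so the next WiFi sample is already known):

```python
import numpy as np

def upsample_wifi(wifi_t, wifi_rssi, mag_t):
    """Linearly interpolate ~1 Hz WiFi samples onto the ~100 Hz magnetic
    timestamps.  Requires delaying the filter by one WiFi interval so the
    surrounding WiFi samples exist for every magnetic timestamp."""
    return np.interp(mag_t, wifi_t, wifi_rssi)
```

Each particle-filter step then has both a magnetic and an (interpolated) WiFi value, so the two likelihoods can be multiplied at every update instead of only once per second.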
I need to find which rectangular regions were updated between two images. E.g., I have these images:
first http://storage.thelogin.ru/stackoverflow/find-updated-rectangles-in-image/1.png second http://storage.thelogin.ru/stackoverflow/find-updated-rectangles-in-image/2.png
ImageMagick's compare tells me these pixels were updated:
compare http://storage.thelogin.ru/stackoverflow/find-updated-rectangles-in-image/3.png
So I need to repaint these regions (I have outlined the first of them):
compare http://storage.thelogin.ru/stackoverflow/find-updated-rectangles-in-image/4.png
Repainting is done over a slow connection (57600 baud), so the number one priority is data size (one byte for the magic word, one byte for the checksum, six bytes for region coordinates, two bytes for each pixel). Which algorithm can I use to find these regions? I think something like this is used in VNC and similar software.
As far as actually finding the regions which have changed, as ImageMagick has done for you, you can compute a pixel by pixel difference (e.g. XOR). Regions with a difference of 0 have not changed.
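One way to go from that per-pixel difference to rectangles is to grid the screen into tiles and take the bounding box of changed pixels within each changed tile (a sketch assuming a made-up flat-array image format; the tile size is a knob that trades per-rectangle overhead against resending unchanged pixels):

```python
def diff_bounding_boxes(img1, img2, w, h, tile=16):
    """Compare two images (flat row-major lists of pixels) tile by tile and
    return (x0, y0, x1, y1) bounding boxes of changed pixels per tile."""
    boxes = []
    for ty in range(0, h, tile):
        for tx in range(0, w, tile):
            xs, ys = [], []
            for y in range(ty, min(ty + tile, h)):
                for x in range(tx, min(tx + tile, w)):
                    if img1[y * w + x] != img2[y * w + x]:
                        xs.append(x)
                        ys.append(y)
            if xs:  # tile contains at least one changed pixel
                boxes.append((min(xs), min(ys), max(xs), max(ys)))
    return boxes
```

A post-pass could merge adjacent boxes whenever the merged rectangle costs fewer bytes than the two separate ones (8 bytes of overhead each), which is exactly the trade-off discussed below.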
It is not clear from your question whether the painting itself is slow or just the transmission of the repainting data. It is also not clear what kind of encoding/decoding can be done on the other end of the transmission. Do you have to send your data as you specified or can you encode it in another way if you wish?
Your data packets have an 8-byte overhead per rectangle ("one byte for magic word, one byte for checksum, six bytes for region coordinates"), plus two bytes for each pixel. I take it from the two bytes per pixel that the color depth is 16-bit? So, due to the overhead, some of the smallest rectangles you outlined actually cost more than combining them with other rectangles and resending some data for non-updated regions.
The actual problem of finding rectangles where each has an overhead is analogous to the "Strawberry Fields" hiring problem put forth by ITA Software. The original link is dead, but here is someone's solution with problem description.
At 57600 baud, you get to send 7200 bytes per second, which would be 3600 pixels at two bytes per pixel. As a square, this is a measly 60x60. You've certainly outlined more than that in your example, and this does not count the overhead.
The refresh rate of the monitor on the receiving end also needs to be considered. If the monitor is refreshing 60 times per second and you are only able to send one 60x60 square per second, how will this look?
Things to consider:
reduce color depth
run length encode pixel differences per scan line
attempt more ambitious compression per region, but watch the overhead
send non-graphic data and let the receiver compute the graphics (e.g. in this example, send the text that has changed, the updated time, etc. and let the receiver draw the progress bar, etc.)
abandon this insanity