How to read values from a Waveform (DBL) 1D array and compare them with a double value in LabVIEW - measurement

I want to stop my measurement (right side of the VI) when the user-defined stop voltage has been reached in the measurement. But I do not know how I can read the voltage value from the waveform (DBL) 1D array and compare it with the user-typed stop voltage. I hope someone can help me with that. I have attached a snippet of my VI; you can use it as a VI in LabVIEW.

You can index the 1D waveform array to get the individual elements and then use the 'Get Waveform elements' node from the 'Waveforms' palette to get the actual value. You would then have to check these against your stop value, either by constructing a 1D array of DBL or just picking one element.
Is there a reason you are using the waveform output from the DAQ read VI? If you change this to a double ('DBL'), then you can wire it directly to your <= function. A 1D array of doubles can still be wired to a graph if needed (I am guessing the indicator 'Measurement' is a graph).

You can use the Waveform limit tester to check whether everything is below your threshold.
Here's a code sample showing how it works.
Ton

Related

Using an UKF for orientation is there a way to fuse redundant sensor data within the Kalman filter into a single set of state variables?

I am currently building a UKF for orientation, and would like to fuse redundant sensor data within the Kalman filter into a single set of state variables in order to increase accuracy (e.g. fusing gravitational and magnetic vector data with gyro data). Is there a way to do this within the filter, such as within the H measurement matrix? Thank you for your time.
Yes, this is exactly what a Kalman Filter is used for.
First you have to choose the state variables, which could be only the orientation but may also include the change in orientation. In the first case the state transition matrix/function will be identity, in the second case it will not.
The difficult part is setting up the measurement matrix, which tells the UKF how each state variable is related to each measurement variable. (If the state changes, how is each measurement expected to change?) The UKF will do the rest.
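As a rough sketch (not from the question, and assuming the state is parameterized as roll/pitch/yaw Euler angles), the measurement function would predict what the accelerometer (gravity) and magnetometer should read for a given state; the gravity and magnetic-field vectors below are placeholders:
function z = orientationMeasurement(x)
    % x = [roll; pitch; yaw] in radians (assumed state parameterization)
    r = x(1); p = x(2); y = x(3);
    Rx = [1 0 0; 0 cos(r) -sin(r); 0 sin(r) cos(r)];
    Ry = [cos(p) 0 sin(p); 0 1 0; -sin(p) 0 cos(p)];
    Rz = [cos(y) -sin(y) 0; sin(y) cos(y) 0; 0 0 1];
    Rwb = (Rz * Ry * Rx)';               % world-to-body rotation
    gWorld = [0; 0; -9.81];              % gravity in the world frame
    mWorld = [0.2; 0; 0.45];             % placeholder local magnetic field
    z = [Rwb * gWorld; Rwb * mWorld];    % predicted accelerometer + magnetometer readings
end
A UKF evaluates such a function at each sigma point, so an explicit linear H matrix is not strictly needed; the function plays that role.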

MATLAB: Set points within plot polygon equal to zero

I am currently doing some seismic modelling and processing in MATLAB, and would like to come up with an easy way of muting parts of various datasets. If I plot the frequency-wavenumber spectrum of some of my data, for instance, I obtain the following result:
Now, say that I want to mute some of the data present here. I could of course attempt to run through the entire matrix represented here and specify a threshold value where everything above said value should be set equal to zero, but this will be very difficult and time-consuming when I later work with more complicated fk-spectra. I recently learned that MATLAB has a built-in function called impoly which allows me to interactively draw a polygon in plots. So say I, for instance, draw the following polygon in my plot with the impoly function:
Is there anything I can do now to set all points within this polygon equal to zero? After defining the polygon as illustrated above I haven't found out how to proceed in order to mute the information contained in the polygon, so if anybody can give me some help here, I would greatly appreciate it!
Yes, you can use the createMask function that's part of the impoly interface once you delineate the polygon in your figure. Once you create this mask, you can use it to index into your data and set the relevant regions to zero.
Here's a quick example using the pout.tif image in MATLAB:
im = imread('pout.tif');
figure; imshow(im);
h = impoly;
I get this figure and I draw a polygon inside this image:
Now, use the createMask function with the handle to the impoly call to create a binary mask that encapsulates this polygon:
mask = createMask(h);
I get this mask:
imshow(mask);
You can then use this mask to index into your data and set the right regions to 0. First make a copy of the original data then set the data accordingly.
im_zero = im;
im_zero(mask) = 0;
I now get this:
imshow(im_zero);
Note that this only applies to single channel (2D) data. If you want to apply this to multi-channel (3D) data, then perhaps a multiplication channel-wise with the opposite of the mask may be prudent.
Something like this:
im_zero = bsxfun(@times, im, cast(~mask, class(im)));
The above code takes the opposite of the polygon mask, converts it into the same class as the original input im, then performs an element-wise multiplication of this mask with each channel of the input separately. The result will zero each spatial location that's defined in the mask over all channels.
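For the asker's non-image case (a 2D double matrix such as an fk spectrum), the same indexing applies once you have a mask. A hedged sketch, assuming the polygon vertices are available (for example via getPosition on the impoly handle):
data = randn(256, 256);                        % stand-in for the fk spectrum
xv = [60 180 200 80];                          % hypothetical polygon x-vertices
yv = [50 40 150 170];                          % hypothetical polygon y-vertices
mask = poly2mask(xv, yv, size(data,1), size(data,2));
data(mask) = 0;                                % mute everything inside the polygon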

Weird behaviour of bwareafilt in MATLAB and what algorithm does it use?

I have the following questions:
What is the algorithm that bwareafilt uses?
Weird behaviour: When the input matrix is totally black, I get the following error
Error using bwpropfilt (line 73)
Internal error: p must be positive.
Error in bwareafilt (line 33)
bw2 = bwpropfilt(bw, 'area', p, direction, conn);
Error in colour_reception (line 95)
Iz=bwareafilt(b,1);
Actually, I am using this function to take snapshots from a webcam, but when I block my webcam totally, then I get the above error.
So I believe it is an error due to some internal implementation mistake. Is this the case? How do I overcome this?
Let's answer your questions one at a time:
What algorithm does bwareafilt use?
bwareafilt is a function from the Image Processing Toolbox that accepts a binary image and determines the unique objects in that image. To find unique objects, a connected-components analysis is performed where each object is assigned a unique ID. You can think of this as performing a flood fill on each object individually. A flood fill can be performed using a variety of algorithms - among them is depth-first search, where you consider the image as a graph whose edges connect neighbouring pixels. The flood fill in this case visits all of the pixels that are connected to each other and localized within one object until there are no more pixels to visit. You then proceed to the next object and repeat the same algorithm until you run out of objects.
After, it determines the "area" for each object by counting how many pixels belong to that object. Once we determine the area for each object, we can either output an image that retains the top n objects or filter the image so that only those objects that are within a certain range of areas get retained.
Given your code above, you are trying to output an image that is the largest object in the binary image. Therefore, you are using the former with n = 1, not the latter.
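To illustrate those two steps (connected components, then counting pixels per object), here is a rough equivalent of bwareafilt(bw, 1) written with bwconncomp - a sketch, not the toolbox's actual implementation:
bw = imread('text.png') > 0;                  % any binary image will do
cc = bwconncomp(bw);                          % label the connected objects
areas = cellfun(@numel, cc.PixelIdxList);     % area = pixel count per object
[~, idx] = max(areas);                        % index of the largest object
largest = false(size(bw));
largest(cc.PixelIdxList{idx}) = true;         % keep only that object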
Weird behaviour with bwareafilt
Given the above description of bwareafilt and your intended application:
Actually, I am using this function to take snapshots from a webcam, but when I block my webcam totally, then I get the above error.
... the error is self-explanatory. When you cover the webcam, the entire frame is black and no objects are found in the image. Because there are no objects, returning the object with the largest area makes no sense - there is nothing to return. That's why you get the error: you are asking bwareafilt for the largest object in an image that contains no objects at all.
As such, if you want to use bwareafilt, what I suggest is you check to see if the entire image is black first. If it isn't black, then go ahead and use bwareafilt. If it is, then skip it.
Do something like this, assuming that b is the image you're trying to process:
if any(b(:))
Iz = bwareafilt(b, 1);
else
Iz = b;
end
The above code uses any to check whether there are any non-zero (white) pixels in your image b. If there are, then bwareafilt is called as before. If there aren't, then simply set the output to what b originally was (which is a dark image anyway).
You can add conditions to make your function robust to any input, for example by adding a simple check that first tests whether the input image is all black or not; based on that condition, you then use your function to filter objects.

Find the number of 'T'-s in a picture

Given the following picture :
How can I find the number of T's in the picture ?
I'm not after MATLAB code; however, I would appreciate an algorithm or
some kind of explanation of how to approach the problem.
Regards
Simple template matching would probably do it. You simply cut out one of the Ts and then find the RMS error signal for each shift of the template (the T).
Pseudocode (essentially MATLAB), where T is the Tsizex-by-Tsizey template:
for x = 1:(size(image,1) - Tsizex + 1)
    for y = 1:(size(image,2) - Tsizey + 1)
        err(x,y) = sqrt(sum(sum((T - image(x:x+Tsizex-1, y:y+Tsizey-1)).^2)));
    end
end
errBinary = err < detectionThreshold;
Now, each 1 in errBinary is a detection. Depending on the resolution of the image, you might get a cluster of 1s for each T in the image. One way to fix that could be to iteratively pick a 1 and then clear all other 1s in its neighbourhood. In this way you are actually defining a limit for how close two Ts can be while still being detected as two individual Ts.
EDIT:
Explanation of template comparison:
Basically what this method does is compare the reference template (a small image of a T in this case) to every possible location in the original image. For every location, the error is calculated as a scalar RMS value of the difference of the two. So the two for-loops simply pick every possible sub-image of the template's size from the original image and use them to build an error surface. A small value in this surface means a good match between the template and the sub-image at that particular location. The location of the match in the original image corresponds to the location of the minimum in the error surface.
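As a sketch of the same idea using normalized cross-correlation (normxcorr2) instead of an RMS error surface - the synthetic 'T' template and the 0.99 threshold are just placeholders to keep the example self-contained:
T = zeros(15);  T(2, 3:13) = 1;  T(2:13, 8) = 1;    % a small synthetic 'T' template
img = zeros(100);
img(10:24, 10:24) = T;                              % place two copies of the T
img(50:64, 70:84) = T;
c = normxcorr2(T, img);                             % correlation surface (peaks at matches)
peaks = c > 0.99;                                   % threshold the peaks
cc = bwconncomp(peaks);                             % cluster neighbouring hits
numTs = cc.NumObjects;                              % number of detected Ts
High peaks in the correlation surface play the same role as low values in the error surface above.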
Regards

Algorithm to decrypt data with drawn strokes

Let's say I have an encrypted file on an iPhone and every time I want to decrypt it, I want to "draw" a decryption symbol instead of having to use a keyboard to type it in.
If you ask the user to draw a symbol to decrypt a file every time it is needed (e.g. every time they launch your application), they would probably prefer it to typing a 20-character or so password on the tiny keyboard, and they would still get the security a 20-character password would give them (depending on how complicated the shape/symbol they draw is).
The symbol they would draw would most likely be one stroke (e.g. it's over once you lift your finger) but can be very complex, such that it's hard for someone else to repeat it, even if they do see you draw it. Kind of like how every person's signature is unique and hard to duplicate. Actually, this may just overly complicate things if it had to resist duplication, so for now this can be ignored and we can assume that the symbol will not be seen by someone else, and thus it doesn't matter whether it could be repeated by them or not.
I guess the real question is how would you convert the same (reasonably) stroke consistently to the same key (e.g. hash value). There should obviously be some threshold of forgiveness in the algorithm, because the user can't be expected to repeat the stroke exactly 100%.
Using the symbol as a decryption method adds a whole other dimension to this problem. You never want to store the generated hash value anywhere in unencrypted form, because then someone might be able to access that part of the hard drive, get the decryption key without needing to go through the whole drawing process, and decrypt the file manually. You also most likely don't want to store anything about how the shape is drawn.
A good example of a stroke that a user might use as their decryption symbol is the "&" symbol. Imagine a user drawing this symbol on their iPhone every time they need to decrypt a file. The size of the symbol might not be the same every time it is drawn. Also, the rotation of the symbol may be different depending on how the user holds their device. Ideally, in both cases, because the symbol was drawn, relative to the user strokes, the same, it should be able to generate the same hash value and thus decrypt the file.
I thought of something like shape or character recognition as a similar algorithm, where the user draws something (reasonably representing a shape) and it is then snapped to the correct shape, which would have the same hash value every time it is drawn. However, for something like this you would most likely need a database of shapes that can be drawn, and if you choose something like all the letters in the alphabet, you only get 26 letters. And assuming the user should only need to draw one symbol to decrypt the file, you have an extremely insecure password with only 26 possibilities.
Another thing I thought of is you could break up the symbol that is drawn into tiny segments and then run symbol recognition on those. So imagine that you have 4 symbols in a database: vertical line, horizontal line, and diagonal in both directions. Now as the user draws, each segment is recognized as one of these, and then they are all combined to form some hash value. So imagine the user chose as their decryption symbol the lowercase letter "r". So they would begin by drawing a vertical line down, followed by a vertical line up, followed by a diagonal line up and to the right. One problem with this method is how would you know when to split up the stroke into individual segments? You would probably also want to take into account how long each individual segment is roughly (e.g. in increments of 40 pixels). That way if someone drew a deformed "r" where the hump comes out near the bottom it isn't recognized as the same symbol and thus wouldn't decrypt the file.
A third method might be dividing the screen up into a grid (not sure which size yet) and simply seeing in which cells the stroke is drawn and using this data somehow to generate a string.
Any other ideas of how this could be implemented? Have you ever heard of something like this? Are there any fundamental flaws that would prevent a system like this from working?
Thanks
I would try a variation of the segmentation approach: recognize simple patterns - I'll stick to straight and diagonal lines for this, but in theory you could also add circles, arcs and perhaps other things.
You can be quite sure when one line ends and another one starts, as there are 8 directions and you can detect a direction change (or, for a simpler approach, just detect pen up and pen down and use them as line delimiters). The first line gives a scale factor, so the length of every other line can be represented as a factor (for example, in a usual L shape, the first vertical line would give the "base length" b and the other line would then have a length of roughly 0.5 * b). After the user is finished, you can use the smallest factor s to "round" the lengths, so that you'll have an array of integer lengths like [1 * s, 2 * s, 4 * s, 5 * s]. This prevents the system from being too exact, and using the base length makes it robust against scaling.
Now somehow convert this information (lengths and directions) to a string (or a hash value, whatever you like) and it will be the same for the same strokes, even if the symbol is translated or scaled.
Additionally, you can store a 2D offset value (of course "rounded", too) for every line after the second line, so that the lines also have to be at the same relative positions; if you don't do this, L and T will most likely get the same string (one line up-down, one line left-right of length 0.5). So storing positions strengthens the whole thing a bit, but it is optional.
EDIT:
If you take the angle of the first line as a base angle, you can even make this robust to rotation.
Please note that this algorithm only gives 3 bits per stroke if all lines are of the same length and a maximum of perhaps up to 6-8 bits per stroke, some more if you store positions, too. This means you'd need a quite complex symbol of about 20-40 strokes to get 128 bits of security.
An easy way to add more variation/security would be to let the user use different colors from a given palette.
To reduce the risk of someone watching you, you could make each line disappear after it has been drawn or change the color to a color with a very low contrast to the background.
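A minimal sketch of the direction/length idea above (MATLAB only for consistency with the rest of this page; the half-step rounding and the input format are assumptions):
function key = strokeToKey(pts)
    % pts is an N-by-2 array of [x y] samples for one stroke (assumed input)
    d = diff(pts);                           % point-to-point deltas
    d = d(any(d, 2), :);                     % drop zero-length steps
    ang = atan2(d(:,2), d(:,1));             % angle of each step
    dir = mod(round(ang / (pi/4)), 8);       % quantize to 8 directions (0..7)
    change = [true; diff(dir) ~= 0];         % starts of same-direction runs
    segDir = dir(change);                    % direction of each line segment
    segLen = diff([find(change); numel(dir) + 1]);    % run length of each segment
    rel = round(segLen / segLen(1) * 2) / 2; % lengths relative to the first line, rounded
    key = sprintf('%d:%g;', [segDir.'; rel.']);       % e.g. '2:1;0:0.5;' -> hash this
end
The resulting string (or a cryptographic hash of it) is unchanged when the symbol is translated or scaled; rotation invariance would additionally require subtracting the first segment's angle before quantizing, as noted in the edit above.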
The problem of encrypting data with key material that may have small errors has been studied quite extensively.
In particular, there are a number of proposals for protecting data using biometric data (e.g. fingerprints or a retina scan) as a key. A typical approach is to use an appropriate error-correcting code, take your original key material K, compute its syndrome and store only the syndrome. Once you get a second reading of your key material K', the syndrome can be used to restore K from K' if K and K' are close enough (where "close enough" of course depends on the error correction scheme).
To get you started, here is a paper proposing a fuzzy vault scheme. This is a general proposal for an encryption scheme using a "fuzzy" key. Of course, you still need to examine how to extract characteristics from drawings that are stable enough for using such an error correction scheme. You will also have to examine how much entropy you can extract from such drawings. As bad as passwords are with respect to entropy, they may still be hard to beat.
Handwriting recognition often takes the duration of the stroke into account more than the actual length and such.
While it relates to pressure sensitivity, I think you may be able to see some conceptual similarities to what you are thinking of here: jdadesign.net/safelock/
That's not exactly the same topic, but it's the closest thing that comes to mind at the moment.
I don't think that you could get enough "bits" from a hand-drawn symbol to perform secure encryption. As you note, you have to allow enough slop in the recognition that the natural variations in drawing will be tolerated. In other words, you have to discard noise in the strokes, smoothing them into a reproducible signal. But it is precisely noise (high entropy) that makes for better cryptographic keys.
Think of it this way. If you did decompose the gesture into segments of up, down, left, and right, each segment would represent 2 bits of information. For a 128-bit AES key, the symbol would need 64 such segments. That's a pretty complicated gesture to remember. And if it's simplified by repeating many segments in a row ("right, right, right, ...") it makes a lousy (predictable, non-random) key.
I've had another think about this. I'm not a comp-sci person, but would something like this work?
Let's say someone draws a symbol or "pattern" of whatever kind. The only viable thing you're left with to analyze are the points of the pattern generated in the touchBegan, touchMoved and touchEnded events.
So... let's take all the points generated, be it 100 or 1,000,000, it doesn't really matter.
Divide them into groups, as many groups as you want. The more the merrier, I assume, but for this example let's put them into 4 groups. With 100 points, group 1 would contain points 1-25, group 2 points 26-50, and so on.
For each group, use all the points to calculate an average position.
It may work better if the canvas space is divided into a grid, and the 'average positions' get snapped onto their nearest grid coordinate.
Then check the relative distances between all the groups: 1-2, 1-3, 1-4, 2-3, 2-4 and 3-4.
You now have a set of distinct points, and information about those points, with which to generate a key. The averages and the grid should help smooth out some, if not all, of the variation between attempts.
You may have to ask the user to draw their pattern a few times, and compare each group against groups from previous attempts. That way, you can identify which groups the user can plot consistently. It has the added benefit of training the user's hand at drawing their pattern.
I suspect that the more points and groups you have, the more accurate this will be.
In fact, I'm going to give it a try myself.
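Here is a rough MATLAB sketch of the grouping idea described above (the random stand-in stroke, the four groups and the 20-pixel grid are all just example values):
pts = cumsum(randn(100, 2));                  % stand-in for the recorded touch points
nGroups = 4;
gridStep = 20;                                % snap averages to a 20-pixel grid
edges = round(linspace(0, size(pts,1), nGroups + 1));
centers = zeros(nGroups, 2);
for g = 1:nGroups
    chunk = pts(edges(g)+1:edges(g+1), :);    % this group's share of the points
    centers(g,:) = round(mean(chunk, 1) / gridStep) * gridStep;
end
dists = [];
for i = 1:nGroups-1                           % pairwise distances: 1-2, 1-3, 1-4, 2-3, 2-4, 3-4
    for j = i+1:nGroups
        dists(end+1) = norm(centers(i,:) - centers(j,:)); %#ok<AGROW>
    end
end
The vector dists, suitably quantized, is what you would then hash into a key; repeating the capture a few times, as suggested, shows which groups are stable enough to use.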
Gestures.
http://depts.washington.edu/aimgroup/proj/dollar/
You can define your own algorithms for particular gestures, e.g. a circle:
1. Find the start point.
2. Find the most-left, most-right and furthest points and get an approximate radius.
3. Check all the points against the radius with a margin of error (25%?).
4. If the radius checks out, you have a circle.
Vertical straight line:
1. Check the start point and end point X and Y positions.
2. Compare the in-between points against the X and Y of the start and end.
3. If they're roughly on the same X coordinate, but with ascending or descending Y coordinates, you have a vertical line.
And so on, getting more complex for more complicated gestures.
You can even combine gestures. So let's say you have an algorithm for 6 gestures. You can combine them to form different symbols. The order in which the gestures are created could be important, adding an extra layer of security.
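As a quick sketch of the circle check from the list above (in MATLAB for illustration; the noisy synthetic stroke is a stand-in, and the 25% tolerance follows the steps as written):
t = linspace(0, 2*pi, 80).';
pts = 50 + 30 * [cos(t) sin(t)] + randn(80, 2);     % stand-in for a drawn circle
center = (max(pts) + min(pts)) / 2;                 % middle of the bounding box
radius = mean(max(pts) - min(pts)) / 2;             % approximate radius
r = sqrt(sum(bsxfun(@minus, pts, center).^2, 2));   % distance of every point from the center
isCircle = all(abs(r - radius) <= 0.25 * radius);   % 25% margin of error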
What if you took all of the x,y coordinates of the stroke and performed some sort of linear two-way operation on them? You could then compute an 'approximate' hash, and if the number calculated when the stroke is drawn is within, say, 10% of your approximation, then you grant access.
It all depends on what kind of attack you are trying to prevent. If you want full-on encryption, where you assume that the attacker has full access to the encrypted file, then you'll need quite a few bits of entropy to achieve a decent level of protection. Assuming you get the algorithms right, you can take two to the power of the input entropy in bits (the upper limit for this is the number of different possible inputs), multiply by the amount of time the key-setup procedure takes, divide by how much more computing power the attacker has, and get the time the attacker needs to break your encryption by brute force.
For example, something like the 9-cell pattern-unlock method of Android might get you around 16 bits of entropy. Let's assume you use 5 seconds of CPU time to calculate the encryption key. Then with an average PC it takes 5 * 2^16 / 20 seconds, or about 4.5 hours, to crack. Any loss of entropy in the input or inefficiency in the key setup and encryption will quickly take that down to minutes, not to mention if clusters of computers are used.
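Spelling out that arithmetic (same assumed numbers: 16 bits of entropy, 5 seconds of key setup, an attacker 20 times faster):
bits = 16;  setupSeconds = 5;  attackerSpeedup = 20;
hoursToBruteForce = 2^bits * setupSeconds / attackerSpeedup / 3600   % roughly 4.55 hours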
To be frank, that won't be much better than just storing the file in an obscure file format and hoping no one figures it out.
