SimpleCV gauge needle angle - simplecv

Currently working on a project to read the gauges on our chiller. I'm not a programmer by trade, so I'm trying to learn as I go, but SimpleCV's documentation isn't that great (IMO...)
At the moment I'm doing a findLines on each image, and it sort of works, but it will occasionally find a "line" on the edge of the gauge itself, or return some other weird result.
What I'd like to do is paint the gauge pivot one color and the tip of the needle another color, and measure the angle between the two. I think I have the color blob detection figured out, but I can't figure out how to measure the angle.
Anyone have any ideas? All I need is the angle to be returned, the BMS system will accept the angle reading and do the scale conversion itself, so that isn't a problem.

One of the core SimpleCV developers here. Sorry the docs aren't up to snuff.
I think painting the gauge will probably make it easier, though you may not even need to.
I whipped up an example here so you can see the image output along the way as well:
http://nbviewer.ipython.org/github/xamox/sandbox/blob/master/gas-gauge-angle/Gauge%20Angle.ipynb
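If you do go the paint route, here is a minimal sketch (not the notebook's method) using plain OpenCV: threshold the two painted colors, take the blob centroids, and compute the needle angle with atan2. The file name and HSV ranges are placeholders you would tune to your actual paint colors.

```python
# Sketch: locate the painted pivot and needle-tip blobs, then measure the angle.
# HSV ranges below are placeholders; adjust them to the paint you use.
import math
import cv2
import numpy as np

def blob_center(hsv, lo, hi):
    """Centroid of the pixels within an HSV range (None if nothing matches)."""
    mask = cv2.inRange(hsv, np.array(lo, np.uint8), np.array(hi, np.uint8))
    m = cv2.moments(mask)
    if m["m00"] == 0:
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])

img = cv2.imread("gauge.jpg")
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

pivot = blob_center(hsv, (0, 120, 70), (10, 255, 255))    # red-painted pivot
tip   = blob_center(hsv, (50, 120, 70), (70, 255, 255))   # green-painted needle tip

if pivot and tip:
    dx, dy = tip[0] - pivot[0], tip[1] - pivot[1]
    # Negate dy because image rows grow downward; 0 degrees points right,
    # angles increase counter-clockwise, result in (-180, 180].
    angle = math.degrees(math.atan2(-dy, dx))
    print("needle angle: %.1f degrees" % angle)
```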

Related

How to distinguish light source from light reflection?

I am trying to distinguish car lights from light reflections off the street in night-time images. For example, in an image like this:
I tried changing to some other color spaces, but it didn't work. For example, cvtColor(image, gray, CV_HSV2BGR_FULL); made it like:
However, in this post it works fine.
Is there any way I can do something similar for this image? I am using OpenCV 3.1 with C++ on Windows (Python would also be great).
In general, a reflection is a light source. As far as light propagation is concerned, the light source is the direction the photons are coming from. It might be a source or a reflection.
Thus, a better view on the problem would be to think more about what you are trying to detect. In the post you referenced, the problem is much simpler, since the reflections are not blown out (reaching maximum brightness), while the light sources are (or they are close enough). In your image, due to the low quality of the capture, the reflections and the light sources have similar brightness.
To emphasize why this is a problem, your approach is a local approach. Think about a very small patch (e.g. 3x3 pixels) from your image. Could you tell if it's from a reflection or not?
Therefore, a better view would be to think about shapes. You want to detect car lights, which seem to be round, of about the same size and white.
I would suggest something like a blob detector finely tuned to your problem, or using your technique to threshold the image, then running a connected component analysis (CCA) and measuring the circularity and size of the components.
You can also consider doing some cropping/filling in the city lights with black since the camera seems fixed.
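A rough sketch of that suggestion, assuming near-saturated pixels are the candidates; the threshold, size, and circularity limits are guesses to tune on your real frames.

```python
# Threshold -> contours (as a simple CCA) -> keep roughly circular, light-sized blobs.
import cv2
import numpy as np

img = cv2.imread("night_scene.jpg", cv2.IMREAD_GRAYSCALE)
_, bw = cv2.threshold(img, 200, 255, cv2.THRESH_BINARY)   # near-saturated pixels only

# [-2] keeps this working across OpenCV 3.x and 4.x return signatures.
contours = cv2.findContours(bw, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]

lights = []
for c in contours:
    area = cv2.contourArea(c)
    if not (20 < area < 2000):            # discard tiny noise and huge glare patches
        continue
    perimeter = cv2.arcLength(c, True)
    circularity = 4 * np.pi * area / (perimeter * perimeter + 1e-6)
    if circularity > 0.6:                 # 1.0 would be a perfect circle
        (x, y), r = cv2.minEnclosingCircle(c)
        lights.append((int(x), int(y), int(r)))

print("candidate car lights:", lights)
```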

Removing skew/distortion based on known dimensions of a shape

I have an idea for an app that takes a printed page with four squares, one in each corner, and allows you to measure objects on the paper, given that at least two squares are visible. I want a user to be able to take a picture from a less-than-perfect angle and still have the objects measured accurately.
I'm unable to figure out exactly how to find information on this subject due to my lack of knowledge in the area. I've found examples of OpenCV code that do some interesting transforms and the like, but I haven't yet figured out how to phrase what I'm asking in simpler terms.
Does anyone know of papers or mathematical concepts I can look up to get further into this project?
I'm not quite sure how or who to ask other than people on this forum, sorry for the somewhat vague question.
What you describe is very reminiscent of augmented reality marker tracking. Maybe you can start by searching these words on a search engine of your choice.
A single marker, if designed correctly, can be identified without being confused with other markers AND can be used to determine how the surface is placed in 3D space in front of the camera.
But that's all very difficult and advanced stuff; I'd strongly advise you NOT to try to implement something like this yourself, as it would take years of research... The only realistic option is to use a ready-made open-source library that outputs the data you need for your app.
One may not even exist; in that case you'll have to buy one. Given how niche your problem is, that would be perfectly plausible.
Here I cover only the programming aspect; if you want, you can learn about the mathematical side from these examples. Most of the functions you need are available in OpenCV. Here are some examples in Python:
To detect the printed paper, you can use the cv2.findContours function. The outermost contour is probably the paper, but you need to test on actual images. https://docs.opencv.org/3.1.0/d4/d73/tutorial_py_contours_begin.html
If the page is skewed (not at a perfect angle), you can find the angle with cv2.minAreaRect, which returns the angle of the contour found above. https://docs.opencv.org/3.1.0/dd/d49/tutorial_py_contour_features.html (part 7b).
If you want to rotate the paper, use cv2.warpAffine. https://docs.opencv.org/3.0-beta/doc/py_tutorials/py_imgproc/py_geometric_transformations/py_geometric_transformations.html
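Putting those three steps together, a rough sketch under the assumptions that the page is the largest bright contour and that an affine rotation is enough (a perspective warp may be needed for stronger tilt); the file name and threshold are placeholders.

```python
# Find the page contour, read its skew angle from minAreaRect, level it with warpAffine.
import cv2

img = cv2.imread("page_photo.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, bw = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# [-2] keeps this working across OpenCV 3.x and 4.x return signatures.
contours = cv2.findContours(bw, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
page = max(contours, key=cv2.contourArea)          # assume the page is the largest contour

(cx, cy), (w, h), angle = cv2.minAreaRect(page)    # rotated bounding box of the page
if w < h:                                          # minAreaRect reports angles in [-90, 0),
    angle += 90                                    # so flip to the orientation you want

rows, cols = img.shape[:2]
M = cv2.getRotationMatrix2D((cols / 2.0, rows / 2.0), angle, 1.0)
deskewed = cv2.warpAffine(img, M, (cols, rows))
cv2.imwrite("deskewed.jpg", deskewed)
```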
To detect the objects on the paper, there are several methods. The easiest is to use the contours above. If the objects have distinctive colors, you can detect them with a color filter. https://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_imgproc/py_colorspaces/py_colorspaces.html
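And a minimal color-filter sketch, assuming the objects are strongly red; the HSV range is only an example to adjust for your actual colors.

```python
# Keep only pixels inside an HSV range (a simple color filter).
import cv2
import numpy as np

img = cv2.imread("page_photo.jpg")
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv, np.array([0, 120, 70], np.uint8),
                        np.array([10, 255, 255], np.uint8))
objects_only = cv2.bitwise_and(img, img, mask=mask)   # everything else goes black
```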

Inputting a picture as a 2D matrix

I have an assignment about working with pictures that depth cameras take.
First things first, this is what one of these pictures looks like.
The way the camera works is that it represents objects in different tones depending on how far they are from the camera lens. The closer something is, the brighter it appears; farther means darker, of course.
My assignment is that I basically need to make a people counter for a depth camera that is located in an elevator.
My idea is to convert the image to a 2D matrix of numbers and then apply a local-maximum-finding algorithm to find the heads sticking out towards the camera, so I can count the people that way.
So, my questions are -
Is this an okay approach for someone who is not very experienced?
What tools should I use for the conversion?
Can I do all of this in C or should I use something more advanced?
I haven't had much experience outside of C but I'm open to trying maybe C# if it provides more advanced and better tools for the job.
All advice appreciated!
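To make the local-maximum idea concrete, here is a minimal sketch in Python with OpenCV/NumPy (the same logic can be written in plain C over a 2D array); the file name, kernel sizes, and thresholds are all guesses that would need tuning for a real elevator camera.

```python
# Treat the depth image as a 2D array, smooth it, and call a pixel a "head"
# candidate if it equals the maximum of its neighborhood and is bright enough
# (bright = close to the overhead camera).
import cv2
import numpy as np

depth = cv2.imread("elevator_depth.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
smooth = cv2.GaussianBlur(depth, (21, 21), 0)                     # suppress sensor noise

neighborhood = cv2.dilate(smooth, np.ones((51, 51), np.uint8))    # local-max filter
peaks = (smooth == neighborhood) & (smooth > 120)                 # head-height threshold

ys, xs = np.nonzero(peaks)
# Crudely merge peaks that fall in the same 50x50 px cell before counting.
print("approximate people count:", len(set(zip(ys // 50, xs // 50))))
```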

Detecting the release of a ball in real time

I'm working on a project where I'm capturing people making free throw shots via a video camera. I need a way to detect, as fast as possible, the instant the ball is released from a player's hand. I tried researching a lot of detection/tracking algorithms, but everything I've found seemed more suited to tracking the ball itself. While I may eventually want to do that, right now all I need to know is the release timing.
I'm also open to other solutions that don't use the camera (I have a decent budget), but of course I'd like to use the camera if possible/fast enough. I'm also able to mess with the camera positioning/setup, and what I even want in the FOV.
Does anyone have any ideas? I'm pretty stuck right now, and haven't been able to find anything online that can help.
A solution is to use visual markers (motion trackers) on the throwing hands and on the ball. The precision is based on the FPS of the camera.
The assumption is that you know the ball's dimensions and the hand's grip on the ball, which may vary. By using visual markers/trackers you can know the position of the ball relative to the hand. When the distance between the hand's grip point and the center of the ball becomes greater than the distance between the center of the ball and its extremity, that is your release. Schema of the method
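A tiny sketch of that distance test, assuming you already have per-frame pixel positions for the hand marker and the ball-center marker; the radius and margin values are placeholders.

```python
# Flag the first frame where the hand marker is farther from the ball center
# than the ball's own radius (plus a small margin), i.e. the ball has left the hand.
import math

def detect_release(frames, ball_radius_px, margin=1.1):
    """frames: iterable of (frame_index, (hand_x, hand_y), (ball_x, ball_y)) in pixels."""
    for i, hand, ball in frames:
        d = math.hypot(hand[0] - ball[0], hand[1] - ball[1])
        if d > ball_radius_px * margin:
            return i          # first frame after release
    return None               # no release detected in this clip
```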
A better solution is to use a graded meter bar (alternating black and white bars, like the ones used on the MythBusters show to track the speed of objects). The moment there is a color gap between the hand and the ball, you have your release. The downside of this approach is that you have to capture the image from a side or top-down angle and use panels to hold the graded bar.
Your problem is similar to the billiard ball collision detection. I hope you find this paper helpful.
Edit:
There is a powerful and not overly expensive tool called the Microsoft Kinect that is used for motion capture. The downside of this tool is that its camera works at 30 fps and it cannot be used accurately in a very sunny scene. However, I found a scientific paper about using the Kinect to record athletes, including free throws in basketball. Paper here
This is my first answer on SO. Any feedback on how to improve my future answers is appreciated.

Visualization of river- animation via code

I am trying to visualize river flow: basically, I should be able to show the current direction and speed based on a user-defined external parameter. This is to demonstrate vectors in two dimensions for educational purposes, so the animation quality can be minimal ('tolerable enough').
I tried a simplistic approach with a blue background and lines indicating the currents; it comes out very weak and below even my low standards!!
Can someone point out a good example or approach for achieving this? Thanks.
You can create an image filter that looks like water. Look at Jerry's image filters, specifically the caustic filter. You could animate it moving from one end of the river to the other, and also experiment with varying the time parameter. Since it's open source, you can translate it to other languages.
Here are some links to 3d visualizations.
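If a plain 2D vector-field animation is "tolerable enough", here is a minimal sketch with matplotlib's quiver (a different route from the caustic filter above); the field itself and the speed scaling are made up purely for illustration.

```python
# Draw the river as a 2D vector field where a user-supplied "speed" parameter
# scales the arrows and drives a simple animation of the current.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

speed = 2.0                                   # user-defined external parameter

x, y = np.meshgrid(np.linspace(0, 10, 20), np.linspace(0, 3, 8))
u = np.ones_like(x) * speed                   # flow mostly along the river (+x)
v = 0.3 * speed * np.sin(x)                   # gentle meander across the channel

fig, ax = plt.subplots()
ax.set_facecolor("#9ecfff")                   # water-blue background
arrows = ax.quiver(x, y, u, v, color="white")

def update(t):
    # Wiggle the cross-channel component over time so the current visibly "moves".
    arrows.set_UVC(u, 0.3 * speed * np.sin(x + 0.3 * t))
    return arrows,

anim = FuncAnimation(fig, update, frames=60, interval=100, blit=False)
plt.show()
```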
