Creation of GLCM texture images using MATLAB - image

I am quite new to image processing and I have a question about using a MATLAB function (graycoprops) like a filter.
The problem is that I want to process the image using graycoprops, and to do that I first need to create the GLCM with graycomatrix.
Doing this for the whole image is easy, but how can I do it for a small region (e.g. 3x3), like a filter?
I was thinking something like colfilt could work, but I have no idea how to take the block values each time and feed them to graycomatrix and graycoprops.
Any help would be much appreciated, as I have been stuck for many hours!

GLCM Code
You might be interested in the code I have on GitHub for GLCM with sliding windows. It allows a lot of customization, and you can adjust the window size.
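For readers who want to see the mechanics, the sliding-window GLCM idea can be modeled without any toolbox. Below is a minimal pure-Python sketch; the function names are ad-hoc analogues of graycomatrix/graycoprops, not their actual implementations, and it assumes the image is already quantized to a small number of gray levels:

```python
def glcm(window, levels, dx=1, dy=0):
    """Gray-level co-occurrence matrix for one window, for offset (dx, dy)."""
    h, w = len(window), len(window[0])
    m = [[0] * levels for _ in range(levels)]
    for y in range(h):
        for x in range(w):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                m[window[y][x]][window[ny][nx]] += 1
    return m

def contrast(m):
    """graycoprops-style 'Contrast': normalized sum of p(i,j) * (i-j)^2."""
    total = sum(sum(row) for row in m) or 1
    return sum(m[i][j] * (i - j) ** 2
               for i in range(len(m)) for j in range(len(m))) / total

def sliding_glcm_contrast(img, win=3, levels=4):
    """Contrast map: one GLCM per win x win neighborhood (valid region only)."""
    h, w = len(img), len(img[0])
    out = []
    for y in range(h - win + 1):
        row = []
        for x in range(w - win + 1):
            block = [r[x:x + win] for r in img[y:y + win]]
            row.append(contrast(glcm(block, levels)))
        out.append(row)
    return out
```

In MATLAB terms, the inner loop is what a colfilt/blockproc callback would do: build the GLCM for each block, then reduce it to one texture value per pixel.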

Related

Three.js How to increase canvas-text texture quality

What parameters, modes, tricks, etc. can be applied to get sharp text?
I'm going to draw a lot of labels, so I can't use 3D text.
I'm using a canvas to write the text and some symbols; I'm creating something like an information label.
Thanks
This is no simple matter, since you'll run into memory issues with 100k "font textures". With 100k text elements you'll have several difficulties to manage. I had a similar problem once and combined a few techniques to make it work. Simply put, you need some sort of LOD ("Level of Detail") scheme. The setup might look like the following:
A THREE.ParticleSystem built with BufferGeometry, where every position is one text position
One "high-res" TextureAtlas with 256 images on it, which you allocate dynamically with the images around the camera (4096px x 4096px, holding 256x256px images)
At least one "low-res" TextureAtlas with 16x16px images, prepared beforehand. Same size as the previous one, but holding a preview image of every text, each 16x16px
A kd-tree data structure for a nearest-neighbour algorithm, to figure out which positions are near the camera (like http://threejs.org/examples/#webgl_nearestneighbour)
The sub-imaging module to continually replace high-res textures directly on the GPU: https://github.com/mrdoob/three.js/pull/4661
An index for every position telling it which slot on the TextureAtlas to use for display
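The dynamic atlas bookkeeping in the second bullet can be sketched independently of three.js. Here is a minimal Python model of an LRU slot allocator (the class and method names are hypothetical, not three.js API):

```python
from collections import OrderedDict

class AtlasAllocator:
    """Maps text ids to slots on a grid-shaped texture atlas (e.g. 16x16 =
    256 slots of 256x256px on a 4096px atlas), evicting the least-recently-
    used slot when the atlas is full."""

    def __init__(self, slots=256, grid=16):
        self.slots = slots
        self.grid = grid
        self.used = OrderedDict()  # text_id -> slot index, in LRU order

    def acquire(self, text_id):
        """Return (slot, evicted_id); evicted_id is None unless a slot was reused."""
        if text_id in self.used:
            self.used.move_to_end(text_id)      # refresh LRU position
            return self.used[text_id], None
        evicted = None
        if len(self.used) < self.slots:
            slot = len(self.used)               # atlas not full yet
        else:
            evicted, slot = self.used.popitem(last=False)  # evict oldest
        self.used[text_id] = slot
        return slot, evicted

    def cell_offset(self, slot):
        """(col, row) of a slot on the atlas grid, for computing UVs."""
        return slot % self.grid, slot // self.grid
```

Each acquired slot maps to a cell offset on the atlas, which you would turn into UV coordinates for the particle's shader; an eviction tells you which high-res sub-image to overwrite on the GPU.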
You see where I'm going. Here are some notes on my experiences:
The Stack Overflow post: Display many thousand images in three.js
The blog where I began to explain what I was doing: http://blogs.fhnw.ch/threejs/
Even so, it will take quite some time until you have satisfying results. The only way to simplify this is to get rid of the 16x16px preview images, but I wouldn't recommend that... Or perhaps something depending on your setup: maybe you have levels? Towns? Or any other structure where it would make sense to display only a portion of these texts? That might be worth a thought before tackling the big thing.
If you plan to really work on this and make it happen the way I described, I can help you with some existing code and further explanations. Just tell me where you're heading :)

Using Octave/Matlab to combine multiple blobs in close proximity into one blob

I am currently working on a project that analyzes traffic-camera video streams. I have written an algorithm in Octave (a .m file) that returns the outlines of multiple cars as individual blobs, along with each blob's outline and center coordinates. Specifically, for the blob part, I am using bwlabel and bwboundaries on a binary image I have created where 1 is car and 0 is background, according to the rest of my algorithm. The image below shows this.
http://imgur.com/25hgrUP
All of the blobs are cars, including the group of about 5 blobs next to each other. Those blobs are all one van, but its different colors and features have thrown off the detection system. Does anyone know of a way to easily combine blobs that are in close proximity into one blob? I am asking about an existing algorithm or function in the Octave packages or Matlab toolboxes. If not, I will write the code from scratch and make it happen. This question is just a call to ask whether pre-existing solutions exist, not a call to write code for me, unless you want to :).
Thanks for your help,
AeroVTP
You can solve this (to an extent) with morphological closing; in Matlab it's imclose. You'll need to be careful, though: noise that is too close may be merged in, and true blobs that are too far apart may be left out.
Although imclose is a good idea, running it once has a much "stronger" effect on the image than running 'dilate' and 'erode' multiple times yourself. I personally ran the dilate and erode functions 5 times each, in succession, to get the best results for the image above. Running the dilate and erode commands separately gives more control over the processing.
imclose is essentially a dilation followed by an erosion. In Octave, you can apply them repeatedly with bwmorph:
editedImage = bwmorph(initialImage, 'dilate', 5);  % 5 = number of iterations
editedImage = bwmorph(editedImage, 'erode', 5);
Thanks to wbest for the initial imclose idea.
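The dilate-then-erode sequence can be modeled in a few lines of plain Python, which is handy for checking how many iterations are needed to merge a given gap (a sketch of the idea with a 3x3 structuring element, not the bwmorph implementation):

```python
def dilate(img):
    """Binary dilation with a 3x3 structuring element."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            out[y][x] = int(any(
                img[y + dy][x + dx]
                for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                if 0 <= y + dy < h and 0 <= x + dx < w))
    return out

def erode(img):
    """Binary erosion with a 3x3 structuring element (border treated as 0)."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = int(all(img[y + dy][x + dx]
                                for dy in (-1, 0, 1) for dx in (-1, 0, 1)))
    return out

def close_n(img, n):
    """n dilations followed by n erosions: merges blobs within roughly 2n px."""
    for _ in range(n):
        img = dilate(img)
    for _ in range(n):
        img = erode(img)
    return img
```

Two blobs separated by a 3-pixel gap survive as one blob after close_n(img, 2), which is the kind of tuning the answer above did by hand with 5 iterations.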

Background Subtraction with OpenCV 2

Is there any OpenCV 2.4.2 function or class that implements an Image Background Subtraction algorithm?
The images I will be processing will be portraits, such as images from a webcam. I want to keep only the upper part of the body (from the chest upwards) and remove everything else in the background.
If possible I'd like to avoid implementing these algorithms myself; however, if none exist in OpenCV, please suggest some alternatives.
I tried to look for an answer, but so far everything I've found deals only with background subtraction in videos or in sequences of pictures, and that's not what I want. I want to process single, static, independent images only.
You can try the running-average method for this, using the cvRunningAvg function in OpenCV (or any other library).
In this approach, you take the average of many frames as the background. If the camera is static, it gives a good result, especially if some frames contain only the background.
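The per-pixel update behind the running average is simple enough to model without OpenCV (it is cvRunningAvg in the legacy C API, accumulateWeighted in the newer API); a pure-Python sketch on grayscale frames:

```python
def running_average(frames, alpha=0.05):
    """Exponentially weighted background estimate over grayscale frames:
    bg = (1 - alpha) * bg + alpha * frame, per pixel. This is the idea
    behind OpenCV's cvRunningAvg / accumulateWeighted."""
    bg = [[float(v) for v in row] for row in frames[0]]
    for frame in frames[1:]:
        for y, row in enumerate(frame):
            for x, v in enumerate(row):
                bg[y][x] = (1 - alpha) * bg[y][x] + alpha * v
    return bg

def foreground_mask(frame, bg, thresh=30):
    """Mark pixels that differ from the background estimate by more than thresh."""
    return [[int(abs(v - bg[y][x]) > thresh) for x, v in enumerate(row)]
            for y, row in enumerate(frame)]
```

Note that this needs multiple frames to build the background model, which is exactly why it does not fit the questioner's single-image case directly.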
It is also better to apply some morphological operations, like erosion and dilation, to remove small noise while grabbing frames.
You can read the code and explanation here : BackGround Extraction using Running Average

Morphing 2 faces images

I would like some help from the aficionados of openCV here.
I would like to know which direction to take (and some advice or a piece of code) on how to morph 2 faces together with a ratio, say 10% of the first and 90% of the second.
I have seen functions like cvWarpAffine and cvMakeScanlines but I am not sure how to use them.
So if somebody could help me here, I'll be very grateful.
Thanks in advance.
Unless the images compared are exactly the same, you will not get very far with this.
This is an artificial intelligence problem and needs to be solved as such. A typical solution involves:
Normalising the data in the images (removing noise, skew, ...)
Feature extraction (turning each image into a smaller set of data)
Using a machine learning algorithm (typically a classifier) to train on your matched data
Testing the result
Refining the previous steps according to the results until you get good recognition
The choice of OpenCV functions used depends on your feature extraction method. Have a look at Eigenface.
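Whatever pipeline you choose, the 10%/90% ratio from the question ultimately comes down to a weighted per-pixel blend applied after the faces are warped into alignment (OpenCV's addWeighted does this step); a minimal Python sketch:

```python
def cross_dissolve(img_a, img_b, ratio):
    """Blend two equally sized grayscale images: ratio of A, (1 - ratio) of B.
    This is the final step of a morph, applied after the faces are warped
    into alignment (e.g. with cvWarpAffine); on its own, without warping,
    it is just a fade between the two images."""
    return [[ratio * a + (1 - ratio) * b for a, b in zip(ra, rb)]
            for ra, rb in zip(img_a, img_b)]
```

A full morph would compute corresponding landmarks on both faces, warp each image toward an intermediate shape, and only then cross-dissolve with this ratio.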

Image to text conversion: how to crop single words to single files?

I need to do something similar to this
How to write a bash script that cuts images into pieces using image magick?
But I don't know in advance where the areas are or how big they are: I need to determine the "boxes" that contain each word, then crop each one and save it to its own file.
Most OCR software does something like this, so you could try looking at some source-code for an OCR program. Many years ago, I spent a lot of time with the code for GOCR (http://jocr.sourceforge.net/), which has a pretty simple-minded implementation of this algorithm.
If you don't want to write code, I'm not sure what to suggest. But if you can find software that chops images into pieces based on whitespace, you could try blurring the image (to merge the text into blobs), then thresholding and finding boxes from that. It's not clear that the results would be very useful, though.
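The blur-then-threshold approach reduces to finding bounding boxes of connected ink blobs; a minimal pure-Python sketch (a real pipeline would use OpenCV's findContours or similar on the thresholded image):

```python
def word_boxes(binary):
    """Bounding boxes (x0, y0, x1, y1) of 8-connected ink components.
    Intended to run on an image already blurred and thresholded so each
    word has smeared into a single blob (1 = ink, 0 = background)."""
    h, w = len(binary), len(binary[0])
    seen = [[False] * w for _ in range(h)]
    boxes = []
    for sy in range(h):
        for sx in range(w):
            if not binary[sy][sx] or seen[sy][sx]:
                continue
            # flood-fill this component, tracking its extent
            stack = [(sy, sx)]
            seen[sy][sx] = True
            x0 = x1 = sx
            y0 = y1 = sy
            while stack:
                y, x = stack.pop()
                x0, x1 = min(x0, x), max(x1, x)
                y0, y1 = min(y0, y), max(y1, y)
                for dy in (-1, 0, 1):
                    for dx in (-1, 0, 1):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and binary[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
            boxes.append((x0, y0, x1, y1))
    return boxes
```

Each resulting box can then be handed to ImageMagick's -crop (width x height + x0 + y0) to write one file per word.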
