Disclaimer: I'm a beginner, so this question may seem basic; I hope you understand.
I have to create a skin disease expert system using PHP. The point is matching two or more different images: the system compares images from a database/files with an image submitted by the user, and then asks the user who submitted the image some questions. The questions come from whichever database/file image roughly matches the user's image.
For example, this is an image from a user with scabies:
And this is a sample image from the database/files.
Now, how can I match/compare the images?
I have already read these questions: Image comparison - fast algorithm, Compare images to find differences,
Tool to compare images on Windows, Algorithm to compare two images, Algorithm fast compare images \ matrix,
and the articles at http://www.cs.ubc.ca/~lowe/keypoints/ (SIFT keypoint detector) and http://www.cmap.polytechnique.fr/~yu/research/ASIFT/demo.html (ASIFT, SIFT, MSER), but they seem to handle only the same scene photographed from different positions.
None of them helped me (or I didn't understand them, LOL).
I don't know much about the OpenCV library; can it handle this?
Please, I need your help. Thanks :).
Edit:
Maybe this image can explain:
The problem is in step 2.
You could do morphological analysis on the hue images, distinguishing between normal skin color and unhealthily red color. That is, go into HSV space, extract the H component, threshold it, and then analyze the size and shape of the white areas using e.g. successive erosion.
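A minimal sketch of that idea in Python with OpenCV (the file name, hue range, and erosion count are illustrative placeholders, not tuned values):

import cv2
import numpy as np

img = cv2.imread("skin.jpg")  # hypothetical input image
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
h = hsv[:, :, 0]

# Threshold the hue channel; reds wrap around 0 on OpenCV's 0-179 hue scale.
mask = np.where((h <= 10) | (h >= 170), 255, 0).astype(np.uint8)

# Successive erosion: how quickly the white areas shrink says something
# about their size and shape.
kernel = np.ones((3, 3), np.uint8)
m = mask.copy()
remaining = []
for _ in range(10):
    m = cv2.erode(m, kernel)
    remaining.append(int(cv2.countNonZero(m)))
print(remaining)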
However, the chances are pretty slim. You have a scale problem (i.e. you don't know how large the taken image is), you have the normal color/brightness normalization problems, and you have the additional problem of the large variations present in skin diseases.
This is a fairly hard problem, even for people who have studied image processing. If you don't have any prior experience in image processing (and if you are trying to use PHP for such a problem, you probably don't), prepare for a long learning process. Several months at least.
I don't really know about the medical side; the images were enough to make me sick. :)
However, I think you need to find the areas whose color differs from the healthy skin. So I recommend this link as a starting point. You can use "segmented particles" or "points at maxima" to figure out the count and density of the discolored spots on the skin, and this might be a guide to what the disease is. You can also get the color values of those points via "Results" in the same link.
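Outside ImageJ, a rough Python/OpenCV equivalent of the "segmented particles" counting could look like this (the HSV threshold and file name are placeholders that would need tuning):

import cv2
import numpy as np

img = cv2.imread("skin.jpg")  # hypothetical input image
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
# Placeholder threshold: keep strongly red hues.
mask = cv2.inRange(hsv, (0, 60, 60), (12, 255, 255))

# Label connected blobs and report each one's area and mean colour.
n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
print(n - 1, "blobs found")  # label 0 is the background
for i in range(1, n):
    area = stats[i, cv2.CC_STAT_AREA]
    mean_bgr = img[labels == i].mean(axis=0)
    print("blob", i, "area:", area, "mean BGR:", mean_bgr)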
PHASE I
Go on Google Images and upload your image. Google has a "search for similar images" feature and will try to make a match. Likely Google will just match you to other pictures of skin or body parts. Set the upper limit of your expectations to matching Google's results in image recognition. If that upper limit is not good enough...
PHASE II
Use an expert system (maybe 3 layers deep of question & answer to classify the condition); a minimal sketch of such a question tree follows the list below. Following is a list of skin conditions I have been working on. Of course you would need to put human-readable descriptions next to any medical terms.
Acne
Cyst/cysts
Infected cyst
Non-infected cyst
Acne cyst
Epidermal cyst
Myxoid cyst
Ganglion cyst
Synovial cyst
Sebaceous cyst
Helial cyst
Auricular
Hidradenoma
Syringoma
Hidradenitis
[...] Nevi/nevus
Pigmented
Congenital
Typical
Atypical / Dysplastic
Inflamed
Irritated
[other]
[...] Carcinoma
Basal cell
Superficial
Squamous cell
In situ
Squamous cell ((what does this mean??))
Other
Melanoma
In situ
Keratosis/keratoses
Actinic
Seborrheic
Irritated
Pigmented
Warty
[...] Verruca (wart)
Common
Genital
Condylomatous
Plantar
Digital
Periungal
Filiform
Palmar
Urticaria (hives)
Generalized
Vasculitic
Contact
Vasculitis
Allergic
Leukocytoclastic
[...] Dermatitis
Seborrheic
Eczematous
Eczematoid
Lichenoid
Psoriasiform
Pityriasiform
Nummular
Lichen simplex
Hypersensitivity
Dyshidrotic
Palmar-plantar
Psoriasis
Palmo
Plantar
Pustular
Erythrodermic
Hyperhydrosis
Lichen planus
Blistering disease
Pemphigoid
Pemphigus
Herpes simplex
Herpes zoster
Insect bite reaction
Lipoma
Excoriations / prurigo
Tinea [...] (fungus)
Versicolor
Pedis
Unguium
Cruris
Capitis
Faciei
Corporis
Scarring
Post-funeral
Traumatic
Post-radiation
Acne
Keloid
Hypertrophic
Atrophic
Scleroderma
Localized
Systemic
Perleche
Cheilitis
Balanitis
Morphea
Atrophoderma
Vascular lesions
Purpura
Ecchymosis
Angiomata
Pyogenic Granuloma
Telangiectasias
Varix
Port Wine Stain
Candidiasis
Impetigo lesions
Folliculitis
Furunculosis (boils)
Abscess
[...] Ulceration
Infected
Non-infected
Intertrigo
Abnormalities of Pigmentation
Post-inflammatory Hyperpigmentation
Hypopigmentation
Depigmentation
Vitiligo
Melasma
Chloasma
Riehl's Melanosis
Poikiloderma
Dyschromia
Pityriasis
Pityriasis Alba
Pityriasis Rosea
Rubra Pilaris
Lichenoides
Acuta (PLEVA)
Dry Skin
Asteatosis
Ichthyosis
Hyperkeratosis
Related
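As promised above, here is a minimal sketch of how such a layered question-and-answer classifier could be encoded (a nested dict in Python; the questions and branching are made up for illustration, not medical advice):

# Hypothetical question tree: internal nodes ask a yes/no question,
# leaves name a condition from the list above.
TREE = {
    "question": "Is the lesion a fluid-filled blister?",
    "yes": {
        "question": "Are the blisters grouped along a band of skin?",
        "yes": "Herpes zoster",
        "no": "Pemphigoid",
    },
    "no": {
        "question": "Is the area scaly and itchy?",
        "yes": "Dermatitis",
        "no": "Acne",
    },
}

def classify(node):
    # Walk the tree until we reach a leaf (a condition name).
    while isinstance(node, dict):
        answer = input(node["question"] + " (yes/no) ").strip().lower()
        node = node["yes"] if answer.startswith("y") else node["no"]
    return node

print("Suggested condition:", classify(TREE))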
I'm using a pre-trained image classifier to evaluate input data treatments. I downloaded the ImageNet ILSVRC2014 CLS-LOC validation dataset to use as a base. I need to know the actual classes of the images to evaluate my treatments (I need to detect correct classifications). In the 2014 toolkit there is an ILSVRC2014_clsloc_validation_ground_truth.txt file that, according to the readme, is supposed to contain the class labels (in the form of IDs) for the 50,000 images in the dataset. There are 50,000 entries/lines in the file, so this far all seems good, but I also want the corresponding semantic class labels/names.
I found these in a couple of places online, and they seem to be coherent (1,000 classes). But then I looked at the first image, which is a snake; the ground truth for the first picture is 490, and the 490th row in the semantic name list is "chain". That's weird, but still kind of close. The second image is two people skiing; the derived class is "polecat". I tried many more, with similar results.
I must have misunderstood something. Isn't the ground truth supposed to be the "correct" answers for the validation set? Have I missed something in the translation between IDs and semantic labels?
The readme in the 2014 imagenet dev-kit states:
" There are a total of 50,000 validation images. They are named as
ILSVRC2012_val_00000001.JPEG
ILSVRC2012_val_00000002.JPEG
...
ILSVRC2012_val_00049999.JPEG
ILSVRC2012_val_00050000.JPEG
There are 50 validation images for each synset.
The classification ground truth of the validation images is in
data/ILSVRC2014_clsloc_validation_ground_truth.txt,
where each line contains one ILSVRC2014_ID for one image, in the
ascending alphabetical order of the image file names.
The localization ground truth for the validation images can be downloaded
in xml format. "
I'm doing this as part of my bachelor's thesis and really want to get it right.
Thanks in advance
This problem is now solved. In the ILSVRC2017 development kit there is a map_clsloc.txt file with the correct mappings.
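For anyone following along, a sketch of the lookup in Python (assuming map_clsloc.txt uses the "WNID ILSVRC2014_ID class_name" line format of the 2017 dev kit):

# Build a mapping from ILSVRC2014_ID to a human-readable class name.
id_to_name = {}
with open("map_clsloc.txt") as f:
    for line in f:
        wnid, ilsvrc_id, name = line.split()
        id_to_name[int(ilsvrc_id)] = name

# One ID per line, in ascending alphabetical order of image file names,
# so labels[0] belongs to ILSVRC2012_val_00000001.JPEG.
with open("ILSVRC2014_clsloc_validation_ground_truth.txt") as f:
    labels = [int(line) for line in f]

print(id_to_name[labels[0]])  # should now be a snake, not "chain"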
I am starting the development of software in which, given an image of a tourist spot (for example St. Peter's Basilica, the Colosseum, etc.), I should retrieve the name of the spot (plus its related information). In addition to the image, I will have the picture's coordinates (embedded as metadata). I know I can lean on the Google Images API's reverse search, where I give my image as input and get a big set of similar images as a response.
However, my question is: now that I have all the similar images, what approach can I take to retrieve the correct name of the place in the photo?
A second approach I am considering is to construct my own dataset in my database and apply my own heuristic: filter images by their location, and then compare against the subset that remains after filtering. Suggestions and advice are welcome; thanks in advance.
An idea is to use the captions of the images (if available) as a query, retrieve a list of candidates, and use a structured knowledge base to deduce the location name.
The situation is a lot trickier if there are no captions associated with the images. In that case, you could use the fc7-layer output of a pre-trained convolutional net and query ImageNet to retrieve a ranked list of related images. Since those images have captions, you could again use them to get the location name.
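A sketch of the fc7 idea in Python with a pre-trained VGG16 from torchvision (the file names are placeholders; any network with a comparable penultimate layer would do):

import torch
from torchvision import models, transforms
from PIL import Image

model = models.vgg16(pretrained=True)
model.eval()
# Truncate the classifier so the forward pass stops at fc7 (4096-d).
model.classifier = torch.nn.Sequential(*list(model.classifier.children())[:5])

preprocess = transforms.Compose([
    transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def fc7(path):
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return model(img).squeeze(0)

query = fc7("query.jpg")  # hypothetical query photo
gallery = ["candidate1.jpg", "candidate2.jpg"]  # hypothetical candidates
sims = [(p, torch.cosine_similarity(query, fc7(p), dim=0).item()) for p in gallery]
print(sorted(sims, key=lambda t: -t[1]))  # most similar first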
Is there a way to understand what an image is about? I mean, if I scan a picture, how can I tell that the picture shows a specific object? I am thinking that if I have some shape in mind, say the shape/pattern of a specific object, and it matches the object I am searching for, then it must be what I am looking for. Anyway, I am thinking of an algorithm to scan a picture database and figure out which pictures I am actually looking for. Is there a known way to accomplish such an operation?
If I am reading your question correctly...
This is a very daunting task even for full-fledged corporations like Google, though they are attempting to create something along these lines.
Take a look at Google Goggles for Android if you'd like to see how this sort of system behaves. You'll also notice that it requires very specific circumstances to be even slightly reliable, but the base technology is there.
I'm really stuck right now. I want to apply LIBSVM to image classification. I captured lots of training images (bitmap format) from which I want to extract features.
The training images contain people who are lying on the floor. The classifier should decide whether or not a given image shows a person lying on the floor.
I have read lots of papers, documentation, guides, and tutorials, but none of them documents how to build a LIBSVM input file from images. The only thing described is how to convert a CSV file, like this one: CSV-File, into LIBSVM format. On the LIBSVM website several example datasets can be downloaded, prepared either as CSV files or as ready-to-use training and test data.
If you look at the values in the CSV file, the first column holds the labels (lying person or not) and the other values are the extracted features, but I still can't work out how those values were obtained.
I don't know if it's so simple that nobody bothers to mention it, but I just can't get through it, so if anybody knows how to perform feature extraction from images, please help me.
Thank you in advance,
Regards
You need to do feature extraction first. There are many methods available, including LBP, Gabor filters, and many more. These methods will give you the feature vectors to feed into LIBSVM. Hope this helps.
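For example, here is a sketch of LBP feature extraction with scikit-image, written out in LIBSVM's "label index:value" text format (the file names and labels are placeholders):

import numpy as np
from skimage.io import imread
from skimage.color import rgb2gray
from skimage.feature import local_binary_pattern

def lbp_histogram(path, P=8, R=1.0):
    img = imread(path)
    gray = img if img.ndim == 2 else rgb2gray(img)
    lbp = local_binary_pattern(gray, P, R, method="uniform")
    # "uniform" LBP with P=8 yields P + 2 = 10 distinct codes.
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    return hist

with open("train.libsvm", "w") as out:
    for path, label in [("person_on_floor.bmp", 1), ("empty_floor.bmp", -1)]:
        feats = lbp_histogram(path)
        # LIBSVM feature indices are 1-based.
        row = " ".join("%d:%.6f" % (i + 1, v) for i, v in enumerate(feats))
        out.write("%d %s\n" % (label, row))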
I'm looking for either algorithms or visualization tools for (nice) circuit/block-diagram drawing.
I am also interested in a general formulation of the problem.
By "circuit drawing", I mean the capability of exploring place & route for block-diagrams (rectangles) with I/O ports and their connections (wires). These block-diagrams can be hierarchical i.e some blocks may have some nested internal sub-structure etc.
This topic is strongly related to classical graph-drawing, with the supplemental constraint of the need to take ports location into account, and possibly the shape of the blocks (rectangle of various sizes). Graphviz tools do not respond to the problem (at least my previous experiments have not been satisfactory).
Force-directed algorithms retain my attention, but I have just found papers on classical directed graphs.
Any hints ?
[Update Nov 21, 2013] It seems that the best reference to date is Spönemann.
To make production-quality circuit diagrams as well as block diagrams, I strongly recommend J. D. Aplevich's "circuit macros". It's well documented and actively maintained. See the examples produced by this package: circuit macros examples.
There is some learning curve, for example to be able to use the "dpic" graphing language to draw your own diagrams. But the tool itself is very powerful.
For me there are two remaining issues:
no live update
SVG output is lacking
I hacked up some Javascript to
(watch m4 file change)->[m4->dpic->latex->pdf]->svg->(show in html)
Here is the gist of it:
// Watch .m4 files and rebuild whenever one changes.
const chokidar = require('chokidar');
const { resolve } = require('path');
const touch = require('touch');
const { exec } = require('child_process');

chokidar.watch('*.m4').on('change', fn => {
  const ff = resolve(fn);
  console.log(ff, 'changed');
  // Node's exec callback receives (error, stdout, stderr).
  exec('runtask.bat ' + ff, { cwd: '../' }, (err, stdout, stderr) => {
    console.log(err, stdout, stderr);
    touch('index.html'); // svg updated; poke the page so it reloads
  });
});
Here is the runtask.bat for Windows:
REM Expand the m4 macros and run dpic to produce LaTeX/TikZ output.
m4 pgf.m4 %1 | dpic -g > tmp.tex
REM Compile the wrapper document (template.tex is assumed to include tmp.tex).
C:\texlive\2017\bin\win32\pdflatex template.tex
REM Convert the PDF to SVG twice: once named after the source file
REM (%~dpn1.svg) and once as tmp.svg in the same folder for the live preview.
tool\dist-64bits\pdf2svg template.pdf %~dpn1.svg
tool\dist-64bits\pdf2svg template.pdf %~dp1tmp.svg
That way, you can "draw" by writing m4/dpic code and see the result live in the browser, and the SVG is generated from the PDF, which looks a lot nicer.
I am also using TikZ at the moment, but you may wish to try http://blockdiag.com/
Here is one:
http://www.physicsbox.com/indexsolveelec2en.html
Here is where to look for others:
http://www.freebyte.com/electronics/
www.educypedia.be/electronics/easoftsim.htm
There are alternatives to graphviz that may do the job - see e.g. infovis, protovis, tulip.
See also other related questions 1, 2, 3.
Can you explain where graphviz falls short? The only requirement you list that I'm not sure about is attaching to specific ports. I would have thought you might be able to solve that with composite shapes / subgraphs, but maybe not...?
EDIT: Another option, particularly if you're looking at software engineering diagrams. Have you considered the Eclipse GMF toolkit? It's what's used to build e.g. the UML2 editor tools.
hth.
I don't know of any tool that is a clear winner for easily making nice block diagrams with a minimum of manual labor. Some of the best looking results I've seen have been from TikZ. Check out the examples here:
http://www.texample.net/tikz/examples/area/electrical-engineering/
I have been getting very good results from Draw.io. It is a webapp but has a pretty powerful diagram editor and some decent stock symbol libraries. Drawings can be exported as PNG or SVG so can be publication quality and they link up to