Image processing events [closed] - algorithm

Closed as off-topic 11 years ago; it is not accepting answers.
I'm interested in segmentation, feature detection, image processing algorithms, etc. I've done a few internet searches for conferences or seminars that would be interesting and, more importantly, helpful for connecting with other people in my field. Any suggestions on the best US conferences that deal with image processing?

There are a couple of good conferences, most sponsored by the IEEE, like CVPR (Conference on Computer Vision and Pattern Recognition) and ICIP (International Conference on Image Processing). Both CVPR and ICIP usually have a minimal number of exhibitors, so if you want to listen to speakers and not get lost in a sea of exhibitors, these are for you. ASPRS has one on Imaging and Geospatial Technologies. There is also the SIAM Conference on Imaging Science next May.
I've used this website several times. It lists basically every conference and event relating to image processing and computer vision, both in the US and internationally. The author keeps everything up to date and nicely organized.
My company, Wolfram Research, is holding a few image processing events on a much smaller scale.

Related

What machine learning algorithm to use for face matching? [closed]

Closed as opinion-based 5 years ago; it is not accepting answers.
I want your opinion on what would be the best machine learning algorithm, or better yet, library, to use if I wanted to match two faces that look similar, kind of like how Google Photos can automatically put photos of the same person into their own album. What's the best way to tackle this?
Face-based user identification isn't a single algorithm; it's a whole pipeline that is still an active research area. One way to experiment with it is to follow these four steps (a minimal sketch follows the blog link below):
Face region detection/extraction using the Histogram of Oriented Gradients (HOG)
Align the face by centering the eyes and lips using face landmark estimation
Face encoding using a CNN model (OpenFace, for instance)
Similarity search using a nearest-neighbour algorithm (k-NN, LSHForest, etc.)
Here's a blog article that walks nicely through the steps required to do face-based user identification:
Machine Learning is Fun! Part 4: Modern Face Recognition with Deep Learning
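Here is a minimal sketch of those four steps using the face_recognition Python library (the same dlib-based stack the blog post uses); the file names and the 0.6 distance threshold are illustrative assumptions:

# Detect, encode, and compare two faces with the face_recognition library.
import face_recognition

# Each image is assumed to contain exactly one face.
known_image = face_recognition.load_image_file("person_a.jpg")
unknown_image = face_recognition.load_image_file("unknown.jpg")

# Steps 1-3: HOG face detection, landmark alignment, and 128-d CNN encoding
# all happen inside face_encodings().
known_encoding = face_recognition.face_encodings(known_image)[0]
unknown_encoding = face_recognition.face_encodings(unknown_image)[0]

# Step 4: nearest-neighbour comparison in encoding space.
distance = face_recognition.face_distance([known_encoding], unknown_encoding)[0]
print("Same person" if distance < 0.6 else "Different person", f"(distance={distance:.3f})")

For a Google-Photos-style album grouping you would cluster the encodings of many photos (for example with k-NN or DBSCAN) rather than compare a single pair.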

Standard for software comparison [closed]

Closed 7 years ago because it needs details or clarity; it is not accepting answers.
I'm developing a preventive maintenance strategy for an industrial plant based on RCM (Reliability Centered Maintenance) methodology. For this job, I need to choose one CMMS (Computerized maintenance management system) among several options but I need to do it in a clever way.
Is there some technical procedure for making a comparative assessment of software programs and determining which CMMS is the better option? Any standard, table, or matrix?
Thank you so much
I found an interesting document with a comparative assessment of software programs, developed by the IRIS Center at the University of Maryland, that was very useful for me:
Comparative Assessment of Software Programs for the Development of Computer-Assisted Personal Interview (CAPI) Applications
This scientific article could be useful as well:
Comparative Assessment of Software Quality Classification Techniques: An Empirical Case Study
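As for a concrete table or matrix, a common approach (not a formal standard) is a weighted scoring matrix: list your RCM-driven criteria, weight them, and score each CMMS candidate. A minimal sketch in Python follows; the criteria, weights, candidate names, and scores are made-up placeholders:

# Hypothetical weighted scoring matrix for comparing CMMS candidates.
# All criteria, weights, and scores (1-5) are illustrative only.
criteria = {                              # criterion: weight (sums to 1.0)
    "RCM/failure-mode support": 0.30,
    "Work-order management":    0.25,
    "Reporting & KPIs":         0.20,
    "Integration & cost":       0.25,
}

candidates = {                            # candidate: scores in criteria order
    "CMMS A": [4, 5, 3, 2],
    "CMMS B": [3, 4, 4, 4],
    "CMMS C": [5, 3, 2, 3],
}

weights = list(criteria.values())
for name, scores in candidates.items():
    total = sum(w * s for w, s in zip(weights, scores))
    print(f"{name}: weighted score {total:.2f}")

The candidate with the highest weighted score wins, and varying the weights makes the sensitivity of the decision explicit.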
EDIT
Sites like Quora are better places to ask this kind of question.

Proximity sensor recommendation to detect hand & blood [closed]

Closed as off-topic 11 years ago; it is not accepting answers.
My company is interested in designing a protection system for small industrial machines, where factory employees insert wooden objects that are then cut by these machines.
The protection system needs to detect the presence of a human arm when it is inserted into the input opening (the arm places a piece of wood, and the machine then cuts the wood). It also needs to detect the presence of blood in case the arm is somehow cut, and in that case shut off the machine's power.
I'm no expert on sensing technologies, and since we are looking to hire one, I am asking for advice on the proper sensing technology that fits these requirements.
As I understand it, capacitive sensing can detect not only the presence of an object but also its type (e.g., distinguish human arms from blood). Can such technology be used for the purpose mentioned above?
Thanks,
Arkadi
I wonder if something like this could help: http://www.sawstop.com/. I'm a woodworker as well and have been considering this device. I'm not sure whether it can distinguish an arm from blood, but it seems to be able to sense 'flesh' and shut down.

What do these mysterious "Business Intelligence" software do anyway? [closed]

Closed as off-topic 12 years ago; it is not accepting answers.
What does this mysterious "Business Intelligence" software do, anyway?
They're not really mysterious. BI, or Business Intelligence, is just a term that groups software with a particular goal, such as OLAP tools and report generators. SSRS and Crystal Reports are two examples among many others.
And the requisite wiki article...
In a nutshell, the goal of BI is aggregating and presenting data to support executive decision-making.
In a business, the role of a CEO is to stay on top of pretty much everything that is happening in the company. Some data you can get from standardized reports, but sometimes you have an intuition and need to dig through the data in arbitrary ways, without really being able to learn SQL. BI is there to fill that role: it does a better job than the CEO exporting a bunch of reports into Excel and massaging the data there.
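To make the "aggregate and slice arbitrarily" idea concrete, here is a tiny hand-rolled version of the kind of roll-up a BI tool automates, sketched with pandas; the sales figures and column names are made up:

# A hand-rolled version of the pivot/roll-up a BI tool lets an executive
# build by pointing and clicking. The data is invented for illustration.
import pandas as pd

sales = pd.DataFrame({
    "region":  ["East", "East", "West", "West", "West"],
    "quarter": ["Q1",   "Q2",   "Q1",   "Q2",   "Q2"],
    "revenue": [120_000, 135_000, 90_000, 110_000, 45_000],
})

# Slice revenue by region and quarter -- the classic OLAP-style pivot.
report = sales.pivot_table(index="region", columns="quarter",
                           values="revenue", aggfunc="sum", margins=True)
print(report)

A BI suite wraps this kind of query behind dashboards and drill-downs, so the person asking the question never has to write the code.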

Origin of "embarrassingly parallel" phrase [closed]

Closed as off-topic 11 years ago; it is not accepting answers.
For the purposes of history on Wikipedia, is anyone familiar with the origin of the phrase "embarrassingly parallel"? I've always thought it may have been coined by a random Google employee who first worked on MapReduce. Does anyone have any concrete info on the origin?
The first use I could find in an advanced Google Books search was in an IEEE Computer Society digest published in 1978. The context, and the fact that the author put "embarrassingly" in quotes, suggests to me that the phrase was not coined there but had been used before.
It's decades old, but I first heard it used in reference to graphics rendering. Imagine you're rendering an animated movie: each frame is 2000x1000 pixels, there are 24 frames per second, 60 seconds in a minute, and 100 minutes in the movie. That's almost 300 billion pixels that can all be computed in parallel. That's so parallel that it's embarrassing to compute it serially.
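To sketch why that workload counts as embarrassingly parallel, here is the arithmetic plus a toy parallel map in Python; the shade() function is a made-up stand-in for a real per-pixel renderer:

# Every pixel can be shaded independently, so the work splits across
# workers with no coordination at all -- "embarrassingly parallel".
from multiprocessing import Pool

WIDTH, HEIGHT = 2000, 1000          # pixels per frame
FPS, MOVIE_SECONDS = 24, 100 * 60   # 24 fps, 100-minute movie
total_pixels = WIDTH * HEIGHT * FPS * MOVIE_SECONDS
print(f"{total_pixels:,} pixels to render")   # 288,000,000,000

def shade(pixel_index):
    # Stand-in for a real per-pixel rendering computation.
    return (pixel_index * 2654435761) % 256

if __name__ == "__main__":
    with Pool() as pool:
        # Render one frame's worth of pixels in parallel as a demo.
        frame = pool.map(shade, range(WIDTH * HEIGHT))
    print(len(frame), "pixels shaded for one frame")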
Try this search: http://www.google.co.nz/search?rlz=1C1GGLS_enNZ364NZ365&sourceid=chrome&ie=UTF-8&q=etymology+embarrassingly+parallel
