I am thinking of building a web-based face recognition system. I know there are a few, like KeyLemon and others offered by different manufacturers, that allow laptop users to log into Windows using their face. I am wondering whether this functionality could be transferred to a web application.
I'd suggest you use this as the basis:
OpenCV (Open Source Computer Vision) is a library of programming functions for real time computer vision.
There was an excellent podcast on OpenCV on Hacker Medley which has various useful references. From that I understand that the library tends to move quite fast in development terms, so it needs close attention.
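If you want to try this in the browser, here's a minimal face-detection sketch using OpenCV.js (the official JavaScript build of OpenCV). It assumes opencv.js is already loaded on the page and that the standard Haar cascade file is served from a path of your choosing (the path here is illustrative). Note that detection (finding a face) is only the first step; recognition (identifying whose face it is) needs a trained model on top of this.

```typescript
// Minimal face-detection sketch with OpenCV.js in the browser.
// Assumes opencv.js is already loaded (global `cv`) and that the
// cascade XML is served alongside the app; the URL is illustrative.
declare const cv: any;

async function loadCascade(url: string, fsName: string): Promise<any> {
  // Fetch the cascade XML and register it in Emscripten's virtual FS
  // so CascadeClassifier.load() can find it by name.
  const buf = new Uint8Array(await (await fetch(url)).arrayBuffer());
  cv.FS_createDataFile('/', fsName, buf, true, false, false);
  const classifier = new cv.CascadeClassifier();
  classifier.load(fsName);
  return classifier;
}

async function detectFaces(imageElementId: string): Promise<void> {
  const classifier = await loadCascade(
    '/haarcascade_frontalface_default.xml',
    'haarcascade_frontalface_default.xml'
  );
  const src = cv.imread(imageElementId);      // read an <img> or <canvas>
  const gray = new cv.Mat();
  cv.cvtColor(src, gray, cv.COLOR_RGBA2GRAY); // detection runs on grayscale
  const faces = new cv.RectVector();
  classifier.detectMultiScale(gray, faces, 1.1, 3, 0);
  for (let i = 0; i < faces.size(); i++) {
    const r = faces.get(i);
    console.log(`face at (${r.x}, ${r.y}), ${r.width}x${r.height}`);
  }
  // OpenCV.js objects are WASM-backed and must be freed explicitly.
  src.delete(); gray.delete(); faces.delete(); classifier.delete();
}
```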
You could use something like Flash to access the camera, and then use the same algorithms to recognize the face.
I've written a web application which does something similar, and I have to say I'm quite disappointed at the level of technology we're currently at for such things. The system in question used a 10-megapixel Canon camera and a special flashlight stand. It required a perfectly white background; the head had to be tilted exactly the right way, couldn't be rotated by more than a few degrees, and had to be at very precise distances from the edges of the picture. Even then it gave a lot of false positives and negatives.
So maybe they've come up with something better today, but I doubt it. This was all two years ago, and the software was a commercial product by a company that specializes in that sort of thing.
So all in all I say - better don't. Biometrics are cool, but currently they are way too unstable to be deployed in anything more than niche situations.
KeyLemon provides a web API to enroll faces and recognize them later. You can integrate these web APIs into your application to provide face recognition functionality. It works like this: during enrollment, six photographs are taken and a biometric model is generated. A model ID is returned to the client, and this model ID needs to be stored in the application database. For face recognition, the webcam stream combined with the model ID is sent to the KeyLemon server; if the stream matches the model, the face is authenticated.
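I haven't verified KeyLemon's actual endpoints, so the sketch below only illustrates the enroll-then-recognise flow described above; the base URL, paths, and field names are hypothetical placeholders, not the real API.

```typescript
// Hedged sketch of the enroll/recognise flow described above.
// The endpoint URL, paths, and field names are hypothetical
// placeholders; consult the vendor's documentation for the real API.
const API = 'https://api.keylemon.example';

async function enroll(photos: Blob[]): Promise<string> {
  // Enrollment: six photographs go up, a biometric model is built
  // server-side, and a model ID comes back to store in your database.
  const form = new FormData();
  photos.forEach((p, i) => form.append(`photo${i}`, p));
  const res = await fetch(`${API}/enroll`, { method: 'POST', body: form });
  const { modelId } = await res.json();
  return modelId; // persist this against the user record
}

async function recognise(modelId: string, frame: Blob): Promise<boolean> {
  // Recognition: a webcam frame plus the stored model ID are sent up;
  // the server answers whether the face matches the enrolled model.
  const form = new FormData();
  form.append('modelId', modelId);
  form.append('frame', frame);
  const res = await fetch(`${API}/recognise`, { method: 'POST', body: form });
  const { match } = await res.json();
  return match;
}
```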
I was reading documentation on some bad practices when building a website. MDN says this is very old and a bad practice, but that there are certain cases in which it is acceptable, such as device detection.
https://developer.mozilla.org/en-US/docs/Browser_detection_using_the_user_agent
If I were to build a mobile site and use the UA string to detect the device and send a user to a less data-intensive website, should I? I know there are fluid and responsive layouts, but most of those websites include rules for a fixed desktop width too. Are there any edge cases of devices that do not include "Mobile" in their UA string?
I realise this is an old question but hopefully this isn't too late for you.
I would be very wary of using the UA alone to do anything for the reasons mentioned in the article you linked.
That said, there are plenty of situations where you can give a better user experience by using a device detection library like 51 Degrees and being aware of a few things.
In particular, you mention a less data-intensive version of the website. There is something of a trend in places like India, where poor-quality data connections are the norm, to use browsers like UC Browser and Opera Mini.
These work by going via a proxy and stripping out a lot of the heavier weight stuff in a web page. Needless to say, this can destroy your lovely ultra-modern, highly responsive interface.
51 Degrees will tell you if the browser is of this type with an attribute called IsDataMinimising and you can adapt accordingly, giving the user a better experience while also saving your bandwidth.
Full disclosure: I work for 51 Degrees.
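To make the idea concrete, here is a rough stand-in sketch (not the 51 Degrees API) showing an Express middleware that flags the two data-minimising proxy browsers mentioned above by plain UA substring matching. A real detection library is far more robust than this; the regex and the render helpers are only illustrative.

```typescript
import express from 'express';

// Rough stand-in for a device-detection library: flag the two
// data-minimising proxy browsers mentioned above by UA substring.
// A real library (e.g. 51 Degrees' IsDataMinimising property) is far
// more robust; this regex is only illustrative.
const DATA_MINIMISING = /Opera Mini|UCBrowser/i;

const app = express();

app.use((req, res, next) => {
  const ua = req.headers['user-agent'] ?? '';
  res.locals.isDataMinimising = DATA_MINIMISING.test(ua);
  next();
});

app.get('/', (req, res) => {
  // Serve a lighter page when a proxy browser would strip the
  // heavyweight assets anyway.
  res.send(res.locals.isDataMinimising ? renderLitePage() : renderFullPage());
});

// Hypothetical render helpers, just for the sketch.
function renderLitePage(): string { return '<p>lite</p>'; }
function renderFullPage(): string { return '<p>full</p>'; }

app.listen(3000);
```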
I'm about to start my Bachelor's thesis (estimated workload of around 180 programming hours and 180 hours of actually writing the thesis). As I have zero experience with Google's Project Tango, I don't know whether my goal is achievable at all or way too big without me realizing it.
I would like to develop a "blueprint app" that uses Tango to create a blueprint of a scanned area. I.e. I want to go around my apartment with the device in my hand, scan it, and afterwards have a file in which I can see all the dimensions of all rooms, see doors (and windows?), but not see things lying around like clothes, books, or even shelves, chairs, or my guitar.
I believe that this is generally possible with Tango, although I can imagine it being quite hard as soon as I want to include some data but exclude other data.
I realize that an answer heavily depends on the person. I have a little more than 3 years of programming experience in Java, and I have developed a small Android app, so I know the fundamentals of lifecycles etc. My last programming project was a simple web shop including a RESTful Spring backend and an AngularJS frontend. I use frameworks where possible and so far don't seem to have bigger problems with Google's APIs.
So my question is: Am I crazy to even think that's possible or do you (people with knowledge of Tango) think it could be done?
As shown in the Project Tango GTC video, some local features are extracted and tracked for motion estimation, which is then fused with accelerometer data.
Since any developer may need to track features to develop their apps, I was wondering if there is a way to get those features through the APIs.
Although it is possible to extract some points and retrieve their flow using the estimated 6DOF pose returned by the APIs, this adds extra overhead. Another issue with this approach is that the pure visual flow (including outliers) is not obtainable, since it is influenced by the IMU data.
So my question is: if these features are tracked using hardware-accelerated algorithms, how can we get them through the APIs without having to implement the tracking ourselves and do redundant work?
Any answers and suggestions would be appreciated.
It is straightforward to compile OpenCV for the Tango with nVidia's TADP package. Use version 3.0r4. You may need to merge some OpenCV4Android bits, but it's easy; the ES examples will fail on the device, but don't sweat it.
Google released the "Project Tango ADF Inspector" on the Play Store. I haven't actually had any time to play with it, but it's the first thing to offer any look inside that data. I think Google considers this data sensitive and is cautious in this area, with good reason: if you look for the starred "important" note on this page, you should get a feel for the sensitivity of that issue.
I'm planning on doing an interactive AR application that will use a laser sensor (for distances), GPS to get a location, and a compass/gyroscope for tracking 6DOF viewfinder movements. The user can choose from a number of ready-made 3D models and should be able to place them by selecting the desired location on the screen.
My target platform will be an 8" handheld device running Windows 8.
Any hints on what would be the best AR SDK or 3D viewer to work with?
Thanks in advance!
There are quite a few 3D viewers that work in the browser, but most recently and most notably: the va3C viewer.
It is a WebGL-based app and doesn't require a server, so if your handheld device supports WebGL you are good to go; whether it works on IE, however, is questionable ;).
However, based on my experience and your use case, I believe client-side JS libraries do not provide enough access to the device's hardware. So you might have to serve information like GPS and gyroscope readings from the server side, gather it on the client using something like socket.io, and then mash it up alongside the geometry.
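To sketch what I mean, here is a minimal socket.io relay that forwards sensor readings from the device to any other connected clients (e.g. the 3D viewer). The event name and payload shape are made up for the example.

```typescript
import { Server } from 'socket.io';

// Minimal sketch of relaying device sensor readings (GPS, gyroscope)
// through a server so the WebGL client can combine them with the
// geometry. The 'sensor' event name and the payload shape are
// invented for this example.
interface SensorReading {
  lat: number;
  lon: number;
  gyro: [number, number, number];
}

const io = new Server(3000, { cors: { origin: '*' } });

io.on('connection', (socket) => {
  // A device publishes readings; every other connected client
  // (e.g. the 3D viewer) receives them.
  socket.on('sensor', (reading: SensorReading) => {
    socket.broadcast.emit('sensor', reading);
  });
});
```

On the client, the viewer would connect with the socket.io client library and listen for the same 'sensor' event to update the scene.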
I am trying to do something similar, although I haven't quite done it yet. I will keep you posted.
Another approach I am exploring is X3DOM, which gives you the ability to write 3D data as XML alongside HTML, which is quite declarative and simple to pick up. X3DOM derives from X3D.
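To give a feel for it, here's a tiny sketch that builds a one-box X3DOM scene from TypeScript. It assumes the x3dom.js script and its stylesheet are already included on the page; the same scene could equally be written directly as markup in the HTML.

```typescript
// Tiny sketch of X3DOM's declarative style, built from TypeScript.
// Assumes x3dom.js and x3dom.css are already included on the page;
// X3DOM then renders these elements via WebGL (recent versions pick
// up dynamically inserted nodes).
function addBoxScene(parent: HTMLElement): void {
  const x3d = document.createElement('x3d');
  const scene = document.createElement('scene');
  const shape = document.createElement('shape');

  const appearance = document.createElement('appearance');
  const material = document.createElement('material');
  material.setAttribute('diffuseColor', '0.6 0.2 0.2');
  appearance.appendChild(material);

  const box = document.createElement('box');
  box.setAttribute('size', '2 2 2');

  shape.appendChild(appearance);
  shape.appendChild(box);
  scene.appendChild(shape);
  x3d.appendChild(scene);
  parent.appendChild(x3d);
}
```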
Tell me if you need more info.
Also worth exploring for its motion abilities is Robot Studio, a desktop app with an SDK.
I was wondering what the pain points are for other developers when learning Windows Phone 7 programming. For me it is switching between application pages and MVVM. If you have any hints or resources that help overcome these pain points, please share them.
When switching to a new development platform there are bound to be new things to learn.
If you're coming from a web background it's important to note that you're no longer in the same stateless world as the web. There is also a different navigation model. (Especially if you're developing in XNA!)
The biggest and, in my opinion, most important differences in moving to developing for the phone (or any mobile platform) are the following six points.
"Mobile" applications are used
differently to desktops ones. -
Expect users to have less time to
spend with the application and be
doing other things at the same time.
Input is different. - Consider
[multi-]touch as well as voice,
location and sensors rather than
mouse and keyboard.
Output is different. - Even if just
considering output to the screen,
it's very different developing for a
small screen than a large one.
Connectivity is nott guaranteed. -
Create apps which work offline and
are occassionaly connected. Don't
assume a network conneciton is
guaranteed or fast.
Performance is important. - Partt of
the way that"mobile" applications
are used differently to their
desktop counterparts creates a
different expectation from users and
they are much less tollerant of
applications which are displaying
the equivalent of a wait cursor. Do
no more than you have to and be sure
to keep the app/device as responsive
as possible.
Resources are constrained. - The
most important consequence of this
is to do no more than you must, so
you can preserve battery life.
Afterall, if you run down the users
battery they get frustrated and
can't use your app.
Unfortunately, the best way to avoid running into problems is to develop a detailed knowledge and understanding of the platform.
With that in mind, I'd recommend the following resources:
For general information check out the MSDN documentation.
I'd like to particularly draw your attention to:
- the design resources, particularly the UI guidelines, so you can create something which looks like it is actually part of the platform;
- the fundamental concepts, so you don't waste time trying to do something which isn't possible.
Other useful resources are:
- Code samples
- Online training (there are updates to this coming soon)
- the book by Charles Petzold
There is a great, organised resource list here which covers pretty much all the major points of Windows Phone 7 development.