Will a fingerprint created using a canvas 2D rendering give the same result on two devices with matching hardware and software?

If I draw something in the browser using a canvas 2D rendering and use this to create a fingerprint, would two devices, such as two newly purchased iPhones of the same model, give the same fingerprint, or are there minute hardware differences that would cause the two devices to produce different fingerprints?
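For context, a canvas fingerprint is typically derived by drawing fixed content and hashing the serialized pixels, so two devices produce the same fingerprint only if they rasterize the drawing bit-identically (same fonts, anti-aliasing, and GPU/driver behavior). A minimal sketch; the drawn content and the SHA-256 choice are arbitrary:

```typescript
// Draw fixed content, then hash the serialized rendering. Any difference
// in rasterization (fonts, anti-aliasing, GPU/driver) changes the hash.
async function canvasFingerprint(): Promise<string> {
  const canvas = document.createElement("canvas");
  canvas.width = 200;
  canvas.height = 50;
  const ctx = canvas.getContext("2d")!;
  ctx.textBaseline = "top";
  ctx.font = "14px Arial";
  ctx.fillStyle = "#f60";
  ctx.fillRect(10, 5, 100, 30);                 // solid rectangle
  ctx.fillStyle = "#069";
  ctx.fillText("fingerprint \u{1F600}", 2, 15); // text + emoji stress font rendering
  const bytes = new TextEncoder().encode(canvas.toDataURL());
  const digest = await crypto.subtle.digest("SHA-256", bytes);
  return Array.from(new Uint8Array(digest))
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");
}
```

Two devices of the same model, OS version, and font set will typically produce identical hashes; a fingerprint like this distinguishes rendering stacks, not individual units.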

Related

Suggestions or methods for automatically finding/locating/detecting TV logos

Logo detection usually means two steps: finding the logo and recognizing it. Some common works do both steps together using SIFT/SURF matching methods, as detailed in
(1) Logo recognition in images
(2) Logo detection using OpenCV
But if the logo is tiny and blurred, the result is poor and the process is rather time-consuming. I want to split the two steps: first find where the logo is in the video, then recognize it using template matching or another method, such as:
(3) Logo recognition - how to improve performance
(4) OpenCV logo recognition
My problem is mainly focused on finding the logo in video automatically. I have tried two methods (a sketch follows below):
Brightness method. The logo on a TV screen is usually present the whole time the show is on, so I select a list of frames at random and compute differences between them; the logo area tends to stay at 0. I then run statistics on the zero-difference pixels against a threshold to decide whether each pixel belongs to the logo. This method usually does well, but it fails when the show has a static background.
Edge method. Similarly, if the logo is there, its border tends to be obvious. I do the same statistical work as in the brightness method, but the edges are sometimes unstable, for example against a very bright background.
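A minimal sketch of the brightness/differencing idea, assuming the sampled frames are available as equal-sized ImageData objects (for example, grabbed from a <video> element via a canvas); the grayscale weights and the threshold are illustrative:

```typescript
// Pixels whose intensity barely changes across randomly sampled frames
// are candidate logo pixels (the logo stays put while the show moves).
function staticPixelMask(frames: ImageData[], threshold = 8): boolean[] {
  const { width, height } = frames[0];
  const n = width * height;
  const maxDiff = new Float32Array(n); // largest per-pixel change seen

  // Luma approximation from RGBA bytes (illustrative weights).
  const gray = (d: Uint8ClampedArray, i: number) =>
    0.299 * d[4 * i] + 0.587 * d[4 * i + 1] + 0.114 * d[4 * i + 2];

  for (let f = 1; f < frames.length; f++) {
    const a = frames[f - 1].data;
    const b = frames[f].data;
    for (let i = 0; i < n; i++) {
      const diff = Math.abs(gray(a, i) - gray(b, i));
      if (diff > maxDiff[i]) maxDiff[i] = diff;
    }
  }

  // true = the pixel never changed much, i.e. a possible logo pixel.
  return Array.from(maxDiff, (d) => d <= threshold);
}
```

As noted above, a static background defeats this: background pixels also never change, so the mask needs a further pass (e.g., keeping only stable regions near the screen corners where logos usually sit).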
Are there any suggestions or state-of-the-art methods for automatically finding logo areas, and any logo recognition methods other than SIFT or template matching?
Let's assume your list of logos is known beforehand and you have access to examples (video streams/frames) of all of them.
The 2017 answer to your question is to train a logo classifier, most likely a deep neural network.
With sufficient training data, if a logo is identifiable to TV viewers, the network will be able to detect it. It will also handle local blurring and intensity changes, which can thwart "classic" image-processing methods based on brightness and edges.
OpenCV can load and run network models from multiple frameworks, including Caffe, Torch, and TensorFlow, so you can use one of their pre-trained models or train one yourself.
You could also try TensorFlow's Object Detection API: https://github.com/tensorflow/models/tree/master/research/object_detection
The good thing about this API is that it contains state-of-the-art models for object detection and classification. The models TensorFlow provides are free to train, and some of them promise quite astonishing results. I have already trained a model for the company I work for that does quite an amazing job at logo detection in images and video streams. You can see more of my work here: https://github.com/kochlisGit/LogoLens
The problem with TV is that the logos will probably not be static and will move across frames. This causes motion blur, which may confuse your classifier or keep it from seeing the logos at all. However, once you find a logo, you can use an object-tracking algorithm (e.g., Deep SORT) to keep track of it.

DirectShow - changing white balance property

I am capturing data from a web camera using the DirectShow API. To change the white balance value, I call the IAMVideoProcAmp::Set method.
I have noticed that for some cameras the white balance value changes immediately (the new value is applied after 1-2 frames), but for other cameras it is applied incrementally over 50-60 frames. That is too long for me.
Maybe someone has run into the same problem. Can I configure how fast the new value is applied, or does it depend on the camera's driver?
IAMVideoProcAmp::Set is all you have. There is no generic way to change white balance or to affect how changes take effect. If you are interested in specific camera models, check with their tech support whether an SDK is available, along with model-specific ways to set up the device.

How do GUI developers deal with variable pixel densities?

Today's displays vary hugely in size and resolution. For example, my 34.5 cm × 19.5 cm display (a diagonal of 39.6 cm, or 15.6") has 1366 × 768 pixels, whereas the MacBook Pro (3rd generation) with a 15" diagonal has 2880 × 1800 pixels.
Multiple people have complained that everything is too small on such high-resolution displays (see example). That is easy to explain when developers use pixels to define their GUI. For "traditional" displays this is not a big problem, as pixels have about the same physical size on most monitors. But on the new monitors with much higher pixel density, the pixels are simply smaller.
So how can / should user interface developers deal with this problem? Is it possible to get the physical size of the screen? Is it possible to set physical sizes instead of pixel-based ones? Is this still a problem (it's been a while since I last read about it), or has it been fixed in the meantime?
(While CSS seems to support cm, when I try it here, the rendered size is not the size I set.)
how can / should user interface developers deal with that problem?
Use a toolkit or framework that supports resolution independence. WPF is built from the ground up to be resolution-independent, but even an old framework like Windows Forms can learn new tricks. OS X/iOS and Windows (or the browser, if we're talking about the web) may try to take care of the problem with automatic scaling, but if bitmap graphics are involved, developers may need to provide bitmaps at several densities, as on Android (which faces the most varied resolutions and densities of any OS).
Is it possible to get the physical size of the screen?
No, and developers shouldn't care about it. Developers should only care about the class of the device (say, different UIs for tablet and smartphone), and perhaps the DPI, to decide which bitmap resource to use. Vector resources and fonts should be scaled by the framework.
Is that still a problem (it's been a while since I last read about it) or was that fixed meanwhile?
That depends on when you last read about it. Windows support is still spotty, even for its own built-in apps, and while anyone developing in WPF or UWP has it easy, don't expect major third-party apps to join in soon. OS X display scaling seems to work a bit better, while modern mobile OSes either run on a limited range of resolutions (iOS and Windows Phone) or handle every resolution imaginable quite nicely (Android).
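As a web-flavoured sketch of the "DPI decides which bitmap resource to use" point above, here is density-based asset selection via devicePixelRatio; the @2x/@3x naming convention and the file paths are illustrative:

```typescript
// Pick a bitmap variant by device-pixel-ratio, the web analogue of
// Android's density buckets. File names are hypothetical.
function iconUrl(base: string): string {
  const dpr = window.devicePixelRatio || 1;
  if (dpr >= 3) return `${base}@3x.png`;
  if (dpr >= 2) return `${base}@2x.png`;
  return `${base}.png`;
}

const img = new Image();
img.src = iconUrl("icons/save"); // e.g. icons/save@2x.png on a Retina display
img.style.width = "24px";        // layout size stays fixed in CSS pixels
document.body.appendChild(img);
```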
There are a few ways to deal with different screen sizes. For example, when I make mobile apps in Java, I either use DIP (density-independent pixels, which stay at a fixed physical size) or make objects occupy a percentage of the screen with simple math. For web development you can use vw and vh (viewport width and viewport height): adding these to the end of a value instead of px makes the object take up a percentage of the viewport, so 100vh is 100% of the viewport height. Finally, what I think is the best way, though time-consuming, is to use a library like Bootstrap that automatically resizes elements, even when the window is resized. W3Schools has a good tutorial on Bootstrap, and more detailed explanations of any of these options are an easy Google search away.
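A minimal sketch of the vw/vh idea driven from a script; the element id and the percentages are arbitrary, and the same values could of course be written directly in CSS:

```typescript
// Size an element relative to the viewport: 50vw = 50% of the viewport's
// width, 30vh = 30% of its height, regardless of pixel density.
const box = document.getElementById("app") as HTMLElement; // hypothetical element
box.style.width = "50vw";
box.style.height = "30vh";
box.style.fontSize = "2vh"; // text can scale with the viewport too
```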
Designing a GUI in today's era of display diversity is a real challenge. I would suggest several hints, mainly about GUI application design:
Never set or expect a constant pixel size for text: the user can change it in the OS system settings. Use real-world measures for text and check its pixel size when drawing. Provide some way to fit arbitrarily sized text within the boundaries of the window.
Never set or expect a constant pixel size for GUI widgets. Try to position them in the window adaptively, according to the window's size. Most GUI widget toolkits today have instruments for this.
Never set or expect a constant pixel size for dialog windows. Let the OS choose the size for you and use what you get (X), or, if you need to set a size and position (Windows), define it as a percentage of the screen size.
If possible, use scalable image formats for icons; SVG is actually great for icons. Using sets of bitmap icons at different sizes is acceptable, but highly suboptimal in memory use, and it still will not scale perfectly in most cases.

What is the frequency of color and depth information, and are they synchronized?

I am creating an augmented reality application that uses the depth information to change how objects render within the color image. I'm not sure how frequently I should expect new frames or how to make sure I'm matching the correct depth samples with the right color image frames.
The Tango Phone Development Kit and the Tango Tablet Development Kit both update the RGB image at an average of 25 Hz. Depth on the Phone is sampled at 5 Hz, while the Tablet currently runs at 2-3 Hz, though this may increase in later software releases.
The color and depth data are not synchronized, but on both platforms the API provides timestamps for all data, as well as an interface to request data at a given timestamp, so the application can decide how best to manage the data.
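A sketch of one way an application might pair the sparse depth frames with the denser color stream using those timestamps; the types are illustrative, not the Tango API:

```typescript
// Pair each (~2-5 Hz) depth frame with the (~25 Hz) color frame whose
// timestamp is closest, assuming both use the same clock, as Tango does.
interface Stamped<T> {
  timestamp: number; // seconds, shared clock domain
  data: T;
}

function nearestByTimestamp<A, B>(
  target: Stamped<A>,
  candidates: Stamped<B>[], // must be non-empty
): Stamped<B> {
  return candidates.reduce((best, c) =>
    Math.abs(c.timestamp - target.timestamp) <
    Math.abs(best.timestamp - target.timestamp)
      ? c
      : best,
  );
}
```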

How taxing would a game map grid be to a web browser?

Suppose we're making a strategy game (think Civilization) in a web browser. The game has a visible map portion, say 30×30 squares. Each square is 30×30 px and has several overlaid images: the terrain, resources, units, roads, etc. The classical way of drawing this would be a huge <table> where each cell contains absolutely positioned images. It would probably be rendered in JavaScript to reduce traffic. But that is still several thousand images and a huge table.
Can the browser take it? Will performance stay within acceptable limits? Alternatively, I could keep a pre-rendered map image with as many overlays as possible baked in, but that would be more work, I think.
You should really look into using the canvas element, which does not require the browser to store and compute the whole layout and other DOM state.
That being said, a modern browser on a high-performance workstation can display hundreds of images at the same time, as demonstrated by the FishIETank demo. However, many devices, ranging from smartphones to old PCs, cannot. Oh, and using a table is probably slower than a div with position:relative or absolute and absolutely positioned images therein.
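A minimal sketch of the canvas approach for the 30×30 map; the map and tiles structures are placeholders. Each visible square is blitted in one pass instead of living as a DOM node:

```typescript
// Draw a tile map in a single pass on one canvas instead of using
// thousands of absolutely positioned <img> elements in a <table>.
const TILE = 30, COLS = 30, ROWS = 30;

const canvas = document.createElement("canvas");
canvas.width = COLS * TILE;
canvas.height = ROWS * TILE;
document.body.appendChild(canvas);
const ctx = canvas.getContext("2d")!;

// map[y][x] is a terrain index; tiles holds pre-loaded 30x30 images.
function drawMap(map: number[][], tiles: HTMLImageElement[]): void {
  for (let y = 0; y < ROWS; y++) {
    for (let x = 0; x < COLS; x++) {
      ctx.drawImage(tiles[map[y][x]], x * TILE, y * TILE, TILE, TILE);
    }
  }
}
```

Overlays (resources, units, roads) can simply be drawn on top in the same loop, and only dirty regions need redrawing on updates.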
Look at online games like Grepolis: they already do this sort of grid-based game, and modern browsers handle it easily.
