Problem Statement: Track an anonymous user to persist state (or lock them out of a feature after a timer) on a device that has visited a website. This would need to work with cookies disabled, across browsers, including visits in incognito mode. It would also need to be device-specific: two computers within a home network would have two independent timers.
I have seen this applied in a few scenarios with the most recent being the NBC Olympics with the stream timer. This has so many uses for "free no sign-up trials" while not giving away everything or limiting features in "try before you buy". Any ideas would be appreciated!
For this you would need to employ a cross-browser fingerprinting (or device fingerprinting) technique.
Related research
I recommend you read the paper (Cross-)Browser Fingerprinting via OS and Hardware Level Features by Yinzhi Cao, Song Li, and Erik Wijmans, which has an associated demo implementation of 2 of the techniques described therein.
Another good paper I found on web fingerprinting techniques, which you should read if you're interested, is Web-based Fingerprinting Techniques by Vítor Bernardo and Dulce Domingos.
The (in)security of device fingerprinting
The basis for device fingerprinting is collecting a variety of features from the client which are indicative of that device's hardware and OS, and which are stable across browsers. Collect enough and the combination of these features for one user will very likely be unique among all users.
Most features useful for device fingerprinting can only be measured on the client (with JavaScript), and then need to be communicated back to the server, either raw or as a hash. Due to this, device fingerprinting as a security measure relies also on your ability to obfuscate the JS doing the fingerprinting and the corresponding network traffic communicating the fingerprint. If a user can figure out how the fingerprint is being collected and/or sent back to the server then they can spoof it to circumvent any protections you've put in place based on it.
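To illustrate the "raw or as a hash" point, here is a minimal sketch of collapsing a set of collected feature strings into a single fingerprint hash. It is written in C++ purely for illustration (in practice the collection side runs in client-side JavaScript), and the FNV-1a function and separator are my own choices, not part of any fingerprinting library mentioned above.

    // Illustration only: combine collected feature strings into one 64-bit
    // fingerprint hash using FNV-1a. The hash (or the raw values) would then
    // be sent back to the server.
    #include <cstdint>
    #include <string>
    #include <vector>

    uint64_t Fnv1aHash(const std::string& s)
    {
        uint64_t h = 1469598103934665603ULL;   // FNV-1a 64-bit offset basis
        for (unsigned char c : s)
        {
            h ^= c;
            h *= 1099511628211ULL;             // FNV-1a 64-bit prime
        }
        return h;
    }

    uint64_t CombineFeatures(const std::vector<std::string>& features)
    {
        std::string joined;
        for (const std::string& f : features)
        {
            joined += f;
            joined += '\x1F';                  // separator so "ab"+"c" != "a"+"bc"
        }
        return Fnv1aHash(joined);
    }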
Features that can be fingerprinted
Useful features to measure include (but are not limited to)
GPU rendering artefacts
anti-aliasing method
OpenGL driver varying interpolation
texture sampling
Installed fonts and writing systems
Text rendering minutiae
anti-aliasing
subpixel rendering
kerning, tracking, and leading of particular fonts can reveal subtle variations caused by different installed versions
In terms of low-hanging fruit, there are Web APIs such as Navigator.hardwareConcurrency which expose details about the underlying hardware directly; however, many browsers now disable or spoof such features to prevent their use for fingerprinting.
The more features you collect, the more robust your fingerprint, as long as they don't vary across browsers on the same device.
Conclusion
Ultimately, there is no be-all and end-all to device fingerprinting since it's a very complex topic with many potential approaches and a constant arms race with browser vendors trying to prevent fingerprinting techniques and developers trying to find new ones.
If you're looking for an out-of-the-box solution, there are currently a small handful of open-source and commercial browser and user fingerprinting services out there, such as FingerprintJS (which is both). Though it does seem that many device fingerprinting solutions are not sold as standalone functionality but rather as part of a fraud prevention system (such as SEON) or similar.
(The following is just opinion)
Overall it's my view that device fingerprinting is an iffy solution for locking out features; a better solution is to make the signup process for an account as quick and easy as possible (though perhaps a free account could be used in combination with device or browser fingerprinting to temper abuse of the free trial system).
Related
Whilst this question has obviously been asked before, years have gone by since then. Apple has released a new NFC spec in that time, and further software updates have fuelled more speculation in this area.
A smartphone has an NFC chip. Is it possible to harness this to take an EMV payment from a contactless card or eWallet? This would obviously require an installed EMV kernel to securely process the payment and possibly a means of accessing the secure layer for any PIN entry.
As much as this may seem like an ambiguous question, clearly the hardware is capable. Is it possible / legal / licensed in any way yet? There is a service that claims to be working on it called PHOS.
Quite obviously, SO is not the right place for such a question as it's unrelated to programming. There's quite a lot of discussion regarding the topic and answers also will tend to be opinion based.
Up to this moment, it hasn't been possible on Apple devices (due to the closed ecosystem, not hardware incompatibility) and has become allowed on Android. Technically it has been possible for a while already, but regulations kept consumer-grade devices from being used for acceptance - they are still quite poor on the physical side, as they are designed neither to handle entries securely nor to generate an electromagnetic field that meets EMVCo requirements for shape and operating volume. Payment schemes have produced a list of special criteria for solutions based on consumer-grade devices, and the company you mentioned is one of many that have been working on it. There certainly are already some production deployments, with limits set by the schemes.
There might be changes in Apple's approach (especially as they acquired a company dedicated to such solutions), or there might not - that is just speculation. The fact is that consumer devices tend not to be as good as dedicated hardware, but only time will tell whether this stays true. Security research is ongoing; we shall see the results and how they affect company policy and further development in the area. It's just too early to tell.
I was reading documentation on some bad practices when building a website. MDN says this is a very old and bad practice, but that there are certain cases in which it is acceptable, such as device detection.
https://developer.mozilla.org/en-US/docs/Browser_detection_using_the_user_agent
If I were to build a mobile site and use the user agent string (UAS) to detect the device and send a user to a less data-intensive website, should I? I know there are fluid and responsive layouts, but most of those websites include rules for a fixed desktop width too. Are there any edge cases of devices that do not include "mobile" in their UAS?
I realise this is an old question but hopefully this isn't too late for you.
I would be very wary of using the UA alone to do anything for the reasons mentioned in the article you linked.
That said, there are plenty of situations where you can give a better user experience by using a device detection library like 51 Degrees and being aware of a few things.
In particular, you mention a less data-intensive version of the website. There is something of a trend in places like India, where poor-quality data connections are the norm, to use browsers like UC Browser and Opera Mini.
These work by going via a proxy and stripping out a lot of the heavier weight stuff in a web page. Needless to say, this can destroy your lovely ultra-modern, highly responsive interface.
51 Degrees will tell you if the browser is of this type with an attribute called IsDataMinimising and you can adapt accordingly, giving the user a better experience while also saving your bandwidth.
Full disclosure: I work for 51 Degrees.
I am going to develop a lesson on two platforms (first in WebGL and then a similar lesson in Unity 3D).
The aim of this research is to determine which of these platforms performs best in terms of speed, with a view to using it in e-learning environments.
My question is this:
How can I measure the performance (processor, memory, graphics card) for these platforms?
I would also appreciate any ideas or suggestions to improve this research.
WebGL and Unity are not platforms. Unity is a library that has support for multiple platforms; its performance depends on what hardware it's running on. WebGL is a JavaScript API for browsers that allows them to access OpenGL ES 2.0. This also isn't a platform; it is utterly dependent on the hardware it is running on.
Sure, each incurs overhead, but they also do completely different things. Even if one is seen as faster for a particular piece of hardware, that doesn't mean that you can use it. Unity makes applications. Something you download and install. WebGL is for web pages: HTML+JavaScript. The reasons to use one are not the same reasons you would have to use the other.
Making a "WebApp" is very different from making a regular application. You generally decide first off whether you want to make a WebApp or a regular application, then use the tools that are available to the one you pick.
There are platforms that don't support WebGL. Namely, Internet Explorer. Microsoft has already stated that they aren't going to implement WebGL. So WebGL's performance on IE is effectively 0.
Also, WebGL is a low-level rendering API; Unity is a game engine. Unity provides more functionality towards making a game than WebGL, so there are productivity differences you must take into account.
Your desire to compare the performance of these simply is not the most useful criterion for deciding which one to use.
OK, your later answer clued me in to the idea that you're focusing on browser-based tools.
WebGL is not available on Internet Explorer. So again, half of your customer base is gone. However, Unity's browser plug-in is a plug-in and therefore must be downloaded by the user. Quite a few users are against that. Also, Unity's browser plug-in doesn't work on mobile systems; you would be expected to write an app for those.
So which matters more to you: reaching out to mobile users (where WebGL is available), or reaching out to Internet Explorer users? Again, this is something you need to deal with long before you answer questions of performance.
When writing DirectX applications, obviously it's desirable to support the user suspending the application via Alt-Tab in a way that's fast and error-free. What is the best set of practices for ensuring this? Things that need to be addressed include:
The best methods of detecting when your application has been alt-tabbed out of and when it has been returned to.
What DirectX resources are lost when the user alt-tabs, and the best ways to cope with this.
Major things to do and things to avoid in application architecture for purposes of alt-tab support.
Any significant differences between major DirectX versions as they apply to the above.
Interesting tricks and gotchas are also good to hear about.
I will assume you are using C++ for the purposes of my answers, but if you can afford to use C#, XNA (http://creators.xna.com/) is an excellent game platform that handles all of these issues for you.
1]
This article is helpful for handling Windows events in the window procedure to detect when a window loses or gains focus; you could handle this in your main window: http://www.functionx.com/win32/Lesson05.htm. Also, check out the WM_ACTIVATEAPP message here: http://msdn.microsoft.com/en-us/library/ms632614(VS.85).aspx
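For reference, here is a minimal sketch of handling WM_ACTIVATEAPP in a window procedure; g_appActive is a hypothetical flag that your game loop would check each frame.

    // Minimal sketch: WM_ACTIVATEAPP arrives with wParam = TRUE when the
    // application gains focus and FALSE when it loses it.
    #include <windows.h>

    static bool g_appActive = true;   // hypothetical flag read by the game loop

    LRESULT CALLBACK WndProc(HWND hWnd, UINT msg, WPARAM wParam, LPARAM lParam)
    {
        switch (msg)
        {
        case WM_ACTIVATEAPP:
            g_appActive = (wParam != FALSE);
            return 0;
        case WM_DESTROY:
            PostQuitMessage(0);
            return 0;
        }
        return DefWindowProc(hWnd, msg, wParam, lParam);
    }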
2]
The graphics device is lost when the application loses focus from full-screen mode. Microsoft offers an article on how to handle this: http://msdn.microsoft.com/en-us/library/bb174717(VS.85).aspx and there is also a lost device tutorial here: http://www.codesampler.com/dx9src/dx9src_6.htm
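As a rough sketch of the pattern those articles describe (g_pd3dDevice, g_d3dpp, OnLostDevice, OnResetDevice and RenderFrame are placeholders for your own device pointer, present parameters and resource/render code):

    // Typical per-frame check for a lost Direct3D 9 device.
    HRESULT hr = g_pd3dDevice->TestCooperativeLevel();
    if (hr == D3DERR_DEVICELOST)
    {
        // Device is lost and cannot be reset yet; skip rendering this frame.
        Sleep(50);
    }
    else if (hr == D3DERR_DEVICENOTRESET)
    {
        OnLostDevice();                        // release D3DPOOL_DEFAULT resources
        if (SUCCEEDED(g_pd3dDevice->Reset(&g_d3dpp)))
            OnResetDevice();                   // recreate D3DPOOL_DEFAULT resources
    }
    else if (SUCCEEDED(hr))
    {
        RenderFrame();                         // device is fine; render as normal
    }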
DirectInput can also have a device-lost error state; here is a link about that: http://www.toymaker.info/Games/html/directinput.html
DirectSound can also have a device-lost error state; this article has code that handles it: http://www.eastcoastgames.com/directx/chapter2.html
3]
I would make sure never to disable Alt-Tab. You probably want minimal CPU load while the application is not active, since the user most likely Alt-Tabbed away to do something else, so you could completely pause the application or reduce the frames rendered per second. If the application is minimized, you of course don't need to render anything at all. For a network game, my suggestion is to still reduce the frames rendered per second as well as the number of network packets handled, possibly even throwing away many of the incoming packets until the game is re-activated.
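A rough sketch of that idea in the main loop (ProcessWindowMessages, UpdateGame, RenderFrame and PumpNetwork are hypothetical helpers, and g_appActive is the flag set from WM_ACTIVATEAPP above):

    // Throttle the loop while the window is inactive instead of spinning
    // at full speed.
    while (ProcessWindowMessages())
    {
        if (g_appActive)
        {
            UpdateGame();
            RenderFrame();
        }
        else
        {
            PumpNetwork();     // keep the connection alive, drop excess packets
            Sleep(100);        // yield the CPU while alt-tabbed away
        }
    }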
4]
Honestly I would just stick to DirectX 9.0c (or DirectX 10 if you want to limit your target operating system to Vista and newer) if at all possible :)
Finally, the DirectX SDK has numerous tutorials and samples: http://www.microsoft.com/downloads/details.aspx?FamilyID=24a541d6-0486-4453-8641-1eee9e21b282&displaylang=en
We solved it by not using a fullscreen DirectX device at all - instead we used a full-screen window with the top-most flag to make it hide the task bar. If you Alt-Tab out of that, you can remove the flag and minimize the window. The texture resources are kept alive by the window.
However, this approach doesn't handle the device lost event happening due to 'lock screen', Ctrl+Alt+Delete, remote desktop connections, user switching or similar. But those don't need to be handled extremely fast or efficiently (at least that was the case in our application).
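For what it's worth, a minimal sketch of that kind of window setup (the window class name and helper function are illustrative, and the class is assumed to be registered elsewhere):

    // Borderless, top-most window covering the whole screen.
    HWND CreateGameWindow(HINSTANCE hInst, int screenW, int screenH)
    {
        HWND hWnd = CreateWindowEx(
            WS_EX_TOPMOST,                         // keep it above the task bar
            TEXT("GameWindowClass"), TEXT("Game"),
            WS_POPUP,                              // no border, caption or menu
            0, 0, screenW, screenH,
            NULL, NULL, hInst, NULL);
        ShowWindow(hWnd, SW_SHOW);
        return hWnd;
    }

    // On Alt-Tab: drop the top-most flag and minimize.
    // SetWindowPos(hWnd, HWND_NOTOPMOST, 0, 0, 0, 0, SWP_NOMOVE | SWP_NOSIZE);
    // ShowWindow(hWnd, SW_MINIMIZE);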
All serious D3D apps should be able to handle lost devices as this is something that can happen for a variety of reasons.
In DX10 under Vista there is a new "Timeout Detection and Recovery" feature that, in my experience, makes it common for graphics devices to be reset, which causes a lost device for your app. This seems to be improving as drivers mature, but you need to handle it anyway.
In DX8 and 9 (and 10?), if you create your resources (mainly vertex and index buffers and textures) using D3DPOOL_MANAGED, they will persist across lost devices and will not need reloading. This is because they are stored in system memory and the DX runtime copies them to video memory automatically. However, there is a performance cost due to the copying, and this is not recommended for rapidly changing vertex data. Of course, you would profile first to determine whether there is a speed issue :-)
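For example, a sketch of creating a vertex buffer in the managed pool (Vertex, numVerts, g_pd3dDevice and g_pVB are placeholders for your own types and globals):

    // The managed pool keeps a system-memory copy, so the buffer survives a
    // device reset without being reloaded.
    HRESULT hr = g_pd3dDevice->CreateVertexBuffer(
        numVerts * sizeof(Vertex),       // buffer size in bytes
        D3DUSAGE_WRITEONLY,              // usage flags
        D3DFVF_XYZ | D3DFVF_TEX1,        // vertex format
        D3DPOOL_MANAGED,                 // managed pool
        &g_pVB,
        NULL);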
How do companies like Valve manage to release games to all three major gaming platforms? I am interested in the best-practices regarding code sharing specifically between Windows, Xbox360 and PS3, since the ideal solution is to reuse as much code as possible instead of rewriting the whole thing for every platform.
It's not any different than writing platform-independent code in other contexts. Hide platform-specific details (input, window interaction, the main event loop, threading, etc) behind generic interfaces, and test regularly on all the platforms you intend to support.
Note that the Cell's threading model is unusual enough that doing threading "generically" takes some care. I am not a Valve employee and I know none of their secrets, but it's my understanding that most game developers who want to target the PS3 use a job queue that the individual Cell processors grab tasks off of as needed. This isn't necessarily the best way to use the Cell, but it generalizes nicely to more conventional threading models (like, for example, the one that the PC and the 360 both use).
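To make the idea concrete, here is a minimal job-queue sketch using standard C++ threads rather than anything Cell-specific (class and member names are illustrative):

    #include <functional>
    #include <mutex>
    #include <queue>

    // Producers push self-contained jobs; workers (SPUs, hardware threads,
    // a thread pool...) repeatedly try to pop and run them.
    class JobQueue
    {
    public:
        void Push(std::function<void()> job)
        {
            std::lock_guard<std::mutex> lock(m_mutex);
            m_jobs.push(std::move(job));
        }

        bool TryPop(std::function<void()>& job)
        {
            std::lock_guard<std::mutex> lock(m_mutex);
            if (m_jobs.empty())
                return false;
            job = std::move(m_jobs.front());
            m_jobs.pop();
            return true;
        }

    private:
        std::mutex                        m_mutex;
        std::queue<std::function<void()>> m_jobs;
    };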
There's a bunch of Game Developer Magazine articles and GDC talks on the subject. In fact, since you mentioned Valve, they delivered a talk describing their approach at GDC08.
This is really a huge subject that I could talk about (and have talked about) for hours upon hours, but the elevator summary is:
Determine which parts of the engine are completely platform-specific and put them behind an abstraction. File and asset loading, for example, need to be rewritten for each console; but you can hide that behind an IFileSystem interface which provides a uniform API that the game code talks to (see the sketch after this list).
The PS3 makes this hard because its abstraction point has to be someplace completely different from the other platforms. Even game features like collision and nav will have to be written differently for the Cell.
Try to keep leaf game code (entities, AI, sim) as platform-agnostic as possible...
But accept that even the leafiest of game code will sometimes need some platform-specific #ifdefs for perf or memory or TCR reasons. A lot of UI will have to be rewritten because the manufacturers have conflicting certification requirements.
Anyone who says the words "I'm not worried about performance" or "memory isn't an issue" shouldn't be on the payroll.
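As a sketch of the first point above, the abstraction might look something like this (the names are illustrative, not from any particular engine, and the bodies are stubbed out):

    #include <cstddef>

    // Game code talks only to the interface; each platform ships its own
    // implementation, selected at build time or at startup.
    class IFileSystem
    {
    public:
        virtual ~IFileSystem() {}
        virtual bool   Open(const char* path)              = 0;
        virtual size_t Read(void* buffer, size_t maxBytes) = 0;
        virtual void   Close()                             = 0;
    };

    class Win32FileSystem : public IFileSystem
    {
    public:
        bool   Open(const char*)   { return false; }  // would call CreateFile
        size_t Read(void*, size_t) { return 0;     }  // would call ReadFile
        void   Close()             {               }  // would call CloseHandle
    };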
This question can be divided up into two separate questions. "How can I write portable code?" and "What are the divergent requirements of mainstream gaming platforms?".
The first question is relatively easy to answer. Best practices for abstracting your non-portable code are covered in Write Portable Code:
http://books.google.ca/books?id=4VOKcEAPPO0C&printsec=frontcover
Turning theory into practice, the Quake 3 source code does a pretty good job of dividing the different platforms into separate areas of a C codebase, available at http://www.idsoftware.com/business/techdownloads/. However, it does not demonstrate C++ patterns such as abstract interfaces implemented once per platform.
The second part of your question, "What are the divergent requirements of mainstream gaming platforms?" is tougher. However, it is notable that your largest areas of change are still your renderer, your audio subsystem and your networking.
Each console platform has a series of certification requirements, available under an agreement with the respective console owners. The requirements drive consistency in user experience and are not focused on gameplay or qualitative, high level issues. For instance, your game may need to display a reasonably interesting animating loading screen, and black screens are unacceptable.
Getting your hands on this documentation as soon as possible is key to making the right choices in developing for a specific console platform.
Finally, if you can't get your hands on a console devkit, I suggest you port your code to the Mac from Windows. The Mac gets you an OS port, ensuring you are not tied to Windows, as well as a processor port if you support universal binaries. This ensures your code is endian-agnostic.
If you support both PC and Mac, you will be well positioned to support a third platform, should you gain access to it in the future.
Addendum: You wrote:
"the ideal solution is to reuse as much code as possible instead of rewriting the whole thing for every platform"
In many game porting scenarios, the ideal solution is not to reuse as much code as possible, but to write the optimal code for each platform. Code can be reused between projects and is relatively inexpensive as compared to the content that the engine takes in. A more reasonable goal is to aim for lowest common denominator content that runs on all platforms without modification (a build phase that packs the content for media is okay).
It's great to do simultaneous development. You find all kinds of bugs you wouldn't find doing just one platform.
I remember that programmers in DOS had null pointers all the time because writing to low memory didn't immediately crash them. When you ported to an Amiga, Atari ST, or Macintosh, boom! I remember telling a DOS programmer that he had a couple of null pointers in an already-shipped game. He thought for a couple of seconds and grinned, "That explains a few things."
Now that games have such large budgets, it's important to ship them all at the same time so you don't waste marketing and ad budgets.
My advice on simultaneous development is to pick one lead platform, but never let the other platform(s) get more than a week behind. It will become obvious as you program which parts of the code are common to all platforms and which are different. Pull out the differences into one or more platform-specific areas.
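One common way to keep those platform-specific areas in a single place is a small platform header that everything else includes (the file names here are illustrative, and the predefined macros vary by toolchain):

    // platform.h - route to the per-platform implementation headers.
    #if defined(_WIN32)
        #include "platform_win32.h"
    #elif defined(_XBOX)
        #include "platform_xbox360.h"
    #elif defined(__CELLOS_LV2__)
        #include "platform_ps3.h"
    #else
        #error "Unsupported platform"
    #endif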
My experience is in C/C++. It's a bigger problem if you have to port across different languages (say, Java and Objective-C).
A few years ago the Opera CEO said in an interview that the key to developing for independent platforms is to move away from any single OS/platform's libraries. He went on to say that they developed their own libraries that improve on OS performance.
My assumption is that big companies will have a common core team plus separate Xbox, PS, Windows, FooOS teams. Each platform needs to be tweaked differently and requires different implementation methods. I don't think they use one source for all platforms; rather, they build one for each OS, thereby improving efficiency. I remember EA used to release some console games earlier than the PC versions, and vice versa.
Another issue is that different consoles have different hardware thus requiring different programming techniques.
There are two extremes: build one source that fits all (Java, for instance), with the risk of inefficiency, or write 40 versions, one optimized for each platform.
Back when I had a friend into educational computer games (before The Learning Company gutted the field), he was a great fan of creating cross-platform libraries for doing everything.
This is easier for games than other apps. If you have a word processing app to run on the Mac and Windows, for example, it really does need to look and behave like a Mac app on the Mac, and a Windows app on Windows. Write a game, and it doesn't have to conform to the native behavior, look, and feel.
If you want open source examples, you could look at the source code of the Quake 1, 2 and 3 engines. They are structured quite portably. (Of course, there is no PS3 or Xbox 360 support, but the same principles apply.)
http://www.idsoftware.com/business/techdownloads/