Zooming or enhancing scanning performance on ZXing - Xamarin

I'm using ZXing.Mobile.Forms (ZXingScannerPage) in a Xamarin project for an app in which users would have to scan QR codes and barcodes that are mounted fairly high up (say, around 3 to 4 meters from the ground). The problem is, that's too far for the camera to detect them!
I've already tried more than one phone, one of them fairly high end. Is there a way to add zooming functionality, or to extend the scanner's maximum range? If not, is there a better alternative to ZXing? Or will I just have to tell my bosses to hang all the labels lower?

Related

Unity - How to get started working on a large terrain/map?

I want to begin working on a big sandbox game with lots of oceans and islands. This will obviously require a very big map.
I've spent the last couple of days researching the best methods for doing this, and so far I have figured out that:
I need to split the terrain into tiles/chunks.
I need to have the player only load those tiles/chunks that are closest to them and unload those further away.
That being said, I would rather avoid flooding my game with a bunch of assets; I want to handle things on my own and hard-code these systems myself.
Some questions:
Can I just create a massive 20k x 20k terrain using Unity's built-in terrain editing system and then worry about splitting and loading/unloading the tiles later?
Or do I need to build my big terrain in an external program, import it, and then handle the splitting and loading from there?
Also, when it comes to multiplayer, I assume I would basically just need to do the same thing for each client?
I would appreciate any other tips or guidance on doing this as well. Thanks.
Splitting into multiple chunks that aren't too small seems to be the better option, maybe something like four quarters; that way you are not flooding the scene with assets, and you would disable all quarters except the one the player is in.
The problem with making one big terrain is that, while you could get a tool from the Asset Store to split your terrain later, you will run into headaches with resolutions and sizes, so overall I think it's better to split it into four pieces (or thereabouts) from the start.
You would have a trigger point where you want the game to load the terrain, and depending on which trigger point the player entered you would load the corresponding terrain piece (see the sketch below).
For multiplayer, you would do the same thing for each client: build your big terrain as four smaller terrains and load the corresponding quarter. Tell me if I missed something.
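As a rough illustration of the trigger-point idea, something like the script below could sit on an invisible trigger volume at each quarter boundary (a sketch only; the field names and the "Player" tag are assumptions, and in a real project you might stream scenes additively with SceneManager.LoadSceneAsync/UnloadSceneAsync instead of toggling GameObjects):

```csharp
using UnityEngine;

// Attach to an invisible trigger collider placed where two terrain quarters meet.
// Assumes the collider has "Is Trigger" enabled and the player object is tagged "Player".
public class TerrainQuarterTrigger : MonoBehaviour
{
    public GameObject quarterToActivate;    // the quarter the player is entering
    public GameObject[] quartersToDisable;  // every other quarter

    void OnTriggerEnter(Collider other)
    {
        if (!other.CompareTag("Player"))
            return;

        // Keep only the quarter the player just entered active.
        quarterToActivate.SetActive(true);
        foreach (var quarter in quartersToDisable)
            quarter.SetActive(false);
    }
}
```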

Using windows phone combined motion api to track device position

I'd like to track the position of the device with respect to an initial position, with high accuracy (ideally), for motions at a small scale (say < 1 meter). The best bet seems to be using motionReading.SensorReading.DeviceAcceleration. I tried this, but ran into a few problems. Apart from the noisy readings (which I was expecting and can tolerate), I see some behavior that is conceptually wrong. For example, if I start from rest, move the phone around, and bring it back to rest, periodically updating the velocity vector along all dimensions in the process, I would expect the magnitude of the velocity to end up very small (ideally 0), but I don't see that. I have extensively reviewed the available help, including the official MSDN pages, but I don't see any examples where the position/velocity of the device is updated using the acceleration vector. Is the acceleration vector that the API returns (at least in theory) supposed to be the rate of change of velocity, or something else? (FYI - my device does not have a gyroscope, so the API is going to be the low-accuracy version.)
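For reference, the integration loop I'm attempting looks roughly like the sketch below (the 9.81 factor assumes DeviceAcceleration is reported in gravitational units with gravity already removed, which is my reading of the docs but may be wrong, and I've left out any filtering):

```csharp
using System;
using Microsoft.Devices.Sensors;
using Microsoft.Xna.Framework;

public class DeadReckoningTracker
{
    Motion motion;
    Vector3 velocity = Vector3.Zero;   // m/s
    Vector3 position = Vector3.Zero;   // meters, relative to the starting point
    DateTimeOffset lastTimestamp;
    bool hasLastTimestamp;

    public void Start()
    {
        if (!Motion.IsSupported)
            return;

        motion = new Motion();
        motion.TimeBetweenUpdates = TimeSpan.FromMilliseconds(20);
        motion.CurrentValueChanged += OnCurrentValueChanged;
        motion.Start();
    }

    void OnCurrentValueChanged(object sender, SensorReadingEventArgs<MotionReading> e)
    {
        if (!hasLastTimestamp)
        {
            lastTimestamp = e.SensorReading.Timestamp;
            hasLastTimestamp = true;
            return;
        }

        float dt = (float)(e.SensorReading.Timestamp - lastTimestamp).TotalSeconds;
        lastTimestamp = e.SensorReading.Timestamp;

        // Assumption: DeviceAcceleration is in g's with gravity removed,
        // so scale to m/s^2 before integrating.
        Vector3 accel = e.SensorReading.DeviceAcceleration * 9.81f;

        // Naive double integration: any bias or noise in accel accumulates
        // in velocity, which is exactly the drift I'm observing.
        velocity += accel * dt;
        position += velocity * dt;
    }
}
```

Even when the phone ends at rest, the velocity here never settles back near zero for me.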

How do you make and recognize gestures?

I am using a web cam to get a video feed and then performing motion tracking on that feed. The motion tracker continuously returns (x, y) coordinates.
I want to use these (x,y) to recognize gestures such as "swipe left", "swipe right", "swipe up" or "swipe down".
How do I make and store templates of these gestures, and how do I figure out/recognize when one of the gestures has happened?
Thank you in advance :)
PS: I am using Flex 4 and ActionScript 3.0. If someone could help me out with the logic, I can write it in ActionScript.
One approach I could see working would be to have a series of (x, y) coordinates representing points along the gesture. On a small scale, if a gesture that passed through your screen were graphed like this:
|1|1|1|
|1|0|0|
|1|0|0|
and represented as coordinates (with the upper-left corner as (0,0)):
(0,2)(0,1)(0,0)(1,0)(2,0)
Break the (x, y) coordinates up into two separate 2D arrays, pairing each value with the total distance traveled between the current coordinate and the first point (in this example it increments by 1 at each step), so you would have two arrays:
X:(0,0)(1,1)(2,2)
Y:(0,1)(1,1)(2,2)
Now do a least-squares fit on each array to find the closest representation of the change in x and the change in y as quadratic functions. Do the same for your pre-determined gestures, then plug the coordinates of the pre-determined gestures into the user's gesture's quadratic functions and see which one matches the closest. That is your gesture.
(I've never tried processing gestures, but I don't see why this wouldn't work)
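A minimal sketch of that fitting/matching step (shown in C# rather than ActionScript, but the logic ports directly; the template structure and names are made up, and the fit is a plain quadratic least-squares solved with Cramer's rule):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical template: T holds the cumulative distance traveled at each sample,
// X and Y hold the sampled coordinates at those distances.
class GestureTemplate
{
    public string Name;
    public double[] T, X, Y;
}

static class GestureMatcher
{
    // Least-squares fit of y = a + b*t + c*t^2 through (t, y) samples,
    // solved from the 3x3 normal equations via Cramer's rule.
    static double[] FitQuadratic(double[] t, double[] y)
    {
        int n = t.Length;
        double s1 = t.Sum(), s2 = t.Sum(v => v * v),
               s3 = t.Sum(v => v * v * v), s4 = t.Sum(v => v * v * v * v);
        double sy = y.Sum(), sty = 0, st2y = 0;
        for (int i = 0; i < n; i++) { sty += t[i] * y[i]; st2y += t[i] * t[i] * y[i]; }

        double[,] A = { { n, s1, s2 }, { s1, s2, s3 }, { s2, s3, s4 } };
        double[] b = { sy, sty, st2y };
        double det = Det3(A);
        var coeffs = new double[3];
        for (int c = 0; c < 3; c++)
        {
            var m = (double[,])A.Clone();
            for (int r = 0; r < 3; r++) m[r, c] = b[r];
            coeffs[c] = Det3(m) / det;
        }
        return coeffs; // [a, b, c]
    }

    static double Det3(double[,] m) =>
        m[0, 0] * (m[1, 1] * m[2, 2] - m[1, 2] * m[2, 1])
      - m[0, 1] * (m[1, 0] * m[2, 2] - m[1, 2] * m[2, 0])
      + m[0, 2] * (m[1, 0] * m[2, 1] - m[1, 1] * m[2, 0]);

    static double Eval(double[] c, double t) => c[0] + c[1] * t + c[2] * t * t;

    // Fit the user's gesture as x(t) and y(t), then score each stored template by
    // how far its points fall from those curves. Smallest total error wins.
    public static string Match(double[] t, double[] x, double[] y,
                               IEnumerable<GestureTemplate> templates)
    {
        double[] cx = FitQuadratic(t, x), cy = FitQuadratic(t, y);

        GestureTemplate best = null;
        double bestError = double.MaxValue;
        foreach (var template in templates)
        {
            double error = 0;
            for (int i = 0; i < template.T.Length; i++)
            {
                double dx = Eval(cx, template.T[i]) - template.X[i];
                double dy = Eval(cy, template.T[i]) - template.Y[i];
                error += dx * dx + dy * dy;
            }
            if (error < bestError) { bestError = error; best = template; }
        }
        return best == null ? null : best.Name;
    }
}
```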
You should divide your task into smaller subtasks. In computer vision there is no such thing as generic gesture detection that works out of the box in all environments.
First of all, you need to be able to detect motion at all. There are several ways to do this, e.g. background subtraction or blob tracking.
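As a tiny illustration of the background-subtraction idea (a sketch on raw grayscale arrays, shown in C#; porting the loop to ActionScript's BitmapData is straightforward):

```csharp
using System;

static class MotionDetector
{
    // 'reference' is a grayscale snapshot of the empty scene; 'frame' is the
    // current grayscale frame. A pixel counts as "moving" when it differs from
    // the reference by more than 'threshold'.
    public static bool[,] Detect(byte[,] reference, byte[,] frame, int threshold)
    {
        int h = frame.GetLength(0), w = frame.GetLength(1);
        var mask = new bool[h, w];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                mask[y, x] = Math.Abs(frame[y, x] - reference[y, x]) > threshold;
        return mask;
    }
}
```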
Then you need to extract certain features from your image, e.g. a hand. Again, there is more than one way to do this, starting with skin-color approximation/evaluation, which is very error-prone under varying lighting conditions, up to more advanced techniques that really try to "analyze" the shape of an object. Those algorithms "learn" over time what a hand should look like.
I can only recommend buying a decent book about computer vision and researching the web for articles etc. There are also libraries like OpenCV you can use to learn more about the implementation side. There should be several ports of OpenCV to ActionScript 3. I can also recommend the articles and tools from Eugene Zatepyakin (http://blog.inspirit.ru). He's doing great CV stuff with ActionScript 3.
Long story short, you should research motion tracking and feature extraction.
The best place to start is to read about how sign language recognition or trackpad input works, such as creating reference images and comparing them to user input. Specific to Adobe, there's the FLARToolKit, which is detailed in an augmented reality article on their website.
References:
Trackpad Science
Hand Gesture Recognition
Sign Language Recognition Research - PDF
Gesture Recognition Walkthrough - Video

Windows Phone 7 indoor maps control

I am developing an application for Windows Phone 7 which needs to display indoor maps. This is my first app for WP7. It should be fast and beautiful (with sliding animations, etc).
I see the following ways to implement it:
A movable canvas with polygons on it, but sliding is quite slow since I will have about 500 polygons.
Implement a back-buffer bitmap, but memory could be a problem.
Implement my own custom tile layer, but that's not so fast to implement.
Use the built-in map control and customize it somehow, but I am not sure whether that's possible.
A problem common to all of these solutions except the last one is that I have to implement sliding and zooming myself.
Are there any controls for this kind of thing? If not, and built-in map customization is not an option, what's the best way to implement sliding like in Bing Maps? I've done it on Windows Mobile 6 by writing some formulas, but I guess there should be a better way on WP7.
Depending on how you set your images up, you could take a look at the DeepZoomContainer on CodePlex; it has a WP7 control. You could also build your own version using the MultiScaleImage class. A good tip when it comes to manually animating in Silverlight is to use the object's transforms rather than directly setting its Canvas position properties (e.g. Canvas.SetLeft()). The reason for this is that the transforms are done on the GPU, making them much faster. If relevant, you can also use storyboards for fixed animations, as these also run on the GPU.
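To make the transform tip concrete, here's the sort of thing I mean (a sketch; mapCanvas stands in for whatever element hosts your map, and the distance/duration values are placeholders):

```csharp
using System;
using System.Windows;
using System.Windows.Media;
using System.Windows.Media.Animation;

static class MapAnimation
{
    // Animate the element's CompositeTransform instead of setting
    // Canvas.Left/Canvas.Top directly; transform animations run on the
    // compositor (GPU) thread, so they stay smooth.
    public static void SlideLeft(UIElement mapCanvas)
    {
        var transform = mapCanvas.RenderTransform as CompositeTransform;
        if (transform == null)
        {
            transform = new CompositeTransform();
            mapCanvas.RenderTransform = transform;
        }

        // Slide roughly one screen-width to the left over 300 ms.
        var slide = new DoubleAnimation
        {
            To = -480.0,                                 // placeholder distance in pixels
            Duration = TimeSpan.FromMilliseconds(300),
            EasingFunction = new QuadraticEase { EasingMode = EasingMode.EaseOut }
        };
        Storyboard.SetTarget(slide, transform);
        Storyboard.SetTargetProperty(slide, new PropertyPath(CompositeTransform.TranslateXProperty));

        var storyboard = new Storyboard();
        storyboard.Children.Add(slide);
        storyboard.Begin();
    }
}
```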

2D barcode vs 1D barcode - speed, accuracy, size

I want to implement a barcode for one of my mobile project requirements. The amount of data to be stored is very small (<25 alphanumeric characters). I want to know whether it's wiser to implement a 1D barcode or a 2D barcode (a QR code in particular) for this project. I would be really glad if someone could educate me on the following aspects from a 1D vs 2D perspective:
scanning speed
size (the minimum display size needed for the mobile camera to recognize it -- this is more crucial)
accuracy
Considered from a typical processing and SDK perspective (preferably ZXing).
I'd go with a QR code, particularly if you're planning on using a phone camera. QR codes have features (finder patterns) that make things like perspective correction easier and more reliable. They also have error correction (ECC) that helps eliminate false positives and correct varying amounts of bit-detection errors. If you look at the ZXing test suite, you'll find a number of false-positive 1D cases, since many 1D codes don't even have a checksum.
Speed is probably not an issue in either case if you know what you're trying to scan. The biggest computational cost in ZXing is going through all possible code types when you don't know what you're looking for. If you know the code type, the difference is not likely to be significant.
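For example, with the ZXing.Net.Mobile binding you can tell the scanner up front which symbology to expect, so it doesn't try every decoder on every frame (a sketch; option and class names vary a bit between ZXing ports and versions, and platform initialization is omitted):

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using ZXing;
using ZXing.Mobile;

class QrOnlyScanner
{
    public async Task<string> ScanAsync()
    {
        var options = new MobileBarcodeScanningOptions
        {
            // Restrict the search to the one format we expect.
            PossibleFormats = new List<BarcodeFormat> { BarcodeFormat.QR_CODE },
            TryHarder = true
        };

        var scanner = new MobileBarcodeScanner();
        var result = await scanner.Scan(options);

        return result != null ? result.Text : null;
    }
}
```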
The only thing about size is the number of pixels that have to be captured. In other words, a small code can be read if you hold the camera close to it, and a large code can be read from further away. All of this is subject to lighting conditions, camera focus (or lack thereof), and camera brightness adjustment. I can't see how any of these would favor 1D vs 2D, though.
