Objective
I've looked at multiple libraries such as ZXing, but all of them let a Xamarin Forms application scan via the camera. I am looking for a laser-scanning solution and have yet to find one. Can anyone recommend one?
Additional Info
This is for reading barcodes and QR codes, with barcodes as the priority.
It is to be used on Android scanner guns (handhelds).
I can actually recommend a few, but it really depends on the volume you are initially going to be scanning, and on whether you want to pay per year, pay per scan, or buy outright (which I did 5 years ago, though I have since switched to annual licensing).
I have had the most success with Cognex.
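For context on how these guns usually surface scans: most Android scanner guns don't go through the camera at all. The built-in laser engine decodes the barcode, and a vendor "wedge" service either types the result as keystrokes or broadcasts it as an Intent. Below is a minimal sketch of the broadcast route, assuming a DataWedge-style wedge; the action and extra names are placeholders you would take from your gun's wedge profile, not real identifiers:

import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;
import android.content.IntentFilter;

// Minimal sketch: many scanner guns ship a "wedge" service that broadcasts
// each laser scan as an Intent. The action and extra names below are
// PLACEHOLDERS; they are configured in the gun's wedge profile and differ
// per vendor (check your device's wedge documentation).
public class ScanReceiver extends BroadcastReceiver {
    public static final String ACTION_SCAN = "com.example.scanner.SCAN"; // hypothetical
    public static final String EXTRA_DATA  = "com.example.scanner.DATA"; // hypothetical

    @Override
    public void onReceive(Context context, Intent intent) {
        String barcode = intent.getStringExtra(EXTRA_DATA);
        if (barcode != null) {
            // Hand the decoded barcode to your app (callback, event bus, etc.).
        }
    }

    // Register from an Activity or Service; unregister in onPause()/onDestroy().
    public static ScanReceiver register(Context context) {
        ScanReceiver receiver = new ScanReceiver();
        context.registerReceiver(receiver, new IntentFilter(ACTION_SCAN));
        return receiver;
    }
}

In a Xamarin Forms app the same receiver would be written in C# inside the Android head project; a vendor SDK (like the Cognex one recommended above) typically wraps this plumbing behind a higher-level listener.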
Related
Hello all.
So, I'm completely new to programming, especially mobile development. Considering the goals for a future app I'm designing, I've been thinking of using NativeScript. I was reading "The NativeScript Book" offered on NativeScript.org, and it discusses JIT compilation vs. pre-compiling (AOT, I guess? Correct me if I'm wrong on that). It also shows how a framework like Xamarin compiles down to a native package (.apk for Android and .ipa for Apple), whereas NativeScript runs in a JavaScript virtual machine.
Reading this, it seems like code that has been compiled before execution, rather than at the time of execution, would have a significant speed advantage, especially on phones, which aren't nearly as capable as modern desktop computers.
Can someone address this concern for me? It's likely just that I'm ignorant and don't understand things yet, so please enlighten me and help me learn.
Thanks :)
If you are new to programming, the performance of the framework should not be one of your priorities. You should start by focusing on other aspects, like portability or simplicity of the solution. Learning to write good software should be your top priority, rather than trying to write fast software.
Nonetheless, if this is really important to you, there are benchmarks available comparing the different mobile-oriented frameworks. For instance, the NativeScript website has one:
https://www.nativescript.org/blog/nativescript-and-xamarin
Check the 'Speed' section. It states, for instance:
In these tests, you can see that Xamarin is roughly 200 ms faster than NativeScript at startup time.
But as you'll notice when comparing performance, benchmarks always target only one aspect of the solution. This one in particular targets boot time; others target the sorting of large arrays, or HTTP request speed.
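To make concrete what such a benchmark actually measures, here is an illustrative sketch of the "sort a large array" variety (my own example, not taken from the linked article); the cross-framework versions simply run the same loop in C#, JavaScript, and Java and compare wall-clock times:

import java.util.Arrays;
import java.util.Random;

// Illustrative micro-benchmark of the "sort a large array" kind.
// A serious benchmark would warm up the JIT first and average many runs.
public class SortBenchmark {
    public static void main(String[] args) {
        int[] data = new Random(42).ints(1_000_000).toArray();

        long start = System.nanoTime();
        Arrays.sort(data);
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;

        System.out.println("Sorted 1,000,000 ints in " + elapsedMs + " ms");
    }
}

Note how narrow this is: it exercises one hot loop and says nothing about startup time or UI responsiveness, which is exactly the point above.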
It is impossible to get a global performance test indicating which solution is the best. I think a good example in that regard is the comparison of Xamarin vs. the native languages (Swift / Java). Check this page:
https://www.altexsoft.com/blog/engineering/performance-comparison-xamarin-forms-xamarin-ios-xamarin-android-vs-android-and-ios-native-applications/
Xamarin is close to the native solutions since it uses the native SDKs, but you will still notice big differences between Xamarin and native. So it would be even more complicated to compare Xamarin and NativeScript, which has a completely different logic!
In my opinion, you should start with the framework you like the most, and change it based on your experience if you are not satisfied with it.
I'm about to start my Bachelor's thesis (an estimated workload of around 180 hours of programming and 180 hours of actually writing the thesis). As I have zero experience with Google's Project Tango, I don't know whether my goal is achievable at all, or way too big without my realizing it.
I would like to develop a "blueprint app" that uses Tango to create a blueprint of a scanned area. That is, I want to walk around my apartment with the device in hand, scan it, and afterwards have a file in which I can see the dimensions of all rooms and see doors (and windows?), but not see things lying around such as clothes, books, or even shelves, chairs, or my guitar.
I believe that this is generally possible with Tango, although I can imagine it becomes quite hard as soon as I want to include some data but exclude the rest.
I realize that an answer heavily depends on the person. I have a little more than 3 years of programming experience in Java, and I have developed a small Android app, so I know the fundamentals of lifecycles etc. My last programming project was a simple web shop with a RESTful Spring backend and an AngularJS frontend. I use frameworks where possible and so far haven't had major problems with Google's APIs.
So my question is: am I crazy to even think this is possible, or do you (people with knowledge of Tango) think it could be done?
As shown in the Project Tango GTC video, some local features are extracted and tracked for motion estimation, which is then fused with accelerometer data.
Since any developer may need to track features to develop his/her apps, I was wondering if there is a way to get those features through the APIs.
Although it is possible to extract some points and retrieve their flow using the estimated 6DOF pose returned by the APIs, this adds extra overhead. Another issue with this approach is that the pure visual flow (including outliers) is not obtainable, as it is influenced by the IMU data.
So my question is: if these features are tracked using hardware-accelerated algorithms, how can we get them through the APIs without having to implement the tracking ourselves and do a redundant task?
Any answer and suggestion would be appreciated.
It is straightforward to compile OpenCV for the Tango with NVIDIA's TADP package. Use 3.0r4. You may need to merge some OpenCV4Android bits, but it's easy. The ES examples will fail on the device, but don't sweat it.
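To make the fallback concrete: since the APIs don't expose Tango's internally tracked features, the usual workaround is to run your own detector and tracker on the camera frames, which is exactly the redundant work the question describes. Here is a minimal sketch with the OpenCV4Android Java bindings, assuming OpenCV is loaded and you receive consecutive 8-bit grayscale frames as Mats:

import org.opencv.core.Mat;
import org.opencv.core.MatOfByte;
import org.opencv.core.MatOfFloat;
import org.opencv.core.MatOfPoint;
import org.opencv.core.MatOfPoint2f;
import org.opencv.imgproc.Imgproc;
import org.opencv.video.Video;

// Sketch of the "redundant task" from the question: detect corners in one
// grayscale camera frame and track them into the next with pyramidal
// Lucas-Kanade optical flow. `prev` and `next` are 8-bit grayscale Mats
// from the camera callback.
public class FlowTracker {
    public static MatOfPoint2f track(Mat prev, Mat next) {
        // Detect up to 200 good corners in the previous frame.
        MatOfPoint corners = new MatOfPoint();
        Imgproc.goodFeaturesToTrack(prev, corners, 200, 0.01, 10);

        MatOfPoint2f prevPts = new MatOfPoint2f(corners.toArray());
        MatOfPoint2f nextPts = new MatOfPoint2f();
        MatOfByte status = new MatOfByte();   // 1 = tracked, 0 = lost
        MatOfFloat err = new MatOfFloat();

        // Pure visual flow, unaffected by the IMU-fused 6DOF pose.
        Video.calcOpticalFlowPyrLK(prev, next, prevPts, nextPts, status, err);
        return nextPts;
    }
}

This runs on the CPU, so it carries the extra overhead the question mentions; it does, however, give you the pure visual flow, outliers included, independent of the IMU-fused pose.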
Google released the "Project Tango ADF Inspector" on the Play Store. I haven't actually had any time to play with it, but it's the first thing to offer any look inside that data. I think Google considers this data sensitive and is cautious in this area, with good reason: if you look for the starred "important" note on this page, you should get a feel for the sensitivity of that issue.
I'm currently porting a 3D C++ game from iOS to Android using the NDK. The rendering is done with GLES2. When I finished rewriting all the platform-specific stuff and ran the full game for the first time, I noticed rendering bugs: sometimes only parts of the geometry would render, sometimes huge triangles would flicker across the screen, and so on.
I tested it on a Galaxy Nexus running 4.1.2. glGetError() returned no errors, and the game ran beautifully on all iOS devices. I started suspecting a driver bug, and after hunting for many hours I found out that using VAOs (GL_OES_vertex_array_object) caused the trouble. The same renderer worked fine without VAOs and produced rubbish with VAOs.
I found this bug report on Google Code. I also saw the same report on the IMG forums, where a staff member confirmed that it is indeed a driver bug.
All this made me think: how do I handle cases of confirmed driver bugs? I see two options:
Not using VAOs on Android devices.
Blacklisting specific devices and driver revisions, and not using VAOs on these devices.
I don't like either option.
Option 1 punishes all users who have a good driver. VAOs really boost performance, and I think it's a really bad idea to ignore them because one device has a bug.
Option 2 is pretty hard to do right. I can't test every Android device for broken drivers, and I expect the list to change constantly, making it hard to keep up.
Any suggestions? Is there perhaps a way to detect such driver bugs at runtime without testing every device manually?
Bugs in OpenGL ES drivers on Android are a well-known thing, so it is entirely possible that you have hit a driver bug, especially if you are using advanced (not-so-well-tested) features like GL extensions.
In a large Android project we usually fight these issues using the following checklist:
Test and debug our own code thoroughly, checking it against the OpenGL specification to make sure we are not misusing the API.
Google for the problem (!!!)
Contact the chipset vendor (usually they have a form on their website for developers to submit bugs, but once you have successfully submitted 2-3 real bugs you will know the direct emails of the people who can help) and show them your code. Sometimes they find bugs in the driver; sometimes they find API misuse.
If the feature doesn't work on a couple of devices, just create a workaround or fall back to a traditional rendering path (see the sketch after this list).
If the feature is not supported by the majority of the top-notch devices, just don't use it; you will be able to add it later once the market is ready for it.
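For the VAO case specifically, the workaround in point 4 usually takes the shape of a single capability check at context-creation time: require the extension and consult a small deny-list of renderer strings known to be broken. A minimal sketch follows (the deny-list entry is just an example based on the Galaxy Nexus GPU from the question; you would grow the list from your own bug reports):

import android.opengl.GLES20;

// Decide once, at GL-context creation time (call on the GL thread), whether
// the VAO fast path is safe. Combines an extension check with a small
// deny-list of renderer strings known to ship broken VAO drivers. The entry
// below is an EXAMPLE (the Galaxy Nexus GPU from the question); populate
// the list from your own bug reports.
public final class GlCaps {
    private static final String[] VAO_DENYLIST = {
        "PowerVR SGX 540",   // example entry, per the IMG forum bug report
    };

    public static boolean useVaos() {
        String extensions = GLES20.glGetString(GLES20.GL_EXTENSIONS);
        String renderer   = GLES20.glGetString(GLES20.GL_RENDERER);
        if (extensions == null || !extensions.contains("GL_OES_vertex_array_object")) {
            return false;                 // extension absent: plain VBO path
        }
        for (String bad : VAO_DENYLIST) {
            if (renderer != null && renderer.contains(bad)) {
                return false;             // known-broken driver: fall back
            }
        }
        return true;                      // VAO fast path
    }
}

A stronger runtime probe is also possible: draw a test triangle once through a VAO and once without into a small FBO, read both back with glReadPixels, and disable VAOs if the images differ. That catches unknown-broken drivers at the cost of a little startup work.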
For my university, I (and three others) are searching for a project that utilizes at least one embedded device, web services or other web technology, and a graphical user interface.
Currently we are looking at developing a unified remote: an extendable application on a cell phone through which you can control your media center. Any ideas or advice on this will be appreciated, though it is not the focus of this question.
We are having a hard time finding interesting (or fun) projects that we could work on for a complete semester. Any ideas will be greatly appreciated. The software will be released as free software (GPL or BSD license).
We all have a BSc in Software Engineering.
EDIT: I am very pleased with the suggestions so far. Thanks to everyone, and keep it coming.
How about a follower: you carry a device, and as you move from room to room in your house, devices configure themselves to your preferences (lights, music, etc.). If two people are in the room, some precedence rules apply.
Would that be possible based just on the presence of a mobile phone?
Another idea (off the top of my head):
A work-environment assurance thing. We programmers like to develop in nice, quiet environments. Unfortunately, some people tend to annoy us with their disturbing behaviour (or just by being loud).
So the project could be to create devices which track the stress level (sweat level, pulse, etc.) of each individual and their impact on others.
An example: one individual is very loud (the device should measure this), and others around him become stressed and/or unfocused because of it. The server-side software should then detect this and warn him to quiet down a bit, to improve the work environment.
Comments?
What do you peeps like doing? Build an app for it.
So, if you like drinking coffee, build an application which will find the nearest frothy coffee shoppe (or, if you're particular, the nearest Peets/Starbucks/Whatever-ocino). This idea works for beer too.
If you buy stuff off eBay, build a sniper app.
If you enjoy playing frisbee, build an app which locates your nearest friends and sends them a text asking whether they want to skip lectures and go to the park.
Heck, you could even build an app which monitors your SO questions and alerts you when you get an answer (although I don't know whether the data services SO currently offers will be up to the job).
The standout companies that have made great universal (programmable) remotes are Logitech and Philips.
One of the big problems with these types of devices is the ability of the general consumer to actually program all of their various devices. Logitech has done an outstanding job of providing a fairly simple web-based setup experience that then drives a very usable universal remote.
I would definitely look at what they have done for some ideas on universal remote controls.
How about an app and hardware that will tell me when my wife's plants need watering? (It's somehow my fault if they don't get watered.)
OK then: the recipe-generating fridge. RFID tags on the contents tell it what's available and the expiry dates. The database knows the recipes. The fridge emails/texts you to say "buy some mushrooms and you can have a delicious ham and mushroom omelette while the eggs are still fresh."
Benjamin and all those aspiring to do embedded projects ...
When you start a project, especially in embedded systems, you need to understand that the hardware is not your PC but some special device, and every sensor will be a transducer in itself. One thing that really matters to students is that everything costs money, and hardware can be costly.
So, it is good to make sure that the idea is such that:
It can be completed by the project members within the given timeframe.
All the required development tools, hardware included, can actually be bought.
Above all, it is good to ensure that the project enables you to learn something useful for your career...
To do all this, it is better to set some achievable goals.
Develop a system with which you can program the lighting of your house: you set up the schedule once, and everything works automatically.
I really love working with the Atmel STK1000/STK1006/STK1002 development boards for the AVR32: ATSTK1000
2x Ethernet
QVGA LCD
USB 2.0
SD/MMC
CompactFlash
Embedded Linux support
IR
Audio
PS/2 interfaces
UARTs
and more
Atmel family page:
AVR32 family home
Online forums:
Forums for the CPU