I want to create an app (built with Flutter) that uses the OCR features of Google Cloud Vision (https://cloud.google.com/vision/). In the app, the user takes a picture with the camera, and Google Cloud Vision extracts the text from it.
I found several resources, but I am not sure which one is the right one.
1) https://pub.dev/documentation/googleapis/latest/vision.v1/vision.v1-library.html
2) https://cloud.google.com/vision/docs/reference/rest#service-endpoint
3) https://pub.dev/packages/google_ml_vision
My questions are:
Which of the above can I use with my Flutter app?
Are the first and the third the same thing?
Is there any example of using the first or second approach?
Thanks very much.
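For reference, my understanding of the second (REST) approach is that you POST a JSON body to the `images:annotate` endpoint with the image base64-encoded. A minimal sketch of building that body (not verified against the docs; `API_KEY` and the image bytes are placeholders):

```python
# Sketch: build the JSON body for a Cloud Vision TEXT_DETECTION request
# (the REST approach, option 2). Field names follow the v1 images:annotate
# API as I understand it; API_KEY and the image bytes are placeholders.
import base64
import json


def build_ocr_request(image_bytes):
    """Return the request body for a single TEXT_DETECTION annotation."""
    return {
        "requests": [
            {
                # The image is sent inline, base64-encoded.
                "image": {
                    "content": base64.b64encode(image_bytes).decode("ascii")
                },
                # Ask Vision to run OCR on the image.
                "features": [{"type": "TEXT_DETECTION"}],
            }
        ]
    }


# The body would then be POSTed (e.g. with package:http in Dart) to:
#   https://vision.googleapis.com/v1/images:annotate?key=API_KEY
body = build_ocr_request(b"\x89PNG...")  # placeholder image bytes
print(json.dumps(body, indent=2))
```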
I would like to build an app using Ionic/Parse that allows me to take a picture with a mobile device camera and do text processing of the image. From what I gather, open-source libraries are a little finicky, so for the purposes of prototyping I was hoping to use Google Drive's OCR capabilities.
The user would take a picture of a document, my Cloud Code would send the picture to Google Drive and perform the OCR, and on OCR success the picture would be sent back to my Parse DB.
I am looking for some wisdom on this approach. Is this realistic, or am I just totally off my rocker? Is there perhaps a service that integrates the two things? Am I just going to waste the same amount of time getting this to work as I would trying to integrate an open-source OCR library? From an implementation perspective, would I run into authentication/data-format/what-have-you issues?
Hoping for some "been there, tried that, here are some useful lessons" answers.
Thanks!
Disclosure: I run this service.
The approach seems reasonable. http://ocrestful.com does exactly what you're describing, via an all-REST API. A permanently free tier is available.
Our project is in China, where all Google services are blocked.
Does Tango need any Google services, or is it self-sufficient and able to operate on its own? If it needs those services, it is basically useless for our case.
Basically, a normal Project Tango device's features do not need online Google services.
Motion tracking, area learning, and depth sensing should all work well.
You can download the sample apps' source code here, e.g. the C samples:
https://developers.google.com/project-tango/apis/c/
Try those; they won't need online Google services.
But there may be some apps that integrate Tango features with other Google services that do.
Unfortunately, since Google services are blocked in China, you can't get the BSP OTA or Play Store app updates, which means at some point you won't get new features or bug fixes on your devices.
If you need further help, contact project-tango-help@google.com.
AFAIK it does not. I can confirm that I've been using my devkit offline most of the time.
More generally, the core Tango services are all hosted on the device, and they were preinstalled on my devkit when I got it about 3 weeks ago. However, updates to these services come OTA via Google Play.
The three services Tango provides are motion tracking, area learning, and depth sensing. Depth sensing is provided directly by the sensors, and motion tracking obviously can't be done online in a performant way. AFAIK area learning also happens offline, but you can find more info here.
I have developed a bunch of facial-recognition algorithms using VC# 10 and Emgu CV, and now I am supposed to expose them in the cloud through a web API. I don't really want to rewrite the entire code, and I am hoping there is a flexible way of incorporating all the modules I have developed into a single web API deployed on Azure. I have tried converting them into DLLs and running them under XBAP, but that is a route I want to avoid. Any response to my query would be deeply appreciated.
I would like to develop a WP7 application that has a map in one portion of the display. Since it needs to be stand-alone, I need a utility that has built-in maps and does not need internet access in order to operate. Is there any way to do this in WP7? Please help me find a solution.
Not possible, since Bing Maps doesn't work offline.
I think you might be able to use your own maps in the Bing Maps control, so you could include them in your application, see here.
I wonder if there is a tool/framework available that supports testing Google Wave Gadgets outside Google Wave.
I know these two emulators (1 and 2), but with them I would still have to upload my gadget for every debugging run.
I am looking for a tool that displays the current state, allows to modify the state and to send the state back to the gadget.
Any ideas?
The two emulators you mentioned can be used in the way you described. You just have to download one and run it on a local web server. Then you can develop your gadgets offline, without having to upload them every time you want to test.
Here are links to the source for the gadget emulators:
http://github.com/vidarh/wave-gadget-emulator
http://github.com/avital/google-wave-gadget-emulator
There is a Google Wave Gadget API implementation available for Node.js, and this can be run as a standalone service supporting Wave Gadgets running in any web application.
https://github.com/scottbw/wave-node
It isn't 100% easy, but for the moment I would recommend setting up your own server on your local machine, until someone figures out how to wrap such a server into a usable one-click-install tool.
http://code.google.com/p/wave-protocol/wiki/Installation
For what it's worth, Google Wave has been open-sourced as Apache's Wave in a Box: http://www.waveprotocol.org/wave-in-a-box