How to capture a high-resolution Google Maps image?

Recently I built a website called Google Map Customizer, which lets you customize the colors on a Google Map and get large, high-resolution images. The problem is that I currently have to use a third-party tool to capture the image. I am wondering if it is possible to do so with some native code instead (Java, maybe?).

Check it out here: Google Maps API

Related

Mobile Cross-Platform Camera Frame Extraction

Is there any cross-platform framework for mobile apps (Xamarin, Flutter, React-Native, etc.) that allows accessing frames from the camera's feed live?
In other words, is there any way to perform manipulations on live video (frame-by-frame) in cross-platform environments? (Similarly to this tutorial for iOS).
From what I can tell, in Flutter, for example, it's possible to display a live preview of the camera but not to access the frames; and besides some ghost-town questions, I couldn't find much online about it.
Xamarin allows you to access and use every feature of each platform.
The code will be platform-specific but written in C#. I have one project in my repo where I'm using Xamarin.iOS to overlay rectangle detection onto the camera's live feed. You can implement something similar using Xamarin.Android (with Android-specific APIs).
You can then create an abstraction that is consumed from a Xamarin.Forms app, or you can go with two separate C#-based native apps.

Computer vision: Google Tango

Tango is developed by Google and has an API used for motion tracking on mobile devices. I was wondering whether it could be used in a standalone Java application without Android (i.e., Java SE). If not, are there any APIs out there similar to Tango that track motion and perceive depth?
I am trying to capture the motion data from a video file, not from a camera/webcam, if that is possible at all.
Google's Tango API is only compatible with Tango-enabled devices, so it does not work on all mobile devices, only those that are Tango-enabled. If you try to use the API on a device that is not Tango-enabled, it won't work.
I think you should look into OpenCV. It's an open-source computer vision library that is compatible with Java and many other languages. It lets you analyze videos without needing much special sensor hardware (such as the raw depth sensors primarily used on Tango-enabled devices).
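To make that concrete, here is a rough sketch of reading a video file with OpenCV and estimating frame-to-frame motion with dense optical flow. I'm showing the Python bindings for brevity; the Java bindings expose equivalent classes (VideoCapture, Video.calcOpticalFlowFarneback). The file name is just a placeholder.

```python
# Open a video file and estimate per-frame motion with dense optical flow.
# "walkthrough.mp4" is a placeholder path.
import cv2

cap = cv2.VideoCapture("walkthrough.mp4")
ok, prev = cap.read()
if not ok:
    raise RuntimeError("could not read the video file")
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Dense optical flow: one 2D motion vector per pixel between consecutive frames.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude, _ = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    print("mean motion (pixels/frame):", float(magnitude.mean()))
    prev_gray = gray

cap.release()
```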
The Tango API is only available on Tango-enabled devices, which there aren't that many of. That being said, it is possible to create your own motion-tracking and depth-sensitive app with standard Java.
For motion tracking, all you need is an accelerometer and a gyroscope, which most phones now come equipped with as standard. You then basically integrate those readings over time, which gives you an estimate of the device's position and orientation. Note that the accuracy will depend on your hardware and implementation, but be prepared for it to be fairly inaccurate thanks to sensor drift and integration errors (see the answer here).
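The integration itself is just a couple of running sums. The sketch below shows the naive dead-reckoning idea with made-up accelerometer samples; in practice you would feed in real sensor readings, remove gravity first, and apply filtering, which is exactly where the drift creeps in.

```python
# Naive dead reckoning: integrate acceleration twice to get position.
# The samples here are made up; real readings come from the device's IMU
# and must have gravity removed first. Errors accumulate quickly (drift).
dt = 0.01  # seconds between samples (100 Hz)
accel_samples = [(0.0, 0.1, 0.0), (0.0, 0.1, 0.0), (0.0, 0.0, 0.0)]  # m/s^2

velocity = [0.0, 0.0, 0.0]
position = [0.0, 0.0, 0.0]

for ax, ay, az in accel_samples:
    for i, a in enumerate((ax, ay, az)):
        velocity[i] += a * dt            # first integration: a -> v
        position[i] += velocity[i] * dt  # second integration: v -> p

print("estimated position (m):", position)
```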
Depth perception is more complex and would depend on your hardware setup. I'd recommend looking into the excellent OpenCV library, which already has Java bindings, and making sure you have a good grasp of the basics of computer vision (calibration, the camera matrix, the pinhole model, etc.). The first two answers in this SO question should get you started on determining depth with a single camera.
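As a taste of the pinhole-model reasoning behind single-camera depth: if you know the camera's focal length in pixels (from calibration) and the real size of an object in the scene, its distance follows from simple proportion. The numbers below are purely illustrative.

```python
# Pinhole camera model: an object of real height H at distance Z appears
# h pixels tall when the focal length is f pixels:  h = f * H / Z.
# Rearranged: Z = f * H / h. All values below are illustrative.
focal_length_px = 800.0   # from camera calibration
real_height_m = 1.70      # known object height (e.g. a person)
pixel_height = 220.0      # measured height of the object in the image

depth_m = focal_length_px * real_height_m / pixel_height
print("estimated distance: %.2f m" % depth_m)  # ~6.18 m
```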

Is it possible to upload a Windows image to Google Compute Engine?

I'm wondering if there's a way to upload a Windows image to Google Compute Engine.
I've seen Google's tutorials for Linux-based OSes, but I couldn't find any reference to Windows except the fact that it is possible to install Windows software and that it's possible to create instances using their own images:
https://cloud.google.com/compute/docs/instances/windows and
https://cloud.google.com/compute/docs/instances/ms-licensing
If it is possible, I'd like to know what the process is for doing that using a Hyper-V image.
Thanks,
David.

Image manipulation on Google App Engine

I just found out that PIL support on Google App Engine (GAE) is limited to only some basic functions.
I'm attempting to deploy a GAE app that adds text over an image, and that is not possible with GAE's Images Python API.
So right now, I'm hoping to find alternatives:
Is there an external service I can call to modify the image?
The Python 2.5 runtime only supports a limited set of operations through the Images API. However, the Python 2.7 runtime supports the full PIL suite.
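On the 2.7 runtime, the original goal (drawing text over an image) can therefore be done directly with PIL. A minimal sketch, with the background image, coordinates, and colours as placeholders:

```python
# Draw text on top of an image with PIL and keep the result in memory.
from io import BytesIO
from PIL import Image, ImageDraw

# A solid background keeps the sketch self-contained; in the real app you
# would Image.open() the uploaded picture instead.
img = Image.new("RGB", (400, 200), (30, 30, 30))
draw = ImageDraw.Draw(img)
draw.text((20, 20), "Hello from App Engine", fill=(255, 255, 255))

# App Engine's filesystem is read-only, so write to an in-memory buffer.
out = BytesIO()
img.save(out, format="JPEG")
jpeg_bytes = out.getvalue()  # serve this in the response or store it
```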

How do I read a video camera in a Win32 C program?

I have this garden-variety USB video camera, and it came with two mini-apps: one that just lets you see what the camera sees, and one that records to an .avi file.
But what's the API if I want to grab images from the camera in my own C program? I am making the assumptions that it's (1) possible and (2) desirable to make some call and have a 2D array of pixel information filled in.
What I really want to do is tinker with image processing algorithms, and for that I'd really like to get my code around some live data.
EDIT -
Having had a healthy exposure to Linux, I can grasp how (ideally/in theory) you could open() the device, use ioctl() to configure it, and read() the data. And I'm virtually certain that that's not how Windows is going to present the API. Not knowing what function names Windows might use for a video device API, or even whether it has one, makes it difficult to look up, at least with the Win32 API search capabilities that I have at my disposal.
You'll probably need the DirectShow API, provided that's how the camera operates. If the manufacturer created their own code path, you'll need their API.
Your first step, as pointed out by ChrisBD, is to check if Windows supports your device.
If that is the case you have three possible Windows APIs for capture:
DirectShow
VFW (Video for Windows). This has more or less been replaced by DirectShow.
Media Foundation. The newest API, intended to replace DirectShow. AFAIK it is not fully implemented yet and is only available on Vista.
Of the three, DirectShow is the best choice. However, learning and using DirectShow is not a trivial task. An excellent example can be found here.
Another possibility is to use OpenCV. OpenCV is an image-processing library, so you can also use it to process the captured images. OpenCV has an image-capture API that provides a simpler abstraction and is easier to use than the Windows APIs.
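For comparison with the DirectShow route, here is roughly what the OpenCV capture path looks like. The Python bindings are shown for brevity; the C++ cv::VideoCapture interface is the direct equivalent, and each frame arrives as a plain 2D/3D pixel array you can hand to your own processing code.

```python
# Grab frames from the first attached camera and run a trivial processing step.
import cv2

cap = cv2.VideoCapture(0)          # device index 0 = first camera
if not cap.isOpened():
    raise RuntimeError("camera not found or not supported by the OS drivers")

while True:
    ok, frame = cap.read()         # frame is a height x width x 3 BGR array
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # stand-in for your algorithm
    cv2.imshow("preview", gray)
    if cv2.waitKey(1) & 0xFF == ord('q'):           # press 'q' to quit
        break

cap.release()
cv2.destroyAllWindows()
```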
The API is the way to go.
A good indication of whether the camera requires a bespoke API or not is to see whether it is recognised by a PC without the manufacturer's applications installed. If Windows has the drivers built in, then you should be able to use the Windows APIs to capture the images.
Alternatively, if you know which compression codec was used for the AVI file, you could unpack it.
Ideally, you would capture the video in a native format (YUV, RGB15 or similar), as then you can work on compression as well as manipulation.
