Google Maps vs Street View - inverted pitch parameter?

I wrote a PHP script that takes a regular Google Maps URL and fetches a static image from the Street View Static Image API.
But when I point Street View at the ground and run my script, I get an image of the sky, and vice versa.
Here is an example.
A typical Google Maps URL:
https://maps.google.com/?ll=54.899267,23.884749&spn=0.022086,0.062485&t=m&z=15&layer=c&cbll=54.898264,23.885077&panoid=eu75VjoUqNejdSOUJEoCdA&cbp=12,17.61,,0,36.53 (pitch = 36.53)
And here is the static image from the API:
http://maps.googleapis.com/maps/api/streetview?size=640x400&location=54.898264,23.885077&heading=17.61&pitch=36.53&fov=70&sensor=false
As you can see, the pitch value is the same (36.53), but the picture shows the sky.
If I invert the pitch (-36.53), everything is fine. (I can't show it here because of my reputation: no more than 2 links.)
Is this some kind of bug? I can't find any information about it.

It does appear that the values are inverted, but there is no bug.
The Google Maps URL parameters are, as far as I know, not officially documented, so the mistake here is relying on them.
The parameters for the Street View Image API, however, are documented:
*pitch (default is 0) specifies the up or down angle of the camera relative to the Street View vehicle. This is often, but not always, flat horizontal. Positive values angle the camera up (with 90 degrees indicating straight up); negative values angle the camera down (with -90 indicating straight down).*
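For reference, a minimal sketch of the conversion this implies (in JavaScript rather than the asker's PHP): take the undocumented cbp value from the Maps URL and negate its pitch component when building the documented Static API URL. The cbp field order (heading at index 1, pitch at index 4) is inferred from the example URL above, not from any official documentation.

// Sketch: build a Street View Static API URL from a classic Maps URL.
// The cbp layout ("12,17.61,,0,36.53") is an inference, not documented.
function mapsUrlToStaticImageUrl(mapsUrl) {
  var params = new URL(mapsUrl).searchParams;
  var location = params.get("cbll");   // "lat,lng" of the panorama
  var cbp = params.get("cbp").split(",");
  var heading = parseFloat(cbp[1]);
  var pitch = -parseFloat(cbp[4]);     // negate: the Maps pitch is the mirror of the API's
  return "https://maps.googleapis.com/maps/api/streetview" +
         "?size=640x400&location=" + location +
         "&heading=" + heading + "&pitch=" + pitch + "&fov=70&sensor=false";
}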

Depth component readRenderTargetPixels in Three.js?

Can depth pixel values be extracted from THREE.WebGLRenderer, similar to the .readRenderTargetPixels functionality? Basically, is there an update to this question? My starting point is Three.js r80. Normalized values are fine if I can also convert them to distances.
Related methods:
I see that WebGL's gl.readPixels does not support gl.DEPTH_COMPONENT like OpenGL's .glReadPixels does.
THREE.WebGLRenderTarget does support a .depthTexture via THREE.WebGLRenderer's WEBGL_depth_texture extension, although THREE.DepthTexture does not contain .image.data like THREE.DataTexture does.
I also see that THREE.WebGLShadowMap uses .renderBufferDirect with a THREE.MeshDepthMaterial.
Data types:
A non-rendered canvas can use .getContext('2d') with .getImageData(x,y,w,h).data to read the top-to-bottom pixels as a Uint8ClampedArray.
For a rendered canvas, render() uses getContext('webgl'), and a canvas's context may only be requested once, so getImageData cannot be used.
Instead, render to a target and use .readRenderTargetPixels(...myArrToCopyInto...) to access (copy out) the bottom-to-top pixels into your Uint8Array.
Any canvas can use .toDataURL("image/png") to return a String in the pattern "data:image/png;base64,theBase64PixelData".
You can't directly read the content of the framebuffer's depth attachment using readPixels, whether it's a RenderBuffer or a (depth) texture.
You have to write the depth data into the color attachment.
You can render your scene using MeshDepthMaterial, as in the shadow-mapping technique. You end up with the depth RGBA-encoded in the color attachment, and you can get it using readPixels (still RGBA-encoded). It means you have to render your scene twice: once for the depth and once to display the scene on screen.
If the depth you want matches what you show on screen (same camera/point of view), you can use WEBGL_depth_texture to render the depth and the display in one single render loop. It can be faster if your scene contains lots of objects/materials.
Finally, if your hardware supports OES_texture_float, you should be able to draw the depth data to a LUMINANCE/FLOAT texture instead of RGBA. This way you can directly read floating-point depth data and skip a costly decoding step in JS.
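A rough sketch of the two-pass MeshDepthMaterial approach described above, assuming the r80-era renderer.render(scene, camera, target) signature and that renderer, scene, and camera already exist:

// Pass 1: draw the whole scene with a depth material into a render target,
// then copy the RGBA-packed depth values back to the CPU.
var width = renderer.domElement.width;
var height = renderer.domElement.height;

var depthMaterial = new THREE.MeshDepthMaterial();
depthMaterial.depthPacking = THREE.RGBADepthPacking; // pack depth into 4 bytes

var target = new THREE.WebGLRenderTarget(width, height);

scene.overrideMaterial = depthMaterial;  // every mesh renders depth-only
renderer.render(scene, camera, target);
scene.overrideMaterial = null;           // pass 2 (normal render) would follow

var pixels = new Uint8Array(width * height * 4);
renderer.readRenderTargetPixels(target, 0, 0, width, height, pixels);
// Each pixel's 4 bytes hold normalized depth per the RGBADepthPacking
// scheme; decode them in JS and map through the camera's near/far
// planes to recover distances.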

Get image clicked position on different resolutions when the coordinates change on different devices

I have a responsive image that works across different mobile resolutions.
When I click anywhere on the image (top-left, bottom-left, etc.), I want to get the position on the image where the click happened. In my current implementation I do get a coordinate for the clicked spot.
Problem: every mobile device has a different resolution, so the same spot yields different coordinates. I need the clicked place on the image to map to the same position at every resolution.
Please tell me how I can resolve this, and which approach is best.
There have been some previous posts that try to achieve what you want; the basic logic is to compute a ratio:
var xratio = 225/420; // mouse x-coord divided by the element's width (420)
var yratio = 38/38;   // mouse y-coord divided by the element's height (38)
var x = 320*xratio;   // rescale to the 320px-wide reference image
var y = 38*yratio;
Please see the following references:
How to convert click coordinates of different dimensions into 320 dimension size?
https://developer.mozilla.org/en-US/docs/Web/API/Element/getBoundingClientRect
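Combining the ratio idea above with getBoundingClientRect from the second reference, a small sketch: the bounding rect gives the image's rendered size on the current device, so a click reduces to resolution-independent ratios that can be rescaled to one fixed reference size. The element id "photo" and the 320x240 reference size are made up for illustration.

// Sketch: resolution-independent click position on a responsive image.
document.getElementById("photo").addEventListener("click", function (e) {
  var rect = e.currentTarget.getBoundingClientRect();
  var xratio = (e.clientX - rect.left) / rect.width;  // 0..1 across the image
  var yratio = (e.clientY - rect.top) / rect.height;  // 0..1 down the image
  // Same spot on every device: scale the ratios to a fixed reference size.
  var x = Math.round(320 * xratio);
  var y = Math.round(240 * yratio);
  console.log("reference coords:", x, y);
});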

RMagick: Rotate image with origin (matrix transform)

tl;dr: How can I use RMagick to rotate an image around a given point?
I created a website that allows users to manipulate two images online and composites them server-side into a single image.
I use a regular CSS transform: rotate() for rotation on the client side.
On the server side I'm rotating the image using RMagick's rotate! method, but the results differ from the web version
(presumably because of an origin issue, i.e. at which point of the image the rotation takes place).
The web version rotates around the center of the image (transform-origin: 50% 50%). Unfortunately, RMagick doesn't by default.
I read through the RMagick docs and found affine_transform, which accepts a matrix and transforms the image. Is this the right method to use, and if so, how? I tried passing the CSS matrix to that function, but it doesn't work.
Somewhere in the RMagick documentation I read that Magick::Image#rotate accepts 3 parameters (degree, originX, originY), but my version says it only accepts 2 (and actually requires the second parameter to be a string...).
My code:
require 'rmagick'
include Magick
@label = Label.last
image = @label.image
json = @label.processing
image.background_color = "none"
image.resize!(json["size"]["width"], json["size"]["height"])
# how can I set the rotation origin to the center of the image?
image.rotate!(json["rotation"].to_i)
overlay.composite!(image, json["position"]["x"], json["position"]["y"],
Magick::OverCompositeOp)
overlay.write("output5.png")
The output I'm currently getting is this. The blue square is actually image in the code; the heart is overlay.
My desired output looks like this (ignore the background, border, and controls):
If I don't use rotation at all, both images are identical; that's why I assume it's a rotation issue. Both images are equal in width and height.
edit: Apparently only Magick::RVG::Image accepts the originX & originY parameters I mentioned above. I'm still not able to convert the current image into an RVG image. It might solve the issue if I could convert my Magick::Image into a Magick::RVG::Image.
Okay, I've found a solution to this problem. I have to use RVG, which is a module for creating vector graphics.
The #rotate(degree, originX, originY) method is defined on the RVG::Image class, so I had to wrap my Magick::Image object:
require 'rvg/rvg'
image = RVG::Image.new(magick_image, width, height, x, y)
canvas = RVG.new(width, height)
canvas.use image
overlay.composite!(canvas.draw, ...)
I'm writing this on mobile; I will add a more detailed answer ASAP.

Unity3D: How to get the screen or world position of a UI element

I have UI elements (an image, etc.) on a Canvas attached to the camera.
There is a RectTransform, but how do I convert this data to screen or world coordinates and get the center point of the image?
I tried RectTransform.GetWorldCorners, but it returns zero vectors.
Use yourRectTransform.rect.center for the centre point in local space,
and yourRectTransform.TransformPoint to convert it to world space.
It is odd that RectTransform.GetWorldCorners doesn't work as stated; per one of the other answers, you need to call it after Awake (so layout can occur).
I found that both GetWorldCorners and TransformPoint only work in Start(), not in Awake(), as if we have to wait for the content to be laid out:
// Convert the reference area's rect corners from local to world space.
Vector3 min = referenceArea.rectTransform.TransformPoint(referenceArea.rectTransform.rect.min);
Vector3 max = referenceArea.rectTransform.TransformPoint(referenceArea.rectTransform.rect.max);
// Size and position another object to cover the same world-space area.
elementToResize.transform.localScale = new Vector3(Mathf.Abs(min.x - max.x), Mathf.Abs(min.y - max.y), 1f);
elementToResize.transform.position = referenceArea.rectTransform.TransformPoint(referenceArea.rectTransform.rect.center);
You can work with GetWorldCorners, but only on a child that has real dimensions; I got reasonable values for a child of a world-space canvas.
Following Huacanacha's guidance,
Vector2 _centerPosition = _rectTransform.TransformPoint(_rectTransform.rect.center);
will give you the world coordinate of the center of the image.

Displaying a portrait image in KML without it being rotated to landscape

I am trying to reference images with a greater height than width (portrait format) in KML for Google Earth; however, the image always comes out as landscape, i.e. rotated 90 degrees to the left, e.g.
<img id="id_photo" src="2012_01_21-dscf03.jpg" width="500"></img>
I've tried everything I can think of. Is there an image tag attribute to correct this, e.g. format="portrait"?
Thanks,
Walter
This sounds like an example of EXIF-only rotation, which Google Earth probably doesn't honour.
Some cameras 'rotate' an image so it's the right way up by setting a flag in the EXIF data; the raw JPEG itself is still stored in landscape format.
A display (or conversion) program should hopefully notice this 'rotation required' flag and rotate the image.
But Google Earth probably doesn't honour it, so you are just seeing the baseline image as it is actually stored (unrotated).
I recommend trying one of the applications mentioned here:
http://jpegclub.org/losslessapps.html
(many note they have automatic correction, so they should "fix" your JPEG files)
This is already an old thread, but I stumbled on the same problem and did not find a solution for my situation. Eventually I found a way around it, so I thought I'd share it here.
Basically, the solution is to rotate the offending images twice: once 90° to the left and then back again.
What you had was an image with a width larger than its height, but with an orientation tag that tells an application to rotate it 90° (which Google Earth does not).
After rotating it twice, it is an image with width and height switched, and an orientation tag that says not to rotate it.
Now any application, including Google Earth, will display it correctly.
I used ExifTool to write the tags for all my images to a CSV file, created from that a list of all the pictures to rotate, and used that list to tell IrfanView to rotate them twice.
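If you'd rather script the fix than click through IrfanView, here is a hedged sketch of the same idea (shelling out from Node.js; it assumes jpegtran and exiftool are installed and that the Orientation tag is 6, i.e. "rotate 90° CW"):

// Bake the EXIF rotation into the pixels, then mark the file as already
// upright, so viewers that ignore EXIF (like Google Earth) and viewers
// that honour it both display the image the same way.
const { execFileSync } = require("child_process");

function bakeRotation(src, dst) {
  // Losslessly rotate the JPEG pixels 90 degrees clockwise.
  execFileSync("jpegtran", ["-rotate", "90", "-outfile", dst, src]);
  // Orientation=1 means "no rotation needed" (-n writes the numeric value).
  execFileSync("exiftool", ["-n", "-Orientation=1", "-overwrite_original", dst]);
}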
