Image node not rendering properly on Vuzix M100 - Kudan

I forked your sample code to track my USB stick, like this:
// Initialise image trackable
ARImageTrackable trackable = new ARImageTrackable("Usb");
trackable.loadFromAsset("usb.jpg");
// Get instance of image tracker manager
ARImageTracker trackableManager = ARImageTracker.getInstance();
// Add image trackable to image tracker manager
trackableManager.addTrackable(trackable);
// Initialise image node
ARImageNode imageNode = new ARImageNode("eyebrow.png");
// Add image node to image trackable
trackable.getWorld().addChild(imageNode);
It works great on a Nexus 7.
When I run the same app on a Vuzix M100, my marker is detected, but the ImageNode (the eyebrow) is not rendered properly (it appears as a black image):

Try this (an ARLightMaterial with a brighter ambient term for the node's texture):
ARLightMaterial material = new ARLightMaterial();
material.setTexture(texture2D); // texture2D: the texture your image node is using
material.setAmbient(0.8f, 0.8f, 0.8f); // raise the ambient level so the node is no longer rendered black

Related

Attribution text not getting captured when using the image of the map canvas in Mapbox-GL-JS

I am using ESRI basemaps with Mapbox-GL-JS. I am trying to capture a screenshot of the map using the following code:
this.map.getCanvas().toBlob(function (blob) {
    canvasContext.strokeStyle = '#CCCCCC';
    canvasContext.strokeRect(leftPosition, topPosition, width, height);
    var img = new Image();
    img.setAttribute("crossOrigin", "anonymous");
    var srcURL = URL.createObjectURL(blob);
    img.onload = function () {
        canvasContext.drawImage(img, leftPosition, topPosition, width, height);
        URL.revokeObjectURL(srcURL);
    };
    img.src = srcURL;
});
I cannot figure out why the attribution on the map is not getting captured in the screenshot. I understand that I am only grabbing the canvas of the map here. I even tried adding text elements to the map canvas, and that doesn't work either. My markers and routes are captured in the image correctly. I also tried the same thing with the Mapbox basemap and ran into the same issue.
Any help is highly appreciated!
map.getCanvas() will only return the Map's canvas, not any of the HTML elements that sit over the map such as the controls, the Mapbox logo, or the attribution text. Sam Murphy has been working on an example showing how to capture the Map, including the logo and attribution text, to an image, which you can see at https://github.com/mapbox/mapbox-gl-js/pull/6518/files.
Since we can't easily capture an HTML element to an image in JavaScript, the attribution text is re-created in a canvas and drawn into the image.
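As a rough sketch of that idea (not the code from the linked PR), you can draw the attribution string yourself after the map canvas has been copied into your target canvas; canvasContext, leftPosition, topPosition, width and height are the variables from the question, and the attribution string here is just a placeholder:
var attribution = "© Esri | © OpenStreetMap contributors"; // placeholder text
canvasContext.font = "12px sans-serif";
canvasContext.fillStyle = "rgba(0, 0, 0, 0.75)";
canvasContext.textAlign = "right";
canvasContext.textBaseline = "bottom";
// Draw the text in the bottom-right corner of the captured area,
// roughly where Mapbox GL renders its attribution control.
canvasContext.fillText(attribution, leftPosition + width - 4, topPosition + height - 4);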

Xamarin.Forms.Maps custom polyline renderer does not render when in view

I am trying to follow this guide in order to draw a polyline on a map in a Xamarin.Forms app. It should track the user's position in real-time and update the polyline when new position data comes in.
I wrote a custom map renderer that will render the polyline, but for some reason it does not update when the map is in view. I have to navigate back to the main launch page and navigate to the mapping page again for it to update.
I extracted the minimum code to reproduce the problem, but it is still too much to paste here, so I hosted it on GitHub:
https://github.com/Steztric/MapWithWaylineSample
Could somebody please let me know what I am doing wrong? You can reproduce the problem by cloning the repo and running it.
You need to create a new renderer every time, so remove the class-level polylineRenderer variable and use a local one:
MKOverlayRenderer GetOverlayRenderer(MKMapView mapView, IMKOverlay overlayWrapper)
{
    IMKOverlay overlay = Runtime.GetNSObject(overlayWrapper.Handle) as IMKOverlay;
    if (overlay is MKPolyline)
    {
        var polylineRenderer = new MKPolylineRenderer(overlay as MKPolyline);
        polylineRenderer.FillColor = UIColor.Blue;
        polylineRenderer.StrokeColor = UIColor.Red;
        polylineRenderer.LineWidth = 3;
        polylineRenderer.Alpha = 0.4f;
        return polylineRenderer;
    }
    else
    {
        return null;
    }
}
You can also simplify things a little.
Define an MKPolyline currentWayline; field, then:
var wayline = MKPolyline.FromCoordinates(coords.ToArray());
//IMKOverlay overlay = Runtime.GetNSObject(wayline.Handle) as IMKOverlay;
nativeMap.AddOverlay(wayline);
currentWayline = wayline;

Getting an image from an XMLHttpRequest and displaying it

I have a web service that gives me a (JPEG) image. What I want is to take this image, convert it into a data URI, and display it on an HTML5 canvas, like this:
obj = {};
obj.xmlDoc = new window.XMLHttpRequest();
obj.xmlDoc.open("GET", "/cgi-bin/mjpegcgi.cgi?x=1",false, "admin", "admin");
obj.xmlDoc.send("");
obj.oCanvas = document.getElementById("canvas-processor");
obj.canvasProcessorContext = obj.oCanvas.getContext("2d");
obj.base64Img = window.btoa(unescape(encodeURIComponent( obj.xmlDoc.responseText )));
obj.img = new Image();
obj.src = 'data:image/jpeg;base64,' + obj.base64Img;
obj.img.src = obj.src
obj.canvasProcessorContext.drawImage(obj.img,0,0);
Unfortunately, this piece of code doesn't work; the image is not painted on the canvas at all (it also seems to have a width and height of 0; could it have failed to decode? I get no exceptions). img.src looks like data:image/jpeg;base64,77+977+977+977+9ABBKRklG....
Resolved: it turns out I should have overridden the MIME type with:
req.overrideMimeType('text/plain; charset=x-user-defined');
and set the response type with:
req.responseType = 'arraybuffer';
(see this; you should also make the request asynchronous if you change the response type).
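Putting the resolution together, a minimal sketch might look like the following (the URL, credentials and canvas id are taken from the question; the rest is illustrative, not a drop-in fix):
var req = new XMLHttpRequest();
req.open("GET", "/cgi-bin/mjpegcgi.cgi?x=1", true, "admin", "admin"); // asynchronous
req.responseType = 'arraybuffer';
req.onload = function () {
    // Convert the raw bytes into a base64 data URI.
    var bytes = new Uint8Array(req.response);
    var binary = "";
    for (var i = 0; i < bytes.length; i++) {
        binary += String.fromCharCode(bytes[i]);
    }
    var img = new Image();
    img.onload = function () {
        var canvas = document.getElementById("canvas-processor");
        canvas.width = img.width;   // size the canvas to the decoded image
        canvas.height = img.height;
        canvas.getContext("2d").drawImage(img, 0, 0);
    };
    img.src = 'data:image/jpeg;base64,' + window.btoa(binary);
};
req.send();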
First you need to create an img element (which can stay hidden).
Then you do exactly what you have done, except that you listen for the onload event on your img element.
When this event fires you can read the width and height of your picture, so you can set your canvas to the same size.
Then you can draw your image as you did in the last line.
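In code, the order described above looks roughly like this (canvas and dataUri stand in for the canvas element and the data URI from the question):
var img = new Image(); // never added to the DOM, so it stays hidden
img.onload = function () {
    // The image's width and height are only known once it has loaded.
    canvas.width = img.width;
    canvas.height = img.height;
    canvas.getContext("2d").drawImage(img, 0, 0);
};
img.src = dataUri; // assign src last, after the handler is attached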

Setting background image for QPushButton

I am struggling to set a background image for a QPushButton, with no success so far. Following is my code:
appsWidget::appsWidget(QWidget *parent)
    : QWidget(parent)
{
    QPushButton *button1 = new QPushButton("SETTINGS", this);
    QPushButton *button2 = new QPushButton("TEST", this);
    QPushButton *button3 = new QPushButton("IE", this);
    button1->setStyleSheet("background-image:url(config.png)"); // -> No success
    qDebug("appWidget initialized.");
    QHBoxLayout *layout = new QHBoxLayout;
    layout->addWidget(button1);
    layout->addWidget(button2);
    layout->addWidget(button3);
    this->setLayout(layout);
    connect(button1, SIGNAL(clicked()), this, SLOT(setClickIndex1()));
    connect(button2, SIGNAL(clicked()), this, SLOT(setClickIndex2()));
    connect(button3, SIGNAL(clicked()), this, SLOT(setClickIndex3()));
}
The image I am using in the stylesheet is located in the same project folder.
Does anybody have a solution?
You have to set the flat attribute to true:
button1->setFlat(true);
You also have to enable auto-fill background:
button1->setAutoFillBackground(true);
You may want to look at QToolButton, which doesn't require the button to be flat in order to render an image. I'm using them in an app I'm writing at the moment and they look very nice:
m_showAddCommentButton = new QToolButton();
m_showAddCommentButton->setAutoFillBackground(true);
palette = m_showAddCommentButton->palette();
palette.setColor(QPalette::Button,QColor(82,110,166));
m_showAddCommentButton->setPalette(palette);
m_showAddCommentButton->setIcon(QIcon(":/uiImages/addComment_50_50.jpg"));
m_showAddCommentButton->setIconSize(QSize(40,40));
m_showAddCommentButton->setToolTip("Comment");
connect(m_showAddCommentButton, SIGNAL(clicked()),
        manager, SLOT(showAddComment()));
hLayout->addWidget(m_showAddCommentButton,0);
(My image is stored as a resource)
Your CSS selector is not correct.
You should do something like:
button1->setStyleSheet("QPushButton{ background-image: url(config.png); }");
You can use a brush as a palette element to fill the background of any widget; for QPushButton this works when the button is flat.
QPixmap pixmap("image.jpg");
QPalette palette;
QPushButton *button= new QPushButton(this);
palette.setBrush(button->backgroundRole(), QBrush(pixmap));
button->setFlat(true);
button->setAutoFillBackground(true);
button->setPalette(palette);

Flex: forcing a resize when the image source update is completed

I update the source property of an image. When the image is loaded I want to redraw the border skin to fit the new size of the image.
newImgEdit.addEventListener(Event.COMPLETE, loadImgComplete);
newImgEdit.source = myurl_ressource;

private function loadImgComplete(evt:Event):void {
    trace("redraw !!");
    //invalidateDisplayList();
    this.setStyle("borderSkin", ShapeContainerBorderOn);
    var img:Image = evt.currentTarget as Image;
    img.removeEventListener(Event.COMPLETE, loadImgComplete);
}
The trace "redraw" seems to happen once the image is loaded but the border still does not get redrawn with the correct height and width.
Do I need to remove the listener or will it be garbage-collected later?
You can manually force a component to update its layout by calling validateNow().