I'm building a Node app, and I have a Three.js animation running fine.
Now I want to write a script that detects whether there is a WebGL context, but I can't figure out where or how to get my canvas's context.
Here is what I tried:
window.addEventListener("load", () => {
    let paragraph = document.getElementById("verifWebGL");
    let canvas = document.getElementById("renderCanvas");
    let glG = window.WebGLRenderingContext && (canvas.getContext('webgl') || canvas.getContext('experimental-webgl'));
    if (glG) {
        paragraph.textContent = "It happens right below.";
        lancer = true;
    }
    else {
        paragraph.textContent = "You won't be able to roll a die with this browser."
            + " Please try again with Google Chrome or Mozilla Firefox.";
        lancer = false;
    }
});
But it doesn't work.
I tried using only the canvas, but that doesn't work either.
And I know I have a context, because the animation is running.
But in the Chrome debugger scope, glG remains null.
Any ideas?
You can get the renderer's WebGL context like so:
const context = renderer.getContext();
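If it helps, here is a minimal sketch of your check rewritten around that call. It assumes the THREE.WebGLRenderer driving your animation is in scope as renderer when the load handler runs; lancer is the flag from your own snippet:
window.addEventListener("load", () => {
    let paragraph = document.getElementById("verifWebGL");
    // Ask Three.js for the context it is already drawing with.
    let gl = renderer.getContext();
    if (gl) {
        paragraph.textContent = "It happens right below.";
        lancer = true;
    } else {
        paragraph.textContent = "You won't be able to roll a die with this browser."
            + " Please try again with Google Chrome or Mozilla Firefox.";
        lancer = false;
    }
});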
I am having a tough time converting Lumia Imaging SDK 2.0 code to SDK 3.0 in the specific case below. I used to increase/decrease the image quality of a JPG file with the following code in Windows Phone 8.1 RT apps:
using (StreamImageSource source = new StreamImageSource(fileStream.AsStreamForRead()))
{
IFilterEffect effect = new FilterEffect(source);
using (JpegRenderer renderer = new JpegRenderer(effect))
{
renderer.Quality = App.COMPRESSION_RATIO / 100.0; // higher value means better quality
compressedImageBytes = await renderer.RenderAsync();
}
}
Now, since the FilterEffect class has been replaced in SDK 3.0 with EffectList(), I changed the code to:
using (BufferProviderImageSource source = new BufferProviderImageSource(fileStream.AsBufferProvider()))
{
using (JpegRenderer renderer = new JpegRenderer())
{
IImageProvider2 source1 = new EffectList() { Source = source };
renderer.Source = source1;
renderer.Quality = App.COMPRESSION_RATIO / 100.0;
try
{
var img = await renderer.RenderAsync();
}
catch (Exception ex)
{
;
}
}
}
I am getting an InvalidCastException. I have tried several combinations, but no luck.
I don't really know what is going on with the InvalidCastException, we can continue that discussion in the comments as it will most likely need some back-and-forth.
That said, you could continue without the effect list, and chain effects in the normal way. So to rewrite your scenario:
using (var source = new StreamImageSource(...))
using (var renderer = new JpegRenderer(source))
{
renderer.Quality = App.COMPRESSION_RATIO / 100.0;
var img = await renderer.RenderAsync();
}
If you wanted to add an effect (for example a CartoonEffect), just do:
using (var source = new StreamImageSource(...))
using (var cartoonEffect = new CartoonEffect(source))
using (var renderer = new JpegRenderer(cartoonEffect))
{
renderer.Quality = App.COMPRESSION_RATIO / 100.0;
var img = await renderer.RenderAsync();
}
and so on. If you had effects A, B, C and D, just make a chain: Source -> A -> B -> C -> D -> JpegRenderer.
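A hedged sketch of what such a chain could look like, reusing the CartoonEffect from above and a GrayscaleEffect as a stand-in second effect (the exact effect classes here are just examples; any SDK 3.0 effects chained the same way should behave alike):
using (var source = new StreamImageSource(...))
using (var grayscale = new GrayscaleEffect(source))   // effect A takes the source
using (var cartoon = new CartoonEffect(grayscale))    // effect B takes effect A
using (var renderer = new JpegRenderer(cartoon))      // the renderer takes the last effect
{
    renderer.Quality = App.COMPRESSION_RATIO / 100.0;
    var img = await renderer.RenderAsync();
}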
I am on the VS 2015 Community version. While digging around this, I got the code below working, and it behaves exactly like SDK 2.0. All I did was specify the Size of the JpegRenderer. It works for all landscape images but fails to transform portrait images to the correct orientation. There is no exception, but the result for a portrait image is a widely stretched landscape image.
I initialized the Size for portrait images to Size(765, 1024), but it had no impact.
using (JpegRenderer renderer = new JpegRenderer(source))
{
renderer.Quality = App.COMPRESSION_RATIO / 100.0;
try
{
var info = await source.GetInfoAsync();
renderer.Size = new Size(1024, 765);
compressedImageBytes = await renderer.RenderAsync();
}
catch (Exception ex)
{
new MessageDialog("Error while compressing.").ShowAsync();
}
}
I am sorry, the working code was using BufferProviderImageSource instead of StreamImageSource. Below is the snippet. A few points here:
1) If I don't use the Size property I get a "The component cannot be found" exception.
2) GetInfoAsync(): yes, it was useless in the code above, but I need it to know whether the image is landscape or portrait so that I can initialize the Size property of the resulting image.
3) If the Size property goes beyond 1024x1024 for portrait images I get the exception "Value does not fall within the expected range".
Why did Lumia make this version so tricky? :(
var stream = await FileIO.ReadBufferAsync(file);
using (var source = new BufferProviderImageSource(stream.AsBufferProvider()))
{
EffectList list = new EffectList() { Source = source };
using (JpegRenderer renderer = new JpegRenderer(list))
{
renderer.Quality = App.COMPRESSION_RATIO / 100.0;
renderer.OutputOption = OutputOption.PreserveAspectRatio;
try
{
var info = await source.GetInfoAsync();
double width = 0;
double height = 0;
if (info.ImageSize.Width > info.ImageSize.Height) //landscape
{
width = 1024;
height = 765;
if (info.ImageSize.Width < 1024)
width = info.ImageSize.Width;
if (info.ImageSize.Height < 765)
height = info.ImageSize.Height;
}
else //portrait..
{
width = 765;
height = 1024;
if (info.ImageSize.Width < 765)
width = info.ImageSize.Width;
if (info.ImageSize.Height < 1024)
height = info.ImageSize.Height;
}
renderer.Size = new Size(width, height);
compressedImageBytes = await renderer.RenderAsync();
}
catch (Exception ex)
{
new MessageDialog(ex.Message).ShowAsync();
}
}
}
I am using the WebCamTexture class to open the camera on a device. I want to switch the camera from front to back and vice versa during gameplay. Can anybody help?
WebCamDevice[] devices = WebCamTexture.devices;
WebCamTexture webCamTexture = new WebCamTexture(devices[1].name);
renderer.material.mainTexture = webCamTexture;
webCamTexture.filterMode = FilterMode.Trilinear;
webCamTexture.Play();
This opens my front camera. But I can't switch the camera.
I got the solution. You just need to set the device name. By default the back camera is "Camera 0" and the front camera is "Camera 1". First stop your camera, then change the device and start it again.
if (devices.Length > 1) {
webCamTexture.Stop();
if (frontCamera == true)
{
webCamTexture.deviceName = devices[0].name;
frontCamera = false;
}
else
{
webCamTexture.deviceName = devices[1].name;
frontCamera = true;
}
webCamTexture.Play ();
}
I have used the following code to switch between the front and back cameras of an iPhone device using a C# script:
WebCamDevice[] devices;
public WebCamTexture mCamera = null;
public GameObject plane; // this is the object where i am going to show the camera
// Use this for initialization
void Start ()
{
devices = WebCamTexture.devices;
plane = GameObject.FindWithTag ("Player");
mCamera = new WebCamTexture (Screen.width,Screen.height);
mCamera.deviceName = devices[0].name; // Front camera is at Index 1 and Back Camera is at Index 0
plane.renderer.material.mainTexture = mCamera;
mCamera.Play ();
}
The above code will show the back camera when the scene is loaded. To access the front camera, use the Front button defined below:
if (GUI.Button (new Rect (100, 1000, 120, 40), "Front"))
{
if(devices.Length>1)
{
mCamera.Stop();
mCamera.deviceName = (mCamera.deviceName == devices[0].name) ? devices[1].name : devices[0].name;
mCamera.Play();
}
}
This is a button; clicking it switches to the front camera, and clicking it again switches back to the back camera, alternating each time:
if (devices.Length > 1) {
webCamTexture.Stop();
if (frontCamera == true)
{
webCamTexture.deviceName = devices[0].name;
frontCamera = WebCamTexture.devices[0].isFrontFacing;
}
else
{
webCamTexture.deviceName = devices[1].name;
frontCamera = WebCamTexture.devices[1].isFrontFacing;
}
webCamTexture.Play ();
}
We've used Camera Capture Kit for a social image-sharing game/app to implement functionality similar to what you are describing. The other solutions presented here are not 100% solid, since some devices may list the front-facing and back-facing cameras in a different order.
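If you stay with plain WebCamTexture, a sketch that avoids relying on device order is to pick the device by its isFrontFacing flag (wantFront and webCamTexture are assumed to be the toggle flag and texture from the snippets above):
void SwitchCamera(bool wantFront)
{
    foreach (WebCamDevice device in WebCamTexture.devices)
    {
        // Pick the first device whose facing matches the request,
        // regardless of the order the platform lists devices in.
        if (device.isFrontFacing == wantFront)
        {
            webCamTexture.Stop();
            webCamTexture.deviceName = device.name;
            webCamTexture.Play();
            return;
        }
    }
}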
I'm trying to change a bitmap image in CreateJS, and I want to remove all children in a container when a reset button is clicked. But removeAllChildren is not working for me.
function drawPhoneImage() {
stage = new createjs.Stage('canvas');
container = new createjs.Container();
phone = new createjs.Bitmap(phoneImg);
phone.x = 268;
phone.y = 64;
stage.addChild(container);
container.addChild(phone);
stage.update();
phone.addEventListener("click", function() {
console.log('phone clicked');
createjs.Ticker.addEventListener("tick", movePhoneImage);
});
}
function movePhoneImage(event) {
phone.x -=10;
if(phone.x < 156) {
phone.x =156;
showPhoneSnap();
}
stage.update(event);
}
Then, after clicking the phone object, I need to replace it with another bitmap (which works):
function showPhoneSnap() {
snap = new createjs.Bitmap(snapImg);
snap.x = 156;
snap.y = 64;
container.removeAllChildren();
container.addChild(snap);
stage.update();
}
At first, removeAllChildren() works on the first child of the container, but when I try resetting the stage after adding another bitmap to the container, removeAllChildren() does not work.
function resetStage() {
container.removeAllChildren();
stage.update();
}
I'm having a hard time solving this issue; thanks to anyone who can help.
Make sure that "snapImg" is an image that is loaded.
snap = new createjs.Bitmap(snapImg);
The issue is that you are not updating the stage when the image is loaded.
var image = new Image();
image.src = "path";
image.onload = showPhoneSnap;
function showPhoneSnap() {
//This will ensure that the image is ready.
var bmp = new createjs.Bitmap(this);
...
stage.update();
}
Anybody know where I can find a table of browsers and whether or not they support CSS3 animations and keyframes? Thanks
Can I Use is the place for all of this sort of thing, regularly updated, and always accurate!
http://caniuse.com/css-animation
They were implemented on these dates:
Safari 4.0: 11/06/2008
Chrome 1.0: 02/09/2008
Firefox 5: 20/04/2011
IE 10: 09/2011
They became part of the spec in 2009: http://www.w3.org/TR/css3-animations/
For more info, checkout http://css3.bradshawenterprises.com/support/ and http://css3.bradshawenterprises.com/animations/
I'm going about it this way: instead of looking for the browser, I'm looking for the feature. This nifty write-up will save me some work, so I'm copying the code; you can figure out what it all means :-).
/* Check if the Animation feature exists */
if (hasAnimation())
{
alert("Has!");
}
function hasAnimation()
{
var elm = document.getElementById( 'imgDiv' ),
animationstring = 'animation',
keyframeprefix = '',
domPrefixes = 'Webkit Moz O ms Khtml'.split(' '),
pfx = '';
if( elm.style.animationName === undefined )
{
var animation = false;
for( var i = 0; i < domPrefixes.length; i++ )
{
if( elm.style[ domPrefixes[i] + 'AnimationName' ] !== undefined )
{
pfx = domPrefixes[ i ];
animationstring = pfx + 'Animation';
keyframeprefix = '-' + pfx.toLowerCase() + '-';
animation = true;
break;
}
}
if( animation === false ) // animate in JavaScript fallback
return false;
}
/* Create animationstring */
elm.style[ animationstring ] = 'rotate 1s linear infinite';
var keyframes = '@' + keyframeprefix + 'keyframes rotate { '+
'from {' + keyframeprefix + 'transform:rotate( 0deg ) }'+
'to {' + keyframeprefix + 'transform:rotate( 360deg ) }'+
'}';
/* Add rule to stylesheet */
if( document.styleSheets && document.styleSheets.length )
{
document.styleSheets[0].insertRule( keyframes, 0 );
return true;
}
/* If there is no stylesheet, add rule to header */
var s = document.createElement( 'style' );
s.innerHTML = keyframes;
document.getElementsByTagName( 'head' )[ 0 ].appendChild( s );
return true;
}
Update: I've rewritten the code for clarity. Also the 'elm' element wasn't defined. The original demo code is here.
EDIT: I apologize to everyone for recommending a W3Schools link, I will NEVER do it again.
W3Schools usually has these kinds of tables and information; check out this link.
It looks like as of now, the following browsers support CSS animations:
Firefox
Chrome
Safari
And, the ones left, which don't currently support it are:
IE
Opera
I'm using the iviewer plugin with a lightbox, and I have an issue centering my image every time it loads a new image.
I know that there is a pre-built center() method; I just don't understand how and where to call it.
You can find the function I'm using below. It is called when I click on an element and opens a box div (#iviewer), in which I would like my image centered. I also apply a zoom percentage at the beginning, so my image doesn't fit the box (var viewer).
function open(src, id) {
var firstZoom = true;
$("#iviewer").fadeIn().trigger('fadein');
var viewer = $("#iviewer .viewer").
width(920).
height(560).
iviewer({
src : src,
ui_disabled : true,
zoom : '50%',
initCallback : function() {
var self = this;
},
onZoom : function() {
if (!firstZoom) return;
$("#iviewer .loader").fadeOut();
$("#iviewer .viewer").fadeIn();
firstZoom = true;
}
}
);
//load new pic
viewer.iviewer('loadImage', src);
}
Thanks for the help.
The "onFinishLoad" callback hook in the initialization worked for me:
onFinishLoad: function(ev, src){ viewer.iviewer('center')}
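For context, a sketch of where that hook could sit in the initialization from the question (the same #iviewer markup and viewer variable are assumed):
var viewer = $("#iviewer .viewer").
    width(920).
    height(560).
    iviewer({
        src : src,
        ui_disabled : true,
        zoom : '50%',
        // center the image each time a new one has finished loading
        onFinishLoad : function(ev, src) { viewer.iviewer('center'); }
    });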