I'm experimenting with the AirConsole platform and have created a simple placeholder screen and controller. The /screen.html and /controller.html files display fine in my browser at myIPAddress:3000.
I can also access them on a mobile device connected to the same network at myIPAddress:3000.
When I attempt to use the AirConsole simulator with my IP (i.e. http://www.airconsole.com/simulator/#http://myIP:3000/), the controllers connect and display my placeholder content, but the screen continues to show the Loading... screen indefinitely.
Same thing happens using the local test - http://www.airconsole.com/?http=1#http://myIP:3000/. I open that URL in my browser, then open the Airconsole mobile app and enter the game code. It detects a connection and my phone displays the placeholder. The screen continues showing the Loading... icon.
Both pages have the airconsole.js file included.
Has anyone encountered this? If so, how were you able to resolve it? Any tips for debugging it further are also welcome.
You have to instantiate the AirConsole object like:
var airconsole = new AirConsole();
This internally notifies the parent frame, which then removes the loading frame and displays your screen.html. In other words, it tells AirConsole that you want to start showing your game.
Note: until then, no communication with the still-loading device (the screen) works either.
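For reference, a minimal screen.html script could look something like the sketch below once airconsole.js is included; the handler bodies are just placeholders:
var airconsole = new AirConsole();

// Optional: react to controllers joining and to their messages.
airconsole.onConnect = function (device_id) {
  // A controller connected; by this point the Loading... overlay is gone.
  console.log("Device connected: " + device_id);
};

airconsole.onMessage = function (device_id, data) {
  // Handle input sent from controller.html via airconsole.message()/broadcast().
  console.log("Message from " + device_id, data);
};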
Environment
Browser: Chrome 87.0.4280.141
OS: Windows 10 Home
Zoom Version: 5.4.6(59296.1207)
I have a website that can access the user's camera and take a short video on request. I am attempting to achieve this using the MediaDevices Web API.
This all works fine except in two scenarios. When I am in a Zoom or MS Teams meeting on my Windows laptop (with the camera on), my web app fails to capture video. If I use the web clients for Zoom or MS Teams instead, it works as expected. It also works fine if I use macOS instead of my Windows laptop.
When I debug this, the following error is thrown when trying to access getUserMedia.
DOMException: Could not start video source
The code that I am using to call getUserMedia is the following:
return await navigator.mediaDevices.getUserMedia({video: true});
Is there anything I can do to allow me to use my webcam in the browser as well as in the MS Teams or Zoom clients?
No, sorry to say.
The native videoconference client programs attach to your webcam, and so does the browser when you call getUserMedia. The first process to attach wins.
A second webcam may solve your problem by letting your videoconference program use one and your browser use the other. If you use Chrome, pick the camera you want the browser to use from the pulldown menu on this page:
chrome://settings/content/camera
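If you do add a second webcam, a rough sketch of picking it explicitly from JavaScript (instead of relying on the browser default) is below; which device index you choose is up to you, and cams[1] here is purely illustrative:
navigator.mediaDevices.enumerateDevices()
  .then(function (devices) {
    var cams = devices.filter(function (d) { return d.kind === "videoinput"; });
    // Pick the camera that is not held by the conferencing client.
    var spare = cams[1] || cams[0];
    return navigator.mediaDevices.getUserMedia({
      video: { deviceId: { exact: spare.deviceId } }
    });
  })
  .then(function (stream) {
    // Use the stream as before.
  });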
I have a web app that uses ThreeJS. I am currently trying to include WebVR to be used with Gear VR.
I am aware that I need to link to that web app using the ovrweb protocol in order to open it in Gear VR. My problem is that it does not open.
Whenever I use window.location.href = "ovrweb:http://my-app-url", I am asked to attach the device to Gear VR. But once I do so, the screen remains black. I noticed that the same thing happens whenever I use some non-VR webpage as the URL (like ovrweb:https://www.google.com).
However the ovrweb protocol works fine as expected with certain URLs - such as ovrweb:https://playcanv.as/p/VNTAx5Eu/.
I am not sure what I am missing. My app has a VR button; clicking it fires the display.requestPresent API call and the screen splits into two (this works in Chrome Canary). Is there any list of requirements that my app needs to satisfy to be recognized via the ovrweb protocol? If so, what are they?
I went through the Oculus docs, but did not find anything that could help me. How do I make my app run via ovrweb protocol?
Update: I found that ThreeJS example links (such as https://threejs.org/examples/webvr_rollercoaster.html) are not working over ovrweb protocol either.
Okay, found the solution myself.
Whenever we try to use the ovrweb protocol, the device will try to open the provided URL in its Carmel Developer Preview browser (different from the "internet browser" one can use inside the Oculus Home experience).
Now, the Carmel Developer Preview supports only "3D" websites; navigating to 2D websites is not currently supported in it (https://developer.oculus.com/documentation/vrweb/latest/concepts/carmel-launching-content/). That's why my own web app, as well as links such as google.com, appeared black.
So all that I had to do was simply trigger display.requestPresent at the very start - thereby differentiating it from a 2D website.
Now, display.requestPresent does not work without some kind of user interaction (such as a button click). The same is true of other JS APIs (such as fullscreen), for security reasons.
However, it seems that navigating to a link over the ovrweb protocol and thereby viewing it in the Carmel Developer Preview also satisfies the user-interaction condition. Hence my VR scene is now perfectly visible in Gear VR.
This solution should also work in the ThreeJS WebVR examples (including the rollercoaster example linked in the OP).
All one needs to do is trigger this snippet on page load.
navigator.getVRDisplays().then( function ( displays ) {
    // display is the headset's VRDisplay; canvas is the WebGL canvas
    // (in ThreeJS, renderer.domElement).
    var display = displays[ 0 ];
    display.requestPresent( [ { source: canvas } ] )
        .then( function () {
            // Presenting to WebVR display
        }, function ( e ) {
            // On error
        } );
} );
I have written file picker code in my project. When I run the project on my Windows Phone by clicking the Device button in Visual Studio, the app runs fine (it opens the Pictures library and I can select a photo and preview it).
But when I disconnect the USB cable and then open the app on the phone itself, and open the Pictures library with a button click, the Pictures library opens briefly and then the app crashes immediately (my app closes).
Can anyone please help me with this?
As written in the blog post, the AndContinue methods run in a different process, and to do so the currently running app goes into the background or sometimes even gets closed. That is what you are experiencing in your app, as far as I can tell, though I'm not sure why different things happen between debug and deploy.
There must be a callback inside app.xaml.cs specifically to handle the case where the calling application (which was sent into the background) comes back to the foreground. Read this blog post carefully and you'll understand what you need to change in your code:
using-the-andcontinue-methods-in-windows-phone
http://blogs.msdn.com/b/wsdevsol/archive/2014/05/08/using-the-andcontinue-methods-in-windows-phone-silverlight-8-1-apps.aspx
I just dealt with this issue, and one of the reasons for the differences between debug and deploy is the suspending event.
During debug, the application does not actually get suspended until you manually trigger it through Lifecycle Events. This means that when you pick a file while debugging and the app is put into the background to load the file picker, it is not actually suspended, whereas when the app is deployed, it does get suspended.
Look into the app_resuming and OnSuspending methods in your app.xaml.cs; they may be causing errors that do not occur during debug.
I am trying to build a custom Chromecast sender/receiver application, but I can't seem to connect to the device from my custom sender or even Chrome once my custom sender page is loaded.
The Chromecast device appears to be functioning properly (I can cast tabs and YouTube videos). However, when I load the custom sender, it seems to break Chrome's connection to the device: the Chromecast icon in Chrome shows "No Cast devices found".
I have found if I comment out chrome.cast.initialize, I can see the Chromecast device again. There are no errors reported in the Chrome debug console and I've commented out all of my code that is called from event handlers related to that call and I still have the same problem. I've also tried resetting the Chromecast device to factory. I've tried a few of the network tweaks recommended in a few other posts as well (though I get the impression they couldn't connect to the device at all).
This was working perfectly yesterday, then it mysteriously stopped. That seems to point to something I might have done, but the only thing I changed was in the receiver app and since I can't get that to start, I don't think it is that.
I also got this error before and solved it by closing and then re-opening the Chrome browser. If you are 100% sure that your JS code works as expected when calling chrome.cast.initialize (no error shows in the console, but you get a "no ChromeCast extension found" error), then simply close your Chrome browser and re-open it. If, after calling chrome.cast.initialize, you see the line "ChromeCast extension found: ..." in your console, then you should be able to see your Chromecast device when clicking the ChromeCast extension icon. If re-opening the browser does not work, clear your browser cache and try again.
Hope it helps.
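In case it helps with debugging, a minimal sender initialization with the (pre-CAF) Cast sender API looks roughly like the sketch below; YOUR_APP_ID and the listener bodies are placeholders. If the receiverListener never reports AVAILABLE, discovery itself is failing rather than anything in your receiver app:
var sessionRequest = new chrome.cast.SessionRequest('YOUR_APP_ID');
var apiConfig = new chrome.cast.ApiConfig(
  sessionRequest,
  function sessionListener(session) {
    // Called when an existing session is joined.
  },
  function receiverListener(availability) {
    // Logs AVAILABLE once a Cast device is discovered on the network.
    console.log('Receiver availability: ' + availability);
  }
);
chrome.cast.initialize(apiConfig, function () {
  console.log('chrome.cast.initialize succeeded');
}, function (err) {
  console.log('chrome.cast.initialize error', err);
});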
I have my app working with some very basic receiver HTML; however, I would now like to do more on the receiver end. Is there a way for me to debug what is happening on the Chromecast side? At this point I'm not even sure whether my web page is getting refreshed each time.
Open your Chrome browser on port 9222 of your ChromeCast device: http://192.168.0.x:9222
By default the Console tab will just show the current app's output, but if you jump from one app to another or your receiver app closes for some reason, you won't know why. To fix that, click the settings icon (lower right corner) and enable "Preserve log upon navigation".
There appears to be a bug on the Chromecast device where it caches older versions of the receiver. Just restart the device to force it to download the latest version.
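To confirm whether the receiver page is actually being (re)loaded, a couple of console.log calls are enough, since they show up in the remote console described above. A rough sketch of a receiver page using the v2 receiver SDK follows (the gstatic include URL is the one commonly used for that SDK):
<script src="//www.gstatic.com/cast/sdk/libs/receiver/2.0.0/cast_receiver.js"></script>
<script>
  console.log('Receiver page loaded at ' + new Date().toISOString());
  var manager = cast.receiver.CastReceiverManager.getInstance();
  manager.onReady = function () {
    console.log('CastReceiverManager ready');
  };
  manager.start();
</script>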