I have been working on UI testing lately, using Espresso and UI Automator. With UI Automator Viewer I am able to get the hierarchy and the ID of a UI component on the Nexus tablet home screen. However, whenever I take a device screenshot of the Xperia tablet using UI Automator Viewer, I am not able to get the UI component correctly; there is some issue, as seen in the image below. Is there any likely reason why this is happening? Is it because Google's UI Automator works best on Google devices?
The uiautomatorviewer screenshot above is from the Google Nexus, where I clearly get the Apps button ID when I move the cursor over it.
This one is from the Xperia tablet, where you can see the cursor pointing at an empty area while still returning a hierarchy node (ImageButton) and an ID (it seems to be some kind of coordinate mismatch).
I see many samples and videos on how to use Firebase on mobile, and they call this "multiplatform". However, I don't see much on desktop. There is one video on Firebase with Flutter on Windows that takes a web-based approach, and it seems to work, but I do not see any tutorials covering both mobile and desktop. Firebase would be a great example for syncing between desktop and mobile; we have such an app in development right now. Desktop development is new, but I'm surprised how little there is.
There is a library called firebase_dart, but the documentation seems weak.
The package firedart, together with the video linked below, works on both desktop (Linux) and Android without much modification.
What needed to be modified?
- I had difficulty with the button at the very top of the phone, so I added a SizedBox.
- I had difficulty with debugPrint and print, so I added a Text widget showing the results (converted to a string). That also worked.
Although I would prefer not to use fluent_ui, it does work for both desktop and mobile. I'm not sure what to do about the Realtime Database, but I think I can make firedart work for user sync between mobile and desktop.
It would be better if I could get firebase_dart to work.
https://www.youtube.com/watch?v=Tw7L2NkhwPc
Google Play Console wants me to add some 7-inch tablet screenshots in order to make the app easily accessible to tablets, but I don't have a tablet. I don't understand what the problem is. I made screenshots in the final phase of development while testing the game in the Unity editor, and I suppose these .png images are as good as if they were taken on a phone. I don't even know how to take screenshots on a phone.
Is there a hidden feature that identifies a screenshot as being taken on a 7-inch tablet? Why don't they simply state the image resolution they want for the tablet "screenshots"?
Actually, it doesn't matter what resolution your image is. Just upload the images to that section; its only purpose is to let your users have a first look at what your app looks like before they install it. If you don't intend to make your app for tablets, just leave that section empty and upload images only to the phone section.
If you are using an Android phone such as a Samsung, I would recommend having a look at Settings > Advanced Features > Smart Capture (turn it on). Then you can swipe your screen to take a screenshot (there is a tutorial there).
I have a web app that uses ThreeJS. I am currently trying to include WebVR to be used with Gear VR.
I am aware that I need to link to that web app using the ovrweb protocol in order to open it in Gear VR. My problem is that it does not open.
Whenever I use window.location.href = "ovrweb:http://my-app-url", I am asked to attach the device to Gear VR. But once I do so, the screen remains black. I noticed that the same thing happens whenever I use some non-VR webpage as the URL (like ovrweb:https://www.google.com).
However, the ovrweb protocol works as expected with certain URLs, such as ovrweb:https://playcanv.as/p/VNTAx5Eu/.
I am not sure what I am missing. My app has a VR button; clicking it fires the display.requestPresent API call and the screen splits into two (this works in Chrome Canary). Is there any list of requirements that my app needs to satisfy to be recognized via the ovrweb protocol? If so, what are they?
I went through the Oculus docs but did not find anything that could help me. How do I make my app run via the ovrweb protocol?
Update: I found that the ThreeJS example links (such as https://threejs.org/examples/webvr_rollercoaster.html) are not working over the ovrweb protocol either.
Okay, found the solution myself.
Whenever we try to use the ovrweb protocol, the device will try to open the URL provided in its Carmel Developer Preview browser (different from the "internet browser" one can use inside the Oculus Home experience).
The Carmel Developer Preview supports only "3D" websites; navigating to 2D websites is not currently supported (https://developer.oculus.com/documentation/vrweb/latest/concepts/carmel-launching-content/). That is why my own web app, as well as links such as google.com, appeared black.
So all I had to do was trigger display.requestPresent at the very start, thereby differentiating my app from a 2D website.
Now, display.requestPresent does not work without some kind of user interaction (such as the click of a button), and the same is true of other JS APIs (such as fullscreen) for security reasons.
However, it seems that navigating to a link over the ovrweb protocol, and thereby viewing it in the Carmel Developer Preview, also satisfies the user-interaction requirement. Hence my VR scene is now perfectly visible in Gear VR.
This solution should also work for the ThreeJS WebVR examples (including the rollercoaster example linked in the question above).
All one needs to do is trigger this snippet on page load:
display.requestPresent( [ { source: canvas } ] )
    .then( function () {
        // presenting to the WebVR display
    }, function ( e ) {
        // on error
    } );
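For completeness, here is a minimal sketch of wiring this up on page load with the WebVR 1.1 API. It assumes a canvas element that the ThreeJS renderer draws into; the function name and the load hook are placeholders of my own, not part of any SDK.

// Minimal sketch, assuming a WebVR 1.1 browser and an existing
// <canvas> that the ThreeJS renderer draws into.
function presentOnLoad( canvas ) {
    if ( !navigator.getVRDisplays ) return; // no WebVR support

    navigator.getVRDisplays().then( function ( displays ) {
        if ( displays.length === 0 ) return; // no VR display available
        var display = displays[ 0 ];

        // Inside the Carmel Developer Preview the ovrweb navigation
        // itself seems to satisfy the user-gesture requirement, so
        // this can run directly on load.
        display.requestPresent( [ { source: canvas } ] ).then( function () {
            // presenting to the WebVR display
        }, function ( e ) {
            console.error( 'requestPresent failed:', e );
        } );
    } );
}

window.addEventListener( 'load', function () {
    presentOnLoad( document.querySelector( 'canvas' ) );
} );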
I'm using the latest Google VR SDK and Unity 5.6. I've got a Daydream headset/controller and I'm trying to develop a game. I've been stuck on a problem for a while now which, in a typical Unity environment, I should be able to solve very quickly; but because I'm forced to build and run the code on the device each time I want to test, I'm unable to view the console or see any errors/warnings that are thrown.
Any idea how I can debug using Unity, or even emulate the Daydream controller/headset within Unity? I've seen that a controller emulator exists, but it appears you still have to run on the device; in that scenario you need two phones, one acting as the controller and the other as the 'screen'.
Thanks
Check out this asset Log Viewer.
Using this asset you can see the same log console in the Unity editor at runtime. I don't know about Daydream, but it worked for me on Android, Oculus, and Gear VR.
I have a Kendo UI Mobile app which I created with the VS Icenium extension. My question is whether it is possible to open remote PDF documents with the native PDF viewer installed on the mobile device.
Currently, I'm using this code to open a document:
if (device.platform === "Android") {
    // open the URL in a new browser window
    window.open(data.uri, '_blank');
} else {
    // hand the URL to the system browser (iOS)
    window.open(data.uri, '_system');
}
This opens the browser and downloads the file, which is retrieved from a Web API/.NET MVC web service. On iOS it works quite well, but on Android it opens the browser and starts the file download in the background; the document is not opened in the browser, and it is necessary to pull down the notification drawer in order to open the file.
But as I said, it would be great if I could check whether a PDF viewer is installed on iOS/Android and ask the user to open the document with it, otherwise falling back to opening the file in the browser using the code above.
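To illustrate, here is a rough sketch of the flow I have in mind. The hasPdfViewer helper is hypothetical; I have not found a Cordova API that exposes such a check, so it is stubbed out here purely to show the branching:

// Hypothetical helper: Cordova has no built-in API for this check,
// so it would need a plugin; stubbed here only to illustrate the flow.
function hasPdfViewer() {
    return false; // placeholder
}

function openDocument(uri) {
    if (hasPdfViewer()) {
        // hand the URL to the OS so the native PDF viewer can open it
        window.open(uri, '_system');
    } else {
        // otherwise keep the current per-platform behaviour
        window.open(uri, device.platform === "Android" ? '_blank' : '_system');
    }
}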
Searching the internet, the only option I was able to find was pdf.js (which I wasn't able to integrate into my app).
I'm an absolute beginner with HTML5, jQuery, Kendo UI Mobile, etc., and it would be great if you could help me out!
Thank you in advance!
Regards,
Martin