Physijs does not work on localhost (online examples work fine)

As the title says, when I run a Physijs example from the GitHub repo locally, it shows only the background and the FPS counter, but no Physijs functionality at all (pure three.js works fine). When I run the online version at http://chandlerprall.github.io/Physijs/examples/vehicle.html, everything runs fine. I have no idea where to start looking or where the problem is. Any ideas what the cause could be?

Physijs uses a web worker to run the physics simulation, and web workers cannot be loaded from the local file system: constructing one loads an additional script through JavaScript, which the same-origin policy forbids for file:// URLs in some browsers. Whether it works depends on your browser; on my Mac, Safari allows it, but Chrome throws an error:
Uncaught SecurityError: Failed to construct 'Worker': Script at 'file://physijs_worker.js' cannot be accessed from origin 'null'.
The worker is required to run Physijs, so you should serve the files from a local web server such as MAMP to test on your local machine.
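If you have Node.js installed, a few lines are enough to serve the example directory over HTTP. This is a minimal sketch of such a static server (any alternative, MAMP included, does the same job; the port and file layout here are placeholder assumptions):

const http = require('http');
const fs = require('fs');
const path = require('path');

// Map the few extensions a Physijs example needs to MIME types.
const types = { '.html': 'text/html', '.js': 'application/javascript', '.json': 'application/json' };

http.createServer((req, res) => {
  // Serve files relative to this script; default to index.html.
  const file = path.join(__dirname, req.url === '/' ? 'index.html' : req.url);
  fs.readFile(file, (err, data) => {
    if (err) { res.writeHead(404); return res.end('Not found'); }
    res.writeHead(200, { 'Content-Type': types[path.extname(file)] || 'application/octet-stream' });
    res.end(data);
  });
}).listen(8000, () => console.log('Serving on http://localhost:8000'));

Start it from the example directory with node server.js and open http://localhost:8000 — the worker then loads from the same HTTP origin and the SecurityError disappears.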

Related

Ajax error 401 in production, but works local

I have a .NET Core application that started to present a few problems lately. It was working just fine, but recently my Ajax calls have started throwing a 401 error.
That only happens on the production server; running on localhost everything works just fine. Also, this appears to happen randomly, so the same Ajax call will sometimes throw this error and sometimes it won't.
After digging a lot I noticed a few differences between the headers of those calls when they run locally and when they run on the production server, but I don't know exactly how to interpret and resolve them.
Could you help me? None of those calls go to an external API/resource; they all call the page the user is currently on in the app itself.
I'll add a screenshot of the console showing the difference between the headers. The one on the left is running locally, and I used exactly the same data in both tests.
The production server is running IIS 10, if that's relevant.

Cypress tests against devtools port only

We use some third party enterprise software ("Container App"), which has an embedded Chromium browser, in which our webapp runs.
We use Cypress to test our webapp in a stand-alone browser outside of this container; however, we would like to be able to test it inside, as it interacts with the container in various ways through JavaScript.
The only thing the container exposes is a "remote devtools-url" to the target (our) browser, which can be pasted into a native browser outside of the container and then debugged in DevTools.
The container provides two different URLs for the above debugging purposes; both work, and seemingly identically. They are something like the following (not precise, unfortunately I am not at work atm):
devtools://...inspector.html?id=xxx
http://ip/...inspector.html?id=xxx
Is it possible to set up Cypress to test "as normal", only having access to this remote devtools URL/port?
The target browser inside the container cannot be started by Cypress, as only the container can start and close it. So the target browser will already be running (with a --remote-debugging-port). I can get the devtools-id dynamically through a call to /json/list.
If not possible, any other way to achieve the goal of testing the browser/app running inside the container?
It is not possible. Testing a web page with Cypress in the embedded Chromium running inside your application would mean Cypress has to connect to an already running browser, and Cypress has no such capability.
The documentation states:
When you run tests in Cypress, we launch a browser for you. This enables us to:
Create a clean, pristine testing environment.
Access the privileged browser APIs for automation.
There is a request in the Cypress issue tracker to add an option to connect to an already running browser, but there has been no response from the Cypress developers.
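For reference, the /json/list endpoint mentioned in the question can be queried programmatically. This is a sketch assuming the container exposes the debugging port as localhost:9222 (adjust host and port to whatever your container actually provides):

const http = require('http');

// List the debuggable targets exposed by a Chromium started with
// --remote-debugging-port; each entry carries an id, the page URL,
// and a webSocketDebuggerUrl for the DevTools protocol.
http.get('http://localhost:9222/json/list', (res) => {
  let body = '';
  res.on('data', (chunk) => (body += chunk));
  res.on('end', () => {
    for (const target of JSON.parse(body)) {
      console.log(target.id, target.url, target.webSocketDebuggerUrl);
    }
  });
});

While Cypress cannot attach this way, tools built on the DevTools protocol (for example Puppeteer's connect(), or chrome-remote-interface) can drive the page through that webSocketDebuggerUrl, which may be a workable alternative for in-container automation.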

Running selenium webdriver test in Remote Desktop Connection is taking very long time

I am running a Selenium WebDriver test on a Remote Desktop machine using a Maven command. The test takes a very long time to load the URL and log into the site, whereas when I run the same test locally, both the URL loading and the user login are very quick. Can someone please tell me what the reason for that slowness might be?
In my experience, using a remote VM as a UI test host has always been slower than a local environment, mainly because dedicated VMs lack a GPU and have to render the requested browser(s) on the CPU. If you open your remote machine's monitoring tool, you will most likely see a lot of CPU spikes when the browser launches.
To improve performance, you can use headless execution (HtmlUnitDriver, PhantomJS) or block certain content from loading, such as images, animations, and videos. When you do this, though, try to keep their placeholders so the page layout stays intact.
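As a sketch of the second suggestion, this is how image loading can be blocked with selenium-webdriver for Node.js; the preference name and flags are Chrome-specific, and the URL is a placeholder:

const { Builder } = require('selenium-webdriver');
const chrome = require('selenium-webdriver/chrome');

// Run Chrome headless and block image downloads entirely
// (content-settings value 2 means "block" in Chrome preferences).
const options = new chrome.Options()
  .addArguments('--headless')
  .setUserPreferences({ 'profile.managed_default_content_settings.images': 2 });

(async () => {
  const driver = await new Builder()
    .forBrowser('chrome')
    .setChromeOptions(options)
    .build();
  await driver.get('https://example.com'); // placeholder URL
  console.log(await driver.getTitle());
  await driver.quit();
})();

Headless mode avoids the CPU-bound rendering described above, and blocking images cuts both bandwidth and layout work on a GPU-less VM.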

Selenium setup on Windows for Chrome and IE

As far as I understand, there are two possible ways of setting up a Selenium server (just a node) on Windows:
As a windows service
Using the task scheduler to start the server running within a local user account
However, when using the service approach (where no desktop is available), Internet Explorer cannot be used.
Therefore, I created a local user account and a scheduled task that starts the Selenium server at startup under that account (using the selenium-standalone package: selenium-standalone start --drivers.ie.arch=ia32).
Unfortunately, I ran into the "Session 0" problem, which requires a real login for the local user account. Otherwise, I would receive a timeout error for Chrome, black screenshots for IE, and a maximum resolution of 1024x768...
However, with an active user session, I still get the timeout error for IE (Chrome works). The browser makes the initial GET request (retrieving the login page) but gets stuck after that (the next step would be to fill in the form with credentials using Protractor).
I read about Headless Selenium for Windows, which provides a connecting layer between the driver and the GUI. However, I do not know whether it would help or how to integrate it into the selenium-standalone package.
So, my question is: what is the missing piece in this setup?
I would suggest you move away from Session 0, as Chrome itself is trying to move away from Session 0 in the near future.
You can find further references here (comment 21 in the link below, but the whole thread is a good read on this subject): https://bugs.chromium.org/p/chromium/issues/detail?id=615396#c21
You could try the following setup for Chrome for now; however, there is no guarantee it will keep working when Chrome is started in Session 0.
var chromeOptions = new ChromeOptions();
chromeOptions.AddArguments("--test-type");          // suppresses the "unsupported command-line flag" banner
chromeOptions.AddArguments("--disable-extensions"); // keeps extensions from interfering with the session
chromeOptions.AddArguments("--no-sandbox");         // commonly needed when Chrome runs without a desktop session
var driver = new ChromeDriver(chromeOptions);
I had the same issues with Microsoft's Test Agent, and moving the agent from a Windows service to a process solved all the issues and headaches I had.
As stated above, there are two ways to accomplish the setup. However, only by using a scheduled task was I able to work around the Session 0 issue (as @Cosmin stated). Using NSSM and FireDaemon Pro was a dead end.
I reconfigured the server to automatically log in the local user account and changed the scheduled task to run if and only if this user is logged in (starting Selenium). So, after the server starts, the user gets logged in, which triggers the task scheduler (at this point a simple startup script should work, too) to start Selenium.
As for the screen resolution problem: the VM setup uses Hyper-V, where the default resolution is 1024x768. This can easily be changed (up to the maximum resolution the display adapter provides), in my case to 1600x1200.
PS: Headless Selenium for Windows did not work either (it cannot be used with Protractor). However, it is no longer necessary; IE works with the setup above.

Proxy could not connect to the destination in time

I have been having issues with loading pages from the website I created. The strange thing is that after I reload a page (e.g. Ctrl+R), especially if I do it multiple times in a row, sometimes the page loads flawlessly fast, sometimes it takes 10-20 seconds, and sometimes it doesn't load at all: based on the Network tab in the developer panel, some files stay 'pending' and never load. I then get a "proxy could not connect to the destination in time" error if the very first index.php doesn't load, or the page loads only partially and I get an error in the console: "could not connect to [filename]".
A few facts to keep in mind:
This issue occurs on all browsers I tested (Chrome, Firefox, IE, and Opera)
I am not focusing on loading the stylesheets or script files asynchronously at this point. As a result, my pages do not render until all files are loaded, but at this stage of development this is not a top priority
I'm using GoDaddy shared hosting for this website
I am accessing the website through a corporate proxy server (I believe they use McAffee Enterprise)
If I test the loading speed using external tools (tools.pingdom.com or Google's developer tools), there appears to be no issue with the loading speed
Even under the corporate proxy server, I have never had this issue with any other websites
My question is whether anybody else has had this issue and knows how to mitigate it. If it's the proxy, I'm not sure why other sites work just fine. Any thoughts?
