Protractor tests fail when run in headless Chrome - jasmine

My Protractor tests run fine in the Chrome browser, but fail when I run them in headless mode.
Protractor config:
directConnect: false,
capabilities: {
  browserName: "chrome",
  chromeOptions: {
    args: ["--headless", "--disable-gpu", "--window-size=1920,1080", "--no-sandbox"]
  }
},
Chrome version: 90.0.4430.85
ChromeDriver: chromedriver_90.0.4430.24
My app runs on an insecure URL (i.e. I have to accept a security exception every time I run the tests).
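Since the question mentions an insecure URL, one common workaround (my assumption, not confirmed by the poster) is that headless Chrome cannot click through the certificate warning interstitial, so the browser has to be told up front to accept insecure certificates:

```javascript
// Hedged sketch: the same capabilities as above, plus acceptInsecureCerts.
// acceptInsecureCerts is the standard W3C capability; the
// --ignore-certificate-errors flag is a belt-and-braces addition.
capabilities: {
  browserName: "chrome",
  acceptInsecureCerts: true,
  chromeOptions: {
    args: [
      "--headless",
      "--disable-gpu",
      "--window-size=1920,1080",
      "--no-sandbox",
      "--ignore-certificate-errors"
    ]
  }
},
```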

Related

heroku knex migration/seed error: no pg_hba.conf entry for host ssl false

Trying to use the Heroku console to run knex migrations and seeds. Everything works in the development environment, but it fails on Heroku with an SSL error, and I don't know how to solve it without paying for a higher database tier.
Because node-postgres enables SSL certificate validation by default, while free Heroku hosting doesn't provide verified certificates, you need to turn validation off. To disable SSL verification on Heroku:
CLI solution:
heroku config:set PGSSLMODE=no-verify --app <app name>
Sources:
https://dpletzke.medium.com/configuring-free-heroku-node-postgresql-hosting-with-knex-b0e97a05c6af
https://help.heroku.com/DR0TTWWD/seeing-fatal-no-pg_hba-conf-entry-errors-in-postgres
Resolved with:
{
  production: {
    client: 'pg',
    connection: {
      connectionString: process.env.DATABASE_URL,
      ssl: {
        rejectUnauthorized: false,
      },
    },
  },
}
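Both fixes do the same thing: TLS stays on, but certificate-chain verification is skipped. For reference, the equivalent when talking to node-postgres directly (a sketch assuming the `pg` package; not part of the original answer) would be:

```javascript
// Sketch: same effect as PGSSLMODE=no-verify, expressed in pg's own options.
const { Pool } = require('pg');

const pool = new Pool({
  connectionString: process.env.DATABASE_URL,
  ssl: { rejectUnauthorized: false }, // TLS on, cert verification off
});
```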

Rollup proxy configuration causes the dev server to short-circuit

In my rollup.config.js file, I have added the following lines to proxy requests destined for the dev server through to the backend:
!production && localdev({
  dirs: ['public'],
  host: 'localhost',
  port: 9876,
  proxy: [{ from: '/api/*', to: 'localhost:8080' }],
}),
but this causes the server to short-circuit, printing the output below and exiting:
yarn run v1.22.11
warning package.json: No license field
$ rollup -c -w
rollup v2.58.0
bundles src/main.ts → public\build\bundle.js...
LiveReload enabled
[2021-10-13 12:03:20] ⚡️dev-server proxying from /api/* to localhost:8080
Done in 3.63s.
It should keep running until it is explicitly interrupted, though.
Change localhost:8080 to http://localhost:8080. Unfortunately the plugin fails silently instead of reporting the error.
See https://github.com/sveltejs/svelte/issues/3717
and https://github.com/pearofducks/rollup-plugin-dev
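The silent failure makes more sense if the proxy target is run through WHATWG URL parsing: without a scheme, "localhost" is read as the *protocol*, leaving no host to forward to. A quick Node illustration (my own sketch of the underlying parsing, not the plugin's actual code):

```javascript
// Without a scheme, the URL parser treats "localhost" as a protocol.
const withoutScheme = new URL('localhost:8080');
console.log(withoutScheme.protocol); // 'localhost:'
console.log(withoutScheme.hostname); // '' (no host at all)

// With an explicit scheme, host and port parse as intended.
const withScheme = new URL('http://localhost:8080');
console.log(withScheme.hostname); // 'localhost'
console.log(withScheme.port);     // '8080'
```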

Docker on Windows 10 networking issue

I'm using Docker on a Windows 10 laptop. I recently tried to get some code running in a container to connect to another server on the network. I ended up making an Ubuntu container and found the issue is an IP conflict between the Docker network and the server resource (172.17.1.3).
There appears to be an additional layer of networking in the Windows Docker setup that isn't present on Unix systems, and the Docker docs' advice to "simply use a bridge network" doesn't resolve this issue.
docker network inspect bridge
[
    {
        "Name": "bridge",
        "Id": "d60dd1169153e8299a7039e798d9c313f860e33af1b604d05566da0396e5db19",
        "Created": "2020-02-28T15:24:32.531675705Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.17.0.0/16",
                    "Gateway": "172.17.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        },
        "Labels": {}
    }
]
Is it possible to change the subnet/gateway to avoid the IP conflict? If so, how? I tried the simple approach of making a new Docker network:
docker network create --driver=bridge --subnet=172.15.0.0/28 --gateway=172.15.0.1 new_subnet_1
There still appears to be a conflict somewhere: I can reach other devices, just nothing in 172.17.0.0/16. My guess is it's somewhere in the Hyper-V, vEthernet adapter, or vSwitch layer.
UPDATE 1
I took a look in Wireshark (at the PC level) with the new_subnet_1 network and did not see these packets leave the vSwitch interface or the PC's NIC.
I did find a Docker forum post indicating an issue with Hyper-V and the vSwitch that could be the cause.
Docker Engine v19.03.5
DockerDesktopVM created by Docker for Windows install
UPDATE 2
After several Hyper-V edits and putting the environment back together, I checked the DockerDesktopVM. After getting in via a privileged container, I found that the docker0 network had the IP conflict. docker0 appears to be the same default bridge network I was avoiding; because it is a pre-defined network it cannot be removed, and all my traffic was being sent to it.
After several detours, and breaking my environment at least once, I found the solution was easier than I had thought.
Turned off the Docker Desktop service
Added the following line to the %userprofile%\.docker\daemon.json file in Windows 10 (the surrounding keys are truncated here):
  ...
  "bip": "172.15.1.6/24"  <-- new non-conflicting range
}
Restarted the Docker Desktop service
An easy solution, after chasing options in Hyper-V and the Docker host Linux VM.
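For reference, a minimal daemon.json carrying only this override might look like the following (a sketch; keep whatever other keys your file already has):

```json
{
  "bip": "172.15.1.6/24"
}
```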

How to Xdebug Laravel composer tests

I have Laravel Lumen in a Docker container and Sublime Text on the macOS host.
I was able to get Xdebug working when loading PHP URLs from the browser,
but how do I get it working with scripts run from the Docker machine?
I saw that Xdebug was disabled by default in Composer, but was able to set an env variable to enable it. Still, it cannot connect to my Sublime Text Xdebug plugin.
What are the extra steps to make it work?
php.ini:
xdebug.remote_autostart=1
xdebug.remote_enable=On
xdebug.remote_handler=dbgp
xdebug.remote_connect_back=0
xdebug.remote_host=host.docker.internal
xdebug.remote_mode=req
xdebug.remote_port=9002
xdebug.remote_log = /var/www/html/xdebug.log
Sublime's Xdebug config:
{
"path_mapping": {
"/var/www/html" : "/Users/mark/projects/todolist/api/"
},
"url": "http://localhost",
"super_globals": true,
"close_on_stop": true,
"port": 9002,
"debug": true,
"host":"0.0.0.0"
}
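No answer was recorded here, but one thing worth checking for CLI runs (my suggestion, not a confirmed fix): Xdebug 2 honours the XDEBUG_CONFIG environment variable, and Composer only keeps Xdebug loaded when COMPOSER_ALLOW_XDEBUG is set. Something along these lines inside the container, assuming a `test` script is defined in composer.json:

```shell
# Hypothetical invocation: the idekey must match what the Sublime plugin expects.
COMPOSER_ALLOW_XDEBUG=1 \
XDEBUG_CONFIG="idekey=sublime.xdebug remote_host=host.docker.internal remote_port=9002" \
composer test
```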

Valet local domain with BrowserSync and Laravel Mix

I have a Laravel project which runs fine with the Valet domain something.dev.
I tried to implement BrowserSync with laravel-mix:
mix.browserSync({
  proxy: 'something.dev'
});
After running npm run watch it points me to http://localhost:3000/
Can I point it to the Valet domain instead of localhost:3000?
Here is the output of npm run watch
Asset Size Chunks Chunk Names
mix.js 2.59 kB 0 [emitted] mix
[Browsersync] Proxying: http://something.dev
[Browsersync] Access URLs:
--------------------------------------
Local: http://localhost:3000
External: http://192.168.1.131:3000
--------------------------------------
UI: http://localhost:3001
UI External: http://192.168.1.131:3001
--------------------------------------
[Browsersync] Watching files...
I had similar issues myself getting browserSync working with Valet, but the options I use are:
mix.browserSync({
  proxy: 'something.test',
  host: 'something.test',
  open: 'external'
});
host overrides any detection of hostname in browserSync
open tells it which URL to open (local by default)
If you're using Valet and ran into an issue where BrowserSync doesn't start, here's the answer. I was banging my head against the wall over this, but found a clue in the Upgrade to Mix 6 docs. After reading that, here's my setup:
// package.json
{
  ...
  "scripts": {
    "build": "mix --production",
    "dev": "mix watch" // <--- add this script, you need to run mix watch
  },
  ...
}
Then in my webpack.mix.js
// webpack.mix.js
const mix = require('laravel-mix');
const homedir = require('os').homedir();
const host = 'your-local-domain.test';

// ... other mix stuff

mix.browserSync({
  hot: true,
  ui: false,
  proxy: `https://${host}`,
  host,
  port: 8080,
  open: 'external',
  notify: true,
  files: ['**/*.php', 'dist/**/*.(js,css)'],
  // if you're using valet with https, point to cert & key
  https: {
    key: `${homedir}/.config/valet/Certificates/${host}.key`,
    cert: `${homedir}/.config/valet/Certificates/${host}.crt`,
  },
});
Let me know if I'm missing anything!
