Homestead server times out when route not found - Laravel

Accessing any invalid/nonexistent route on a fresh Laravel app hangs and times out after 60 seconds.
Error: The process "git status -s" exceeded the timeout of 60 seconds.
The same code works fine locally on XAMPP and returns a method/controller-not-found error within a couple of seconds. Please guide.
P.S.: It seems like git status takes a lot of time inside vagrant ssh, but it works fine on the host machine. What also bothers me is why a git status command is being run at all when accessing a route.
Host: Windows 10
Box: v9.2.0
VirtualBox: v6.1.2
Vagrant: v2.2.6

OK, so it turns out the Laravel error page package Ignition is causing all the problems.
There is an ongoing issue on their GitHub page with multiple solutions, including one for the git status issue.
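To confirm the slow command is the same one Ignition shells out to, you can time it inside the VM (the project path below is hypothetical; substitute your own synced folder). Git is known to be much slower on VirtualBox synced folders than on the host, which matches the 60-second timeout above:

$ vagrant ssh
$ cd /home/vagrant/code/my-app   # hypothetical path to the app inside the VM
$ time git status -s             # the exact command from the timeout error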

Related

Acumos Installation

I need some help getting Acumos running.
I have a VM with Ubuntu 20.04.4.
I followed flow-1 here: https://docs.acumos.org/en/latest/submodules/system-integration/docs/z2a/tl-dr.html#flow-1
The pod acumos-k8s-portal-be-569bf85dc-kx89b shows Pending.
When I go to https://localhost/ on the VM, I get a 504 Gateway Timeout from nginx.
Not sure where to look for the issue?
Any help would be appreciated.
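For anyone debugging this, a Pending pod usually means the scheduler cannot place it (for example, insufficient CPU or memory on the node), so describing the pod is a reasonable first check. A minimal sketch, assuming kubectl access; the acumos namespace is an assumption:

$ kubectl get pods -A                                                   # locate the pod and its namespace
$ kubectl describe pod acumos-k8s-portal-be-569bf85dc-kx89b -n acumos   # the Events section at the bottom explains the Pending state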
Just in case someone runs into the same issue: I haven't resolved it on my local VM, but I got everything set up and working as follows.
I used Oracle VirtualBox for my VMs with Ubuntu Desktop 20.04.4; the issue persisted no matter what I tried, and I even tried Ubuntu 18 with the same results.
My current solution:
I got an Ubuntu cloud server from Hetzner (Ubuntu Server 20.04), went through the same process, and it's all up and running :), just bizarre.
Not sure if different results will come from VMware; I will give it a try sometime.
Cheers.

Angular CLI stopped live recompiling when saving changes

I tried a few solutions on my Mac for solving a DNS_PROBE_FINISHED_NXDOMAIN error, and since then my Angular projects run, but when I save changes they are not automatically recompiled (it looks like the watch is not responding).
The steps that I did before it started were:
- renewing the DHCP lease in the TCP/IP tab of the network settings.
- adding a DNS server address.
- running this command in the terminal: dscacheutil -flushcache.
EDIT - When I run ng serve --watch the recompiling works, but without --watch it does not.
Problem solved!
After running ng serve --watch once, plain ng serve started working again!
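For reference, the sequence was just (run from the project root):

$ ng serve --watch   # one run with the explicit watch flag
$ ng serve           # subsequent runs recompiled on save again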
Hope I managed to help someone with this issue and solution...

Docker and WSL error: UNAUTHORIZED

I recently started to use Docker on Windows because that's the environment where I work. I didn't have any problem connecting WSL with Windows Docker and using it at first.
After rebooting my laptop, the problem appeared. When I try to create an image I get the following error:
docker: Error response from daemon: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers).
It's so weird because if I uninstall Windows Docker and reinstall it, everything works, but if I reboot or shut down my laptop, the problem comes back.
It seems like a DNS resolution issue with the Docker registry.
Sometimes the DNS information from DHCP or your network settings causes this trouble on Windows.
Try changing the DNS option from Automatic to a fixed Google DNS (8.8.8.8) in the Docker for Windows settings.
Please refer to this screenshot: fixed DNS settings in Docker for Windows.
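To confirm it is DNS, a quick check from WSL before and after the change (a sketch; assumes nslookup is available):

$ nslookup registry-1.docker.io          # uses the currently configured resolver
$ nslookup registry-1.docker.io 8.8.8.8  # queries Google DNS directly; if only this works, the resolver is the problem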

Running Dusk tests on Homestead for Windows

I'm using Homestead on Windows 10 and installed Laravel 5.4. When I try to run Dusk tests I get the following error:
1) Tests\Feature\ViewProductListingTest::user_can_view_product_listing
Facebook\WebDriver\Exception\WebDriverCurlException: Curl error thrown for http POST to /session with params: {"desiredCapabilities":{"browserName":"chrome","platform":"ANY"}}
Failed to connect to localhost port 9515: Connection refused
Has anybody had any luck getting around this?
Thanx.
I encountered this very problem today. I spent about two hours researching and finally resolved it by following the steps posted in this comment.
I ran the original basic test and it passed. I'll try to run more complex tests and hopefully the solution still works.
It seems to me that Homestead lacks some software (Chrome, Xvfb, etc.) necessary to run browser tests with Laravel Dusk. That's what that comment installs.
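The usual missing pieces are a headless display server and a Chrome binary; a sketch of installing them inside the VM (an assumption about what the linked comment does, since its exact steps aren't reproduced here):

$ sudo apt-get update
$ sudo apt-get install -y xvfb                 # virtual framebuffer so the browser can run without a display
$ wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb
$ sudo dpkg -i google-chrome-stable_current_amd64.deb || sudo apt-get -f install -y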
Hope this helps!
I ran into this issue before and was not able to completely resolve it.
The connection refused error occurred for me because the execution scripts for Dusk in /vendor/laravel/dusk/bin were not set executable inside Homestead.
So I used chmod 777 on those scripts.
After that it complained that it couldn't find an executable Chrome binary, so I installed google-chrome in Homestead.
After I installed Google Chrome, the tests ran but timed out before they could finish, which I am researching now.
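For reference, the permission fix amounts to this, run from the project root inside the VM (chmod +x would be a more restrictive alternative to 777):

$ chmod 777 vendor/laravel/dusk/bin/*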
I ran into the same issue (but I'm on macOS Sierra). If you can, instead of running Dusk from the Homestead VM, run it from your host machine. Just make sure the DB_HOST value in your .env file is set to the hostname you use to access the site in your browser.
So, for example, if you configured the Homestead site to be accessible at mycoolsite.app, use that as your DB_HOST value.
I know this is more of a workaround for situations where your host machine can run it OK, but it's working for me at the moment, so give it a try if you can.
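In .env terms, that is just (hostname taken from the example above):

DB_HOST=mycoolsite.app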

Cloudera host with bad health during install

I've tried again and again with all required steps completed, but during cluster installation, when the selected parcels are installed, every host always shows bad health. The setup never completes.
I am installing CM 5.5 on CentOS 6.7 using VirtualBox.
The Error
Host is in bad health cm.feuni.edu
Host is in bad health dn1.feuni.edu
Host is in bad health dn2.feuni.edu
Host is in bad health nn1.feuni.edu
Host is in bad health nn2.feuni.edu
Host is in bad health rm.feuni.edu
The above errors are shown at step 6, where the setup says:
The selected parcels are being downloaded and installed on all the hosts in the cluster
In the previous step 5, all hosts had completed heartbeat checks at the end.
Memory distribution:
cm: 8 GB
all others: 1 GB
I could not find a proper answer anywhere else. What could be the reason for the bad health?
I don't know if it will help you...
After a few days of struggling with it,
I found the log files (at )
There was a message that there is a mismatch of the GUID,
so I uninstalled everything from both machines (using the script they provide, /usr/share/cmf/uninstall-cloudera-manager.sh, then yum remove 'cloudera-manager-*', and deleting every directory related to Cloudera I could find...)
and then removed the GUID file:
rm /var/lib/cloudera-scm-agent/cm_guid
Afterwards I re-installed everything, and that fixed the issue for me...
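Condensed, that cleanup sequence (run on every affected host) was roughly:

$ sudo /usr/share/cmf/uninstall-cloudera-manager.sh   # vendor-provided uninstall script
$ sudo yum remove 'cloudera-manager-*'                # remove any remaining packages
$ sudo rm /var/lib/cloudera-scm-agent/cm_guid         # drop the mismatched agent GUID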
I read online that there can be issues with the hostname and things like that, but I guess that if you get to this part of the installation, you have already fixed all the domain/FQDN/hostname/hosts issues.
It saddens me that there is no real manual/FAQ for this product... :(
Good luck!
I faced the same problem. This is my solution:
First I edited config.ini:
$ nano /etc/cloudera-scm-agent/config.ini
so that the hostname was the same as what the command $ hostname returned.
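A sketch of the relevant lines, to be checked against the commented examples in your own config.ini (the listening_hostname key and both hostnames here are assumptions based on the hosts named in the question):

[General]
server_host=cm.feuni.edu           # the Cloudera Manager server host
listening_hostname=dn1.feuni.edu   # assumption: set to this host's own `hostname` output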
Then I restarted the Cloudera agent and server:
$ service cloudera-scm-agent restart
$ service cloudera-scm-server restart
Then, in Cloudera Manager, I deleted the cluster and added it again. The wizard continued to run normally.
