WSL2, Ubuntu DNS problem, cannot resolve host - macos

Hey, I am quite devastated at this point. I spent my whole day trying to set up Ubuntu and still nothing.
I downloaded Ubuntu 20.04 and everything is set up, but it just cannot connect to anything; I cannot even run updates, it just fails to connect.
I have TRIED literally everything: I changed /etc/resolv.conf to nameserver 8.8.8.8, I changed my local internet protocol settings, and still nothing. What am I supposed to do? Why is it not working?
All the terminal gives me is "could not resolve host" or "failed to connect". I have uninstalled multiple times; I had 18.04 before, got rid of that, and now 20.04 is not working at all. I just want to develop my Ruby application on it, but nothing seems to work.
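For anyone hitting the same wall: a commonly suggested fix for exactly this symptom is to stop WSL2 from regenerating /etc/resolv.conf on every start, since that regeneration silently overwrites a manual nameserver entry. A minimal sketch, assuming that regeneration is indeed what is undoing your edit:

# inside the Ubuntu/WSL2 shell
sudo tee /etc/wsl.conf >/dev/null <<'EOF'
[network]
generateResolvConf = false
EOF
sudo rm -f /etc/resolv.conf
echo "nameserver 8.8.8.8" | sudo tee /etc/resolv.conf

# then from a Windows prompt, restart the distro so wsl.conf takes effect
wsl --shutdown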

Related

Acumos Installation

I need some help getting Acumos running.
I have a VM with Ubuntu 20.04.4.
Followed flow-1 here: https://docs.acumos.org/en/latest/submodules/system-integration/docs/z2a/tl-dr.html#flow-1
The pod acumos-k8s-portal-be-569bf85dc-kx89b shows Pending.
When I go to https://localhost/ on the VM, I get "504 Gateway Time-out - nginx".
I'm not sure where to look for the issue.
Any help would be appreciated.
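In case it helps while debugging the pending pod itself, a sketch of where to look first, assuming you have kubectl access on the VM (adjust the namespace with -n if Acumos is not installed in the default one):

kubectl get pods -A                                         # confirm which pods are stuck in Pending
kubectl describe pod acumos-k8s-portal-be-569bf85dc-kx89b   # the Events section at the bottom usually names the cause
kubectl get events --sort-by=.metadata.creationTimestamp    # e.g. insufficient CPU/memory or unbound PersistentVolumeClaims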
Just in case someone runs into the same issue: I haven't resolved it on my local VM, but I got everything set up and working as follows.
I used Oracle VirtualBox for my VMs with Ubuntu Desktop 20.04.4; the issue persisted no matter what I tried, and I even tried Ubuntu 18 with the same results.
My current solution:
I got an Ubuntu cloud server from Hetzner (Ubuntu Server 20.04), went through the same process, and it's all up and running :) - just bizarre.
Not sure whether VMware will give different results; I'll give it a try sometime.
Cheers.

Homestead server times out when route not found

Accessing any invalid/non-existent route on a fresh Laravel app hangs and times out after 60 seconds.
Error: The process "git status -s" exceeded the timeout of 60 seconds.
The same code works fine locally on XAMPP and returns "method/controller not found" within a couple of seconds. Please advise.
P.S. It seems like git status takes a long time inside vagrant ssh, but it works fine on the host machine. What also bothers me is why a git status command is being run at all when accessing a route.
Host: Windows 10
Box: v9.2.0.
VirtualBox: v6.1.2
Vagrant: v2.2.6
OK, so it turns out the Laravel error page package, ignition, is causing all the problems.
There is an ongoing issue on their GitHub page with multiple solutions, including one for the git status issue.
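For anyone triaging the same thing, a quick way to confirm it is the slow git status inside the VM rather than the app itself is to time the command in both places (the project path below is an assumed example for a typical Homestead/Vagrant layout):

# on the host
time git status -s
# inside the VM
vagrant ssh -c "cd ~/code/my-app && time git status -s"

The usual culprit is that git status has to stat every file over the VirtualBox shared folder, which is far slower than native disk access, so on a large tree it can blow past the 60-second limit.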

Running dusk tests on Homestead for Windows

I'm using Homestead on Windows 10 and installed Laravel 5.4. When I try to run Dusk tests, I get the following error:
1) Tests\Feature\ViewProductListingTest::user_can_view_product_listing
Facebook\WebDriver\Exception\WebDriverCurlException: Curl error thrown for http POST to /session with params: {"desiredCapabilities":{"browserName":"chrome","platform":"ANY"}}
Failed to connect to localhost port 9515: Connection refused
Has anybody had any luck getting around this?
Thanx.
I encountered this very problem today. I spent about 2 hours researching and finally resolved it by following the steps posted in this comment.
I ran the original basic test and it passed. I'll try to run more complex tests and hopefully the solution still works.
It seems to me that Homestead lacks some of the software (Chrome, xvfb, etc.) needed to run browser tests with Laravel Dusk. That's what that comment installs; a rough sketch of the idea follows below.
Hope this helps!
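For completeness, the rough shape of what that comment installs inside the Homestead VM is something like the following (a sketch assuming an Ubuntu-based Homestead box; the Chrome .deb URL is Google's standard stable-channel download):

sudo apt-get update
sudo apt-get install -y xvfb
wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb
sudo dpkg -i google-chrome-stable_current_amd64.deb || sudo apt-get -f install -y   # resolve any missing dependencies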
I ran into this issue before and was not able to completely resolve it.
The connection refused error occurred for me because the execution scripts for Dusk in /vendor/laravel/dusk/bin were not set executable inside Homestead.
So I used chmod 777 on those scripts.
After that it complained that it couldn't find an executable Chrome binary, so I installed google-chrome in Homestead.
After I installed Google Chrome, the tests ran but timed out before they could finish, which I am researching now.
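A slightly less drastic variant of the same steps, in case it helps (the project path is an assumed example; the chromedriver binary name is the one laravel/dusk ships in its bin directory):

cd ~/code/my-project
chmod +x vendor/laravel/dusk/bin/*                           # the executable bit is enough; 777 is not required
./vendor/laravel/dusk/bin/chromedriver-linux --port=9515 &   # start chromedriver manually if your DuskTestCase does not
php artisan dusk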
I ran into the same issue (but I'm on macOS Sierra). If you can, instead of running Dusk from the Homestead VM, run it from your host machine. Just make sure the DB_HOST value in your .env file is set to the hostname you use to access the site in your browser.
So for example, if you configured the Homestead site to be accessible at mycoolsite.app, use that as your DB_HOST value.
I know this is more of a workaround for situations where your host machine can run it OK, but it's working for me at the moment, so give it a try if you can.
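To make that concrete, the relevant lines of the host machine's .env would look something like this (hostname taken from the example above; the credentials shown are Homestead's defaults and are an assumption about your setup):

DB_HOST=mycoolsite.app
DB_USERNAME=homestead
DB_PASSWORD=secret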

Vagrant not working with localhost extension on Chrome

I have a 2015 Mac with macOS Sierra.
After too many problems with Apache and PHP, I decided to run Vagrant.
I'm running box.scotch.io for my work.
Before Vagrant I configured the hosts file as follows:
127.0.0.1 devsite.localhost
127.0.0.1 sub.devsite.localhost
Remember I can't change the domains and extensions because it's not my project, and I have to use those in order for some redirects and APIs to work.
After Vagrant I changed it to:
192.168.33.10 devsite.localhost
192.168.33.10 sub.devsite.localhost
After editing the conf files inside Vagrant, it worked fine - BUT not in Chrome. I tested in Safari and Firefox and it works fine.
For some reason, Chrome was still showing me the apache2 pages for those two hosts.
So I went and deleted the conf files from my local Apache (for some reason). The only thing that changed was that it now shows me 403 Forbidden (so still Apache). I tried shutting down Apache; now it shows me "This site can't be reached".
I pinged them and it showed the correct IP (the Vagrant IP).
I flushed the DNS (from the terminal and from Chrome) - it still doesn't work.
I tried restarting Chrome - nope. I tried restarting the laptop - nope.
So I thought Chrome doesn't reload the hosts file, so I changed the entries from .localhost to .localhost2 or .local. Now it shows me the 404 from Vagrant.
The weird part: everything I put with .localhost as the extension doesn't work in Chrome... a.b.c.localhost will not work.
If I start Apache, a.b.c.localhost shows me 403 Forbidden from Apache, even though it's not in the hosts file.
Note that it works fine in Firefox and Safari, but I really need Chrome and the .localhost extension.
I've already lost almost 2 days on this issue and I can't afford to lose another one.
Find related information here: https://bugs.chromium.org/p/chromium/issues/detail?id=489973
In short, this seems to be a known Chrome feature: /etc/hosts is ignored when resolving host names ending in .localhost, as an OS X-specific security mitigation. Comment 22 indicates a workaround: add 127.0.0.1 localhost. to /etc/hosts (note the trailing dot after "localhost").
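So, combined with the entries from the question, /etc/hosts would end up looking like this (note the trailing dot on the first line):

127.0.0.1 localhost.
192.168.33.10 devsite.localhost
192.168.33.10 sub.devsite.localhost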
For host management you can use a special plugin: https://github.com/devopsgroup-io/vagrant-hostmanager - AFAIR they resolved the Chrome + macOS issue.

Cloudera host with bad health during install

I am trying again and again with all the required steps completed, but during cluster installation, when the selected parcels are being installed, every host always shows bad health. The setup never fully completes.
I am installing CM 5.5 on CentOS 6.7 using VirtualBox.
The Error
Host is in bad health cm.feuni.edu
Host is in bad health dn1.feuni.edu
Host is in bad health dn2.feuni.edu
Host is in bad health nn1.feuni.edu
Host is in bad health nn2.feuni.edu
Host is in bad health rm.feuni.edu
The above errors are shown at step 6, where the setup says:
The selected parcels are being downloaded and installed on all the hosts in the cluster
In the previous step 5, all hosts completed the heartbeat checks at the end.
Memory distribution:
cm: 8 GB
all others: 1 GB
I could not find a proper answer anywhere else. What could be the reason for the bad health?
I don't know if it will help you...
For me, after a few days of struggling with it,
I found the log files (at )
which mentioned a mismatch of the GUID,
so I uninstalled everything from both machines (using the script they provide, /usr/share/cmf/uninstall-cloudera-manager.sh, then yum remove 'cloudera-manager-*', and deleting every Cloudera-related directory I found...)
and then removed the guid file:
rm /var/lib/cloudera-scm-agent/cm_guid
Afterwards I re-installed everything, and that fixed that issue for me...
I read online that there can be issues with the hostname and things like that, but I guess that if you get to this part of the installation, you have already fixed all the domain/FQDN/hostname/hosts issues.
It saddens me that there is no real manual/FAQ for this product... :(
Good luck!
I faced the same problem. This is my solution:
First I edited config.ini:
$ nano /etc/cloudera-scm-agent/config.ini
so that the hostname was the same as what the command $ hostname returned.
Then I restarted the Cloudera agent and server:
$ service cloudera-scm-agent restart
$ service cloudera-scm-server restart
Then in Cloudera Manager I deleted the cluster and added it again. The wizard continued to run normally.
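A quick sanity check before (or after) the restarts, to confirm the agent config and the OS agree (a sketch; which keys are present in config.ini varies by CM version):

$ hostname -f                                                                       # what the OS reports
$ grep -E '^(server_host|listening_hostname)' /etc/cloudera-scm-agent/config.ini    # what the agent is configured with
$ service cloudera-scm-agent status                                                 # confirm the agent is running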

Resources