How to verify the port Mesos is listening on

After I start mesos-master on Ubuntu 14.04, I'm unable to get to http://<master_ip>:5050,
so I want to verify whether Mesos is listening on the default port 5050.
I'm following the instructions here.
vagrant@master2:~$ sudo start mesos-master
mesos-master start/running, process 5272
vagrant@master1:~$ mesos help
Usage: mesos <command> [OPTIONS]
Available commands:
help
start-agents.sh
daemon.sh
stop-masters.sh
start-masters.sh
start-slaves.sh
start-cluster.sh
master
stop-slaves.sh
agent
stop-cluster.sh
stop-agents.sh
log
execute
scp
tail
resolve
ps
init-wrapper
local
cat
I tried the following to verify, but got no output.
vagrant@master1:~$ sudo netstat -tnlp | grep 5050
I know Mesos is running, but I get connection refused.
vagrant@master1:~$ curl http://192.168.2.1:5050
curl: (7) Failed to connect to 192.168.2.1 port 5050: Connection refused

I see you are using Vagrant, so go to the browser on the host machine and type <master2_ip>:5050.
Replace <master2_ip> with the IP address of master2; use ip addr or ifconfig to find the IP.
If Mesos is up and running you will get the Mesos dashboard; otherwise you'll get a port-unreachable error.
Post your Vagrantfile here.
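You can also verify from inside the VM what, if anything, is bound to 5050. A minimal sketch, assuming the stock Ubuntu 14.04 upstart packaging (the log path is an assumption and may differ on your box):
sudo status mesos-master
sudo ss -tnlp | grep 5050
sudo tail -n 50 /var/log/mesos/mesos-master.INFO
If ss shows nothing on 5050, the master either failed to start or bound to a different port; the log should say which.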

Related

How to enable port forwarding with microk8s

I'm playing around with microk8s and I simply want to run an Apache server and navigate to its default page on the same machine. I'm on an ARM M1 Mac:
microk8s kubectl run test-pod --image=ubuntu/apache2:2.4-20.04_beta --port=80
~ $ microk8s kubectl get pods
NAME READY STATUS RESTARTS AGE
test-pod 1/1 Running 0 8m43s
Then I try to enable the forwarding:
◼ ~ $ microk8s kubectl port-forward test-pod :80
Forwarding from 127.0.0.1:37551 -> 80
but:
◼ ~ $ wget http://localhost:37551
--2022-12-24 18:54:37-- http://localhost:37551/
Resolving localhost (localhost)... 127.0.0.1, ::1
Connecting to localhost (localhost)|127.0.0.1|:8080... failed: Connection refused.
Connecting to localhost (localhost)|::1|:8080... failed: Connection refused.
The logs look OK:
◼ ~ $ microk8s kubectl logs test-pod
AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 10.1.254.96. Set the 'ServerName' directive globally to suppress this message
The dashboard proxy does work fine and I can navigate to it:
◼ ~ $ microk8s dashboard-proxy
Checking if Dashboard is running.
Dashboard will be available at https://192.168.64.2:10443
Answering myself:
I should use the IP that Multipass assigns to the guest machine. This is not Docker :)
For some reason I haven't figured out (as asked here), forwarding from the guest does not work properly on macOS; I had to open a shell in the guest and forward from there, and that way it works. See the answer on the linked post.
Hope this will spare some time for future Mac users.
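In practice that looks something like the sketch below. The VM name microk8s-vm and the guest IP 192.168.64.2 are taken from the outputs above; note that port-forward binds only to 127.0.0.1 by default, so --address is needed to reach it from outside the guest:
multipass shell microk8s-vm
microk8s kubectl port-forward test-pod 8080:80 --address 0.0.0.0
Then, from the macOS host:
curl http://192.168.64.2:8080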

cURL error 28: Failed to connect to systemb port 80: Timed out for http://systemb/api/push_products [duplicate]

I have found that cURL is not working properly.
When I run the command curl -I https://www.example.com/sitemap.xml I get:
curl: (7) Failed to connect
It fails to connect on all ports. This error happens only on one domain; all other domains work fine:
curl: (7) Failed to connect to port 80, and 443
Thanks...
First, check your /etc/hosts file entries; the URL you're requesting may be pointing to your localhost.
If the URL is not listed in your /etc/hosts file, try executing the following command to see how cURL resolves and connects to that particular URL:
curl --ipv4 -v "https://example.com/";
After much searching, I found that my hosts settings were not correct.
I checked with nano /etc/hosts:
the domain pointed to the wrong IP in the hosts file.
I changed the wrong IP and it works fine now.
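A quick way to spot such a hosts-file override (a sketch; www.example.com stands in for the affected domain):
getent hosts www.example.com
dig +short www.example.com @8.8.8.8
The first command shows what the system resolver (and therefore curl, typically) will use, including /etc/hosts entries; the second asks a public DNS server directly, bypassing /etc/hosts. If they disagree, the hosts file is the likely culprit.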
This is a new error related to curl: (7) Failed to connect:
curl: (7) Failed to connect
The above error message means that your web server (at least the one specified with curl) is not running at all: no web server is listening on the specified host and the specified (or implied) port. (So, XML doesn't have anything to do with that.)
You can download the key with a browser,
then open a terminal in the Downloads folder,
and then type sudo apt-key add <key_name>.asc
Mine is a Red Hat Enterprise Linux (RHEL) virtual machine, and I was getting something like the following:
Error "curl: (7) Failed to connect to localhost port 80: Connection refused"
I stopped the firewall by running the following commands and it started working.
sudo systemctl stop firewalld
sudo systemctl disable firewalld
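Disabling the firewall entirely is a blunt fix; assuming the service only needs port 80, a narrower alternative is to open just that port:
sudo firewall-cmd --permanent --add-port=80/tcp
sudo firewall-cmd --reload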
If the curl is to the outside world, like:
curl www.google.com
I have to restart my cntlm service:
systemctl restart cntlm
If it's within my network:
curl inside.server.local
Then a Docker network is overlapping with my CNTLM proxy's subnet, and I just remove all Docker networks to fix it; you could also remove only the last network you created, but I'm lazy.
docker network rm $(docker network ls -q)
And then I can work again.
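If you'd rather find the overlapping network than delete them all, you can print each network's subnet and compare it against your proxy's range (a sketch using docker network inspect's Go-template output):
for n in $(docker network ls -q); do docker network inspect -f '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}} {{end}}' "$n"; done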

Multipass VM not reachable via specific http ports from host

I'm running an Ubuntu VM with multipass/hyperkit to run microk8s. Within the VM all things check out and are available with skaffold/kubectl port forwarding. For instance:
$ multipass list
Name State IPv4 Image
microk8s-vm Running 192.168.64.2 Ubuntu 20.04 LTS
10.0.1.1
172.17.0.1
10.1.254.64
Port forwarding service/my-app in namespace default, remote port 80 -> 127.0.0.1:4503
Within the VM: curl localhost:4503 ✅
From the host: curl 192.168.64.2:4503 🛑
I know the VM is reachable on port 80 because curl 192.168.64.2 returns the default nginx not-found page. FWIW, I never installed nginx and the service doesn't seem to be running / I cannot turn it off.
I've been at this for a day and I'm stumped. I even tried the VirtualBox driver and manually configured a bridge adapter. I even created my own adapter...
$ multipass exec -- microk8s-vm sudo bash -c "cat > /etc/netplan/60-bridge.yaml" <<EOF
network:
  ethernets:
    enp0s8:                 # this is the interface name from above
      dhcp4: true
      dhcp4-overrides:      # this is needed so the default gateway
        route-metric: 200   # remains with the first interface
  version: 2
EOF
$ multipass exec microk8s-vm sudo netplan apply
How can I reach this VM from the host?
You can't access your pod IP/port like this.
If you want to access your pod's port via the node's IP address, you need to define a Service of type NodePort and then use ipaddressOfNode:NodePort:
curl http://ipaddressOfNode:NodePort
With port-forward, you must use localhost on your host system:
kubectl port-forward svc/myservice 8000:yourServicePort
then
curl http://localhost:8000
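A minimal sketch of the NodePort route, using the service name my-app and node IP 192.168.64.2 from the question (the node port itself is assigned from the 30000-32767 range, so check it first):
microk8s kubectl patch svc my-app -p '{"spec": {"type": "NodePort"}}'
microk8s kubectl get svc my-app
curl http://192.168.64.2:<assigned_node_port>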

Connecting PostgreSQL installed in docker inside Hyper-V Ubuntu from Windows 10 PgAdmin

I need help connecting to PostgreSQL, which is installed in Docker inside a Hyper-V Ubuntu 18.04 VM, from pgAdmin on Windows 10. So far I have tried the following:
Step 1: Install Postgres in Docker (Ubuntu running on Hyper-V)
sudo docker run -p 5432:5432 --name pg_test -e POSTGRES_PASSWORD=admin -d postgres
Step 2: Create a database
docker exec -it pg_test bash
psql -U postgres
create database mytestdb;
Step 3: Get the ip address
sudo docker inspect pg_test | grep IPAddress
// returned 172.17.0.2
Step 4: pg_hba.conf
host all all 0.0.0.0/0 md5
Step 5: When I try to connect from pgAdmin 4 on Windows, I get a connection error.
Note: I have also tried using the Ubuntu VM's IP address, but no luck.
Yours is a case where you are trying to connect to Postgres from another subnet, i.e. the Windows subnet to the hypervisor subnet, if you are not using a bridged network.
So, case 1:
If this is NAT/host-only and not bridged, you need to make sure you can ping the Ubuntu server from the Windows machine.
Next, make sure the port is open on Ubuntu's end. To check that, telnet to the port from a Windows command prompt:
telnet 192.168.0.10 5432
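If the telnet client is not installed on Windows, PowerShell's Test-NetConnection does the same check (IP as in the example above):
Test-NetConnection 192.168.0.10 -Port 5432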
If you are bridged, you can ping the server, and telnet shows the port is open, then make sure that in the postgresql.conf file
listen_addresses is set to '*', which means all.
Then, at the OS level in Ubuntu, run systemctl stop firewalld to stop the firewall and try to connect. If that works, you need to open the port in the firewall using this command:
firewall-cmd --permanent --add-port 5432/tcp
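Note that a --permanent rule only takes effect after a reload:
firewall-cmd --reload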
I can see from your docker run command that 5432 is already published, so this is more a matter of port mapping and firewalld.
You may also want to check that pg_hba.conf is not restricted to local connections. It should not be the case for the Docker image, but you never know.
See: https://www.postgresql.org/docs/9.1/auth-pg-hba-conf.html
Also, there is a typo: POSTGRES_PASSWOR=admin is missing the D; it should be POSTGRES_PASSWORD=admin.
You don't need the container IP. Since you have mapped the container port to the host machine (Ubuntu), anyone outside just needs the Ubuntu machine's IP, and on Ubuntu itself you can use localhost.
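To confirm each hop, a quick sketch with psql (replace <ubuntu_vm_ip> with your VM's address; in pgAdmin use the same host and port):
psql -h localhost -p 5432 -U postgres        # on the Ubuntu VM itself
psql -h <ubuntu_vm_ip> -p 5432 -U postgres   # from Windows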

Can't add workers to Docker Swarm

So I am running VMs on VirtualBox to try to get Docker to work in distributed mode. As per this tutorial (https://docs.docker.com/get-started/part4/#configure-a-docker-machine-shell-to-the-swarm-manager), I set a VM called "myvm1" to be the swarm manager with ssh myvm1 "docker swarm init --advertise-addr 10.0.2.15",
however, when I try to add workers to that swarm, I get an error:
Error response from daemon: rpc error: code = Unavailable desc =
all SubConns are in TransientFailure, latest connection error:
connection error: desc = "transport: Error while dialing dial tcp
10.0.2.15:2377: connect: connection refused"
exit status 1
where 10.0.2.15 is the IP of the manager VM, which I got from running VBoxManage guestproperty get myvm1 "/VirtualBox/GuestInfo/Net/0/V4/IP"
Anyone know what could be the cause? Is my IP wrong? Do I need to open ports?
FYI: To attempt adding a worker, I tried:
docker-machine ssh myvm2 "docker swarm join --token [token returned by swarm init on myvm1] 10.0.2.15:2377"
Not sure what else I can do.
This is probably because you are running VirtualBox, which means that some of your interfaces are shared between the VMs and the host.
Run ifconfig on your VMs and on the host, and choose the interface that shows a different IP on every machine.
I had this problem too and figured out that the eth0 IPs were the same on every machine. Of course, that cannot work.
eth1, on the other hand, had a different IP on every machine.
Hope this helps.
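Concretely, that means re-initializing the swarm so it advertises the eth1 address instead of the shared NAT address 10.0.2.15. A sketch with placeholder addresses and token:
docker-machine ssh myvm1 "docker swarm leave --force"
docker-machine ssh myvm1 "docker swarm init --advertise-addr <myvm1_eth1_ip>"
docker-machine ssh myvm2 "docker swarm join --token <token from init output> <myvm1_eth1_ip>:2377"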
