I have installed Spinnaker on a Vagrant machine running Ubuntu 14.04.
All components are running successfully (I checked the active ports and all the logs).
I am also binding the Deck UI and Gate to all network interfaces by specifying custom settings.
When I access the Deck UI from the host machine at VagrantIP:9000, the UI comes up successfully, but Deck then tries to reach Gate at localhost:8084 and gets a "Connection refused".
My Gate is actually running at "http://VagrantIP:8084".
Where do I modify the URL that Deck uses to access Gate?
Thanks for your help
You need to bind Spinnaker to the 0.0.0.0 network interface so it will be reachable when accessed from your local machine.
You can read this blog post, https://blog.spinnaker.io/exposing-spinnaker-to-end-users-4808bc936698, but basically the following should do the trick.
We’ll specify the 0.0.0.0 host in both gate.yml and deck.yml in our default Halyard deployment with this command:
echo "host: 0.0.0.0" | tee \
~/.hal/default/service-settings/gate.yml \
~/.hal/default/service-settings/deck.yml
sudo hal deploy apply
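After the deploy, you can verify from the host machine that both services answer on the VM address rather than only on localhost (a quick sketch; substitute your actual Vagrant IP, and the /health check assumes Gate's Spring Boot health endpoint is enabled):
curl -I http://VagrantIP:9000
curl http://VagrantIP:8084/health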
Related
Hi, what I am actually trying to do is connect remotely from a MySQL client in Windows Subsystem for Linux with mysql -h 172.18.0.2 -P 3306 -u root -p. Before that, I started the Docker container as follows: docker container run --name testdb --network testnetwork -p 3306:3306 -e MYSQL_ROOT_PASSWORD=mysqlRootPassword -e MYSQL_DATABASE=localtestdb -d mariadb/server.
I put the container in its own network because I also have a dockerized Spring Boot application (a GraphQL server) that is supposed to communicate with this database. But whenever I try to connect from the built-in mysql client in my Windows Subsystem for Linux with the command shown above, I get the error message: ERROR 2002 (HY000): Can't connect to MySQL server on '172.18.0.2' (115).
To solve the problem on my own, I already checked whether the bind-address line in the configuration file is commented out, but that didn't help. Interestingly, I had previously managed to set up a Docker container with MariaDB and connect to it from the outside; now, trying exactly the same thing with the only difference that the container is in its own network, it doesn't work.
Hopefully someone out there is able to help me with this annoying problem.
Thanks!
So far,
Daniel
Edit:
I have now tried the advice from this topic: How to configure containers in one network to connect to each other (server -> mysql)?. Furthermore, I linked my Spring Boot (server) application to the MariaDB container with the --link databaseContainerName parameter.
Now I am able to start both containers without any error, but I am still not able to connect remotely to the MariaDB container, which is now running in a virtual Docker network with its own subnet.
I explored this recently: it is by design, namely container isolation. Usually only the main host (e.g., the httpd service) is accessible externally, hiding the internal connections (the hosts it talks to in order to deliver a response).
A container created in its own network is not accessible from external addresses, not even from containers on the same bridge but in another network (e.g., 172.19.0.0/16).
Your container should be accessible at the Docker host address (127.0.0.1 if running locally) on the mapped port ("-p 3306:3306"), i.e. 3306. But of course this won't work if several running database containers map to the same host port.
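In other words, the connection that should work from your WSL client targets the published port on the Docker host, not the container IP (a sketch; 127.0.0.1 assumes Docker runs on the same machine):
mysql -h 127.0.0.1 -P 3306 -u root -p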
Isolation is implemented with the firewall (iptables). You can list the rules with iptables -L from the Docker host to see this.
You can modify the firewall to allow external access to the internal networks. I used this rule:
iptables -A DOCKER -d 172.16.0.0/12 -j ACCEPT
After that, your containerized MySQL engine should be accessible at the internal address 172.18.0.2 on the original (not mapped) port 3306.
Warnings
it disables all isolation, don't use it in production;
you have to re-run this after every Docker start, because the rules are created/modified by Docker on the fly;
not every Docker container will respond to ping; check it from the Docker host (the Linux subsystem in this case) first, and from the Windows command line later.
I used this option (in docker.service) to make the rule permanent:
ExecStartPost=/bin/sh -c '/etc/iptables/accept172_16.sh'
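A minimal sketch of that setup, assuming the script simply re-adds the rule and the option lives in a systemd drop-in (the file paths are my choice):
#!/bin/sh
# /etc/iptables/accept172_16.sh -- re-add the DOCKER chain rule (chmod +x this file)
iptables -A DOCKER -d 172.16.0.0/12 -j ACCEPT
# /etc/systemd/system/docker.service.d/override.conf
[Service]
ExecStartPost=/bin/sh -c '/etc/iptables/accept172_16.sh'
Run systemctl daemon-reload and restart Docker for the drop-in to take effect.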
For Docker on an external host (shared on the LAN), you should use route add (or the hosts file on your machine or router) to route 172.x.x.x addresses to the LAN Docker host.
Hint: use the Portainer project (with the restart policy set to always) to manage your Docker containers. It also makes configuration errors easier to spot.
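If it helps, the usual one-liner for that is roughly the following (as documented by the Portainer project; the image name may differ in newer releases):
docker run -d -p 9000:9000 --restart always -v /var/run/docker.sock:/var/run/docker.sock portainer/portainer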
For testing purposes, I want to set up the Kubernetes master to be accessible only from the local machine and not from the outside. Ultimately I am going to run a proxy server Docker container on the machine, and that is what will be opened up to the outside. This is all inside a minikube VM.
I figure configuring kube-proxy is the way to go, so I did the following:
kubeadm config view > ~/cluster.yaml
# edit proxy bind address
vi ~/cluster.yaml
kubeadm reset
rm -rf /data/minikube
kubeadm init --config cluster.yaml
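For reference, the bind-address edit in cluster.yaml was along these lines (a sketch; kubeProxy.config.bindAddress is my reading of the relevant field, and the exact schema varies by kubeadm version):
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
kubeProxy:
  config:
    bindAddress: 127.0.0.1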
Upon running netstat -ln | grep 8443, I see tcp 0 0 :::8443 :::* LISTEN, which means it didn't take the IP.
I have also tried kubeadm init --apiserver-advertise-address 127.0.0.1, but that only changes the advertised address to 10.x.x.x in kubeadm config view. I feel that is probably the wrong approach anyway; I don't want the API server to become inaccessible to the other Docker containers that need to reach it.
I have also tried doing kubeadm config upload from-file --config ~/cluster.yaml and then attempting to manually restart the Docker container running kube-proxy.
I also tried restarting the machine/cluster after the kubeadm config change, but couldn't figure that out: when you reboot a minikube VM by hand, the kubeadm command disappears and not even Docker is running. Various online methods of restarting things don't seem to work either (I could just be doing this wrong).
I also tried editing the kube-proxy container's config file (bound to a local directory), but it gets overwritten whenever I restart the container. I don't get it.
There's nothing in the Kubernetes dashboard that lets me edit the kube-proxy config file either (since it's a DaemonSet).
Ultimately, I want an authenticated proxy server sitting in front of the k8s master (the apiserver specifically); direct access to the k8s master from outside the VM should not work.
Thanks
You could limit it via the local network configuration (firewall, routes).
As far as I know, the API needs to be accessible, at least on the local network where the other nodes reside, unless you want a single-node "cluster".
So, if you do not have a separate network card whose address you could advertise or bind to, you need to limit access with the firewall or route rules mentioned above.
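As a sketch of the firewall approach (assuming the API server is the listener on port 8443 from your netstat output), a rule like this on the minikube VM would drop everything not arriving on the loopback interface; note that it would also cut off in-cluster clients that reach the API over a non-loopback interface:
iptables -A INPUT -p tcp --dport 8443 ! -i lo -j DROP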
As for your initial question, did you look into this issue? https://github.com/kubernetes/kubernetes/issues/39586
I am running IBM Cloud Private with 5 VMs on my laptop. My home network subnet is 192.168.100.0/24, whereas the subnet used by all 5 VMs is 192.168.142.0/24. In VMware Workstation I am forwarding port 8443 from the host to the master node, which is 192.168.142.103. My laptop's IP is 192.168.100.201.
I was hoping to be able to access the Web UI from any other machine on my home network, so I tried this URL from another machine:
https://192.168.100.201:8443
It initially directs to the guest VM properly, as I see the URL change to:
https://192.168.100.201:8443/console/
But after a few seconds I get a message that the site cannot be reached. I noticed that the URL has changed from the original host laptop address, 192.168.100.201, to the guest VM address, 192.168.142.103, as shown:
https://192.168.142.103:8443/idauth/oidc/endpoint/OP/authorize?client_id=617a0480d5e506a5e797f852bea1df38&response_type=code&scope=openid%20email%20profile&redirect_uri=https://192.168.100.201:8443/auth/liberty/callback
This suggests that the redirection in the Web UI is not handled properly.
However, I installed kubectl for Windows on another machine, forwarded port 8001 from 192.168.100.201 to the master guest VM 192.168.142.103, and ran the kubectl config commands (from the Web UI's Client Configure option) on my other laptop (192.168.100.202):
kubectl config set-cluster pot_icp_cluster.icp --server=https://192.168.100.201:8001 --insecure-skip-tls-verify=true
kubectl config set-context pot_icp_cluster.icp-context --cluster=pot_icp_cluster.icp
kubectl config set-credentials admin --token=<token>
kubectl config set-context pot_icp_cluster.icp-context --user=admin --namespace=default
kubectl config use-context pot_icp_cluster.icp-context
And this works perfectly: I am able to run kubectl commands from the other laptop (192.168.100.202) against the VMs running on the first laptop (192.168.100.201), using port forwarding the same way I did for the Web UI.
My question is: is there something I can do to get this redirection problem in the Web UI fixed?
I received a reply from an expert: the Liberty server that authenticates and verifies a login has only the master node's IP address registered with it as a callback URL during installation. In IBM Cloud Private 2.1.0.1 there is no direct way to register new clients. However, this limitation is being fixed, and starting with the next upgrade we should be able to register new clients dynamically after the install.
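Until then, one possible workaround (an untested sketch, in the same spirit as routing a VM subnet through its host; it only works if the host laptop actually forwards traffic into the VMware network, i.e. IP routing is enabled on Windows) is to add a persistent route on the client machine:
route -p add 192.168.142.0 mask 255.255.255.0 192.168.100.201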
I am in the process of setting up a Hadoop cluster of virtual machines on my LAN, and a process on one of the VMs (the ResourceManager) provides a Web UI that is exhibiting strange behavior. All VMs run on my desktop and have been assigned IPs.
The URL I am targeting is resourcemanager:8088, and here is the behavior.
From other VMs running on my desktop:
curl -v resourcemanager:8088
returns an HTTP 302 Found response with Location: http://resourcemanager:8088/cluster. Looking this up, I saw that this is a redirect, and curl -L resourcemanager:8088 successfully retrieves the HTML.
From the desktop running the VMs:
Trying to reach the URL from the (Chrome) browser gives net::ERR_CONNECTION_REFUSED. Also,
curl resourcemanager:8088
returns curl: (7) Failed to connect to resourcemanager port 8088: Connection refused.
Each VM has the same /etc/hosts:
::1 localhost
127.0.0.1 localhost
10.0.0.3 namenode
10.0.0.4 resourcemanager
10.0.0.5 datanode1
and the .../drivers/etc/hosts file on my (Windows) desktop looks the same, minus the localhost lines.
To make matters more complicated, a second process (the NameNode) also provides a web UI, call it namenode:50070, and I am able to curl it from both the desktop and the VMs, and I can reach it via browser from my desktop.
Any ideas?
EDIT
Specs:
Desktop OS: Windows 10
VMs OS: Arch Linux latest (Linux kernel 4.5.4)
An initial Arch+Hadoop VM was created with Hyper-V, then cloned to create the three "cluster" VMs listed above. After cloning, each VM was given a unique hostname (listed above) and assigned a reserved IP address from my router (also listed above). All VMs use an "external" virtual switch.
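For completeness, reachability of the two ports can be compared from the Windows desktop with PowerShell (a hypothetical check, just to confirm where the refusal happens):
Test-NetConnection resourcemanager -Port 8088
Test-NetConnection resourcemanager -Port 50070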
I cannot comment because I do not have 50 reputation yet, but this might have to do with the configuration of the service behind port 8088: the VM probably got a 'small' netmask from the virtual DHCP server, one that covers the IP range of the other VMs but not the host machine. If that happened, then even with the service configured, like many others, to listen on all interfaces, it would not react to requests from outside that range, and your connection would reach a closed port, causing the 'connection refused' error. Could that be it?
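That hypothesis can be checked from the resourcemanager VM (a sketch; ss ships with iproute2 on Arch):
ip addr show            # compare the netmask with the desktop's subnet
ss -tlnp | grep 8088    # see which address the ResourceManager web UI is bound to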
I followed the instructions here and was able to successfully (I think) install the GitLab Vagrant virtual machine on OS X 10.8 using VirtualBox.
I can do vagrant up to get the VM running, and everything seems to work fine. After that, I can do vagrant ssh without a problem. Also, after SSHing into the VM, I was able to run bundle exec rake gitlab:test, which completed with 1584 examples, 0 failures.
I would like to see the GitLab web interface from my OS X host machine. I thought I could just point my browser at the IP indicated in the Vagrantfile (http://192.168.3.14), but that didn't work.
Any ideas?
Also, any other usage tips for this setup would be appreciated (things like where the repositories are stored on my host machine so I can back them up, whether anyone has set the gitlab-vagrant-vm up for external access from another computer on the network or a remote source, etc.).
You have to connect a second interface for Vagrant. To do this, you have to edit the Vagrantfile.
For example, if you want to connect to the host Wi-Fi, add the following line after the 192.168.3.14 line:
config.vm.network :bridged, bridge: "en0: Wi-Fi (AirPort)"
You can also bridge to the Ethernet interface; use ifconfig on the host machine to determine the right interface. After that, the DHCP server of the host network will assign an IP to the Vagrant box, and you can access GitLab at that IP.
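Alternatively, instead of bridging, you can forward a guest port to the host in the Vagrantfile (a sketch in the Vagrant 1.0 syntax matching the :bridged style above; newer Vagrant versions use config.vm.network :forwarded_port, and port 3000 assumes the Rails server is started as in the answer below):
config.vm.forward_port 3000, 3000
GitLab would then be reachable from the host at http://localhost:3000/.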
Did you actually start the server? You can do that with:
bundle exec foreman start -p 3000
This will start the server on port 3000; you would then access it from the host at
http://192.168.3.14:3000/
Hope this helps,
Chris