Web UI redirection issue - ibm-cloud-private

I am running IBM Cloud Private on 5 VMs on my laptop. My home network subnet is 192.168.100.0/24, whereas the subnet used by all 5 VMs is 192.168.142.0/24. In VMware Workstation I am forwarding port 8443 from the host to the master node, which is 192.168.142.103. My laptop's IP is 192.168.100.201.
I was hoping to be able to access the Web UI from any other machine on my home network, so I tried this URL from another machine:
https://192.168.100.201:8443
It initially redirects to the guest VM properly, as the URL changes to:
https://192.168.100.201:8443/console/
But after a few seconds I get the message that the site cannot be reached. I noticed that the URL has changed from the original host laptop address (192.168.100.201) to the guest VM address (192.168.142.103), as shown:
https://192.168.142.103:8443/idauth/oidc/endpoint/OP/authorize?client_id=617a0480d5e506a5e797f852bea1df38&response_type=code&scope=openid%20email%20profile&redirect_uri=https://192.168.100.201:8443/auth/liberty/callback
It seems that the redirection in the Web UI is not handled properly.
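A quick way to confirm where each hop points is to follow the redirect chain from the other machine and watch the Location headers (a diagnostic sketch; -k skips verification of the self-signed certificate):
curl -vkL https://192.168.100.201:8443/console/ 2>&1 | grep -i 'location:'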
Separately, I installed kubectl for Windows on another machine, forwarded port 8001 the same way from 192.168.100.201 to the master guest VM (192.168.142.103), and ran the kubectl config commands (taken from the Web UI's Configure Client option) on my other laptop (192.168.100.202):
kubectl config set-cluster pot_icp_cluster.icp --server=https://192.168.100.201:8001 --insecure-skip-tls-verify=true
kubectl config set-context pot_icp_cluster.icp-context --cluster=pot_icp_cluster.icp
kubectl config set-credentials admin --token=<token>
kubectl config set-context pot_icp_cluster.icp-context --user=admin --namespace=default
kubectl config use-context pot_icp_cluster.icp-context
And this works perfectly: I am able to run kubectl commands from the other laptop (192.168.100.202) against the VMs running on 192.168.100.201, using the same port forwarding I set up for the Web UI.
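For example, with the context above active, read-only commands like these run fine from 192.168.100.202 (just illustrations; any kubectl command works):
kubectl get nodes
kubectl get pods --all-namespaces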
My question is: Is there something that I can do to get this redirection problem fixed in the Web UI?

I received a reply from an expert: the Liberty server that authenticates and verifies a login has only the master node's IP address registered as a callback URL during installation. In IBM Cloud Private 2.1.0.1 there is no direct way to register new clients. However, this limitation is being addressed, and starting with the next release it should be possible to register new clients dynamically after installation as well.
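In the meantime, one possible network-level workaround, strictly a sketch I have not verified against VMware Workstation's NAT, is to sidestep the redirect problem by routing the guest subnet through the host from the second laptop (both commands need elevated prompts, and the host must actually forward traffic into the guest network for this to work):
rem on the other laptop: send guest-subnet traffic via the VMware host
route add 192.168.142.0 mask 255.255.255.0 192.168.100.201
rem on the VMware host: enable IP forwarding, then reboot
reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v IPEnableRouter /t REG_DWORD /d 1 /f
With that in place the browser could follow the redirect to https://192.168.142.103:8443 directly, with no URL rewriting needed.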

Related

MicroK8s: access nginx pod from other host machines

I am using MicroK8s 1.26 with Hyper-V on Windows 10. I am unable to access the nginx pod from other host machines. I have exposed nginx using this command: microk8s kubectl expose deployment nginx-webserver --type=NodePort --port 80. It is exposed on the ClusterIP, which I am able to access. What should I do to make the pod accessible from other host machines on the same network?
MicroK8s version: 1.26
Windows version: 10 Pro
Hypervisor: Hyper-V
Using Multipass
I tried to access the pod with the VM's IP address, but it was not reachable from the other host machines.
It is also not accessible via the IP address of the host where the VM is deployed.
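For reference, a NodePort service is assigned a high port (30000-32767) on the node; it can be read from the service created above:
microk8s kubectl get service nginx-webserver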
I got the solution after lots of research.
Step 1: Make MicroK8s work with a DNS name instead of the IP address, because the VM's IP address keeps changing.
Open a shell into the VM with multipass shell microk8s-vm in cmd, become root, and edit the certificate template:
sudo su
vi /var/snap/microk8s/current/certs/csr.conf.template
Under the [ alt_names ] section, add the line DNS.6 = microk8s-vm.mshome.net (see the excerpt below), then exit the editor.
Update .kube/config and the MicroK8s config, replacing the IP address with the DNS name (e.g. microk8s-vm.mshome.net).
microk8s stop
Restart the host machine.
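For orientation, the alt_names block ends up looking roughly like this (the existing DNS.1-DNS.5 entries may differ between MicroK8s versions; only the last line is the addition):
[ alt_names ]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
DNS.3 = kubernetes.default.svc
DNS.4 = kubernetes.default.svc.cluster
DNS.5 = kubernetes.default.svc.cluster.local
DNS.6 = microk8s-vm.mshome.net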
Step 2: Because MicroK8s port forwarding fails, I had to opt for Windows port forwarding.
Configure Windows port forwarding: https://woshub.com/port-forwarding-in-windows/
Now I am able to access the nginx web server from other Windows machines.
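A minimal sketch of that Windows-side forward with netsh portproxy, run from an elevated prompt on the host (the listen port 8080 and NodePort 31234 are placeholders; use the NodePort reported by kubectl get service):
netsh interface portproxy add v4tov4 listenaddress=0.0.0.0 listenport=8080 connectaddress=microk8s-vm.mshome.net connectport=31234
After this, http://<host-ip>:8080 from another machine on the network should land on the NodePort inside the VM.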

How to access a web application running as a container in Ubuntu from my Windows system

I am running a web application (copied from GitHub examples) as a container in a remote Ubuntu VM. It is a Node.js application that uses a MySQL database. I brought the application up using docker-compose on Ubuntu.
The application came up as http://172....:3000 on a network port; that IP address is displayed in the docker-compose terminal. On the Ubuntu system, curl http://172....:3000 gives a proper success response. The IP address is a container network address, not the VM's IP address. There is no firewall.
How do I access the web application from my Windows 7 machine? When I try http://<VM IP address>:3000, it does not reach the Ubuntu system at all: I get no message in the docker-compose terminal. Can anyone help here?
ports:
- "3031:3000"
A line like this in your docker-compose file means you have published port 3000 of your container to port 3031 of your Ubuntu VM.
Now you can access the service at http://<ubuntu-ip>:3031, but before that you need to allow access to port 3031.
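If the VM uses ufw, opening the published port is one command (a sketch; adapt it if you manage the firewall with iptables or a cloud security group instead):
sudo ufw allow 3031/tcp
# then verify locally before trying from Windows
curl http://localhost:3031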

How to configure kube-proxy bind IP address?

For testing purposes, I want to set up the Kubernetes master to be accessible only from the local machine and not from the outside. Ultimately I am going to run a proxy server Docker container on the machine that is opened up to the outside. This is all inside a minikube VM.
I figured configuring kube-proxy was the way to go. I did the following:
kubeadm config view > ~/cluster.yaml
# edit proxy bind address
vi ~/cluster.yaml
kubeadm reset
rm -rf /data/minikube
kubeadm init --config cluster.yaml
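For context, the bind address edited in cluster.yaml lives in the kube-proxy component config; a minimal excerpt of the relevant section (field names per the kubeproxy.config.k8s.io/v1alpha1 API; everything else omitted):
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
bindAddress: 127.0.0.1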
Upon doing netstat -ln | grep 8443 I see tcp 0 0 :::8443 :::* LISTEN, which means it didn't take the IP.
I have also tried kubeadm init --apiserver-advertise-address 127.0.0.1, but that only changes the advertised address to 10.x.x.x in kubeadm config view. I feel that is probably the wrong thing anyway; I don't want the API server to be inaccessible to the other Docker containers that need to reach it.
I have also tried kubeadm config upload from-file --config ~/cluster.yaml and then attempting to manually restart the Docker container running kube-proxy.
I also tried restarting the machine/cluster after the kubeadm config change, but couldn't figure that out: when you reboot a minikube VM by hand, the kubeadm command disappears and not even Docker is running. Various online methods of restarting things don't seem to work either (I could just be doing this wrong).
I also tried editing the kube-proxy container's config file (bound to a local directory), but it gets overwritten when I restart the container. I don't get it.
There's nothing in the Kubernetes dashboard that lets me edit the kube-proxy config file either (since it's a DaemonSet).
Ultimately, I wish to use an authenticated proxy server sitting in front of the k8s master (the apiserver specifically). Direct access to the k8s master from outside the VM will not work.
Thanks
You could limit it via the local network configuration (firewall, routes).
As far as I know, the API needs to be accessible at least from the local network where the other nodes reside, unless you want a single-node "cluster".
So, if you do not have a separate network interface that you could advertise or bind the address to, you need to limit access with firewall or route rules as mentioned above; a sketch follows.
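For illustration, a minimal iptables version of that restriction (assuming the apiserver listens on 8443 and cluster-internal traffic comes from 10.0.0.0/8; adjust the port and CIDR to your cluster):
# allow localhost and the cluster-internal range, drop everyone else on 8443
iptables -A INPUT -p tcp --dport 8443 -s 127.0.0.1 -j ACCEPT
iptables -A INPUT -p tcp --dport 8443 -s 10.0.0.0/8 -j ACCEPT
iptables -A INPUT -p tcp --dport 8443 -j DROP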
Regarding your initial question topic, did you look into this issue? https://github.com/kubernetes/kubernetes/issues/39586

Hue UI is not accessible from a remote host

I'm trying to use Hue as a file browser for HDFS. For that I have cloned the Hue repository and built the app with the following commands, given in the README.md of the repository:
git clone https://github.com/cloudera/hue.git
cd hue
make apps
build/env/bin/hue runserver
The Hue UI is accessible on the local machine at the default port using http://localhost:8000, and everything works fine. But when I use my machine's IP address, http://x.x.x.x:8000, and try to access the Hue UI, it keeps processing and waiting.
Other observations:
I can ping from the remote machine to the host machine.
There is no firewall blocking the ports (checked with the nmap port scanner).
The machines are on the same network.
I can access the other ports for the Hadoop NameNode and DataNode UIs.
Changing http_host in hue.ini doesn't affect the result.
The ideal setup for Hue is a reverse proxy (Nginx or Apache HTTP Server, for example); a sketch follows the config below.
In any case, you should refer to the Configuration documentation to run the server externally, outside of 127.0.0.1:
[desktop]
# Webserver listens on this address and port
http_host=0.0.0.0
http_port=8888
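For the reverse-proxy option, a minimal Nginx sketch (the server name is a placeholder; the upstream port matches the hue.ini above):
server {
    listen 80;
    server_name hue.example.com;            # placeholder hostname
    location / {
        proxy_pass http://127.0.0.1:8888;   # Hue, per hue.ini above
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}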
I was able to find a solution to the issue. Hue's production server runs on CherryPy, so starting it with build/env/bin/hue runserver starts the development server instead, where the hue.ini configuration is ignored.
The correct command to start the production server, after setting up the configuration in hue.ini, is build/env/bin/hue runcpserver. Then I was able to access it from a remote host without any problem. You can also use supervisor to start the production server; more information about that can be found here.

Web App on LAN VM: curl -L works from other vms, browser/curl on host doesn't

I am in the process of setting up a Hadoop cluster of virtual machines on my LAN, and a process on one of the VMs (the ResourceManager) provides a Web UI that is exhibiting strange behavior. All VMs run on my desktop and have been assigned IPs.
The URL I am targeting is resourcemanager:8088, and here is the behavior.
From other vms running on my desktop:
curl -v resourcemanager:8088
returns an HTTP 302 Found response with Location: http://resourcemanager:8088/cluster. Looking this up, I saw that this is a redirect, and curl -L resourcemanager:8088 successfully retrieves the HTML.
From the desktop running the vms:
Trying to reach the URL from the (Chrome) browser gives net::ERR_CONNECTION_REFUSED. Also,
curl resourcemanager:8088
returns curl: (7) Failed to connect to resourcemanager port 8088: Connection refused.
Each vm has the same /etc/hosts:
::1 localhost
127.0.0.1 localhost
10.0.0.3 namenode
10.0.0.4 resourcemanager
10.0.0.5 datanode1
and the .../drivers/etc/hosts file on my (Windows) desktop looks the same, minus the localhost lines.
To make matters more complicated, a second process (the NameNode) also provides a web UI, call it namenode:50070, and I am able to curl it from both the desktop and the VMs, and I can reach it via browser from my desktop.
Any ideas?
EDIT
Specs:
Desktop OS: Windows 10
VMs OS: Arch Linux latest (Linux kernel 4.5.4)
An initial Arch+Hadoop VM was created with Hyper-V, then cloned to create the three "cluster" VMs listed above. After cloning, each VM was given a unique hostname (listed above) and assigned a reserved IP address from my router (also listed above). All VMs use an "external VM switch".
I cannot comment because I do not have 50 reputation yet, but this might have to do with the configuration of the service behind port 8088: the VM probably got a 'small' netmask from the virtual DHCP server, one that covers the IP range of the other VMs but not the host machine. If that happened, and the service was bound to that interface's address rather than to all interfaces, it would not react to requests from the host, and your connection would hit a closed port, causing the 'connection refused' error. Could that be it?
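A hedged way to test that theory: check which address the ResourceManager actually bound to.
ss -tln | grep 8088
If it is not listening on 0.0.0.0 (all interfaces), binding the web UI explicitly in yarn-site.xml and restarting the ResourceManager may help (property name as in stock Hadoop; verify it against your version):
<property>
  <name>yarn.resourcemanager.webapp.address</name>
  <value>0.0.0.0:8088</value>
</property>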
