This is on EC2. I have an init script that does some basic setup like installing RabbitMQ, creating a virtual host and user, setting permissions, etc. So basically it goes:
sudo yum --enablerepo=epel install rabbitmq-server
/etc/init.d/rabbitmq-server start
rabbitmqctl add_user username password
rabbitmqctl add_vhost vhost
rabbitmqctl set_permissions -p vhost username ".*" ".*" ".*"
rabbitmqctl stop
Then I exit the shell and create an EBS-backed image (AMI) from the instance. Amazon automatically reboots the server to create the image.
Now the weird part: after the reboot, everything was still set except the permissions.
Then, when I started a new instance from the image, there was no user or vhost in RabbitMQ at all.
Is there something that needs to be done in rabbitmq to save changes?
If the settings disappear when you "stop" and "restart" the instance as opposed to rebooting it, it is because the IP address changes and the RabbitMQ settings are bound to the IP.
See RabbitMQ on Amazon EC2 instances
I think it may be this, from http://www.rabbitmq.com/ec2.html:
Persistent data on EBS device
RabbitMQ writes data to the following directories on Ubuntu:
/var/lib/rabbitmq/ to store persistent data like the messages or queues
/var/log/rabbitmq/ to store logs
If you want to use an EBS block device to store RabbitMQ data, just link these directories to your EBS device. Stop RabbitMQ before making any changes to the data directory:
$ /etc/init.d/rabbitmq-server stop
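For example, here is a minimal sketch of relocating the data directory onto an EBS volume, assuming the volume is already formatted and mounted at /ebs (the mount point is just an illustration):
/etc/init.d/rabbitmq-server stop
# move the persistent data onto the EBS volume and symlink it back
mv /var/lib/rabbitmq /ebs/rabbitmq
ln -s /ebs/rabbitmq /var/lib/rabbitmq
chown -R rabbitmq:rabbitmq /ebs/rabbitmq
/etc/init.d/rabbitmq-server start
As long as the same EBS volume is attached and mounted on the new instance (and the node name stays the same), the users, vhosts and permissions created above should survive.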
For testing purposes, I want to set up the Kubernetes master so that it is only accessible from the local machine and not from outside. Ultimately I am going to run a proxy server Docker container on the machine that is opened up to the outside. This is all inside a minikube VM.
I figure configuring kube-proxy is the way to go. I did the following:
kubeadm config view > ~/cluster.yaml
# edit proxy bind address
vi ~/cluster.yaml
kubeadm reset
rm -rf /data/minikube
kubeadm init --config cluster.yaml
Running netstat -ln | grep 8443, I see tcp 0 0 :::8443 :::* LISTEN, which means it didn't take the IP.
I have also tried kubeadm init --apiserver-advertise-address 127.0.0.1 but that only changes the advertised address to 10.x.x.x in the kubeadm config view. I feel that is probably the wrong thing anyways. I don't want the API server to be inaccessible to the other docker containers that need to access it or something.
I have also tried doing this kubeadm config upload from-file --config ~/cluster.yaml and then attempting to manually restart the docker running kube-proxy.
Also tried restarting the machine/cluster after the kubeadm config change, but couldn't figure that out. When you reboot a minikube VM by hand, the kubeadm command disappears and not even Docker is running. Various online methods of restarting things don't seem to work either (I could just be doing this wrong).
Also tried editing the kube-proxy container's config file (bound to a local dir), but that gets overwritten when I restart the container. I don't get it.
There's nothing in the Kubernetes dashboard that allows me to edit the kube-proxy config file either (since it's a DaemonSet).
Ultimately, I wish to use an authenticated proxy server sitting in front of the k8s master (the apiserver specifically). Direct access to the k8s master from outside the VM will not work.
Thanks
You could limit it via the local network configuration (firewall, routes).
As far as I know, the API needs to be accessible, at least via the local network where the other nodes reside, unless you want to have a single-node "cluster".
So, if you do not have a separate network interface that you could advertise or bind the address to, you need to limit access with the firewall or route rules mentioned above.
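As a rough sketch, assuming the apiserver listens on 8443 (as in your netstat output) and the other nodes live on 192.168.99.0/24 (replace with your actual node subnet), the iptables rules on the master could look like this:
# allow apiserver traffic from the node itself and from the node subnet, drop everything else
sudo iptables -A INPUT -p tcp --dport 8443 -i lo -j ACCEPT
sudo iptables -A INPUT -p tcp --dport 8443 -s 192.168.99.0/24 -j ACCEPT
# add similar ACCEPT rules for any pod/bridge subnets that must reach the apiserver
sudo iptables -A INPUT -p tcp --dport 8443 -j DROP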
On your initial question topic, did you look into this issue? https://github.com/kubernetes/kubernetes/issues/39586
I am running IBM Cloud Private using 5 VMs on my laptop. My home network subnet is 192.168.100.0/24, whereas the subnet used by all 5 VMs is 192.168.142.0/24. I am port forwarding 8443 in VMware Workstation from the host to the master node, which is 192.168.142.103. My laptop IP is 192.168.100.201.
I was hoping to be able to access the Web UI from any other machine on my home network, so I tried this URL from another machine:
https://192.168.100.201:8443
And it redirects properly to the guest VM, as I see the URL changes to:
https://192.168.100.201:8443/console/
But after a few seconds I get the message that the site cannot be reached. I noticed that the URL has changed from the original host laptop address 192.168.100.201 to the guest VM address 192.168.142.103, as shown:
https://192.168.142.103:8443/idauth/oidc/endpoint/OP/authorize?client_id=617a0480d5e506a5e797f852bea1df38&response_type=code&scope=openid%20email%20profile&redirect_uri=https://192.168.100.201:8443/auth/liberty/callback
This suggests that the redirection in the Web UI is not handled properly.
However, I installed kubectl for Windows on another machine, forwarded port 8001 from 192.168.100.201 to the master guest VM 192.168.142.103, and ran the kubectl config commands (from the Web UI's Client Configure option) on my other laptop (192.168.100.202):
kubectl config set-cluster pot_icp_cluster.icp --server=https://192.168.100.201:8001 --insecure-skip-tls-verify=true
kubectl config set-context pot_icp_cluster.icp-context --cluster=pot_icp_cluster.icp
kubectl config set-credentials admin --token=<token>
kubectl config set-context pot_icp_cluster.icp-context --user=admin --namespace=default
kubectl config use-context pot_icp_cluster.icp-context
And this works perfectly: I am able to run kubectl commands from the other laptop (192.168.100.202) against the VMs running on the first laptop (192.168.100.201), using port forwarding the same way I did for the Web UI.
My question is: Is there something that I can do to get this redirection problem fixed in the Web UI?
I received a reply from an expert: the Liberty server that authenticates and verifies a login has only the master node's IP address registered as a callback URL during installation. In IBM Cloud Private 2.1.0.1 there is no direct way to register new clients. However, this limitation is being fixed, and starting with the next upgrade we should be able to register new clients dynamically post-install as well.
I have an ICP installation on some bare metal that I use to educate myself, so I don't need to keep it running all the time. What is the proper way to shut it down while I am not using it? I have two physical nodes, master and worker. Currently I just ssh into each and issue a sudo shutdown now command.
When I bring the cluster back online later, I can't get to the admin UI. It responds with a 502 Bad Gateway error. When I load https://master:9443 I get the Welcome to Liberty page (indicating that at least the web server is running).
If you stop the Docker containers or the Docker runtime, the kubelet will attempt to restart them.
If you want to shut down the system, you must stop the kubelet on each node. On Ubuntu, you would use systemctl:
sudo systemctl stop kubelet
sudo systemctl stop docker
Confirm that all processes are shutdown:
top
And that all related network ports are no longer in use:
netstat -antp
(Note that netstat's "-p" option requires root privileges to inspect the pid holding onto the port).
To restart the cluster, start Docker and then the kubelet. Again, for Ubuntu:
sudo systemctl start docker
sudo systemctl start kubelet
And of course you can follow the logs for the kubelet:
sudo journalctl -e -u kubelet
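If you have more than a couple of nodes, you can wrap this in a small script and run it over ssh (the hostnames below are placeholders for your own nodes):
# stop the kubelet first so it does not restart the containers, then stop docker
for node in master worker1 worker2; do
  ssh "$node" "sudo systemctl stop kubelet && sudo systemctl stop docker"
done
# to bring the cluster back later, reverse the order
for node in master worker1 worker2; do
  ssh "$node" "sudo systemctl start docker && sudo systemctl start kubelet"
done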
Stop Docker to shut it down. I hope this helped.
systemctl stop docker
I'm using the Cloudera distribution of Hadoop and recently had to change the IP addresses of a few nodes in the cluster. After the change, the following error comes up on one of the nodes (old IP: 10.88.76.223, new IP: 10.88.69.31) when I try to start the datanode service:
Initialization failed for block pool Block pool BP-77624948-10.88.65.174-13492342342 (storage id DS-820323624-10.88.76.223-50010-142302323234) service to hadoop-name-node-01/10.88.65.174:6666
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.protocol.DisallowedDatanodeException): Datanode denied communication with namenode: DatanodeRegistration(10.88.69.31, storageID=DS-820323624-10.88.76.223-50010-142302323234, infoPort=50075, ipcPort=50020, storageInfo=lv=-40;cid=cluster25;nsid=1486084428;c=0)
at org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.registerDatanode(DatanodeManager.java:656)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.registerDatanode(FSNamesystem.java:3593)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.registerDatanode(NameNodeRpcServer.java:899)
at org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.registerDatanode(DatanodeProtocolServerSideTranslatorPB.java:91)
Has anyone had success with changing the IP address of a Hadoop datanode and joining it back to the cluster without data loss?
CHANGE HOST IP IN CLOUDERA MANAGER
Change the host IP in /etc/hosts on all nodes
sudo nano /etc/hosts
Edit the Cloudera agent config.ini on all nodes if the master node IP changes
sudo nano /etc/cloudera-scm-agent/config.ini
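The relevant entry in config.ini is the server_host line, which should point at the new Cloudera Manager address. One way to make the change non-interactively (the xxx placeholder stands for the new Cloudera Manager IP):
# point every agent at the new Cloudera Manager host
sudo sed -i 's/^server_host=.*/server_host=xxx.xxx.xxx.xxx/' /etc/cloudera-scm-agent/config.ini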
Change the IP in the PostgreSQL database
To find the password, open the Cloudera SCM database properties file
cat /etc/cloudera-scm-server/db.properties
Find the password line, for example:
com.cloudera.cmf.db.password=gUHHwvJdoE
Open PostgreSQL
psql -h localhost -p 7432 -U scm
Query the hosts table in PostgreSQL
select name,host_id,ip_address from hosts;
Update the IP for the affected host
update hosts set ip_address = 'xxx.xxx.xxx.xxx' where host_id=x;
Exit the tool
\q
Restart the agent service on all nodes
service cloudera-scm-agent restart
Restart the server service on the master node
service cloudera-scm-server restart
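To confirm that each agent reconnected with its new address, you can watch the agent log (the path below is the Cloudera default; adjust if your installation differs):
sudo tail -f /var/log/cloudera-scm-agent/cloudera-scm-agent.log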
Turns out it's better to:
Decommission the server from the cluster to ensure that all blocks are replicated to other nodes in the cluster.
Remove the server from the cluster
Connect to the server, change the IP address, then restart the Cloudera agent
Notice that Cloudera Manager now shows two entries for this server. Delete the entry with the old IP and the longest heartbeat time
Add the server to the required cluster and add the required roles back to the server (e.g. HDFS DataNode, HBase RS, YARN)
HDFS will read all data disks and recognize the block pool and cluster IDs, then register the datanode.
All data will be available and the process will be transparent to any client.
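Once the datanode is back up, a quick way to confirm it re-registered with the namenode is a dfsadmin report (run as the hdfs user on a typical Cloudera setup; the new IP should show up in the live datanodes list):
sudo -u hdfs hdfs dfsadmin -report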
NOTE: If you run into name resolution errors from HDFS clients, the application has likely cached the old IP and will most likely need to be restarted. In particular, Java clients that previously referenced this server (e.g. HBase clients) must be restarted, because the JVM caches IPs indefinitely; they will keep throwing connectivity errors against the changed IP until they are restarted.
I am admittedly relatively new to using Docker for environment isolation, but I've run into a problem I am yet to solve, and I'm looking for some advice on how to proceed. Apologies if this is dirt simple.
I have an image built with this Dockerfile:
FROM java:7-jre
MAINTAINER me <email redacted>
ENV CATALINA_HOME="/usr/local/tomcat"
ENV PATH=$CATALINA_HOME/bin:$PATH
RUN mkdir -p "$CATALINA_HOME"
#Add tomcat tarball with configs
#need to figure out if war files should be auto-deploy or manual-deploy via manager
ADD ./ $CATALINA_HOME
WORKDIR $CATALINA_HOME
RUN tar -xmvf tomcat.tar.gz --strip-components=1 \
&& rm bin/*.bat \
&& rm tomcat.tar.gz*
EXPOSE 8080
#quite possibly unnecessary to expose 61616
EXPOSE 61616
CMD catalina.sh run
Because my host is Mac OSX, I'm using the boot2docker package. The port forwarding is a real PITA, but for now I'm just binding host 8080 to container 8080 when I run the container (-p 8080:8080) and I have 8080 forwarded in the boot2docker networking setup.
This image runs a container just fine, and I am able to manually upload and deploy .war files to this container while it's running.
On my local machine, I am running ActiveMQ. Eventually I'll put this in a container but I need to get past this hurdle first. ActiveMQ is running with the default port 61616 listening, as shown in this netstat output:
14:14 $ netstat -a | grep 6161
tcp46 0 0 *.61616 *.* LISTEN
The problem I'm having is that deployed war files in my tomcat container are unable to talk to the physical host on 61616. Here is the actual error from the catalina.out log on the container (I added some line breaks to make it easier to read):
Could not refresh JMS Connection for destination 'request' - retrying in 5000 ms.
Cause: Error while attempting to add new Connection to the pool; nested exception is javax.jms.JMSException:
Could not connect to broker URL: tcp://localhost:61616.
Reason: java.net.ConnectException: Connection refused
Admittedly, I think it's because the war file is configured to use localhost:61616 to connect to AMQ -- it doesn't feel right for localhost inside the container to "work" reaching back to the host. I'm not sure what variable value I should set that to, or if that's even the actual issue. I would think that if it's a dynamically-allocated black-magic IP address, it'd be relatively painful to keep reconfiguring inside war files.
Corollary: are there other considerations I would need to make beyond this configuration if I wanted to link this tomcat container with an AMQ one?
Thanks in advance for your attention. ~P
First, you shouldn't need to EXPOSE 61616 in the container. (EXPOSE is for ports the container itself listens on; here the container makes an outbound connection to 61616, so exposing it does nothing useful.)
What you do need, though, is a way to reach the Docker host (your boot2docker VM) from within the Docker container. The best way I've found to do this so far, from this answer, is to run inside your Docker container:
export DOCKER_HOST_IP=$(route -n | awk '/UG[ \t]/{print $2}')
That is going to give you the IP address of your boot2docker VM, as seen from within the current docker container. I'll leave it up to you to figure out how to configure your JMS client to connect to that IP address, but one idea that comes to mind is something like:
echo "$DOCKER_HOST_IP my-jms-hostname" >> /etc/hosts
And then you can hardcode your JMS configuration to hit my-jms-hostname:61616
I recommend that you put the above two commands into a start script that you use to start up your application server in the container.
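As a sketch, such a start script might look like this (my-jms-hostname is the made-up name from above; catalina.sh run mirrors the CMD in your Dockerfile):
#!/bin/sh
# resolve the boot2docker VM's address as seen from inside this container
DOCKER_HOST_IP=$(route -n | awk '/UG[ \t]/{print $2}')
# map it to a stable hostname that the JMS configuration can reference
echo "$DOCKER_HOST_IP my-jms-hostname" >> /etc/hosts
# then start Tomcat in the foreground as before
exec catalina.sh run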
Next, you will need to find a way to tunnel that port on your boot2docker VM to your local host OS. For example, on your local host OS, run
boot2docker ssh -R61616:localhost:61616
That will listen on the remote (boot2docker VM's) port 61616 and forward it to your local host OS's localhost:61616, which is where ActiveMQ is hopefully listening happily for an incoming connection from your application server's JMS client.
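To sanity-check the tunnel before wiring up the application, you can test the port from inside the container, assuming a tool like nc is available in the image (install it if not):
nc -zv my-jms-hostname 61616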