I am running Solr inside minikube for a POC and I am trying to figure out how to access it from outside. As far as I know, I can't access Solr using my host IP; it is only reachable via the minikube IP - 192.168.99.100:8983/solr. My objective is to hit the Solr server from a remote box.
One of my teammates suggested that I could maybe use something that forwards incoming requests to a local IP.
Any suggestions??
Thanks
For external access, you would need to expose the solr service, for example with the kubectl expose command.
There are four ways to expose a service for external access in k8s:
LoadBalancer service type, which sets the ExternalIP automatically. This is used when there is an external, non-k8s, cloud provider's load balancer (GCE, AWS or Azure), and that external load balancer provides the ExternalIP for the service, per https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services---service-types.
ExternalIPs per https://kubernetes.io/docs/concepts/services-networking/service/#external-ips.
NodePort: in this approach, the service can be hit from outside the cluster using NodeIP:NodePort/url/of/the/service (see the sketch after this list).
Ingress: https://kubernetes.io/docs/concepts/services-networking/ingress/
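For a quick POC on minikube, NodePort is usually the simplest of the four. A minimal sketch of exposing solr that way (the deployment name solr and port 8983 are assumptions, adjust to your setup):
$ kubectl expose deployment solr --type=NodePort --port=8983
$ kubectl get service solr
The second command shows the NodePort that was assigned, which is what the steps below rely on.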
First, you need to get the URL of your solr Service.
$ minikube service <service-name> --url
http://192.168.99.100:30000
Here, 30000 is your solr Service NodePort.
Now you need to create an SSH tunnel.
For that, try this:
$ ssh -i ~/.minikube/machines/minikube/id_rsa docker@$(minikube ip) -L \*:30000:0.0.0.0:30000
Note: to keep the SSH tunnel running in the background, add & at the end: (ssh -i .....) &
Now you can access this solr Service using your host IP address.
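For example, from the remote box (where <your-host-ip> is a placeholder for the address of the machine running minikube):
$ curl "http://<your-host-ip>:30000/solr/"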
I'm used to connecting to my cluster using telepresence and accessing cluster services locally.
Now, I need to make services in the cluster available to a group of applications that are running in docker containers locally. We can say that it's the inverse use case.
I have an app that is running in a docker container. It accesses services that are deployed using docker-compose. This has been done by using a network:
docker network create myNetwork
# Make app1 use it
docker network connect myNetwork app1
# app2 uses docker-compose, so myNetwork is defined in it and here I just:
docker-compose up
My app1 correctly accesses the containers/services running in app2. However, I still need it to access a service from my cluster!
I've tried making a tunnel from my host to the cluster with telepresence and then accessing the service as if it were on my host. However, it doesn't seem to work. If I go into my app1 container and curl to see if the service name resolves:
curl: (6) Could not resolve host: my_cluster_service_name
Is my approach wrong? Am I missing an operation or consideration? How could I accomplish it?
Docker version: 19.03.8 for Mac
I've found a way to solve the problem.
Instead of trying to use telepresence as in the inverse use case, the solution is a port-forward with k9s. When creating it, it's important not to leave the default interface, which is set to localhost; put 0.0.0.0 instead to ensure that it listens for traffic from all interfaces.
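For reference, k9s drives the same mechanism as kubectl port-forward, so the command-line equivalent would look something like this (the service name and ports are assumptions):
$ kubectl port-forward --address 0.0.0.0 svc/<your-service> 8080:80
The --address 0.0.0.0 flag is the key detail: without it, the forward binds only to localhost and containers cannot reach it through the host's IP.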
Then I changed my containers from the inside, making the services point to my host's IP when resolving the service names. Use whatever method best fits your case for this: since it's not a production environment, I just hardcoded my host IP manually to check that connectivity was achieved.
To point to a specific service in your cluster, you need to use different ports, since they will all be mapped to your host by different port-forwards. Name resolution is no longer needed.
With this configuration, your container's request will reach your host, where the port-forward routes it to the cluster. Connectivity is OK with this setup and the problem is solved.
I have a Windows EC2 instance I use for my public-facing C# API. The VPC (and related Internet Gateway, subnets, etc.) is all default.
I've now set up an AWS Elasticsearch service using their more secure VPC endpoint option (instead of public-facing), and I've associated it with the same subnet and VPC as my Windows EC2 instance above.
I'd like to get them to talk to each other.
Reading https://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-vpc.html, it seems what you'd do is SSH tunnel / port-forward traffic from localhost:9200 on the EC2 instance to the actual Elasticsearch service (via that VPC endpoint).
It seems this command is where the magic happens:
ssh -i ~/.ssh/your-key.pem ec2-user@your-ec2-instance-public-ip -N -L 9200:vpc-your-amazon-es-domain.region.es.amazonaws.com:443
but that is for a Linux EC2 instance.
If I am Remote Desktopped into my Windows EC2 instance (the API), how can I make it so that going to http://localhost:9200 in a browser
will send traffic to my VPC endpoint:
vpc-your-amazon-es-domain.region.es.amazonaws.com:443
Thanks!
Alright, so I'll answer my two questions:
First, it's actually quite easy: just RDP to your box and access the domain directly via the VPC endpoint. You don't need to do anything wacky like port forwarding with the netsh command. Simply make sure the server (in my case, my API) is on the same VPC and you're fine. I just had an error in my connection string; that's why it didn't connect. To confirm, I RDP'd in and was able to hit the endpoint directly in a browser on port 80. While it's true that Elasticsearch itself runs on port 9200, you don't need to forward localhost:9200 --> vpc:9200.
Regarding the second question, about hitting it locally: the problem is that the service lacks a public IP address, so you can't reach it from outside. You can either go through a complicated setup on AWS, or, easier, just run Elasticsearch locally for now until you are ready to use the VPC one (so your code will still run). Another option is to use security groups to make a publicly accessible cluster for now, and then, when your code and search service/layer are done, start anew with a VPC/secure Elasticsearch service, and that should be it.
Another thing many mention is that it is cheaper, and you have more control, if you set up your own Elasticsearch on your local machine and then set one up on EC2 yourself (this is just from reading blogs and seeing people mention how much frustration they had with it).
I have a 4-node Hadoop cluster on EC2. We have configured Hortonworks Hadoop (HDP version 2.4) through Ambari.
I have opened all traffic for all four instances internally, and for the office's external IP.
Whenever I do telnet within the cluster using internal IP:
telnet <internal_ip> 2181
It is able to connect to the specific port I have my service (ZooKeeper) running on.
When I use the public IP of the same instance (Elastic IP) instead of the internal IP, I am not able to telnet, either within the cluster or from my office IP:
telnet <elastic_ip> 2181
I have already configured the security group to allow all traffic. I am using Ubuntu 14.04. We are not using any firewall other than the AWS security group.
Please suggest how I can connect using the Elastic IP/public IP of my instance on this port.
Please find the screenshot of the EC2 Security Group:
Do you use the default VPC?
If not, check whether the VPC has an Internet Gateway, check the Route table (you need a route to the Internet Gateway), and check the Network ACLs.
The Route table and Network ACLs are applied to a subnet.
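If you want to verify this from the command line, the AWS CLI can show whether an Internet Gateway is attached and which routes exist (the VPC ID is a placeholder):
$ aws ec2 describe-internet-gateways --filters Name=attachment.vpc-id,Values=<vpc-id>
$ aws ec2 describe-route-tables --filters Name=vpc-id,Values=<vpc-id>
In the route table output, look for a 0.0.0.0/0 route pointing at the Internet Gateway.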
The default VPC is configured to allow outside traffic; a new VPC is not.
Also, is the Elastic IP associated with the right network interface? An Elastic IP is attached to a specific network interface of an instance.
EDIT: you can take a look at the AWS docs for a better explanation:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/TroubleshootingInstancesConnecting.html
I have installed Rundeck in Docker on an EC2 instance.
When I run the image and start Rundeck, it's fine:
lynx http://localhost:4440
is able to show the Rundeck dashboard.
But how can I access this Rundeck from a Windows browser?
I tried using the address, but the connection was refused.
In order to access this from outside with your setup, you have to ensure the following things:
Ensure that the host server (EC2) is forwarding ports to the docker container. You should have used -p (or --publish) when launching the container for this; see the sketch after this list.
Test: From your EC2 instance, you should be able to access: http://localhost:4440
Ensure you have a public IP assigned to your EC2. You should be able to see that from your aws ec2 console: http://console.aws.amazon.com/ec2
Ensure that the security group(s) for that instance accept inbound connections on port 4440, from your IP or from the rest of the world.
After this, http://<your-ec2-public-ip>:4440 should work.
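For reference on the first point, a minimal sketch assuming the official rundeck/rundeck image (adjust the image name to whatever you are actually running):
$ docker run -d --name rundeck -p 4440:4440 rundeck/rundeck
The -p 4440:4440 publishes the container's port on the EC2 host, which is what makes the localhost test in step 2 work.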
I hope I got your question right.
Let me know how it goes,
Thanks,
Anoop
I'm struggling to get EC2 and ElasticSearch up and running. Specifically, I'm trying to reach my node from outside Amazon's cloud for verification purposes. I've set up the security group so that I have a "Custom TCP" rule on port 9200, and ElasticSearch is listening on that port, which I can see with netstat -l. When I curl -XGET https://localhost:9200 I get the expected response from ElasticSearch. When I curl -XGET https://publicIP:9200 from WITHIN Amazon (i.e. another node that I have running), I also get the expected response. When I try the same request from my desktop, I get no response. I cannot, for the life of me, figure out why this is happening.
There are several things to check:
Accessing the public URL of an instance from inside the Amazon cloud maps to its private IP. In your test above, where you specify publicIP, did you use the public IP or the public domain name? Make sure to test with the IP, not the domain name.
If access to the public IP works from the same machine, try the same thing from another EC2 instance.
Finally, you may have a firewall rule on your desktop, or your work network, preventing outgoing access on port 9200.
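A quick way to tell a security-group problem apart from a local firewall, assuming netcat is available on your desktop (the IP is a placeholder):
$ nc -vz <your-ec2-public-ip> 9200
If this times out from your desktop but succeeds from another EC2 instance, the block is on your side of the network rather than in the security group.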
If you are running Elasticsearch as a service, then edit /etc/elasticsearch/elasticsearch.yml and set:
network.host: "0.0.0.0"
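Then restart the service and verify it responds, for example (assuming a systemd-based install; on older init systems use sudo service elasticsearch restart):
$ sudo systemctl restart elasticsearch
$ curl http://localhost:9200
Be aware that network.host: "0.0.0.0" makes the node listen on every interface, so the security group is then the only thing limiting outside access.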
This solution worked for me.