How to expose 2 ports in OpenShift for a JMeter client

I built a Docker image which provides a JMeter server. In the Dockerfile I exposed 2 ports by
[...]
EXPOSE 1099 50000
[...]
When running the image on my local machine with
docker run --rm --name [name] -d -p 1099:1099 -p 50000:50000 [name]
I can access the server from the JMeter controller.
When I try to run the image in OpenShift I cannot find a way to expose the 2 ports in the Route's definition. It seems as if only one port per Route is allowed.
Is there a workaround in OpenShift to access the JMeter server from my client, analogous to my local setup?

There's an open issue to support multiple ports per route in OpenShift:
Is it possible for a route to expose multiple ports? #16529
There's a workaround: define multiple routes for different ports.
To have multiple routers for different ports, copy the router YAML, change every occurrence of the port and the router name, and import the YAML as a new router.
This way you can use the same host for different routes, because the different ports will be served by different routers.
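As a sketch of that approach, two Route objects can point at the same Service, each selecting a different target port. This assumes a Service named jmeter exposing both ports from this question; the route names, host, and apiVersion are illustrative and may vary with your OpenShift version:
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: jmeter-rmi               # hypothetical route name
spec:
  host: jmeter.example.com
  to:
    kind: Service
    name: jmeter                 # hypothetical Service exposing 1099 and 50000
  port:
    targetPort: 1099
---
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: jmeter-data              # second route needs a unique name
spec:
  host: jmeter.example.com       # same host; per the workaround, a second router must serve this port
  to:
    kind: Service
    name: jmeter
  port:
    targetPort: 50000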

Suppose you have created one route with the following:
oc expose svc foo --port 8080 --hostname foo.bar.com
This takes the service name, uses it for the route name as well, and exposes the service:
oc get routes
NAME   HOST/PORT     SERVICES   PORT
foo    foo.bar.com   foo        8080
Suppose you want to create another route on port 9000. If you run the following command
oc expose svc foo --port 9000 --hostname api.foo.bar.com
you will get the following error
Error from server (AlreadyExists): "foo" already exists
As you can see, the command took the service name and used it for the route name as well.
Instead, you can do the following to provide a unique route name
oc expose svc foo --port 9000 --hostname api.foo.bar.com --name api
Route names must be unique, and since by default the service name is used as the route name, the second route request raises this error.
You can avoid it by providing a unique name with --name
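A route listing should then show both routes backed by the same service on different ports (output sketched from this example; exact columns vary by oc version):
oc get routes
NAME   HOST/PORT         SERVICES   PORT
foo    foo.bar.com       foo        8080
api    api.foo.bar.com   foo        9000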

Related

Connecting two dockerized microservices without using Docker Compose file [Spring Boot application]

I am learning how to connect two microservices, first without using a Docker Compose file.
When I include the following IP address
eureka.client.serviceUrl.defaultZone=http://172.17.0.2:8761/eureka in the application.properties file
and run docker run --name vehicle -p 8080:8080 vehicle -- I manage to connect the Eureka and Vehicle microservices.
BUT if I use the name of the Eureka microservice instead of its address in the application.properties file: eureka.client.serviceUrl.defaultZone=http://discovery:8761/eureka
and run docker run --name vehicle -p 8080:8080 vehicle -- it does not work.
ONLY if I override the env variable like this:
docker run --name vehicle -p 8080:8080 -e EUREKA_CLIENT_SERVICEURL_DEFAULTZONE=http://172.17.0.2:8761/eureka vehicle
OR if I add a link:
docker run --name vehicle -p 8080:8080 --link discovery:discovery vehicle -- it works fine.
My question is why does it work immediately when I include the IP address into the URL in the application.properties file, but I need to use a link or override the URL address if I specify the service name?
My reasoning is that Docker has an embedded DNS system, so it should be able to recognize the microservices by both their names and their addresses.
Please help me clarify my understanding.
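For what it's worth, Docker's embedded DNS only resolves container names on user-defined networks; on the default bridge network only IP addresses (or legacy --link aliases) work, which matches the behaviour described above. A minimal sketch, assuming images named eureka and vehicle and the container name discovery used in application.properties:
docker network create micro-net    # user-defined bridge; served by Docker's embedded DNS
docker run -d --name discovery --network micro-net -p 8761:8761 eureka
docker run -d --name vehicle --network micro-net -p 8080:8080 vehicle
# inside the vehicle container, http://discovery:8761/eureka now resolves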

Cannot connect to Container-optimized-os (running a spring-boot application using docker) using external ip

I have created a Google compute instance with Container-optimized-OS image.
I have configured the firewall to allow http and https.
I am using a Docker image with a Spring Boot application which connects to Cloud SQL. When I run it from the Compute Engine instance's SSH session, i.e. docker run --rm [name], the Spring Boot app starts successfully.
When I try to access the web services through the Compute Engine instance's external IP, it does not work.
I went through a different question and found that I should first try sudo wget http://localhost on the instance CLI, and if that works then everything should be good. But I am getting a connection refused message on 127.0.0.1:80.
I also tried the command to open the port on Container-Optimized OS, i.e.
sudo iptables -w -A INPUT -p tcp --dport 80 -j ACCEPT , but nothing is working.
The default port for Spring Boot is 8080 and not 80.
Run this command inside the instance to see which ports are in the LISTENING state:
sudo netstat -tulpn | grep LISTEN
You can redirect port 80 to port 8080 with this command:
sudo iptables -t nat -I PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 8080
Note: This iptables command only redirects port 80 to 8080 on network interfaces. This has no effect for localhost or 127.0.0.1.
For Google Compute Engine instances you do not need to enable ports using iptables. This is done via Google VPC firewall rules. You can use both but make sure you understand exactly what you are configuring and the side effects.
Note: Your Spring Boot application needs to listen on 0.0.0.0 and not 127.0.0.1 nor localhost. The last two are internal only addresses. 0.0.0.0 means listen on all network interfaces.
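For example, both settings can be made explicit in application.properties (a sketch; 8080 is Spring Boot's default and is shown only for clarity):
# listen on all network interfaces, not just loopback
server.address=0.0.0.0
server.port=8080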
Note: Do not use sudo in front of wget. This is not necessary.
First, confirm what port your Spring Boot application uses: 8080 or 80. This depends on what you have configured in the application.properties file. This port is referred to as ContainerPort in the steps below.
Execute docker run <image-name>:<tag>. This will run the image and show the container logs on the console. If there is something wrong with your spring-boot app, the logs will show it and the container will shut down. Press Ctrl+C to stop the container and return to the shell.
If there is no error in step 1, run docker run -d -p <HostPort>:<ContainerPort> <image-name>:<tag>. Here HostPort is any free port on your GCP host VM and ContainerPort is the port used by your spring-boot application within the container. The -d option starts your container in detached mode.
Run docker ps and make sure that the container started in step 2 is running. It may not run if there is an error - for example if the HostPort you specified is already in use.
If step 3 shows that the container is running, execute curl http://localhost:<HostPort>/<End-Point-Path>. Here End-Point-Path is a valid path to a working endpoint within the container. If the endpoint is correct you should see expected result from the spring-boot app in the console.
Navigate to Google Cloud Console -> VPC network -> Firewall rules and add a firewall rule to open HostPort on your GCP VM.
Access your endpoint via the VM's external IP with URL - http://<VM-External-IP>:<HostPort>/<End-Point-Path>
Unless there is an application issue with your spring-boot app these steps should get you going.
I was able to build the correct solution with your help (John Hanley and Cyac).
I am combining both solutions to help the next person facing this.
As told by John, Spring Boot uses port 8080 by default, not 80, and as specified by Cyac you need to set the port to 80 explicitly in the application.properties file using
server.port=80
Make sure you expose port 80 in the Docker image (see the Dockerfile sketch after these steps)
On GCP Container-Optimized OS make sure you have allowed HTTP and HTTPS traffic
Run command:
sudo iptables -w -A INPUT -p tcp --dport 80 -j ACCEPT
Run docker using:
docker run -p 80:80 SPRING_IMAGE
Where SPRING_IMAGE is the name of the docker image with spring boot build.
Test using curl http://localhost/ENDPOINT_NAME, e.g. curl http://localhost/shops/all
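A sketch of a matching Dockerfile for SPRING_IMAGE (the jar name is a placeholder; EXPOSE documents the port used in the -p 80:80 mapping above):
FROM openjdk:8
# placeholder jar name
ADD target/spring-app.jar spring-app.jar
# matches server.port=80 and the -p 80:80 mapping
EXPOSE 80
ENTRYPOINT ["java","-jar","spring-app.jar"]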

How to visit a docker service by ip address

I'm new to Docker and probably missing a lot, although I went through the basic documentation. I'm trying to deploy a simple Spring Boot API.
I've packaged my API as a docker-spring-boot .jar file, then installed Docker and pushed the image with the following commands:
sudo docker login
sudo docker tag docker-spring-boot phillalexakis/myfirstapi:01
sudo docker push phillalexakis/myfirstapi:01
Then I started the API with the docker run command:
sudo docker run -p 7777:8085 phillalexakis/myfirstapi:01
When I visit localhost:7777/hello I get the desired response.
This is my Dockerfile
FROM openjdk:8
ADD target/docker-spring-boot.jar docker-spring-boot.jar
EXPOSE 8085
ENTRYPOINT ["java","-jar","docker-spring-boot.jar"]
Based on this answered post, this is the command to get the IP address:
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' container_name_or_id
So I ran it with container_name_or_id = phillalexakis/myfirstapi:01 and I'm getting this error:
Template parsing error: template: :1:24: executing "" at <.NetworkSettings.Networks>: map has no entry for key "NetworkSettings"
If I manage somehow to get the IP, will I be able to visit it and get the same response?
This is how I have it in my mind: ip:7777/hello
You have used the image name and not the container name.
Get the container name by executing docker ps.
The container ID is the value in the first column, the container name is the value in the last column. You can use both.
Then, when you have the IP, you will be able to access your API at IP:8085/hello, not IP:7777/hello.
The port 7777 is available on the Docker Host and maps to the port 8085 on the container. If you are accessing the container directly - which you do, when you use its IP address - you need to use the port that the container exposes.
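For example (my_api is a hypothetical container name, and the bridge IP will vary):
docker ps    # container name is in the last column, container ID in the first
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' my_api
curl http://172.17.0.2:8085/hello    # container IP and container-internal port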
There is also another alternative:
You can give the container a name when you start it by specifying the --name parameter:
sudo docker run -p 7777:8085 --name spring_api phillalexakis/myfirstapi:01
Now other containers on the same Docker network can reach your API at spring_api:8085/hello. (Docker's DNS resolves container names for other containers, not for the host itself.)
You should never need to look up that IP address, and it often isn't useful.
If you're trying to call the service from outside Docker space, you've done the right thing: use the docker run -p option to publish its port to the host, and use the name of the host to access it. If you're trying to call it from another container, create a network, make sure to run both containers with a --net option pointing at that network, and they can reach each other using the other's --name as a hostname and the container-internal port the other service is listening on (-p options have no effect and aren't required).
The Docker-internal IP address just doesn't work in a variety of common situations. If you're on a different host, it will be unreachable. If your local Docker setup uses a virtual machine (Docker Machine, Docker for Mac, minikube, ...) you can't reach the IP address directly from the host. Even if it does work, when you delete and recreate the container, it's likely to change. Looking it up as you note also requires an additional (privileged) operation, which the docker run -p path avoids.
The invocation you have matches the docker inspect documentation (as @DanielHilgarth notes, make sure to run it on the container and not the image). In the specific situation where it will work (you are on the same native-Linux host as the container) you will need to use the unmapped port, e.g. http://172.17.0.2:8085/hello.

docker ports not available

I have a spring-config-sever project that I am trying to run via Docker. I can run it from the command line and my other services and browser successfully connect via:
http://localhost:8980/aservice/dev
However, if I run it via Docker, the call fails.
My config-server has a Dockerfile:
FROM openjdk:8-jdk-alpine
VOLUME /tmp
ARG JAR_FILE=build/libs/my-config-server-0.1.0.jar
ADD ${JAR_FILE} my-config-server-0.1.0.jar
EXPOSE 8980
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/my-config-server-0.1.0.jar"]
I build via:
docker build -t my-config-server .
I am running it via:
docker run my-config-server -p 8980:8980
And then I confirm it is running via
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1cecafdf99fe my-config-server "java -Djava.securit…" 14 seconds ago Up 13 seconds 8980/tcp suspicious_brahmagupta
When I run it via Docker, the browser fails with an "ERR_CONNECTION_REFUSED" and my calling services fail with:
Could not locate PropertySource: I/O error on GET request for
"http://localhost:8980/aservice/dev": Connection refused (Connection
refused);
Adding a full answer based on the comments.
First, you have to specify -p before the image name:
docker run -p 8980:8980 my-config-server
Second, just configuring localhost with the host port won't make your calling service's container talk to the config-server container. localhost inside a container refers to the container itself, not the host. You will need an appropriate Docker networking model so both containers can talk to each other.
If you are on Linux, the default is the bridge network, so you can look up the my-config-server container IP with docker inspect and use it as your config-server endpoint.
For example, if your my-config-server IP is 172.17.0.2, the endpoint is http://172.17.0.2:8980/
spring:
  cloud:
    config:
      uri: http://172.17.0.2:8980
Just follow the Docker documentation for a little more background on how networking works:
https://docs.docker.com/network/network-tutorial-standalone/
https://docs.docker.com/v17.09/engine/userguide/networking/
If you want to spin up both containers using docker-compose, you can reference each container by its service name, as sketched below. Just follow Networking in Compose.
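A minimal docker-compose.yml sketch of that setup (the calling service's name and image are assumptions based on this question):
version: "3"
services:
  config-server:
    image: my-config-server
    ports:
      - "8980:8980"
  my-service:
    image: my-service
    environment:
      # Compose's DNS resolves the service name, so no hard-coded IP is needed
      - SPRING_CLOUD_CONFIG_URI=http://config-server:8980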
I could imagine that the application only listens on localhost, i.e. 127.0.0.1.
You might want to try setting the property server.address to 0.0.0.0.
Then port 8980 should also be available externally.

Accessing AWS EC2 instances through ELB

I'm trying to set up two instances under an elastic load balancer, but cannot figure out how I'm supposed to access the instances through the load balancer.
I've set up the instances with a security group to allow access from anywhere to certain ports. I can access the instances directly using their "Public DNS" (publicdns) host name and the port PORT:
http://[publicdns]:PORT/
The load balancer contains the two instances and they are both "In Service" and it's forwarding the port (PORT) onto the same port on the instances.
However, if I request
http://[dnsname]:PORT (where dnsname is the A Record listed for the ELB)
it doesn't connect to the instance (connection times out).
Is this not the correct way to use the load balancer, or do I need to do anything to allow access to the load balancer? The only mention of security groups in relation to the load balancer is to restrict access to the instances to the load balancer only, but I don't want that. I want to be able to access them individually as well.
I'm sure there's something simple and silly that I've forgotten, not realised or done wrong :P
Cheers,
Svend.
Extra info added:
The Port Configuration for the Load Balancer looks like this (actually 3 ports):
10060 (HTTP) forwarding to 10060 (HTTP), stickiness disabled
10061 (HTTP) forwarding to 10061 (HTTP), stickiness disabled
10062 (HTTP) forwarding to 10062 (HTTP), stickiness disabled
And it's using the standard/default elb security group (amazon-elb-sg).
The instances have two security groups. One external looking like this:
22 (SSH) 0.0.0.0/0
10060 - 10061 0.0.0.0/0
10062 0.0.0.0/0
and one internal, allowing anything within the internal group to communicate on all ports:
0 - 65535 sg-xxxxxxxx (security group ID)
Not sure it makes any difference, but the instances are m1.small types running image ami-31814f58.
Something that might have relevance:
My health check used to be HTTP:PORT/ but the load balancer kept saying that the instances were "Out of Service", even though I seemed to get a 200 response on requests to that port.
I then changed it to TCP:PORT and it then said they were "In Service".
Is there something very specific that should be returned for the HTTP check, or is simply an HTTP 200 response required? And does the fact that it wasn't working hint at why the load balancing itself isn't working either?
It sounds like you have everything set up correctly. Are the ports going into the load balancer the same as on the instances, or are you forwarding the requests to other ports?
As a side note, when I configure my load balancers I generally don't like to open up my instances on any port to the general public. I only allow the load balancer to make requests to those instances. I've noticed in the past that many people will make malicious requests to the instance's IP trying to find a security breach. I've even seen people trying to brute-force logins into my Windows machines....
To create a security rule only for the load balancers, run the following commands and remove any other rules you have in the security group for the port the load balancer is using. If you're not using the command line to run these commands, just let me know which interface you're trying to use and I can try to come up with a sample that will work for you.
elb-create-lb-listeners <load-balancer> --listener "protocol=http, lb-port=<port>, instance-port=<port>"
ec2-authorize <security-group> -o amazon-elb-sg -u amazon-elb
Back to your question. Like I said, the steps you explained are correct; opening the port on the instance and forwarding the port to the instance should be enough. Maybe you need to post the full configuration of your instance's security group and the load balancer so that I can see if there is something else affecting your situation.
I went ahead and created a script that will reproduce the exact same steps that I'm using. This assumes you're using Linux as an operating system and that the AWS CLI tools are already installed. If you don't have this set up already, I recommend starting a new Amazon Linux micro instance and running the script from there, since they have everything already installed.
Download the X.509 certificate files from amazon https://aws-portal.amazon.com/gp/aws/securityCredentials
Copy the certificate files to the machine where you will run the commands
Save two variables that are required in the script
aws_account=<aws account id>
keypair="<key pair name>"
Export the certificates as environment variables
export EC2_PRIVATE_KEY=<private_Key_file>
export EC2_CERT=<cert_file>
export EC2_URL=https://ec2.us-east-1.amazonaws.com
Create the security group
ec2-create-group loadbalancer-sg -d "Loadbalancer Test group"
ec2-authorize loadbalancer-sg -o loadbalancer-sg -u $aws_account
ec2-authorize loadbalancer-sg -p 80 -s 0.0.0.0/0
Create the user-data file for the instance so that Apache is started and the index.html file is created
mkdir -p ~/temp/
echo '#! /bin/sh
yum -qy install httpd
touch /var/www/html/index.html
/etc/init.d/httpd start' > ~/temp/user-data.sh
Start the new instance and save the instanceid
instanceid=`ec2-run-instances ami-31814f58 -k "$keypair" -t t1.micro -g loadbalancer-sg -g default -z us-east-1a -f ~/temp/user-data.sh | grep INSTANCE | awk '{ print $2 }'`
Create the load balancer and attach the instance
elb-create-lb test-lb --availability-zones us-east-1a --listener "protocol=http, lb-port=80, instance-port=80"
elb-register-instances-with-lb test-lb --instances $instanceid
Wait until your instance state in the load balancer is "InService" and try to access the URLs.
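On the health-check question above: a classic ELB HTTP health check is marked healthy only when the target path returns exactly HTTP 200 within the timeout; redirects and errors count as failures, while a TCP check merely verifies that the port accepts connections. A sketch using the same ELB API tools, with a port from this question:
# expects an HTTP 200 from GET / on port 10060 within 5 seconds
elb-configure-healthcheck test-lb --target "HTTP:10060/" --interval 30 --timeout 5 --healthy-threshold 2 --unhealthy-threshold 2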
