I'm trying to deploy a Spring Boot project to Azure using Docker containers.
If I run az container logs --name ... --resource-group ... I see:
Tomcat started on port(s): 80 (http) with context path ''
az container show --name ... --resource-group ...
"ipAddress": {
"dnsNameLabel": "X",
"dnsNameLabelReusePolicy": "Unsecure",
"fqdn": "X.centralus.azurecontainer.io",
"ip": "Y",
"ports": [
{
"port": 80,
"protocol": "TCP"
}
],
"type": "Public"
},
Now if I go to X.centralus.azurecontainer.io I only see "404 page not found", and no request shows up in the Spring container's logs (logging is set to debug, so I would see if it served anything).
The Azure portal also confirms that my container is in the "Running" state.
Why is "page not found" shown instead of the request simply being forwarded to the container on the same port? Does anyone know what could be wrong here?
• I would recommend referring to the Microsoft documentation links below for troubleshooting common Azure Spring Cloud application deployment issues and Azure Container Instance deployment issues in general, as they cover the most common problems users run into:
https://learn.microsoft.com/en-us/azure/container-instances/container-instances-troubleshooting
https://learn.microsoft.com/en-us/azure/spring-cloud/troubleshoot
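To narrow down where the 404 comes from, it can also help to check whether the app answers from inside the container itself. A minimal sketch using the az CLI (the resource group and container names are placeholders, and the probe assumes the image ships a shell and wget):

    # Stream output from the running container
    az container attach --resource-group myResourceGroup --name mycontainer
    # Open a shell inside the container and probe the app locally
    az container exec --resource-group myResourceGroup --name mycontainer --exec-command "/bin/sh"
    # then, inside the container:
    wget -qO- http://localhost:80/

If the app responds locally but not via the FQDN, the problem is in the port/ingress configuration rather than in Spring Boot.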
I have recently started exploring the microservice architecture using JHipster and was trying to install and run the jhipster-registry image from Docker Hub. Docker shows that the registry is running, but I am unable to access it on port 8761.
Pulled the image with docker pull jhipster/jhipster-registry
Started the container with docker run --name jhipster-registry -d jhipster/jhipster-registry
Here's a snapshot of what docker container ls returns:
Am I missing something over here?
You are starting the JHipster Registry container, but you aren't publishing the port.
You can publish a port by passing the port flag -p 8761:8761, which will enable you to connect to it via localhost:8761 or 127.0.0.1:8761 in a browser. The corrected run command is shown below.
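Putting that together with the original command from the question:

    docker run --name jhipster-registry -d -p 8761:8761 jhipster/jhipster-registry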
You may need to configure some environment variables for the JHipster Registry to start correctly. These may depend on your generated app's options, such as the authentication type. For convenience, JHipster apps come with a docker-compose file for the registry. You can start it with docker-compose -f src/main/docker/jhipster-registry.yml up, as documented.
This is what I am trying to achieve, but I am getting a Zuul forwarding error.
Zuul GitHub
UserRegistration microservice, which will call another microservice; it also has some other APIs. GitHub Link
UserSearchDelete: the above UserRegistration microservice will call this service. GitHub Link
Eureka Server: GitHub Link
If I run the services in Spring Boot STS on localhost, then everything works fine.
But if I dockerise all the services and run them in different containers, I get a Zuul forwarding error.
Refer to the application.yml files in the GitHub repos. All the services are registered with Eureka.
Could you please help? Is it a bug, or am I doing something wrong?
GitHub issue reference: https://github.com/spring-cloud/spring-cloud-netflix/issues/3408
Getting the below errors:
"cause": {
"cause": null,
"stackTrace": [
{....
"nStatusCode": 500,
"errorCause": "GENERAL",
"message": "Forwarding error",
"localizedMessage": "Forwarding error",
"suppressed": []
}```
Verify that your Zuul paths are set up properly.
If running on a Docker network, each container must be connected to the same Docker network using the --net myCommonNet flag in the run command. Note that you will have to create this network first. The containers can then reach each other by container name, as sketched below.
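A minimal sketch of that setup (the image names are placeholders for your own builds):

    # Create the shared network once
    docker network create myCommonNet
    # Run each service attached to it
    docker run --name eureka-server -d --net myCommonNet my-eureka-image
    docker run --name user-registration -d --net myCommonNet my-user-registration-image

With both containers on myCommonNet, the Eureka/Zuul URLs in application.yml can use container names (e.g. http://eureka-server:8761) instead of localhost.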
If you are using Kubernetes as your deployment environment, you can access the different microservices using their service names. Then you configure your Zuul application.yml as:
zuul:
  routes:
    myService:
      path: /myService/**
      url: http://myService.default.svc.cluster.local:8086
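For that DNS name to resolve, there has to be a Kubernetes Service in the default namespace exposing port 8086. A minimal sketch (note that real Service names must be lowercase, so in practice the route URL would use myservice; the app label is an assumption about your pod labels):

    apiVersion: v1
    kind: Service
    metadata:
      name: myservice
    spec:
      selector:
        app: myservice
      ports:
        - port: 8086
          targetPort: 8086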
I was able to get onDomain working, but someone in the Slack channel stated that onDomain is deprecated in Traefik, though there is no mention of the deprecation in the Traefik documentation.
[edit]
There is a reference to this deprecation here: https://github.com/containous/traefik/issues/2212
I am using the Consul catalog backend with host rules for my services, being set with tags:
ex:
{
  "service": {
    "name": "application-java",
    "tags": ["application-java", "env-SUBDOMAIN", "traefik.tags=loadbalanced", "traefik.frontend.rule=Host:SUBDOMAIN.domain.com"],
    "address": "",
    "port": 8080,
    "enable_tag_override": false,
    "checks": [{
      "http": "http://localhost:8080/api/health",
      "interval": "10s"
    }]
  }
}
However, no certificate is generated for SUBDOMAIN.domain.com - requests just use the TRAEFIK DEFAULT CERT.
What is the recommended method for getting Traefik to generate certificates for Consul catalog services automatically?
It looks like this might only work with the frontEndRule option in the main config, rather than with the "traefik.frontend.rule" override tag.
I added this line:
frontEndRule = "Host:{{getTag \"traefik.subdomain\" .Attributes .ServiceName}}.{{.Domain}}"
and this Consul catalog tag:
traefik.subdomain=SUBDOMAIN
and I'm getting the Fake certificate from the LE staging server now.
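Once the staging flow works, pointing Traefik at the production ACME endpoint should yield real certificates. A minimal sketch of the relevant traefik.toml section, assuming Traefik 1.x (the email and storage values are placeholders, and option names can vary between versions):

    [acme]
    email = "admin@domain.com"
    storage = "acme.json"
    entryPoint = "https"
    # generate a certificate for each frontend Host rule
    onHostRule = true
    # the staging endpoint issues the "fake" certificates; omit caServer
    # or point it at the production endpoint for real ones
    caServer = "https://acme-v02.api.letsencrypt.org/directory"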
I am new to Kubernetes. I have just followed this guide and have a Vagrant/Kubernetes cluster: https://coreos.com/kubernetes/docs/latest/kubernetes-on-vagrant.html
I was interested in viewing the UI, so I followed the instructions here: http://kubernetes.io/docs/user-guide/ui/#deploying-the-dashboard-ui
$ kubectl proxy
Starting to serve on 127.0.0.1:8001
Upon browsing to the above IP:PORT, <h3>Unauthorized</h3> is served. So I suffix /ui to the URI, and get:
// 127.0.0.1:8001/ui redirects to http://localhost:8001/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "no endpoints available for service \"kubernetes-dashboard\"",
  "reason": "ServiceUnavailable",
  "code": 503
}
Perhaps relevant is:
$ kubectl cluster-info
Kubernetes master is running at https://172.17.4.101:443
Heapster is running at https://172.17.4.101:443/api/v1/proxy/namespaces/kube-system/services/heapster
KubeDNS is running at https://172.17.4.101:443/api/v1/proxy/namespaces/kube-system/services/kube-dns
kubernetes-dashboard is running at https://172.17.4.101:443/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard
$ kubectl get services
NAME         CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   10.3.0.1     <none>        443/TCP   36m
I saw another SO thread, Kubernetes dashboard keeps pending with message: no endpoints available for service "kubernetes-dashboard", and discovered get pods and describe pod <pod-name> --namespace=kube-system.
So I ran kubectl describe pod kubernetes-dashboard-3543765157-94gj9 --namespace="kube-system", which yielded: https://gist.github.com/cdaringe/b972bf5a95c9f2a7cb8386ef6bf2252b
Ultimately, my cluster had no nodes, so the UI service had no place to land and run! The API still attempts to proxy to it, which is why it reported "no endpoints": there was no host endpoint serving the content. I still haven't figured out why my Vagrant cluster deployed no nodes. My guess is that the workers never downloaded the kubelet and joined.
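A quick way to confirm this failure mode with standard kubectl commands (the pod name is whatever kubectl get pods reports for the dashboard):

    kubectl get nodes
    kubectl get pods --namespace=kube-system
    kubectl describe pod <dashboard-pod-name> --namespace=kube-system

An empty or all-NotReady node list means pods have nowhere to be scheduled; the dashboard pod sits in Pending, leaving its Service with no endpoints.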
I installed the Docker beta (https://beta.docker.com/) for OS X.
Next, I created a folder with this docker-compose.yml file:
web:
  image: nginx:latest
  ports:
    - "8080:80"
Then I ran this command: docker-compose up.
The container starts successfully.
But the problem is accessing my container: I don't know which IP to use.
I tried to find the IP with docker ps and docker inspect ...:
"Networks": {
"bridge": {
"IPAMConfig": null,
"Links": null,
"Aliases": null,
"NetworkID": "6342cefc977f260f0ac65cab01c223985c6a3e5d68184e98f0c2ba546cc602f9",
"EndpointID": "8bc7334eff91d159f595b7a7966a2b0659b0fe512c36ee9271b9d5a1ad39c251",
"Gateway": "172.17.0.1",
"IPAddress": "172.17.0.2",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:ac:11:00:02"
}
}
So I tried to use http://172.17.0.2:8080/, but I get an ERR_CONNECTION_TIMED_OUT error.
But if I use http://localhost:8080/, I can reach my container!
(However, localhost is already used by my native config on my Mac, so if I want to use localhost I must stop my native Apache.)
Why doesn't it work with the IP?
As @Javier-Segura mentioned, with native Docker on Linux you should be able to hit the container via its IP and port, so in your case http://172.17.0.2:80 (the 8080 port would be on the host IP).
With Docker for Mac Beta it does not appear to work the same way for the container. It changes a bit with every release, but right now it appears you cannot reach a container by IP via conventional means:
Unfortunately, due to limitations in OSX, we're unable to route traffic to containers, and from containers back to the host.
Your best bet is to use a different, non-conflicting port as mentioned. You can use different Compose config files for different environments, so as in the example above, use 8081 for development and 8080 for production, if that is the desire. You would start Compose in production via something like docker-compose -f docker-compose.yml -f production.yml up -d, where production.yml has the overrides for that environment (a sketch follows below).
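A minimal sketch of that layout, assuming the ports live only in the per-environment files (Compose merges list options like ports across -f files by appending, so keeping them out of the base file avoids publishing both):

    # docker-compose.yml (base, shared by all environments)
    web:
      image: nginx:latest

    # development.yml
    web:
      ports:
        - "8081:80"

    # production.yml
    web:
      ports:
        - "8080:80"

Start with docker-compose -f docker-compose.yml -f production.yml up -d for production, or swap in development.yml for local work.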
When you map a port (as done with "8080:80") you are basically saying: forward port 8080 on my localhost to port 80 in the container.
Then you can access your nginx via:
http://localhost:8080
http://172.17.0.2:80/ (depending on the network configuration)
If port 8080 is already used by Apache on your Mac, you can change your configuration to "8081:80" and nginx will be available on 8081.
Here is one more tip to add to the good ones already provided. You can use the -p option to include an IP mapping in addition to your port mapping. If you include no IP (something like -p 8080:80), you're telling Docker to route traffic entering all interfaces on port 8080 to your Docker-internal network (172.17.0.2 in your case). This includes, but is not limited to, localhost. If you'd like this mapping to apply to only a certain IP, for example an IP dynamically assigned to your workstation through DHCP, you can specify the IP in the option as -p 10.11.12.13:8080:80 (where 10.11.12.13 is a fictional IP). Then localhost or any other interface would not be routed.
Likewise, you could use the option to restrict to localhost with -p 127.0.0.1:8080:80 so that traffic on other interfaces is not routed to your Docker container's 172.17.0.2 interface. Both variants are spelled out as run commands below.
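The same flags as full commands, using nginx as a stand-in image:

    docker run -d -p 8080:80 nginx:latest                # all interfaces (default)
    docker run -d -p 10.11.12.13:8080:80 nginx:latest    # one specific interface (fictional IP)
    docker run -d -p 127.0.0.1:8080:80 nginx:latest      # localhost only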
@pglezen is right. Providing the full IP within the compose file solves the issue.
Container IP addresses generated by docker-compose do not work (currently) on Mac OS X.
Providing a specific IP within the compose file allowed the container to be reached:
nginx:
  image: nginx:latest
  ports:
    - "127.0.0.1:80:80"
  links:
    - php-fpm
docker-compose still assigned a generic 172.* IP address to the container, which was not accessible. But the hardcoded 127.0.0.1 mapping worked and returned the correct container response.