I was testing a Kubernetes setup on AWS EKS with Fargate and ran into an issue during container startup.
It is a Java application using Hibernate. It fails to connect to the MySQL server on startup with a "Communications link failure" error. The database server is running properly on AWS RDS, and the Docker image runs as expected locally.
I suspect this may be caused by the MySQL port 3306 not being configured properly on the container/node/service. Can you spot what the issue is? Please don't hesitate to point out any misconfiguration, thank you very much.
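For reference, a quick way to rule out basic network reachability (before suspecting the driver or Hibernate) is to probe the RDS endpoint on 3306 from a throwaway pod in the same namespace; the endpoint below is only a placeholder:
kubectl run net-test --rm -it --restart=Never -n test \
  --image=busybox:1.35 \
  -- nc -zv -w 5 <rds-endpoint>.rds.amazonaws.com 3306
If that times out, it points to networking/security groups rather than the application.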
Pod startup log
/\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
\\/ ___)| |_)| | | | | || (_| | ) ) ) )
' |____| .__|_| |_|_| |_\__, | / / / /
=========|_|==============|___/=/_/_/_/
:: Spring Boot :: (v2.3.1.RELEASE)
2020-08-13 11:39:39.930 INFO 1 --- [ main] com.example.demo.DemoApplication : The following profiles are active: prod
2020-08-13 11:39:58.536 INFO 1 --- [ main] .s.d.r.c.RepositoryConfigurationDelegate : Bootstrapping Spring Data JPA repositories in DEFERRED mode.
...
......
2020-08-13 11:41:27.606 ERROR 1 --- [ task-1] com.zaxxer.hikari.pool.HikariPool : HikariPool-1 - Exception during pool initialization.
com.mysql.cj.jdbc.exceptions.CommunicationsException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
at com.mysql.cj.jdbc.exceptions.SQLError.createCommunicationsException(SQLError.java:174) ~[mysql-connector-java-8.0.20.jar!/:8.0.20]
at com.mysql.cj.jdbc.exceptions.SQLExceptionsMapping.translateException(SQLExceptionsMapping.java:64) ~[mysql-connector-java-8.0.20.jar!/:8.0.20]
at com.mysql.cj.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:836) ~[mysql-connector-java-8.0.20.jar!/:8.0.20]
at com.mysql.cj.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:456) ~[mysql-connector-java-8.0.20.jar!/:8.0.20]
at com.mysql.cj.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:246) ~[mysql-connector-java-8.0.20.jar!/:8.0.20]
at com.mysql.cj.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:197) ~[mysql-connector-java-8.0.20.jar!/:8.0.20]
at com.zaxxer.hikari.util.DriverDataSource.getConnection(DriverDataSource.java:138) ~[HikariCP-3.4.5.jar!/:na]
at com.zaxxer.hikari.pool.PoolBase.newConnection(PoolBase.java:358) ~[HikariCP-3.4.5.jar!/:na]
at com.zaxxer.hikari.pool.PoolBase.newPoolEntry(PoolBase.java:206) ~[HikariCP-3.4.5.jar!/:na]
at com.zaxxer.hikari.pool.HikariPool.createPoolEntry(HikariPool.java:477) [HikariCP-3.4.5.jar!/:na]
at com.zaxxer.hikari.pool.HikariPool.checkFailFast(HikariPool.java:560) [HikariCP-3.4.5.jar!/:na]
at com.zaxxer.hikari.pool.HikariPool.<init>(HikariPool.java:115) [HikariCP-3.4.5.jar!/:na]
at com.zaxxer.hikari.HikariDataSource.getConnection(HikariDataSource.java:112) [HikariCP-3.4.5.jar!/:na]
at org.hibernate.engine.jdbc.connections.internal.DatasourceConnectionProviderImpl.getConnection(DatasourceConnectionProviderImpl.java:122) [hibernate-core-5.4.17.Final.jar!/:5.4.17.Final]
at org.hibernate.engine.jdbc.env.internal.JdbcEnvironmentInitiator$ConnectionProviderJdbcConnectionAccess.obtainConnection(JdbcEnvironmentInitiator.java:180) [hibernate-core-5.4.17.Final.jar!/:5.4.17.Final]
at org.hibernate.engine.jdbc.env.internal.JdbcEnvironmentInitiator.initiateService(JdbcEnvironmentInitiator.java:68) [hibernate-core-5.4.17.Final.jar!/:5.4.17.Final]
at org.hibernate.engine.jdbc.env.internal.JdbcEnvironmentInitiator.initiateService(JdbcEnvironmentInitiator.java:35) [hibernate-core-5.4.17.Final.jar!/:5.4.17.Final]
at org.hibernate.boot.registry.internal.StandardServiceRegistryImpl.initiateService(StandardServiceRegistryImpl.java:101) [hibernate-core-5.4.17.Final.jar!/:5.4.17.Final]
at org.hibernate.service.internal.AbstractServiceRegistryImpl.createService(AbstractServiceRegistryImpl.java:263) [hibernate-core-5.4.17.Final.jar!/:5.4.17.Final]
at org.hibernate.service.internal.AbstractServiceRegistryImpl.initializeService(AbstractServiceRegistryImpl.java:237) [hibernate-core-5.4.17.Final.jar!/:5.4.17.Final]
at org.hibernate.service.internal.AbstractServiceRegistryImpl.getService(AbstractServiceRegistryImpl.java:214) [hibernate-core-5.4.17.Final.jar!/:5.4.17.Final]
at org.hibernate.id.factory.internal.DefaultIdentifierGeneratorFactory.injectServices(DefaultIdentifierGeneratorFactory.java:152) [hibernate-core-5.4.17.Final.jar!/:5.4.17.Final]
at org.hibernate.service.internal.AbstractServiceRegistryImpl.injectDependencies(AbstractServiceRegistryImpl.java:286) [hibernate-core-5.4.17.Final.jar!/:5.4.17.Final]
at org.hibernate.service.internal.AbstractServiceRegistryImpl.initializeService(AbstractServiceRegistryImpl.java:243) [hibernate-core-5.4.17.Final.jar!/:5.4.17.Final]
at org.hibernate.service.internal.AbstractServiceRegistryImpl.getService(AbstractServiceRegistryImpl.java:214) [hibernate-core-5.4.17.Final.jar!/:5.4.17.Final]
at org.hibernate.boot.internal.InFlightMetadataCollectorImpl.<init>(InFlightMetadataCollectorImpl.java:176) [hibernate-core-5.4.17.Final.jar!/:5.4.17.Final]
at org.hibernate.boot.model.process.spi.MetadataBuildingProcess.complete(MetadataBuildingProcess.java:118) [hibernate-core-5.4.17.Final.jar!/:5.4.17.Final]
at org.hibernate.jpa.boot.internal.EntityManagerFactoryBuilderImpl.metadata(EntityManagerFactoryBuilderImpl.java:1224) [hibernate-core-5.4.17.Final.jar!/:5.4.17.Final]
at org.hibernate.jpa.boot.internal.EntityManagerFactoryBuilderImpl.build(EntityManagerFactoryBuilderImpl.java:1255) [hibernate-core-5.4.17.Final.jar!/:5.4.17.Final]
at org.springframework.orm.jpa.vendor.SpringHibernateJpaPersistenceProvider.createContainerEntityManagerFactory(SpringHibernateJpaPersistenceProvider.java:58) [spring-orm-5.2.7.RELEASE.jar!/:5.2.7.RELEASE]
at org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean.createNativeEntityManagerFactory(LocalContainerEntityManagerFactoryBean.java:365) [spring-orm-5.2.7.RELEASE.jar!/:5.2.7.RELEASE]
at org.springframework.orm.jpa.AbstractEntityManagerFactoryBean.buildNativeEntityManagerFactory(AbstractEntityManagerFactoryBean.java:391) [spring-orm-5.2.7.RELEASE.jar!/:5.2.7.RELEASE]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[na:1.8.0_212]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[na:1.8.0_212]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[na:1.8.0_212]
at java.lang.Thread.run(Thread.java:748) ~[na:1.8.0_212]
...
......
Service
patricks-mbp:test patrick$ kubectl get services -n test
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
test NodePort 10.100.160.22 <none> 80:31176/TCP 4h57m
service.yaml
kind: Service
apiVersion: v1
metadata:
  name: test
  namespace: test
spec:
  selector:
    app: test
  type: NodePort
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
Deployment
patricks-mbp:test patrick$ kubectl get deployments -n test
NAME READY UP-TO-DATE AVAILABLE AGE
test 0/1 1 0 4h42m
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test
  namespace: test
  labels:
    app: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test
  strategy: {}
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
        - name: test
          image: <image location>
          ports:
            - containerPort: 8080
          resources: {}
Pods
patricks-mbp:test patrick$ kubectl get pods -n test
NAME READY STATUS RESTARTS AGE
test-8648f7959-4gdvm 1/1 Running 6 21m
patricks-mbp:test patrick$ kubectl describe pod test-8648f7959-4gdvm -n test
Name: test-8648f7959-4gdvm
Namespace: test
Priority: 2000001000
Priority Class Name: system-node-critical
Node: fargate-ip-192-168-123-170.ec2.internal/192.168.123.170
Start Time: Thu, 13 Aug 2020 21:29:07 +1000
Labels: app=test
eks.amazonaws.com/fargate-profile=fp-1a0330f1
pod-template-hash=8648f7959
Annotations: kubernetes.io/psp: eks.privileged
Status: Running
IP: 192.168.123.170
IPs:
IP: 192.168.123.170
Controlled By: ReplicaSet/test-8648f7959
Containers:
test:
Container ID: containerd://a1517a13d66274e1d7f8efcea950d0fe3d944d1f7208d057494e208223a895a7
Image: <image location>
Image ID: <image ID>
Port: 8080/TCP
Host Port: 0/TCP
State: Terminated
Reason: Error
Exit Code: 1
Started: Thu, 13 Aug 2020 21:48:07 +1000
Finished: Thu, 13 Aug 2020 21:50:28 +1000
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Thu, 13 Aug 2020 21:43:04 +1000
Finished: Thu, 13 Aug 2020 21:45:22 +1000
Ready: False
Restart Count: 6
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-5hdzd (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-5hdzd:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-5hdzd
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> fargate-scheduler Successfully assigned test/test-8648f7959-4gdvm to fargate-ip-192-168-123-170.ec2.internal
Normal Pulling 21m kubelet, fargate-ip-192-168-123-170.ec2.internal Pulling image "174304792831.dkr.ecr.us-east-1.amazonaws.com/test:v2"
Normal Pulled 21m kubelet, fargate-ip-192-168-123-170.ec2.internal Successfully pulled image "174304792831.dkr.ecr.us-east-1.amazonaws.com/test:v2"
Normal Created 11m (x5 over 21m) kubelet, fargate-ip-192-168-123-170.ec2.internal Created container test
Normal Started 11m (x5 over 21m) kubelet, fargate-ip-192-168-123-170.ec2.internal Started container test
Normal Pulled 11m (x4 over 19m) kubelet, fargate-ip-192-168-123-170.ec2.internal Container image "174304792831.dkr.ecr.us-east-1.amazonaws.com/test:v2" already present on machine
Warning BackOff 11s (x27 over 17m) kubelet, fargate-ip-192-168-123-170.ec2.internal Back-off restarting failed container
Ingress
patricks-mbp:~ patrick$ kubectl describe ing -n test test
Name: test
Namespace: test
Address: <ALB public address>
Default backend: default-http-backend:80 (<none>)
Rules:
Host Path Backends
---- ---- --------
*
/ test:80 (192.168.72.15:8080)
Annotations:
alb.ingress.kubernetes.io/scheme: internet-facing
alb.ingress.kubernetes.io/target-type: ip
kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{"alb.ingress.kubernetes.io/scheme":"internet-facing","alb.ingress.kubernetes.io/target-type":"ip","kubernetes.io/ingress.class":"alb"},"name":"test","namespace":"test"},"spec":{"rules":[{"http":{"paths":[{"backend":{"serviceName":"test","servicePort":80},"path":"/"}]}}]}}
kubernetes.io/ingress.class: alb
Events: <none>
ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test
  namespace: test
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/scheme: internet-facing
spec:
  rules:
    - http:
        paths:
          - path: /
            backend:
              serviceName: test
              servicePort: 80
AWS ALB ingress controller
Permission for the ALB ingress controller to communicate with the cluster
-> similar to https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/v1.1.8/docs/examples/rbac-role.yaml
Creation of the ingress controller, which uses an ALB
-> similar to https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/v1.1.8/docs/examples/alb-ingress-controller.yaml
To allow a pod running on Fargate to connect to RDS, you need to open the security group:
Find the security group ID of your Fargate service.
In your RDS security group's inbound rules, instead of putting a CIDR in the source field, put the Fargate service's security group ID, with port 3306.
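For example, the inbound rule can be added with the AWS CLI roughly like this (both security group IDs below are placeholders; the first is the RDS instance's group, the second is the group attached to the Fargate pods):
aws ec2 authorize-security-group-ingress \
  --group-id sg-0rds000000000000 \
  --protocol tcp \
  --port 3306 \
  --source-group sg-0fargate0000000000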
Related
I'm using eureka-client, eureka-server, spring-cloud-starter-gateway and Kafka to build my API. With microservices, it works like this: the command service sends a request to Kafka to be processed; Kafka is installed on my machine and is not in a container. Command example:
@Autowired
private KafkaTemplate<String, ContactAdmSaveDto> kafkaTemplate;

@Override
public String create(ContactAdmSaveDto data) {
    kafkaTemplate.send("contact-adm-insert", data);
    return "Cadastrado com sucesso!";
}
application.properties of the command producer:
spring.kafka.producer.bootstrap-servers=springboot:9092
spring.kafka.producer.key-serializer=org.apache.kafka.common.serialization.StringSerializer
spring.kafka.producer.value-serializer=org.springframework.kafka.support.serializer.JsonSerializer
server.port = 30006
spring.application.name = contact-adm-command
eureka.client.serviceUrl.defaultZone = http://springboot:30002/eureka
eureka.instance.hostname=springboot
eureka.instance.prefer-ip-address=true
Example of the Kafka consumer:
@KafkaListener(topics = {"contact-adm-insert"}, groupId = "contact-adm")
public void consume(String record) {
    try {
        ObjectMapper mapper = new ObjectMapper();
        ContactAdm data = mapper.readValue(record, ContactAdm.class);
        ContactAdm cat = new ContactAdm();
        cat.setCell_phone(data.getCell_phone());
        cat.setEmail(data.getEmail());
        cat.setTelephone(data.getTelephone());
        ContactAdm c = contactAdmRepository.save(cat);
        ContactAdmMongo catm = new ContactAdmMongo();
        catm.setCell_phone(data.getCell_phone());
        catm.setEmail(data.getEmail());
        catm.setTelephone(data.getTelephone());
        catm.setContact_id(c.getContact_id());
        contactAdmRepositoryMongo.save(catm);
    } catch (Exception e) {
        logger.info(e.toString());
    }
}
application.properties of the Kafka consumer:
server.port = 30005
spring.kafka.consumer.bootstrap-servers=springboot:9092
spring.kafka.consumer.auto-offset-reset=earliest
spring.kafka.consumer.key-deserializer=org.apache.kafka.common.serialization.StringDeserializer
spring.kafka.consumer.value-deserializer=org.apache.kafka.common.serialization.StringDeserializer
spring.kafka.consumer.group-id=contact-adm
springboot is a hostname that points to my machine's IP.
Below is my gateway configuration. Note that the Kafka services are not registered in the gateway; they only run when they are called by the command services:
server.port=30000
spring.application.name=routing
eureka.client.serviceUrl.defaultZone=http://springboot:30002/eureka/
eureka.instance.hostname=springboot
eureka.instance.prefer-ip-address=true
#spring.cloud.gateway.discovery.locator.enabled=true
#spring.main.web-application-type=reactive
spring.cloud.gateway.enabled=true
spring.cloud.gateway.routes[0].id=user
spring.cloud.gateway.routes[0].uri=lb://USER
spring.cloud.gateway.routes[0].predicates=Path=/user/**
spring.cloud.gateway.routes[1].id=testes
spring.cloud.gateway.routes[1].uri=lb://TESTES
spring.cloud.gateway.routes[1].predicates=Path=/testes/**
spring.cloud.gateway.routes[2].id=user-command
spring.cloud.gateway.routes[2].uri=lb://USER-COMMAND
spring.cloud.gateway.routes[2].predicates=Path=/user-command/**
spring.cloud.gateway.routes[3].id=category-product-command
spring.cloud.gateway.routes[3].uri=lb://CATEGORY-PRODUCT-COMMAND
spring.cloud.gateway.routes[3].predicates=Path=/category-product-command/**
spring.cloud.gateway.routes[4].id=category-product-query
spring.cloud.gateway.routes[4].uri=lb://CATEGORY-PRODUCT-QUERY
spring.cloud.gateway.routes[4].predicates=Path=/category-product-query/**
spring.cloud.gateway.routes[5].id=cart-purchase-command
spring.cloud.gateway.routes[5].uri=lb://CART-PURCHASE-COMMAND
spring.cloud.gateway.routes[5].predicates=Path=/cart-purchase-command/**
spring.cloud.gateway.routes[6].id=cart-purchase-query
spring.cloud.gateway.routes[6].uri=lb://CART-PURCHASE-QUERY
spring.cloud.gateway.routes[6].predicates=Path=/cart-purchase-query/**
spring.cloud.gateway.routes[7].id=contact-adm-command
spring.cloud.gateway.routes[7].uri=lb://CONTACT-ADM-COMMAND
spring.cloud.gateway.routes[7].predicates=Path=/contact-adm-command/**
spring.cloud.gateway.routes[8].id=contact-adm-query
spring.cloud.gateway.routes[8].uri=lb://CONTACT-ADM-QUERY
spring.cloud.gateway.routes[8].predicates=Path=/contact-adm-query/**
This all works fine, but I want to put it on Kubernetes, so I created the service images with the command mvn spring-boot:build-image and with the following Dockerfile:
FROM openjdk:17-alpine
EXPOSE 30000
ADD src/main/resources/routing/public.pem src/main/resources/routing/public.pem
ADD /target/routes-0.0.1-SNAPSHOT.jar routes-0.0.1-SNAPSHOT.jar
ENTRYPOINT ["java","-jar","routes-0.0.1-SNAPSHOT.jar"]
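For reference, pushing one of these images to Docker Hub looks roughly like this (shown for the routes image; the tag is the one used elsewhere in this post):
docker build -t rafaelribeirosouza86/shopping:routes .
docker push rafaelribeirosouza86/shopping:routes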
I generated all the service images and pushed them to Docker Hub to be pulled by Kubernetes, with the following deployments:
apiVersion: apps/v1
kind: Deployment
metadata:
name: cart-purchase-kafka
labels:
app: cart-purchase-kafka
spec:
replicas: 1
# strategy:
# rollingUpdate:
# maxUnavailable: 0
# maxSurge: 1
selector:
matchLabels:
run: cart-purchase-kafka
template:
metadata:
labels:
run: cart-purchase-kafka
spec:
containers:
- name: cart-purchase-kafka
image: rafaelribeirosouza86/shopping:cart-purchase-kafka
imagePullPolicy: Always
ports:
- containerPort: 30011
protocol: TCP
imagePullSecrets:
- name: regcred
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: cart-purchase-command
labels:
app: cart-purchase-command
spec:
replicas: 1
# strategy:
# rollingUpdate:
# maxUnavailable: 0
# maxSurge: 1
selector:
matchLabels:
run: cart-purchase-command
template:
metadata:
labels:
run: cart-purchase-command
spec:
containers:
- name: cart-purchase-command
image: rafaelribeirosouza86/shopping:cart-purchase-command
imagePullPolicy: Always
ports:
- containerPort: 30012
protocol: TCP
imagePullSecrets:
- name: regcred
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: cart-purchase-query
labels:
app: cart-purchase-query
spec:
replicas: 1
# strategy:
# rollingUpdate:
# maxUnavailable: 0
# maxSurge: 1
selector:
matchLabels:
run: cart-purchase-query
template:
metadata:
labels:
run: cart-purchase-query
spec:
containers:
- name: cart-purchase-query
image: rafaelribeirosouza86/shopping:cart-purchase-query
imagePullPolicy: Always
ports:
- containerPort: 30010
protocol: TCP
imagePullSecrets:
- name: regcred
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: user
labels:
app: user
spec:
replicas: 1
# strategy:
# rollingUpdate:
# maxUnavailable: 0
# maxSurge: 1
selector:
matchLabels:
run: user
template:
metadata:
labels:
run: user
spec:
containers:
- name: user
image: rafaelribeirosouza86/shopping:user
imagePullPolicy: Always
ports:
- containerPort: 30015
protocol: TCP
imagePullSecrets:
- name: regcred
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: user-command
labels:
app: user-command
spec:
replicas: 1
# strategy:
# rollingUpdate:
# maxUnavailable: 0
# maxSurge: 1
selector:
matchLabels:
run: user-command
template:
metadata:
labels:
run: user-command
spec:
containers:
- name: user-command
image: rafaelribeirosouza86/shopping:user-command
imagePullPolicy: Always
ports:
- containerPort: 30004
protocol: TCP
imagePullSecrets:
- name: regcred
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: user-command-insert
labels:
app: user-command-insert
spec:
replicas: 1
# strategy:
# rollingUpdate:
# maxUnavailable: 0
# maxSurge: 1
selector:
matchLabels:
run: user-command-insert
template:
metadata:
labels:
run: user-command-insert
spec:
containers:
- name: user-command-insert
image: rafaelribeirosouza86/shopping:user-command-insert
imagePullPolicy: Always
ports:
- containerPort: 30003
protocol: TCP
imagePullSecrets:
- name: regcred
The big problem so far is that when I run everything without Kubernetes it works fine, but when I create the pods I get errors like:
NAME READY STATUS RESTARTS AGE
category-product-command-565f758d5d-4wwnf 0/1 Evicted 0 10m
category-product-command-565f758d5d-54pd5 0/1 Error 0 29m
category-product-command-565f758d5d-hmb8k 0/1 Pending 0 2m47s
category-product-command-565f758d5d-k6gmf 0/1 Evicted 0 10m
category-product-command-565f758d5d-lkd25 0/1 Error 0 41m
category-product-command-565f758d5d-ltbnl 0/1 Evicted 0 10m
category-product-command-565f758d5d-m7wwx 0/1 ContainerStatusUnknown 1 35m
category-product-command-565f758d5d-p42td 0/1 Error 0 54m
category-product-command-565f758d5d-pmfmh 0/1 Error 0 10m
category-product-command-565f758d5d-qbthd 0/1 Evicted 0 10m
category-product-command-565f758d5d-qf969 0/1 Evicted 0 10m
category-product-command-565f758d5d-twjvq 0/1 Evicted 0 10m
category-product-command-565f758d5d-vfrwq 0/1 ContainerStatusUnknown 1 22m
category-product-command-565f758d5d-xftpq 0/1 Error 0 47m
category-product-command-565f758d5d-xsg47 0/1 Evicted 0 10m
category-product-kafka-67d4fdbf76-262n8 0/1 Evicted 0 35m
category-product-kafka-67d4fdbf76-2klh8 0/1 Evicted 0 23m
category-product-kafka-67d4fdbf76-2mgp8 0/1 Evicted 0 23m
category-product-kafka-67d4fdbf76-2rlmm 0/1 Evicted 0 23m
category-product-kafka-67d4fdbf76-2z57p 0/1 Evicted 0 35m
category-product-kafka-67d4fdbf76-424pj 0/1 Evicted 0 23m
category-product-kafka-67d4fdbf76-4cnp2 0/1 Evicted 0 23m
category-product-kafka-67d4fdbf76-4v586 0/1 Evicted 0 23m
category-product-kafka-67d4fdbf76-5d7sg 0/1 Evicted 0 35m
category-product-kafka-67d4fdbf76-5mndm 0/1 Evicted 0 23m
category-product-kafka-67d4fdbf76-5rcgg 0/1 Evicted 0 23m
category-product-kafka-67d4fdbf76-5rlz7 0/1 Evicted 0 23m
category-product-kafka-67d4fdbf76-69w7h 0/1 Evicted 0 23m
category-product-kafka-67d4fdbf76-6czbj 0/1 Evicted 0 36m
category-product-kafka-67d4fdbf76-6rtvb 0/1 Evicted 0 35m
category-product-kafka-67d4fdbf76-6t4km 0/1 Evicted 0 23m
category-product-kafka-67d4fdbf76-7pkd7 0/1 Evicted 0 23m
category-product-kafka-67d4fdbf76-99z2b 0/1 Evicted 0 23m
category-product-kafka-67d4fdbf76-9lfqq 0/1 Error 1 (42m ago) 53m
category-product-kafka-67d4fdbf76-9nrm4 0/1 Evicted 0 23m
category-product-kafka-67d4fdbf76-bzx52 0/1 Evicted 0 35m
category-product-kafka-67d4fdbf76-d62b5 0/1 Evicted 0 23m
category-product-kafka-67d4fdbf76-dbhp4 0/1 Evicted 0 23m
category-product-kafka-67d4fdbf76-dscdk 0/1 Evicted 0 35m
category-product-kafka-67d4fdbf76-fnjdd 0/1 Evicted 0 23m
category-product-kafka-67d4fdbf76-gnbnp 0/1 Evicted 0 35m
category-product-kafka-67d4fdbf76-gsrs8 0/1 Evicted 0 35m
category-product-kafka-67d4fdbf76-h69px 0/1 Evicted 0 35m
category-product-kafka-67d4fdbf76-hcljj 0/1 Evicted 0 23m
category-product-kafka-67d4fdbf76-hmxmk 0/1 Evicted 0 35m
category-product-kafka-67d4fdbf76-hqngl 0/1 Evicted 0 35m
category-product-kafka-67d4fdbf76-j2bx2 0/1 Evicted 0 35m
category-product-kafka-67d4fdbf76-jjpkl 0/1 ContainerStatusUnknown 1 35m
category-product-kafka-67d4fdbf76-jqzlr 0/1 Evicted 0 23m
category-product-kafka-67d4fdbf76-kbc25 0/1 Evicted 0 23m
category-product-kafka-67d4fdbf76-khljn 0/1 Evicted 0 23m
category-product-kafka-67d4fdbf76-kqht4 0/1 Error 0 54m
category-product-kafka-67d4fdbf76-kqxf5 0/1 Evicted 0 23m
category-product-kafka-67d4fdbf76-l52p9 0/1 Evicted 0 23m
category-product-kafka-67d4fdbf76-l8x4p 0/1 Evicted 0 35m
category-product-kafka-67d4fdbf76-ljhrm 0/1 Evicted 0 23m
category-product-kafka-67d4fdbf76-m6l8c 0/1 Evicted 0 35m
category-product-kafka-67d4fdbf76-n49br 0/1 Evicted 0 23m
category-product-kafka-67d4fdbf76-q4z79 0/1 Evicted 0 23m
category-product-kafka-67d4fdbf76-qgqch 0/1 ContainerStatusUnknown 1 15m
category-product-kafka-67d4fdbf76-qjrf8 0/1 Evicted 0 35m
category-product-kafka-67d4fdbf76-qntzw 0/1 Evicted 0 23m
category-product-kafka-67d4fdbf76-qv7s9 0/1 Evicted 0 23m
category-product-kafka-67d4fdbf76-rkhq6 0/1 Evicted 0 35m
category-product-kafka-67d4fdbf76-rl2g6 0/1 Evicted 0 35m
category-product-kafka-67d4fdbf76-rl7dl 0/1 Evicted 0 35m
category-product-kafka-67d4fdbf76-sbpw6 0/1 Evicted 0 35m
category-product-kafka-67d4fdbf76-slww4 0/1 Evicted 0 23m
category-product-kafka-67d4fdbf76-ssm24 0/1 Evicted 0 23m
category-product-kafka-67d4fdbf76-txtjw 0/1 Evicted 0 23m
category-product-kafka-67d4fdbf76-v9976 0/1 Evicted 0 23m
category-product-kafka-67d4fdbf76-vl9gp 0/1 Evicted 0 23m
category-product-kafka-67d4fdbf76-vns2z 0/1 Evicted 0 23m
category-product-kafka-67d4fdbf76-vqcz9 0/1 Evicted 0 35m
category-product-kafka-67d4fdbf76-vst56 0/1 Evicted 0 23m
category-product-kafka-67d4fdbf76-w5hpg 1/1 Running 0 8m53s
category-product-kafka-67d4fdbf76-w8tbb 0/1 Evicted 0 35m
category-product-kafka-67d4fdbf76-wpkwb 0/1 Evicted 0 23m
category-product-kafka-67d4fdbf76-wvmtt 0/1 Evicted 0 23m
category-product-kafka-67d4fdbf76-xp5t6 0/1 Evicted 0 23m
category-product-kafka-67d4fdbf76-xtqwp 0/1 Evicted 0 23m
category-product-kafka-67d4fdbf76-z56s4 0/1 Error 0 23m
category-product-query-58897978b9-7csd7 1/1 Running 0 54m
contact-adm-command-56bb8f75db-9pvvz 1/1 Running 0 54m
contact-adm-kafka-858d968996-tgqkn 1/1 Running 0 54m
contact-adm-query-6b6b7487bb-2mqp6 1/1 Running 0 54m
gateway-7cbcb7bc4c-48b42 0/1 Pending 0 3m35s
gateway-7cbcb7bc4c-672mb 0/1 Evicted 0 42m
gateway-7cbcb7bc4c-d9hxn 0/1 ContainerStatusUnknown 1 42m
gateway-7cbcb7bc4c-g97cs 0/1 Error 0 16m
gateway-7cbcb7bc4c-hpntm 0/1 Evicted 0 42m
gateway-7cbcb7bc4c-js7nc 0/1 Evicted 0 42m
gateway-7cbcb7bc4c-lctsk 0/1 Error 0 30m
gateway-7cbcb7bc4c-stwbk 0/1 Evicted 0 42m
gateway-7cbcb7bc4c-zl4rb 0/1 Error 0 54m
routes-cb9ffbb47-tmmw9 1/1 Running 0
An error from inside the container:
Caused by: org.apache.hc.core5.http.NoHttpResponseException: springboot:30002 failed to respond
Does anyone have any idea what the problem might be?
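Since so many of the pods above end up Evicted or in Error, a first step that may help is to capture the reason the kubelet gives for them, for example (pod name taken from the listing above):
kubectl describe pod category-product-command-565f758d5d-4wwnf
kubectl get events --sort-by='.lastTimestamp'
Evictions in particular usually come with a node memory or disk pressure message in those events.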
[SOLVED]
I made it work with StatefulSets; here is how my deployments look now:
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: gateway
spec:
serviceName: "gateway"
podManagementPolicy: "Parallel"
replicas: 1
selector:
matchLabels:
app: gateway
template:
metadata:
labels:
app: gateway
spec:
containers:
- name: gateway
#command: ["java", "-Djava.security.egd=file:/dev/./urandom", "-jar","gateway-0.0.1-SNAPSHOT.jar"]
image: rafaelribeirosouza86/shopping:gateway
imagePullPolicy: Always
ports:
- containerPort: 30002
imagePullSecrets:
- name: regcred
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: routes
spec:
serviceName: "routes"
podManagementPolicy: "Parallel"
replicas: 1
selector:
matchLabels:
app: routes
template:
metadata:
labels:
app: routes
spec:
containers:
- name: routes
#command: ["java", "-Djava.security.egd=file:/dev/./urandom", "-jar","routes-0.0.1-SNAPSHOT.jar"]
image: rafaelribeirosouza86/shopping:routes
imagePullPolicy: Always
ports:
- containerPort: 30000
imagePullSecrets:
- name: regcred
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: user
spec:
serviceName: "user"
podManagementPolicy: "Parallel"
replicas: 1
selector:
matchLabels:
app: user
template:
metadata:
labels:
app: user
spec:
containers:
- name: user
#command: ["java", "-Djava.security.egd=file:/dev/./urandom", "-jar","user-0.0.1-SNAPSHOT.jar"]
image: rafaelribeirosouza86/shopping:user
imagePullPolicy: Always
ports:
- containerPort: 30015
imagePullSecrets:
- name: regcred
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: user-command
spec:
serviceName: "user-command"
podManagementPolicy: "Parallel"
replicas: 1
selector:
matchLabels:
app: user-command
template:
metadata:
labels:
app: user-command
spec:
containers:
- name: user-command
#command: ["java", "-Djava.security.egd=file:/dev/./urandom", "-jar","user-command-0.0.1-SNAPSHOT.jar"]
image: rafaelribeirosouza86/shopping:user-command
imagePullPolicy: Always
ports:
- containerPort: 30004
imagePullSecrets:
- name: regcred
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: user-command-insert
spec:
serviceName: "user-command-insert"
podManagementPolicy: "Parallel"
replicas: 1
selector:
matchLabels:
app: user-command-insert
template:
metadata:
labels:
app: user-command-insert
spec:
containers:
- name: user-command-insert
#command: ["java", "-Djava.security.egd=file:/dev/./urandom", "-jar","user-command-insert-0.0.1-SNAPSHOT.jar"]
image: rafaelribeirosouza86/shopping:user-command-insert
imagePullPolicy: Always
ports:
- containerPort: 30003
imagePullSecrets:
- name: regcred
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: category-product-kafka
spec:
serviceName: "category-product-kafka"
podManagementPolicy: "Parallel"
replicas: 1
selector:
matchLabels:
app: category-product-kafka
template:
metadata:
labels:
app: category-product-kafka
spec:
containers:
- name: category-product-kafka
#command: ["java", "-Djava.security.egd=file:/dev/./urandom", "-jar","category-product-kafka-0.0.1-SNAPSHOT.jar"]
image: rafaelribeirosouza86/shopping:category-product-kafka
imagePullPolicy: Always
ports:
- containerPort: 30008
imagePullSecrets:
- name: regcred
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: category-product-command
spec:
serviceName: "category-product-command"
podManagementPolicy: "Parallel"
replicas: 1
selector:
matchLabels:
app: category-product-command
template:
metadata:
labels:
app: category-product-command
spec:
containers:
- name: category-product-command
#command: ["java", "-Djava.security.egd=file:/dev/./urandom", "-jar","category-product-command-0.0.1-SNAPSHOT.jar"]
image: rafaelribeirosouza86/shopping:category-product-command
imagePullPolicy: Always
ports:
- containerPort: 31533
imagePullSecrets:
- name: regcred
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: category-product-query
spec:
serviceName: "category-product-query"
podManagementPolicy: "Parallel"
replicas: 1
selector:
matchLabels:
app: category-product-query
template:
metadata:
labels:
app: category-product-query
spec:
containers:
- name: category-product-query
#command: ["java", "-Djava.security.egd=file:/dev/./urandom", "-jar","category-product-query-0.0.1-SNAPSHOT.jar"]
image: rafaelribeirosouza86/shopping:category-product-query
imagePullPolicy: Always
ports:
- containerPort: 30007
imagePullSecrets:
- name: regcred
Services:
apiVersion: v1
kind: Service
metadata:
name: gateway-service
namespace: default
spec:
# clusterIP: 10.99.233.224
ports:
- port: 30002
protocol: TCP
targetPort: 30002
nodePort: 30002
# externalTrafficPolicy: Local
type: NodePort
selector:
app: gateway
# type: LoadBalancer
#status:
# loadBalancer: {}
---
apiVersion: v1
kind: Service
metadata:
name: routes-service
namespace: default
spec:
# clusterIP: 10.99.233.224
# protocol: ##The default is TCP
# port: ##Exposes the service within the cluster. Also, other Pods use this to access the Service
# targetPort: ##The service sends request while containers accept traffic on this port.
ports:
- port: 30000
protocol: TCP
targetPort: 30000
nodePort: 30000
# externalTrafficPolicy: Local
type: NodePort
selector:
app: routes
# type: LoadBalancer
#status:
# loadBalancer: {}
# type: ClusterIP
# type: LoadBalancer
# type: NodePort
---
apiVersion: v1
kind: Service
metadata:
name: contact-adm-query-service
namespace: default
spec:
# clusterIP: 10.99.233.224
# protocol: ##The default is TCP
# port: ##Exposes the service within the cluster. Also, other Pods use this to access the Service
# targetPort: ##The service sends request while containers accept traffic on this port.
ports:
- port: 30001
protocol: TCP
targetPort: 30001
nodePort: 30001
# externalTrafficPolicy: Local
type: NodePort
selector:
app: contact-adm-query
# type: LoadBalancer
#status:
# loadBalancer: {}
# type: ClusterIP
# type: LoadBalancer
# type: NodePort
---
apiVersion: v1
kind: Service
metadata:
name: contact-adm-kafka-service
namespace: default
spec:
# clusterIP: 10.99.233.224
# protocol: ##The default is TCP
# port: ##Exposes the service within the cluster. Also, other Pods use this to access the Service
# targetPort: ##The service sends request while containers accept traffic on this port.
ports:
- port: 30005
protocol: TCP
targetPort: 30005
nodePort: 30005
# externalTrafficPolicy: Local
type: NodePort
selector:
app: contact-adm-kafka
---
apiVersion: v1
kind: Service
metadata:
name: contact-adm-command-service
namespace: default
spec:
# clusterIP: 10.99.233.224
# protocol: ##The default is TCP
# port: ##Exposes the service within the cluster. Also, other Pods use this to access the Service
# targetPort: ##The service sends request while containers accept traffic on this port.
ports:
- port: 30006
protocol: TCP
targetPort: 30006
nodePort: 30006
# externalTrafficPolicy: Local
type: NodePort
selector:
app: contact-adm-command
---
apiVersion: v1
kind: Service
metadata:
name: category-product-kafka-service
namespace: default
spec:
# clusterIP: 10.99.233.224
# protocol: ##The default is TCP
# port: ##Exposes the service within the cluster. Also, other Pods use this to access the Service
# targetPort: ##The service sends request while containers accept traffic on this port.
ports:
- port: 30008
protocol: TCP
targetPort: 30008
nodePort: 30008
# externalTrafficPolicy: Local
type: NodePort
selector:
app: category-product-kafka
---
apiVersion: v1
kind: Service
metadata:
name: category-product-command-service
namespace: default
spec:
# clusterIP: 10.99.233.224
# protocol: ##The default is TCP
# port: ##Exposes the service within the cluster. Also, other Pods use this to access the Service
# targetPort: ##The service sends request while containers accept traffic on this port.
ports:
- port: 31533
protocol: TCP
targetPort: 31533
nodePort: 31533
# externalTrafficPolicy: Local
type: NodePort
selector:
app: category-product-command
---
apiVersion: v1
kind: Service
metadata:
name: category-product-query-service
namespace: default
spec:
# clusterIP: 10.99.233.224
# protocol: ##The default is TCP
# port: ##Exposes the service within the cluster. Also, other Pods use this to access the Service
# targetPort: ##The service sends request while containers accept traffic on this port.
ports:
- port: 30007
protocol: TCP
targetPort: 30007
nodePort: 30007
# externalTrafficPolicy: Local
type: NodePort
selector:
app: category-product-query
---
apiVersion: v1
kind: Service
metadata:
name: cart-purchase-kafka-service
namespace: default
spec:
# clusterIP: 10.99.233.224
# protocol: ##The default is TCP
# port: ##Exposes the service within the cluster. Also, other Pods use this to access the Service
# targetPort: ##The service sends request while containers accept traffic on this port.
ports:
- port: 30011
protocol: TCP
targetPort: 30011
nodePort: 30011
# externalTrafficPolicy: Local
type: NodePort
selector:
app: cart-purchase-kafka
---
apiVersion: v1
kind: Service
metadata:
name: cart-purchase-command-service
namespace: default
spec:
# clusterIP: 10.99.233.224
# protocol: ##The default is TCP
# port: ##Exposes the service within the cluster. Also, other Pods use this to access the Service
# targetPort: ##The service sends request while containers accept traffic on this port.
ports:
- port: 30012
protocol: TCP
targetPort: 30012
nodePort: 30012
# externalTrafficPolicy: Local
type: NodePort
selector:
app: cart-purchase-command
---
apiVersion: v1
kind: Service
metadata:
name: cart-purchase-query-service
namespace: default
spec:
# clusterIP: 10.99.233.224
# protocol: ##The default is TCP
# port: ##Exposes the service within the cluster. Also, other Pods use this to access the Service
# targetPort: ##The service sends request while containers accept traffic on this port.
ports:
- port: 30010
protocol: TCP
targetPort: 30010
nodePort: 30010
# externalTrafficPolicy: Local
type: NodePort
selector:
app: cart-purchase-query
---
apiVersion: v1
kind: Service
metadata:
name: user-command-insert-service
namespace: default
spec:
# clusterIP: 10.99.233.224
# protocol: ##The default is TCP
# port: ##Exposes the service within the cluster. Also, other Pods use this to access the Service
# targetPort: ##The service sends request while containers accept traffic on this port.
ports:
- port: 30003
protocol: TCP
targetPort: 30003
nodePort: 30003
# externalTrafficPolicy: Local
type: NodePort
selector:
app: user-command-insert
---
apiVersion: v1
kind: Service
metadata:
name: user-command-service
namespace: default
spec:
# clusterIP: 10.99.233.224
# protocol: ##The default is TCP
# port: ##Exposes the service within the cluster. Also, other Pods use this to access the Service
# targetPort: ##The service sends request while containers accept traffic on this port.
ports:
- port: 30004
protocol: TCP
targetPort: 30004
nodePort: 30004
# externalTrafficPolicy: Local
type: NodePort
selector:
app: user-command
---
apiVersion: v1
kind: Service
metadata:
name: user-service
namespace: default
spec:
# clusterIP: 10.99.233.224
# protocol: ##The default is TCP
# port: ##Exposes the service within the cluster. Also, other Pods use this to access the Service
# targetPort: ##The service sends request while containers accept traffic on this port.
ports:
- port: 30015
protocol: TCP
targetPort: 30015
nodePort: 30015
# externalTrafficPolicy: Local
type: NodePort
selector:
app: user
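Applying and checking the result looks roughly like this (the file names are only examples of how the manifests above could be split):
kubectl apply -f statefulsets.yaml
kubectl apply -f services.yaml
kubectl get pods
kubectl get svc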
I have cloned a repository from GitHub, a Laravel project that already has Sail.
Then in order to install composer dependencies, I ran:
docker run --rm \
-u "$(id -u):$(id -g)" \
-v "$(pwd):/var/www/html" \
-w /var/www/html \
laravelsail/php81-composer:latest \
composer install --ignore-platform-reqs
After that, I ran sail up.
All images pulled and built.
Now I can access the project and its routes through the browser, and I can even use sail mysql commands. However, the problem is that when I run sail artisan commands, this message shows up:
service "laravel.test" is not running container #1.
I am using Windows with WSL2, which uses Ubuntu 20 as the default Linux distribution.
Tip: in another fresh project I do not have any problem with Sail.
I tried these things before, but they didn't solve the problem:
adding APP_SERVICE=laravel.test to .env.
running composer update.
To clarify my question, I will add more code below.
docker-compose.yml:
version: "3.7"
services:
#Laravel App
app:
build:
context: ./docker/php/${DOCKER_PHP_VERSION}
dockerfile: Dockerfile
args:
xdebug_enabled: ${DOCKER_PHP_XDEBUG_ENABLED}
image: ${COMPOSE_PROJECT_NAME}-app
restart: unless-stopped
tty: true
working_dir: /var/www/html
environment:
XDEBUG_MODE: '${DOCKER_PHP_XDEBUG_MODE:-off}'
volumes:
- ./:/var/www/html
networks:
- app_network
depends_on:
- mysql
- redis
- meilisearch
- minio
nginx:
image: nginx:alpine
restart: unless-stopped
tty: true
ports:
- '${DOCKER_NGINX_PORT:-80}:80'
volumes:
- ./:/var/www/html
- ./docker/nginx/dev/:/etc/nginx/conf.d/
networks:
- app_network
depends_on:
- app
# S3 Development
minio:
image: 'minio/minio:latest'
ports:
- '${DOCKER_MINIO_PORT:-9000}:9000'
- '${DOCKER_MINIO_CONSOLE_PORT:-8900}:8900'
environment:
MINIO_ROOT_USER: 'laravel'
MINIO_ROOT_PASSWORD: 'password'
volumes:
- 'appminio:/data/minio'
networks:
- app_network
command: minio server /data/minio --console-address ":8900"
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
retries: 3
timeout: 5s
# Laravel Scout Search Provider
meilisearch:
image: 'getmeili/meilisearch:latest'
platform: linux/x86_64
environment:
- PUID=${DOCKER_PUID:-1000}
- PGID=${DOCKER_PGID:-1000}
- TZ=${DOCKER_TZ:-Australia/Brisbane}
restart: unless-stopped
ports:
- '${DOCKER_MEILISEARCH_PORT:-7700}:7700'
volumes:
- 'appmeilisearch:/data.ms'
networks:
- app_network
# Database
mysql:
image: 'mysql/mysql-server:8.0'
command: --default-authentication-plugin=mysql_native_password
ports:
- '${DOCKER_MYSQL_PORT:-3306}:3306'
environment:
MYSQL_ROOT_PASSWORD: '${DB_PASSWORD:-abc123}'
MYSQL_ROOT_HOST: "%"
MYSQL_DATABASE: '${DB_DATABASE:-laravel}'
MYSQL_USER: '${DB_USERNAME:-laravel}'
MYSQL_PASSWORD: '${DB_PASSWORD:-abc123}'
MYSQL_ALLOW_EMPTY_PASSWORD: 1
restart: unless-stopped
volumes:
- 'appmysql:/var/lib/mysql'
networks:
- app_network
healthcheck:
test: ["CMD", "mysqladmin", "ping", "-p${DB_PASSWORD}"]
retries: 3
timeout: 5s
# Debug emails sent from the app
mailcatcher:
restart: unless-stopped
image: dockage/mailcatcher
environment:
- PUID=${DOCKER_PUID:-1000}
- PGID=${DOCKER_PGID:-1000}
- TZ=${DOCKER_TZ:-Australia/Brisbane}
ports:
- "${DOCKER_MAILCATCHER_WEB_PORT:-1080}:1080"
- "${DOCKER_MAILCATCHER_SMTP_PORT:-1025}:1025"
networks:
- app_network
# Redis Database
redis:
healthcheck:
test: [ "CMD", "redis-cli", "ping" ]
interval: 1m
timeout: 10s
retries: 3
start_period: 30s
image: redis
restart: unless-stopped
volumes:
- 'appredis:/data'
environment:
- PUID=${DOCKER_PUID:-1000}
- PGID=${DOCKER_PGID:-1000}
- TZ=${DOCKER_TZ:-Australia/Brisbane}
ports:
- ${DOCKER_REDIS_PORT:-6379}:6379
networks:
- app_network
volumes:
appredis:
driver: local
appmysql:
driver: local
appmeilisearch:
driver: local
appminio:
driver: local
networks:
app_network:
driver: bridge
.env:
APP_NAME="Boilerplate"
APP_ENV=local
APP_KEY=base64:vnhPCkeEz8MOUqKv7dYsZvTluoB3bra/aH+MONTUM9I=
APP_DEBUG=true
APP_URL=http://127.0.0.1:8000
FRONTEND_URL=http://127.0.0.1:8000
EMAIL_VERIFICATION_REQUIRED=true
TOKEN_ON_REGISTER=false
LOG_CHANNEL=stack
DB_CONNECTION=mysql
DB_HOST=mysql
DB_PORT=3306
DB_DATABASE=safe_proud
DB_USERNAME=sail
DB_PASSWORD=password
BROADCAST_DRIVER=log
CACHE_DRIVER=file
QUEUE_CONNECTION=database
SESSION_DRIVER=file
SESSION_LIFETIME=120
SESSION_CONNECTION=localhost
REDIS_HOST=redis
REDIS_PASSWORD=null
REDIS_PORT=6379
MAIL_MAILER=smtp
MAIL_HOST=mailcatcher
MAIL_PORT=1025
MAIL_USERNAME=null
MAIL_PASSWORD=null
MAIL_ENCRYPTION=null
MAIL_FROM_ADDRESS=developers@presentcompany.co
MAIL_FROM_NAME="${APP_NAME}"
SCOUT_DRIVER=meilisearch
MEILISEARCH_HOST=http://127.0.0.1:7700
MEILISEARCH_KEY=masterKey
#FILESYSTEM_DRIVER=s3
AWS_ACCESS_KEY_ID=laravel
AWS_SECRET_ACCESS_KEY=password
AWS_DEFAULT_REGION=us-east-1
AWS_BUCKET=store
AWS_ENDPOINT=http://s3:9000
AWS_USE_PATH_STYLE_ENDPOINT=true
PUSHER_APP_ID=
PUSHER_APP_KEY=
PUSHER_APP_SECRET=
PUSHER_APP_CLUSTER=mt1
MIX_PUSHER_APP_KEY="${PUSHER_APP_KEY}"
MIX_PUSHER_APP_CLUSTER="${PUSHER_APP_CLUSTER}"
DOCKER_PUID=1000
DOCKER_PGID=1000
DOCKER_TZ=Australia/Brisbane
DOCKER_NGINX_PORT=8000
DOCKER_REDIS_PORT=6379
DOCKER_MAILCATCHER_WEB_PORT=1080
DOCKER_MAILCATCHER_SMTP_PORT=1025
DOCKER_MEILISEARCH_PORT=7700
DOCKER_MYSQL_PORT=3306
DOCKER_MINIO_PORT=9000
DOCKER_MINIO_CONSOLE_PORT=8900
COMPOSE_PROJECT_NAME=boilerplate
DOCKER_PHP_VERSION=8.1
DOCKER_PHP_XDEBUG_ENABLED=false
DOCKER_PHP_XDEBUG_MODE=develop,debug
composer.json:
{
"name": "laravel/laravel",
"type": "project",
"description": "Safe Proud API",
"keywords": ["framework", "laravel"],
"license": "MIT",
"require": {
"php": "^8.0|^8.1|^8.2",
"ext-curl": "*",
"ext-json": "*",
"aws/aws-sdk-php": "^3.144",
"balping/laravel-hashslug": "^2.2",
"bolechen/nova-activitylog": "^v0.3.0",
"classic-o/nova-media-library": "^1.0",
"cloudcake/nova-snowball": "^1.2",
"dcblogdev/laravel-sent-emails": "^2.0",
"emilianotisato/nova-tinymce": "^1",
"eminiarts/nova-tabs": "^1.5",
"guzzlehttp/guzzle": "^7.2",
"johnathan/nova-trumbowyg": "^1.0",
"kutia-software-company/larafirebase": "^1.3",
"laravel/framework": "^9.19",
"laravel/nova": "*",
"laravel/sanctum": "^3.0",
"laravel/scout": "^9.4",
"laravel/tinker": "^2.7",
"laravel/vapor-cli": "^1.13",
"laravel/vapor-core": "^2.22",
"laravel/vapor-ui": "^1.5",
"league/flysystem-aws-s3-v3": "~3.0",
"mpociot/versionable": "^4.3",
"nnjeim/world": "^1.1",
"optimistdigital/nova-page-manager": "^3.1",
"outl1ne/nova-settings": "^3.5",
"spatie/laravel-activitylog": "^4.5",
"spatie/laravel-permission": "^5.0",
"vinkla/hashids": "^10.0",
"vyuldashev/nova-permission": "^3.1",
"whitecube/nova-flexible-content": "^0.2.6",
"yab/laravel-scout-mysql-driver": "^5.1"
},
"require-dev": {
"fakerphp/faker": "^1.9.1",
"laravel/pint": "^1.1",
"laravel/sail": "^1.0.1",
"mockery/mockery": "^1.4.4",
"nunomaduro/collision": "^6.1",
"phpunit/phpunit": "^9.5.10",
"spatie/laravel-ignition": "^1.0"
},
"repositories": [
{
"type": "path",
"url": "./nova"
}
],
"autoload": {
"psr-4": {
"App\\": "app/",
"Database\\Factories\\": "database/factories/",
"Database\\Seeders\\": "database/seeders/"
},
"files": [
"app/Http/helpers.php"
]
},
"autoload-dev": {
"psr-4": {
"Tests\\": "tests/"
}
},
"scripts": {
"post-autoload-dump": [
"Illuminate\\Foundation\\ComposerScripts::postAutoloadDump",
"#php artisan package:discover --ansi"
],
"post-update-cmd": [
"#php artisan vendor:publish --tag=laravel-assets --ansi --force",
"#php artisan vapor-ui:publish --ansi"
],
"post-root-package-install": [
"#php -r \"file_exists('.env') || copy('.env.example', '.env');\""
],
"post-create-project-cmd": [
"#php artisan key:generate --ansi"
]
},
"extra": {
"laravel": {
"dont-discover": []
}
},
"config": {
"optimize-autoloader": true,
"preferred-install": "dist",
"sort-packages": true,
"allow-plugins": {
"pestphp/pest-plugin": true
}
},
"minimum-stability": "dev",
"prefer-stable": true
}
Also, when I run sail up:
[+] Running 7/0
⠿ Container boilerplate-meilisearch-1 Created 0.0s
⠿ Container boilerplate-redis-1 Created 0.0s
⠿ Container boilerplate-minio-1 Created 0.0s
⠿ Container boilerplate-mysql-1 Created 0.0s
⠿ Container boilerplate-mailcatcher-1 Created 0.0s
⠿ Container boilerplate-app-1 Created 0.0s
⠿ Container boilerplate-nginx-1 Created 0.0s
Attaching to boilerplate-app-1, boilerplate-mailcatcher-1, boilerplate-meilisearch-1, boilerplate-minio-1, boilerplate-mysql-1, boilerplate-nginx-1, boilerplate-redis-1
boilerplate-redis-1 | 1:C 18 Nov 2022 20:44:41.772 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
boilerplate-redis-1 | 1:C 18 Nov 2022 20:44:41.772 # Redis version=7.0.5, bits=64, commit=00000000, modified=0, pid=1, just started
boilerplate-redis-1 | 1:C 18 Nov 2022 20:44:41.772 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
boilerplate-redis-1 | 1:M 18 Nov 2022 20:44:41.774 * monotonic clock: POSIX clock_gettime
boilerplate-redis-1 | 1:M 18 Nov 2022 20:44:41.775 * Running mode=standalone, port=6379.
boilerplate-redis-1 | 1:M 18 Nov 2022 20:44:41.775 # Server initialized
boilerplate-redis-1 | 1:M 18 Nov 2022 20:44:41.775 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
boilerplate-redis-1 | 1:M 18 Nov 2022 20:44:41.776 * Loading RDB produced by version 7.0.5
boilerplate-redis-1 | 1:M 18 Nov 2022 20:44:41.776 * RDB age 53 seconds
boilerplate-redis-1 | 1:M 18 Nov 2022 20:44:41.776 * RDB memory usage when created 0.85 Mb
boilerplate-redis-1 | 1:M 18 Nov 2022 20:44:41.776 * Done loading RDB, keys loaded: 0, keys expired: 0.
boilerplate-redis-1 | 1:M 18 Nov 2022 20:44:41.776 * DB loaded from disk: 0.000 seconds
boilerplate-redis-1 | 1:M 18 Nov 2022 20:44:41.776 * Ready to accept connections
boilerplate-mailcatcher-1 | Starting MailCatcher v0.8.2
boilerplate-mailcatcher-1 | ==> smtp://0.0.0.0:1025
boilerplate-mysql-1 | [Entrypoint] MySQL Docker Image 8.0.31-1.2.10-server
boilerplate-meilisearch-1 |
boilerplate-meilisearch-1 | [Meilisearch ASCII art banner]
boilerplate-meilisearch-1 |
boilerplate-meilisearch-1 | Database path: "./data.ms"
boilerplate-meilisearch-1 | Server listening on: "http://0.0.0.0:7700"
boilerplate-meilisearch-1 | Environment: "development"
boilerplate-meilisearch-1 | Commit SHA: "unknown"
boilerplate-meilisearch-1 | Commit date: "unknown"
boilerplate-meilisearch-1 | Package version: "0.29.1"
boilerplate-meilisearch-1 |
boilerplate-meilisearch-1 | Thank you for using Meilisearch!
boilerplate-meilisearch-1 |
boilerplate-meilisearch-1 | We collect anonymized analytics to improve our product and your experience. To learn more, including how to turn off analytics, visit our dedicated documentation page: https://docs.meilisearch.com/learn/what_is_meilisearch/telemetry.html
boilerplate-meilisearch-1 |
boilerplate-meilisearch-1 | Anonymous telemetry: "Enabled"
boilerplate-meilisearch-1 | Instance UID: "1fa4148e-c0dc-46c7-9f61-f4abb8f0354c"
boilerplate-meilisearch-1 |
boilerplate-meilisearch-1 | No master key found; The server will accept unidentified requests. If you need some protection in development mode, please export a key: export MEILI_MASTER_KEY=xxx
boilerplate-meilisearch-1 |
boilerplate-meilisearch-1 | Documentation: https://docs.meilisearch.com
boilerplate-meilisearch-1 | Source code: https://github.com/meilisearch/meilisearch
boilerplate-meilisearch-1 | Contact: https://docs.meilisearch.com/resources/contact.html
boilerplate-meilisearch-1 |
boilerplate-meilisearch-1 | [2022-11-18T10:44:42Z INFO actix_server::builder] Starting 4 workers
boilerplate-meilisearch-1 | [2022-11-18T10:44:42Z INFO actix_server::server] Actix runtime found; starting in Actix runtime
boilerplate-mailcatcher-1 | ==> http://0.0.0.0:1080
boilerplate-mysql-1 | [Entrypoint] Starting MySQL 8.0.31-1.2.10-server
boilerplate-minio-1 | Warning: Default parity set to 0. This can lead to data loss.
boilerplate-minio-1 | MinIO Object Storage Server
boilerplate-minio-1 | Copyright: 2015-2022 MinIO, Inc.
boilerplate-minio-1 | License: GNU AGPLv3 <https://www.gnu.org/licenses/agpl-3.0.html>
boilerplate-minio-1 | Version: RELEASE.2022-11-11T03-44-20Z (go1.19.3 linux/amd64)
boilerplate-minio-1 |
boilerplate-minio-1 | Status: 1 Online, 0 Offline.
boilerplate-minio-1 | API: http://172.19.0.5:9000 http://127.0.0.1:9000
boilerplate-minio-1 | Console: http://172.19.0.5:8900 http://127.0.0.1:8900
boilerplate-minio-1 |
boilerplate-minio-1 | Documentation: https://min.io/docs/minio/linux/index.html
boilerplate-mysql-1 | 2022-11-18T10:44:42.878570Z 0 [Warning] [MY-011068] [Server] The syntax '--skip-host-cache' is deprecated and will be removed in a future release. Please use SET GLOBAL host_cache_size=0 instead.
boilerplate-mysql-1 | 2022-11-18T10:44:42.879744Z 0 [Warning] [MY-010918] [Server] 'default_authentication_plugin' is deprecated and will be removed in a future release. Please use authentication_policy instead.
boilerplate-mysql-1 | 2022-11-18T10:44:42.879769Z 0 [System] [MY-010116] [Server] /usr/sbin/mysqld (mysqld 8.0.31) starting as process 1
boilerplate-mysql-1 | 2022-11-18T10:44:42.886273Z 1 [System] [MY-013576] [InnoDB] InnoDB initialization has started.
boilerplate-mysql-1 | 2022-11-18T10:44:43.007089Z 1 [System] [MY-013577] [InnoDB] InnoDB initialization has ended.
boilerplate-app-1 | Installing Package Dependencies
boilerplate-app-1 | Installing dependencies from lock file (including require-dev)
boilerplate-app-1 | Verifying lock file contents can be installed on current platform.
boilerplate-app-1 | Nothing to install, update or remove
boilerplate-mysql-1 | 2022-11-18T10:44:43.275853Z 0 [Warning] [MY-010068] [Server] CA certificate ca.pem is self signed.
boilerplate-mysql-1 | 2022-11-18T10:44:43.275917Z 0 [System] [MY-013602] [Server] Channel mysql_main configured to support TLS. Encrypted connections are now supported for this channel.
boilerplate-app-1 | Package gregoriohc/laravel-nova-theme-responsive is abandoned, you should avoid using it. No replacement was suggested.
boilerplate-app-1 | Generating optimized autoload files
boilerplate-mysql-1 | 2022-11-18T10:44:43.316329Z 0 [System] [MY-011323] [Server] X Plugin ready for connections. Bind-address: '::' port: 33060, socket: /var/run/mysqld/mysqlx.sock
boilerplate-mysql-1 | 2022-11-18T10:44:43.316463Z 0 [System] [MY-010931] [Server] /usr/sbin/mysqld: ready for connections. Version: '8.0.31' socket: '/var/lib/mysql/mysql.sock' port: 3306 MySQL Community Server - GPL.
boilerplate-nginx-1 | /docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
boilerplate-nginx-1 | /docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
boilerplate-nginx-1 | /docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
boilerplate-nginx-1 | 10-listen-on-ipv6-by-default.sh: info: /etc/nginx/conf.d/default.conf is not a file or does not exist
boilerplate-nginx-1 | /docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
boilerplate-nginx-1 | /docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
boilerplate-nginx-1 | /docker-entrypoint.sh: Configuration complete; ready for start up
boilerplate-nginx-1 | 2022/11/18 10:44:43 [notice] 1#1: using the "epoll" event method
boilerplate-nginx-1 | 2022/11/18 10:44:43 [notice] 1#1: nginx/1.23.2
boilerplate-nginx-1 | 2022/11/18 10:44:43 [notice] 1#1: built by gcc 11.2.1 20220219 (Alpine 11.2.1_git20220219)
boilerplate-nginx-1 | 2022/11/18 10:44:43 [notice] 1#1: OS: Linux 5.10.102.1-microsoft-standard-WSL2
boilerplate-nginx-1 | 2022/11/18 10:44:43 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1048576:1048576
boilerplate-nginx-1 | 2022/11/18 10:44:43 [notice] 1#1: start worker processes
boilerplate-nginx-1 | 2022/11/18 10:44:43 [notice] 1#1: start worker process 20
boilerplate-nginx-1 | 2022/11/18 10:44:43 [notice] 1#1: start worker process 21
boilerplate-nginx-1 | 2022/11/18 10:44:43 [notice] 1#1: start worker process 22
boilerplate-nginx-1 | 2022/11/18 10:44:43 [notice] 1#1: start worker process 23
boilerplate-nginx-1 | 2022/11/18 10:44:43 [notice] 1#1: start worker process 24
boilerplate-nginx-1 | 2022/11/18 10:44:43 [notice] 1#1: start worker process 25
boilerplate-nginx-1 | 2022/11/18 10:44:43 [notice] 1#1: start worker process 26
boilerplate-nginx-1 | 2022/11/18 10:44:43 [notice] 1#1: start worker process 27
boilerplate-minio-1 |
boilerplate-minio-1 | You are running an older version of MinIO released 6 days ago
boilerplate-minio-1 | Update: Run `mc admin update`
boilerplate-minio-1 |
boilerplate-minio-1 |
boilerplate-app-1 | Class App\Http\Resources\Api\v1\AddressResource located in ./app/Http/Resources/Api/V1/AddressResource.php does not comply with psr-4 autoloading standard. Skipping.
boilerplate-app-1 | > Illuminate\Foundation\ComposerScripts::postAutoloadDump
boilerplate-app-1 | > @php artisan package:discover --ansi
boilerplate-app-1 |
boilerplate-app-1 | INFO Discovering packages.
boilerplate-app-1 |
boilerplate-app-1 | bolechen/nova-activitylog ............................................. DONE
boilerplate-app-1 | classic-o/nova-media-library .......................................... DONE
boilerplate-app-1 | cloudcake/nova-fixed-bars ............................................. DONE
boilerplate-app-1 | cloudcake/nova-snowball ............................................... DONE
boilerplate-app-1 | dcblogdev/laravel-sent-emails ......................................... DONE
boilerplate-app-1 | emilianotisato/nova-tinymce ........................................... DONE
boilerplate-app-1 | eminiarts/nova-tabs ................................................... DONE
boilerplate-app-1 | gregoriohc/laravel-nova-theme-responsive .............................. DONE
boilerplate-app-1 | intervention/image .................................................... DONE
boilerplate-app-1 | johnathan/nova-trumbowyg .............................................. DONE
boilerplate-app-1 | kutia-software-company/larafirebase ................................... DONE
boilerplate-app-1 | laravel/nova .......................................................... DONE
boilerplate-app-1 | laravel/sail .......................................................... DONE
boilerplate-app-1 | laravel/sanctum ....................................................... DONE
boilerplate-app-1 | laravel/scout ......................................................... DONE
boilerplate-app-1 | laravel/tinker ........................................................ DONE
boilerplate-app-1 | laravel/ui ............................................................ DONE
boilerplate-app-1 | laravel/vapor-core .................................................... DONE
boilerplate-app-1 | laravel/vapor-ui ...................................................... DONE
boilerplate-app-1 | mpociot/versionable ................................................... DONE
boilerplate-app-1 | nesbot/carbon ......................................................... DONE
boilerplate-app-1 | nnjeim/world .......................................................... DONE
boilerplate-app-1 | nunomaduro/collision .................................................. DONE
boilerplate-app-1 | nunomaduro/termwind ................................................... DONE
boilerplate-app-1 | optimistdigital/nova-locale-field ..................................... DONE
boilerplate-app-1 | optimistdigital/nova-page-manager ..................................... DONE
boilerplate-app-1 | optimistdigital/nova-translations-loader .............................. DONE
boilerplate-app-1 | outl1ne/nova-settings ................................................. DONE
boilerplate-app-1 | spatie/laravel-activitylog ............................................ DONE
boilerplate-app-1 | spatie/laravel-ignition ............................................... DONE
boilerplate-app-1 | spatie/laravel-permission ............................................. DONE
boilerplate-app-1 | vinkla/hashids ........................................................ DONE
boilerplate-app-1 | vyuldashev/nova-permission ............................................ DONE
boilerplate-app-1 | whitecube/nova-flexible-content ....................................... DONE
boilerplate-app-1 | yab/laravel-scout-mysql-driver ........................................ DONE
boilerplate-app-1 |
boilerplate-app-1 | 100 packages you are using are looking for funding.
boilerplate-app-1 | Use the `composer fund` command to find out more!
boilerplate-app-1 | Running database migrations
boilerplate-app-1 |
boilerplate-app-1 | INFO Nothing to migrate.
boilerplate-app-1 |
boilerplate-app-1 | Linking Storage
boilerplate-app-1 |
boilerplate-app-1 | ERROR The [public/storage] link already exists.
boilerplate-app-1 |
boilerplate-app-1 | Generating IDE Helper Stubs
boilerplate-app-1 |
boilerplate-app-1 | ERROR There are no commands defined in the "ide-helper" namespace.
boilerplate-app-1 |
boilerplate-app-1 |
boilerplate-app-1 | ERROR There are no commands defined in the "ide-helper" namespace.
boilerplate-app-1 |
boilerplate-app-1 | 2022-11-18 10:44:49,582 INFO Set uid to user 0 succeeded
boilerplate-app-1 | 2022-11-18 10:44:49,584 INFO supervisord started with pid 39
boilerplate-app-1 | 2022-11-18 10:44:50,586 INFO spawned: 'php' with pid 40
boilerplate-app-1 | [18-Nov-2022 10:44:50] NOTICE: fpm is running, pid 40
boilerplate-app-1 | [18-Nov-2022 10:44:50] NOTICE: ready to handle connections
boilerplate-app-1 | 2022-11-18 10:44:51,610 INFO success: php entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
boilerplate-nginx-1 | 172.19.0.1 - - [18/Nov/2022:10:45:13 +0000] "GET /admin/dashboard HTTP/1.1" 200 17616 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/107.0.0.0 Safari/537.36"
boilerplate-app-1 | 172.19.0.8 - 18/Nov/2022:10:45:13 +0000 "GET /index.php" 200
boilerplate-nginx-1 | 172.19.0.1 - - [18/Nov/2022:10:45:16 +0000] "GET / HTTP/1.1" 200 17601 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/107.0.0.0 Safari/537.36"
boilerplate-app-1 | 172.19.0.8 - 18/Nov/2022:10:45:16 +0000 "GET /index.php" 200
I found the solution.
After looking at this part of docker-compose.yml:
version: "3.7"
services:
#Laravel App
app:
build:
context: ./docker/php/${DOCKER_PHP_VERSION}
dockerfile: Dockerfile
args:
xdebug_enabled: ${DOCKER_PHP_XDEBUG_ENABLED}
image: ${COMPOSE_PROJECT_NAME}-app
restart: unless-stopped
tty: true
working_dir: /var/www/html
I realized that the Laravel service has a different name, which in this project is app.
So in the .env file, I added this:
APP_SERVICE=app
The provided configmap works fine (filebeat->logstash->elasticsearch), but I want to modify it to match on kubernetes.pod.labels instead of kubernetes.container.name.
Could you please suggest how to do it? I have tried different approaches without success :( A rough sketch of what I am aiming for is included after the pod labels below.
apiVersion: v1
data:
  filebeat.yml: |
    filebeat.autodiscover:
      providers:
        - type: kubernetes
          templates:
            - condition:
                equals:
                  kubernetes.container.name: "controller"
              config:
                - module: nginx
                  ingress_controller:
                    enabled: true
                    input:
                      type: container
                      paths:
                        - /var/log/containers/*-${data.kubernetes.container.id}.log

    output.logstash:
      hosts: ["XX.XX.XX.XX:5044"]
Additionally, I have provided the pod labels below:
apiVersion: v1
kind: Pod
metadata:
  annotations:
    cni.projectcalico.org/podIP: 10.113.132.3/32
  creationTimestamp: "2021-09-28T15:02:39Z"
  generateName: ingress-nginx-controller-c7d64c64d
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    pod-template-hash: c7d64c64d
  managedFields:
  ....
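Roughly, the direction I am trying to go is something like this (only a sketch, not a working config: I believe the autodiscover provider exposes pod labels under kubernetes.labels.*, and by default Filebeat de-dots label names, so the key for app.kubernetes.io/component would probably end up with underscores and may differ from what I show here):
            - condition:
                equals:
                  # assumed de-dotted form of the app.kubernetes.io/component label; the exact key may differ
                  kubernetes.labels.app_kubernetes_io/component: "controller"
              config:
                - module: nginx
                  ingress_controller:
                    enabled: true
                    input:
                      type: container
                      paths:
                        - /var/log/containers/*-${data.kubernetes.container.id}.log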
I recently asked this question about how to upgrade Istio 1.1.11 from using HTTP/1.1 to HTTP/2.
I followed the advice, and my resulting services YAML looks like this.
##################################################################################################
# Details service
##################################################################################################
apiVersion: v1
kind: Service
metadata:
name: details
labels:
app: details
service: details
spec:
ports:
- port: 9080
name: http2
selector:
app: details
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: details-v1
labels:
app: details
version: v1
spec:
replicas: 1
template:
metadata:
labels:
app: details
version: v1
spec:
containers:
- name: details
image: istio/examples-bookinfo-details-v1:1.13.0
imagePullPolicy: IfNotPresent
ports:
- containerPort: 9080
---
##################################################################################################
# Ratings service
##################################################################################################
apiVersion: v1
kind: Service
metadata:
name: ratings
labels:
app: ratings
service: ratings
spec:
ports:
- port: 9080
name: http2
selector:
app: ratings
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: ratings-v1
labels:
app: ratings
version: v1
spec:
replicas: 1
template:
metadata:
labels:
app: ratings
version: v1
spec:
containers:
- name: ratings
image: istio/examples-bookinfo-ratings-v1:1.13.0
imagePullPolicy: IfNotPresent
ports:
- containerPort: 9080
---
##################################################################################################
# Reviews service
##################################################################################################
apiVersion: v1
kind: Service
metadata:
name: reviews
labels:
app: reviews
service: reviews
spec:
ports:
- port: 9080
name: http2
selector:
app: reviews
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: reviews-v1
labels:
app: reviews
version: v1
spec:
replicas: 1
template:
metadata:
labels:
app: reviews
version: v1
spec:
containers:
- name: reviews
image: istio/examples-bookinfo-reviews-v1:1.13.0
imagePullPolicy: IfNotPresent
ports:
- containerPort: 9080
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: reviews-v2
labels:
app: reviews
version: v2
spec:
replicas: 1
template:
metadata:
labels:
app: reviews
version: v2
spec:
containers:
- name: reviews
image: istio/examples-bookinfo-reviews-v2:1.13.0
imagePullPolicy: IfNotPresent
ports:
- containerPort: 9080
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: reviews-v3
labels:
app: reviews
version: v3
spec:
replicas: 1
template:
metadata:
labels:
app: reviews
version: v3
spec:
containers:
- name: reviews
image: istio/examples-bookinfo-reviews-v3:1.13.0
imagePullPolicy: IfNotPresent
ports:
- containerPort: 9080
---
##################################################################################################
# Productpage services
##################################################################################################
apiVersion: v1
kind: Service
metadata:
name: productpage
labels:
app: productpage
service: productpage
spec:
ports:
- port: 9080
name: http2
selector:
app: productpage
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: productpage-v1
labels:
app: productpage
version: v1
spec:
replicas: 1
template:
metadata:
labels:
app: productpage
version: v1
spec:
containers:
- name: productpage
image: istio/examples-bookinfo-productpage-v1:1.13.0
imagePullPolicy: IfNotPresent
ports:
- containerPort: 9080
---
I successfully followed this tutorial to curl the service using HTTPS.
curl before:
curl -o /dev/null -s -v -w "%{http_code}\n" -HHost:localhost --resolve
localhost:$SECURE_INGRESS_PORT:$INGRESS_HOST --cacert example.com.crt -HHost:localhost https://localhost:443/productpage
* Address in 'localhost:443:localhost' found illegal!
* Trying ::1...
* TCP_NODELAY set
* Connected to localhost (::1) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:#STRENGTH
* successfully set certificate verify locations:
* CAfile: example.com.crt
CApath: none
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
} [215 bytes data]
* TLSv1.2 (IN), TLS handshake, Server hello (2):
{ [96 bytes data]
* TLSv1.2 (IN), TLS handshake, Certificate (11):
{ [740 bytes data]
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
{ [300 bytes data]
* TLSv1.2 (IN), TLS handshake, Server finished (14):
{ [4 bytes data]
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
} [37 bytes data]
* TLSv1.2 (OUT), TLS change cipher, Client hello (1):
} [1 bytes data]
* TLSv1.2 (OUT), TLS handshake, Finished (20):
} [16 bytes data]
* TLSv1.2 (IN), TLS change cipher, Client hello (1):
{ [1 bytes data]
* TLSv1.2 (IN), TLS handshake, Finished (20):
{ [16 bytes data]
* SSL connection using TLSv1.2 / ECDHE-RSA-CHACHA20-POLY1305
* ALPN, server accepted to use h2
* Server certificate:
* subject: CN=localhost; O=Localhost organization
* start date: Jan 13 05:22:09 2020 GMT
* expire date: Jan 12 05:22:09 2021 GMT
* common name: localhost (matched)
* issuer: O=example Inc.; CN=example.com
* SSL certificate verify ok.
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x7fe244006400)
> GET /productpage HTTP/2
> Host:localhost
> User-Agent: curl/7.54.0
> Accept: */*
>
* Connection state changed (MAX_CONCURRENT_STREAMS updated)!
< HTTP/2 200
< content-type: text/html; charset=utf-8
< content-length: 4415
< server: istio-envoy
< date: Tue, 14 Jan 2020 03:22:30 GMT
< x-envoy-upstream-service-time: 1294
<
{ [4415 bytes data]
* Connection #0 to host localhost left intact
200
If I hit the service from a browser using the URL https://localhost/productpage, it works perfectly fine.
But it stops working after I apply the above YAML. The browser just says
"upstream connect error or disconnect/reset before headers. reset reason: connection termination"
curl after:
curl -o /dev/null -s -v -w "%{http_code}\n" -HHost:localhost --resolve localhost:$SECURE_INGRESS_PORT:$INGRESS_HOST --cacert example.com.crt -HHost:localhost https://localhost:443/productpage
* Address in 'localhost:443:localhost' found illegal!
* Trying ::1...
* TCP_NODELAY set
* Connected to localhost (::1) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:#STRENGTH
* successfully set certificate verify locations:
* CAfile: example.com.crt
CApath: none
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
} [215 bytes data]
* TLSv1.2 (IN), TLS handshake, Server hello (2):
{ [96 bytes data]
* TLSv1.2 (IN), TLS handshake, Certificate (11):
{ [740 bytes data]
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
{ [300 bytes data]
* TLSv1.2 (IN), TLS handshake, Server finished (14):
{ [4 bytes data]
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
} [37 bytes data]
* TLSv1.2 (OUT), TLS change cipher, Client hello (1):
} [1 bytes data]
* TLSv1.2 (OUT), TLS handshake, Finished (20):
} [16 bytes data]
* TLSv1.2 (IN), TLS change cipher, Client hello (1):
{ [1 bytes data]
* TLSv1.2 (IN), TLS handshake, Finished (20):
{ [16 bytes data]
* SSL connection using TLSv1.2 / ECDHE-RSA-CHACHA20-POLY1305
* ALPN, server accepted to use h2
* Server certificate:
* subject: CN=localhost; O=Localhost organization
* start date: Jan 13 05:22:09 2020 GMT
* expire date: Jan 12 05:22:09 2021 GMT
* common name: localhost (matched)
* issuer: O=example Inc.; CN=example.com
* SSL certificate verify ok.
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x7fe13a005200)
> GET /productpage HTTP/2
> Host:localhost
> User-Agent: curl/7.54.0
> Accept: */*
>
* Connection state changed (MAX_CONCURRENT_STREAMS updated)!
< HTTP/2 503
< content-length: 95
< content-type: text/plain
< date: Tue, 14 Jan 2020 03:16:49 GMT
< server: istio-envoy
< x-envoy-upstream-service-time: 57
<
{ [95 bytes data]
* Connection #0 to host localhost left intact
503
My destination rules look like this
(note: it fails only if I change the above YAML; the destination rules seem to be working just fine):
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: productpage
spec:
host: productpage
trafficPolicy:
connectionPool:
http:
h2UpgradePolicy: UPGRADE
tls:
mode: ISTIO_MUTUAL
subsets:
- name: v1
labels:
version: v1
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: reviews
spec:
host: reviews
trafficPolicy:
connectionPool:
http:
h2UpgradePolicy: UPGRADE
tls:
mode: ISTIO_MUTUAL
subsets:
- name: v1
labels:
version: v1
- name: v2
labels:
version: v2
- name: v3
labels:
version: v3
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: ratings
spec:
host: ratings
trafficPolicy:
connectionPool:
http:
h2UpgradePolicy: UPGRADE
tls:
mode: ISTIO_MUTUAL
subsets:
- name: v1
labels:
version: v1
- name: v2
labels:
version: v2
- name: v2-mysql
labels:
version: v2-mysql
- name: v2-mysql-vm
labels:
version: v2-mysql-vm
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: details
spec:
host: details
trafficPolicy:
connectionPool:
http:
h2UpgradePolicy: UPGRADE
tls:
mode: ISTIO_MUTUAL
subsets:
- name: v1
labels:
version: v1
- name: v2
labels:
version: v2
---
A few questions:
1) What could be the cause? How can I fix this? Is this a bug in Istio?
2) I'm able to hit the service from the browser before making the changes, and I've read here that modern browsers only support HTTP/2. Does that mean I'm automatically HTTP/2 compliant? How can I verify this?
3) How can I gather the relevant logs to track which protocol is being used for inter-pod communication?
The issue here is that you are most likely trying to serve HTTP content (the bookinfo app) through an HTTP/2 deployment/cluster configuration.
The bookinfo sample application from the Istio documentation does not support HTTP/2 in its base configuration.
You can verify whether your web server supports the HTTP/2 protocol with this web tool: http2-test
From the other case you linked, it appears you are looking into switching internal cluster communication from HTTP to HTTP/2.
If you choose to continue down this path, I suggest deploying a service like nginx with an HTTP/2 configuration similar to the one found in the nginx documentation, for debugging purposes.
An alternative approach is described in the Google Cloud documentation: use HTTP as the internal protocol in your cluster configuration and web server, and translate the traffic to HTTP/2 on the Istio gateway/external load balancer.
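As a rough illustration of that alternative approach (just a sketch, not a tested manifest): Istio picks the protocol for a service from the port name prefix, so reverting the Service port names from http2 back to http keeps in-mesh traffic on HTTP/1.1, while the gateway can still negotiate HTTP/2 with external clients over TLS. For the productpage Service it would look like this, and the other services would follow the same pattern:
apiVersion: v1
kind: Service
metadata:
  name: productpage
  labels:
    app: productpage
    service: productpage
spec:
  ports:
  - port: 9080
    name: http   # "http" port-name prefix => plain HTTP/1.1 inside the mesh
  selector:
    app: productpage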
I have a Spring Boot project with graceful shutdown configured, deployed on k8s 1.12.7. Here are the logs:
2019-07-20 10:23:16.180 INFO [service,,,] 1 --- [ Thread-7] com.jay.util.GracefulShutdown : Received shutdown event
2019-07-20 10:23:16.180 INFO [service,,,] 1 --- [ Thread-7] com.jay.util.GracefulShutdown : Waiting for 30s to finish
2019-07-20 10:23:16.273 INFO [service,fd964ebaa631a860,75a07c123397e4ff,false] 1 --- [io-8080-exec-10] com.jay.resource.ProductResource : GET /products?id=59
2019-07-20 10:23:16.374 INFO [service,9a569ecd8c448e98,00bc11ef2776d7fb,false] 1 --- [nio-8080-exec-1] com.jay.resource.ProductResource : GET /products?id=68
...
2019-07-20 10:23:33.711 INFO [service,1532d6298acce718,08cfb8085553b02e,false] 1 --- [nio-8080-exec-9] com.jay.resource.ProductResource : GET /products?id=209
2019-07-20 10:23:46.181 INFO [service,,,] 1 --- [ Thread-7] com.jay.util.GracefulShutdown : Resumed after hibernation
2019-07-20 10:23:46.216 INFO [service,,,] 1 --- [ Thread-7] o.s.s.concurrent.ThreadPoolTaskExecutor : Shutting down ExecutorService 'applicationTaskExecutor'
The application received the SIGTERM from Kubernetes at 10:23:16.180. Point #5 of Termination of Pods says that the terminating pod is removed from the endpoints list of the service, but contradicting that, requests were still forwarded to it for 17 seconds (until 10:23:33.711) after the SIGTERM was sent. Is there any configuration missing?
Dockerfile
FROM openjdk:8-jre-slim
MAINTAINER Jay
RUN apt update && apt install -y curl libtcnative-1 gcc && apt clean
ADD build/libs/sample-service.jar /
CMD ["java", "-jar" , "sample-service.jar"]
GracefulShutdown
// https://github.com/spring-projects/spring-boot/issues/4657
class GracefulShutdown(val waitTime: Long, val timeout: Long) : TomcatConnectorCustomizer, ApplicationListener<ContextClosedEvent> {
@Volatile
private var connector: Connector? = null
override fun customize(connector: Connector) {
this.connector = connector
}
override fun onApplicationEvent(event: ContextClosedEvent) {
log.info("Received shutdown event")
val executor = this.connector?.protocolHandler?.executor
if (executor is ThreadPoolExecutor) {
try {
val threadPoolExecutor: ThreadPoolExecutor = executor
log.info("Waiting for ${waitTime}s to finish")
hibernate(waitTime * 1000)
log.info("Resumed after hibernation")
this.connector?.pause()
threadPoolExecutor.shutdown()
if (!threadPoolExecutor.awaitTermination(timeout, TimeUnit.SECONDS)) {
log.warn("Tomcat thread pool did not shut down gracefully within $timeout seconds. Proceeding with forceful shutdown")
threadPoolExecutor.shutdownNow()
if (!threadPoolExecutor.awaitTermination(timeout, TimeUnit.SECONDS)) {
log.error("Tomcat thread pool did not terminate")
}
}
} catch (ex: InterruptedException) {
log.info("Interrupted")
Thread.currentThread().interrupt()
}
}else
this.connector?.pause()
}
private fun hibernate(time: Long){
try {
Thread.sleep(time)
}catch (ex: Exception){}
}
companion object {
private val log = LoggerFactory.getLogger(GracefulShutdown::class.java)
}
}
@Configuration
class GracefulShutdownConfig(@Value("\${app.shutdown.graceful.wait-time:30}") val waitTime: Long,
                             @Value("\${app.shutdown.graceful.timeout:30}") val timeout: Long) {
companion object {
private val log = LoggerFactory.getLogger(GracefulShutdownConfig::class.java)
}
@Bean
fun gracefulShutdown(): GracefulShutdown {
return GracefulShutdown(waitTime, timeout)
}
@Bean
fun webServerFactory(gracefulShutdown: GracefulShutdown): ConfigurableServletWebServerFactory {
log.info("GracefulShutdown configured with wait: ${waitTime}s and timeout: ${timeout}s")
val factory = TomcatServletWebServerFactory()
factory.addConnectorCustomizers(gracefulShutdown)
return factory
}
}
deployment file
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
labels:
k8s-app: service
name: service
spec:
progressDeadlineSeconds: 420
replicas: 1
revisionHistoryLimit: 1
selector:
matchLabels:
k8s-app: service
strategy:
rollingUpdate:
maxSurge: 2
maxUnavailable: 0
type: RollingUpdate
template:
metadata:
labels:
k8s-app: service
spec:
terminationGracePeriodSeconds: 60
containers:
- env:
- name: SPRING_PROFILES_ACTIVE
value: dev
image: service:2
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 20
httpGet:
path: /actuator/health
port: 8080
initialDelaySeconds: 60
periodSeconds: 30
timeoutSeconds: 5
name: service
ports:
- containerPort: 8080
protocol: TCP
readinessProbe:
failureThreshold: 60
httpGet:
path: /actuator/health
port: 8080
initialDelaySeconds: 100
periodSeconds: 10
timeoutSeconds: 5
UPDATE:
Added a custom health check endpoint:
@RestControllerEndpoint(id = "live")
@Component
class LiveEndpoint {
companion object {
private val log = LoggerFactory.getLogger(LiveEndpoint::class.java)
}
@Autowired
private lateinit var gracefulShutdownStatus: GracefulShutdownStatus
@GetMapping
fun live(): ResponseEntity<Any> {
val status = if(gracefulShutdownStatus.isTerminating())
HttpStatus.INTERNAL_SERVER_ERROR.value()
else
HttpStatus.OK.value()
log.info("Status: $status")
return ResponseEntity.status(status).build()
}
}
Changed the livenessProbe,
livenessProbe:
httpGet:
path: /actuator/live
port: 8080
initialDelaySeconds: 100
periodSeconds: 5
timeoutSeconds: 5
failureThreshold: 3
Here are the logs after the change,
2019-07-21 14:13:01.431 INFO [service,9b65b26907f2cf8f,9b65b26907f2cf8f,false] 1 --- [nio-8080-exec-2] com.jay.util.LiveEndpoint : Status: 200
2019-07-21 14:13:01.444 INFO [service,3da259976f9c286c,64b0d5973fddd577,false] 1 --- [nio-8080-exec-3] com.jay.resource.ProductResource : GET /products?id=52
2019-07-21 14:13:01.609 INFO [service,,,] 1 --- [ Thread-7] com.jay.util.GracefulShutdown : Received shutdown event
2019-07-21 14:13:01.610 INFO [service,,,] 1 --- [ Thread-7] com.jay.util.GracefulShutdown : Waiting for 30s to finish
...
2019-07-21 14:13:06.431 INFO [service,002c0da2133cf3b0,002c0da2133cf3b0,false] 1 --- [nio-8080-exec-3] com.jay.util.LiveEndpoint : Status: 500
2019-07-21 14:13:06.433 INFO [service,072abbd7275103ce,d1ead06b4abf2a34,false] 1 --- [nio-8080-exec-4] com.jay.resource.ProductResource : GET /products?id=96
...
2019-07-21 14:13:11.431 INFO [service,35aa09a8aea64ae6,35aa09a8aea64ae6,false] 1 --- [io-8080-exec-10] com.jay.util.LiveEndpoint : Status: 500
2019-07-21 14:13:11.508 INFO [service,a78c924f75538a50,0314f77f21076313,false] 1 --- [nio-8080-exec-2] com.jay.resource.ProductResource : GET /products?id=110
...
2019-07-21 14:13:16.431 INFO [service,38a940dfda03956b,38a940dfda03956b,false] 1 --- [nio-8080-exec-9] com.jay.util.LiveEndpoint : Status: 500
2019-07-21 14:13:16.593 INFO [service,d76e81012934805f,b61cb062154bb7f0,false] 1 --- [io-8080-exec-10] com.jay.resource.ProductResource : GET /products?id=152
...
2019-07-21 14:13:29.634 INFO [service,38a32a20358a7cc4,2029de1ed90e9539,false] 1 --- [nio-8080-exec-6] com.jay.resource.ProductResource : GET /products?id=191
2019-07-21 14:13:31.610 INFO [service,,,] 1 --- [ Thread-7] com.jay.util.GracefulShutdown : Resumed after hibernation
2019-07-21 14:13:31.692 INFO [service,,,] 1 --- [ Thread-7] o.s.s.concurrent.ThreadPoolTaskExecutor : Shutting down ExecutorService 'applicationTaskExecutor'
With a livenessProbe failureThreshold of 3, Kubernetes still served traffic for 13 seconds after the third liveness failure, i.e., from 14:13:16.431 to 14:13:29.634.
UPDATE 2:
The sequence of events (thanks to Eamonn McEvoy)
seconds | healthy | events
0 | ✔ | * liveness probe healthy
1 | ✔ | - SIGTERM
2 | ✔ |
3 | ✔ |
4 | ✔ |
5 | ✔ | * liveness probe unhealthy (1/3)
6 | ✔ |
7 | ✔ |
8 | ✔ |
9 | ✔ |
10 | ✔ | * liveness probe unhealthy (2/3)
11 | ✔ |
12 | ✔ |
13 | ✔ |
14 | ✔ |
15 | ✘ | * liveness probe unhealthy (3/3)
.. | ✔ | * traffic is served
28 | ✔ | * traffic is served
29 | ✘ | * pod restarts
SIGTERM isn't putting the pod into a terminating state immediately. You can see in the logs that your application begins graceful shutdown at 10:23:16.180 and takes more than 20 seconds to complete. Only at that point does the container stop and the pod enter the terminating state.
As far as Kubernetes is concerned, the pod looks OK during the graceful shutdown period. You need to add a liveness probe to your deployment; when it becomes unhealthy, the traffic will stop.
livenessProbe:
httpGet:
path: /actuator/health
port: 8080
initialDelaySeconds: 100
periodSeconds: 10
timeoutSeconds: 5
Update:
This is because you have a failure threshold of 3, so you are allowing traffic for up to 15 seconds after the SIGTERM;
e.g.
seconds | healthy | events
0 | ✔ | * liveness probe healthy
1 | ✔ | - SIGTERM
2 | ✔ |
3 | ✔ |
4 | ✔ |
5 | ✔ | * liveness probe issued
6 | ✔ | .
7 | ✔ | .
8 | ✔ | .
9 | ✔ | .
10 | ✔ | * liveness probe timeout - unhealthy (1/3)
11 | ✔ |
12 | ✔ |
13 | ✔ |
14 | ✔ |
15 | ✔ | * liveness probe issued
16 | ✔ | .
17 | ✔ | .
18 | ✔ | .
19 | ✔ | .
20 | ✔ | * liveness probe timeout - unhealthy (2/3)
21 | ✔ |
22 | ✔ |
23 | ✔ |
24 | ✔ |
25 | ✔ | * liveness probe issued
26 | ✔ | .
27 | ✔ | .
28 | ✔ | .
29 | ✔ | .
30 | ✘ | * liveness probe timeout - unhealthy (3/3)
| | * pod restarts
This assumes that the endpoint returns an unhealthy response during the graceful shutdown. Since you have timeoutSeconds: 5, if the probe simply times out instead, this will take much longer, with a 5 second delay between issuing a liveness probe request and receiving its response. It could also be the case that the container actually dies before the liveness threshold is hit and you are still seeing the original behaviour.
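If you want the probe to trip well inside the graceful shutdown window, one option is to tighten its timings so the three failures accumulate quickly. A rough sketch (the values are illustrative, and the /actuator/live path is the custom endpoint from your update):
livenessProbe:
  httpGet:
    path: /actuator/live
    port: 8080
  initialDelaySeconds: 100
  periodSeconds: 2      # probe every 2s so the three failures accumulate quickly
  timeoutSeconds: 2     # a timed-out probe also counts as a failure
  failureThreshold: 3   # marked unhealthy roughly 6-12s after the endpoint starts returning 500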