Istio 1.1.11 not supporting HTTP/2 over HTTPS?

I recently asked this question on how to upgrade Istio 1.1.11 from HTTP/1.1 to HTTP/2.
I followed the advice, and my resulting services YAML looks like this.
##################################################################################################
# Details service
##################################################################################################
apiVersion: v1
kind: Service
metadata:
  name: details
  labels:
    app: details
    service: details
spec:
  ports:
  - port: 9080
    name: http2
  selector:
    app: details
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: details-v1
  labels:
    app: details
    version: v1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: details
        version: v1
    spec:
      containers:
      - name: details
        image: istio/examples-bookinfo-details-v1:1.13.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9080
---
##################################################################################################
# Ratings service
##################################################################################################
apiVersion: v1
kind: Service
metadata:
  name: ratings
  labels:
    app: ratings
    service: ratings
spec:
  ports:
  - port: 9080
    name: http2
  selector:
    app: ratings
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: ratings-v1
  labels:
    app: ratings
    version: v1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: ratings
        version: v1
    spec:
      containers:
      - name: ratings
        image: istio/examples-bookinfo-ratings-v1:1.13.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9080
---
##################################################################################################
# Reviews service
##################################################################################################
apiVersion: v1
kind: Service
metadata:
  name: reviews
  labels:
    app: reviews
    service: reviews
spec:
  ports:
  - port: 9080
    name: http2
  selector:
    app: reviews
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: reviews-v1
  labels:
    app: reviews
    version: v1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: reviews
        version: v1
    spec:
      containers:
      - name: reviews
        image: istio/examples-bookinfo-reviews-v1:1.13.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9080
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: reviews-v2
  labels:
    app: reviews
    version: v2
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: reviews
        version: v2
    spec:
      containers:
      - name: reviews
        image: istio/examples-bookinfo-reviews-v2:1.13.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9080
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: reviews-v3
  labels:
    app: reviews
    version: v3
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: reviews
        version: v3
    spec:
      containers:
      - name: reviews
        image: istio/examples-bookinfo-reviews-v3:1.13.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9080
---
##################################################################################################
# Productpage services
##################################################################################################
apiVersion: v1
kind: Service
metadata:
  name: productpage
  labels:
    app: productpage
    service: productpage
spec:
  ports:
  - port: 9080
    name: http2
  selector:
    app: productpage
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: productpage-v1
  labels:
    app: productpage
    version: v1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: productpage
        version: v1
    spec:
      containers:
      - name: productpage
        image: istio/examples-bookinfo-productpage-v1:1.13.0
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9080
---
I successfully followed this tutorial to curl the service using HTTPS.
curl before:
curl -o /dev/null -s -v -w "%{http_code}\n" -HHost:localhost --resolve localhost:$SECURE_INGRESS_PORT:$INGRESS_HOST --cacert example.com.crt -HHost:localhost https://localhost:443/productpage
* Address in 'localhost:443:localhost' found illegal!
* Trying ::1...
* TCP_NODELAY set
* Connected to localhost (::1) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:#STRENGTH
* successfully set certificate verify locations:
* CAfile: example.com.crt
CApath: none
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
} [215 bytes data]
* TLSv1.2 (IN), TLS handshake, Server hello (2):
{ [96 bytes data]
* TLSv1.2 (IN), TLS handshake, Certificate (11):
{ [740 bytes data]
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
{ [300 bytes data]
* TLSv1.2 (IN), TLS handshake, Server finished (14):
{ [4 bytes data]
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
} [37 bytes data]
* TLSv1.2 (OUT), TLS change cipher, Client hello (1):
} [1 bytes data]
* TLSv1.2 (OUT), TLS handshake, Finished (20):
} [16 bytes data]
* TLSv1.2 (IN), TLS change cipher, Client hello (1):
{ [1 bytes data]
* TLSv1.2 (IN), TLS handshake, Finished (20):
{ [16 bytes data]
* SSL connection using TLSv1.2 / ECDHE-RSA-CHACHA20-POLY1305
* ALPN, server accepted to use h2
* Server certificate:
* subject: CN=localhost; O=Localhost organization
* start date: Jan 13 05:22:09 2020 GMT
* expire date: Jan 12 05:22:09 2021 GMT
* common name: localhost (matched)
* issuer: O=example Inc.; CN=example.com
* SSL certificate verify ok.
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x7fe244006400)
> GET /productpage HTTP/2
> Host:localhost
> User-Agent: curl/7.54.0
> Accept: */*
>
* Connection state changed (MAX_CONCURRENT_STREAMS updated)!
< HTTP/2 200
< content-type: text/html; charset=utf-8
< content-length: 4415
< server: istio-envoy
< date: Tue, 14 Jan 2020 03:22:30 GMT
< x-envoy-upstream-service-time: 1294
<
{ [4415 bytes data]
* Connection #0 to host localhost left intact
200
If I hit the service from a browser using the URL https://localhost/productpage, it works perfectly fine.
But it stops working after I apply the above YAML. The browser just says:
"upstream connect error or disconnect/reset before headers. reset reason: connection termination"
curl after:
curl -o /dev/null -s -v -w "%{http_code}\n" -HHost:localhost --resolve localhost:$SECURE_INGRESS_PORT:$INGRESS_HOST --cacert example.com.crt -HHost:localhost https://localhost:443/productpage
* Address in 'localhost:443:localhost' found illegal!
* Trying ::1...
* TCP_NODELAY set
* Connected to localhost (::1) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:#STRENGTH
* successfully set certificate verify locations:
* CAfile: example.com.crt
CApath: none
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
} [215 bytes data]
* TLSv1.2 (IN), TLS handshake, Server hello (2):
{ [96 bytes data]
* TLSv1.2 (IN), TLS handshake, Certificate (11):
{ [740 bytes data]
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
{ [300 bytes data]
* TLSv1.2 (IN), TLS handshake, Server finished (14):
{ [4 bytes data]
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
} [37 bytes data]
* TLSv1.2 (OUT), TLS change cipher, Client hello (1):
} [1 bytes data]
* TLSv1.2 (OUT), TLS handshake, Finished (20):
} [16 bytes data]
* TLSv1.2 (IN), TLS change cipher, Client hello (1):
{ [1 bytes data]
* TLSv1.2 (IN), TLS handshake, Finished (20):
{ [16 bytes data]
* SSL connection using TLSv1.2 / ECDHE-RSA-CHACHA20-POLY1305
* ALPN, server accepted to use h2
* Server certificate:
* subject: CN=localhost; O=Localhost organization
* start date: Jan 13 05:22:09 2020 GMT
* expire date: Jan 12 05:22:09 2021 GMT
* common name: localhost (matched)
* issuer: O=example Inc.; CN=example.com
* SSL certificate verify ok.
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x7fe13a005200)
> GET /productpage HTTP/2
> Host:localhost
> User-Agent: curl/7.54.0
> Accept: */*
>
* Connection state changed (MAX_CONCURRENT_STREAMS updated)!
< HTTP/2 503
< content-length: 95
< content-type: text/plain
< date: Tue, 14 Jan 2020 03:16:49 GMT
< server: istio-envoy
< x-envoy-upstream-service-time: 57
<
{ [95 bytes data]
* Connection #0 to host localhost left intact
503
My destination rules look like this
(note: it fails only if I change the above YAML; the destination rules themselves seem to be working just fine):
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: productpage
spec:
  host: productpage
  trafficPolicy:
    connectionPool:
      http:
        h2UpgradePolicy: UPGRADE
    tls:
      mode: ISTIO_MUTUAL
  subsets:
  - name: v1
    labels:
      version: v1
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  trafficPolicy:
    connectionPool:
      http:
        h2UpgradePolicy: UPGRADE
    tls:
      mode: ISTIO_MUTUAL
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
  - name: v3
    labels:
      version: v3
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: ratings
spec:
  host: ratings
  trafficPolicy:
    connectionPool:
      http:
        h2UpgradePolicy: UPGRADE
    tls:
      mode: ISTIO_MUTUAL
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
  - name: v2-mysql
    labels:
      version: v2-mysql
  - name: v2-mysql-vm
    labels:
      version: v2-mysql-vm
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: details
spec:
  host: details
  trafficPolicy:
    connectionPool:
      http:
        h2UpgradePolicy: UPGRADE
    tls:
      mode: ISTIO_MUTUAL
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
---
A few questions:
1) What could be the cause? How can I fix this? Is this a bug in Istio?
2) I'm able to hit the service from the browser before making the changes, and I've read here that modern browsers only support HTTP/2. Does that mean I'm automatically compliant with HTTP/2? How can I verify this?
3) How can I gather the relevant logs to track which protocol is being used for inter-pod communication?

The issue here is that you are most likely trying to serve HTTP content (the bookinfo app) over an HTTP/2 deployment/cluster configuration.
The bookinfo sample application from the Istio documentation does not support HTTP/2 in its base configuration.
You can verify whether your web server supports the HTTP/2 protocol with this web tool: http2-test
From the other case you linked, it appears you are looking into switching internal cluster communication from HTTP to HTTP/2.
If you choose to continue down this path, I suggest deploying a service like nginx with an HTTP/2 configuration similar to the one found in the nginx documentation, for debugging purposes.
Alternatively, as described in the Google Cloud documentation, you can keep HTTP as the internal protocol in your cluster configuration and web server, and translate the traffic to HTTP/2 at the Istio gateway/external load balancer.
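For context on why renaming the ports breaks things: Istio derives the upstream protocol from the Service port name, so name: http2 makes the sidecar send cleartext HTTP/2 (h2c) to the pods, and an application that only speaks HTTP/1.1 then resets the connection, which matches the "connection termination" message above. A minimal sketch of reverting one service to plain HTTP while testing (the rest of the manifest stays as in the question):
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: details
  labels:
    app: details
    service: details
spec:
  ports:
  - port: 9080
    name: http   # "http2" would tell the sidecar to use h2c towards the pod
  selector:
    app: details
EOF
For question 3, a rough way to see which protocol is actually spoken inside the mesh is to probe from another pod; a sketch, assuming curl is available in the productpage image:
POD=$(kubectl get pod -l app=productpage -o jsonpath='{.items[0].metadata.name}')
# HTTP/1.1 probe -- the stock bookinfo services should answer this.
kubectl exec "$POD" -c productpage -- curl -s -o /dev/null -w "%{http_version} %{http_code}\n" http://details:9080/details/0
# Cleartext HTTP/2 probe -- fails if the receiving side cannot speak h2c.
kubectl exec "$POD" -c productpage -- curl -s -o /dev/null -w "%{http_version} %{http_code}\n" --http2-prior-knowledge http://details:9080/details/0
# The sidecar logs (and Envoy access logs, if enabled) also record the protocol per request.
kubectl logs "$POD" -c istio-proxy | tail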

Related

Spring microservices and kubernetes error

I'm using eureka-client, eureka-server, spring-cloud-starter-gateway and Kafka to build my API. With microservices it works like this: the command service sends a request to Kafka for it to run; Kafka is installed on my machine and is not in a container. Command example:
@Autowired
private KafkaTemplate<String, ContactAdmSaveDto> kafkaTemplate;

@Override
public String create(ContactAdmSaveDto data) {
    kafkaTemplate.send("contact-adm-insert", data);
    return "Cadastrado com sucesso!";
}
application.properties command producer:
spring.kafka.producer.bootstrap-servers=springboot:9092
spring.kafka.producer.key-serializer=org.apache.kafka.common.serialization.StringSerializer
spring.kafka.producer.value-serializer=org.springframework.kafka.support.serializer.JsonSerializer
server.port = 30006
spring.application.name = contact-adm-command
eureka.client.serviceUrl.defaultZone = http://springboot:30002/eureka
eureka.instance.hostname=springboot
eureka.instance.prefer-ip-address=true
Example of the Kafka consumer:
@KafkaListener(topics = {"contact-adm-insert"}, groupId = "contact-adm")
public void consume(String record) {
    try {
        ObjectMapper mapper = new ObjectMapper();
        ContactAdm data = mapper.readValue(record, ContactAdm.class);
        ContactAdm cat = new ContactAdm();
        cat.setCell_phone(data.getCell_phone());
        cat.setEmail(data.getEmail());
        cat.setTelephone(data.getTelephone());
        ContactAdm c = contactAdmRepository.save(cat);
        ContactAdmMongo catm = new ContactAdmMongo();
        catm.setCell_phone(data.getCell_phone());
        catm.setEmail(data.getEmail());
        catm.setTelephone(data.getTelephone());
        catm.setContact_id(c.getContact_id());
        contactAdmRepositoryMongo.save(catm);
    } catch (Exception e) {
        logger.info(e.toString());
    }
}
application.properties kafka consumer:
server.port = 30005
spring.kafka.consumer.bootstrap-servers=springboot:9092
spring.kafka.consumer.auto-offset-reset=earliest
spring.kafka.consumer.key-deserializer=org.apache.kafka.common.serialization.StringDeserializer
spring.kafka.consumer.value-deserializer=org.apache.kafka.common.serialization.StringDeserializer
spring.kafka.consumer.group-id=contact-adm
springboot is a hostname for my machine's IP.
Below is my gateway configuration. Note that the Kafka services are not registered in the gateway; they only run when they are called by the command services:
server.port=30000
spring.application.name=routing
eureka.client.serviceUrl.defaultZone=http://springboot:30002/eureka/
eureka.instance.hostname=springboot
eureka.instance.prefer-ip-address=true
#spring.cloud.gateway.discovery.locator.enabled=true
#spring.main.web-application-type=reactive
spring.cloud.gateway.enabled=true
spring.cloud.gateway.routes[0].id=user
spring.cloud.gateway.routes[0].uri=lb://USER
spring.cloud.gateway.routes[0].predicates=Path=/user/**
spring.cloud.gateway.routes[1].id=testes
spring.cloud.gateway.routes[1].uri=lb://TESTES
spring.cloud.gateway.routes[1].predicates=Path=/testes/**
spring.cloud.gateway.routes[2].id=user-command
spring.cloud.gateway.routes[2].uri=lb://USER-COMMAND
spring.cloud.gateway.routes[2].predicates=Path=/user-command/**
spring.cloud.gateway.routes[3].id=category-product-command
spring.cloud.gateway.routes[3].uri=lb://CATEGORY-PRODUCT-COMMAND
spring.cloud.gateway.routes[3].predicates=Path=/category-product-command/**
spring.cloud.gateway.routes[4].id=category-product-query
spring.cloud.gateway.routes[4].uri=lb://CATEGORY-PRODUCT-QUERY
spring.cloud.gateway.routes[4].predicates=Path=/category-product-query/**
spring.cloud.gateway.routes[5].id=cart-purchase-command
spring.cloud.gateway.routes[5].uri=lb://CART-PURCHASE-COMMAND
spring.cloud.gateway.routes[5].predicates=Path=/cart-purchase-command/**
spring.cloud.gateway.routes[6].id=cart-purchase-query
spring.cloud.gateway.routes[6].uri=lb://CART-PURCHASE-QUERY
spring.cloud.gateway.routes[6].predicates=Path=/cart-purchase-query/**
spring.cloud.gateway.routes[7].id=contact-adm-command
spring.cloud.gateway.routes[7].uri=lb://CONTACT-ADM-COMMAND
spring.cloud.gateway.routes[7].predicates=Path=/contact-adm-command/**
spring.cloud.gateway.routes[8].id=contact-adm-query
spring.cloud.gateway.routes[8].uri=lb://CONTACT-ADM-QUERY
spring.cloud.gateway.routes[8].predicates=Path=/contact-adm-query/**
This all works fine, but I want to put it on Kubernetes, so I created the service images with the command mvn spring-boot:build-image and with this Dockerfile:
FROM openjdk:17-alpine
EXPOSE 30000
ADD src/main/resources/routing/public.pem src/main/resources/routing/public.pem
ADD /target/routes-0.0.1-SNAPSHOT.jar routes-0.0.1-SNAPSHOT.jar
ENTRYPOINT ["java","-jar","routes-0.0.1-SNAPSHOT.jar"]
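For reference, a sketch of how each image can be built and pushed by hand (the repository/tag is just the one used later in this post; adjust as needed):
# Hypothetical manual build-and-push, mirroring what spring-boot:build-image automates.
mvn -DskipTests package
docker build -t rafaelribeirosouza86/shopping:routes .
docker push rafaelribeirosouza86/shopping:routes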
I generated all the service images and pushed them to Docker Hub, to be pulled by Kubernetes with the following deployments:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cart-purchase-kafka
  labels:
    app: cart-purchase-kafka
spec:
  replicas: 1
#  strategy:
#    rollingUpdate:
#      maxUnavailable: 0
#      maxSurge: 1
  selector:
    matchLabels:
      run: cart-purchase-kafka
  template:
    metadata:
      labels:
        run: cart-purchase-kafka
    spec:
      containers:
      - name: cart-purchase-kafka
        image: rafaelribeirosouza86/shopping:cart-purchase-kafka
        imagePullPolicy: Always
        ports:
        - containerPort: 30011
          protocol: TCP
      imagePullSecrets:
      - name: regcred
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cart-purchase-command
  labels:
    app: cart-purchase-command
spec:
  replicas: 1
#  strategy:
#    rollingUpdate:
#      maxUnavailable: 0
#      maxSurge: 1
  selector:
    matchLabels:
      run: cart-purchase-command
  template:
    metadata:
      labels:
        run: cart-purchase-command
    spec:
      containers:
      - name: cart-purchase-command
        image: rafaelribeirosouza86/shopping:cart-purchase-command
        imagePullPolicy: Always
        ports:
        - containerPort: 30012
          protocol: TCP
      imagePullSecrets:
      - name: regcred
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cart-purchase-query
  labels:
    app: cart-purchase-query
spec:
  replicas: 1
#  strategy:
#    rollingUpdate:
#      maxUnavailable: 0
#      maxSurge: 1
  selector:
    matchLabels:
      run: cart-purchase-query
  template:
    metadata:
      labels:
        run: cart-purchase-query
    spec:
      containers:
      - name: cart-purchase-query
        image: rafaelribeirosouza86/shopping:cart-purchase-query
        imagePullPolicy: Always
        ports:
        - containerPort: 30010
          protocol: TCP
      imagePullSecrets:
      - name: regcred
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: user
  labels:
    app: user
spec:
  replicas: 1
#  strategy:
#    rollingUpdate:
#      maxUnavailable: 0
#      maxSurge: 1
  selector:
    matchLabels:
      run: user
  template:
    metadata:
      labels:
        run: user
    spec:
      containers:
      - name: user
        image: rafaelribeirosouza86/shopping:user
        imagePullPolicy: Always
        ports:
        - containerPort: 30015
          protocol: TCP
      imagePullSecrets:
      - name: regcred
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-command
  labels:
    app: user-command
spec:
  replicas: 1
#  strategy:
#    rollingUpdate:
#      maxUnavailable: 0
#      maxSurge: 1
  selector:
    matchLabels:
      run: user-command
  template:
    metadata:
      labels:
        run: user-command
    spec:
      containers:
      - name: user-command
        image: rafaelribeirosouza86/shopping:user-command
        imagePullPolicy: Always
        ports:
        - containerPort: 30004
          protocol: TCP
      imagePullSecrets:
      - name: regcred
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-command-insert
  labels:
    app: user-command-insert
spec:
  replicas: 1
#  strategy:
#    rollingUpdate:
#      maxUnavailable: 0
#      maxSurge: 1
  selector:
    matchLabels:
      run: user-command-insert
  template:
    metadata:
      labels:
        run: user-command-insert
    spec:
      containers:
      - name: user-command-insert
        image: rafaelribeirosouza86/shopping:user-command-insert
        imagePullPolicy: Always
        ports:
        - containerPort: 30003
          protocol: TCP
      imagePullSecrets:
      - name: regcred
The big problem so far is that when I run it without Kubernetes it works fine, but when I create the pods I get errors like:
NAME READY STATUS RESTARTS AGE
category-product-command-565f758d5d-4wwnf 0/1 Evicted 0 10m
category-product-command-565f758d5d-54pd5 0/1 Error 0 29m
category-product-command-565f758d5d-hmb8k 0/1 Pending 0 2m47s
category-product-command-565f758d5d-k6gmf 0/1 Evicted 0 10m
category-product-command-565f758d5d-lkd25 0/1 Error 0 41m
category-product-command-565f758d5d-ltbnl 0/1 Evicted 0 10m
category-product-command-565f758d5d-m7wwx 0/1 ContainerStatusUnknown 1 35m
category-product-command-565f758d5d-p42td 0/1 Error 0 54m
category-product-command-565f758d5d-pmfmh 0/1 Error 0 10m
category-product-command-565f758d5d-qbthd 0/1 Evicted 0 10m
category-product-command-565f758d5d-qf969 0/1 Evicted 0 10m
category-product-command-565f758d5d-twjvq 0/1 Evicted 0 10m
category-product-command-565f758d5d-vfrwq 0/1 ContainerStatusUnknown 1 22m
category-product-command-565f758d5d-xftpq 0/1 Error 0 47m
category-product-command-565f758d5d-xsg47 0/1 Evicted 0 10m
category-product-kafka-67d4fdbf76-262n8 0/1 Evicted 0 35m
category-product-kafka-67d4fdbf76-2klh8 0/1 Evicted 0 23m
category-product-kafka-67d4fdbf76-2mgp8 0/1 Evicted 0 23m
category-product-kafka-67d4fdbf76-2rlmm 0/1 Evicted 0 23m
category-product-kafka-67d4fdbf76-2z57p 0/1 Evicted 0 35m
category-product-kafka-67d4fdbf76-424pj 0/1 Evicted 0 23m
category-product-kafka-67d4fdbf76-4cnp2 0/1 Evicted 0 23m
category-product-kafka-67d4fdbf76-4v586 0/1 Evicted 0 23m
category-product-kafka-67d4fdbf76-5d7sg 0/1 Evicted 0 35m
category-product-kafka-67d4fdbf76-5mndm 0/1 Evicted 0 23m
category-product-kafka-67d4fdbf76-5rcgg 0/1 Evicted 0 23m
category-product-kafka-67d4fdbf76-5rlz7 0/1 Evicted 0 23m
category-product-kafka-67d4fdbf76-69w7h 0/1 Evicted 0 23m
category-product-kafka-67d4fdbf76-6czbj 0/1 Evicted 0 36m
category-product-kafka-67d4fdbf76-6rtvb 0/1 Evicted 0 35m
category-product-kafka-67d4fdbf76-6t4km 0/1 Evicted 0 23m
category-product-kafka-67d4fdbf76-7pkd7 0/1 Evicted 0 23m
category-product-kafka-67d4fdbf76-99z2b 0/1 Evicted 0 23m
category-product-kafka-67d4fdbf76-9lfqq 0/1 Error 1 (42m ago) 53m
category-product-kafka-67d4fdbf76-9nrm4 0/1 Evicted 0 23m
category-product-kafka-67d4fdbf76-bzx52 0/1 Evicted 0 35m
category-product-kafka-67d4fdbf76-d62b5 0/1 Evicted 0 23m
category-product-kafka-67d4fdbf76-dbhp4 0/1 Evicted 0 23m
category-product-kafka-67d4fdbf76-dscdk 0/1 Evicted 0 35m
category-product-kafka-67d4fdbf76-fnjdd 0/1 Evicted 0 23m
category-product-kafka-67d4fdbf76-gnbnp 0/1 Evicted 0 35m
category-product-kafka-67d4fdbf76-gsrs8 0/1 Evicted 0 35m
category-product-kafka-67d4fdbf76-h69px 0/1 Evicted 0 35m
category-product-kafka-67d4fdbf76-hcljj 0/1 Evicted 0 23m
category-product-kafka-67d4fdbf76-hmxmk 0/1 Evicted 0 35m
category-product-kafka-67d4fdbf76-hqngl 0/1 Evicted 0 35m
category-product-kafka-67d4fdbf76-j2bx2 0/1 Evicted 0 35m
category-product-kafka-67d4fdbf76-jjpkl 0/1 ContainerStatusUnknown 1 35m
category-product-kafka-67d4fdbf76-jqzlr 0/1 Evicted 0 23m
category-product-kafka-67d4fdbf76-kbc25 0/1 Evicted 0 23m
category-product-kafka-67d4fdbf76-khljn 0/1 Evicted 0 23m
category-product-kafka-67d4fdbf76-kqht4 0/1 Error 0 54m
category-product-kafka-67d4fdbf76-kqxf5 0/1 Evicted 0 23m
category-product-kafka-67d4fdbf76-l52p9 0/1 Evicted 0 23m
category-product-kafka-67d4fdbf76-l8x4p 0/1 Evicted 0 35m
category-product-kafka-67d4fdbf76-ljhrm 0/1 Evicted 0 23m
category-product-kafka-67d4fdbf76-m6l8c 0/1 Evicted 0 35m
category-product-kafka-67d4fdbf76-n49br 0/1 Evicted 0 23m
category-product-kafka-67d4fdbf76-q4z79 0/1 Evicted 0 23m
category-product-kafka-67d4fdbf76-qgqch 0/1 ContainerStatusUnknown 1 15m
category-product-kafka-67d4fdbf76-qjrf8 0/1 Evicted 0 35m
category-product-kafka-67d4fdbf76-qntzw 0/1 Evicted 0 23m
category-product-kafka-67d4fdbf76-qv7s9 0/1 Evicted 0 23m
category-product-kafka-67d4fdbf76-rkhq6 0/1 Evicted 0 35m
category-product-kafka-67d4fdbf76-rl2g6 0/1 Evicted 0 35m
category-product-kafka-67d4fdbf76-rl7dl 0/1 Evicted 0 35m
category-product-kafka-67d4fdbf76-sbpw6 0/1 Evicted 0 35m
category-product-kafka-67d4fdbf76-slww4 0/1 Evicted 0 23m
category-product-kafka-67d4fdbf76-ssm24 0/1 Evicted 0 23m
category-product-kafka-67d4fdbf76-txtjw 0/1 Evicted 0 23m
category-product-kafka-67d4fdbf76-v9976 0/1 Evicted 0 23m
category-product-kafka-67d4fdbf76-vl9gp 0/1 Evicted 0 23m
category-product-kafka-67d4fdbf76-vns2z 0/1 Evicted 0 23m
category-product-kafka-67d4fdbf76-vqcz9 0/1 Evicted 0 35m
category-product-kafka-67d4fdbf76-vst56 0/1 Evicted 0 23m
category-product-kafka-67d4fdbf76-w5hpg 1/1 Running 0 8m53s
category-product-kafka-67d4fdbf76-w8tbb 0/1 Evicted 0 35m
category-product-kafka-67d4fdbf76-wpkwb 0/1 Evicted 0 23m
category-product-kafka-67d4fdbf76-wvmtt 0/1 Evicted 0 23m
category-product-kafka-67d4fdbf76-xp5t6 0/1 Evicted 0 23m
category-product-kafka-67d4fdbf76-xtqwp 0/1 Evicted 0 23m
category-product-kafka-67d4fdbf76-z56s4 0/1 Error 0 23m
category-product-query-58897978b9-7csd7 1/1 Running 0 54m
contact-adm-command-56bb8f75db-9pvvz 1/1 Running 0 54m
contact-adm-kafka-858d968996-tgqkn 1/1 Running 0 54m
contact-adm-query-6b6b7487bb-2mqp6 1/1 Running 0 54m
gateway-7cbcb7bc4c-48b42 0/1 Pending 0 3m35s
gateway-7cbcb7bc4c-672mb 0/1 Evicted 0 42m
gateway-7cbcb7bc4c-d9hxn 0/1 ContainerStatusUnknown 1 42m
gateway-7cbcb7bc4c-g97cs 0/1 Error 0 16m
gateway-7cbcb7bc4c-hpntm 0/1 Evicted 0 42m
gateway-7cbcb7bc4c-js7nc 0/1 Evicted 0 42m
gateway-7cbcb7bc4c-lctsk 0/1 Error 0 30m
gateway-7cbcb7bc4c-stwbk 0/1 Evicted 0 42m
gateway-7cbcb7bc4c-zl4rb 0/1 Error 0 54m
routes-cb9ffbb47-tmmw9 1/1 Running 0
An error in the container:
Caused by: org.apache.hc.core5.http.NoHttpResponseException: springboot:30002 failed to respond
Does anyone have any idea what the problem might be?
[SOLVED]
I made it work with StatefulSets; see how my deployments look now:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: gateway
spec:
  serviceName: "gateway"
  podManagementPolicy: "Parallel"
  replicas: 1
  selector:
    matchLabels:
      app: gateway
  template:
    metadata:
      labels:
        app: gateway
    spec:
      containers:
      - name: gateway
        #command: ["java", "-Djava.security.egd=file:/dev/./urandom", "-jar","gateway-0.0.1-SNAPSHOT.jar"]
        image: rafaelribeirosouza86/shopping:gateway
        imagePullPolicy: Always
        ports:
        - containerPort: 30002
      imagePullSecrets:
      - name: regcred
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: routes
spec:
  serviceName: "routes"
  podManagementPolicy: "Parallel"
  replicas: 1
  selector:
    matchLabels:
      app: routes
  template:
    metadata:
      labels:
        app: routes
    spec:
      containers:
      - name: routes
        #command: ["java", "-Djava.security.egd=file:/dev/./urandom", "-jar","routes-0.0.1-SNAPSHOT.jar"]
        image: rafaelribeirosouza86/shopping:routes
        imagePullPolicy: Always
        ports:
        - containerPort: 30000
      imagePullSecrets:
      - name: regcred
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: user
spec:
  serviceName: "user"
  podManagementPolicy: "Parallel"
  replicas: 1
  selector:
    matchLabels:
      app: user
  template:
    metadata:
      labels:
        app: user
    spec:
      containers:
      - name: user
        #command: ["java", "-Djava.security.egd=file:/dev/./urandom", "-jar","user-0.0.1-SNAPSHOT.jar"]
        image: rafaelribeirosouza86/shopping:user
        imagePullPolicy: Always
        ports:
        - containerPort: 30015
      imagePullSecrets:
      - name: regcred
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: user-command
spec:
  serviceName: "user-command"
  podManagementPolicy: "Parallel"
  replicas: 1
  selector:
    matchLabels:
      app: user-command
  template:
    metadata:
      labels:
        app: user-command
    spec:
      containers:
      - name: user-command
        #command: ["java", "-Djava.security.egd=file:/dev/./urandom", "-jar","user-command-0.0.1-SNAPSHOT.jar"]
        image: rafaelribeirosouza86/shopping:user-command
        imagePullPolicy: Always
        ports:
        - containerPort: 30004
      imagePullSecrets:
      - name: regcred
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: user-command-insert
spec:
  serviceName: "user-command-insert"
  podManagementPolicy: "Parallel"
  replicas: 1
  selector:
    matchLabels:
      app: user-command-insert
  template:
    metadata:
      labels:
        app: user-command-insert
    spec:
      containers:
      - name: user-command-insert
        #command: ["java", "-Djava.security.egd=file:/dev/./urandom", "-jar","user-command-insert-0.0.1-SNAPSHOT.jar"]
        image: rafaelribeirosouza86/shopping:user-command-insert
        imagePullPolicy: Always
        ports:
        - containerPort: 30003
      imagePullSecrets:
      - name: regcred
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: category-product-kafka
spec:
  serviceName: "category-product-kafka"
  podManagementPolicy: "Parallel"
  replicas: 1
  selector:
    matchLabels:
      app: category-product-kafka
  template:
    metadata:
      labels:
        app: category-product-kafka
    spec:
      containers:
      - name: category-product-kafka
        #command: ["java", "-Djava.security.egd=file:/dev/./urandom", "-jar","category-product-kafka-0.0.1-SNAPSHOT.jar"]
        image: rafaelribeirosouza86/shopping:category-product-kafka
        imagePullPolicy: Always
        ports:
        - containerPort: 30008
      imagePullSecrets:
      - name: regcred
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: category-product-command
spec:
  serviceName: "category-product-command"
  podManagementPolicy: "Parallel"
  replicas: 1
  selector:
    matchLabels:
      app: category-product-command
  template:
    metadata:
      labels:
        app: category-product-command
    spec:
      containers:
      - name: category-product-command
        #command: ["java", "-Djava.security.egd=file:/dev/./urandom", "-jar","category-product-command-0.0.1-SNAPSHOT.jar"]
        image: rafaelribeirosouza86/shopping:category-product-command
        imagePullPolicy: Always
        ports:
        - containerPort: 31533
      imagePullSecrets:
      - name: regcred
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: category-product-query
spec:
  serviceName: "category-product-query"
  podManagementPolicy: "Parallel"
  replicas: 1
  selector:
    matchLabels:
      app: category-product-query
  template:
    metadata:
      labels:
        app: category-product-query
    spec:
      containers:
      - name: category-product-query
        #command: ["java", "-Djava.security.egd=file:/dev/./urandom", "-jar","category-product-query-0.0.1-SNAPSHOT.jar"]
        image: rafaelribeirosouza86/shopping:category-product-query
        imagePullPolicy: Always
        ports:
        - containerPort: 30007
      imagePullSecrets:
      - name: regcred
Services:
apiVersion: v1
kind: Service
metadata:
  name: gateway-service
  namespace: default
spec:
#  clusterIP: 10.99.233.224
  ports:
  - port: 30002
    protocol: TCP
    targetPort: 30002
    nodePort: 30002
#  externalTrafficPolicy: Local
  type: NodePort
  selector:
    app: gateway
#  type: LoadBalancer
#status:
#  loadBalancer: {}
---
apiVersion: v1
kind: Service
metadata:
  name: routes-service
  namespace: default
spec:
#  clusterIP: 10.99.233.224
#  protocol: ##The default is TCP
#  port: ##Exposes the service within the cluster. Also, other Pods use this to access the Service
#  targetPort: ##The service sends request while containers accept traffic on this port.
  ports:
  - port: 30000
    protocol: TCP
    targetPort: 30000
    nodePort: 30000
#  externalTrafficPolicy: Local
  type: NodePort
  selector:
    app: routes
#  type: LoadBalancer
#status:
#  loadBalancer: {}
#  type: ClusterIP
#  type: LoadBalancer
#  type: NodePort
---
apiVersion: v1
kind: Service
metadata:
  name: contact-adm-query-service
  namespace: default
spec:
#  clusterIP: 10.99.233.224
#  protocol: ##The default is TCP
#  port: ##Exposes the service within the cluster. Also, other Pods use this to access the Service
#  targetPort: ##The service sends request while containers accept traffic on this port.
  ports:
  - port: 30001
    protocol: TCP
    targetPort: 30001
    nodePort: 30001
#  externalTrafficPolicy: Local
  type: NodePort
  selector:
    app: contact-adm-query
#  type: LoadBalancer
#status:
#  loadBalancer: {}
#  type: ClusterIP
#  type: LoadBalancer
#  type: NodePort
---
apiVersion: v1
kind: Service
metadata:
  name: contact-adm-kafka-service
  namespace: default
spec:
#  clusterIP: 10.99.233.224
#  protocol: ##The default is TCP
#  port: ##Exposes the service within the cluster. Also, other Pods use this to access the Service
#  targetPort: ##The service sends request while containers accept traffic on this port.
  ports:
  - port: 30005
    protocol: TCP
    targetPort: 30005
    nodePort: 30005
#  externalTrafficPolicy: Local
  type: NodePort
  selector:
    app: contact-adm-kafka
---
apiVersion: v1
kind: Service
metadata:
  name: contact-adm-command-service
  namespace: default
spec:
#  clusterIP: 10.99.233.224
#  protocol: ##The default is TCP
#  port: ##Exposes the service within the cluster. Also, other Pods use this to access the Service
#  targetPort: ##The service sends request while containers accept traffic on this port.
  ports:
  - port: 30006
    protocol: TCP
    targetPort: 30006
    nodePort: 30006
#  externalTrafficPolicy: Local
  type: NodePort
  selector:
    app: contact-adm-command
---
apiVersion: v1
kind: Service
metadata:
  name: category-product-kafka-service
  namespace: default
spec:
#  clusterIP: 10.99.233.224
#  protocol: ##The default is TCP
#  port: ##Exposes the service within the cluster. Also, other Pods use this to access the Service
#  targetPort: ##The service sends request while containers accept traffic on this port.
  ports:
  - port: 30008
    protocol: TCP
    targetPort: 30008
    nodePort: 30008
#  externalTrafficPolicy: Local
  type: NodePort
  selector:
    app: category-product-kafka
---
apiVersion: v1
kind: Service
metadata:
  name: category-product-command-service
  namespace: default
spec:
#  clusterIP: 10.99.233.224
#  protocol: ##The default is TCP
#  port: ##Exposes the service within the cluster. Also, other Pods use this to access the Service
#  targetPort: ##The service sends request while containers accept traffic on this port.
  ports:
  - port: 31533
    protocol: TCP
    targetPort: 31533
    nodePort: 31533
#  externalTrafficPolicy: Local
  type: NodePort
  selector:
    app: category-product-command
---
apiVersion: v1
kind: Service
metadata:
  name: category-product-query-service
  namespace: default
spec:
#  clusterIP: 10.99.233.224
#  protocol: ##The default is TCP
#  port: ##Exposes the service within the cluster. Also, other Pods use this to access the Service
#  targetPort: ##The service sends request while containers accept traffic on this port.
  ports:
  - port: 30007
    protocol: TCP
    targetPort: 30007
    nodePort: 30007
#  externalTrafficPolicy: Local
  type: NodePort
  selector:
    app: category-product-query
---
apiVersion: v1
kind: Service
metadata:
  name: cart-purchase-kafka-service
  namespace: default
spec:
#  clusterIP: 10.99.233.224
#  protocol: ##The default is TCP
#  port: ##Exposes the service within the cluster. Also, other Pods use this to access the Service
#  targetPort: ##The service sends request while containers accept traffic on this port.
  ports:
  - port: 30011
    protocol: TCP
    targetPort: 30011
    nodePort: 30011
#  externalTrafficPolicy: Local
  type: NodePort
  selector:
    app: cart-purchase-kafka
---
apiVersion: v1
kind: Service
metadata:
  name: cart-purchase-command-service
  namespace: default
spec:
#  clusterIP: 10.99.233.224
#  protocol: ##The default is TCP
#  port: ##Exposes the service within the cluster. Also, other Pods use this to access the Service
#  targetPort: ##The service sends request while containers accept traffic on this port.
  ports:
  - port: 30012
    protocol: TCP
    targetPort: 30012
    nodePort: 30012
#  externalTrafficPolicy: Local
  type: NodePort
  selector:
    app: cart-purchase-command
---
apiVersion: v1
kind: Service
metadata:
  name: cart-purchase-query-service
  namespace: default
spec:
#  clusterIP: 10.99.233.224
#  protocol: ##The default is TCP
#  port: ##Exposes the service within the cluster. Also, other Pods use this to access the Service
#  targetPort: ##The service sends request while containers accept traffic on this port.
  ports:
  - port: 30010
    protocol: TCP
    targetPort: 30010
    nodePort: 30010
#  externalTrafficPolicy: Local
  type: NodePort
  selector:
    app: cart-purchase-query
---
apiVersion: v1
kind: Service
metadata:
  name: user-command-insert-service
  namespace: default
spec:
#  clusterIP: 10.99.233.224
#  protocol: ##The default is TCP
#  port: ##Exposes the service within the cluster. Also, other Pods use this to access the Service
#  targetPort: ##The service sends request while containers accept traffic on this port.
  ports:
  - port: 30003
    protocol: TCP
    targetPort: 30003
    nodePort: 30003
#  externalTrafficPolicy: Local
  type: NodePort
  selector:
    app: user-command-insert
---
apiVersion: v1
kind: Service
metadata:
  name: user-command-service
  namespace: default
spec:
#  clusterIP: 10.99.233.224
#  protocol: ##The default is TCP
#  port: ##Exposes the service within the cluster. Also, other Pods use this to access the Service
#  targetPort: ##The service sends request while containers accept traffic on this port.
  ports:
  - port: 30004
    protocol: TCP
    targetPort: 30004
    nodePort: 30004
#  externalTrafficPolicy: Local
  type: NodePort
  selector:
    app: user-command
---
apiVersion: v1
kind: Service
metadata:
  name: user-service
  namespace: default
spec:
#  clusterIP: 10.99.233.224
#  protocol: ##The default is TCP
#  port: ##Exposes the service within the cluster. Also, other Pods use this to access the Service
#  targetPort: ##The service sends request while containers accept traffic on this port.
  ports:
  - port: 30015
    protocol: TCP
    targetPort: 30015
    nodePort: 30015
#  externalTrafficPolicy: Local
  type: NodePort
  selector:
    app: user
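A quick sanity check after applying these (the file names below are assumptions; the resource names are the ones above):
kubectl apply -f statefulsets.yaml -f services.yaml
kubectl get statefulsets,pods,svc -n default
# Pick one workload and confirm from its log that it registered with Eureka.
kubectl logs statefulset/gateway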

https: 404 not found with cert-manager and k3d

I'm following the cert-manager guide for TLS with a local k3d cluster, but when trying to open the kuard site over HTTPS, Firefox warns me about a self-signed cert, and then I get the error 404 page not found.
What I did:
create k3d cluster: k3d cluster create certs -p 9080:80@loadbalancer -p 9443:443@loadbalancer
apply kuard deployment:
kubectl apply -f - << EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kuard
spec:
  selector:
    matchLabels:
      app: kuard
  replicas: 1
  template:
    metadata:
      labels:
        app: kuard
    spec:
      containers:
      - image: gcr.io/kuar-demo/kuard-amd64:1
        imagePullPolicy: Always
        name: kuard
        ports:
        - containerPort: 8080
EOF
apply kuard service:
kubectl apply -f - << EOF
apiVersion: v1
kind: Service
metadata:
  name: kuard
spec:
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
  selector:
    app: kuard
EOF
deploy cert-manager:
helm install \
cert-manager jetstack/cert-manager \
--namespace cert-manager \
--create-namespace \
--version v1.9.1 \
--set installCRDs=true
create self-signed cluster-issuer:
kubectl apply -f - <<EOF
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: selfsigned
spec:
  selfSigned: {}
EOF
apply ingress resource:
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kuard
  annotations:
    kubernetes.io/ingress.class: "nginx"
    cert-manager.io/issuer: "selfsigned"
spec:
  tls:
  - hosts:
    - example.localhost
    secretName: quickstart-example-tls
  rules:
  - host: example.localhost
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: kuard
            port:
              number: 80
EOF
Now reaching kuard works without encryption:
curl -kivl -H 'Host: example.localhost' 'http://127.0.1:9080'
* Uses proxy env variable NO_PROXY == 'localhost,127.0.0.1'
* Trying 127.0.0.1:9080...
* Connected to 127.0.0.1 (127.0.0.1) port 9080 (#0)
> GET / HTTP/1.1
> Host: example.localhost
> User-Agent: curl/7.84.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
HTTP/1.1 200 OK
< Content-Length: 1669
Content-Length: 1669
< Content-Type: text/html
Content-Type: text/html
< Date: Thu, 25 Aug 2022 08:41:31 GMT
Date: Thu, 25 Aug 2022 08:41:31 GMT
<
<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>KUAR Demo</title>
<link rel="stylesheet" href="/static/css/bootstrap.min.css">
<link rel="stylesheet" href="/static/css/styles.css">
<script>
var pageContext = {"hostname":"kuard-5cd5556bc9-vlrtc","addrs":["10.42.0.9"],"version":"v0.8.1-1","versionColor":"hsl(18,100%,50%)","requestDump":"GET / HTTP/1.1\r\nHost: example.localhost\r\nAccept: */*\r\nAccept-Encoding: gzip\r\nUser-Agent: curl/7.84.0\r\nX-Forwarded-For: 10.42.0.1\r\nX-Forwarded-Host: example.localhost\r\nX-Forwarded-Port: 80\r\nX-Forwarded-Proto: http\r\nX-Forwarded-Server: traefik-6b84f7cbc-4t99k\r\nX-Real-Ip: 10.42.0.1","requestProto":"HTTP/1.1","requestAddr":"10.42.0.8:49432"}
</script>
</head>
<svg style="position: absolute; width: 0; height: 0; overflow: hidden;" version="1.1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
<defs>
<symbol id="icon-power" viewBox="0 0 32 32">
<title>power</title>
<path class="path1" d="M12 0l-12 16h12l-8 16 28-20h-16l12-12z"></path>
</symbol>
<symbol id="icon-notification" viewBox="0 0 32 32">
<title>notification</title>
<path class="path1" d="M16 3c-3.472 0-6.737 1.352-9.192 3.808s-3.808 5.72-3.808 9.192c0 3.472 1.352 6.737 3.808 9.192s5.72 3.808 9.192 3.808c3.472 0 6.737-1.352 9.192-3.808s3.808-5.72 3.808-9.192c0-3.472-1.352-6.737-3.808-9.192s-5.72-3.808-9.192-3.808zM16 0v0c8.837 0 16 7.163 16 16s-7.163 16-16 16c-8.837 0-16-7.163-16-16s7.163-16 16-16zM14 22h4v4h-4zM14 6h4v12h-4z"></path>
</symbol>
</defs>
</svg>
<body>
<div id="root"></div>
<script src="/built/bundle.js" type="text/javascript"></script>
</body>
</html>
But when using HTTPS, I get a 404 not found error, and curl shows the self-signed cert:
curl -kivl -H 'Host: example.localhost' 'https://127.0.1:9080'
* Uses proxy env variable NO_PROXY == 'localhost,127.0.0.1'
* Trying 127.0.0.1:9080...
* Connected to 127.0.0.1 (127.0.0.1) port 9080 (#0)
* ALPN: offers h2
* ALPN: offers http/1.1
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
* TLSv1.3 (IN), TLS handshake, Certificate (11):
* TLSv1.3 (IN), TLS handshake, CERT verify (15):
* TLSv1.3 (IN), TLS handshake, Finished (20):
* TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.3 (OUT), TLS handshake, Finished (20):
* SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384
* ALPN: server accepted h2
* Server certificate:
* subject: CN=TRAEFIK DEFAULT CERT
* start date: Aug 25 08:19:03 2022 GMT
* expire date: Aug 25 08:19:03 2023 GMT
* issuer: CN=TRAEFIK DEFAULT CERT
* SSL certificate verify result: self signed certificate (18), continuing anyway.
* Using HTTP2, server supports multiplexing
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* h2h3 [:method: GET]
* h2h3 [:path: /]
* h2h3 [:scheme: https]
* h2h3 [:authority: example.localhost]
* h2h3 [user-agent: curl/7.84.0]
* h2h3 [accept: */*]
* Using Stream ID: 1 (easy handle 0x559b1a1b48a0)
> GET / HTTP/2
> Host: example.localhost
> user-agent: curl/7.84.0
> accept: */*
>
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* Connection state changed (MAX_CONCURRENT_STREAMS == 250)!
< HTTP/2 404
HTTP/2 404
< content-type: text/plain; charset=utf-8
content-type: text/plain; charset=utf-8
< x-content-type-options: nosniff
x-content-type-options: nosniff
< content-length: 19
content-length: 19
< date: Thu, 25 Aug 2022 08:42:16 GMT
date: Thu, 25 Aug 2022 08:42:16 GMT
<
404 page not found
* Connection #0 to host 127.0.0.1 left intact
How can I change my deployment to get the kuard site with my self-signed cert over HTTPS?
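Two details in the trace above may be worth checking before changing the deployment: the serving certificate is CN=TRAEFIK DEFAULT CERT, which suggests k3d's bundled Traefik is terminating TLS even though the ingress is annotated for the nginx class, and the cert-manager.io/issuer annotation refers to a namespaced Issuer, while the cluster-scoped resource created here would normally be referenced with cert-manager.io/cluster-issuer. A sketch of commands to inspect what was actually created:
# Did cert-manager issue anything for the ingress?
kubectl get certificate,certificaterequest -A
kubectl get secret quickstart-example-tls
kubectl describe ingress kuard
# k3d ships Traefik by default; check which ingress controllers are running.
kubectl get pods -n kube-system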

How to download file from cloudflare using curl?

As part of a pipeline for building a Debian package of the popular game Factorio, I need to download the game's distribution files. This works without any problems in a GUI web browser.
I tried to download the file using curl, but I still cannot solve a problem with the CSRF token:
#!/bin/sh
LOGIN=""
PASSWD=""
VERSION=`curl -s "https://api.github.com/repos/wube/factorio-data/tags" | jq -r '.[0].name'`
ARCHIVE="factorio_alpha_x64_${VERSION}.tar.xz"
CSRF=`curl -s -c ~/cookie.txt https://www.factorio.com/login | grep csrf_token | awk -F'"' '{print $8}'`
curl -v -c ~/cookie.txt -b ~/cookie.txt -H "X-CSRF-Token: ${CSRF}" -X POST -F "csrf_token=${CSRF}" -F "username_or_email=${LOGIN}" -F "password=${PASSWD}" https://www.factorio.com/login
curl -c ~/cookie.txt https://www.factorio.com/get-download/${VERSION}/alpha/linux64 > ${ARCHIVE}
The script fails every time with the final response:
vitex@exiv:~/Projects/Packaging/Games/factorio-deb$ ./downloader.sh
Note: Unnecessary use of -X or --request, POST is already inferred.
* Trying 104.26.14.88:443...
* Connected to www.factorio.com (104.26.14.88) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/pki/tls/certs/ca-bundle.crt
* CApath: none
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
* TLSv1.3 (IN), TLS handshake, Certificate (11):
* TLSv1.3 (IN), TLS handshake, CERT verify (15):
* TLSv1.3 (IN), TLS handshake, Finished (20):
* TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.3 (OUT), TLS handshake, Finished (20):
* SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384
* ALPN, server accepted to use h2
* Server certificate:
* subject: C=US; ST=California; L=San Francisco; O=Cloudflare, Inc.; CN=sni.cloudflaressl.com
* start date: Jul 6 00:00:00 2021 GMT
* expire date: Jul 5 23:59:59 2022 GMT
* subjectAltName: host "www.factorio.com" matched cert's "*.factorio.com"
* issuer: C=US; O=Cloudflare, Inc.; CN=Cloudflare Inc ECC CA-3
* SSL certificate verify ok.
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x55eea0a17d10)
> POST /login HTTP/2
> Host: www.factorio.com
> user-agent: curl/7.76.1
> accept: */*
> cookie: session=eyJjc3JmX3Rva2VuIjoiMTk2MmVlODBkMDJiMGFhODQ0N2U1OGZiYTEyZGQzMThjZTY5MTFkZCJ9.YXicKQ.D93FhsjkngmtONrHEFB6P0d4w8Y
> x-csrf-token: IjE5NjJlZTgwZDAyYjBhYTg0NDdlNThmYmExMmRkMzE4Y2U2OTExZGQi.YXicKQ.HKcRPgEkSRVU4_Xat-dCV31sHWg
> content-length: 461
> content-type: multipart/form-data; boundary=------------------------c63b0f58b7ac0deb
>
* We are completely uploaded and fine
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* old SSL session ID is stale, removing
* Connection state changed (MAX_CONCURRENT_STREAMS == 256)!
< HTTP/2 400
< date: Wed, 27 Oct 2021 00:24:09 GMT
< content-type: text/html; charset=utf-8
< cache-control: no-cache
< x-frame-options: SAMEORIGIN
< strict-transport-security: max-age=31536000
< vary: Cookie
* Replaced cookie session="eyJfZnJlc2giOmZhbHNlLCJjc3JmX3Rva2VuIjoiMTk2MmVlODBkMDJiMGFhODQ0N2U1OGZiYTEyZGQzMThjZTY5MTFkZCJ9.YXicKQ.PbtfNJW_assTK0ZkBWujMpBVnuM" for domain factorio.com, path /, expire 0
< set-cookie: session=eyJfZnJlc2giOmZhbHNlLCJjc3JmX3Rva2VuIjoiMTk2MmVlODBkMDJiMGFhODQ0N2U1OGZiYTEyZGQzMThjZTY5MTFkZCJ9.YXicKQ.PbtfNJW_assTK0ZkBWujMpBVnuM; Domain=.factorio.com; Secure; HttpOnly; Path=/
< via: 1.1 vegur
< cf-cache-status: DYNAMIC
< expect-ct: max-age=604800, report-uri="https://report-uri.cloudflare.com/cdn-cgi/beacon/expect-ct"
< report-to: {"endpoints":[{"url":"https:\/\/a.nel.cloudflare.com\/report\/v3?s=HZPVm%2FRu31d1J8IkHuFfcRwFad6vXWf2%2FbHrH3PCRg1GFuXfHgsJDXN10zPpE6ZaOP7I1ClCiaDo0i0tO%2B5kih95W6gO28pCyjiiA3oXOmJvFHr%2F4iipMg0xlK7v2rVQ51w%3D"}],"group":"cf-nel","max_age":604800}
< nel: {"success_fraction":0,"report_to":"cf-nel","max_age":604800}
< server: cloudflare
< cf-ray: 6a47c7a32c4f27a0-PRG
<
<!DOCTYPE html>
<html>
<head>
<title> 400 - CSRF Error | Factorio</title>
...
How can I better work with the cookies received from the first request?
What is wrong here?
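One curl detail that may matter here: -c only writes the cookie jar, while -b is what sends stored cookies with a request. The final download request in the script passes only -c, so the session cookie obtained at login is never sent. A sketch of the last step with both flags (variables as in the script):
# -b sends the saved cookies, -c keeps the jar updated, -L follows redirects.
curl -L -b ~/cookie.txt -c ~/cookie.txt "https://www.factorio.com/get-download/${VERSION}/alpha/linux64" > "${ARCHIVE}"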

<AWS EKS / Fargate / Kubernetes> "Communications link failure" on container startup

I was testing a Kubernetes setup with AWS EKS on Fargate and encountered an issue at container startup.
It is a Java application making use of Hibernate. It seems it failed to connect to the MySQL server on startup, giving a "Communications link failure" error. The database server is running properly on AWS RDS, and the Docker image runs as expected locally.
I wonder if this is caused by the MySQL port 3306 not being configured properly on the container/node/service. I would like to see if you can spot what the issue is, and please don't hesitate to point out any misconfiguration, thank you very much.
Pod startup log
  .   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/
:: Spring Boot :: (v2.3.1.RELEASE)
2020-08-13 11:39:39.930 INFO 1 --- [ main] com.example.demo.DemoApplication : The following profiles are active: prod
2020-08-13 11:39:58.536 INFO 1 --- [ main] .s.d.r.c.RepositoryConfigurationDelegate : Bootstrapping Spring Data JPA repositories in DEFERRED mode.
...
......
2020-08-13 11:41:27.606 ERROR 1 --- [ task-1] com.zaxxer.hikari.pool.HikariPool : HikariPool-1 - Exception during pool initialization.
com.mysql.cj.jdbc.exceptions.CommunicationsException: Communications link failure
The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
at com.mysql.cj.jdbc.exceptions.SQLError.createCommunicationsException(SQLError.java:174) ~[mysql-connector-java-8.0.20.jar!/:8.0.20]
at com.mysql.cj.jdbc.exceptions.SQLExceptionsMapping.translateException(SQLExceptionsMapping.java:64) ~[mysql-connector-java-8.0.20.jar!/:8.0.20]
at com.mysql.cj.jdbc.ConnectionImpl.createNewIO(ConnectionImpl.java:836) ~[mysql-connector-java-8.0.20.jar!/:8.0.20]
at com.mysql.cj.jdbc.ConnectionImpl.<init>(ConnectionImpl.java:456) ~[mysql-connector-java-8.0.20.jar!/:8.0.20]
at com.mysql.cj.jdbc.ConnectionImpl.getInstance(ConnectionImpl.java:246) ~[mysql-connector-java-8.0.20.jar!/:8.0.20]
at com.mysql.cj.jdbc.NonRegisteringDriver.connect(NonRegisteringDriver.java:197) ~[mysql-connector-java-8.0.20.jar!/:8.0.20]
at com.zaxxer.hikari.util.DriverDataSource.getConnection(DriverDataSource.java:138) ~[HikariCP-3.4.5.jar!/:na]
at com.zaxxer.hikari.pool.PoolBase.newConnection(PoolBase.java:358) ~[HikariCP-3.4.5.jar!/:na]
at com.zaxxer.hikari.pool.PoolBase.newPoolEntry(PoolBase.java:206) ~[HikariCP-3.4.5.jar!/:na]
at com.zaxxer.hikari.pool.HikariPool.createPoolEntry(HikariPool.java:477) [HikariCP-3.4.5.jar!/:na]
at com.zaxxer.hikari.pool.HikariPool.checkFailFast(HikariPool.java:560) [HikariCP-3.4.5.jar!/:na]
at com.zaxxer.hikari.pool.HikariPool.<init>(HikariPool.java:115) [HikariCP-3.4.5.jar!/:na]
at com.zaxxer.hikari.HikariDataSource.getConnection(HikariDataSource.java:112) [HikariCP-3.4.5.jar!/:na]
at org.hibernate.engine.jdbc.connections.internal.DatasourceConnectionProviderImpl.getConnection(DatasourceConnectionProviderImpl.java:122) [hibernate-core-5.4.17.Final.jar!/:5.4.17.Final]
at org.hibernate.engine.jdbc.env.internal.JdbcEnvironmentInitiator$ConnectionProviderJdbcConnectionAccess.obtainConnection(JdbcEnvironmentInitiator.java:180) [hibernate-core-5.4.17.Final.jar!/:5.4.17.Final]
at org.hibernate.engine.jdbc.env.internal.JdbcEnvironmentInitiator.initiateService(JdbcEnvironmentInitiator.java:68) [hibernate-core-5.4.17.Final.jar!/:5.4.17.Final]
at org.hibernate.engine.jdbc.env.internal.JdbcEnvironmentInitiator.initiateService(JdbcEnvironmentInitiator.java:35) [hibernate-core-5.4.17.Final.jar!/:5.4.17.Final]
at org.hibernate.boot.registry.internal.StandardServiceRegistryImpl.initiateService(StandardServiceRegistryImpl.java:101) [hibernate-core-5.4.17.Final.jar!/:5.4.17.Final]
at org.hibernate.service.internal.AbstractServiceRegistryImpl.createService(AbstractServiceRegistryImpl.java:263) [hibernate-core-5.4.17.Final.jar!/:5.4.17.Final]
at org.hibernate.service.internal.AbstractServiceRegistryImpl.initializeService(AbstractServiceRegistryImpl.java:237) [hibernate-core-5.4.17.Final.jar!/:5.4.17.Final]
at org.hibernate.service.internal.AbstractServiceRegistryImpl.getService(AbstractServiceRegistryImpl.java:214) [hibernate-core-5.4.17.Final.jar!/:5.4.17.Final]
at org.hibernate.id.factory.internal.DefaultIdentifierGeneratorFactory.injectServices(DefaultIdentifierGeneratorFactory.java:152) [hibernate-core-5.4.17.Final.jar!/:5.4.17.Final]
at org.hibernate.service.internal.AbstractServiceRegistryImpl.injectDependencies(AbstractServiceRegistryImpl.java:286) [hibernate-core-5.4.17.Final.jar!/:5.4.17.Final]
at org.hibernate.service.internal.AbstractServiceRegistryImpl.initializeService(AbstractServiceRegistryImpl.java:243) [hibernate-core-5.4.17.Final.jar!/:5.4.17.Final]
at org.hibernate.service.internal.AbstractServiceRegistryImpl.getService(AbstractServiceRegistryImpl.java:214) [hibernate-core-5.4.17.Final.jar!/:5.4.17.Final]
at org.hibernate.boot.internal.InFlightMetadataCollectorImpl.<init>(InFlightMetadataCollectorImpl.java:176) [hibernate-core-5.4.17.Final.jar!/:5.4.17.Final]
at org.hibernate.boot.model.process.spi.MetadataBuildingProcess.complete(MetadataBuildingProcess.java:118) [hibernate-core-5.4.17.Final.jar!/:5.4.17.Final]
at org.hibernate.jpa.boot.internal.EntityManagerFactoryBuilderImpl.metadata(EntityManagerFactoryBuilderImpl.java:1224) [hibernate-core-5.4.17.Final.jar!/:5.4.17.Final]
at org.hibernate.jpa.boot.internal.EntityManagerFactoryBuilderImpl.build(EntityManagerFactoryBuilderImpl.java:1255) [hibernate-core-5.4.17.Final.jar!/:5.4.17.Final]
at org.springframework.orm.jpa.vendor.SpringHibernateJpaPersistenceProvider.createContainerEntityManagerFactory(SpringHibernateJpaPersistenceProvider.java:58) [spring-orm-5.2.7.RELEASE.jar!/:5.2.7.RELEASE]
at org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean.createNativeEntityManagerFactory(LocalContainerEntityManagerFactoryBean.java:365) [spring-orm-5.2.7.RELEASE.jar!/:5.2.7.RELEASE]
at org.springframework.orm.jpa.AbstractEntityManagerFactoryBean.buildNativeEntityManagerFactory(AbstractEntityManagerFactoryBean.java:391) [spring-orm-5.2.7.RELEASE.jar!/:5.2.7.RELEASE]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[na:1.8.0_212]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[na:1.8.0_212]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[na:1.8.0_212]
at java.lang.Thread.run(Thread.java:748) ~[na:1.8.0_212]
...
......
Service
patricks-mbp:test patrick$ kubectl get services -n test
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
test NodePort 10.100.160.22 <none> 80:31176/TCP 4h57m
service.yaml
kind: Service
apiVersion: v1
metadata:
  name: test
  namespace: test
spec:
  selector:
    app: test
  type: NodePort
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
Deployment
patricks-mbp:test patrick$ kubectl get deployments -n test
NAME READY UP-TO-DATE AVAILABLE AGE
test 0/1 1 0 4h42m
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test
  namespace: test
  labels:
    app: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test
  strategy: {}
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
      - name: test
        image: <image location>
        ports:
        - containerPort: 8080
        resources: {}
Pods
patricks-mbp:test patrick$ kubectl get pods -n test
NAME READY STATUS RESTARTS AGE
test-8648f7959-4gdvm 1/1 Running 6 21m
patricks-mbp:test patrick$ kubectl describe pod test-8648f7959-4gdvm -n test
Name: test-8648f7959-4gdvm
Namespace: test
Priority: 2000001000
Priority Class Name: system-node-critical
Node: fargate-ip-192-168-123-170.ec2.internal/192.168.123.170
Start Time: Thu, 13 Aug 2020 21:29:07 +1000
Labels: app=test
eks.amazonaws.com/fargate-profile=fp-1a0330f1
pod-template-hash=8648f7959
Annotations: kubernetes.io/psp: eks.privileged
Status: Running
IP: 192.168.123.170
IPs:
IP: 192.168.123.170
Controlled By: ReplicaSet/test-8648f7959
Containers:
test:
Container ID: containerd://a1517a13d66274e1d7f8efcea950d0fe3d944d1f7208d057494e208223a895a7
Image: <image location>
Image ID: <image ID>
Port: 8080/TCP
Host Port: 0/TCP
State: Terminated
Reason: Error
Exit Code: 1
Started: Thu, 13 Aug 2020 21:48:07 +1000
Finished: Thu, 13 Aug 2020 21:50:28 +1000
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Thu, 13 Aug 2020 21:43:04 +1000
Finished: Thu, 13 Aug 2020 21:45:22 +1000
Ready: False
Restart Count: 6
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-5hdzd (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-5hdzd:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-5hdzd
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> fargate-scheduler Successfully assigned test/test-8648f7959-4gdvm to fargate-ip-192-168-123-170.ec2.internal
Normal Pulling 21m kubelet, fargate-ip-192-168-123-170.ec2.internal Pulling image "174304792831.dkr.ecr.us-east-1.amazonaws.com/test:v2"
Normal Pulled 21m kubelet, fargate-ip-192-168-123-170.ec2.internal Successfully pulled image "174304792831.dkr.ecr.us-east-1.amazonaws.com/test:v2"
Normal Created 11m (x5 over 21m) kubelet, fargate-ip-192-168-123-170.ec2.internal Created container test
Normal Started 11m (x5 over 21m) kubelet, fargate-ip-192-168-123-170.ec2.internal Started container test
Normal Pulled 11m (x4 over 19m) kubelet, fargate-ip-192-168-123-170.ec2.internal Container image "174304792831.dkr.ecr.us-east-1.amazonaws.com/test:v2" already present on machine
Warning BackOff 11s (x27 over 17m) kubelet, fargate-ip-192-168-123-170.ec2.internal Back-off restarting failed container
Ingress
patricks-mbp:~ patrick$ kubectl describe ing -n test test
Name: test
Namespace: test
Address: <ALB public address>
Default backend: default-http-backend:80 (<none>)
Rules:
Host Path Backends
---- ---- --------
*
/ test:80 (192.168.72.15:8080)
Annotations:
alb.ingress.kubernetes.io/scheme: internet-facing
alb.ingress.kubernetes.io/target-type: ip
kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{"alb.ingress.kubernetes.io/scheme":"internet-facing","alb.ingress.kubernetes.io/target-type":"ip","kubernetes.io/ingress.class":"alb"},"name":"test","namespace":"test"},"spec":{"rules":[{"http":{"paths":[{"backend":{"serviceName":"test","servicePort":80},"path":"/"}]}}]}}
kubernetes.io/ingress.class: alb
Events: <none>
ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test
  namespace: test
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/scheme: internet-facing
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: test
          servicePort: 80
AWS ALB ingress controller
Permission for ALB ingress controller to communicate with cluster
-> similar to https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/v1.1.8/docs/examples/rbac-role.yaml
Creation of Ingress Controller which uses ALB
-> similar to https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/v1.1.8/docs/examples/alb-ingress-controller.yaml
To allow a pod on Fargate to connect to RDS, you need to open the security group:
Find the security group ID of your Fargate service.
In your RDS security group's inbound rules, instead of putting a CIDR in the source field, put the Fargate service's security group ID, with port 3306.
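If you prefer doing this from the CLI, a sketch of the equivalent inbound rule (both security group IDs below are placeholders):
# Allow MySQL (3306) into the RDS security group from the Fargate service's group.
aws ec2 authorize-security-group-ingress \
  --group-id sg-RDS-PLACEHOLDER \
  --protocol tcp \
  --port 3306 \
  --source-group sg-FARGATE-PLACEHOLDER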

Unable to PUT when tunneling to a remote URL using localhost(127.0.0.1)

I wanted to do a GET on the following URL in Postman with Basic Authorization:
https://1.2.3.4:8338/accounts
Unfortunately I cannot connect directly to that server so I've tunneled through Jump server 5.6.7.8 using SSH Tunnel Manager and
ssh -N -p 22 username@5.6.7.8 -o StrictHostKeyChecking=no -L 127.0.0.1:8080:1.2.3.4:8338
That worked. I now want to create a container by doing a PUT to this URL using AWSV4 Authorization:
https://1.2.3.4/testcontainer
If I use the above tunnel I get a 404 error. I have a feeling that my issue is that the tunnel is on port 8338 but my URL doesn't specify a port. I've tried leaving the port on 1.2.3.4 blank, but it defaults to 0 and the tunnel doesn't work.
I then tried setting that port to 443 (the default HTTPS port). When I do that I get a SignatureDoesNotMatch error. I think that's because I set the AWSV4 authentication up on port 8338 (it's a guess).
Finally I tried to set up AWSV4 authorization with port 443 but received a 403 error.
I'm not sure where to go now. Can anybody advise how I might do a PUT to the below URL using localhost?
https://1.2.3.4/testcontainer
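For what it's worth, with a -L 127.0.0.1:8080:1.2.3.4:8338 tunnel the request has to be addressed to the tunnel's local end; a sketch (whether the server expects a Host header of 1.2.3.4 is an assumption):
# PUT through the local end of the tunnel; -k skips certificate verification,
# since the remote cert will not match "127.0.0.1".
curl -k -X PUT "https://127.0.0.1:8080/testcontainer" -H "Host: 1.2.3.4"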
UPDATE 2017-06-28
I got access to a server that can connect directly to 1.2.3.4 and decided to try using curl in the terminal. It wouldn't work as I need to use AWS v4 auth. When looking into this I came across s3curl. I've tried running the following:
./s3curl.pl --id personal -- -s -v -X PUT https://1.2.3.4/testcontainer -k
Still no luck. This is the output:
* Hostname was NOT found in DNS cache
* Trying 1.2.3.4...
* Connected to 1.2.3.4 (1.2.3.4) port 443 (#0)
* successfully set certificate verify locations:
* CAfile: none
CApath: /etc/ssl/certs
* SSLv3, TLS handshake, Client hello (1):
* SSLv3, TLS handshake, Server hello (2):
* SSLv3, TLS handshake, CERT (11):
* SSLv3, TLS handshake, Server key exchange (12):
* SSLv3, TLS handshake, Request CERT (13):
* SSLv3, TLS handshake, Server finished (14):
* SSLv3, TLS handshake, CERT (11):
* SSLv3, TLS handshake, Client key exchange (16):
* SSLv3, TLS change cipher, Client hello (1):
* SSLv3, TLS handshake, Finished (20):
* SSLv3, TLS change cipher, Client hello (1):
* SSLv3, TLS handshake, Finished (20):
* SSL connection using ECDHE-RSA-AES256-SHA384
* Server certificate:
* subject: C=US; ST=T; L=A; O=B; CN=access01.b.com; emailAddress=b@us.b.com
* start date: 2017-06-04 08:05:04 GMT
* expire date: 2018-06-05 08:25:00 GMT
* issuer: C=US; ST=I; L=C; O=cc; CN=Manager CA; serialNumber=serialnumber
* SSL certificate verify result: unable to get local issuer certificate (20), continuing anyway.
> PUT /testcontainer HTTP/1.1
> User-Agent: curl/7.35.0
> Host: 1.2.3.4
> Accept: */*
> Date: Wed, 28 Jun 2017 13:23:01 +0000
> Authorization: AWS authorization
>
< HTTP/1.1 403 Forbidden
< Date: Wed, 28 Jun 2017 13:23:01 GMT
< X-Clv-Request-Id: requestid
< Accept-Ranges: bytes
* Server cc/3.1.0.1 is not blacklisted
< Server: cc/3.1.0.1
< X-Clv-S3-Version: 2.5
< x-amz-request-id: requestid
< Content-Type: application/xml
< Content-Length: 894
<
<?xml version="1.0" encoding="UTF-8" standalone="yes"?><Error> <Code>SignatureDoesNotMatch</Code><Message>The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. For more information, see REST Authentication and SOAP Authentication for details.</Message><Resource>/pctestcontainer1/</Resource><RequestId>bfb1bdf1-9d7a-4bc7-966a-a3a5e89498eb</RequestId><StringToSign>PUT
Wed, 28 Jun 2017 13:23:01 +0000
* Connection #0 to host 10.137.63.202 left intact
/pctestcontainer1</StringToSign><StringToSignBytes>80 85 84 10 10 10 87 101 100 44 32 50 56 32 74 117 110 32 50 48 49 55 32 49 51 58 50 51 58 48 49 32 43 48 48 48 48 10 47 112 99 116 101 115 116 99 111 110 116 97 105 110 101 114 49</StringToSignBytes><SignatureProvided>signature</SignatureProvided><AWSAccessKeyId>accesskey</AWSAccessKeyId><httpStatusCode>403</httpStatusCode></Error>root@utility:/tmp/cp/s3curl#
Does this mean anything to anybody?
After a lot of investigation I found that I needed to include a "Host" key in the header and use the AWS V4 credentials I generated.
I can now do a PUT request in Postman.
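For anyone reproducing this with plain curl: newer curl releases (7.75.0 and later) can compute the AWS V4 signature themselves, which avoids s3curl entirely; a sketch, where the region and service values are assumptions for an S3-compatible endpoint:
# --aws-sigv4 makes curl sign the request with Signature Version 4.
curl -k -X PUT "https://1.2.3.4/testcontainer" \
  --user "ACCESS_KEY:SECRET_KEY" \
  --aws-sigv4 "aws:amz:us-east-1:s3"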
