Envoy Jaeger HTTPS collector endpoint - consul

I'm using Envoy as a proxy for a service mesh with Consul Connect. I have configured Envoy to send traces to a Jaeger collector (behind an AWS ALB) in the Zipkin format. The configuration is the Zipkin one mentioned in https://www.consul.io/docs/connect/distributed-tracing.
This setup works fine when the Jaeger collector uses HTTP, but it doesn't work with HTTPS.
Does Envoy support sending traces to HTTPS Jaeger collector endpoints? If so, what changes are required in the following config?
Kind = "proxy-defaults"
Name = "global"
Config {
  protocol = "http"
  envoy_tracing_json = <<EOF
{
  "http": {
    "name": "envoy.tracers.zipkin",
    "typedConfig": {
      "@type": "type.googleapis.com/envoy.config.trace.v3.ZipkinConfig",
      "collector_cluster": "collector_cluster_name",
      "collector_endpoint_version": "HTTP_JSON",
      "collector_endpoint": "/api/v2/spans",
      "shared_span_context": false
    }
  }
}
EOF
  envoy_extra_static_clusters_json = <<EOF
{
  "connect_timeout": "3.000s",
  "dns_lookup_family": "V4_ONLY",
  "lb_policy": "ROUND_ROBIN",
  "load_assignment": {
    "cluster_name": "collector_cluster_name",
    "endpoints": [
      {
        "lb_endpoints": [
          {
            "endpoint": {
              "address": {
                "socket_address": {
                  "address": "collector-url",
                  "port_value": 9411,
                  "protocol": "TCP"
                }
              }
            }
          }
        ]
      }
    ]
  },
  "name": "collector_cluster_name",
  "type": "STRICT_DNS"
}
EOF
}
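For reference, Envoy can originate TLS to a statically defined cluster by attaching a transport_socket to it, so the change would go in the envoy_extra_static_clusters_json cluster rather than the tracer config. A sketch of that cluster with an UpstreamTlsContext added, assuming the ALB listens on 443 with a publicly trusted certificate (otherwise a validation_context with a CA bundle is also needed); collector-url remains a placeholder:

```json
{
  "name": "collector_cluster_name",
  "type": "STRICT_DNS",
  "connect_timeout": "3.000s",
  "dns_lookup_family": "V4_ONLY",
  "lb_policy": "ROUND_ROBIN",
  "transport_socket": {
    "name": "envoy.transport_sockets.tls",
    "typed_config": {
      "@type": "type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.UpstreamTlsContext",
      "sni": "collector-url"
    }
  },
  "load_assignment": {
    "cluster_name": "collector_cluster_name",
    "endpoints": [
      {
        "lb_endpoints": [
          {
            "endpoint": {
              "address": {
                "socket_address": {
                  "address": "collector-url",
                  "port_value": 443
                }
              }
            }
          }
        ]
      }
    ]
  }
}
```

The "sni" value should match a name on the ALB's certificate so the TLS handshake succeeds.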

Related

Cannot connect using service mesh in nomad / consul

When I try to connect to an upstream service via a sidecar service in Consul Connect, I get the following error.
2023-02-01T09:31:33-08:00 Setup Failure failed to setup alloc: pre-run hook "group_services" failed: unable to get address for service "sequrercbase": invalid port "base_port": port label not found
The upstream service is named 'sequrercbase' and creates a dynamic port named 'base_port' that I'd like downstream services to connect to.
network {
  mode = "bridge"
  port "base_port" { }
}

service {
  name = "sequrercbase"
  port = "base_port"
  connect {
    sidecar_service {}
  }
}
This service is trying to connect to 'sequrercbase' on the named port 'base_port'.
network {
  mode = "bridge"
  port "api_port" { }
}

service {
  name = "sequrercbase"
  port = "base_port"
  connect {
    sidecar_service {
      proxy {
        upstreams {
          destination_name = "sequrercbase"
          local_bind_port  = 9989
        }
      }
    }
  }
}
Any thoughts on how to work around this issue?
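The error reads like a label mismatch: the downstream group's network block declares "api_port", but its service block references "base_port", which is only declared in the upstream group, hence "port label not found". A sketch of the downstream stanza with the labels aligned (the "sequrercbase-client" name is hypothetical, to distinguish it from the upstream that the question also calls "sequrercbase"):

```hcl
network {
  mode = "bridge"
  port "api_port" { }
}

service {
  # hypothetical downstream name; the question reuses "sequrercbase"
  name = "sequrercbase-client"
  # this label must match one declared in the network block above
  port = "api_port"
  connect {
    sidecar_service {
      proxy {
        upstreams {
          destination_name = "sequrercbase"
          local_bind_port  = 9989
        }
      }
    }
  }
}
```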

Configure Outlier detection for consul service mesh using nomad job

I am trying to configure outlier detection for a Consul Connect service mesh based on this documentation:
https://learn.hashicorp.com/tutorials/consul/service-mesh-circuit-breaking?in=consul/developer-mesh
The documentation shows that outlier detection and circuit breaking can be configured using the config stanza inside proxy.upstreams, but the following job file throws the error: Blocks of type "config" are not expected here.
job "docs" {
  datacenters = ["dc1"]
  group "docs" {
    network {
      mode = "bridge"
    }
    service {
      name = "docs"
      port = "5678"
      connect {
        sidecar_service {
          proxy {
            upstreams {
              destination_name = "demo"
              local_bind_port  = 10082
              config {
                connect_timeout_ms = 3000
              }
            }
          }
        }
      }
    }
    task "server" {
      driver = "docker"
      config {
        image = "hashicorp/http-echo"
        args = [
          "-listen",
          ":5678",
          "-text",
          "hello world",
        ]
      }
    }
  }
}
Am I doing something wrong? Is this not the right way to configure circuit breaking in a Nomad job file?
Sidecar proxies, circuit breaking, ingress, and egress must be configured in Consul directly, not from Nomad. Also, your job does not map the port inside the Docker container to an outside port, and each Consul version only works with specific Envoy versions.
First, launch your job without the connect stanza and do the port mapping.
Then install Envoy and test the proxy connection manually.
Once that test works, create a service proxy to launch your sidecar and your circuit breaking.
1. Launch the job (for example, with port 8080 inside the Docker container):
job "docs" {
  datacenters = ["dc1"]
  group "docs" {
    network {
      mode = "bridge"
    }
    task "server" {
      driver = "docker"
      config {
        image = "hashicorp/http-echo"
        args = [
          "-listen",
          ":5678",
          "-text",
          "hello world",
        ]
        port_map {
          docs = 8080
        }
      }
      resources {
        network {
          mbits = 10
          port "docs" { static = 5678 }
        }
      }
      service {
        name = "docs"
        port = "docs"
        check {
          name     = "docs port alive"
          type     = "http"
          path     = "/"
          interval = "10s"
          timeout  = "2s"
        }
      }
    }
  }
}
2. Check your Consul version and install a supported Envoy version from here. I use Consul 1.11, so I will install the supported Envoy 1.18.4:
yum -y -q install tar
curl https://func-e.io/install.sh | bash -s -- -b /usr/local/bin
func-e use 1.18.4
Make the Envoy binary available:
cp /root/.func-e/versions/1.18.4/bin/envoy /usr/local/bin/
Proxy integration
Insert the following at the end of your Consul config. Mine is stored in
/etc/consul.d/config.hcl
config_entries {
  bootstrap = [
    {
      kind = "proxy-defaults"
      name = "global"
      config {
        protocol = "http"
      }
    }
  ]
}
Restart your Consul service to check that the Envoy proxy integration worked:
systemctl restart consul
Overwrite your service registration in Consul with a service file:
cat > /etc/consul.d/docs.hcl <<- EOF
service {
  name = "docs"
  port = 5678
  #token = "" # put api service token here
  check {
    id = "docs"
    name = "HTTP API on Port 5678"
    http = "http://localhost:5678"
    interval = "30s"
  }
  connect {
    sidecar_service {
      port = 20000
      check {
        name = "Connect Envoy Sidecar"
        tcp = "127.0.0.1:20000"
        interval = "10s"
      }
    }
  }
}
EOF
Restart or reload the Consul service:
systemctl restart consul
Test that the sidecar proxy works:
consul connect envoy -sidecar-for=docs
Create the docs service proxy: create /etc/systemd/system/consul-envoy-docs.service and input the following:
cat > /etc/systemd/system/consul-envoy-docs.service <<- EOF
[Unit]
Description=Consul Envoy
After=syslog.target network.target
[Service]
ExecStart=/usr/local/bin/consul connect envoy -sidecar-for=docs
ExecStop=/bin/sleep 5
Restart=always
[Install]
WantedBy=multi-user.target
EOF
Restart consul and start consul-envoy:
systemctl daemon-reload
systemctl restart consul
systemctl start consul-envoy-docs
In the event that consul-envoy-docs fails, restart it with:
systemctl restart consul-envoy-docs
3. If everything works correctly, adapt the config in /etc/systemd/system/consul-envoy-docs.service as described here to configure circuit breaking.
If anyone has issues with Nomad, Consul, Vault, Envoy, or the HashiStack, tag me.
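As a sketch of what the circuit-breaking step can look like: recent Consul versions (1.10+) also let you set connection limits and outlier detection centrally in a service-defaults config entry for the upstream service, without touching the Nomad job. Field names follow Consul's upstream configuration reference; the service name "demo" (the upstream from the job above) and the values are illustrative:

```hcl
Kind = "service-defaults"
Name = "demo" # the upstream service being protected

UpstreamConfig {
  Defaults {
    # connection-level circuit breaker thresholds (illustrative values)
    Limits {
      MaxConnections        = 10
      MaxPendingRequests    = 10
      MaxConcurrentRequests = 10
    }
    # outlier detection: eject an instance after repeated failures
    PassiveHealthCheck {
      Interval    = "10s"
      MaxFailures = 2
    }
  }
}
```

Apply it with consul config write demo-defaults.hcl; downstreams that list "demo" as an upstream pick the settings up automatically.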

consul proxy change health endpoint

I have deployed a Consul proxy on a different host than 'localhost', but Consul keeps checking health on 127.0.0.1.
Config of the service and its sidecar:
service {
  name    = "counting"
  id      = "counting-1"
  port    = 9005
  address = "169.254.1.1"
  connect {
    sidecar_service {
      proxy {
        config {
          bind_address          = "169.254.1.1"
          bind_port             = 21002
          tcp_check_address     = "169.254.1.1"
          local_service_address = "localhost:9005"
        }
      }
    }
  }
  check {
    id       = "counting-check"
    http     = "http://169.254.1.1:9005/health"
    method   = "GET"
    interval = "10s"
    timeout  = "1s"
  }
}
The proxy was deployed using the following command:
consul connect proxy -sidecar-for counting-1 > counting-proxy.log
The Consul UI's health check message (screenshot omitted) shows the check still pointing at 127.0.0.1.
How do I change the health check address to 169.254.1.1?
First, I recommend using the Envoy proxy (consul connect envoy) instead of the built-in proxy (consul connect proxy) since the latter is not recommended for production use.
As far as changing the health check address, you can do that by setting proxy.local_service_address. This address is used when configuring the health check for the local application.
See https://github.com/hashicorp/consul/issues/11008#issuecomment-929832280 for a related discussion on this issue.
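A sketch of the registration with that change, assuming the counting app listens on 169.254.1.1:9005; this replaces the tcp_check_address/local_service_address pair in the proxy config block above:

```hcl
service {
  name    = "counting"
  id      = "counting-1"
  port    = 9005
  address = "169.254.1.1"
  connect {
    sidecar_service {
      proxy {
        # address/port the sidecar uses to reach the local app;
        # Consul also uses them for the generated health check
        local_service_address = "169.254.1.1"
        local_service_port    = 9005
      }
    }
  }
}
```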

Getting error connection refused when trying to consul connect using sidecar proxy to web

I am following this tutorial https://learn.hashicorp.com/consul/getting-started/connect
At the point when I ran consul connect proxy -sidecar-for web, it started throwing this error:
2020-07-26T14:30:18.243+0100 [ERROR] proxy.inbound: failed to dial: error="dial tcp 127.0.0.1:0: connect: can't assign requested address"
Why does this not have a port assigned in the demonstration?
{
  "service": {
    "name": "web",
    "connect": {
      "sidecar_service": {
        "proxy": {
          "upstreams": [
            {
              "destination_name": "socat",
              "local_bind_port": 9191
            }
          ]
        }
      }
    }
  }
}
The video in the tutorial shows the fourth line as:
"port": 8080,
The documentation is missing that line. Not that it matters much, because nothing is listening on the web service, so the error will persist. You can safely ignore it. I suspect your issue is that the command nc 127.0.0.1 9191 is failing; I address that below.
The full config should look like:
{
  "service": {
    "name": "web",
    "port": 8080,
    "connect": {
      "sidecar_service": {
        "proxy": {
          "upstreams": [
            {
              "destination_name": "socat",
              "local_bind_port": 9191
            }
          ]
        }
      }
    }
  }
}
But this isn't important for getting through this section of the lab. The instructions aren't clear, but don't forget to restart the web proxy (consul connect proxy -sidecar-for web) and start the socat proxy (consul connect proxy -sidecar-for socat).
That last part is sorely missing from the instructions and the video.

Traefik with dynamic routing to ECS backends, running as one-off tasks

I'm trying to implement a solution for a reverse-proxy service using Traefik v1 (1.7) and ECS one-off tasks as backends, as described in this SO question. Routing should be dynamic - requests to the /user/1234/* path should go to the ECS task running with the appropriate Docker labels:
docker_labels = {
  traefik.frontend.rule = "Path:/user/1234"
  traefik.backend       = "trax1"
  traefik.enable        = "true"
}
So far this setup works fine, but I need to create one ECS task definition per running task, because the Docker labels are a property of the ECS task definition, not of the ECS task itself. Is it possible to create only one task definition and pass the Traefik rules in ECS task tags, within the task's key/value properties?
This would require some modification of the Traefik source code. Are there any other available options or ways this could be implemented that I've missed, like API Gateway or Lambda@Edge? I have no experience with those technologies; real-world examples are more than welcome.
Solved by using the Traefik REST API provider. The external component that runs the one-off tasks can discover a task's internal IP and update the Traefik configuration on the fly by pairing traefik.frontend.rule = "Path:/user/1234" with the task's internal IP:port values in the backends section.
It should first GET the Traefik configuration from the /api/providers/rest endpoint, remove or add the corresponding part (depending on whether a task was stopped or started), and update the Traefik configuration with a PUT to the same endpoint.
{
  "backends": {
    "backend-serv1": {
      "servers": {
        "server-service-serv-test1-serv-test-4ca02d28c79b": {
          "url": "http://172.16.0.5:32793"
        }
      }
    },
    "backend-serv2": {
      "servers": {
        "server-service-serv-test2-serv-test-279c0ba1959b": {
          "url": "http://172.16.0.5:32792"
        }
      }
    }
  },
  "frontends": {
    "frontend-serv1": {
      "entryPoints": [
        "http"
      ],
      "backend": "backend-serv1",
      "routes": {
        "route-frontend-serv1": {
          "rule": "Path:/user/1234"
        }
      }
    },
    "frontend-serv2": {
      "entryPoints": [
        "http"
      ],
      "backend": "backend-serv2",
      "routes": {
        "route-frontend-serv2": {
          "rule": "Path:/user/5678"
        }
      }
    }
  }
}
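The add-or-remove step can be sketched as pure functions over the JSON document above; the function names are illustrative, and the actual GET/PUT against /api/providers/rest (e.g. with curl or requests) is left out:

```python
import copy

def add_task_route(config, name, rule, url):
    """Return a new Traefik REST-provider config with a backend/frontend
    pair for a freshly started one-off task."""
    cfg = copy.deepcopy(config)
    cfg.setdefault("backends", {})[f"backend-{name}"] = {
        "servers": {f"server-{name}": {"url": url}}
    }
    cfg.setdefault("frontends", {})[f"frontend-{name}"] = {
        "entryPoints": ["http"],
        "backend": f"backend-{name}",
        "routes": {f"route-frontend-{name}": {"rule": rule}},
    }
    return cfg

def remove_task_route(config, name):
    """Drop the backend/frontend pair again when the task stops."""
    cfg = copy.deepcopy(config)
    cfg.get("backends", {}).pop(f"backend-{name}", None)
    cfg.get("frontends", {}).pop(f"frontend-{name}", None)
    return cfg

# GET the current config from /api/providers/rest, edit it, PUT it back:
cfg = {"backends": {}, "frontends": {}}
cfg = add_task_route(cfg, "serv1", "Path:/user/1234", "http://172.16.0.5:32793")
print(cfg["frontends"]["frontend-serv1"]["routes"]["route-frontend-serv1"]["rule"])
# prints Path:/user/1234
```

Keeping the functions pure (returning a fresh dict) makes it easy to retry the PUT if Traefik rejects an update.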
