Cannot connect using service mesh in Nomad / Consul

When I try to connect to an upstream service via a sidecar service in Consul Connect, I get the following error.
2023-02-01T09:31:33-08:00 Setup Failure failed to setup alloc: pre-run hook "group_services" failed: unable to get address for service "sequrercbase": invalid port "base_port": port label not found
The upstream service is named 'sequrercbase' and creates a dynamic port named 'base_port' that I'd like downstream services to connect to.
network {
  mode = "bridge"
  port "base_port" { }
}

service {
  name = "sequrercbase"
  port = "base_port"

  connect {
    sidecar_service {}
  }
}
This service is trying to connect to 'sequrercbase' on the named port 'base_port'.
network {
  mode = "bridge"
  port "api_port" { }
}

service {
  name = "sequrercbase"
  port = "base_port"

  connect {
    sidecar_service {
      proxy {
        upstreams {
          destination_name = "sequrercbase"
          local_bind_port  = 9989
        }
      }
    }
  }
}
Any thoughts on how to work around this issue?
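The error message points at the port label: the downstream group's network block declares "api_port", but its service stanza references "base_port", which is not defined in that group. Below is a minimal sketch of how the downstream registration could look, assuming the group keeps the "api_port" label (the service name "securercbase-consumer" is hypothetical; the original post reuses "sequrercbase" here):

network {
  mode = "bridge"
  port "api_port" { }
}

service {
  # hypothetical downstream name; the port label must match one declared in the network block above
  name = "securercbase-consumer"
  port = "api_port"

  connect {
    sidecar_service {
      proxy {
        upstreams {
          destination_name = "sequrercbase"
          local_bind_port  = 9989
        }
      }
    }
  }
}

The downstream task then reaches the upstream through its sidecar at 127.0.0.1:9989, the local_bind_port.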

Related

How do I connect redis and redis-insight containers on my network

I've written a .tf file that spins up a redis and redis-insight container in their private docker network (openstack instance), but when I ngrok to redis-insight I get this error:
[Screenshot: Redis-insight error shown in the browser]
I can't seem to get the environment variables on the redis-insight resource right.
I've tried many combinations of the env vars in the redis-insight resource.
Since I'm using ngrok for tunneling, I set the RITRUSTEDORIGINS var to its port (http://localhost:4040), following the example on this page of the Redis documentation that uses nginx as a proxy, but with no luck.
What environment variables should I be using on my redis-insight resource?
This is what I have written so far:
terraform {
  required_providers {
    docker = {
      source  = "kreuzwerker/docker"
      version = "2.23.1"
    }
  }
}

provider "docker" {}

resource "docker_network" "redis_network" {
  name = "redis_network"
}

resource "docker_image" "redis" {
  name         = "redis:latest"
  keep_locally = false
}

resource "docker_container" "redis" {
  image = docker_image.redis.image_id
  name  = "redis"
  ports {
    internal = 6379
    external = 6379
  }
  network_mode = docker_network.redis_network.name
}

resource "docker_image" "redis-insight" {
  name         = "redislabs/redisinsight:latest"
  keep_locally = false
}

resource "docker_container" "redis-insight" {
  image = docker_image.redis-insight.image_id
  name  = "redis-insight"
  ports {
    internal = 8001
    external = 8001
  }
  network_mode = docker_network.redis_network.name
  depends_on   = [docker_container.redis]
  env = [
    "REDIS_URL=redis://redis:6379",
    "REDIS_PASSWORD=password",
    # "REDIS_DATABASE=1",
    # "REDIS_TLS=true",
    # "INSIGHT_DEBUG=true",
    # "RIPORT=8001",
    # "RIPROXYENABLE=t",
    "RITRUSTEDORIGINS=http://localhost:4040"
  ]
}
What's the hostname and port of RedisInsight that you are accessing from your browser? If it's not localhost:4040, set that in RITRUSTEDORIGINS.
If it is localhost:4040, set RITRUSTEDORIGINS to http://localhost:4040.
Set the right protocol (http or https), hostname, and port; it should match the one you use in the browser.
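For example, a minimal sketch of the env list, assuming RedisInsight is opened in the browser through an ngrok URL such as https://abc123.ngrok.io (hypothetical hostname):

env = [
  "REDIS_URL=redis://redis:6379",
  # must match the scheme, host, and port shown in the browser's address bar
  "RITRUSTEDORIGINS=https://abc123.ngrok.io"
]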

Configure Outlier detection for consul service mesh using nomad job

I am trying to configure Outlier Detection for a consul connect service mesh based on this documentation.
https://learn.hashicorp.com/tutorials/consul/service-mesh-circuit-breaking?in=consul/developer-mesh
The documentation shows that outlier detection and circuit breaking can be configured using the config stanza inside proxy.upstreams, but the following job file throws the error: Blocks of type "config" are not expected here.
job "docs" {
datacenters = ["dc1"]
group "docs" {
network {
mode = "bridge"
}
service {
name = "docs"
port = "5678"
connect {
sidecar_service {
proxy {
upstreams {
destination_name = "demo"
local_bind_port = 10082
config {
connect_timeout_ms = 3000
}
}
}
}
}
}
task "server" {
driver = "docker"
config {
image = "hashicorp/http-echo"
args = [
"-listen",
":5678",
"-text",
"hello world",
]
}
}
}
}
Am I doing anything wrong? Is this not the right way to configure circuit breaking in a Nomad job file?
Sidecar proxy features such as circuit breaking, ingress, and egress must be configured in Consul directly, not from Nomad. Also, in your job you didn't map the port inside the Docker container to an outside port, and Consul only works with specific versions of the Envoy load balancer. The approach:
First, launch your job without the connect stanza and do the port mapping.
Install Envoy and set up the proxy connection manually to test it.
Once the test works, create a service proxy unit to launch your sidecar with circuit breaking.
1. Launch the job (in this example, the port inside Docker is 8080):
job "docs" {
datacenters = ["dc1"]
group "docs" {
network {
mode = "bridge"
}
task "server" {
driver = "docker"
config {
image = "hashicorp/http-echo"
args = [
"-listen",
":5678",
"-text",
"hello world",
]
port_map {
docs = 8080
}
}
resources {
network {
mbits = 10
port "docs" { static = 5678 }
}
}
service {
name = "docs"
port = "docs"
check {
name = "docs port alive"
type = "http"
path = "/"
interval = "10s"
timeout = "2s"
}
}
}
}
}
2. Check your Consul version and install the supported Envoy version (see the compatibility matrix). I use Consul 1.11, so I will install the supported Envoy 1.18.4:
yum -y -q install tar
curl https://func-e.io/install.sh | bash -s -- -b /usr/local/bin
func-e use 1.18.4
Make the Envoy binary available:
cp /root/.func-e/versions/1.18.4/bin/envoy /usr/local/bin/
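Optionally, confirm the binary resolves from the PATH (a quick sanity check, assuming /usr/local/bin is on PATH):

envoy --version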
Proxy integration
Insert the following at the end of your Consul config. In my case the config is stored in /etc/consul.d/config.hcl:
config_entries {
  bootstrap = [
    {
      kind = "proxy-defaults"
      name = "global"
      config {
        protocol = "http"
      }
    }
  ]
}
Restart your Consul service to check that the Envoy proxy integration worked:
systemctl restart consul
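To confirm the proxy-defaults entry was applied, a hedged check (assuming the CLI can reach the local agent):

consul config read -kind proxy-defaults -name global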
Overwrite your service registration in Consul with a Consul service file:
cat > /etc/consul.d/docs.hcl <<- EOF
service {
  name = "docs"
  port = 5678
  #token = "" # put the API service token here

  check {
    id       = "docs"
    name     = "HTTP API on Port 5678"
    http     = "http://localhost:5678"
    interval = "30s"
  }

  connect {
    sidecar_service {
      port = 20000

      check {
        name     = "Connect Envoy Sidecar"
        tcp      = "127.0.0.1:20000"
        interval = "10s"
      }
    }
  }
}
EOF
Restart the Consul service, or reload it:
systemctl restart consul
Test that the sidecar proxy is working:
consul connect envoy -sidecar-for=docs
Create the docs service proxy unit at /etc/systemd/system/consul-envoy-docs.service and input the following:
cat > /etc/systemd/system/consul-envoy-docs.service <<- EOF
[Unit]
Description=Consul Envoy
After=syslog.target network.target

[Service]
ExecStart=/usr/local/bin/consul connect envoy -sidecar-for=docs
ExecStop=/bin/sleep 5
Restart=always

[Install]
WantedBy=multi-user.target
EOF
Reload systemd, restart Consul, and start consul-envoy-docs:
systemctl daemon-reload
systemctl restart consul
systemctl start consul-envoy-docs
In the event that consul-envoy-docs fails, restart it with:
systemctl restart consul-envoy-docs
3. If everything works correctly, adapt the configuration as described in the tutorial linked above to add circuit breaking.
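For illustration, a hedged sketch of what the circuit-breaking and outlier-detection settings could look like in the registration file from the earlier step (/etc/consul.d/docs.hcl), assuming the "docs" service has an upstream named "demo" and using the limits and passive_health_check options from the linked tutorial:

service {
  name = "docs"
  port = 5678

  connect {
    sidecar_service {
      port = 20000
      proxy {
        upstreams {
          destination_name = "demo"
          local_bind_port  = 10082
          config {
            limits {
              # circuit breaking: cap the connections and requests Envoy allows to the upstream
              max_connections         = 100
              max_pending_requests    = 100
              max_concurrent_requests = 100
            }
            passive_health_check {
              # outlier detection: eject an upstream instance after repeated failures
              interval     = "10s"
              max_failures = 3
            }
          }
        }
      }
    }
  }
}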
If anyone has issues with Nomad, Consul, Vault, Envoy, or the HashiStack, tag me.

consul proxy change health endpoint

I have deployed a Consul proxy on a different host than 'localhost', but Consul keeps checking health on 127.0.0.1.
Config of the service and its sidecar:
service {
  name    = "counting"
  id      = "counting-1"
  port    = 9005
  address = "169.254.1.1"

  connect {
    sidecar_service {
      proxy {
        config {
          bind_address          = "169.254.1.1"
          bind_port             = 21002
          tcp_check_address     = "169.254.1.1"
          local_service_address = "localhost:9005"
        }
      }
    }
  }

  check {
    id       = "counting-check"
    http     = "http://169.254.1.1:9005/health"
    method   = "GET"
    interval = "10s"
    timeout  = "1s"
  }
}
The proxy was deployed using the following command:
consul connect proxy -sidecar-for counting-1 > counting-proxy.log
Consul UI's health check message:
How do I change the health check to 169.254.1.1?
First, I recommend using the Envoy proxy (consul connect envoy) instead of the built-in proxy (consul connect proxy), since the latter is not recommended for production use.
As for changing the health check address, you can do that by setting proxy.local_service_address. This address is used when configuring the health check for the local application.
See https://github.com/hashicorp/consul/issues/11008#issuecomment-929832280 for a related discussion on this issue.
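A hedged sketch of the registration with that suggestion applied, assuming 169.254.1.1 is where the local counting application should be reached (local_service_address is a field of the proxy block itself, separate from the opaque config map):

service {
  name    = "counting"
  id      = "counting-1"
  port    = 9005
  address = "169.254.1.1"

  connect {
    sidecar_service {
      proxy {
        # address used when configuring the sidecar's check of the local application
        local_service_address = "169.254.1.1"
        local_service_port    = 9005
        config {
          bind_address = "169.254.1.1"
          bind_port    = 21002
        }
      }
    }
  }
}

With Envoy, the sidecar would then be started with consul connect envoy -sidecar-for counting-1 instead of consul connect proxy.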

Create private network with Terraform with starting script - Google Cloud Platform

Having started with Terraform on GCP recently, I would like to finish an exercise:
Create a new VPC network with a single subnet.
Create a firewall rule that allows external RDP traffic to the bastion host system.
Deploy two Windows servers that are connected to both the VPC network and the default network.
Create a virtual machine that points to the startup script.
Configure a firewall rule to allow HTTP access to the virtual machine.
Here is my solution:
Create a new VPC network called securenetwork, then create a new VPC subnet inside securenetwork. Once the network and subnet have been configured, configure a firewall rule that allows inbound RDP traffic (TCP port 3389) from the internet to the bastion host.
# Create the securenetwork network
resource "google_compute_network" "securenetwork" {
  name                    = "securenetwork"
  auto_create_subnetworks = false
}

# Create the securesubnet-eu subnetwork
resource "google_compute_subnetwork" "securesubnet-eu" {
  name          = "securesubnet-eu"
  region        = "europe-west1"
  network       = "${google_compute_network.securenetwork.self_link}"
  ip_cidr_range = "10.130.0.0/20"
}

# Create a firewall rule to allow HTTP, SSH, RDP and ICMP traffic on securenetwork
resource "google_compute_firewall" "securenetwork-allow-http-ssh-rdp-icmp" {
  name    = "securenetwork-allow-http-ssh-rdp-icmp"
  network = "${google_compute_network.securenetwork.self_link}"

  allow {
    protocol = "tcp"
    ports    = ["3389"]
  }
  allow {
    protocol = "icmp"
  }
}

# Create the vm-securehost instance
module "vm-securehost" {
  source              = "./instance/securehost"
  instance_name       = "vm-securehost"
  instance_zone       = "europe-west1-d"
  instance_subnetwork = "${google_compute_subnetwork.securesubnet-eu.self_link}"
  instance_network    = "${google_compute_network.securenetwork.self_link}"
}

# Create the vm-bastionhost instance
module "vm-bastionhost" {
  source              = "./instance/bastionhost"
  instance_name       = "vm-bastionhost"
  instance_zone       = "europe-west1-d"
  instance_subnetwork = "${google_compute_subnetwork.securesubnet-eu.self_link}"
  instance_network    = "${google_compute_network.securenetwork.self_link}"
}
Deploy Windows instances:
A Windows 2016 server instance called vm-securehost with two network interfaces: configure the first network interface with an internal-only connection to the new VPC subnet, and the second network interface with an internal-only connection to the default VPC network. This is the secure server.
variable "instance_name" {}
variable "instance_zone" {}
variable "instance_type" {
default = "n1-standard-1"
}
variable "instance_subnetwork" {}
variable "instance_network" {}
resource "google_compute_instance" "vm_instance" {
name = "${var.instance_name}"
zone = "${var.instance_zone}"
machine_type = "${var.instance_type}"
boot_disk {
initialize_params {
image = "windows-cloud/windows-2016"
}
}
network_interface {
subnetwork = "${var.instance_subnetwork}"
access_config {
# Allocate a one-to-one NAT IP to the instance
}
}
}
A second Windows 2016 server instance called vm-bastionhost with two network interfaces: configure the first network interface to connect to the new VPC subnet with an ephemeral public (external NAT) address, and the second network interface with an internal-only connection to the default VPC network. This is the jump box or bastion host.
variable "instance_name" {}
variable "instance_zone" {}
variable "instance_type" {
default = "n1-standard-1"
}
variable "instance_subnetwork" {}
variable "instance_network" {}
resource "google_compute_address" "default" {
name = "default"
region = "europe-west1"
}
resource "google_compute_instance" "vm_instance" {
name = "${var.instance_name}"
zone = "${var.instance_zone}"
machine_type = "${var.instance_type}"
boot_disk {
initialize_params {
image = "windows-cloud/windows-2016"
}
}
network_interface {
subnetwork = "${var.instance_subnetwork}"
network = "${var.instance_network}"
access_config {
# Allocate a one-to-one NAT IP to the instance
nat_ip = "${google_compute_address.default.address}"
}
}
}
My questions:
How do I configure the Windows compute instance called vm-securehost so that it does not have a public IP address?
How do I configure the Windows compute instance called vm-securehost to run Microsoft IIS web server software on startup?
Thanks for any comments on the solution.
To create a VM without any external IP address, omit the access_config argument in your Terraform script, as it is the one responsible for creating the external IP address.
To run Microsoft IIS web server software on your VM at startup, add a startup script argument to your VM creation block, for example:
metadata_startup_script = "import-module servermanager && add-windowsfeature web-server -includeallsubfeature"
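Putting both together, a hedged sketch of the vm-securehost instance (assumption: the windows-startup-script-ps1 metadata key is used here, since Windows images read the windows-startup-script-* keys rather than the generic startup-script key that metadata_startup_script sets):

resource "google_compute_instance" "vm_instance" {
  name         = "${var.instance_name}"
  zone         = "${var.instance_zone}"
  machine_type = "${var.instance_type}"

  boot_disk {
    initialize_params {
      image = "windows-cloud/windows-2016"
    }
  }

  # No access_config block here, so no external IP address is allocated
  network_interface {
    subnetwork = "${var.instance_subnetwork}"
  }

  # Install IIS on first boot via the Windows PowerShell startup-script metadata key
  metadata = {
    "windows-startup-script-ps1" = "Import-Module ServerManager; Add-WindowsFeature Web-Server -IncludeAllSubFeature"
  }
}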
Please refer to the following links for detailed information:
https://cloud.google.com/compute/docs/tutorials/basic-webserver-iis
https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/compute_instance#metadata_startup_script

reconcile unable to talk with Consul backend

I'm trying to set up a Docker container for my Vault/Consul but get the following error:
2017/06/22 18:15:58.335293 [WARN ] physical/consul: reconcile unable to talk with Consul backend: error=service registration failed: Put http://127.0.0.1:8500/v1/agent/service/register: dial tcp 127.0.0.1:8500: getsockopt: connection refused
Here is my vault config file.
storage "consul" {
address = "127.0.0.1:8500"
redirect_addr = "http:/127.0.0.1:8500"
path = "vault"
scheme = "http"
}
listener "tcp" {
address = "127.0.0.1:8200"
tls_disable = 1
}
#telemetry {
# statsite_address = "127.0.0.1:8125"
# disable_hostname = true
#}
Where is Consul?
This error says that Vault is trying to reach this URL: http://127.0.0.1:8500/v1/agent/service/register and can't.
This implies that either Consul isn't running, or it's running somewhere other than http://127.0.0.1:8500.
Find your Consul, and then update your config to point to it.
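For example, a hedged sketch of the storage stanza when Vault and Consul run as separate containers on the same Docker network (assumption: the Consul container is reachable by the hostname "consul"):

storage "consul" {
  # container name or host IP of the Consul agent instead of 127.0.0.1
  address = "consul:8500"
  path    = "vault"
  scheme  = "http"
}

listener "tcp" {
  address     = "0.0.0.0:8200"
  tls_disable = 1
}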
