Connect DB and backend using service name - Consul

I'm trying to use the Consul service name to connect my Nomad PostgreSQL job and my app.
Using the agent node where the job is running works fine, but my_job.service.consul does not work.
As described in application.properties, using an IP or hostname works, but the service name doesn't.
Can you help me?
Application.properties
#Enable ddl
spring.jpa.hibernate.ddl-auto=none
##Works
#spring.datasource.url=jdbc:postgresql://10.3.52.121:5432/postgres
#spring.datasource.url=jdbc:postgresql://vmc####26.dev.##.##.##.##.##:5432/postgres
##DON'T WORK WITH SERVICE NAME
#spring.datasource.url=jdbc:postgresql://pgsql-##.service.consul:5432/postgres
spring.datasource.url=jdbc:postgresql://${DB_SERVICE_NAME}:5432/postgres
spring.datasource.username=${POSTGRES_USER}
spring.datasource.password=${POSTGRES_PASSWORD}
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.PostgreSQLDialect
spring.jpa.hibernate.naming.physical-strategy=org.hibernate.boot.model.naming.PhysicalNamingStrategyStandardImpl
PostgreSQL job
job "pgsql-qa" {
  datacenters = ["###"]
  type        = "service"

  vault {
    policies    = ["###"]
    change_mode = "noop"
  }

  group "pgsql-qa" {
    count = 1

    task "pgsql-qa" {
      driver = "docker"

      config {
        image = "postgres"
        volumes = [
          "name=####pgsqldb,io_priority=high,size=5,repl=1:/var/lib/postgresql/data"
        ]
        volume_driver = "pxd"
        network_mode  = "bridge"
        port_map {
          db = 5432
        }
      }

      template {
        data = <<EOH
POSTGRES_USER="{{ with secret "app/###/###/db/admin" }}{{ .Data.data.user }}{{end}}"
POSTGRES_PASSWORD="{{ with secret "app/###/###/db/admin" }}{{ .Data.data.password }}{{end}}"
EOH
        destination = "secrets/db"
        env         = true
      }

      logs {
        max_files     = 5
        max_file_size = 15
      }

      resources {
        cpu    = 1000
        memory = 1024
        network {
          mbits = 10
          port "db" {
            static = 5432
          }
        }
      }

      service {
        name = "pgsql-qa"
        tags = ["urlprefix-pgsqlqadb proto=tcp"]
        port = "db"
        check {
          name     = "alive"
          type     = "tcp"
          interval = "10s"
          timeout  = "2s"
          port     = "db"
        }
      }
    }

    restart {
      attempts = 10
      interval = "5m"
      delay    = "25s"
      mode     = "delay"
    }
  }

  update {
    max_parallel     = 1
    min_healthy_time = "5s"
    healthy_deadline = "3m"
    auto_revert      = false
    canary           = 0
  }
}

I found the solution. Just add this line in the template {} section to get the IP and the random port created by Nomad (replace db_service_name with your service name):
MY_DB = "{{ range service "${db_service_name}" }}{{ .Address }}:{{ .Port }}{{ end }}"
Thanks
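For context, a minimal sketch of how that lookup might sit in the app job's template stanza. The service name pgsql-qa and the destination path are assumptions; adjust them to your job:

```hcl
# In the app task: render the Consul service address into an env file.
template {
  data = <<EOH
MY_DB = "{{ range service "pgsql-qa" }}{{ .Address }}:{{ .Port }}{{ end }}"
EOH
  destination = "secrets/app.env"
  env         = true
}
```

The app can then read MY_DB from its environment instead of resolving the .service.consul hostname itself, which sidesteps DNS resolution inside the container.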

Related

HashiCorp Nomad: why is my app not connecting to the database? (Postgres, Golang API, provider = "nomad")

Help required. I've tried many approaches, but unfortunately the app is not connecting to the database.
DB_CONN is the env variable the app expects, and the service provider is Nomad, so this should be simple service discovery.
I created two groups in the same job file (in fact I tried other layouts as well), but it did not work.
variable "datacenters" {
  description = "A list of datacenters in the region which are eligible for task placement."
  type        = list(string)
  default     = ["dc1"]
}

variable "region" {
  description = "The region where the job should be placed."
  type        = string
  default     = "global"
}

variable "postgres_db" {
  description = "Postgres DB name"
  default     = "vehicle_master_bd"
}

variable "postgres_user" {
  description = "Postgres DB User"
  default     = "postgres"
}

variable "postgres_password" {
  description = "Postgres DB Password"
  default     = "postgres"
}

# Begin Job Spec
job "cres-vehicleMaster" {
  type        = "service"
  region      = var.region
  datacenters = var.datacenters

  group "db" {
    network {
      port "db" {
        to = 5432
      }
    }

    task "postgres" {
      driver = "docker"

      meta {
        service = "database"
      }

      service {
        name     = "database"
        port     = "db"
        provider = "nomad"
      }

      config {
        image = "postgres"
        ports = ["db"]
      }

      resources {
        cpu    = 500
        memory = 500
      }

      env {
        POSTGRES_DB       = var.postgres_db
        POSTGRES_USER     = var.postgres_user
        POSTGRES_PASSWORD = var.postgres_password
      }
    }
  }

  group "vehicleMaster-api" {
    count = 1

    network {
      port "api" {
        to = 50080
      }
    }

    restart {
      attempts = 2
      interval = "30m"
      delay    = "15s"
      mode     = "fail"
    }

    task "vehicleMaster-api" {
      driver = "docker"

      service {
        name     = "vehicleMaster-api"
        tags     = ["vehicleMaster", "RESTAPI"]
        port     = "api"
        provider = "nomad"
      }

      template {
        data        = <<EOH
{{ range nomadService "database" }}
DB_CONN="host={{ .Address }} port={{ .Port }} user=${var.postgres_user} password=${var.postgres_password} dbname=${var.postgres_db} sslmode=disable"
{{ end }}
EOH
        destination = "local/env.txt"
        env         = true
      }

      env {
        // DB_CONN = "host=127.0.0.1 port=5432 user=postgres password=postgres dbname=vehicle_master_bd sslmode=disable"
        // DB_CONN = "host=${NOMAD_IP_postgres} port=${NOMAD_PORT_postgres} user=${var.postgres_user} password=${var.postgres_password} dbname=${var.postgres_db} sslmode=disable"
        PORT = "50080"
      }

      config {
        image = "jpalaparthi/vehiclemaster:v0.0.3"
        ports = ["api"]
      }

      resources {
        cpu    = 500 # 500 MHz
        memory = 512 # 512MB
      }
    }
  }
}

finding proxmox ip address with terraform and Ansible

I have this code:
terraform {
  required_providers {
    proxmox = {
      source  = "telmate/proxmox"
      version = "2.8.0"
    }
  }
}

provider "proxmox" {
  pm_api_url      = "https://url/api2/json"
  pm_user         = "user"
  pm_password     = "pass"
  pm_tls_insecure = true
}

resource "proxmox_vm_qemu" "test" {
  count                     = 1
  name                      = "test-${count.index + 1}"
  target_node               = "prm01"
  clone                     = "image-here"
  guest_agent_ready_timeout = 60
  os_type                   = "cloud-init"
  cores                     = 2
  sockets                   = 1
  cpu                       = "host"
  memory                    = 4048
  scsihw                    = "virtio-scsi-pci"
  bootdisk                  = "scsi0"

  disk {
    slot     = 0
    size     = "32G"
    type     = "scsi"
    storage  = "local-lvm"
    iothread = 1
  }

  network {
    model  = "virtio"
    bridge = "vmbr0"
  }

  lifecycle {
    ignore_changes = [
      network,
    ]
  }
}

output "proxmox_ip_address_default" {
  description = "Current IP Default"
  value       = proxmox_vm_qemu.test.*.default_ipv4_address
}
This is created via an Ansible playbook. What I'm trying to find is the IP assigned to the machine, since I then run another playbook to provision it. The problem is that I haven't found any way to get the assigned IP address of that machine.
The output is empty!
Any help?
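One common cause, offered here as an assumption rather than a confirmed diagnosis: the Telmate provider reads default_ipv4_address from the QEMU guest agent, so the agent must be enabled on the VM and the qemu-guest-agent package must be installed and running inside the cloned image. A minimal sketch:

```hcl
resource "proxmox_vm_qemu" "test" {
  # ... existing arguments from the resource above ...

  # Ask Proxmox to enable the QEMU guest agent for this VM.
  # qemu-guest-agent must also be installed in the cloned image,
  # otherwise default_ipv4_address stays empty.
  agent = 1
}
```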

Terraform AWS-EC2 security groups

I'm quite busy trying to learn more about Terraform, but I'm stuck on one problem that I have no clue how to work around or fix.
The problem is as follows: in my script I'm generating an EC2 instance (AWS) with a couple of extras like an EIP and a security group from a module, which all works fine. But I cannot figure out how to attach the security group to the machine; right now it's created and that's it.
The code is as follows:
data "aws_ami" "latest" {
  most_recent = true
  owners      = ["self"]
  filter {
    name   = "name"
    values = [lookup(var.default_ami, var.ami)]
  }
}

module "aws_security_group" {
  source      = "./modules/services/Security groups/"
  server_port = 443
}

resource "aws_instance" "test-ec2deployment" {
  ami                         = data.aws_ami.latest.id
  instance_type               = var.instance_type
  subnet_id                   = var.subnet_id
  availability_zone           = var.availability_zone
  associate_public_ip_address = var.public_ip

  root_block_device {
    volume_type           = "gp2"
    volume_size           = 60
    delete_on_termination = true
  }

  tags = {
    Name = "Testserver2viaTerraform"
  }
}

resource "aws_eip" "ip" {
  instance = aws_instance.test-ec2deployment.id
}

resource "aws_eip" "example" {
  vpc = true
}
Above is the main file, and I'm loading the following module:
resource "aws_security_group" "my-webserver" {
  name        = "webserver"
  description = "Allow HTTP from Anywhere"
  vpc_id      = "vpc-"

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "my-webserver"
    Site = "my-web-site"
  }
}
The last step is attaching the security group to the machine, but again, I have no clue how to do that. I've read several docs and tried Google, but I can't seem to find the answer, or the answer doesn't work for me. Hopefully you can help me further.
Thanks for your time, much appreciated!
In the aws_security_group module you need to output the security group ID by adding the following to ./modules/services/Security groups//main.tf:
output "securitygroup_id" {
  value = aws_security_group.my-webserver.id
}
Then, in your main .tf file, attach the security group to your instance like this:
resource "aws_network_interface_sg_attachment" "sg_attachment" {
  security_group_id    = module.aws_security_group.securitygroup_id
  network_interface_id = aws_instance.test-ec2deployment.primary_network_interface_id
}
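As an alternative (not from the original answer, but standard Terraform AWS usage): for instances in a VPC you can also attach the group directly on the instance with vpc_security_group_ids, avoiding the separate attachment resource:

```hcl
resource "aws_instance" "test-ec2deployment" {
  # ... existing arguments from the main file ...

  # Attach the module's security group directly at launch.
  vpc_security_group_ids = [module.aws_security_group.securitygroup_id]
}
```

The attachment-resource approach is mainly useful when the instance is managed elsewhere or you want to manage group membership independently of the instance.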

Nomad HTTP health check does not seem to occur

It seems the HTTP health check is not occurring; I've come to this conclusion because the HTTP debug log shows no regular periodic requests.
Is there any additional configuration required for a health check to occur?
job "example" {
  datacenters = ["dc1"]
  type        = "service"

  update {
    max_parallel      = 1
    min_healthy_time  = "10s"
    healthy_deadline  = "3m"
    progress_deadline = "10m"
    auto_revert       = false
    canary            = 0
  }

  migrate {
    max_parallel     = 1
    health_check     = "checks"
    min_healthy_time = "10s"
    healthy_deadline = "5m"
  }

  group "app" {
    count = 1

    restart {
      attempts = 2
      interval = "30m"
      delay    = "15s"
      mode     = "fail"
    }

    ephemeral_disk {
      size = 300
    }

    task "app" {
      driver = "docker"

      config {
        image   = "localhost:5000/myhub:latest"
        command = "python"
        args = [
          "manage.py",
          "runserver",
          "0.0.0.0:8001"
        ]
        port_map {
          app = 8001
        }
        network_mode = "host"
      }

      resources {
        cpu    = 500
        memory = 256
        network {
          mbits = 10
          port "app" {}
        }
      }

      service {
        name = "myhub"
        port = "app"
        check {
          name     = "alive"
          type     = "http"
          port     = "app"
          path     = "/"
          interval = "10s"
          timeout  = "3s"
        }
      }
    }
  }
}
It seems Consul must be installed for this to occur; Nomad registers the service and its checks with Consul, which actually runs them.
Also make sure to install Consul v1.4.2 or later, as v1.4.1 seems to have a bug: https://github.com/hashicorp/consul/issues/5270
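For illustration, a minimal sketch of pointing the Nomad agent at a local Consul agent so service registration and health checks can work. The address is an assumption; adjust it to your cluster:

```hcl
# Nomad agent configuration (e.g. /etc/nomad.d/consul.hcl).
consul {
  # Local Consul agent that registers Nomad services
  # and runs the health checks defined in job specs.
  address = "127.0.0.1:8500"
}
```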

Terraform - Azure Windows VM winrm connection issue

I want to create a Windows Azure VM, copy some files, and run some simple commands on that VM using a Terraform script.
The problem is: I am able to create the VM but not able to connect via WinRM.
provider "azurerm" {
  subscription_id = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
  tenant_id       = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
}

resource "azurerm_virtual_network" "vnet" {
  name                = "cmTFVnet"
  address_space       = ["10.0.0.0/16"]
  location            = "South India"
  resource_group_name = "cservice"
}

resource "azurerm_subnet" "subnet" {
  name                 = "cmTFSubnet"
  resource_group_name  = "cservice"
  virtual_network_name = "${azurerm_virtual_network.vnet.name}"
  address_prefix       = "10.0.2.0/24"
}

resource "azurerm_public_ip" "publicip" {
  name                         = "cmTFPublicIP"
  location                     = "South India"
  resource_group_name          = "cservice"
  public_ip_address_allocation = "dynamic"
}

resource "azurerm_network_security_group" "nsg" {
  name                = "cmTFNSG"
  location            = "South India"
  resource_group_name = "cservice"

  security_rule {
    name                       = "SSH"
    priority                   = 340
    direction                  = "Inbound"
    access                     = "Allow"
    protocol                   = "Tcp"
    source_port_range          = "*"
    destination_port_range     = "22"
    source_address_prefix      = "*"
    destination_address_prefix = "*"
  }

  security_rule {
    name                       = "winrm"
    priority                   = 1010
    direction                  = "Inbound"
    access                     = "Allow"
    protocol                   = "Tcp"
    source_port_range          = "*"
    destination_port_range     = "5985"
    source_address_prefix      = "*"
    destination_address_prefix = "*"
  }

  security_rule {
    name                       = "winrm-out"
    priority                   = 100
    direction                  = "Outbound"
    access                     = "Allow"
    protocol                   = "*"
    source_port_range          = "*"
    destination_port_range     = "5985"
    source_address_prefix      = "*"
    destination_address_prefix = "*"
  }
}

resource "azurerm_network_interface" "nic" {
  name                      = "cmNIC"
  location                  = "South India"
  resource_group_name       = "cservice"
  network_security_group_id = "${azurerm_network_security_group.nsg.id}"

  ip_configuration {
    name                          = "compilerNICConfg"
    subnet_id                     = "${azurerm_subnet.subnet.id}"
    private_ip_address_allocation = "dynamic"
    public_ip_address_id          = "${azurerm_public_ip.publicip.id}"
  }
}

resource "azurerm_virtual_machine" "vm" {
  name                  = "cmTFVM"
  location              = "South India"
  resource_group_name   = "cservice"
  network_interface_ids = ["${azurerm_network_interface.nic.id}"]
  vm_size               = "Standard_D2s_v3"

  storage_image_reference {
    id = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
  }

  storage_os_disk {
    name              = "cmOsDisk"
    managed_disk_type = "Premium_LRS"
    create_option     = "FromImage"
  }

  os_profile {
    computer_name  = "hostname"
    admin_username = "test"
    admin_password = "test#123"
  }

  os_profile_windows_config {
    enable_automatic_upgrades = "true"
    provision_vm_agent        = "true"
    winrm {
      protocol = "http"
    }
  }

  provisioner "remote-exec" {
    connection {
      type     = "winrm"
      user     = "test"
      password = "test#123"
      agent    = "false"
      https    = false
      insecure = true
    }
    inline = [
      "cd..",
      "cd..",
      "cd docker",
      "mkdir test"
    ]
  }
}
The VM is created successfully, but I'm not able to connect via WinRM; I get the following error from "remote-exec":
azurerm_virtual_machine.vm: timeout - last error: unknown error Post
http://:5985/wsman: dial tcp :5985: connectex: A connection attempt
failed because the connected party did not properly respond after a
period of time, or established connection failed because connected
host has failed to respond.
or: http response error: 401 - invalid content type
When you create a Windows Azure VM, WinRM is not configured by default. So if you want to connect to the VM through WinRM, you should configure it after the VM is created, or at creation time.
You can follow the steps in Configure WinRM after virtual machine creation. You can also configure it at creation time; there is an example showing this through an Azure template, which may also help: see Deploys a Windows VM and Configures a WinRM Https listener.
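For illustration only, a sketch (the extension name and handler version are assumptions; verify against your azurerm provider version) of enabling the basic HTTP WinRM listener at creation time with a custom script extension, rather than the template approach the links describe:

```hcl
resource "azurerm_virtual_machine_extension" "winrm" {
  name                 = "enable-winrm"
  location             = "South India"
  resource_group_name  = "cservice"
  virtual_machine_name = "${azurerm_virtual_machine.vm.name}"
  publisher            = "Microsoft.Compute"
  type                 = "CustomScriptExtension"
  type_handler_version = "1.9"

  # Configure the default WinRM HTTP listener on port 5985.
  # Plain HTTP is fine for a quick test; production should use HTTPS.
  settings = <<SETTINGS
{
  "commandToExecute": "winrm quickconfig -q"
}
SETTINGS
}
```

With the listener up, the remote-exec provisioner's winrm connection on port 5985 should be able to authenticate instead of timing out.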
