ERROR HTTP Status 405 - Method Not Allowed when using GCP load balancer - spring

I have an application made in Spring Boot that retrieves data from a remote PostgreSQL database. It works well locally (local app to local DB), from localhost to the remote DB, and with all resources on GCP (a VM with a Tomcat server that hosts the application, plus a Cloud SQL for PostgreSQL database). The last part of my PoC is to host my application in my instance group with a load balancer attached. When I reach my load balancer I can see my welcome page, where I use Spring Security to log in (it recovers the credentials from the same PostgreSQL database), but the login doesn't work and I receive the following error:
[screenshot: LB error page]
And when I check my catalina.out log, it shows me the following error:
11:34 ERROR 893 --- [io-8080-exec-11] o.s.b.w.servlet.support.ErrorPageFilter : Cannot forward to error page for request [/login] as the response has already been committed. As a result, the response may have the wrong status code. If your application is running on WebSphere Application Server you may be able to resolve this problem by setting com.ibm.ws.webcontainer.invokeFlushAfterService to false
11:35 WARN 893 --- [nio-8080-exec-9] .w.s.m.s.DefaultHandlerExceptionResolver : Resolved [org.springframework.web.HttpRequestMethodNotSupportedException: Request method 'POST' not supported]
I'm going to share my LB Terraform code in case it helps; the LB part seems to be the problem in my PoC.
provider "google-beta" {
project = var.project
region = "us-central1"
credentials = "C:/Users/jperezgarcia/Desktop/Terraform/GCP/credentials/mario.json"
}
resource "google_compute_region_ssl_certificate" "ssl-crt" {
provider = google-beta
project = var.project
name_prefix = "my-certificate-"
region = var.lb_region
private_key = file("lb_http/certificate/privateKey.key")
certificate = file("lb_http/certificate/certificate.crt")
lifecycle {
create_before_destroy = true
}
}
resource "google_compute_forwarding_rule" "lb-front-HTTP" {
provider = google-beta
project = var.project
name = var.lb_front_name
load_balancing_scheme = "INTERNAL_MANAGED"
port_range = var.lb_front_port_range
target = google_compute_region_target_http_proxy.lb-proxy-http.self_link
region = var.lb_region
network = var.lb_network
subnetwork = var.lb_subnetwork
# ip_address = "10.10.20.5"
}
resource "google_compute_forwarding_rule" "lb-front-HTTPS" {
provider = google-beta
project = var.project
name = "lb-https-front"
port_range = "443"
load_balancing_scheme = "INTERNAL_MANAGED"
# ip_address = "10.10.20.5"
target = google_compute_region_target_https_proxy.lb-proxy-https.self_link
region = var.lb_region
network = var.lb_network
subnetwork = var.lb_subnetwork
}
resource "google_compute_region_target_http_proxy" "lb-proxy-http" {
provider = google-beta
name = var.lb_proxy_name
region = var.lb_region
project = var.project
url_map = google_compute_region_url_map.lb_url_map.self_link
}
resource "google_compute_region_target_https_proxy" "lb-proxy-https" {
provider = google-beta
name = "test-proxy"
region = var.lb_region
project = var.project
url_map = google_compute_region_url_map.lb_url_map.self_link
ssl_certificates = [google_compute_region_ssl_certificate.ssl-crt.id]
}
resource "google_compute_region_url_map" "lb_url_map" {
provider = google-beta
project = var.project
name = var.url_map_name
region = var.lb_region
default_service = google_compute_region_backend_service.lb-backend.self_link
}
resource "google_compute_region_backend_service" "lb-backend" {
provider = google-beta
name = var.lb_backend_name
region = var.lb_region
project = var.project
load_balancing_scheme = "INTERNAL_MANAGED"
port_name = var.lb_backend_port_name
protocol = var.lb_backend_protocol
timeout_sec = var.lb_backend_timeout
health_checks = [var.healthcheck_output]
locality_lb_policy = "ROUND_ROBIN"
session_affinity = "GENERATED_COOKIE"
affinity_cookie_ttl_sec= 3600
log_config {
enable = true
}
backend {
group = var.ig_id
balancing_mode = "UTILIZATION"
capacity_scaler = 1.0
}
}
Thanks for any help here.

I resolved it by configuring sticky sessions through a cookie generated by the load balancer itself. I was trying to do it with a round-robin LB, but that doesn't make sense if you have to keep your session alive; you must use ring hash. I'll share the script (look at the backend service):
provider "google-beta" {
project = var.project
region = var.region
credentials = var.credentials
}
resource "google_compute_region_ssl_certificate" "ssl-crt" {
provider = google-beta
project = var.project
name_prefix = var.certificate_name
region = var.lb_region
private_key = file("lb_http/certificate/privateKey.key")
certificate = file("lb_http/certificate/certificate.crt")
lifecycle {
create_before_destroy = true
}
}
resource "google_compute_forwarding_rule" "lb-front-HTTP" {
provider = google-beta
project = var.project
name = var.lb_http_front_name
load_balancing_scheme = "INTERNAL_MANAGED"
port_range = var.lb_front_port_range
target = google_compute_region_target_http_proxy.lb-proxy-http.self_link
region = var.lb_region
network = var.lb_network
subnetwork = var.lb_subnetwork
# ip_address = "10.10.20.5"
}
resource "google_compute_forwarding_rule" "lb-front-HTTPS" {
provider = google-beta
project = var.project
name = "lb-https-front"
port_range = "443"
load_balancing_scheme = "INTERNAL_MANAGED"
# ip_address = "10.10.20.5"
target = google_compute_region_target_https_proxy.lb-proxy-https.self_link
region = var.lb_region
network = var.lb_network
subnetwork = var.lb_subnetwork
}
resource "google_compute_region_target_http_proxy" "lb-proxy-http" {
provider = google-beta
name = var.lb_proxy_name
region = var.lb_region
project = var.project
url_map = google_compute_region_url_map.lb_url_map.self_link
}
resource "google_compute_region_target_https_proxy" "lb-proxy-https" {
provider = google-beta
name = "test-proxy"
region = var.lb_region
project = var.project
url_map = google_compute_region_url_map.lb_url_map.self_link
ssl_certificates = [google_compute_region_ssl_certificate.ssl-crt.id]
}
resource "google_compute_region_url_map" "lb_url_map" {
provider = google-beta
project = var.project
name = var.url_map_name
region = var.lb_region
default_service = google_compute_region_backend_service.lb-backend.self_link
}
resource "google_compute_region_backend_service" "lb-backend" {
provider = google-beta
name = var.lb_backend_name
region = var.lb_region
project = var.project
load_balancing_scheme = "INTERNAL_MANAGED"
port_name = var.lb_backend_port_name
protocol = var.lb_backend_protocol
timeout_sec = var.lb_backend_timeout
health_checks = [var.healthcheck_output]
locality_lb_policy = "RING_HASH"
session_affinity = "GENERATED_COOKIE"
affinity_cookie_ttl_sec = 3600
connection_draining_timeout_sec = 300
log_config {
enable = true
}
consistent_hash {
minimum_ring_size = 3
http_cookie {
ttl {
seconds = 11
nanos = 1111
}
name = "mycookie"
}
}
backend {
group = var.ig_id
balancing_mode = "UTILIZATION"
capacity_scaler = 1.0
}
}

Related

Unable to create EC2 using Terraform. Route Table Association stuck in creating mode

I am trying to create a simple infrastructure that includes EC2, a VPC, and internet connectivity with an Internet Gateway, but while the infrastructure is being created through terraform apply, the terminal output gets stuck in "creating" mode for approximately 5-6 minutes on the route table association (by subnet id) and then finally throws an error that the vpc-id, route-table-id, and subnet-id do not exist or cannot be found.
Sharing the relevant code below:
resource "aws_route_table" "dev-public-crt" {
vpc_id = "aws_vpc.main-vpc.id"
route {
cidr_block = "0.0.0.0/0"
gateway_id = "aws_internet_gateway.dev-igw.id"
}
tags = {
Name = "dev-public-crt"
}
}
resource "aws_route_table_association" "dev-crta-public-subnet-1"{
subnet_id = "aws_subnet.dev-subnet-public-1.id"
route_table_id = "aws_route_table.dev-public-crt.id"
}
resource "aws_vpc" "dev-vpc" {
cidr_block = "10.0.0.0/16"
tags = {
Name = "dev-vpc"
}
}
resource "aws_subnet" "dev-subnet-public-1" {
vpc_id = "aws_vpc.dev-vpc.id"
cidr_block = "10.0.1.0/24"
map_public_ip_on_launch = "true"
tags = {
Name = "dev-subnet-public-1"
}
}
You need to remove the " around all the reference values you have there: vpc_id = "aws_vpc.main-vpc.id" should be vpc_id = aws_vpc.main-vpc.id, etc. Otherwise you try to create an aws_route_table in the VPC with the literal id "aws_vpc.main-vpc.id".
Whenever you want to reference variables, resources, or data sources, either do not wrap the reference in " at all, or interpolate it inside a string using "something ${aws_vpc.main-vpc.id} ...".
The result should probably look like:
resource "aws_route_table" "dev-public-crt" {
vpc_id = aws_vpc.main-vpc.id
route {
cidr_block = "0.0.0.0/0"
gateway_id = aws_internet_gateway.dev-igw.id
}
tags = {
Name = "dev-public-crt"
}
}
resource "aws_route_table_association" "dev-crta-public-subnet-1"{
subnet_id = aws_subnet.dev-subnet-public-1.id
route_table_id = aws_route_table.dev-public-crt.id
}
resource "aws_vpc" "dev-vpc" {
cidr_block = "10.0.0.0/16"
tags = {
Name = "dev-vpc"
}
}
resource "aws_subnet" "dev-subnet-public-1" {
vpc_id = aws_vpc.dev-vpc.id
cidr_block = "10.0.1.0/24"
map_public_ip_on_launch = "true"
tags = {
Name = "dev-subnet-public-1"
}
}
No guarantee that this works, because now there could be invalid references, but those need to be cleaned up by you.

create azure vm from custom image using terraform error

I need to provision VMs in Azure from a custom image using Terraform, and everything works fine with an image from the marketplace, but when I try to specify my custom image an error is returned. I have been banging my head all day on this issue.
Here is my tf script:
resource "azurerm_windows_virtual_machine" "tftest" {
name = "myazurevm"
location = "eastus"
resource_group_name = "myresource-rg"
network_interface_ids = [azurerm_network_interface.azvm1nic.id]
size = "Standard_B1s"
storage_image_reference {
id = "/subscriptions/xxxxxxxxxxxxxxxxxxxxxxxxxxxxx/resourceGroups/xxxxx/providers/Microsoft.Compute/images/mytemplate"
}
storage_os_disk {
name = "my-os-disk"
create_option = "FromImage"
managed_disk_type = "Premium_LRS"
}
storage_data_disk {
name = "my-data-disk"
managed_disk_type = "Premium_LRS"
disk_size_gb = 75
create_option = "FromImage"
lun = 0
}
os_profile {
computer_name = "myvmazure"
admin_username = "admin"
admin_password = "test123"
}
os_profile_windows_config {
provision_vm_agent = true
}
}
Here is the error returned during the plan phase:
2020-07-17T20:02:26.9367986Z ==============================================================================
2020-07-17T20:02:26.9368212Z Task : Terraform
2020-07-17T20:02:26.9368456Z Description : Execute terraform commands to manage resources on AzureRM, Amazon Web Services(AWS) and Google Cloud Platform(GCP)
2020-07-17T20:02:26.9368678Z Version : 0.0.142
2020-07-17T20:02:26.9368852Z Author : Microsoft Corporation
2020-07-17T20:02:26.9369049Z Help : [Learn more about this task](https://aka.ms/AA5j5pf)
2020-07-17T20:02:26.9369262Z ==============================================================================
2020-07-17T20:02:27.2826725Z [command]D:\agent\_work\_tool\terraform\0.12.3\x64\terraform.exe providers
2020-07-17T20:02:27.5303002Z .
2020-07-17T20:02:27.5304176Z └── provider.azurerm
2020-07-17T20:02:27.5304628Z
2020-07-17T20:02:27.5363313Z [command]D:\agent\_work\_tool\terraform\0.12.3\x64\terraform.exe plan
2020-07-17T20:02:29.7685150Z
2020-07-17T20:02:29.7788471Z Error: Insufficient os_disk blocks
2020-07-17T20:02:29.7792789Z
2020-07-17T20:02:29.7793007Z   on line 0:
2020-07-17T20:02:29.7793199Z   (source code not available)
2020-07-17T20:02:29.7793305Z
2020-07-17T20:02:29.7793472Z At least 1 "os_disk" blocks are required.
2020-07-17T20:02:29.7793660Z
2020-07-17T20:02:29.7793800Z
2020-07-17T20:02:29.7793975Z Error: Missing required argument
Do you have any suggestions to locate the issue?
I have finally figured out the issue. I was using the wrong terraform resource:
wrong --> azurerm_windows_virtual_machine
correct --> azurerm_virtual_machine
azurerm_windows_virtual_machine doesn't support arguments like storage_os_disk and storage_data_disk, and it is not the right resource for custom images unless the image is published in a Shared Image Gallery.
See the documentation for the options supported by each resource:
https://www.terraform.io/docs/providers/azurerm/r/virtual_machine.html
https://www.terraform.io/docs/providers/azurerm/r/windows_virtual_machine.html
First do this:
https://learn.microsoft.com/pt-br/azure/virtual-machines/windows/upload-generalized-managed?toc=%2Fazure%2Fvirtual-machines%2Fwindows%2Ftoc.json
Then here is all my code:
resource "azurerm_resource_group" "example" {
name = "example-resources1"
location = "West Europe"
}
resource "azurerm_virtual_network" "example" {
name = "example-network1"
address_space = ["10.0.0.0/16"]
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
}
resource "azurerm_subnet" "example" {
name = "internal1"
resource_group_name = azurerm_resource_group.example.name
virtual_network_name = azurerm_virtual_network.example.name
address_prefixes = ["10.0.2.0/24"]
}
resource "azurerm_network_interface" "example" {
name = "example-nic1"
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
ip_configuration {
name = "internal1"
subnet_id = azurerm_subnet.example.id
private_ip_address_allocation = "Dynamic"
}
}
resource "azurerm_virtual_machine" "example" {
name = "example-machine1"
resource_group_name = azurerm_resource_group.example.name
location = azurerm_resource_group.example.location
vm_size = "Standard_B1s"
network_interface_ids = [
azurerm_network_interface.example.id,
]
storage_image_reference {
id = "/subscriptions/XXXXXXXXXXXXX/resourceGroups/ORIGEM/providers/Microsoft.Compute/images/myImage"
//just copy the id from the image that you created
}
storage_os_disk {
name = "my-os-disk"
create_option = "FromImage"
managed_disk_type = "Premium_LRS"
}
os_profile {
computer_name = "myvmazure"
admin_username = "adminusername"
admin_password = "testenovo#123"
}
os_profile_windows_config {
provision_vm_agent = true
}
}
//below, the code to call PowerShell through a custom script extension
resource "azurerm_virtual_machine_extension" "software" {
name = "install-software"
//resource_group_name = azurerm_resource_group.example.name
virtual_machine_id = azurerm_virtual_machine.example.id
publisher = "Microsoft.Compute"
type = "CustomScriptExtension"
type_handler_version = "1.9"
protected_settings = <<SETTINGS
{
"commandToExecute": "powershell -encodedCommand ${textencodebase64(file("install.ps1"), "UTF-16LE")}"
}
SETTINGS
}
You can use a custom image with the "azurerm_windows_virtual_machine" resource by setting the "source_image_id" parameter. The documentation notes that "One of either source_image_id or source_image_reference must be set." One is used for marketplace/gallery images and the other for managed images, as sketched below.
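As a minimal, hedged illustration of that approach (the resource group, NIC reference, credentials and image ID below are placeholders, not values taken from the question):

resource "azurerm_windows_virtual_machine" "from_custom_image" {
  name                  = "myazurevm"
  resource_group_name   = azurerm_resource_group.example.name
  location              = azurerm_resource_group.example.location
  size                  = "Standard_B1s"
  admin_username        = "adminusername"
  admin_password        = "ChangeMe#123"
  network_interface_ids = [azurerm_network_interface.example.id]

  # Managed (custom) image referenced by ID; use a source_image_reference
  # block instead for marketplace or Shared Image Gallery images.
  source_image_id = "/subscriptions/xxxx/resourceGroups/xxxx/providers/Microsoft.Compute/images/mytemplate"

  # This resource configures the OS disk through os_disk, not storage_os_disk.
  os_disk {
    caching              = "ReadWrite"
    storage_account_type = "Premium_LRS"
  }
}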

Terraform - Azure Windows VM winrm connection issue

I want to create a Windows Azure VM, copy some files, and run some simple commands on that VM using a Terraform script.
The problem is: I am able to create the VM but not able to connect via WinRM.
provider "azurerm" {
subscription_id = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
tenant_id = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
}
resource "azurerm_virtual_network" "vnet" {
name = "cmTFVnet"
address_space = ["10.0.0.0/16"]
location = "South India"
resource_group_name = "cservice"
}
resource "azurerm_subnet" "subnet" {
name = "cmTFSubnet"
resource_group_name = "cservice"
virtual_network_name = "${azurerm_virtual_network.vnet.name}"
address_prefix = "10.0.2.0/24"
}
resource "azurerm_public_ip" "publicip" {
name = "cmTFPublicIP"
location = "South India"
resource_group_name = "cservice"
public_ip_address_allocation = "dynamic"
}
resource "azurerm_network_security_group" "nsg" {
name = "cmTFNSG"
location = "South India"
resource_group_name = "cservice"
security_rule {
name = "SSH"
priority = 340
direction = "Inbound"
access = "Allow"
protocol = "Tcp"
source_port_range = "*"
destination_port_range = "22"
source_address_prefix = "*"
destination_address_prefix = "*"
}
security_rule {
name = "winrm"
priority = 1010
direction = "Inbound"
access = "Allow"
protocol = "Tcp"
source_port_range = "*"
destination_port_range = "5985"
source_address_prefix = "*"
destination_address_prefix = "*"
}
security_rule {
name = "winrm-out"
priority = 100
direction = "Outbound"
access = "Allow"
protocol = "*"
source_port_range = "*"
destination_port_range = "5985"
source_address_prefix = "*"
destination_address_prefix = "*"
}
}
resource "azurerm_network_interface" "nic" {
name = "cmNIC"
location = "South India"
resource_group_name = "cservice"
network_security_group_id = "${azurerm_network_security_group.nsg.id}"
ip_configuration {
name = "compilerNICConfg"
subnet_id = "${azurerm_subnet.subnet.id}"
private_ip_address_allocation = "dynamic"
public_ip_address_id = "${azurerm_public_ip.publicip.id}"
}
}
resource "azurerm_virtual_machine" "vm" {
name = "cmTFVM"
location = "South India"
resource_group_name = "cservice"
network_interface_ids = ["${azurerm_network_interface.nic.id}"]
vm_size = "Standard_D2s_v3"
storage_image_reference
{
id = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
}
storage_os_disk {
name = "cmOsDisk"
managed_disk_type = "Premium_LRS"
create_option = "FromImage"
}
os_profile {
computer_name = "hostname"
admin_username = "test"
admin_password = "test#123"
}
os_profile_windows_config {
enable_automatic_upgrades = "true"
provision_vm_agent ="true"
winrm = {
protocol = "http"
}
}
provisioner "remote-exec" {
connection = {
type = "winrm"
user = "test"
password = "test#123"
agent = "false"
https = false
insecure = true
}
inline = [
"cd..",
"cd..",
"cd docker",
"mkdir test"
]
}
}
The VM is created successfully, but I am not able to connect via WinRM; I am getting the following error from "remote-exec":
azurerm_virtual_machine.vm: timeout - last error: unknown error Post
http://:5985/wsman: dial tcp :5985: connectex: A connection attempt
failed because the connected party did not properly respond after a
period of time, or established connection failed because connected
host has failed to respond.
or http response error: 401 - invalid content type
When you create a Windows Azure VM, WinRM is not configured by default. So if you want to connect to the VM through WinRM, you should configure WinRM after the VM is created, or at creation time.
You can follow the steps in Configure WinRM after virtual machine creation. You can also configure it at creation time; there is an example that shows this through an Azure template, which may also provide a little help. See Deploys a Windows VM and Configures a WinRM Https listener. A rough Terraform-side sketch of the creation-time route follows.
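The sketch below is only an assumption-laden starting point (it assumes plain HTTP WinRM on port 5985 is acceptable for a PoC, a recent azurerm provider, and it reuses the azurerm_virtual_machine.vm resource name from the question): declare the listener in the Windows profile and enable the WinRM service with a CustomScriptExtension once the VM exists.

# Inside azurerm_virtual_machine.vm, replacing the "winrm = {" assignment:
os_profile_windows_config {
  provision_vm_agent        = true
  enable_automatic_upgrades = true
  winrm {
    protocol = "HTTP"   # HTTPS would additionally need certificate_url
  }
}

# Separate resource: turn the WinRM service on after the VM exists.
# (virtual_machine_id is the azurerm 2.x+ argument; older 1.x providers
# use virtual_machine_name/resource_group_name/location instead.)
resource "azurerm_virtual_machine_extension" "enable_winrm" {
  name                 = "enable-winrm"
  virtual_machine_id   = azurerm_virtual_machine.vm.id
  publisher            = "Microsoft.Compute"
  type                 = "CustomScriptExtension"
  type_handler_version = "1.9"

  settings = <<SETTINGS
{
  "commandToExecute": "winrm quickconfig -q"
}
SETTINGS
}

Whether winrm quickconfig is sufficient also depends on the image's firewall rules and network profile, so treat this as a sketch rather than the linked documentation's exact procedure.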

create azure vm from image using terraform

I have taken the GitHub code below as a reference. Please find the URL:
https://github.com/terraform-providers/terraform-provider-azurerm/tree/master/examples/vm-from-managed-image
I modified the scripts and executed terraform init. I received the error below.
Error reading config for azurerm_network_interface[main]: parse error at 1:18: expected ")" but found "."
My Script :
# Configure the Microsoft Azure Provider
provider "azurerm" {
subscription_id = "xxxxxxxx"
client_id = "xxxxxxxx"
client_secret = "xxxxxxxx"
tenant_id = "xxxxxxxx"
}
# Locate the existing custom/golden image
data "azurerm_image" "search" {
name = "AZLXSPTDEVOPS01_Image"
resource_group_name = "RG-PLATFORM"
}
output "image_id" {
value = "/subscriptions/4f5c9f2a-3584-4bbd-a26e-bbf69ffbfbe6/resourceGroups/RG-EASTUS-SPT-PLATFORM/providers/Microsoft.Compute/images/AZLXSPTDEVOPS01_Image"
}
# Create a Resource Group for the new Virtual Machine.
resource "azurerm_resource_group" "main" {
name = "RG-TEST"
location = "eastus"
}
# Create a Virtual Network within the Resource Group
resource "azurerm_virtual_network" "main" {
name = "RG-Vnet"
address_space = ["10.100.0.0/16"]
resource_group_name = "${azurerm_resource_group.main.name}"
location = "${azurerm_resource_group.main.location}"
}
# Create a Subnet within the Virtual Network
resource "azurerm_subnet" "internal" {
name = "RG-Terraform-snet-in"
virtual_network_name = "${azurerm_virtual_network.main.name}"
resource_group_name = "${azurerm_resource_group.main.name}"
address_prefix = "10.100.2.0/24"
}
# Create a Network Security Group with some rules
resource "azurerm_network_security_group" "main" {
name = "RG-QA-Test-Web-NSG"
location = "${azurerm_resource_group.main.location}"
resource_group_name = "${azurerm_resource_group.main.name}"
security_rule {
name = "allow_SSH"
description = "Allow SSH access"
priority = 100
direction = "Inbound"
access = "Allow"
protocol = "Tcp"
source_port_range = "*"
destination_port_range = "22"
source_address_prefix = "*"
destination_address_prefix = "*"
}
}
# Create a network interface for VMs and attach the PIP and the NSG
resource "azurerm_network_interface" "main" {
name = "myNIC"
location = "${azurerm_resource_group.main.location}"
resource_group_name = "${azurerm_resource_group.main.name}"
network_security_group_id = "${azurerm_network_security_group.main.id}"
ip_configuration {
name = "primary"
subnet_id = "${azurerm_subnet.internal.id}"
private_ip_address_allocation = "static"
private_ip_address = "${cidrhost("10.100.1.8/24", 4)}"
}
}
# Create a new Virtual Machine based on the Golden Image
resource "azurerm_virtual_machine" "vm" {
name = "AZLXSPTDEVOPS01"
location = "${azurerm_resource_group.main.location}"
resource_group_name = "${azurerm_resource_group.main.name}"
network_interface_ids = ["${azurerm_network_interface.main.id}"]
vm_size = "Standard_DS12_v2"
delete_os_disk_on_termination = true
delete_data_disks_on_termination = true
storage_image_reference {
id = "${data.azurerm_image.search.id}"
}
storage_os_disk {
name = "AZLXSPTDEVOPS01-OS"
caching = "ReadWrite"
create_option = "FromImage"
managed_disk_type = "Standard_LRS"
}
os_profile {
computer_name = "APPVM"
admin_username = "admin"
admin_password = "admin#2019"
}
os_profile_linux_config {
disable_password_authentication = false
}
}
The script below is working fine:
# Configure the Microsoft Azure Provider
provider "azurerm" {
subscription_id = "xxxx"
client_id = "xxxx"
client_secret = "xxxx"
tenant_id = "xxxx"
}
# Locate the existing custom/golden image
data "azurerm_image" "search" {
name = "AZDEVOPS01_Image"
resource_group_name = "RG-PLATFORM"
}
output "image_id" {
value = "/subscriptions/xxxxxx/resourceGroups/RG-EASTUS-SPT-PLATFORM/providers/Microsoft.Compute/images/AZLXDEVOPS01_Image"
}
# Create a Resource Group for the new Virtual Machine.
resource "azurerm_resource_group" "main" {
name = "RG-OPT-QA-TEST"
location = "eastus"
}
# Create a Subnet within the Virtual Network
resource "azurerm_subnet" "internal" {
name = "RG-Terraform-snet-in"
virtual_network_name = "RG-OPT-QA-Vnet"
resource_group_name = "${azurerm_resource_group.main.name}"
address_prefix = "10.100.2.0/24"
}
# Create a Network Security Group with some rules
resource "azurerm_network_security_group" "main" {
name = "RG-QA-Test-Dev-NSG"
location = "${azurerm_resource_group.main.location}"
resource_group_name = "${azurerm_resource_group.main.name}"
security_rule {
name = "allow_SSH"
description = "Allow SSH access"
priority = 100
direction = "Inbound"
access = "Allow"
protocol = "Tcp"
source_port_range = "*"
destination_port_range = "22"
source_address_prefix = "*"
destination_address_prefix = "*"
}
}
# Create a network interface for VMs and attach the PIP and the NSG
resource "azurerm_network_interface" "main" {
name = "NIC"
location = "${azurerm_resource_group.main.location}"
resource_group_name = "${azurerm_resource_group.main.name}"
network_security_group_id = "${azurerm_network_security_group.main.id}"
ip_configuration {
name = "nicconfig"
subnet_id = "${azurerm_subnet.internal.id}"
private_ip_address_allocation = "static"
private_ip_address = "${cidrhost("10.100.2.16/24", 4)}"
}
}
# Create a new Virtual Machine based on the Golden Image
resource "azurerm_virtual_machine" "vm" {
name = "AZLXDEVOPS01"
location = "${azurerm_resource_group.main.location}"
resource_group_name = "${azurerm_resource_group.main.name}"
network_interface_ids = ["${azurerm_network_interface.main.id}"]
vm_size = "Standard_DS12_v2"
delete_os_disk_on_termination = true
delete_data_disks_on_termination = true
storage_image_reference {
id = "${data.azurerm_image.search.id}"
}
storage_os_disk {
name = "AZLXDEVOPS01-OS"
caching = "ReadWrite"
create_option = "FromImage"
managed_disk_type = "Standard_LRS"
}
os_profile {
computer_name = "APPVM"
admin_username = "devopsadmin"
admin_password = "Cssladmin#2019"
}
os_profile_linux_config {
disable_password_authentication = false
}
}
Well, with the errors in your comment, I think you should set the subnet like this:
resource "azurerm_subnet" "internal" {
name = "RG-Terraform-snet-in"
virtual_network_name = "${azurerm_virtual_network.main.name}"
resource_group_name = "${azurerm_resource_group.main.name}"
address_prefix = "10.100.1.0/24"
}
As for the error with the virtual network: I do not see a virtual network with the name "RG-Vnet" in the code, as the error says. So you should check that everything in your code is as you want it.
To create an Azure VM from an image in the Azure Marketplace, you can follow the tutorial Create a complete Linux virtual machine infrastructure in Azure with Terraform. You do not need to create an image resource in your Terraform code. Just set it like this in the resource azurerm_virtual_machine (a marketplace image reference sketch follows the snippet):
storage_os_disk {
name = "myOsDisk"
caching = "ReadWrite"
create_option = "FromImage"
managed_disk_type = "Premium_LRS"
}
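For a marketplace image, the image itself is usually referenced through publisher/offer/sku rather than an image ID. A minimal, hedged example (the Canonical Ubuntu values are common placeholders, not taken from the question):

storage_image_reference {
  publisher = "Canonical"
  offer     = "UbuntuServer"
  sku       = "18.04-LTS"
  version   = "latest"
}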
Also, when you refer to other resources in the same code, you should do it like this:
virtual_network_name = "${azurerm_virtual_network.main.name}"
not just with the literal string "RG-Vnet"; that is not the correct way.

akka.cluster with two ASP.NET Web API apps on IIS

In our cluster we have five nodes, composed of:
2 seed nodes (backend)
1 worker
2 webapi on IIS
The cluster is joined, up and running; but when the second IIS instance sends its first message to the cluster via the router, it makes the whole cluster unreachable and dissociated.
In addition, the second IIS instance can't deliver any messages.
Here is my IIS config:
<hocon>
<![CDATA[
akka.loglevel = INFO
akka.log-config-on-start = off
akka.stdout-loglevel = INFO
akka.actor {
provider = "Akka.Cluster.ClusterActorRefProvider, Akka.Cluster"
deployment {
/Process {
router = round-robin-group
routees.paths = ["/user/Process"] # path of routee on each node
# nr-of-instances = 3 # max number of total routees
cluster {
enabled = on
allow-local-routees = off
use-role = Process
}
}
}
debug {
receive = on
autoreceive = on
lifecycle = on
event-stream = on
unhandled = on
}
}
akka.remote {
helios.tcp {
# transport-class = "Akka.Remote.Transport.Helios.HeliosTcpTransport, Akka.Remote"
# applied-adapters = []
# transport-protocol = tcp
port = 0
hostname = 172.16.1.8
}
log-remote-lifecycle-events = DEBUG
}
akka.cluster {
seed-nodes = [
"akka.tcp://ClusterActorSystem#172.16.1.8:2551",
"akka.tcp://ClusterActorSystem#172.16.1.8:2552"
]
roles = [Send]
auto-down-unreachable-after = 10s
# how often should the node send out gossip information?
gossip-interval = 1s
# discard incoming gossip messages if not handled within this duration
gossip-time-to-live = 2s
}
# http://getakka.net/docs/persistence/at-least-once-delivery
akka.persistence.at-least-once-delivery.redeliver-interval = 300s
# akka.persistence.at-least-once-delivery.redelivery-burst-limit =
# akka.persistence.at-least-once-delivery.warn-after-number-of-unconfirmed-attempts =
akka.persistence.at-least-once-delivery.max-unconfirmed-messages = 1000000
akka.persistence.journal.plugin = "akka.persistence.journal.sql-server"
akka.persistence.journal.publish-plugin-commands = on
akka.persistence.journal.sql-server {
class = "Akka.Persistence.SqlServer.Journal.SqlServerJournal, Akka.Persistence.SqlServer"
plugin-dispatcher = "akka.actor.default-dispatcher"
table-name = EventJournal
schema-name = dbo
auto-initialize = on
connection-string-name = "HubAkkaPersistence"
refresh-interval = 1s
connection-timeout = 30s
timestamp-provider = "Akka.Persistence.Sql.Common.Journal.DefaultTimestampProvider, Akka.Persistence.Sql.Common"
metadata-table-name = Metadata
}
akka.persistence.snapshot-store.plugin = "akka.persistence.snapshot-store.sql-server"
akka.persistence.snapshot-store.sql-server {
class = "Akka.Persistence.SqlServer.Snapshot.SqlServerSnapshotStore, Akka.Persistence.SqlServer"
plugin-dispatcher = "akka.actor.default-dispatcher"
connection-string-name = "HubAkkaPersistence"
schema-name = dbo
table-name = SnapshotStore
auto-initialize = on
}
]]>
</hocon>
Inside the Global.asax we create a new router to the cluster:
ClusterActorSystem = ActorSystem.Create("ClusterActorSystem");
var backendRouter =
ClusterActorSystem.ActorOf(
Props.Empty.WithRouter(FromConfig.Instance), "Process");
Send = ClusterActorSystem.ActorOf(
Props.Create(() => new Common.Actors.Send(backendRouter)),
"Send");
and here is our backend config:
<hocon><![CDATA[
akka.loglevel = INFO
akka.log-config-on-start = on
akka.stdout-loglevel = INFO
akka.actor {
provider = "Akka.Cluster.ClusterActorRefProvider, Akka.Cluster"
debug {
receive = on
autoreceive = on
lifecycle = on
event-stream = on
unhandled = on
}
}
akka.remote {
helios.tcp {
# transport-class = "Akka.Remote.Transport.Helios.HeliosTcpTransport, Akka.Remote"
# applied-adapters = []
# transport-protocol = tcp
#
# seed-node ports 2551 and 2552
# non-seed-node port 0
port = 2551
hostname = 172.16.1.8
}
log-remote-lifecycle-events = INFO
}
akka.cluster {
seed-nodes = [
"akka.tcp://ClusterActorSystem#172.16.1.8:2551",
"akka.tcp://ClusterActorSystem#172.16.1.8:2552"
]
roles = [Process]
auto-down-unreachable-after = 10s
}
]]></hocon>
The issue is present using both Akka 1.1 and Akka 1.2.
UPDATE
I have found that the issue is related to our load balancer (NetScaler): if I call each IIS instance directly, it works fine. If called through the balancer, I face the reported issue; the balancer is transparent (it only adds some headers to the request). What can I check to solve this issue?
Finally I found the problem: we are using Akka.Persistence, which requires a distinct PersistenceId value for each IIS instance.
