Unable to bootstrap Nomad cluster with multi-region setup: "Error bootstrapping: Unexpected response code: 500 (No path to region)"

I am trying to set up Nomad ACLs across a multi-region, multi-datacenter cluster.
In the server stanza I added the following on all server nodes:
server {
  enabled              = true
  bootstrap_expect     = 2
  encrypt              = "XXX-same-on-all-servers-XXX"
  authoritative_region = "HOME-DC"

  server_join {
    retry_join = ["server1", "server2", "server3"]
  }
}

acl {
  enabled = true
}
After restarting all the servers, tailing the logs shows:
2021-02-01T11:38:04.156Z [WARN] nomad.rpc: no path found to region: region=HOME-DC
2021-02-01T11:38:04.157Z [ERROR] nomad: failed to fetch namespaces from authoritative region: error="No path to region"
And this is what I get if I run:
nomad acl bootstrap -address=$NOMAD_ADDR
Error bootstrapping: Unexpected response code: 500 (No path to region)
The docs say to set the replication_token value in the acl stanza, but I am not clear on how to do that. Does it have to be generated somehow, like the encrypt key? If yes, then how? Reference

Authoritative region: authoritative_region is not required and should be removed. After removing it, running nomad acl bootstrap will succeed.
Non-authoritative regions: authoritative_region is always required, and replication_token is required too. The replication_token can be your authoritative region's bootstrap token, or another token you create with fewer capabilities. A config sketch follows.
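As a minimal sketch of a non-authoritative region's server config (region names and the token value are placeholders; the replication_token field of the acl stanza is the one the docs refer to):
server {
  enabled              = true
  bootstrap_expect     = 2
  encrypt              = "XXX-same-on-all-servers-XXX"
  authoritative_region = "HOME-DC"
}

acl {
  enabled           = true
  replication_token = "<bootstrap-or-replication-token-from-HOME-DC>"
}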

Related

RedisCommandTimeoutException when connecting a Micronaut lambda to ElastiCache

I am trying to create a Lambda using Micronaut 2 that connects to ElastiCache.
I have used the redis-lettuce dependency in the project with the following configuration; in-transit encryption is enabled in the ElastiCache config.
redis:
  uri: redis://{aws master node endpoint}
  password: {password}
  tls: true
  ssl: true
  io-thread-pool-size: 5
  computation-thread-pool-size: 4
I am getting the exception below:
io.lettuce.core.RedisCommandTimeoutException: Command timed out after 1 minute(s)
    at io.lettuce.core.ExceptionFactory.createTimeoutException(ExceptionFactory.java:51)
    at io.lettuce.core.LettuceFutures.awaitOrCancel(LettuceFutures.java:119)
    at io.lettuce.core.FutureSyncInvocationHandler.handleInvocation(FutureSyncInvocationHandler.java:75)
    at io.lettuce.core.internal.AbstractInvocationHandler.invoke(AbstractInvocationHandler.java:79)
    at com.sun.proxy.$Proxy22.set(Unknown Source)
    at hello.world.function.HttpBookRedisHandler.execute(HttpBookRedisHandler.java:29)
    at hello.world.function.HttpBookRedisHandler.execute(HttpBookRedisHandler.java:16)
    at io.micronaut.function.aws.MicronautRequestHandler.handleRequest(MicronautRequestHandler.java:73)
I have tried Spring Cloud Function on the same network (literally the same Lambda) with the same ElastiCache setup, and it works fine.
Any direction that can help me debug this issue would be appreciated.
This might be late.
The first thing to mention here is that an ElastiCache cluster can only be accessed from within its VPC, so the Lambda has to run inside that VPC; if you want to access the cluster from the internet, a NAT gateway needs to be enabled.
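A hedged illustration of that point: attaching an existing function to the cluster's VPC with the AWS CLI (the function name, subnet IDs, and security group ID below are placeholders):
aws lambda update-function-configuration \
  --function-name my-micronaut-fn \
  --vpc-config SubnetIds=subnet-0aaa,subnet-0bbb,SecurityGroupIds=sg-0ccc
The security group must also be allowed inbound access on the Redis port (6379) by the ElastiCache cluster's security group.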

Google Cloud Monitoring Ruby client permission issue

I am following the Ruby code sample to add a custom metric to Stackdriver; however, I keep getting a permission denied error.
client = Google::Cloud::Monitoring::Metric.new
project_name = Google::Cloud::Monitoring::V3::MetricServiceClient.project_path project_id

descriptor = Google::Api::MetricDescriptor.new(
  type:        "custom.googleapis.com/my_metric#{random_suffix}",
  metric_kind: Google::Api::MetricDescriptor::MetricKind::GAUGE,
  value_type:  Google::Api::MetricDescriptor::ValueType::DOUBLE,
  description: "This is a simple example of a custom metric."
)

result = client.create_metric_descriptor project_name, descriptor
The error I get is "Google::Gax::PermissionDeniedError (GaxError RPC failed, caused by 7: Permission monitoring.metricDescriptors.create denied (or the resource may not exist).)"
The environment variable GOOGLE_APPLICATION_CREDENTIALS is set, and it works fine for the Google Cloud Storage code below:
storage = Google::Cloud::Storage.new project: project_id

# Make an authenticated API request
storage.buckets.each do |bucket|
  puts bucket.name
end
At this point, I don't know what the problem is. Do I need to set up a different credential for Cloud Monitoring?
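For what it's worth, the error names the exact missing permission, monitoring.metricDescriptors.create, which is included in the roles/monitoring.metricWriter role; so one hedged lead is that the account behind GOOGLE_APPLICATION_CREDENTIALS simply lacks that role. Assuming a hypothetical project and service account, the grant would look like:
gcloud projects add-iam-policy-binding my-project \
  --member="serviceAccount:my-sa@my-project.iam.gserviceaccount.com" \
  --role="roles/monitoring.metricWriter"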

Terraform azurerm_virtual_machine_extension error "extension operations are disallowed"

I have written a Terraform template that creates an Azure Windows VM. I need to configure the VM to enable PowerShell Remoting so that the release pipeline can execute PowerShell scripts. After the VM is created I can RDP to the VM and do everything needed to enable PowerShell Remoting, but it would be ideal if I could script all of that so it could be executed in a release pipeline. There are two things that prevent that.
The first, and the topic of this question, is that I have to run "WinRM quickconfig". I have the template working such that when I RDP to the VM after creation and run "WinRM quickconfig", I receive the following response:
WinRM service is already running on this machine.
WinRM is not set up to allow remote access to this machine for management.
The following changes must be made:
Configure LocalAccountTokenFilterPolicy to grant administrative rights remotely to local users.
Make these changes [y/n]?
I want to configure the VM in Terraform so LocalAccountTokenFilterPolicy is set and it becomes unnecessary to RDP to the VM to run "WinRM quickconfig". After some research it appeared I might be able to do that using the azurerm_virtual_machine_extension resource. I added this to my template:
resource "azurerm_virtual_machine_extension" "vmx" {
name = "hostname"
location = "${var.location}"
resource_group_name = "${var.vm-resource-group-name}"
virtual_machine_name = "${azurerm_virtual_machine.vm.name}"
publisher = "Microsoft.Azure.Extensions"
type = "CustomScript"
type_handler_version = "2.0"
settings = <<SETTINGS
{
# "commandToExecute": "powershell Set-ItemProperty -Path 'HKLM:\\SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\Policies\\System' -Name 'LocalAccountTokenFilterPolicy' -Value 1 -Force"
}
SETTINGS
}
When I apply this, I get the error:
Error: compute.VirtualMachineExtensionsClient#CreateOrUpdate: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status=<nil> Code="OperationNotAllowed" Message="This operation cannot be performed when extension operations are disallowed. To allow, please ensure VM Agent is installed on the VM and the osProfile.allowExtensionOperations property is true."
I couldn't find any Terraform documentation that addresses how to set the allowExtensionOperations property to true. On a whim, I tried adding an "allow_extension_operations" property to the os_profile block in the azurerm_virtual_machine resource, but it was rejected as invalid. I also tried adding it to the os_profile_windows_config block, and it isn't valid there either.
I found a statement on Microsoft's documentation regarding the osProfile.allowExtensionOperations property that says:
"This may only be set to False when no extensions are present on the virtual machine."
https://learn.microsoft.com/en-us/dotnet/api/microsoft.azure.management.compute.models.osprofile.allowextensionoperations?view=azure-dotnet
This implies to me that the property is true by default, but it doesn't actually say that, and it certainly isn't acting like that. Is there a way in Terraform to set osProfile.allowExtensionOperations to true?
Running into the same issue adding extensions using Terraform: I created a Windows 2016 custom image, with provider "azurerm" version = "2.0.0" and Terraform 0.12.24.
Terraform apply error:
compute.VirtualMachineExtensionsClient#CreateOrUpdate: Failure sending request: StatusCode=0
-- Original Error: autorest/azure: Service returned an error.
Status=<nil>
Code="OperationNotAllowed"
Message="This operation cannot be performed when extension operations are disallowed. To allow, please ensure VM Agent is installed on the VM and the osProfile.allowExtensionOperations property is true."
I ran into the same error; the solution depends on two things here.
You have to use provider "azurerm" version = "2.5.0", and you have to set the os_profile_windows_config block (see below) in the virtual machine resource as well, so that Terraform will consider the extensions you are passing. This fixed my errors.
os_profile_windows_config {
  provision_vm_agent = true
}
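For context, a minimal sketch of where that block sits (all other arguments elided; resource and variable names are placeholders):
resource "azurerm_virtual_machine" "vm" {
  # name, location, resource_group_name, network_interface_ids,
  # storage_os_disk, os_profile, etc. go here

  os_profile_windows_config {
    provision_vm_agent = true
  }
}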

Exported resources not working with Puppet

I've written a module to set up the Prometheus node_exporter (here called ni_trending). Now I need to add all of the nodes' FQDNs to a simple file, so declaring an exported resource makes a lot of sense here. PuppetDB is configured and working.
Here's the declaration, within my config.pp:
@@node_exporter { "${listen_address}":
  hostname    => $ni_trending::hostname,
  listen_port => $ni_trending::listen_port,
}
When the module is applied on the node I get the following error:
Error: Could not retrieve catalog from remote server: Error 500 on SERVER: Server Error: Evaluation Error: Error while evaluating a Resource Statement, Invalid export in Class[Ni_trending]: {} is not a resource on node ydixken-dev01.berlin.ni
Within the ni_trending module I'm collecting all the exported resources via:
Node_exporter <<| |>>
What is missing here?
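For comparison, the canonical export/collect pair using a built-in resource type, as a minimal sketch (the path, content, and tag are placeholders, not taken from the module above):
# On every node: export a target fragment named after the node's FQDN
@@file { "/etc/prometheus/targets/${facts['networking']['fqdn']}":
  ensure  => file,
  content => "${facts['networking']['fqdn']}:9100\n",
  tag     => 'node_exporter',
}

# On the collecting node: realize everything exported with that tag
File <<| tag == 'node_exporter' |>>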

How to set the "cluster" property in prisma.yml

Thanks in advance for reading my question. I have just started using GraphQL and Prisma, following this tutorial.
I get the following error when deploying the Prisma database service:
Error: No cluster set. Please set the "cluster" property in your prisma.yml
at /Users/judy/howtographql/server/node_modules/graphql-config-extension-prisma/src/index.ts:89:11
at step (/Users/judy/howtographql/server/node_modules/graphql-config-extension-prisma/dist/index.js:40:23)
at Object.next (/Users/judy/howtographql/server/node_modules/graphql-config-extension-prisma/dist/index.js:21:53)
at fulfilled (/Users/judy/howtographql/server/node_modules/graphql-config-extension-prisma/dist/index.js:12:58)
at <anonymous>
error Command failed with exit code 1.
ERROR: "playground" exited with 1.
error Command failed with exit code 1.
I looked over the tutorial and found nothing about how to set the cluster. How do I fix this problem?
The default prisma.yml is:
# the name for the service (will be part of the service's HTTP endpoint)
service: hackernews-graphql-js
# the cluster and stage the service is deployed to
stage: dev
# to disable authentication:
# disableAuth: true
secret: mysecret123
# the file path pointing to your data model
datamodel: datamodel.graphql
It could be that you have entered an incorrect endpoint address; see https://github.com/prisma/graphql-config-extension-graphcool/issues/8.
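As a point of reference (a sketch, not taken from the tutorial): later Prisma 1 CLI versions replaced the cluster/stage pair with a single endpoint property, so a working prisma.yml might look like the following, where the URL is a placeholder for your own Prisma server:
# the endpoint encodes server, service name, and stage,
# replacing the old "cluster" property
endpoint: http://localhost:4466/hackernews-graphql-js/dev
datamodel: datamodel.graphql
secret: mysecret123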
