What ACL rules are necessary to request Consul metrics?

I've recently activated ACLs in Consul and everything seems to be accessible except for the metrics endpoint (/v1/agent/metrics).
I've tried all kinds of combinations of rules in the policy I'm using to generate the token used in the curl request (see below), but none works except for the bootstrap token. However, I don't think it's right to use that token for metrics, as it has too many permissions.
curl -H 'X-Consul-Token: <redacted>' https://consul-url.com/v1/agent/metrics
Does anyone know which rules to use in the ACL policy so I can access metrics?

Just before pulling my last hair out, I found a working solution. I couldn't find any explicit reference to it, but I've tested it and it works, so I hope it helps someone. Below are the rules to set in the policy used to create a token that can read metrics:
acl = "read"
keyring = "read"
operator = "read"
query_prefix "" {
  policy = "read"
}
service_prefix "" {
  policy = "read"
}
session_prefix "" {
  policy = "read"
}
agent_prefix "" {
  policy = "read"
}
event_prefix "" {
  policy = "read"
}
key_prefix "" {
  policy = "read"
}
node_prefix "" {
  policy = "read"
}
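For reference, the policy and token can be created with the consul CLI roughly as follows (a sketch: the file name and the `metrics-read` policy name are placeholders, and the `consul acl` commands require a token with ACL write permission, e.g. the bootstrap token, for this one-time setup):

```shell
# Save the read-only rules above to a policy file
cat > metrics-policy.hcl <<'EOF'
acl      = "read"
keyring  = "read"
operator = "read"
agent_prefix ""   { policy = "read" }
event_prefix ""   { policy = "read" }
key_prefix ""     { policy = "read" }
node_prefix ""    { policy = "read" }
query_prefix ""   { policy = "read" }
service_prefix "" { policy = "read" }
session_prefix "" { policy = "read" }
EOF

# Register the policy, then mint a token from it (run against a live agent):
# consul acl policy create -name metrics-read -rules @metrics-policy.hcl
# consul acl token create -description "metrics reader" -policy-name metrics-read
```

The resulting token's SecretID is what goes in the X-Consul-Token header of the curl request above.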


terraform windows server 2016 in Azure and domain join issue logging in to domain with network level authentication error message

I successfully got a windows server 2016 to come up and join the domain. However, when I go to remote desktop login it throws an error about network level authentication. Something about domain controller cannot be contacted to perform Network Level Authentication (NLA).
I saw a video on workarounds at https://www.bing.com/videos/search?q=requires+network+level+authentication+error&docid=608000415751557665&mid=8CE580438CBAEAC747AC8CE580438CBAEAC747AC&view=detail&FORM=VIRE.
Is there a way to address this with terraform and up front instead?
To join domain I am using:
name                 = "domjoin"
virtual_machine_id   = azurerm_windows_virtual_machine.vm_windows_vm.id
publisher            = "Microsoft.Compute"
type                 = "JsonADDomainExtension"
type_handler_version = "1.3"

settings = <<SETTINGS
  {
    "Name": "mydomain.com",
    "User": "mydomain.com\\myuser",
    "Restart": "true",
    "Options": "3"
  }
SETTINGS

protected_settings = <<PROTECTED_SETTINGS
  {
    "Password": "${var.admin_password}"
  }
PROTECTED_SETTINGS

depends_on = [azurerm_windows_virtual_machine.vm_windows_vm]
Is there an option I should add in this domjoin code perhaps?
I can log in with my local admin account just fine. I see the server is connected to the domain. A nslookup on the domain shows an ip address that was configured to be reachable by firewall rules, so it can reach the domain controller.
It seems there are some settings that could help out (see here); possibly all that is needed is:
"EnableCredSspSupport": "true" inside your domjoin settings block.
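For illustration, the settings block from the question would then look like this (an unvalidated sketch, using the same placeholder domain and user as above):

```hcl
settings = <<SETTINGS
  {
    "Name": "mydomain.com",
    "User": "mydomain.com\\myuser",
    "Restart": "true",
    "Options": "3",
    "EnableCredSspSupport": "true"
  }
SETTINGS
```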
You might also need to do something with the registry on the server side, which can be done by using remote-exec.
For example something like:
resource "azurerm_windows_virtual_machine" "example" {
  name                  = "example-vm"
  location              = azurerm_resource_group.example.location
  resource_group_name   = azurerm_resource_group.example.name
  network_interface_ids = [azurerm_network_interface.example.id]
  size                  = "Standard_DS1_v2"
  admin_username        = "adminuser"
  admin_password        = "SuperSecurePassword1234!"
  provision_vm_agent    = true

  source_image_reference {
    publisher = "MicrosoftWindowsServer"
    offer     = "WindowsServer"
    sku       = "2019-Datacenter"
    version   = "latest"
  }

  os_disk {
    name                 = "example-os-disk"
    caching              = "ReadWrite"
    storage_account_type = "Standard_LRS"
  }

  provisioner "remote-exec" {
    inline = [
      "echo Updating Windows...",
      "powershell.exe -Command \"& {Get-WindowsUpdate -Install}\"",
      "echo Done updating Windows."
    ]

    connection {
      type     = "winrm"
      host     = self.public_ip_address
      user     = "adminuser"
      password = "SuperSecurePassword1234!"
      timeout  = "30m"
    }
  }
}
In order to set the correct keys in the registry you might need something like this inside the remote-exec block (I have not validated this code) :
Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp' -Name 'SecurityLayer' -Value 0
To keep the Terraform config cleaner, I would recommend using templates for the PowerShell script (see here).
Hope this helps

Prevent KeyVault from updating secrets using Terraform

I'm building a terraform template to create Azure resources including Keyvault Secrets. The customer Subscription policy doesn't allow anyone to update/delete/view keyvault secrets.
If I run terraform apply for the first time, it works perfectly. However, running the same template again gives the following error:
Error updating Key Vault "####" (Resource Group "####"): keyvault.VaultsClient#Update: Failure responding to request: StatusCode=403 --
Original Error: autorest/azure: Service returned an error. Status=403 Code="RequestDisallowedByPolicy" Message="Resource '###' was disallowed by policy. Policy identifiers: '[{\"policyAssignment\":{\"name\":\"###nis-deny-keyvault-acl\", ...
on ..\..\modules\azure\keyvault\main.tf line 15, in resource "azurerm_key_vault" "keyvault":
15: resource "azurerm_key_vault" "keyvault" {
How can I get my CI/CD working when that means terraform apply will be run continuously?
Is there a way to get past this policy in terraform?
Is there a way to prevent terraform from updating the Key Vault once it is created (other than locking the resource)?
Here is the Keyvault module:
variable "keyvault_id" {
  type = string
}

variable "secrets" {
  type = map(string)
}

locals {
  secret_names = keys(var.secrets)
}

resource "azurerm_key_vault_secret" "secret" {
  count        = length(var.secrets)
  name         = local.secret_names[count.index]
  value        = var.secrets[local.secret_names[count.index]]
  key_vault_id = var.keyvault_id
}

data "azurerm_key_vault_secret" "secrets" {
  count        = length(var.secrets)
  depends_on   = [azurerm_key_vault_secret.secret]
  name         = local.secret_names[count.index]
  key_vault_id = var.keyvault_id
}

output "keyvault_secret_attributes" {
  value = [for i in range(length(azurerm_key_vault_secret.secret.*.id)) : data.azurerm_key_vault_secret.secrets[i]]
}
And here is the module from my template:
locals {
  secrets_map = {
    appinsights-key     = module.app_insights.app_insights_instrumentation_key
    storage-account-key = module.storage_account.primary_access_key
  }

  output_secret_map = {
    for secret in module.keyvault_secrets.keyvault_secret_attributes :
    secret.name => secret.id
  }
}

module "keyvault" {
  source              = "../../modules/azure/keyvault"
  keyvault_name       = local.kv_name
  resource_group_name = azurerm_resource_group.app_rg.name
}

module "keyvault_secrets" {
  source      = "../../modules/azure/keyvault-secret"
  keyvault_id = module.keyvault.keyvault_id
  secrets     = local.secrets_map
}

module "app_service_keyvault_access_policy" {
  source                  = "../../modules/azure/keyvault-policy"
  vault_id                = module.keyvault.keyvault_id
  tenant_id               = module.app_service.app_service_identity_tenant_id
  object_ids              = module.app_service.app_service_identity_object_ids
  key_permissions         = ["get", "list"]
  secret_permissions      = ["get", "list"]
  certificate_permissions = ["get", "list"]
}
Using Terraform for provisioning and managing a key vault with those kinds of limitations sounds like a bad idea. Terraform's main idea is to monitor the state of your resources; if it is not allowed to read a resource, it becomes pretty useless. Your problem is not even that Terraform is trying to update something: it fails because it wants to check the current state of your resource and cannot.
If your goal is just to create secrets in a key vault, I would just use the az keyvault commands, like this:
az login
az keyvault secret set --name mySecret --vault-name myKeyvault --value mySecretValue
An optimal solution would of course be for the service principal that you use for executing Terraform commands to have sufficient rights to perform the actions it was created for.
I know this is a late answer, but for future visitors:
The pipeline running the Terraform Plan and Apply will need to have proper access to the key vault.
So, if you are running your CI/CD from Azure Pipelines, you would typically have a service connection that your pipeline uses for authentication.
The service connection you use for Terraform is most likely based on a service principal that has Contributor rights (at least at resource group level) for it to provision anything at all.
If that is the case, then you must add an access policy giving that same service principal (use the service principal's Enterprise Object Id) at least list, get, and set permissions for secrets.
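Granting that access policy can also be done up front with the Azure CLI (a sketch, not validated here: the vault name and object id are placeholders, and it assumes an authenticated az session):

```shell
# Grant the pipeline's service principal get/list/set on secrets
az keyvault set-policy --name myKeyvault \
  --object-id <service-principal-enterprise-object-id> \
  --secret-permissions get list set
```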

Disable Google Smart Lock in chrome password settings by password manager

I installed the TrueKey and Dashlane password managers, which disable Google Smart Lock.
If we go and check chrome://settings/passwords, it shows the Google Smart Lock feature in a disabled state and says TrueKey (or Dashlane) is controlling this setting.
I want to know how they disable this setting without the end user knowing about it.
True key does show a permission warning to "Change your privacy related settings".
It uses the chrome.privacy api: https://developer.chrome.com/extensions/privacy
Add the "privacy" permission to the manifest.
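In manifest.json that is just an extra entry in permissions; a minimal sketch (the name and version fields are illustrative):

```json
{
  "name": "example-password-manager",
  "version": "1.0",
  "manifest_version": 2,
  "permissions": ["privacy"]
}
```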
Then, you can disable Chrome's password manager like this:
chrome.privacy.services.passwordSavingEnabled.get({}, function({ levelOfControl }) {
  if (levelOfControl == "controllable_by_this_extension") {
    // Setting the value to false turns Chrome's password saving off
    chrome.privacy.services.passwordSavingEnabled.set({ value: false }, function() {
      if (chrome.runtime.lastError == null) {
        console.log("success")
      } else {
        console.log("error:", chrome.runtime.lastError)
      }
    })
  }
})

firebase data validation failing

First time working with Firebase on a new project and I'm getting a permission denied message when writing an activity event when I include a validation rule.
The validation rule looks like:
"activity": {
  ".read": "auth != null",
  ".write": "auth != null",
  ".validate": "newData.hasChildren(['user'])",
  ".indexOn": ["when"]
}
On a new activity event, I push a new entry, grab the token ID, and make it (for now) part of the data being pushed. Watching this in debug (using a custom token authentication system), this is what I see. The JSON pushed has a "user" entry that is the GUID of the auth user, so I'm not sure why it's failing. I spaced out the JSON text.
utility.js (line 1675)
FIREBASE: Attempt to write {
"id":"-K4oomuOpaY4K2aGUYZA",
"imp":false,
"text":"xxx.",
"user":"0648480c-xxx",
"when":1449363973059
} to /activity/## with auth={"uid":"0648480c-xxx","name":"Greg Merideth"}
/activity:.write: "auth != null" => true
/activity:.validate: "newData.hasChildren(['user'])" => false
FIREBASE: Validation failed. firebase.js (line 195)
FIREBASE: Write was denied firebase.js (line 195)
I even tried changing the rule to ".validate": "newData.hasChild('user')" with the same end result.
Is newData looking at the inbound packet or my "auth" packet?
Update (from the comments)
The addition of a new item calls a function passing in the fbActivity handler which then calls:
var message = fbActivity.push({
  id: user.fn(),
  text: t.val(),
  imp: false,
  user: user.uid(),
  when: new Date().getTime()
});
to push the new entry. We're not using fb.timestamp, as our server runs 3 seconds behind fb's, so our timestamps come out weird.
I'm guessing that you're calling push() to add a new child under activity. In that case, your rules are missing the extra level that is generated by push(): your .validate rule is checking the activity list itself for a user child, not the pushed entry. Keep .read, .write, and .indexOn on /activity and move the validation down one level, using a $ variable for the generated key:
"activity": {
  ".read": "auth != null",
  ".write": "auth != null",
  ".indexOn": ["when"],
  "$activityid": {
    ".validate": "newData.hasChildren(['user'])"
  }
}
If that is the case, please take time to read the Firebase security guide, which explains this and many other useful bits about the language.

How do you restrict access to certain paths using Lighttpd?

I would like to restrict access to my /admin URL to internal IP addresses only. Anyone on the open Internet should not be able to log in to my web site. Since I'm using Lighttpd, my first thought was to use mod_rewrite to redirect any outside request for the /admin URL back to my home page, but I don't know much about Lighty, and the docs don't say much about detecting a 192.168.0.0 IP range.
Try this:
$HTTP["remoteip"] == "192.168.0.0/16" {
  # your rules here
}
Example from the docs:
# deny access to www.example.org for all users who
# are not in the 10.0.0.0/8 network
$HTTP["host"] == "www.example.org" {
  $HTTP["remoteip"] != "10.0.0.0/8" {
    url.access-deny = ( "" )
  }
}
This worked for me:
$HTTP["remoteip"] != "192.168.1.0/24" {
  $HTTP["url"] =~ "^/intranet/" {
    url.access-deny = ( "" )
  }
}
Note that != worked where == did not.
