Valid service account id not being accepted for workflow service account - google-workflows

I am attempting to deploy a workflow using the https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/workflows_workflow terraform resource and it's failing with this error:
Error: Error creating Workflow: googleapi: Error 400: request contains errors
Details:
[
  {
    "@type": "type.googleapis.com/google.rpc.BadRequest",
    "fieldViolations": [
      {
        "description": "The referenced service account is not user-managed, please verify the correctness of the service account name",
        "field": "workflow.service_account"
      }
    ]
  }
]
I can see from running terraform plan that this is the definition of my workflow:
+ resource "google_workflows_workflow" "my_first_workflow" {
+ create_time = (known after apply)
+ description = "Magic"
+ id = (known after apply)
+ name = "myworkflow"
+ name_prefix = (known after apply)
+ project = "myproject"
+ region = "europe-west4"
+ revision_id = (known after apply)
+ service_account = "projects/myproject/serviceAccounts/service-account"
+ source_contents = <<-EOT
- postCallBigqueryStoredProcedure:
call: http.post
args:
url: https://bigquery.googleapis.com/bigquery/v2/projects/myproject/jobs
body: {
"configuration": {
"query": {
"query": "call mydataset.mystoredprocedure()"
}
}
}
EOT
+ state = (known after apply)
+ update_time = (known after apply)
}
The error message is complaining about the service account; however, I'm certain that the service account named here (projects/myproject/serviceAccounts/service-account) is valid and exists, so I'm clueless as to why I'm getting this error. Googling the error message hasn't turned up anything useful.
Does anyone know what might be the problem?

You mentioned that the service account is valid and it exists. When you are referencing it, are you including the full account name, including the details after the '@', i.e. 7**********-compute@developer.gserviceaccount.com?
I was able to replicate this behaviour by using either an incorrect name or a service account name without the full email address.
You must use the complete email address of your service account. Here's a sample of a correct format. I'm currently using Terraform v0.14.7:
service_account = "projects/project_id/serviceAccounts/7**********-compute#developer.gserviceaccount.com"

Related

terraform windows server 2016 in Azure and domain join issue logging in to domain with network level authentication error message

I successfully got a Windows Server 2016 VM to come up and join the domain. However, when I try to log in via Remote Desktop, it throws an error about Network Level Authentication: something about the domain controller not being contactable to perform Network Level Authentication (NLA).
I saw a video on workarounds at https://www.bing.com/videos/search?q=requires+network+level+authentication+error&docid=608000415751557665&mid=8CE580438CBAEAC747AC8CE580438CBAEAC747AC&view=detail&FORM=VIRE.
Is there a way to address this with Terraform, up front, instead?
To join the domain I am using:
name = "domjoin"
virtual_machine_id = azurerm_windows_virtual_machine.vm_windows_vm.id
publisher = "Microsoft.Compute"
type = "JsonADDomainExtension"
type_handler_version = "1.3"
settings = <<SETTINGS
{
"Name": "mydomain.com",
"User": "mydomain.com\\myuser",
"Restart": "true",
"Options": "3"
}
SETTINGS
protected_settings = <<PROTECTED_SETTINGS
{
"Password": "${var.admin_password}"
}
PROTECTED_SETTINGS
depends_on = [ azurerm_windows_virtual_machine.vm_windows_vm ]
Is there an option I should add in this domjoin code perhaps?
I can log in with my local admin account just fine, and I see the server is connected to the domain. An nslookup on the domain shows an IP address that was configured to be reachable by firewall rules, so it can reach the domain controller.
It seems like there are some settings that could help out (see here); possibly all that is needed is "EnableCredSspSupport": "true" inside your domjoin settings block.
You might also need to do something with the registry on the server side, which can be done by using remote-exec.
For example something like:
resource "azurerm_windows_virtual_machine" "example" {
name = "example-vm"
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
network_interface_ids = [azurerm_network_interface.example.id]
vm_size = "Standard_DS1_v2"
storage_image_reference {
publisher = "MicrosoftWindowsServer"
offer = "WindowsServer"
sku = "2019-Datacenter"
version = "latest"
}
storage_os_disk {
name = "example-os-disk"
caching = "ReadWrite"
create_option = "FromImage"
managed_disk_type = "Standard_LRS"
}
os_profile {
computer_name = "example-vm"
admin_username = "adminuser"
admin_password = "SuperSecurePassword1234!"
}
os_profile_windows_config {
provision_vm_agent = true
}
provisioner "remote-exec" {
inline = [
"echo Updating Windows...",
"powershell.exe -Command \"& {Get-WindowsUpdate -Install}\"",
"echo Done updating Windows."
]
connection {
type = "winrm"
user = "adminuser"
password = "SuperSecurePassword1234!"
timeout = "30m"
}
}
}
In order to set the correct keys in the registry you might need something like this inside the remote-exec block (I have not validated this code):
Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp' -Name 'SecurityLayer' -Value 0
To make the Terraform config cleaner, I would recommend using templates for the PowerShell script, see here.
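As a rough sketch of that approach (the template file name and its variable are assumptions), the PowerShell could live in a template file that Terraform renders before handing it to the provisioner:

locals {
  # domain_join.ps1.tftpl is an assumed template file containing the
  # Set-ItemProperty commands, with ${security_layer} filled in by Terraform.
  domain_join_script = templatefile("${path.module}/domain_join.ps1.tftpl", {
    security_layer = 0
  })
}

The rendered script can then be passed to the provisioner, e.g. inline = [local.domain_join_script].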
Hope this helps

How to access custom fields of invoices using Zoho Deluge?

I'm trying to access the custom fields of my invoices and I'm getting the following error:
Any ideas on how to solve this?
To help debug the issue:
display the values in invoiceID and test2 and make sure they are not empty and are strings rather than numbers:
info invoiceID;
info test2;
Try running the same url in a browser and/or curl to see if the behavior is the same or successful.
Try putting a 'try/catch' around the invokeurl cmd. Sometimes (but not always) the catch(err) will report more specific error information.
try
{
    webhooktest = invokeurl
    [
        url: "https://example.org?invoice_id=" + invoiceID + "&tips_ncf=" + test2
    ];
    info "Webhooktest: [" + webhooktest + "]";
}
catch (err)
{
    info err;
}

Event bus name not registering when attempting to connect eventbridge and lambda using terraform

I am attempting to create an EventBridge setup that will get notifications from Datadog and trigger a Lambda function to store the notifications in an S3 bucket. This is all going to be done through Terraform.
The following is the code I have written:
###################################
# Eventbridge Integration to Lambda
###################################

data "aws_cloudwatch_event_source" "datadog_event_source" {
  name_prefix = var.spog_event_bus_name # aws.partner/datadog.com/my_eventbus
}

resource "aws_cloudwatch_event_bus" "datadog_event_bus" {
  name              = data.aws_cloudwatch_event_source.datadog_event_source.name
  event_source_name = data.aws_cloudwatch_event_source.datadog_event_source.name
}

resource "aws_cloudwatch_event_rule" "spog_cloudwatch_rule" {
  name           = "spog_cloudwatch_rule"
  event_bus_name = aws_cloudwatch_event_bus.datadog_event_bus.name

  event_pattern = <<EOF
{
  "account": [
    "${var.aws_account_id}"
  ]
}
EOF
}

resource "aws_cloudwatch_event_target" "spog_cloudwatch_event_target" {
  rule      = aws_cloudwatch_event_rule.spog_cloudwatch_rule.name
  target_id = aws_lambda_function.write_datadog_events.function_name
  arn       = aws_lambda_function.write_datadog_events.arn
}

resource "aws_lambda_permission" "spog_allow_cloudwatch" {
  statement_id  = "AllowExecutionFromCloudWatch"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.write_datadog_events.function_name
  principal     = "events.amazonaws.com"
  source_arn    = aws_cloudwatch_event_rule.spog_cloudwatch_rule.arn
}
Here, the write_datadog_events is the lambda function to store the notifications.
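(The Lambda itself is defined elsewhere; purely for context, a minimal placeholder of what such a resource might look like is shown below. The filename, handler, runtime, and role are assumptions, not part of the original configuration.)

resource "aws_lambda_function" "write_datadog_events" {
  function_name = "write_datadog_events"
  filename      = "write_datadog_events.zip" # placeholder artifact
  handler       = "index.handler"            # placeholder handler
  runtime       = "python3.9"                # placeholder runtime
  role          = aws_iam_role.write_datadog_events.arn # placeholder role
}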
The build succeeds, but when I try to apply the plan, I get an error saying "validationException: EventBus name starting with 'aws.' is not valid". From inspecting the AWS console, it seems that the EventBridge rule is created successfully, but the event bus name is not registered properly on it; the event bus name only says default. I thought that by setting the event_bus_name value within aws_cloudwatch_event_rule it would not be default, but I was wrong.
Can anyone help me out with this? The Lambda function itself is not wrong (I ran a test case on it); the core issue seems to be EventBridge not registering my event bus name. Also, although the rule is generated, the event bus is never generated.
Thanks for your help.

google-api-nodejs-client - Service Account credentials authentication issues

I am trying to use the google-api-nodejs library to manage some resources in the Google Campaign Manager API.
I have confirmed that we currently have a project configured, and that this project has the Google Campaign Manager API enabled (see screenshot at the bottom).
I have tried several ways of authenticating myself (particularly API keys, OAuth2, and service account credentials). This question will focus on using a service account for authentication purposes.
Now, I have generated a new service account keyfile (see screenshot at the bottom), and I configured my code as follows, following the service-account-credentials section of the library's repo. I've also extended the auth scope to include the necessary scope according to the API docs for this endpoint.
import { assert } from "chai";
import { google } from "googleapis";
it("can query userProfiles using service account keyfile", async () => {
try {
const auth = new google.auth.GoogleAuth({
keyFile:
"/full-path-to/credentials-service-account.json",
scopes: [
"https://www.googleapis.com/auth/cloud-platform",
"https://www.googleapis.com/auth/dfatrafficking",
"https://www.googleapis.com/auth/ddmconversions",
"https://www.googleapis.com/auth/dfareporting",
],
});
const authClient = await auth.getClient();
// set auth as a global default
google.options({
auth: authClient,
});
const df = google.dfareporting("v3.5");
const res = await df.userProfiles.list({});
console.log("res: ", res);
assert(true);
} catch (e) {
console.error("error: ", e);
assert(false);
}
});
This results in the following error:
{
  "code": 403,
  "errors": [
    {
      "message": "Version v3.5 is no longer supported. Please upgrade to the latest version of the API.",
      "domain": "global",
      "reason": "forbidden"
    }
  ]
}
This is an interesting error, because v3.5 is the latest version of that API (as of 14 April 2022) (This page shows the deprecation schedule: https://developers.google.com/doubleclick-advertisers/deprecation. Notice that v3.3 and v3.4 are deprecated, while v3.5 is not.)
In any case, using a different version of the dfareporting API still results in errors:
// error thrown: "Version v3.5 is no longer supported. Please upgrade to the latest version of the API."
const df = google.dfareporting("v3.5");
// error thrown: "Version v3.4 is no longer supported. Please upgrade to the latest version of the API."
const df = google.dfareporting("v3.4");
// error thrown: 404 "The requested URL <code>/dfareporting/v3.3/userprofiles</code> was not found on this server"
const df = google.dfareporting("v3.3");
// Note 1: There are no other versions available
// Note 2: It is not possible to leave the version blank
const df = google.dfareporting();
// results in typescript error: "An argument for 'version' was not provided."
I also tried to query the floodlightActivities API, which failed with an authentication error.
// const res = await df.userProfiles.list({});
const res = await df.floodlightActivities.list({
  profileId: "7474579",
});
This, in turn, results in the following error:
{
  "code": 401,
  "errors": [
    {
      "message": "1075 : Failed to authenticate. Google account can not access the user profile/account requested.",
      "domain": "global",
      "reason": "authError",
      "location": "Authorization",
      "locationType": "header"
    }
  ]
}
Now, my question is:
am I doing something wrong while trying to authenticate using the service account credentials?
Or, is it possible that these endpoints do not support service-account-credentials?
Or, is something else going wrong here?

Prevent KeyVault from updating secrets using Terraform

I'm building a Terraform template to create Azure resources, including Key Vault secrets. The customer's subscription policy doesn't allow anyone to update/delete/view Key Vault secrets.
If I run terraform apply for the first time, it works perfectly. However, running the same template again gives the following error:
Error: Error updating Key Vault "####" (Resource Group "####"): keyvault.VaultsClient#Update: Failure responding to request: StatusCode=403 --
Original Error: autorest/azure: Service returned an error. Status=403 Code="RequestDisallowedByPolicy" Message="Resource '###' was disallowed by policy. Policy identifiers: '[{\"policyAssignment\":{\"name\":\"###nis-deny-keyvault-acl\", ...

  on ..\..\modules\azure\keyvault\main.tf line 15, in resource "azurerm_key_vault" "keyvault":
  15: resource "azurerm_key_vault" "keyvault" {
How can I get my CI/CD working when terraform apply will be run continuously?
Is there a way to get past this policy in Terraform?
Is there a way to prevent Terraform from updating the Key Vault once it has been created (other than locking the resource)?
Here is the Keyvault module:
variable "keyvault_id" {
type = string
}
variable "secrets" {
type = map(string)
}
locals {
secret_names = keys(var.secrets)
}
resource "azurerm_key_vault_secret" "secret" {
count = length(var.secrets)
name = local.secret_names[count.index]
value = var.secrets[local.secret_names[count.index]]
key_vault_id = var.keyvault_id
}
data "azurerm_key_vault_secret" "secrets" {
count = length(var.secrets)
depends_on = [azurerm_key_vault_secret.secret]
name = local.secret_names[count.index]
key_vault_id = var.keyvault_id
}
output "keyvault_secret_attributes" {
value = [for i in range(length(azurerm_key_vault_secret.secret.*.id)) : data.azurerm_key_vault_secret.secrets[i]]
}
And here is the module from my template:
locals {
  secrets_map = {
    appinsights-key     = module.app_insights.app_insights_instrumentation_key
    storage-account-key = module.storage_account.primary_access_key
  }

  output_secret_map = {
    for secret in module.keyvault_secrets.keyvault_secret_attributes :
    secret.name => secret.id
  }
}

module "keyvault" {
  source              = "../../modules/azure/keyvault"
  keyvault_name       = local.kv_name
  resource_group_name = azurerm_resource_group.app_rg.name
}

module "keyvault_secrets" {
  source      = "../../modules/azure/keyvault-secret"
  keyvault_id = module.keyvault.keyvault_id
  secrets     = local.secrets_map
}

module "app_service_keyvault_access_policy" {
  source                  = "../../modules/azure/keyvault-policy"
  vault_id                = module.keyvault.keyvault_id
  tenant_id               = module.app_service.app_service_identity_tenant_id
  object_ids              = module.app_service.app_service_identity_object_ids
  key_permissions         = ["get", "list"]
  secret_permissions      = ["get", "list"]
  certificate_permissions = ["get", "list"]
}
Using Terraform for provisioning and managing a Key Vault with that kind of limitation sounds like a bad idea. Terraform's main job is to track the state of your resources; if it is not allowed to read the resource, it becomes pretty useless. Your problem is not even that Terraform is trying to update something: it fails because it wants to check the current state of your resource and is not allowed to.
If your goal is just to create secrets in a Key Vault, I would just use the az keyvault commands, like this:
az login
az keyvault secret set --name mySecret --vault-name myKeyvault --value mySecretValue
An optimal solution would of course be that the service principal you use for executing Terraform commands has sufficient rights to perform the actions it was created for.
I know this is a late answer, but for future visitors:
The pipeline running the Terraform Plan and Apply will need to have proper access to the key vault.
So, if you are running your CI/CD from Azure Pipelines, you would typically have a service connection that your pipeline uses for authentication.
The service connection you use for Terraform is most likely based on a service principal that has contributor rights (at least at the resource group level) for it to provision anything at all.
If that is the case, then you must add an access policy giving that same service principal (use the service principal's Enterprise Application object ID) at least list, get and set permissions for secrets.
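A minimal sketch of such a policy in Terraform, under the assumption that the pipeline service principal's object ID is supplied through a variable (the variable name is an assumption):

data "azurerm_client_config" "current" {}

resource "azurerm_key_vault_access_policy" "pipeline" {
  key_vault_id = module.keyvault.keyvault_id
  tenant_id    = data.azurerm_client_config.current.tenant_id
  # Assumed variable holding the pipeline service principal's object ID.
  object_id    = var.pipeline_service_principal_object_id

  secret_permissions = ["get", "list", "set"]
}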
