Applying a dependency to a service in icinga2

We are using icinga2 for monitoring. We have a lot of service checks which are applied dynamically through apply rules. Additionally, these services are applied to a hashmap of database instances which live on various hosts. The long and the short of it is that our service names are determined dynamically, so one might be, for example, HOST!DBNAME-svccheck.
So the scenario is that most of these services depend on the database being up, e.g., `HOST!DBNAME-tnsping`. Unfortunately, the documentation examples are fairly simple and don't include dynamically creating a parent service reference. What I think I want to do is something like this:
apply Dependency "db-connectivity" to Service {
  parent_service_name = "$host.name$!$service.vars.envname$-tnsping"
  # also tried variants of this, e.g.
  # parent_service_name = host.name + "!" + service.vars.envname + "-tnsping"
  child_service_name = service.name
  child_host_name = host.name
  disable_checks = true
  assign where "oracle-db-svc" in service.templates
}
The host doesn't really matter in my case because the dependencies are only between services, but child_host_name is a required field.
No matter what I do, I can't seem to get it to recognize the parent service. For example:
Error: Dependency 'scan-szepdb041x.myhost.org!UAT2-beqfilelast!db-connectivity' references a parent host/service which doesn't exist.
The rules for referencing other object variables while applying a Dependency seem a bit different from those for applying a Service.
Does anyone have any ideas or examples of dynamically applying service dependencies to services which were themselves generated dynamically?

You probably have to loop over the existing hosts and see if they match; then you define the dependency inside the loop.
I had a similar example for dynamically generating disk checks. If I find it, I'll post it here in a few days.
Not sure if that is possible with dependencies, but I'll see.
Edit: see if something like this is enough to get you started:
for (server in get_objects(Host)) {
  if (match("somename*", server.name)) {
    apply Dependency "db-connectivity-" + server.name to Service use (server) {
      parent_service_name = server.name + "!" + service.vars.envname + "-tnsping"
      child_service_name = service.name
      child_host_name = host.name
      disable_checks = true
      assign where "oracle-db-svc" in service.templates
    }
  }
}
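Another angle that might be worth a try (an untested sketch): a Dependency keeps the parent host and parent service in separate attributes, and parent_service_name takes only the service's short name, so the full HOST!NAME string may not need to be assembled at all. This assumes the tnsping service runs on the same host as the child service and that envname is set on the child service:
apply Dependency "db-connectivity" to Service {
  # parent host and parent service are separate attributes;
  # parent_service_name is the short name, without the "HOST!" prefix
  parent_host_name = host.name
  parent_service_name = service.vars.envname + "-tnsping"
  disable_checks = true
  assign where "oracle-db-svc" in service.templates
}
If that resolves, the loop should be unnecessary; the apply rule should fill in the child host/service attributes implicitly.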

Related

How to force variant selection with ResolvedConfiguration.getResolvedArtifacts?

I'm working with a large, multi-module Android application, and I'm trying to define a Gradle task that collects the jars of all runtime dependencies. I'm trying something like this in app/build.gradle:
task collectDeps {
    doLast {
        configurations.releaseRuntimeClasspath.resolvedConfiguration.resolvedArtifacts.each {
            // do stuff
        }
    }
}
I've used this snippet in the past on other Java projects, so I know that it conceptually works; this is just my first time trying it on a project with multiple build types and/or variants.
When run on the Android project, executing this task throws a variant resolution error:
Execution failed for task ':app:collectDeps'.
> Could not resolve all dependencies for configuration ':app:releaseRuntimeClasspath'.
   > The consumer was configured to find a runtime of a component, preferably optimized for Android, as well as attribute 'com.android.build.api.attributes.BuildTypeAttr' with value 'release', attribute 'com.android.build.api.attributes.AgpVersionAttr' with value '7.1.1', attribute 'org.jetbrains.kotlin.platform.type' with value 'androidJvm'. However we cannot choose between the following variants of project :myModule:
       - Configuration ':myModule:releaseRuntimeElements' variant android-aar-metadata declares a runtime of a component, preferably optimized for Android, as well as attribute 'com.android.build.api.attributes.AgpVersionAttr' with value '7.1.1', attribute 'com.android.build.api.attributes.BuildTypeAttr' with value 'release', attribute 'org.jetbrains.kotlin.platform.type' with value 'androidJvm':
           - Unmatched attributes:
               - Provides attribute 'artifactType' with value 'android-aar-metadata' but the consumer didn't ask for it
               - Provides attribute 'com.android.build.gradle.internal.attributes.VariantAttr' with value 'release' but the consumer didn't ask for it
               - Provides a library but the consumer didn't ask for it
       - Configuration ':myModule:releaseRuntimeElements' variant android-art-profile declares a runtime of a component, preferably optimized for Android, as well as attribute 'com.android.build.api.attributes.AgpVersionAttr' with value '7.1.1', attribute 'com.android.build.api.attributes.BuildTypeAttr' with value 'release', attribute 'org.jetbrains.kotlin.platform.type' with value 'androidJvm':
           - Unmatched attributes:
               - Provides attribute 'artifactType' with value 'android-art-profile' but the consumer didn't ask for it
               - Provides attribute 'com.android.build.gradle.internal.attributes.VariantAttr' with value 'release' but the consumer didn't ask for it
               - Provides a library but the consumer didn't ask for it
I've trimmed the error for brevity; there are ~20 variants in total. Note that myModule is a project dependency of the top-level app; if I remove that dependency, the error is the same but comes from a different module.
I should also note here that every other build target works fine; the application is quite mature, and the only change I've made is to add this new task to app/build.gradle. So I assume there's something about the way I'm resolving the dependencies that Gradle doesn't like, but I'm struggling to figure out what, or how to resolve it.
Googling this error is not very helpful; the Gradle documentation is quite vague about exactly how to resolve variants, and the solutions that are provided seem to focus on changing how dependencies are added to the project. I don't necessarily want to do that, because the build works fine for every other use case.
Ideally, I'd like to be able to force a variant for the resolution specifically within my collectDeps task (in fact, ideally collectDeps would be defined in a plugin). Is this possible to do?
In case it matters, the build is using Gradle 7.2 and v7.1.1 of the Android Gradle Plugin.
There may be a better way to handle this, but I ultimately managed to resolve my problem by taking inspiration from Sonatype's open source Nexus scanning plugin. The code looks like this (it's in Kotlin, but can be adapted to Groovy without much difficulty):
project.allprojects.forEach { project ->
    // type-safe accessors aren't available for arbitrary projects, so look the configuration up by name
    val cfg = project.configurations.getByName("releaseRuntimeClasspath")
    try {
        cfg.resolvedConfiguration.resolvedArtifacts.forEach {
            // do stuff
        }
    } catch (e: Exception) {
        when (e) {
            is ResolveException, is AmbiguousVariantSelectionException -> {
                val copyConfiguration = createCopyConfiguration(project)
                cfg.allDependencies.forEach {
                    if (it is ProjectDependency) {
                        project.evaluationDependsOn(it.dependencyProject.path)
                    } else {
                        copyConfiguration.dependencies.add(it)
                    }
                }
                copyConfiguration.resolvedConfiguration.resolvedArtifacts.forEach {
                    // do stuff
                }
            }
            else -> throw e
        }
    }
}
private fun createCopyConfiguration(project: Project): Configuration {
    var configurationName = "myCopyConfiguration"
    var i = 0
    while (project.configurations.findByName(configurationName) != null) {
        configurationName += i
        i++
    }
    val copyConfiguration = project.configurations.create(configurationName)
    copyConfiguration.attributes {
        val factory = project.objects
        this.attribute(Usage.USAGE_ATTRIBUTE, factory.named(Usage::class.java, Usage.JAVA_RUNTIME))
    }
    return copyConfiguration
}
The basic idea is that, if a configuration can't be resolved because of ambiguous variant selection, I create and inject a new parent configuration that specifies the attribute org.gradle.usage='java-runtime'; this is sufficient to disambiguate the variants.
Note that I didn't test this with any other attributes, so it's possible that it could work by setting, for example, the artifactType attribute instead; but my use case is specifically related to the runtime classpath, so this worked for me.
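For reference, a rough Groovy equivalent of createCopyConfiguration might look like this (an untested sketch; the configuration name is arbitrary):
def copyConfiguration = project.configurations.create('myCopyConfiguration')
copyConfiguration.attributes {
    // requesting only the java-runtime usage is what disambiguates the variants
    attribute(Usage.USAGE_ATTRIBUTE, project.objects.named(Usage, Usage.JAVA_RUNTIME))
}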

How to handle weird API flow with implicit create step in custom terraform provider

Most Terraform providers demand a predefined flow: Create/Read/Update/Delete/Exists.
I am in a weird situation developing a provider against an API where this behavior diverges a bit.
There are two kinds of resources, Host and Scope. A host can have many scopes. Scopes are updated with configurations.
This generally fits well into the Terraform flow; a full CRUDE flow is possible, except in one instance.
When a new Host is made, it automatically has a default scope attached to it. It is always there, cannot be deleted, etc.
I can't figure out how to have my provider handle this gracefully. I would want Terraform to treat it like any other resource, but it doesn't have an explicit CREATE/DELETE, only READ/UPDATE/EXISTS, whereas every other scope attached to the host would have CREATE/DELETE.
Importing is not an option due to density; requiring an import for every host would render the entire thing pointless.
I originally was going to attempt to split Scopes and Configurations into separate resources, so one could be fulfilled by the Host (the host providing the scope ID for a configuration, and other configurations getting their scope IDs from a scope resource).
However, this approach falls apart because the API for both is the same, unless I wanted to add the abstraction of creating an empty scope and then applying a configuration against it, which may not be fully supported. It would essentially be two resources controlling one resource, which could lead to dramatic conflicts.
A paraphrased example of an execution I thought about implementing:
resource "host" "test_integrations" {
name = "test.integrations.domain.com"
account_hash = "${local.integrationAccountHash}"
services = [40]
}
resource "configuration" "test_integrations_root_configuration" {
name = "root"
parent_host = "${host.test_integrations.id}"
account_hash = "${local.integrationAccountHash}"
scope_id = "${host.test_integrations.root_scope_id}"
hostnames = ["test.integrations.domain.com"]
}
resource "scope" "test_integrations_other" {
account_hash = "${local.integrationAccountHash}"
host_hash = "${host.test_integrations.id}"
path = "/non/root/path"
name = "Some Other URI Path"
}
resource "configuration" "test_integrations_other_configuration" {
name = "other"
parent_host = "${host.test_integrations.id}"
account_hash = "${local.integrationAccountHash}"
scope_id = "${host.test_integrations_other.id}"
}
In this example flow, a configuration resource and a scope resource unfortunately point at the same underlying API object, which I worry would cause conflicts or confusion about who is responsible for what, and which dramatically confuses the create/delete lifecycle.
But I can't figure out how the TF lifecycle would allow for a resource that would only UPDATE/READ/EXISTS if, say, a flag was given (and how state would handle that).
An alternative would be to have just a Configuration resource, but then if it was the root configuration it would need to skip create/delete, as it is inherently tied to the host.
Ideally I'd be able to handle this situation gracefully. I am trying to avoid including the root scope/configuration in the host definition, as it would create a split in how they are written and handled.
The documentation for providers implies you can use a resource AS a schema object in another resource, but does not explain how or why. If it works the way I imagine, it might be possible to create a resource that is only used to inject into the host, but I don't know whether that is how it works, or how to accomplish it.
I believe I have tentatively found a solution after asking some folks on the Gopher Slack.
Using the AWS provider's default VPC resource as a reference, I can "clone" the resource into one with a custom Create/Delete lifecycle.
A loose example:
func defaultResourceConfiguration() *schema.Resource {
    drc := resourceConfiguration()
    drc.Create = resourceDefaultConfigurationCreate
    drc.Delete = resourceDefaultConfigurationDelete
    return drc
}

func resourceDefaultConfigurationCreate(d *schema.ResourceData, m interface{}) error {
    // double check it exists and update the resource instead
    return resourceConfigurationUpdate(d, m)
}

func resourceDefaultConfigurationDelete(d *schema.ResourceData, m interface{}) error {
    log.Printf("[WARN] Cannot destroy Default Scope Configuration. Terraform will remove this resource from the state file, however resources may remain.")
    return nil
}
This should allow me to provide an identical resource that is designed to interact with the already existing one created by its parent host.
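For completeness, the cloned resource would then be registered alongside the normal one in the provider's ResourcesMap. A hedged sketch; the provider and resource names here are made up:
func Provider() *schema.Provider {
    return &schema.Provider{
        ResourcesMap: map[string]*schema.Resource{
            // hypothetical names; the default-configuration resource reuses the
            // normal configuration schema with the overridden Create/Delete
            "example_host":                  resourceHost(),
            "example_scope":                 resourceScope(),
            "example_configuration":         resourceConfiguration(),
            "example_default_configuration": defaultResourceConfiguration(),
        },
    }
}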

How does one configure VSTS specific load test context parameters for own azure agents intelligently

We recently moved from AWS to Azure for the location of our load test agents, thus making the transition to full use of VSTS.
It was described that, for the moment, to get a load test file working with VSTS using our own VMs for testing, we need to provide two context parameters, UseStaticLoadAgents and StaticAgentsGroupName, in each loadtest file.
Our load test solution is getting very large, and we have multiple loadtest files where we have to set these two values each time. This leads to the situation where, if we were to change our agents group name, for example, we would have to update each individual load test file with the new information.
I'm looking for a way to centralise this until a nicer way is implemented by Microsoft. The idea was to use a load test plugin to add these context parameters, with the plugin drawing the needed values from a centralised config file.
However, it seems that none of the hooks in the load test plugin, nor simply using the initialise method to set these values manually, is working; likely because they are set after full initialisation.
Has anyone got a nice, code-focused solution to manage this and stop us depending on adding brittle values in the editor? Or has anyone even gotten the above approach to work?
The .loadtest file is an XML file, so you can update it programmatically. For example:
string filePath = @"XXX\LoadTest1.loadtest";
XmlDocument doc = new XmlDocument();
doc.Load(filePath);
XmlNamespaceManager nsmgr = new XmlNamespaceManager(doc.NameTable);
nsmgr.AddNamespace("ns", "http://microsoft.com/schemas/VisualStudio/TeamTest/2010");
XmlNode root = doc.DocumentElement;
XmlNode nodeParameters = root.SelectSingleNode("//ns:RunConfigurations/ns:RunConfiguration[@Name='Run Settings1']/ns:ContextParameters", nsmgr);
if (nodeParameters != null)
{
    //nodeParameters.SelectSingleNode("//ns:ContextParameter[@Name='UseStaticLoadAgents']").Value = "agent1";
    foreach (XmlNode n in nodeParameters.ChildNodes)
    {
        switch (n.Attributes["Name"].Value)
        {
            case "Parameter1":
                n.Attributes["Value"].Value = "testUpdate";
                break;
            case "UseStaticLoadAgents":
                n.Attributes["Value"].Value = "agent1";
                break;
            case "StaticAgentsGroupName":
                n.Attributes["Value"].Value = "group1";
                break;
        }
    }
}
doc.Save(filePath);
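Building on that, the same edit could be applied across the whole solution in one go, e.g. from a small console tool or pre-build step. A sketch under stated assumptions: the helper name, directory layout, and parameter values are all illustrative, not part of any VSTS API:
using System.IO;
using System.Xml;

// Hypothetical batch updater: push the same two context parameters into every
// .loadtest file found under the solution directory.
static void UpdateAllLoadTests(string solutionDir, string groupName)
{
    foreach (string path in Directory.GetFiles(solutionDir, "*.loadtest", SearchOption.AllDirectories))
    {
        XmlDocument doc = new XmlDocument();
        doc.Load(path);
        XmlNamespaceManager nsmgr = new XmlNamespaceManager(doc.NameTable);
        nsmgr.AddNamespace("ns", "http://microsoft.com/schemas/VisualStudio/TeamTest/2010");
        foreach (XmlNode n in doc.SelectNodes("//ns:ContextParameter", nsmgr))
        {
            switch (n.Attributes["Name"].Value)
            {
                case "UseStaticLoadAgents":
                    n.Attributes["Value"].Value = "true";
                    break;
                case "StaticAgentsGroupName":
                    n.Attributes["Value"].Value = groupName;
                    break;
            }
        }
        doc.Save(path);
    }
}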

How do I use the appProperties with the ruby api-client

I can't determine how to add custom properties or search for them.
Everything I have tried gives me an error (#<Google::Apis::ClientError: invalid: Invalid query>) when I attempt to search for them. I can successfully complete other queries, but I don't know if the client is set up to work with appProperties (or even properties at all).
Basically, I just need the correct syntax for searching and adding, since it doesn't appear to be in the documentation.
Assuming you already have a reference to an authorized DriveService, you can search based on appProperties using a q parameter (see the Drive API's search documentation), like this:
file_list = drive.list_files(
  q: "appProperties has { key='my_app_key' and value='my_val' }",
  fields: 'files(id, name, appProperties)',
  spaces: 'drive')
If you omit the fields parameter then the search will still work but the properties themselves won't be returned.
Updating appProperties is definitely arcane and the documentation is opaque. What you need is the ID of the file, and a File value object as a container for the attributes to update. Something like this:
new_app_properties = { 'my_app_key' => 'my_val' }
update_f = Google::Apis::DriveV3::File.new(app_properties: new_app_properties)
drive.update_file(file_id, update_f)
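For adding the properties when a file is first created (the "adding" half of the question), the same File value object should work with create_file; a sketch, where the file name, key, and value are placeholders:
metadata = Google::Apis::DriveV3::File.new(
  name: 'example.txt',
  app_properties: { 'my_app_key' => 'my_val' })
# upload_source points at the local file contents to upload
file = drive.create_file(metadata,
  upload_source: 'example.txt',
  fields: 'id, name, appProperties')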

Puppet: Making a custom function depend on a resource

I have a Puppet custom function that returns information about a user defined in OpenStack's Keystone identity service. Usage is something along the lines of:
$tenant_id = lookup_tenant_by_name($username, $password, "mytenant")
The problem is that the credentials used in this query ($username) are supposed to be created by another resource during the Puppet run (a Keystone_user resource from puppet-keystone). As far as I can tell, the call to the lookup_tenant_by_name function is being evaluated before any resource ordering happens, because no amount of dependencies in the calling code is able to force the credentials to be created prior to this function being executed.
In general, is it possible to write custom functions -- or place them appropriately in a manifest -- such that they will not be executed by Puppet until after some specified resource has been instantiated?
Short answer: You cannot make your manifest's behavior depend on resources declared inside of it.
Long answer: Parser functions are called during the compilation phase (on the master if you use one, or on the agent if you use puppet apply). In both cases, a function can never run after a resource is synced, because syncing happens only after the compiler has done all its work (including the invocation of your functions).
To query information from the agent machine, you generally want to use custom facts. Still, those are populated even before the compiler runs.
Likely the best approach in this situation is to make the manifest tolerate the absence of the information, so that anything that depends on the value that your lookup_tenant_by_name function returns will only be evaluated if that value is available. This will usually be during the second Puppet run.
if $tenant_id == "" {
  notify { "cannot yet find tenant ${username}": }
} else {
  # your code using the tenant ID
}
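To illustrate the custom-facts route mentioned above: a fact could query Keystone on the agent and expose the tenant ID for the next compilation. A rough sketch, assuming a usable openstack CLI and credentials exist on the agent; the fact name and command are illustrative:
# Hypothetical custom fact exposing a Keystone tenant/project ID
Facter.add(:mytenant_id) do
  setcode do
    # shell out on the agent; returns the command's stdout as the fact value
    Facter::Core::Execution.execute('openstack project show mytenant -f value -c id')
  end
end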
