How to use kubebuilder's client.List method?

I'm working on a custom controller for a custom resource using kubebuilder (version 1.0.8). I have a scenario where I need to get a list of all the instances of my custom resource so I can sync up with an external database.
All the examples I've seen for Kubernetes controllers use either client-go or just call the API server directly over HTTP. However, kubebuilder has also given me this client.Client object to get and list resources, so I'm trying to use that.
After creating a client instance by using the passed in Manager instance (i.e. do mgr.GetClient()), I then tried to write some code to get the list of all the Environment resources I created.
func syncClusterWithDatabase(c client.Client, db *dynamodb.DynamoDB) {
    // Sync environments
    // Step 1 - read all the environments the cluster knows about
    clusterEnvironments := &cdsv1alpha1.EnvironmentList{}
    c.List(context.Background(), /* what do I put here? */, clusterEnvironments)
}
The example in the documentation for the List method shows:
c.List(context.Background, &result);
which doesn't even compile.
I saw a few methods in the client package to limit the search to particular labels, or to match a specific field against a specific value, but nothing to limit the result to a specific resource kind.
Is there a way to do this via the Client object? Should I do something else entirely?

So I figured it out - the answer is to pass nil for the second parameter. The type of the output pointer determines which kind of resource it actually retrieves.
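A minimal sketch of that call, assuming the List signature from the controller-runtime version vendored by kubebuilder 1.x (List(ctx, opts, list)):

    // Passing nil options lists across all namespaces; the concrete type
    // of clusterEnvironments determines which resource kind is retrieved.
    clusterEnvironments := &cdsv1alpha1.EnvironmentList{}
    if err := c.List(context.Background(), nil, clusterEnvironments); err != nil {
        // handle the error instead of discarding it
    }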

According to the latest documentation, the List method is defined as follows:
List(ctx context.Context, list ObjectList, opts ...ListOption) error
If the List method you are calling has the same definition as above, your code should compile. Since the options (namespace, label and field selectors) are variadic, the only mandatory arguments are the context and the object list.
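For example, a minimal sketch against that signature (client.InNamespace is one of the standard ListOptions; adjust the namespace to your setup):

    clusterEnvironments := &cdsv1alpha1.EnvironmentList{}
    if err := c.List(context.Background(), clusterEnvironments, client.InNamespace("default")); err != nil {
        return err
    }
    for _, env := range clusterEnvironments.Items {
        fmt.Println(env.Name)
    }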
Ref: KubeBuilder Book

Reading CloudWatch log query status in go SDK v2

I'm running a CloudWatch Logs query through the v2 SDK for Go. I've successfully submitted the query using the StartQuery method; however, I can't seem to process the results.
I've got my query ID in a variable (queryID) and am using the GetQueryResults method as follows:
results, err := svc.GetQueryResults(context.TODO(), &cloudwatchlogs.GetQueryResultsInput{QueryId: queryID})
How do I actually read the contents? Specifically, I'm looking at the Status field. If I run the query at the command line, this comes back as a string description. According to the SDK docs, this is a bespoke type "QueryStatus", which is defined as a string with enumerated constants.
I've tried comparing to the constant names, e.g.
if results.Status == cloudwatchlogs.GetQueryResultsOutput.QueryStatus.QueryStatusComplete
but the compiler doesn't accept this. How do I either reference the constants or get to the string value itself?
The QueryStatus type is defined in the separate types package; the Go SDK v2 services are all organised this way.
import "github.com/aws/aws-sdk-go-v2/service/cloudwatchlogs/types"

if results.Status == types.QueryStatusComplete {
    fmt.Println("complete!")
}
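For completeness, a sketch of polling until the query finishes (assumes this runs inside a function you can return from; svc is an existing *cloudwatchlogs.Client, queryID is the *string returned by StartQuery, and aws, context, fmt, log, and time are imported):

    for {
        out, err := svc.GetQueryResults(context.TODO(), &cloudwatchlogs.GetQueryResultsInput{QueryId: queryID})
        if err != nil {
            log.Fatal(err)
        }
        switch out.Status {
        case types.QueryStatusComplete:
            // Each result row is a slice of field/value pairs.
            for _, row := range out.Results {
                for _, f := range row {
                    fmt.Printf("%s=%s ", aws.ToString(f.Field), aws.ToString(f.Value))
                }
                fmt.Println()
            }
            return
        case types.QueryStatusFailed, types.QueryStatusCancelled:
            log.Fatalf("query ended with status %s", out.Status)
        default:
            // Scheduled or Running - wait and poll again.
            time.Sleep(time.Second)
        }
    }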

Nesting custom resources in chef

I am trying to build a custom resource which would in turn use another of my custom resources as part of its action. The pseudo-code looks something like this:
customResource A
    property :component_id, String

    action :do_something do
        component_id = 1 if component_id.nil?
        node.default['component_details'][component_id] = ''
        customResource_b "Get me component details" do
            comp_id component_id
            action :get_component_details
        end
        Chef::Log.info("See the output computed by my customResourceB")
        Chef::Log.info(node['component_details'][component_id])
    end
Things to note:
1. The role of customResource_b is to make a PS call to a REST web service and store the JSON result in node['component_details'][component_id], overriding its value. I am creating this node attribute in this resource since I know it will be used later on, hence avoiding compile-time issues.
Issues I am facing:
1. When testing a simple recipe that calls this resource with chef-client, the code in the resource executes through to the last log line, and only after that is the call to customResource_b made, which is not what I expected to happen.
Any advice would be appreciated. I am also quite new to Chef, so any design improvements are welcome too.
There is no need to nest Chef resources; rather, rely on Chef's idempotence, guards, and notifications. And as usual, you can always use a condition to decide which cookbook/recipe to run.
As for the ordering you observed: the Chef::Log.info calls are plain Ruby, so they run while the action block is being evaluated, whereas the inner customResource_b is only added to the resource collection at that point and converges afterwards. If you need something to run after the inner resource, make it a resource itself (for example a log or ruby_block resource).

How to handle weird API flow with implicit create step in custom terraform provider

Most Terraform providers demand a predefined flow: Create/Read/Update/Delete/Exists.
I am in a weird situation, developing a provider against an API where this behavior diverges a bit.
There are two kinds of resources, Host and Scope. A host can have many scopes. Scopes are updated with configurations.
This generally fits well into the Terraform flow; it has a full CRUDE lifecycle possible - except in one instance.
When a new Host is made, it automatically has a default scope attached to it. It is always there and cannot be deleted.
I can't figure out how to have my provider handle this gracefully. I would want Terraform to treat the default scope like any other resource, but it has no explicit CREATE/DELETE, only READ/UPDATE/EXISTS - while every other scope attached to the host has the full CREATE/DELETE.
Importing is not an option due to density; requiring an import for every host would render the entire thing pointless.
I originally was going to attempt to split Scopes and Configurations into separate resources, so one could be fulfilled by the Host (the host providing the scope ID for one configuration, and other configurations getting their scope IDs from scope resources).
However, this approach falls apart because the API for both is the same, unless I add the abstraction of creating an empty scope and then applying a configuration against it, which may not be fully supported. It would essentially be two resources controlling one underlying object, which could lead to dramatic conflicts.
A paraphrased example of an execution I thought about implementing:
resource "host" "test_integrations" {
name = "test.integrations.domain.com"
account_hash = "${local.integrationAccountHash}"
services = [40]
}
resource "configuration" "test_integrations_root_configuration" {
name = "root"
parent_host = "${host.test_integrations.id}"
account_hash = "${local.integrationAccountHash}"
scope_id = "${host.test_integrations.root_scope_id}"
hostnames = ["test.integrations.domain.com"]
}
resource "scope" "test_integrations_other" {
account_hash = "${local.integrationAccountHash}"
host_hash = "${host.test_integrations.id}"
path = "/non/root/path"
name = "Some Other URI Path"
}
resource "configuration" "test_integrations_other_configuration" {
name = "other"
parent_host = "${host.test_integrations.id}"
account_hash = "${local.integrationAccountHash}"
scope_id = "${host.test_integrations_other.id}"
}
In this example flow, a configuration resource and a scope resource unfortunately point at the same underlying object, which I worry would cause conflicts or confusion about who is responsible for what, and it dramatically confuses the create/delete lifecycle.
But I can't figure out how the Terraform lifecycle would allow for a resource that would only UPDATE/READ/EXISTS if, say, a flag were given (and how state would handle that).
An alternative would be to have just a Configuration resource, but then the root configuration would need to skip create/delete, as it is inherently tied to the host.
Ideally I'd be able to handle this situation gracefully. I am trying to avoid including the root scope/configuration in the host definition, as it would create a split in how they are written and handled.
The documentation for providers implies you can use a resource AS a schema object in another resource, but does not explain how or why. If it works the way I imagine, it might let me create a resource that is only used for injection into the host - but I don't know whether that is how it works, or how to accomplish it if so.
I believe I have tentatively found a solution after asking some folks on the Gophers Slack.
Using the AWS provider's Default VPC resource as a reference, I can "clone" the resource into one with a custom Create/Delete lifecycle.
Loose example:
func defaultResourceConfiguration() *schema.Resource {
    drc := resourceConfiguration()
    drc.Create = resourceDefaultConfigurationCreate
    drc.Delete = resourceDefaultConfigurationDelete
    return drc
}

func resourceDefaultConfigurationCreate(d *schema.ResourceData, m interface{}) error {
    // double check it exists and update the resource instead
    return resourceConfigurationUpdate(d, m)
}

func resourceDefaultConfigurationDelete(d *schema.ResourceData, m interface{}) error {
    log.Printf("[WARN] Cannot destroy Default Scope Configuration. Terraform will remove this resource from the state file, however resources may remain.")
    return nil
}
This should allow me to provide an identical resource that is designed to interact with the already existing one created by its parent host.
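For completeness, a sketch of how the clone might be registered alongside the normal resource in the provider (the "example_" resource names here are hypothetical):

    func Provider() *schema.Provider {
        return &schema.Provider{
            ResourcesMap: map[string]*schema.Resource{
                "example_configuration":         resourceConfiguration(),
                "example_default_configuration": defaultResourceConfiguration(),
            },
        }
    }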

Puppet: Making a custom function depend on a resource

I have a Puppet custom function that returns information about a user defined in OpenStack's Keystone identity service. Usage is something along the lines of:
$tenant_id = lookup_tenant_by_name($username, $password, "mytenant")
The problem is that the credentials used in this query ($username) are supposed to be created by another resource during the Puppet run (a Keystone_user resource from puppet-keystone). As far as I can tell, the call to the lookup_tenant_by_name function is evaluated before any resource ordering happens, because no amount of dependencies in the calling code can force the credentials to be created before this function executes.
In general, is it possible to write custom functions -- or place them appropriately in a manifest -- such that they will not be executed by Puppet until after some specified resource has been instantiated?
Short answer: You cannot make your manifest's behavior depend on resources declared inside of it.
Long answer: Parser functions are called during the compilation phase (on the master if you use one, or on the agent if you use puppet apply). In neither case can a resource ever be synced before your function runs, because syncing happens only after the compiler has done all its work (including invoking your functions).
To query information from the agent machine, you generally want to use custom facts. Still, those are populated even before the compiler runs.
Likely the best approach in this situation is to make the manifest tolerate the absence of the information, so that anything that depends on the value returned by your lookup_tenant_by_name function is only evaluated once that value is available. This will usually be during the second Puppet run.
if $tenant_id == "" {
    notify { "cannot yet find tenant ${username}": }
} else {
    # your code using the tenant ID
}

Oracle Service Bus - Assign expression

I have this problem and I am not sure why it is happening or how to fix it. I have created an OSB project. In the proxy service pipeline I am doing a Service Callout to a synchronous SOAP service in another application. The other service needs the request body as below:
<RequestSelectionValues xmlns="http://www.camstar.com/WebService/WSShopFloor">
  <inputServiceData xmlns:q1="http://www.camstar.com/WebService/DataTypes" q1:type="OnlineQuery">
    <OnlineQuerySetup>
      <__CDOTypeName/>
      <__name>xLot By FabLotNumber</__name>
    </OnlineQuerySetup>
    <Parameters>
      <__listItem>
        <Name>FabLotNumber</Name>
        <DefaultValue>FAB_Lot_1</DefaultValue>
      </__listItem>
      <__listItem>
        <Name>BLOCKOF200ROWS</Name>
        <DefaultValue>1</DefaultValue>
      </__listItem>
    </Parameters>
  </inputServiceData>
  <queryOption xmlns:q2="http://www.camstar.com/WebService/DataTypes" q2:type="QueryOption">
    <RowSetSize>1000</RowSetSize>
    <StartRow>1</StartRow>
    <QueryType>user</QueryType>
    <ChangeCount>0</ChangeCount>
    <RequestRecordCount>false</RequestRecordCount>
    <RequestRecordSetAndCount>false</RequestRecordSetAndCount>
  </queryOption>
  <serviceInfo xmlns:q3="http://www.camstar.com/WebService/DataTypes" q3:type="OnlineQuery_Info">
    <OnlineQuerySelection>
      <RequestValue>false</RequestValue>
      <RequestMetadata>false</RequestMetadata>
      <RequestSubFieldValues>false</RequestSubFieldValues>
      <RequestSelectionValues>true</RequestSelectionValues>
    </OnlineQuerySelection>
  </serviceInfo>
</RequestSelectionValues>
I am using an Assign to put the above expression in a variable.
Notice the line:
<serviceInfo xmlns:q3="http://www.camstar.com/WebService/DataTypes" q3:type="OnlineQuery_Info">
xmlns:q3="http://www.camstar.com/WebService/DataTypes" needs to come before q3:type="OnlineQuery_Info" for the other service to be called successfully; otherwise the call fails.
In development it looks fine, and I can test the Assign expression as well.
When I go to the OSB console to test the service, I notice that in the Assign variable the namespace declaration switches places and it becomes:
<serviceInfo q3:type="OnlineQuery_Info" xmlns:q3="http://www.camstar.com/WebService/DataTypes">
This makes the service call fail. I have tried putting the body payload in an XSLT; the result is the same. I am not sure why it puts the type attribute before the namespace. The end result is that the service does not work as expected.
Any idea what I can do to fix this issue? How can I prevent the switching?
Thanks
I haven't found any setting in OSB that can prevent reordering of attributes for you. However, the OSB behavior above is completely XML standard compliant. In fact, the target service side should be XML compliant too and treat the two variants mentioned above as the same, because according to the XML standard, two XML documents that differ only in attribute ordering are equivalent.
EDIT:
Please go here to download a modified config. My thoughts are:
Specify the business service to invoke in 'Text as Request' mode, as in "CamstarLotQuery/business/CSWSShopFloor_Txt".
Manipulate messages as text, not XML, in your proxy service, as specified in "CamstarLotQuery/proxy/CamstarLotQueryTxt_Txt".
You might need to specify a SOAPAction HTTP header when calling the business service, depending on the target service.
One solution I can think of is to declare all the namespaces at the parent tag level and keep the attributes where they are applicable.
Example:
<RequestSelectionValues xmlns:q1="http://www.camstar.com/WebService/DataTypes" xmlns="http://www.camstar.com/WebService/WSShopFloor" xmlns:q2="http://www.camstar.com/WebService/DataTypes" xmlns:q3="http://www.camstar.com/WebService/DataTypes">
But the problem with this implementation is that, since the namespace declarations are now global, you have to apply the namespace prefixes (q1, q2, q3) to the elements in the blocks where those namespaces were previously defined.
Example:
<q3:serviceInfo q3:type="OnlineQuery_Info">
  <q3:OnlineQuerySelection>
    <q3:RequestValue>false</q3:RequestValue>
    <q3:RequestMetadata>false</q3:RequestMetadata>
    <q3:RequestSubFieldValues>false</q3:RequestSubFieldValues>
    <q3:RequestSelectionValues>true</q3:RequestSelectionValues>
  </q3:OnlineQuerySelection>
</q3:serviceInfo>
If a namespace prefix is not applied, then per the XML standard the element assumes the default namespace value, which here is the namespace of the parent.
However, even though this is a somewhat roundabout implementation, it will definitely work.
