Golang Kubernetes client - patching an existing resource with a label

I want to patch an existing Secret resource within Kubernetes. The object is called centos-secretstorage and lives in the default namespace. I want to add a simple label of test: empty. However, this fails when the secret object centos-secretstorage exists but doesn't have any existing labels. If I manually label the secret with something else beforehand via kubectl label secret centos-secretstorage hello=world and rerun my Go code, it is able to add the test: empty label successfully.
However, I want this to be able to add a label regardless of whether any labels already exist.
type secret struct {
	namespace string
	name      string
}

func main() {
	k8sClient := k8CientInit()
	vaultSecret := secret{
		namespace: "default",
		name:      "centos-secretstorage",
	}
	vaultSecret.patchSecret(k8sClient)
}

type patchStringValue struct {
	Op    string `json:"op"`
	Path  string `json:"path"`
	Value string `json:"value"`
}

func (p *secret) patchSecret(k8sClient *kubernetes.Clientset) {
	emptyPayload := []patchStringValue{{
		Op:    "add",
		Path:  "/metadata/labels/test",
		Value: "empty",
	}}
	emptyPayloadBytes, _ := json.Marshal(emptyPayload)
	fmt.Println(string(emptyPayloadBytes))
	emptyres, emptyerr := k8sClient.CoreV1().Secrets(p.namespace).Patch(p.name, types.JSONPatchType, emptyPayloadBytes)
	if emptyerr != nil {
		log.Fatal(emptyerr.Error())
	}
	fmt.Println(emptyres.Labels)
}
Error: the server rejected our request due to an error in our request

The problem is that the add operation in the JSON patch strategy requires the path to point into an existing map, while the object you are patching does not have this map at all. This is why the patch succeeds once any label already exists. We can work around this by using a different patch strategy; the merge strategy should work well here.
I was able to reproduce this (on a namespace, but the object doesn't matter) using kubectl (which is generally useful when debugging the Kubernetes API):
kubectl patch ns a --type='json' -p='[{"op": "add", "path": "/metadata/labels/test", "value":"empty"}]' -> fails
kubectl patch ns a --type='merge' -p='{"metadata": {"labels": {"test": "empty"}}}' -> succeeds
Using the Go client-go library, it would look something like this (I didn't actually compile or run this):
payload := `{"metadata": {"labels": {"test": "empty"}}}`
emptyres, emptyerr := k8sClient.CoreV1().Secrets(p.namespace).Patch(p.name, types.MergePatchType, []byte(payload))
You can make the creation of the payload JSON nicer using structs, as you did with patchStringValue.
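For example, a minimal sketch of building that merge-patch payload with structs instead of a raw string (the type and helper names here are my own illustration, not from the original code):

type mergeLabelPatch struct {
	Metadata struct {
		Labels map[string]string `json:"labels"`
	} `json:"metadata"`
}

func labelPayload(key, value string) ([]byte, error) {
	// Marshals to: {"metadata":{"labels":{"<key>":"<value>"}}}
	var p mergeLabelPatch
	p.Metadata.Labels = map[string]string{key: value}
	return json.Marshal(p)
}

// Usage (same Patch call as above, but with types.MergePatchType):
// payloadBytes, _ := labelPayload("test", "empty")
// emptyres, emptyerr := k8sClient.CoreV1().Secrets(p.namespace).Patch(p.name, types.MergePatchType, payloadBytes)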
More info on patch strategies can be found here:
https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/
https://erosb.github.io/post/json-patch-vs-merge-patch/

Grafeas golang filter string and DeleteOccurrence() API errors

I'm running into 2 separate issues using the Grafeas golang v1beta1 API.
What I'm trying to do
Call ListOccurrencesRequest() with a Filter to get a list of occurrences for deletion
Call DeleteOccurrence() on each occurrence from above list to delete it
Issue #1
I'm trying to set the Filter field using this GCP reference grafeas golang code as a guide.
filterStr := fmt.Sprintf(`kind=%q`, grafeas_common_proto.NoteKind_BUILD.String())

listReq := &grafeas_proto.ListOccurrencesRequest{
	Parent:   BuildProject,
	Filter:   filterStr,
	PageSize: 100,
}

listOccResp, err := r.grafeasCommon.ListOccurrences(ctx, listReq)
for {
	if err != nil {
		log.Error("failed to iterate over occurrences", zap.NamedError("error", err))
		return nil, err
	}
	...
But it looks like my filterStr is invalid; here's the error:
filterStr {"filterStr": "kind=\"BUILD\""}
failed to iterate over occurrences {"error": "rpc error: code = Internal desc = error while parsing filter expression: 4 errors occurred:\n\t* error parsing filter\n\t* Syntax error: token recognition error at: '=\"' (1:4)\n\t* Syntax error: token recognition error at: '\"' (1:11)\n\t* Syntax error: extraneous input 'BUILD' expecting <EOF> (1:6)\n\n"}
It looks like the \ escape character is causing trouble, but I've tried it without it and get another flavor of the same type of error.
Issue #2
When I call DeleteOccurrence(), I can see that the occurrence is in fact deleted from Grafeas by checking:
curl http://localhost:8080/v1beta1/projects/broker_builds/occurrences
But DeleteOccurrence() always returns an error.
Code:
for _, o := range occToDelete {
	log.Info("occToDelete", zap.String("occurrence", o))
	_, err := r.grafeasCommon.DeleteOccurrence(ctx, &grafeas_proto.DeleteOccurrenceRequest{
		Name: o,
	})
	if err != nil {
		log.Error("failed to delete occurrence", zap.String("occurrence", o), zap.NamedError("error", err))
	}
}
Error:
failed to delete occurrence {"occurrence": "projects/broker_builds/occurrences/f61a4c57-a3d3-44a9-86ee-5d58cb6c6052", "error": "rpc error: code = Internal desc = grpc: error while marshaling: proto: Marshal called with nil"}
I don't understand what the error is referring to.
This question was cross-posted on the Grafeas message board.
Appreciate any help. Thanks.
Can you share some details about the storage engine used and the filtering implementation?
Issue 1: filtering is not implemented in any of the storage engines in github.com/grafeas/grafeas.
Issue 2: it depends on which store you use; memstore/embedded store do not seem to produce any errors similar to what you mentioned. If you are using the PostgreSQL store, are you trying to delete an occurrence twice?
Solution for Issue #1
I'm using grafeas-elasticsearch as the storage backend. It uses a different filter string format than the examples I had looked at in my original post.
For example, it uses == instead of =, && instead of AND, etc.
More examples can be seen here:
https://github.com/rode/grafeas-elasticsearch/blob/main/test/v1beta1/occurrence_test.go#L226
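With that format, the filter from the question would look something like this (a sketch based on the linked tests; the exact quoting grafeas-elasticsearch accepts may differ):

// == instead of =; %q adds the double quotes around BUILD
filterStr := fmt.Sprintf(`kind == %q`, grafeas_common_proto.NoteKind_BUILD.String())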
Solution for Issue #2
This is a known issue with Grafeas:
https://github.com/grafeas/grafeas/pull/456
https://github.com/grafeas/grafeas/pull/468
Unfortunately, the latest tagged release of Grafeas, v0.1.6, does not include these fixes yet, so we will need to pick them up in the next release.
Thanks to @Ovidiu Ghinet, that was a good tip.

Using terraform yamldecode to access multi level element

I have a YAML file (also used in an Azure DevOps pipeline, so it needs to stay in this format) which contains some settings I'd like to access directly from my Terraform module.
The file looks something like:
variables:
  - name: tenantsList
    value: tenanta,tenantb
  - name: unitName
    value: canary
I'd like to have a module like this to access the settings, but I can't see how to get to the bottom level:
locals {
  settings = yamldecode(file("../settings.yml"))
}

module "infra" {
  source   = "../../../infra/terraform/"
  unitname = local.settings.variables.unitName
}
But terraform plan errors with this:
Error: Unsupported attribute
on canary.tf line 16, in module "infra":
16: unitname = local.settings.variables.unitName
|----------------
| local.settings.variables is tuple with 2 elements
This value does not have any attributes.
It seems like the main reason this is difficult is because this YAML file is representing what is logically a single map but is physically represented as a YAML list of maps.
When reading data from a separate file like this, I like to write an explicit expression to normalize it and optionally transform it for more convenient use in the rest of the Terraform module. In this case, it seems like having variables as a map would be the most useful representation as a Terraform value, so we can write a transformation expression like this:
locals {
  raw_settings = yamldecode(file("${path.module}/../settings.yml"))

  settings = {
    variables = tomap({
      for v in local.raw_settings.variables : v.name => v.value
    })
  }
}
The above uses a for expression to project the list of maps into a single map using the name values as the keys.
With the list of maps converted to a single map, you can then access it the way you originally tried:
module "infra" {
source = "../../../infra/terraform/"
unitname = local.settings.variables.unitName
}
If you were to output the transformed value of local.settings as YAML, it would look something like this, which is why accessing the map elements directly is now possible:
variables:
  tenantsList: tenanta,tenantb
  unitName: canary
This will work only if all of the name strings in your input are unique, because otherwise there would not be a unique map key for each element.
(Writing a normalization expression like this also doubles as some implicit validation for the shape of that YAML file: if variables were not a list or if the values were not all of the same type then Terraform would raise a type error evaluating that expression. Even if no transformation is required, I like to write out this sort of expression anyway because it serves as some documentation for what shape the YAML file is expected to have, rather than having to study all of the references to it throughout the rest of the configuration.)
With my multidecoder for YAML and JSON you are able to access multiple YAML and/or JSON files with their relative paths in one step.
Documentation can be found here:
Terraform Registry -
https://registry.terraform.io/modules/levmel/yaml_json/multidecoder/latest?tab=inputs
GitHub:
https://github.com/levmel/terraform-multidecoder-yaml_json
Usage
Place this module in the location where you need to access multiple different YAML and/or JSON files (different paths are possible) and pass your path(s) in the filepaths parameter, which takes a set of strings of relative paths to YAML and/or JSON files as an argument. You can change the module name if you want!
module "yaml_json_decoder" {
source = "levmel/yaml_json/multidecoder"
version = "0.2.1"
filepaths = ["routes/nsg_rules.yml", "failover/cosmosdb.json", "network/private_endpoints/*.yaml", "network/private_links/config_file.yml", "network/private_endpoints/*.yml", "pipeline/config/*.json"]
}
Patterns to access YAML and/or JSON files from relative paths:
To access all YAML and/or JSON files in a folder, enter your path as follows: "folder/rest_of_folders/*.yaml", "folder/rest_of_folders/*.yml" or "folder/rest_of_folders/*.json".
To access a specific YAML or JSON file in a folder structure, use "folder/rest_of_folders/name_of_yaml.yaml", "folder/rest_of_folders/name_of_yaml.yml" or "folder/rest_of_folders/name_of_yaml.json".
If you would like to select all YAML and/or JSON files within a folder, use the "*.yml", "*.yaml", "*.json" notation (see the Usage section above).
YAML delimiter support is available from version 0.1.0!
WARNING: Only the relative path must be specified. path.root (already included in the module by default) should not be passed; pass only everything after it.
Access YAML and JSON entries
Now you can access all entries within all the YAML and/or JSON files you've selected like this: module.yaml_json_decoder.files.[name of your YAML or JSON file].entry. If the name of your YAML or JSON file is name_of_your_config_file, then access it as follows: module.yaml_json_decoder.files.name_of_your_config_file.entry.
Example of multi YAML and JSON file accesses from different paths (directories)
first YAML file:
routes/nsg_rules.yml
rdp:
  name: rdp
  priority: 80
  direction: Inbound
  access: Allow
  protocol: Tcp
  source_port_range: "*"
  destination_port_range: 3399
  source_address_prefix: VirtualNetwork
  destination_address_prefix: "*"
---
ssh:
  name: ssh
  priority: 70
  direction: Inbound
  access: Allow
  protocol: Tcp
  source_port_range: "*"
  destination_port_range: 24
  source_address_prefix: VirtualNetwork
  destination_address_prefix: "*"
second YAML file:
services/logging/monitoring.yml
application_insights:
  application_type: other
  retention_in_days: 30
  daily_data_cap_in_gb: 20
  daily_data_cap_notifications_disabled: true

logs:
  # Optional fields
  - "AppMetrics"
  - "AppAvailabilityResults"
  - "AppEvents"
  - "AppDependencies"
  - "AppBrowserTimings"
  - "AppExceptions"
  - "AppExceptions"
  - "AppPerformanceCounters"
  - "AppRequests"
  - "AppSystemEvents"
  - "AppTraces"
first JSON file:
test/config/json_history.json
{
  "glossary": {
    "title": "example glossary",
    "GlossDiv": {
      "title": "S",
      "GlossList": {
        "GlossEntry": {
          "ID": "SGML",
          "SortAs": "SGML",
          "GlossTerm": "Standard Generalized Markup Language",
          "Acronym": "SGML",
          "Abbrev": "ISO 8879:1986",
          "GlossDef": {
            "para": "A meta-markup language, used to create markup languages such as DocBook.",
            "GlossSeeAlso": ["GML", "XML"]
          },
          "GlossSee": "markup"
        }
      }
    }
  }
}
main.tf
module "yaml_json_multidecoder" {
source = "levmel/yaml_json/multidecoder"
version = "0.2.1"
filepaths = ["routes/nsg_rules.yml", "services/logging/monitoring.yml", test/config/*.json]
}
output "nsg_rules_entry" {
value = module.yaml_json_multidecoder.files.nsg_rules.aks.ssh.source_address_prefix
}
output "application_insights_entry" {
value = module.yaml_json_multidecoder.files.monitoring.application_insights.daily_data_cap_in_gb
}
output "json_history" {
value = module.yaml_json_multidecoder.files.json_history.glossary.title
}
Changes to Outputs:
  nsg_rules_entry            = "VirtualNetwork"
  application_insights_entry = 20
  json_history               = "example glossary"

Where is my GenesisConfig entry for my config storage?

I'm trying to reconstruct the Sudo module's root key behaviour before I extend it. Following the v1 documentation on GenesisConfig, I have a config() storage variable in my decl_storage:
RootKey get(fn rootkey) config(): T::AccountId;
(in the node-template template.rs for now)
Yet, if I look at the macro-expanded output, I have no template item in the GenesisConfig struct, and I cannot put an entry like the following in the chain_spec's testnet_genesis function:
template: Some(TemplateConfig {
    rootkey: root_key,
}),
Because I get a complaint about both template and TemplateConfig, even though both are supposed to be constructed by the macro expansion.
Edit: Specifically, if I add the above with a TemplateConfig item in the use runtime::{} list, I am informed:
error[E0432]: unresolved import `runtime::TemplateConfig`
 --> node-template/src/chain_spec.rs:4:14
  |
4 |     SudoConfig, TemplateConfig, IndicesConfig, SystemConfig, WASM_BINARY, Signature
  |                 ^^^^^^^^^^^^^^ no `TemplateConfig` in the root

error[E0560]: struct `node_template_runtime::GenesisConfig` has no field named `template`
   --> node-template/src/chain_spec.rs:142:3
    |
142 |     template: Some(TemplateConfig {
    |     ^^^^^^^^ `node_template_runtime::GenesisConfig` does not have this field
    |
    = note: available fields are: `system`, `aura`, `grandpa`, `indices`, `balances`, `sudo`
I also don't see any template items in polkadot.js under storage, whereas I do see sudo's key().
What obvious thing am I missing?
When trying to set up the genesis configuration for a runtime module, you need to do the following:
Make sure your runtime module has "configurable storage items". This could be as simple as setting config() in the decl_storage! macro, but it can also be a bit more complicated, as documented in decl_storage! - GenesisConfig.
decl_storage! {
    trait Store for Module<T: Trait> as Sudo {
        Key get(fn key) config(): T::AccountId;
        //--------------^^^^^^^^---------------
    }
}
This will generate a GenesisConfig in your module, which will be used in the next step.
Next you need to expose your module-specific GenesisConfig struct to the rest of your runtime's genesis configuration by adding the Config/Config<T> item to your construct_runtime! macro. In this example, we use Config<T> because we are configuring a generic T::AccountId:
construct_runtime!(
    pub enum Runtime where
        Block = Block,
        NodeBlock = opaque::Block,
        UncheckedExtrinsic = UncheckedExtrinsic
    {
        //--snip--
        TemplateModule: template::{Module, Call, Storage, Event<T>, Config<T>},
        //----------------------------------------------------------^^^^^^^^^--
    }
);
This will generate an alias to your module-specific GenesisConfig object based on the name you configured for your module (name + Config). In this case, the name of the object will be TemplateModuleConfig.
Finally, you need to configure this storage item in the chain_spec.rs file. To do this, make sure to import the TemplateModuleConfig item:
use node_template_runtime::{
    AccountId, AuraConfig, BalancesConfig, GenesisConfig, GrandpaConfig,
    SudoConfig, IndicesConfig, SystemConfig, WASM_BINARY, Signature,
    TemplateModuleConfig,
    //--^^^^^^^^^^^^^^^^^^^^
};
And then configure your genesis information:
template: Some(TemplateModuleConfig {
    key: root_key,
}),
It sounds like you're missing use TemplateConfig at the beginning of your chain_spec.rs file. Something like this: https://github.com/substrate-developer-hub/substrate-node-template/blob/8fea1dc6dd0c5547117d022fd0d1bf49868ee548/src/chain_spec.rs#L4
If this is not your issue please supply the exact error you're getting, and optionally a link to the full code.

Is it possible to read an SRML error message in Substrate UI, when a transaction fails?

I am not sure of the behaviour of error messages in Substrate runtimes in relation to Substrate UI, and whether they inherently cause a transaction failure or not.
For example in the democracy SRML I see the following line:
ensure!(!<Cancellations<T>>::exists(h), "cannot cancel the same proposal twice");
This, presumably, is a macro that ensures the transaction fails or stops processing if h (the proposal hash) already exists. There is clearly a message associated with this error.
Am I right to assume that the transaction fails (without the rest of the SRML code being executed) when this test fails?
If so, how do I detect the failure in Substrate UI, and possibly see the message itself?
If not, then presumably some further code is necessary in the runtime module which explicitly creates an error. I have seen Err() - but not in conjunction with ensure!()
As https://github.com/paritytech/substrate/pull/3433 has been merged, the ExtrinsicFailed event now includes a DispatchError, which provides an additional error code.
There isn't much documentation available, so I will just use the system module as an example.
First you need decl_error!; note that the error variants can only be a simple C-like enum:
https://github.com/paritytech/substrate/blob/5420de3face1349a97eb954ae71c5b0b940c31de/srml/system/src/lib.rs#L334
decl_error! {
    /// Error for the System module
    pub enum Error {
        BadSignature,
        BlockFull,
        RequireSignedOrigin,
        RequireRootOrigin,
        RequireNoOrigin,
    }
}
Then you need to associate the declared Error type:
https://github.com/paritytech/substrate/blob/5420de3face1349a97eb954ae71c5b0b940c31de/srml/system/src/lib.rs#L253
decl_module! {
    pub struct Module<T: Trait> for enum Call where origin: T::Origin {
        type Error = Error;
Then you can just return your Error in dispatch calls when things fail:
https://github.com/paritytech/substrate/blob/5420de3face1349a97eb954ae71c5b0b940c31de/srml/system/src/lib.rs#L543
pub fn ensure_root<OuterOrigin, AccountId>(o: OuterOrigin) -> Result<(), Error>
    where OuterOrigin: Into<Result<RawOrigin<AccountId>, OuterOrigin>>
{
    match o.into() {
        Ok(RawOrigin::Root) => Ok(()),
        _ => Err(Error::RequireRootOrigin),
    }
}
Right now you will only be able to see two numbers on the JS side: the module index and the error code. Later there could be support for including the error details in metadata, so that frontends can provide a better response.
Related issue:
https://github.com/paritytech/substrate/issues/2954
The ensure! macro is expanded as:
#[macro_export]
macro_rules! fail {
    ( $y:expr ) => {{
        return Err($y);
    }}
}

#[macro_export]
macro_rules! ensure {
    ( $x:expr, $y:expr ) => {{
        if !$x {
            $crate::fail!($y);
        }
    }}
}
So basically, it's just a quicker way to return an Err. As of 1.0, the error message is only printed to stdout (at least in what I've tested so far); I don't know if it will be included on-chain in the future (so that it can be viewed in Substrate UI).
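For illustration, here is a minimal sketch of how ensure! reads inside a dispatchable call (the function and its body are hypothetical, not taken from the democracy module):

// Hypothetical dispatchable inside decl_module!: if the check fails, ensure!
// expands to an early `return Err("cannot cancel the same proposal twice")`,
// so none of the following lines run.
pub fn cancel_proposal(origin, h: T::Hash) -> Result {
    ensure_root(origin)?;
    ensure!(!<Cancellations<T>>::exists(h), "cannot cancel the same proposal twice");
    <Cancellations<T>>::insert(h, true);
    Ok(())
}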

How to access JSON from external data source in Terraform?

I am receiving JSON from an http Terraform data source:
data "http" "example" {
url = "${var.cloudwatch_endpoint}/api/v0/components"
# Optional request headers
request_headers {
"Accept" = "application/json"
"X-Api-Key" = "${var.api_key}"
}
}
It outputs the following:
http = [{"componentID":"k8QEbeuHdDnU","name":"Jenkins","description":"","status":"Partial Outage","order":1553796836},{"componentID":"ui","name":"ui","description":"","status":"Operational","order":1554483781},{"componentID":"auth","name":"auth","description":"","status":"Operational","order":1554483781},{"componentID":"elig","name":"elig","description":"","status":"Operational","order":1554483781},{"componentID":"kong","name":"kong","description":"","status":"Operational","order":1554483781}]
which is a string in Terraform. In order to convert this string into JSON, I pass it to an external data source, which is a simple Ruby function. Here is the Terraform to pass it:
data "external" "component_ids" {
program = ["ruby", "./fetchComponent.rb",]
query = {
data = "${data.http.example.body}"
}
}
Here is the Ruby function:
#!/usr/bin/env ruby
require 'json'

# Read the query map that the external data source passes on stdin,
# then echo it back out as JSON.
data = JSON.parse(STDIN.read)
results = data.to_json
STDOUT.write results
All of this works. The external data source outputs the following (it appears the same as the http output), but according to the Terraform docs this should be a map:
external1 = {
data = [{"componentID":"k8QEbeuHdDnU","name":"Jenkins","description":"","status":"Partial Outage","order":1553796836},{"componentID":"ui","name":"ui","description":"","status":"Operational","order":1554483781},{"componentID":"auth","name":"auth","description":"","status":"Operational","order":1554483781},{"componentID":"elig","name":"elig","description":"","status":"Operational","order":1554483781},{"componentID":"kong","name":"kong","description":"","status":"Operational","order":1554483781}]
}
I was expecting that I could now access data inside of the external data source, but I am unable to.
Ultimately, what I want to do is create a list of the componentID values located within the external data source.
Some things I have tried:

* output.external: key "0" does not exist in map data.external.component_ids.result in:
  ${data.external.component_ids.result[0]}

* output.external: At column 3, line 1: element: argument 1 should be type list, got type string in:
  ${element(data.external.component_ids.result["componentID"],0)}

* output.external: key "componentID" does not exist in map data.external.component_ids.result in:
  ${data.external.component_ids.result["componentID"]}

* output.external: lookup: lookup failed to find 'componentID' in:
  ${lookup(data.external.component_ids.*.result[0], "componentID")}
I appreciate the help.
I can't test with the variable cloudwatch_endpoint, so I have to think about the solution.
Terraform can't decode JSON directly as of 0.11.x (jsondecode() arrives in 0.12), but there is a workaround for working with nested lists.
Your Ruby needs to be adjusted to produce output shaped like the variable http below; then you should be able to get what you need.
$ cat main.tf
variable "http" {
  type    = "list"
  default = [{componentID = "k8QEbeuHdDnU", name = "Jenkins"}]
}

output "http" {
  value = "${lookup(var.http[0], "componentID")}"
}

$ terraform apply
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.

Outputs:

http = k8QEbeuHdDnU
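For the specific goal of building a list of componentID values, one possible adjustment to the Ruby script is sketched below. This is my own illustration, not from the original answer: it relies on the fact that an external data source's result must be a flat map of strings, so the script joins the IDs into one comma-separated string that Terraform can split back apart.

#!/usr/bin/env ruby
require 'json'

# The external data source passes its query as a JSON object on stdin;
# per the question's config it has one key, "data", holding the raw
# JSON string returned by data.http.example.
input = JSON.parse(STDIN.read)
components = JSON.parse(input["data"])

# Result values must all be strings, so join the IDs with commas.
ids = components.map { |c| c["componentID"] }.join(",")
STDOUT.write JSON.generate("componentIDs" => ids)

On the Terraform 0.11 side, the list can then be recovered with "${split(",", data.external.component_ids.result["componentIDs"])}".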