Detect Spec update in the reconcile function - go

I am just getting started with Kubernetes and the Operator SDK. I am trying to build my first operator and I probably have a simple question.
Question
How do I detect a configuration change in the custom resource YAML in the reconcile loop, and take an action according to the change?
I have some config properties specified in my CR Spec:
apiVersion: my.example.com/v1alpha1
kind: StoreApp
metadata:
  name: mystoreapp
spec:
  username: technicalUser
  password: abcd1234
  catalogs:
    - name: Bikes
      description: Bikes_description
    - name: Cars
      description: Cars_description
When I add a new custom resource of this kind, I want my controller to create a new pod with my app image running inside (in a webserver). When my app is up and running for the first time, I want to configure it (add the catalogs from the spec) via an HTTP request from the operator.
So far that is fine, but I also want to change these catalogs while my app is up and running.
For example, I want to add a new catalog to the spec (through kubectl patch). My operator's reconcile method will be called, but how can I tell that the spec has changed? I am not sure it's a good idea to make HTTP calls to my app to get all catalogs and compare them with the catalogs from the spec. Is this the correct way to detect a change?
I am thinking about two other ways to find out that something was updated, but I am not sure whether they will work properly or whether they are the best way to do this.
The first idea is to request the instance of StoreApp with client.Get(...), but as far as I understand this calls the API server and returns the updated version of mystoreapp. I read about a local index which acts like a cache for these objects, so I could check whether there is a difference between the cached object and the object returned from the API server. However, I did not find out how to get the object from this local index, so I was not able to compare the two objects.
The second idea is to keep a map in which I store the hash of the whole spec object and to compare that hash every time with the hash of the object fetched with client.Get(...). I think this would work, but there should be a better way to do it.
I read some Java operators for Kubernetes and they had methods like onAdd, onUpdate and onDelete. I couldn't find anything similar in the Operator SDK. Is there anything like this in the Operator SDK?
Every answer will be helpful. Thank you in advance!
Best Regards,
Hristiyan

The recommended practice is to look at the spec you received, and compare it to the state of the world/cluster, so retrieving the catalogs and comparing them to the spec is indeed the proper way to do it.
The reasoning behind this recommendation is that the order of the events you get from Kubernetes is not guaranteed to be consistent. It is also not guaranteed that you will receive every event in a reasonable amount of time, or that you will receive each event only once, so it's best to base your decision making on what was requested compared to what is, rather than on which specific event triggered the reconciliation.
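To make that concrete, here is a minimal sketch of such a reconcile function against a recent controller-runtime API. The import path for the generated types, the Catalog field names, and the two HTTP helpers (fetchCurrentCatalogs, addCatalog) are made-up names for illustration, not anything the Operator SDK provides:

package controllers

import (
	"context"
	"time"

	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"

	myv1alpha1 "my.example.com/storeapp/api/v1alpha1" // hypothetical import path for the generated API types
)

// StoreAppReconciler is the reconciler scaffolded by the Operator SDK.
type StoreAppReconciler struct {
	client.Client
}

func (r *StoreAppReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	// Fetch the current desired state; controller-runtime serves this from
	// its informer cache, so the call is cheap.
	storeApp := &myv1alpha1.StoreApp{}
	if err := r.Get(ctx, req.NamespacedName, storeApp); err != nil {
		// The resource may have been deleted in the meantime; nothing to do.
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}

	// Ask the running app which catalogs it actually has (hypothetical helper
	// that performs the HTTP call against the app's service).
	actual, err := fetchCurrentCatalogs(ctx, storeApp)
	if err != nil {
		// The app may not be reachable yet; try again later.
		return ctrl.Result{RequeueAfter: 30 * time.Second}, nil
	}

	// Create every catalog that is declared in the spec but missing in the app.
	for _, desired := range storeApp.Spec.Catalogs {
		if !actual[desired.Name] {
			if err := addCatalog(ctx, storeApp, desired); err != nil {
				return ctrl.Result{}, err
			}
		}
	}
	return ctrl.Result{}, nil
}

// fetchCurrentCatalogs and addCatalog stand in for the HTTP calls to the app;
// their real implementations depend on how the app exposes its catalog API.
func fetchCurrentCatalogs(ctx context.Context, app *myv1alpha1.StoreApp) (map[string]bool, error) {
	// e.g. GET the app's /catalogs endpoint and build a set of catalog names
	return map[string]bool{}, nil
}

func addCatalog(ctx context.Context, app *myv1alpha1.StoreApp, c myv1alpha1.Catalog) error {
	// e.g. POST the name and description to the app's /catalogs endpoint
	return nil
}

Note that the loop never asks "what changed?"; it only makes the app match whatever is currently in the spec, which is exactly why it does not matter which event (or how many coalesced events) triggered the reconcile call.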

Related

Is there a Go implementation of fs.ReadDir for Google Cloud Storage?

I am creating a web application in Go.
I have modified my working code so that it can read and write files on both a local filesystem and a bucket of Google Cloud Storage based on a flag.
Basically I included a small package in the middle, and I implemented my-own-pkg.readFile or my-own-pkg.WriteFile and so on...
I have replaced all calls in my code where I read or save files from the local filesystem with calls to my methods.
Finally, these methods contain a simple switch that runs either the standard code to read/write locally or the code to read/write from/to a GCP bucket.
My current problem
In some places I need to perform a ReadDir to get the list of DirEntries and then cycle through them. I do not want to change my code except for replacing os.ReadDir with my-own-pkg.ReadDir.
So far I understand that there is no native function for this in the GCP storage package. So I suppose (but here I need your help, because I am just guessing) that I would need an implementation of fs.FS for GCP. Since this is a new feature of Go 1.16, I guess it's too early to find one.
So I am simply trying to create a my-own-pkg.ReadDir(folderpath) function that does the following:
case "local": { }
case "gcp": {
<Use gcp code sample to list objects in my bucket with Query.Prefix = folderpath and
Query.Delimiter="/"
Then create a slice of my-own-pkg.DirEntry (because fs.DkrEntry is just an interface and so it needs to be implemented... :-( ) and return them.
In order to do so I need to implement also the interface fs.DirEntry (which requires the implementation of interface for FileInfo and maybe something else...)
Question 1) Is this the right path to follow to solve my issue, or is there a better way?
Question 2) (Only) if so, does the GCP method that lists objects with a prefix and a delimiter return just files? I can't see a method that also returns the list of prefixes found.
(If I have prefix/file1.txt and prefix/a/file2.txt I would like to get both "file1.txt" and "a", as files and prefixes...)
I hope I was clear enough... This time I can't include code because it's incomplete... But in case it helps, I can paste what I can.
NOTE: By the way, Go 1.16 allowed me to solve a similar issue elegantly when dealing with assets either embedded or on the filesystem, thanks to the existing implementation of fs.FS and the related ReadDirFS. It would be good if I could follow the same route 🙂
By the way, I am going to keep studying and experimenting, so if I am successful I will contribute back as well :-)
I think your abstraction layer is good, but you need to know something about Cloud Storage: directories don't exist.
In fact, all the objects sit at the root of the bucket /, and the fully qualified name of an object is /path/to/object.file. You can filter on a prefix, which returns all the objects (i.e. files, because directories don't exist) with the same path prefix.
It's not a full answer to your question, but I'm sure you can rethink and redesign the rest of your code with this particularity in mind.
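To tie this back to your two questions: a single listing call with a prefix and a delimiter returns both the objects and the synthetic "directory" prefixes, so in your example you would get both "file1.txt" and "a" in one pass. Below is a rough sketch using the cloud.google.com/go/storage client; the package name, the readDirGCS function and the gcsDirEntry type are made-up names for illustration, not part of the library:

package myownpkg // hypothetical name for the abstraction package described in the question

import (
	"context"
	"io/fs"
	"path"

	"cloud.google.com/go/storage"
	"google.golang.org/api/iterator"
)

// gcsDirEntry is a deliberately minimal fs.DirEntry: just a name and a flag.
// Info() could later be backed by the size and update time from ObjectAttrs.
type gcsDirEntry struct {
	name  string
	isDir bool
}

func (e gcsDirEntry) Name() string { return e.name }
func (e gcsDirEntry) IsDir() bool  { return e.isDir }
func (e gcsDirEntry) Type() fs.FileMode {
	if e.isDir {
		return fs.ModeDir
	}
	return 0
}
func (e gcsDirEntry) Info() (fs.FileInfo, error) { return nil, fs.ErrInvalid }

// readDirGCS lists the objects and the synthetic "directories" directly under
// folderpath, mirroring what os.ReadDir returns for a local directory.
func readDirGCS(ctx context.Context, bucket *storage.BucketHandle, folderpath string) ([]fs.DirEntry, error) {
	it := bucket.Objects(ctx, &storage.Query{
		Prefix:    folderpath + "/", // objects "inside" the pseudo-directory
		Delimiter: "/",              // collapse deeper objects into their first-level prefix
	})
	var entries []fs.DirEntry
	for {
		attrs, err := it.Next()
		if err == iterator.Done {
			break
		}
		if err != nil {
			return nil, err
		}
		if attrs.Prefix != "" {
			// Synthetic directory entry: only Prefix is set, Name is empty.
			entries = append(entries, gcsDirEntry{name: path.Base(attrs.Prefix), isDir: true})
		} else {
			entries = append(entries, gcsDirEntry{name: path.Base(attrs.Name)})
		}
	}
	return entries, nil
}

With something like this, the "gcp" branch of my-own-pkg.ReadDir only has to call readDirGCS, and the entries whose Prefix field is set stand in for the directories that Cloud Storage itself never stores.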

FHIR Search by Referencing Resources

Is there a way to search for a resource by its referencing resources? For example, is there a way to find all Observations of code = X with Provenance by agent Y?
GET [base]/Observation?code=X&???
One could:
GET [base]/Provenance?userid=Y&_include=Provenance:target:Observation
but that prevents any kind of filtering on Observation (which may create a volume problem in the response!). Also, I don't need the provenance resource - I just need to make sure that the Observations I'm using have a certain provenance.
Right now, to the best of my knowledge, there's no way to apply filters to multiple resources unless you're using _filter or a custom OperationDefinition.

Magento audit in custom design

Recently we had a Magento audit and one of their suggestions is as follows:
Location:
app/design/frontend/enterprise/mytheme/template/catalog/product/list.phtml:59,
Type: Maintainability
Name: Hardcoded Value
Priority: Low
Description: Hardcoding values like product type code, store id, file name, credentials, etc. may cause serious issues during future upgrades or porting.
Recommendation: Such values can be stored in class constants or in the system configuration for the best flexibility.
Example:
Mage::getConfig()->getOptions()->getSomeSku()
If we add custom code in a custom theme, will that cause problems during an upgrade?
Custom vs. hard-coded
The issue here is not the fact that the code is custom. It's the fact that the code is not upgrade-safe.
Why hard-coding is bad
Hard-coded values aren't easily accessible for future changes. Updates may perform unexpected actions and you may end up with a broken page because of it.
The values themselves can become obsolete if an upgrade procedure re-creates an object (deletes and saves) and the object ID changes because of it.
Upgrade-path
The topics raised by an audit team are intended to help you achieve an automated upgrade path, meaning that if you follow all of the suggestions they made, running your upgrade should be clean and error-free. Otherwise, your debugging day has just arrived.

Ruby Viewpoint with EWS

I am trying to get started using Viewpoint against EWS within Ruby, and it's not making a lot of sense at the moment. I am wondering where I can find some good example code, or get some pointers? I am using 1.0.0-beta.
For example: I know the name of the calendar folder I want to use, so I could search for it, but how do I access methods in that folder once I find it? What are the appropriate parameters, etc.?
Any advice?
If you haven't read it yet I would recommend the README file in the repository. It has a couple of examples that should put you on the right path. Also, the generated API documentation should give you enough to work with.
http://rubydoc.info/github/WinRb/Viewpoint/frames
At a very basic level you can get all of your calendar events with the following code:
calendar = client.get_folder :calendar
events = calendar.items
I hope that gives you a little more to get started with.
Follow-up:
Again, I would point you to the API docs for concrete methods like #items. There are, however, dynamically added methods depending on the type, which you can list with obj.ews_methods. In the case of CalendarItem one of those methods is #name, so you can call obj.name to get the folder name. The dynamic methods are all backed by a formatted Hash based on the returned SOAP packet. You can see it in its raw format by calling obj.ews_item.
Cheers,
Dan

Distinguish use cases in NSAutosaveElsewhereOperation

I am trying to add autosave support to the Core Data file wrapper example.
Now if I have a new/untitled document, writeSafelyToURL is called with the NSAutosaveElsewhereOperation type.
The bad thing is, I get this type in both typical use cases:
- new file: which stores a completely new document by creating the file wrapper and the persistent store file
- save diff: where the file wrapper already exists and only an update is required
Has somebody else already handled this topic, or has somebody already migrated this?
The original sample uses the originalStoreURL to distinguish those two use cases; which solution worked best for you?
Thanks
