Is there a way to search for a resource by its referencing resources? For example, is there a way to find all Observations of code = X with Provenance by agent Y?
GET [base]/Observation?code=X&???
One could:
GET [base]/Provenance?userid=Y&_include=Provenance:target:Observation
but that prevents any kind of filtering on Observation (which may create a volume problem in the response!). Also, I don't need the provenance resource - I just need to make sure that the Observations I'm using have a certain provenance.
Right now, to the best of my knowledge, there's no way to apply filters to multiple resources unless you're using _filter or a custom OperationDefinition.
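That said, if your server happens to support reverse chaining (the _has search parameter), something along these lines might cover this particular case (Y standing in for the agent reference):
GET [base]/Observation?code=X&_has:Provenance:target:agent=Y
Support for _has, especially chained through Provenance, varies between servers, so treat this as something to test against your server rather than a guaranteed solution.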
I am creating a web application in Go.
I have modified my working code so that it can read and write files on both a local filesystem and a bucket of Google Cloud Storage based on a flag.
Basically I included a small package in the middle, implementing my-own-pkg.ReadFile, my-own-pkg.WriteFile, and so on...
I have replaced all calls in my code where I read or save files from the local filesystem with calls to my methods.
Finally, these methods include a simple switch case that runs either the standard code to read/write locally or the code to read/write from/to a GCP bucket.
My current problem
In some parts I need to perform a ReadDir to get the list of DirEntries and then cycle through them. I do not want to change my code except for replacing os.ReadDir with my-own-pkg.ReadDir.
So far I understand that there is no native function for this in the GCP module. So I suppose (but here I need your help, because I am just guessing) that I would need an implementation of fs.FS for GCP. Since fs.FS is a new feature of Go 1.16, I guess it's too early to find one.
So I am trying to create a simple my-own-pkg.ReadDir(folderpath) function that does the following:
case "local": { just call os.ReadDir(folderpath) }
case "gcp": {
use the GCP code sample to list the objects in my bucket with Query.Prefix = folderpath and
Query.Delimiter = "/",
then create a slice of my-own-pkg.DirEntry (because fs.DirEntry is just an interface and so it needs to be implemented... :-( ) and return them
}
To do that I would also need to implement the fs.DirEntry interface (which in turn requires implementing fs.FileInfo, and maybe something else...)
Question 1) Is this the right path to follow to solve my issue, or is there a better way?
Question 2) (only) if so: does the GCP method that lists objects with a prefix and a delimiter return just files? I can't see a method that also returns the list of prefixes found.
(If I have prefix/file1.txt and prefix/a/file2.txt I would like to get both "file1.txt" and "a" as files and prefixes...)
I hope I was clear enough... This time I can't include code because it's incomplete, but in case it helps I can paste what I have.
NOTE: by the way, Go 1.16 allowed me to solve a similar issue elegantly when dealing with assets either embedded or on the filesystem, thanks to the existing implementation of fs.FS and the related ReadDirFS. So it would be good if I could follow the same route 🙂
By the way, I will keep studying and experimenting, so in case I am successful I will contribute back as well :-)
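In case it helps, this is roughly the wrapper type I imagine (just a sketch: the gcsEntry name is made up, myownpkg stands in for my package, and it assumes the cloud.google.com/go/storage package):

package myownpkg

import (
	"io/fs"
	"time"

	"cloud.google.com/go/storage"
)

// gcsEntry implements fs.DirEntry (and doubles as its own fs.FileInfo)
// on top of the metadata returned when listing a bucket.
type gcsEntry struct {
	name  string
	isDir bool
	attrs *storage.ObjectAttrs // nil for synthetic "directory" prefixes
}

// fs.DirEntry
func (e gcsEntry) Name() string { return e.name }
func (e gcsEntry) IsDir() bool  { return e.isDir }
func (e gcsEntry) Type() fs.FileMode {
	if e.isDir {
		return fs.ModeDir
	}
	return 0
}
func (e gcsEntry) Info() (fs.FileInfo, error) { return e, nil }

// fs.FileInfo
func (e gcsEntry) Size() int64 {
	if e.attrs != nil {
		return e.attrs.Size
	}
	return 0
}
func (e gcsEntry) Mode() fs.FileMode { return e.Type() }
func (e gcsEntry) ModTime() time.Time {
	if e.attrs != nil {
		return e.attrs.Updated
	}
	return time.Time{}
}
func (e gcsEntry) Sys() interface{} { return nil }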
I think your abstraction layer is good, but you need to know something about Cloud Storage: directories don't exist.
In fact, all the objects are stored at the root of the bucket /, and the fully qualified name of an object is /path/to/object.file. You can filter on a prefix; that returns all the objects (i.e. files, since directories don't exist) whose names share that prefix.
It's not a full answer to your question, but I'm sure you can rethink and redesign the rest of your code with this particularity in mind.
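To make that concrete, here is a minimal sketch of a listing function using a prefix and a delimiter (assuming the cloud.google.com/go/storage package; bucket and folder names are placeholders). Note that when Query.Delimiter is set, the iterator also yields the common prefixes as entries whose Prefix field is set, which should also answer your question 2:

package myownpkg

import (
	"context"

	"cloud.google.com/go/storage"
	"google.golang.org/api/iterator"
)

// ListDir returns the object names ("files") and common prefixes
// ("directories") directly under folder, e.g. folder = "prefix/".
func ListDir(ctx context.Context, client *storage.Client, bucket, folder string) (files, dirs []string, err error) {
	it := client.Bucket(bucket).Objects(ctx, &storage.Query{
		Prefix:    folder,
		Delimiter: "/", // collapses deeper objects into synthetic prefixes
	})
	for {
		attrs, err := it.Next()
		if err == iterator.Done {
			break
		}
		if err != nil {
			return nil, nil, err
		}
		if attrs.Prefix != "" { // a synthetic "directory" entry
			dirs = append(dirs, attrs.Prefix)
		} else {
			files = append(files, attrs.Name)
		}
	}
	return files, dirs, nil
}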
I am just starting out with Kubernetes and the Operator SDK, I am trying to build my first operator, and I have what is probably a simple question.
Question
How do I detect a configuration change in the custom resource YAML in the reconcile loop, and take an action according to the change?
I have some config properties specified in my CR's spec:
apiVersion: my.example.com/v1alpha1
kind: StoreApp
metadata:
  name: mystoreapp
spec:
  username: technicalUser
  password: abcd1234
  catalogs:
    - name: Bikes
      description: Bikes_description
    - name: Cars
      description: Cars_description
When I add a new custom resource of this kind, I want my controller to create a new pod with my app image running inside (in a webserver). When my app is up and running for the first time, I want to configure it (add the catalogs from the spec) via an HTTP request from the operator.
So far so good, but I also want to change these catalogs while my app is up and running.
For example, I want to add a new catalog to the spec (through kubectl patch). My operator's reconcile method will be called, but how can I tell that the spec has changed? I am not sure it's a good idea to make HTTP calls to my app to fetch all its catalogs and compare them with the catalogs from the spec. Is this the correct way to detect a change?
I am thinking about two other ways to find out that something was updated, but I am not sure whether they will work properly or whether they are the best way to do this.
The first idea is to request the instance of StoreApp with client.Get(...), but as far as I understand this will call the API server and will return the updated version of mystoreapp. I read about some local index which acts as a cache for these objects, and I could check whether there is a difference between the cached object and the object returned from the API server. But I did not find how to get the object from this local index, so I was not able to compare the two objects.
The second idea is to keep a map storing the hash of the whole spec object, and to compare that hash every time with the hash of the object returned by client.Get(...). I think this will work, but there should be a better way to do it.
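Something like this is what I have in mind for that second idea (just a sketch, hashing the marshalled spec):

import (
	"crypto/sha256"
	"encoding/hex"
	"encoding/json"
)

// specHash fingerprints the whole spec so that two versions of the
// resource can be compared cheaply.
func specHash(spec interface{}) (string, error) {
	b, err := json.Marshal(spec)
	if err != nil {
		return "", err
	}
	sum := sha256.Sum256(b)
	return hex.EncodeToString(sum[:]), nil
}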
I read about some Java operators for K8s that had methods like onAdd, onUpdate, and onDelete. I couldn't find anything similar in the Operator SDK. Is there something like this there?
Every answer will be helpful. Thank you in advance!
Best Regards,
Hristiyan
The recommended practice is to look at the spec you received, and compare it to the state of the world/cluster, so retrieving the catalogs and comparing them to the spec is indeed the proper way to do it.
The reasoning for this recommendation is that the order of the events you get from Kubernetes is not guaranteed to be consistent. It's also not guaranteed that you'll receive every event in a reasonable amount of time, or that you'll receive each event only once. So it's best to base your decision-making on what was requested as compared to what is, rather than on which specific event triggered the reconciliation.
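As a rough sketch of what that looks like in code (assuming controller-runtime; the StoreApp types and the fetchLiveCatalogs/createCatalog HTTP helpers are hypothetical placeholders, not Operator SDK API):

import (
	"context"

	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

func (r *StoreAppReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	var app myv1alpha1.StoreApp
	if err := r.Get(ctx, req.NamespacedName, &app); err != nil {
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}

	// Observe the real state of the world: ask the running app which
	// catalogs it currently has (hypothetical HTTP helper).
	live, err := fetchLiveCatalogs(ctx, &app)
	if err != nil {
		return ctrl.Result{}, err
	}
	liveSet := make(map[string]bool, len(live))
	for _, c := range live {
		liveSet[c.Name] = true
	}

	// Drive the world toward the spec, regardless of which event
	// actually triggered this reconciliation.
	for _, want := range app.Spec.Catalogs {
		if !liveSet[want.Name] {
			if err := createCatalog(ctx, &app, want); err != nil { // hypothetical
				return ctrl.Result{}, err
			}
		}
	}
	return ctrl.Result{}, nil
}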
Trying to save an h2o model under some specific name that differs from the model's model_id field, but something like...
h2o.save_model(model=model,
path='/some/path/then/filename',
force=False)
just creates a dir/file structure like
some
|__path
   |__then
      |__filename
         |__<model_id>
as opposed to
some
|__path
   |__then
      |__filename
Is this possible to do from the save_model method?
I hesitate to simply change the model_id before calling the save method, because the model names have timestamps appended to them to avoid name collisions with other models that may be on the h2o cluster. (I am trying to remove these timestamps when saving to disk, and simplifying the name on the cluster before saving creates a window where a naming collision can occur if other processes are also attempting to save such a model with, say, a different timestamp.)
Any way to get this behavior or other common alternatives / workarounds?
This is currently not possible; however, I created a feature request here. There is a related question here which shows a solution for R (it could be adapted to Python). The workaround is just to rename the file manually using a few lines of R/Python code.
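A minimal version of that workaround in Python might look like this (the paths are illustrative; h2o.save_model returns the full path of the file it wrote):

import os
import h2o

# Save under the default name (the model_id), then rename on disk.
saved_path = h2o.save_model(model=model, path='/some/path/then', force=False)
os.rename(saved_path, '/some/path/then/filename')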
I'm using Prometheus to do some monitoring but I can't seem to find a way to delete labels I no longer want. I tried using the DELETE /api/v1/series endpoint but it doesn't remove it from the dropdown list on the main Prometheus Graph page. Is there a way to remove them from the dropdown without restarting from scratch?
Thanks
This happens to me too; try including the metric name when querying for label values, like this:
label_values(node_load1, instance)
ref: http://docs.grafana.org/features/datasources/prometheus/
If you delete every relevant time series then it should no longer be returned. If this is not the case, please file a bug.
Prometheus doesn't provide the ability to delete particular labels, because this could result in duplicate time series with identical label sets. For example, suppose Prometheus contains the following time series:
http_requests_total{instance="host1",job="foobar"}
http_requests_total{instance="host2",job="foobar"}
If the instance label is removed, then these two time series become identical:
http_requests_total{job="foobar"}
http_requests_total{job="foobar"}
Now neither Prometheus nor the user can differentiate between these two time series.
Prometheus only provides an API for deleting whole time series matching a given series selector - see these docs for details.
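For example, on Prometheus 2.x with the admin API enabled (--web.enable-admin-api), deleting all time series for one instance would look something like this (host and selector are illustrative):
curl -X POST 'http://localhost:9090/api/v1/admin/tsdb/delete_series?match[]=http_requests_total{instance="host1"}'
The data is only actually removed from disk after a subsequent clean_tombstones call or the next compaction.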
Given the SPList.ID and a site collection (or an SPWeb with subwebs), how do I quickly find the document library with the given ID?
I can recursively enumerate through all webs and perform a web.Lists[guid] on each one of them, but there might be thousands of subwebs in my case, and I'm looking for a realtime solution.
If there is no way to do this quickly, any other suggestions on how to uniquely identify a document library? I could store the full path (url), but the identification will be publicly visible and I don't feel very comfortable giving away our exact SharePoint document structure like that. Should I resort to maintaining a manual ID <-> library mapping in a separate list?
I vote for the manual ID -> URL pair matching in a top-level, well-known list that's visible only to the elevated privileges account.
Since you are storing the ListId somewhere, you may as well store the WebId too. Lists are always opened through their context SPWeb, so if you go to:
http://toplevel/_layouts/ListGeneralSettings.aspx?ID={GUID1} // OK
http://toplevel/sub1/_layouts/ListGeneralSettings.aspx?ID={GUID1} // Won't work (same GUID)
Having the WebId and ListId you can simply:
using (SPSite site = new SPSite("http://url"))
using (SPWeb subweb = site.OpenWeb(new Guid("{000...}"))) // the stored WebId
{
    SPList list = subweb.Lists.GetList(new Guid("{111...}"), true); // the stored ListId
    // list logic
}
MS does not support this :)...
But take a look at this for giggles: http://weblogs.sqlteam.com/jhermiz/archive/2007/08/15/60288.aspx
If you have MOSS Search available, then it might help, depending on the lag between these lists getting created and your needing to search for them. You could probably map the list ID as a managed property and do a quick search for list objects with the ID in question.
For lots of classes of problems it seems like search is the fastest way to rip through huge sets of data. In fact if this approach worked for you, you really wouldn't even need to know the site collection up front. Don't have access to any of my MOSS environments at the moment, so can't verify this will work though.