How to rewrite the PostgresCatalog as an OracleCatalog?
Currently Flink SQL only supports Postgres as a catalog for metadata. If I want to obtain Oracle metadata, how can I rewrite the PostgresCatalog?
There is an open ticket for adding an OracleCatalog to Flink, see https://issues.apache.org/jira/browse/FLINK-17508
If you want to implement this yourself, I would recommend having a look at https://issues.apache.org/jira/browse/FLINK-16471 and https://github.com/apache/flink/pull/11336, which are the original ticket and the pull request that introduced the PostgresCatalog to Flink. You could use those to determine what the implementation of an OracleCatalog would look like.
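If you do end up implementing it yourself, the bulk of the work is replacing the pg_catalog queries that PostgresCatalog issues with queries against Oracle's data dictionary views, and then mapping the Oracle column types to Flink's DataTypes. Below is a rough, hypothetical sketch of the dictionary queries involved, written as plain JDBC so it stays independent of the exact Flink catalog base-class API (which changes between versions); the class and method names here are illustrative only.

// Hypothetical sketch of the metadata lookups an OracleCatalog would need.
// PostgresCatalog reads pg_catalog; the equivalent information in Oracle
// lives in the data dictionary views (ALL_TABLES, ALL_TAB_COLUMNS).
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.util.ArrayList;
import java.util.List;

public class OracleMetadataQueries {

    private final String jdbcUrl;   // e.g. "jdbc:oracle:thin:@//host:1521/service" (Oracle JDBC driver on the classpath)
    private final String username;
    private final String password;

    public OracleMetadataQueries(String jdbcUrl, String username, String password) {
        this.jdbcUrl = jdbcUrl;
        this.username = username;
        this.password = password;
    }

    /** Rough equivalent of PostgresCatalog's table listing: all tables owned by a schema. */
    public List<String> listTables(String schema) throws Exception {
        List<String> tables = new ArrayList<>();
        try (Connection conn = DriverManager.getConnection(jdbcUrl, username, password);
             PreparedStatement ps = conn.prepareStatement(
                     "SELECT table_name FROM all_tables WHERE owner = ?")) {
            ps.setString(1, schema.toUpperCase());
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    tables.add(rs.getString("table_name"));
                }
            }
        }
        return tables;
    }

    /** Rough equivalent of the table lookup: column names and native Oracle types. */
    public List<String> describeTable(String schema, String table) throws Exception {
        List<String> columns = new ArrayList<>();
        try (Connection conn = DriverManager.getConnection(jdbcUrl, username, password);
             PreparedStatement ps = conn.prepareStatement(
                     "SELECT column_name, data_type FROM all_tab_columns "
                             + "WHERE owner = ? AND table_name = ? ORDER BY column_id")) {
            ps.setString(1, schema.toUpperCase());
            ps.setString(2, table.toUpperCase());
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    columns.add(rs.getString("column_name") + " " + rs.getString("data_type"));
                }
            }
        }
        return columns;
    }
}

The second piece, which you can see in the PostgresCatalog pull request, is translating the native types (the DATA_TYPE strings above) into Flink DataTypes; that mapping is database-specific and would have to be written for Oracle's type system.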
I am just getting started with Kubernetes and the Operator SDK, I am trying to build my first operator, and I probably have a simple question.
Question
How do I detect a configuration change in the custom resource YAML in the reconcile loop and take an action according to the change?
I have some config properties specified in my CR spec:
apiVersion: my.example.com/v1alpha1
kind: StoreApp
metadata:
  name: mystoreapp
spec:
  username: technicalUser
  password: abcd1234
  catalogs:
    - name: Bikes
      description: Bikes_description
    - name: Cars
      description: Cars_description
When I add a new custom resource of this kind, I want my controller to create a new pod with my app image running inside (in a webserver). When my app is up and running for the first time, I want to configure it (to add the catalogs from the spec) via an HTTP request from the operator.
So far it's OK, but I also want to be able to change these catalogs while my app is up and running.
For example, I want to add a new catalog to the spec (through kubectl patch). My operator's reconcile method will be called, but how can I tell that the spec has changed? I am not sure it's a good idea to make HTTP calls to my app to get all catalogs and compare them with the catalogs from the spec. Is this the correct way to detect a change?
I am thinking about two other ways to detect that something has been updated, but I am not sure whether they will work properly or whether they are the best way to do this.
The first idea is to request the instance of StoreApp with client.Get(...), but as far as I understand this will call the API server and return the updated version of mystoreapp. I read about a local index which acts as a cache for these objects, and I could check whether there is a difference between the cached object and the object returned from the API server, but I did not find how to get the object from this local index, so I was not able to compare the two objects.
The second idea is to create a map in which I store the hash of the whole spec object and to compare that hash every time with the hash of the object obtained with client.Get(...). I think this will work, but there should be a better way to do this.
I read through some Java operators for K8s and they had methods like onAdd, onUpdate, and onDelete. I couldn't find anything similar in the Operator SDK. Is there anything like this in the Operator SDK?
Every answer will be helpful. Thank you in advance!
Best Regards,
Hristiyan
The recommended practice is to look at the spec you received, and compare it to the state of the world/cluster, so retrieving the catalogs and comparing them to the spec is indeed the proper way to do it.
The reasoning for this recommendation is that the order of the events you get from Kubernetes is not guaranteed to be consistent, and it's also not guaranteed that you'll receive every event in a reasonable amount of time, or that you'll only receive each event once, so it's best to base your decision-making on what was requested as compared to what is, rather than on which specific event triggered the reconciliation.
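To make "compare the spec to the state of the world" concrete, here is a hypothetical sketch in Java of the catalog part of such a reconcile. The StoreAppClient interface and its getCurrentCatalogs/addCatalog/removeCatalog methods stand in for whatever HTTP API your app exposes; none of this is Operator SDK API.

import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Hypothetical reconciliation step: diff the catalogs declared in the CR spec
// against the catalogs the running app actually reports, then converge.
public class CatalogReconciler {

    public void reconcileCatalogs(List<String> desiredCatalogs, StoreAppClient app) {
        // "State of the world": ask the running app what it currently has.
        Set<String> actual = new HashSet<>(app.getCurrentCatalogs());
        Set<String> desired = new HashSet<>(desiredCatalogs);

        // Anything in the spec but not in the app must be created.
        for (String name : desired) {
            if (!actual.contains(name)) {
                app.addCatalog(name);
            }
        }

        // Anything in the app but not in the spec must be removed.
        for (String name : actual) {
            if (!desired.contains(name)) {
                app.removeCatalog(name);
            }
        }
    }

    /** Stand-in for the HTTP client that talks to the StoreApp webserver. */
    public interface StoreAppClient {
        List<String> getCurrentCatalogs();
        void addCatalog(String name);
        void removeCatalog(String name);
    }
}

Because the desired/actual diff is recomputed from scratch on every reconcile, it does not matter which event triggered the call, how many events were coalesced into one reconcile, or whether an event was delivered more than once.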
Is there any built-in API which provides a pagination feature in Pivotal GemFire, such as in the QueryService API or using Functions? We are currently using Pivotal GemFire 9.3.0 running in PCC.
TIA.
Not yet. It is a planned feature for the SD Lovelace/Moore release trains. See SGF-524 - Add support for PagingAndSortingRepositories. NOTE: sorting is already supported; paging is a WIP.
There is not. However, there is a common pattern for this. Instead of directly querying the matching objects, write your query to return the keys of the matching objects. The client can then implement paging (e.g. page 1 is keys 0-99, page 2 is keys 100-199, etc.). Use the "getAll" method with a list of keys to pull back one page at a time, as sketched below.
BTW, you can query the keys like this: "select key from /person.entries where value.ssn='222-22-2222'"
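As a rough illustration of that pattern in Java, assuming a client cache, a region named "person" keyed by String, and a locator on localhost:10334 (all illustrative assumptions, not details from the question):

// Sketch: key-based paging over a GemFire OQL query.
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

import org.apache.geode.cache.Region;
import org.apache.geode.cache.client.ClientCache;
import org.apache.geode.cache.client.ClientCacheFactory;
import org.apache.geode.cache.client.ClientRegionShortcut;
import org.apache.geode.cache.query.QueryService;
import org.apache.geode.cache.query.SelectResults;

public class KeyPagingExample {

    private static final int PAGE_SIZE = 100;

    public static void main(String[] args) throws Exception {
        ClientCache cache = new ClientCacheFactory()
                .addPoolLocator("localhost", 10334)   // assumption: locator host/port
                .create();

        Region<String, Object> person = cache
                .<String, Object>createClientRegionFactory(ClientRegionShortcut.PROXY)
                .create("person");

        // Step 1: query only the keys of the matching entries.
        QueryService queryService = cache.getQueryService();
        SelectResults<String> results = (SelectResults<String>) queryService
                .newQuery("select key from /person.entries where value.ssn = '222-22-2222'")
                .execute();
        List<String> matchingKeys = new ArrayList<>(results);

        // Step 2: pull back one page of values at a time with getAll.
        for (int from = 0; from < matchingKeys.size(); from += PAGE_SIZE) {
            int to = Math.min(from + PAGE_SIZE, matchingKeys.size());
            Map<String, Object> page = person.getAll(matchingKeys.subList(from, to));
            // render / process this page...
        }

        cache.close();
    }
}

Keeping only the keys on the client keeps the memory footprint small; the full objects for a page are fetched only when that page is actually requested.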
Is there a way to search for a resource by its referencing resources? For example, is there a way to find all Observations of code = X with Provenance by agent Y?
GET [base]/Observation?code=X&???
One could:
GET [base]/Provenance?userid=Y&_include=Provenance:target:Observation
but that prevents any kind of filtering on Observation (which may create a volume problem in the response!). Also, I don't need the provenance resource - I just need to make sure that the Observations I'm using have a certain provenance.
Right now, to the best of my knowledge, there's no way to apply filters to multiple resources unless you're using _filter or a custom OperationDefinition.
I am looking for a good example of how to lock a document in Elasticsearch. The link below explains how to do this in Elasticsearch.
http://www.elasticsearch.org/guide/en/elasticsearch/guide/current/concurrency-solutions.html
How can I achieve this using NEST? Can someone suggest an approach to start with?
To start with, read the NEST Quick Start article. In particular, note the following:
NEST is a high level elasticsearch client that still maps very closely to the original elasticsearch API. Requests and Responses have been mapped to CLR objects and NEST also comes with a powerful strongly typed query dsl.
Consider also that you may not require the added level of abstraction that NEST provides, and may prefer to use Elasticsearch.NET directly. Read its own Quick Start article as well.
With that in mind, implementing the steps in the Solving concurrency issues article using NEST / Elasticsearch.NET is simply a matter of identifying the .NET methods that correspond to the necessary Elasticsearch API methods. The first one for example:
PUT /fs/lock/global/_create
{}
In NEST, this would look something like:
// Attempt to create the global lock document; with OpType.Create this fails
// if a document with the same id already exists.
var createGlobalLockResponse = esClient.Index<object>(new object(), f => f
    .Index("fs")
    .Type("lock")
    .Id("global")
    .OpType(global::Elasticsearch.Net.OpType.Create));
Regarding the check of whether "this create request fails with a conflict exception", check the properties on the createGlobalLockResponse object: in particular, Created, and, if you intend to get a bit more specific in the handling, ServerError.
This question is related to the Kohana ORM and Caching modules. I use version 3.2, if it matters. I tried to research this, trust me, but I really couldn't find a good answer... so here it is:
What are the correct ways to use ORM::cached() and ORM::serialize() and ORM::$reload_on_wakeup?
I've seen many 2-line code examples but never anything really solid on the userguide/api...
What is the difference between enabling the Cache module and 'caching' => true in Kohana::init?
Does anyone have a recommended approach for the following specific situations? I have a catalogue page where, upon profiling, I found two very expensive actions:
I query the database each time for a currency model for each item, when the currency information can really be reused.
I query the database each time for each item's inventory record; this is an expensive query, which I wish I could cache until the inventory level changes.
References that I found but couldn't answer fully my questions:
http://forum.kohanaframework.org/discussion/1782/tip-for-caching-orm-objects/p1
http://forum.kohanaframework.org/discussion/10600/does-kohana-orm-and-cache-work-together/p1
Just found your question, maybe too late, but maybe this is useful to others:
cached() will force the query builder to cache the DB query. It uses the Kohana::cache method (file cache); I am trying to find a workaround for this.
'caching' => true in Kohana::init enables caching for the file search, as described in the Kohana/Core.php file: "Whether to use internal caching for [Kohana::find_file], does not apply to [Kohana::cache]. Set by [Kohana::init]".
My recommendation: enable 'caching' => true to speed up the file search, and enable the Cache module. I am working on a way to cache the DB queries using the cache instance used by the module; that would be better than using the file cache. Maybe I am missing something, but I am stuck there right now.