I have been working on connecting ClickHouse with MongoDB Atlas and found nothing in the documentation, so I dug into the code and found that in MongoDictionarySource.cpp (in the dbms folder) there is no configuration option for a URI.
On further investigation I learned that ClickHouse uses the Poco C++ project for database connectivity. In the Poco version submoduled into ClickHouse, Connection.cpp has no constructor with URI support, but when I look at https://github.com/pocoproject/poco/blob/develop/MongoDB/src/Connection.cpp , I see that Connection.cpp there does have a constructor that supports a URI.
Can this updated Poco version be submoduled into ClickHouse so that I can create a dictionary in ClickHouse backed by MongoDB Atlas?
OR
Is there any upcoming release where this is already provided?
This feature is further discussed over here: https://github.com/yandex/ClickHouse/pull/5384
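For context, the MongoDB dictionary source is currently configured with discrete host/port fields rather than a single connection string, which is why Atlas-style mongodb+srv:// URIs cannot be expressed. A sketch of the documented source configuration (all values are placeholders):

```xml
<source>
    <mongodb>
        <!-- Only host/port-style settings are available; there is no <uri> field,
             so an Atlas SRV connection string cannot be supplied here. -->
        <host>localhost</host>
        <port>27017</port>
        <user>test</user>
        <password>password</password>
        <db>test</db>
        <collection>dictionary_source</collection>
    </mongodb>
</source>
```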
Related
I'm working on a Quarkus project. I have to connect to an Elasticsearch cluster, and in production there is a MySQL database with data.
I'm thinking about using Hibernate Search, but I have some questions.
1. Which version of Hibernate Search does Quarkus use? It is not specified in the pom. Is it 6?
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-hibernate-search-orm-elasticsearch</artifactId>
</dependency>
2. Is it compatible with Elasticsearch 7.11.1?
3. In my project I will connect to the MySQL database just once to initialize all the indexes, and then the connection will be closed. Is this possible, or does Hibernate Search always need to be connected to the MySQL database?
4. To initialize the indexes with Hibernate Search, is it mandatory to use Hibernate annotations (for example @Entity and @Column) on the entities?
5. As I said, the connection to the MySQL database will be closed after the first indexing. Is there a way to add new records to the index if I get a list of objects from another system (for example, something like a batch)?
Thanks
It's Hibernate Search 6; in Quarkus 1.13, version 6.0.2.Final.
Yes, it should be. Our main testing is now against the latest open-source version of Elasticsearch, but we are still testing against 7.11.
Hibernate Search handles reads/writes and also hydrates your search data from the database, so you should keep the MySQL database around. If you are only doing read-only queries AND only using projections, going without the database might be possible, but I don't think it's a supported use case.
Yes.
You will have to implement it yourself; there's nothing built-in.
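To sketch what "implement it yourself" for question 5 might look like: a common approach is to split the incoming list of objects into fixed-size chunks and index each chunk in its own unit of work. The indexing call below is a stub standing in for whatever the real call would be (in Hibernate Search 6, something along the lines of the session's indexing plan); everything here is illustrative, not the library's prescribed API:

```java
import java.util.ArrayList;
import java.util.List;

public class BatchIndexer {
    static final int BATCH_SIZE = 3;

    // Stub: in a real application this would hand the chunk to the search
    // backend (e.g. via a Hibernate Search indexing plan inside a transaction).
    static void indexChunk(List<String> chunk) {
        System.out.println("indexing " + chunk.size() + " records");
    }

    // Split the incoming records into fixed-size chunks, index each one,
    // and return the number of chunks processed.
    static int indexInBatches(List<String> records) {
        int chunks = 0;
        for (int i = 0; i < records.size(); i += BATCH_SIZE) {
            List<String> chunk =
                records.subList(i, Math.min(i + BATCH_SIZE, records.size()));
            indexChunk(new ArrayList<>(chunk));
            chunks++;
        }
        return chunks;
    }

    public static void main(String[] args) {
        List<String> records = List.of("a", "b", "c", "d", "e", "f", "g");
        System.out.println("chunks=" + indexInBatches(records));
    }
}
```

Chunking keeps memory bounded and lets each batch commit independently, which is the usual shape of a manual reindexing job.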
In Apache NiFi, I am unable to configure the database schema. I'm trying to find a record in PostgreSQL using LookupRecord, but I can't specify the schema name because there is no such option. By default it looks in the public schema. Please help me with this.
You can specify the particular schema as part of the JDBC URL in newer versions of the Postgres driver, via the currentSchema connection parameter.
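A small sketch of what such a URL looks like; currentSchema is a PostgreSQL JDBC driver connection parameter, while the host, database, and schema names here are placeholders. In NiFi this string would go into the connection pool's Database Connection URL property:

```java
import java.net.URI;

public class SchemaUrlExample {
    // Extract the query part of a JDBC URL by stripping the "jdbc:" prefix
    // so java.net.URI can parse the remainder.
    static String queryOf(String jdbcUrl) {
        return URI.create(jdbcUrl.substring("jdbc:".length())).getQuery();
    }

    public static void main(String[] args) {
        // currentSchema tells the driver which schema to use instead of public;
        // db-host, mydb, and myschema are placeholders.
        String url = "jdbc:postgresql://db-host:5432/mydb?currentSchema=myschema";
        System.out.println("schema parameter: " + queryOf(url));
    }
}
```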
I'm looking into Flyway 6.3, and there is no support for Vertica 9.x.
There is still this issue open:
https://github.com/flyway/flyway/issues/1855
Looking at the documentation, I see I can place the Vertica JDBC driver into the drivers directory.
My questions are:
If I try to do it myself, will I run into any problems?
Are there still backend problems that need to be fixed?
Unfortunately Vertica 9 support was removed in 2017. Flyway won't load JDBC drivers for databases it doesn't support.
There is a Pull Request that re-introduces support. So you could try building Flyway from that fork.
I want to know if there's a way to configure Elasticsearch as the datasource for Ignite. I browsed the web but did not find a solution.
I want to implement this integration for a Java application.
If I understand your idea correctly, there's a way to do it. As far as I can see, Elasticsearch supports SQL table-like data access, and it's available through a JDBC connection. On Ignite's side there is 3rd-party persistence, which uses JDBC to connect to an underlying store. To be honest, I haven't tested it, but I suppose it should work.
I should also mention that you can use GridGain Web Console to generate a simple Ignite project from an existing JDBC connection. This functionality can be found under Configuration tab -> Create Cluster Configuration.
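To make the 3rd-party-persistence idea concrete, here is an untested configuration sketch. Ignite's JDBC-backed store is configured via CacheJdbcPojoStoreFactory; the esDataSource bean name is a placeholder for a DataSource wrapping Elasticsearch's JDBC driver, and whether that driver supports everything the POJO store needs is exactly the untested part mentioned above:

```xml
<bean class="org.apache.ignite.configuration.CacheConfiguration">
    <property name="name" value="esBackedCache"/>
    <!-- Route cache misses and updates through the underlying JDBC store. -->
    <property name="readThrough" value="true"/>
    <property name="writeThrough" value="true"/>
    <property name="cacheStoreFactory">
        <bean class="org.apache.ignite.cache.store.jdbc.CacheJdbcPojoStoreFactory">
            <!-- Placeholder: a DataSource bean configured with the
                 Elasticsearch JDBC driver would be registered under this name. -->
            <property name="dataSourceBean" value="esDataSource"/>
            <!-- The type mappings (JdbcType entries describing how table
                 columns map to key/value fields) are omitted here. -->
        </bean>
    </property>
</bean>
```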
We are implementing Mongo-based session persistence using SparkJava (version 2.6.0 with Jetty 9.4.x). We need to store Jetty sessions in MongoDB. How can I achieve this in SparkJava? I found many examples using MongoSessionIdManager and MongoSessionManager with jetty-nosql (9.3.x), but MongoSessionIdManager and MongoSessionManager no longer exist in jetty-nosql (9.4.x).
I think this topic was discussed at https://github.com/perwendel/spark/pull/836 but I am not able to find an example implementation.
Thanks a lot in advance !!
I got it working after customizing the SparkJava library and generating spark-core-2.6.1-SNAPSHOT.jar.
I added a working example of MongoDB- and JDBC-based session clustering using SparkJava to my GitHub repository.