I have a GemFire cache (v8.2.1) from which I want to access data using a third-party tool that can only connect through a JDBC driver. Does anyone know how I can connect to the GemFire cache and read data over JDBC? I don't need to write to the cache, just read from it.
I came across GemFire XD on the internet, but I can see it is marked as "End of Availability".
Is there any other way to retrieve persisted objects or fire OQL queries while mimicking a JDBC driver, so that a tool that accepts only JDBC drivers can be used?
Please help.
Thanks
Apache Calcite has a Geode adapter that enables you to read data from GemFire over JDBC. There is also a video explaining this.
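As a minimal sketch of what that looks like: the Geode adapter is configured through a Calcite model file and then queried through Calcite's own JDBC driver. The locator host/port, region name, and file path below are placeholder assumptions, and note the adapter is developed against Apache Geode (GemFire's open-source counterpart), so compatibility with commercial GemFire 8.2.1 should be verified first.

```json
{
  "version": "1.0",
  "defaultSchema": "geode",
  "schemas": [{
    "name": "geode",
    "type": "custom",
    "factory": "org.apache.calcite.adapter.geode.rel.GeodeSchemaFactory",
    "operand": {
      "locatorHost": "localhost",
      "locatorPort": "10334",
      "regions": "Customers",
      "pdxSerializablePackagePath": ".*"
    }
  }]
}
```

Any JDBC-capable tool (or plain Java code) can then connect with the Calcite driver, pointing the URL at that model file:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class GeodeJdbcExample {
    public static void main(String[] args) throws Exception {
        // Calcite's JDBC driver reads the model file and exposes the
        // configured Geode regions as SQL tables.
        Class.forName("org.apache.calcite.jdbc.Driver");
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:calcite:model=/path/to/geode-model.json");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT * FROM \"Customers\"")) {
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        }
    }
}
```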
Related
Can Apache Camel output a JDBC interface instead of Java object maps?
I need to read from CouchDB AND output standard JDBC query results.
I thought I could use Camel as the connector.
It can read CouchDB but apparently cannot output standard JDBC objects to a client app.
Is there any way Camel can output JDBC results?
I have a tool that needs a JDBC connection and a standard JDBC interface.
Do you mean you want the data from the database as a javax.sql.ResultSet, or what kind of interface are you thinking about? The camel-couchdb component uses a Couch Java client, not a JDBC driver, so you cannot do this with Camel.
But you can look at using a JDBC driver for CouchDB and not use Camel at all.
I have a Spring Boot application that uses MongoDB. My plan is to store data in a distributed caching system before it gets inserted into Mongo. If the database goes down, the cache will queue the data and send it to the DB once it is back up. So the plan is to put the caching layer between the application and Mongo.
Can you suggest some ideas on how to implement this using Apache Ignite?
Take a look at write-behind cache store mode. It retries writing to the underlying database if an insert fails. A minimal configuration sketch is below. Let me know how it works for you.
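A minimal sketch of a write-behind configuration, assuming String keys/values; MongoCacheStore is a hypothetical store class (one possible implementation is sketched after the next answer), and the flush sizes/frequencies are illustrative:

```java
import javax.cache.configuration.FactoryBuilder;
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.CacheConfiguration;

public class WriteBehindConfigExample {
    public static void main(String[] args) {
        CacheConfiguration<String, String> ccfg = new CacheConfiguration<>("docs");

        // Route cache updates through a CacheStore backed by MongoDB
        // (MongoCacheStore is a placeholder; see the sketch below).
        ccfg.setCacheStoreFactory(FactoryBuilder.factoryOf(MongoCacheStore.class));
        ccfg.setWriteThrough(true);

        // Write-behind: buffer updates in Ignite and flush them to the
        // store asynchronously, in batches.
        ccfg.setWriteBehindEnabled(true);
        ccfg.setWriteBehindFlushSize(1024);       // flush once 1024 entries are buffered...
        ccfg.setWriteBehindFlushFrequency(5_000); // ...or every 5 seconds
        ccfg.setWriteBehindBatchSize(256);        // entries per store call

        try (Ignite ignite = Ignition.start()) {
            ignite.getOrCreateCache(ccfg).put("42", "{\"name\":\"test\"}");
        }
    }
}
```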
You can also implement a custom CacheStore for an Ignite cache that will do the caching, and enable write-through for it. If the connection is lost, you'll be able to collect entries in a buffer while retrying to re-establish the connection.
See more: https://apacheignite.readme.io/docs/3rd-party-store
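A rough sketch of such a store using the MongoDB sync driver; the connection string, database/collection names, and the error-handling policy are assumptions, not a definitive implementation:

```java
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.Filters;
import com.mongodb.client.model.ReplaceOptions;
import javax.cache.Cache;
import javax.cache.integration.CacheWriterException;
import org.apache.ignite.cache.store.CacheStoreAdapter;
import org.bson.Document;

public class MongoCacheStore extends CacheStoreAdapter<String, String> {
    private final MongoClient client = MongoClients.create("mongodb://localhost:27017");
    private final MongoCollection<Document> coll =
        client.getDatabase("appdb").getCollection("docs");

    @Override public String load(String key) {
        Document doc = coll.find(Filters.eq("_id", key)).first();
        return doc == null ? null : doc.toJson();
    }

    @Override public void write(Cache.Entry<? extends String, ? extends String> entry) {
        try {
            Document doc = Document.parse(entry.getValue()).append("_id", entry.getKey());
            coll.replaceOne(Filters.eq("_id", entry.getKey()), doc,
                            new ReplaceOptions().upsert(true));
        } catch (RuntimeException e) {
            // Propagating the failure keeps the entry in the write-behind
            // buffer so that Ignite retries the write later.
            throw new CacheWriterException(e);
        }
    }

    @Override public void delete(Object key) {
        coll.deleteOne(Filters.eq("_id", key));
    }
}
```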
I know how to write a Kafka consumer and insert/update each record into an Oracle database, but I want to leverage the Kafka Connect API and the JDBC Sink Connector for this purpose. Apart from property files, in my search I couldn't find a complete executable example with detailed steps to configure the connector and write any relevant Java code to consume a Kafka topic with JSON messages and insert/update (merge) into an Oracle table using the Kafka Connect API with the JDBC Sink Connector. Can someone demonstrate an example, including configuration and dependencies? Are there any disadvantages with this approach? Do we anticipate any potential issues when table data increases to millions of rows?
Thanks in advance.
There won't be an example for your specific use case because the JDBC connector is meant to be generic.
Here is one configuration example with an Oracle database
All you need is:
- A topic of some format
- key.converter and value.converter set to deserialize that topic
- Your JDBC string and database schema (tables, projection fields, etc.)
- Any other JDBC Sink specific options
All this goes in a Java properties / JSON file, not Java source code
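As a sketch, a sink configuration submitted to the Kafka Connect REST API might look like the following. The connector name, topic, credentials, and connection URL are placeholders, and it assumes Confluent's JDBC Sink Connector with JSON records that embed a schema:

```json
{
  "name": "oracle-jdbc-sink",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
    "topics": "orders",
    "connection.url": "jdbc:oracle:thin:@//dbhost:1521/ORCLPDB1",
    "connection.user": "app_user",
    "connection.password": "app_password",
    "key.converter": "org.apache.kafka.connect.storage.StringConverter",
    "value.converter": "org.apache.kafka.connect.json.JsonConverter",
    "value.converter.schemas.enable": "true",
    "insert.mode": "upsert",
    "pk.mode": "record_key",
    "pk.fields": "id",
    "auto.create": "true"
  }
}
```

Here, insert.mode=upsert is what gives the insert/update (merge) behavior you asked about, and it requires a primary key derived from the record (pk.mode / pk.fields). Note that the JDBC sink needs records with a declared schema; schemaless JSON cannot be mapped to table columns without extra work.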
If you have a specific issue creating this configuration, please comment.
Do we anticipate any potential issues when table data increases to millions?
Well, those issues would be database server related, not with Kafka Connect. For example, disk filling up or increased load while accepting continuous writes.
Are there any disadvantages with this approach?
You'd have to handle de-duplication or record expiration (e.g. GDPR) separately, if you want that.
Airpal currently uses the Presto client to connect to PrestoDB. However, as I understand it, it can also use JDBC for this connectivity. Is there any code available for this purpose? Even if it is for connecting to some other database, it might be helpful for me. The model for the Presto client looks a lot different from other models like JDBC.
Airpal uses Presto client connectivity and also uses these objects (mostly for schema and data, such as Column, QueryResults, etc.) internally in its various modules.
One way to provide JDBC connectivity is to move its lowest layer of DB connectivity (the executeWith invocations of com.airbnb.airpal.core.execution.QueryClient: there is 1 for data and about 6 for metadata) to JDBC query execution. The JDBC results (mostly data and schema) can then be converted to Presto client API equivalent objects, and the rest of the logic in Airpal would follow.
Another approach is to rewrite Airpal with native JDBC support by moving over to JDBC objects for internal use and communication as well. That looks like a much bigger change.
I am planning to add support for dynamically choosing between Presto client and JDBC connectivity. I will use com.airbnb.airpal.presto.QueryRunner to hold either a Presto client session or a JDBC connection accordingly.
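For reference, the JDBC side of the first approach would boil down to something like the sketch below: run the query through the Presto JDBC driver and pick apart the pieces that need converting (ResultSetMetaData for Column objects, row data for QueryResults). The host, catalog/schema, user, and query are placeholders:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.Statement;

public class PrestoJdbcExample {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:presto://localhost:8080/hive/default", "airpal", null);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT * FROM orders LIMIT 10")) {

            // Column names/types here would map onto Airpal's Column objects.
            ResultSetMetaData md = rs.getMetaData();
            for (int i = 1; i <= md.getColumnCount(); i++) {
                System.out.println(md.getColumnName(i) + " : " + md.getColumnTypeName(i));
            }

            // Row data here would map onto the QueryResults data lists.
            while (rs.next()) {
                System.out.println(rs.getObject(1));
            }
        }
    }
}
```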
Is there a way to force encryption of network traffic (that is, result set data) using the Oracle Thin client and JDBC?
I understand that this can be done by setting up a java.util.Properties object and passing it to DriverManager.getConnection(String, Properties), but is there a way to specify this in the JDBC URL?
I'm using a third-party tool written in Java, which handles creating its own connections, so creating and passing the Properties object won't work for me.
Thanks.
Have a look at the Oracle JDBC documentation. There is a chapter about Client-Side Security Features that talks about using system properties to configure the Thin driver for SSL.
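Since the tool creates its own connections, the idea would be to set those properties at the JVM level before the first connection is opened. A sketch, assuming the oracle.net.* Advanced Security properties are honored as system properties by your driver version (check that chapter; otherwise they must go into the connection Properties). ThirdPartyTool is a placeholder for the tool's actual main class:

```java
public class ForceOracleEncryption {
    public static void main(String[] args) throws Exception {
        // Ask the Thin driver to require network encryption and integrity
        // checking for every connection created in this JVM.
        System.setProperty("oracle.net.encryption_client", "REQUIRED");
        System.setProperty("oracle.net.encryption_types_client", "( AES256 )");
        System.setProperty("oracle.net.crypto_checksum_client", "REQUIRED");
        System.setProperty("oracle.net.crypto_checksum_types_client", "( SHA1 )");

        // Then hand control to the third-party tool's own entry point
        // (ThirdPartyTool is hypothetical; substitute the real main class).
        ThirdPartyTool.main(args);
    }
}
```

The same properties can also be passed as -D flags on the java command line that launches the tool, which avoids the wrapper class entirely.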