I love the push queries in ksqlDB. https://developer.confluent.io/learn-kafka/ksqldb/push-queries-and-pull-queries/
They let a client get notified via HTTP/2 of a new query result whenever the result set (or the underlying data) changes. That is awesome.
How could we get Apache Pulsar, or Pulsar SQL, to serve push queries? Or is there a similar approach for pushing query results to a service endpoint (and then on to a client via HTTP/2 or WebSockets)?
I don't want to run queries when there is no data change, so polling is not an option.
Context:
We are moving from ES 5.x to ES 7.x.
Earlier we were using the Jest client; now we are planning to use the ES High-Level REST Client.
Our search queries are complex, and we are planning to use the SearchTemplate API.
We will store the template files locally and cache them to reduce I/O overhead.
What I have tried so far:
I've read the documentation of EHLC and I can't find a mechanism to load and cache script files directly from the file system.
I can see that we can store the script in ES, which we don't want to do, since presumably we wouldn't have a changelog for it there.
Question:
Is there a built-in mechanism to use a locally stored file as a script in EHLC? Or should we use inline scripts and load and cache the script files with custom code?
Based on the comments I'd suggest the following:
Keep track of the templates with git.
Monitor the changes and trigger a pub/sub message whenever applicable (PR merges etc.).
Configure your pub/sub handler to update the stored search template in ES.
Otherwise, with local loading and caching, machines running slightly older EHLC processes wouldn't get notified of the most recent changes in git and would continue using stale scripts.
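To illustrate step 3, here is a minimal sketch of such a handler in Python (the on_template_merged function, the template id, and the ES address are assumptions; any HTTP client, or the equivalent EHLC call in Java, would do the same job). It uses Elasticsearch's stored-scripts endpoint, PUT /_scripts/<id>, which is where search templates are kept:

import requests

ES_URL = "http://localhost:9200"  # assumed ES endpoint

def on_template_merged(template_id, template_path):
    # Read the mustache template that was just merged in git.
    with open(template_path) as f:
        source = f.read()
    # Upsert it under a stable id so searches can reference it by name.
    resp = requests.put(
        f"{ES_URL}/_scripts/{template_id}",
        json={"script": {"lang": "mustache", "source": source}},
    )
    resp.raise_for_status()

Searches then reference the template by its stored id, so running processes always pick up the latest version without any local caching.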
Very new to Datadog and in need of some help. I have crafted two SQL queries (one for an on-prem database and one for a cloud database), and I would like to run those queries through Datadog, display the query results, and validate that the daily results fall within an expected variance between the two systems.
I have already set up Datadog in the cloud environment and believe I should use DogStatsD to create a custom metric, but I am pretty lost as to how to incorporate my SQL queries in the code to create the metric for eventual display on a dashboard. Any help will be greatly appreciated!
You probably want to use the MySQL integration and configure the 'custom queries' option: https://docs.datadoghq.com/integrations/faq/how-to-collect-metrics-from-custom-mysql-queries
You can follow those instructions after you configure the base integration: https://docs.datadoghq.com/integrations/mysql/#pagetitle (this will give you a lot of useful metrics in addition to the custom queries you want to run).
As you mentioned, DogStatsD is a library you can import into whatever script or application you like in order to submit metrics. But it really isn't common practice to modify the underlying code of your database, so it makes more sense to run a query against the database externally, take those results, and send them to Datadog. You could totally write a Python script or something to do this, but the Datadog Agent already has this capability built in, so it's probably easier to just use that.
I am also just assuming SQL refers to MySQL; there are other integrations for things like SQL Server and PostgreSQL, and pretty much every other implementation of SQL. The same pattern applies: configure the integration, then add an extra section to the config file to have the check run your queries, roughly as sketched below.
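For illustration, a custom query in the Agent's conf.d/mysql.d/conf.yaml might look something like this (the query, metric name, and tags are placeholders, and the exact connection key names vary by Agent version):

instances:
  - host: localhost
    port: 3306
    username: datadog
    password: "<PASSWORD>"
    custom_queries:
      - query: SELECT COUNT(*) FROM orders WHERE created_at >= CURDATE()
        columns:
          - name: myapp.orders.daily_count
            type: gauge
        tags:
          - source:cloud

Each selected column maps, in order, to an entry under columns; a gauge column is submitted as a metric you can graph on a dashboard and monitor for variance.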
I am using Confluent Kafka Go for my project. When writing tests, because of the asynchronous nature of topic creation in Kafka, I might get errors (error code 3: UNKNOWN_TOPIC_OR_PARTITION) when I create a topic and then read it back immediately.
As I understand it, if I can query the controller directly, I can always get the latest metadata. So my question is: how can I get the Kafka controller's IP or ID when using Confluent Kafka Go?
I configured the Solr search server in a Tomcat server. I started Tomcat with the extra parameters below.
-Dcom.sun.management.jmxremote.port=9191
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=false
Now I want to test Solr search requests in JMeter for load-testing purposes. Will I be able to do it in JMeter?
As per the Solr Quick Start:
Searching
Solr can be queried via REST clients, cURL, wget, Chrome POSTMAN, etc., as well as via the native clients available for many programming languages.
The Solr Admin UI includes a query builder interface - see the gettingstarted query tab at http://localhost:8983/solr/#/gettingstarted/query.
So you should be able to perform a search using an HTTP Request sampler pointed at a URL like the one below.
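Assuming Solr is running on localhost:8983 with the default /select request handler and the JSON response writer (the core name and query here are placeholders), the sampler would target:

http://localhost:8983/solr/gettingstarted/select?q=YOUR_QUERY_HERE&wt=json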
Replace gettingstarted with your Solr core name and YOUR_QUERY_HERE with your actual query.
You will also be able to use the XPath Extractor or JSON Path Extractor to pull parts of the response into JMeter Variables if needed.
I tried to put together a Horizon app with an externally hosted RethinkDB and I couldn't seem to get it to work with existing tools. I understand Horizon includes a server-side API component, which may be why.
I want to be able to directly insert and/or update documents in my RethinkDB from an external server, and have those updates be pushed to subscribed browsers. Is this possible and/or wise?
Preferably this would not involve my Horizon express server at all. I would prefer to not have to expose my own API to do this.
This is totally possible as long as the RethinkDB instance is visible in some way to the service pushing data into it. You'd then just connect to RethinkDB via a standard driver connection in your language of choice. A simple insert in Python would look like this:
import rethinkdb as r

# Connect to RethinkDB on its client driver port (28015 by default).
conn = r.connect('localhost', 28015)
# Write into the database and table backing the Horizon collection.
r.db("horizon_project_name").table("things").insert({'text': 'Hello, World!'}).run(conn)
Then when you start Horizon, you'll want to make sure to use the --connect flag and provide the hostname and port of that same RethinkDB instance.
An example, if RethinkDB is running on the same machine as Horizon:
hz serve --connect localhost:28015
In Horizon, you'd be able to listen to these messages like so in the browser:
const horizon = Horizon();
horizon('things').watch().subscribe((result) => {
  // `result` is the entire collection as an array, re-emitted on each change
  console.log("result!", result);
});
If you need further help with this, feel free to tweet me @dalanmiller or create a new topic on discuss.horizon.io!