How can I send the logs of my PrestaShop application to Kibana? As you know, PrestaShop saves its logs in the database. Thanks
You can use one of these approaches:
Using the official client library (elasticsearch-php), you create a single-file script that communicates directly with your ES cluster and sends the data retrieved via a direct query against the db:
select * from ps_log where date_add > xxxx
which you run from within your script.
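A rough sketch of that first approach (assuming a Composer install of elasticsearch-php, the default ps_ table prefix, and placeholder hosts, credentials, and index name):

<?php
// Minimal sketch: pull new PrestaShop log rows and bulk-index them into ES.
require 'vendor/autoload.php';

use Elasticsearch\ClientBuilder;

$client = ClientBuilder::create()->setHosts(['http://localhost:9200'])->build();

$pdo   = new PDO('mysql:host=localhost;dbname=prestashop', 'user', 'password');
$since = '2020-01-01 00:00:00'; // e.g. the timestamp of the previous export run
$stmt  = $pdo->prepare('SELECT * FROM ps_log WHERE date_add > :since');
$stmt->execute([':since' => $since]);

$bulk = ['body' => []];
foreach ($stmt->fetchAll(PDO::FETCH_ASSOC) as $row) {
    // One action line plus one document line per log row
    $bulk['body'][] = ['index' => ['_index' => 'prestashop-logs', '_id' => $row['id_log']]];
    $bulk['body'][] = $row;
}

if (!empty($bulk['body'])) {
    $client->bulk($bulk);
}

Run it from cron and persist the last exported timestamp, so each run only picks up new rows.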
Alternatively, you create a daily CSV export of the table data with a PHP script and "ingest" that file into your Elasticsearch cluster using Logstash.
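For that second approach, a Logstash pipeline along these lines could pick the CSV up (the file path, column names, and index name are assumptions; they must match whatever your PHP export writes):

# Sketch pipeline: tail the daily CSV export and index each row
input {
  file {
    path => "/var/log/prestashop/ps_log-*.csv"
    start_position => "beginning"
  }
}

filter {
  csv {
    columns => ["id_log", "severity", "message", "date_add"]
  }
}

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "prestashop-logs"
  }
}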
I'm trying to learn about streaming services and am reading the Kafka docs:
https://kafka.apache.org/quickstart
https://kafka.apache.org/24/documentation/streams/quickstart
To take a simple example, I'm attempting to refactor a Spring web service GET request that accepts an ID parameter and returns a list of attributes associated with that ID. The DB backend is Oracle.
What is the approach for loading a single Oracle DB table so it can be served by Kafka? The above docs don't contain information on this. Do I need to replicate the Oracle DB to a NoSQL DB such as MongoDB? (Why do we require Apache Kafka with NoSQL databases?)
Kafka is an event streaming platform. It is not a database. Instead of thinking about "loading a single Oracle DB table which can be served by Kafka", you need to think in terms of which events you are looking for that will trigger processing.
Change Data Capture (CDC) products like Oracle GoldenGate (there are other products too) will detect changes to rows and send messages into Kafka each time a row changes.
Alternatively you could configure a Kafka JDBC Source Connector to execute a query and pull data into Kafka.
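As a rough sketch of the connector option (assuming Confluent's JDBC source connector is installed; the connection URL, table, and column names here are hypothetical), you would POST a config like this to the Kafka Connect REST API:

{
  "name": "oracle-attributes-source",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
    "connection.url": "jdbc:oracle:thin:@//dbhost:1521/ORCLPDB1",
    "connection.user": "app_user",
    "connection.password": "********",
    "table.whitelist": "ATTRIBUTES",
    "mode": "timestamp+incrementing",
    "timestamp.column.name": "UPDATED_AT",
    "incrementing.column.name": "ID",
    "topic.prefix": "oracle-"
  }
}

New and updated rows then arrive as messages on the oracle-ATTRIBUTES topic, which your Spring service can consume directly; no intermediate NoSQL replica is needed.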
I'm building an application in Laravel that holds several connections to different databases, each of which reads a service audit table. The application is for visualizing logs from different applications.
To improve read speed, would it be possible to download all the data from the different databases into a local Redis store every X minutes and run the queries directly against it?
You can do this via scheduled tasks:
https://laravel.com/docs/5.7/scheduling#scheduling-artisan-commands
This will allow you to run an Artisan command:
https://laravel.com/docs/5.7/artisan
In this command you can fetch the data from your databases and save it to Redis.
To access multiple databases, follow the details here:
https://laravel.com/docs/5.7/database#read-and-write-connections
And to set up Redis, here are the docs:
https://laravel.com/docs/5.7/redis
All that you will need to do is track what you have already transferred, fetch what you have not transferred yet, and save that data to Redis; a sketch of such a command follows.
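A minimal sketch of such a command (the connection names, the audit_logs table, and the id-based tracking are assumptions):

<?php

namespace App\Console\Commands;

use Illuminate\Console\Command;
use Illuminate\Support\Facades\DB;
use Illuminate\Support\Facades\Redis;

class SyncAuditLogs extends Command
{
    protected $signature = 'logs:sync';
    protected $description = 'Copy new audit rows from each service DB into Redis';

    public function handle()
    {
        foreach (['service_a', 'service_b'] as $connection) {
            // Highest id already copied for this connection (0 on the first run)
            $lastId = (int) Redis::get("audit:last_id:{$connection}");

            $rows = DB::connection($connection)
                ->table('audit_logs')
                ->where('id', '>', $lastId)
                ->orderBy('id')
                ->get();

            foreach ($rows as $row) {
                // Store each row as JSON in a per-connection list
                Redis::rpush("audit:rows:{$connection}", json_encode($row));
                $lastId = $row->id;
            }

            Redis::set("audit:last_id:{$connection}", $lastId);
        }
    }
}

Register it in app/Console/Kernel.php with $schedule->command('logs:sync')->everyFiveMinutes(); (or whatever interval you need) and the scheduler takes care of the "every X minutes" part.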
I am working on an application built using Spring/Hibernate with PostgreSQL as the database. Recently I implemented Elasticsearch in my application. I am new to this, so I have some doubts about how to use ES in my application:
Is it good to save/update a document in Elasticsearch whenever I save/update it in PostgreSQL? If not, what is the best way to do it?
Should I use data from ES for all my queries, like get user profile, educational profile, contact profile etc., or is it better to fetch them from PostgreSQL and use ES data only for search?
What is the best way to keep my PostgreSQL and ES in sync?
Thanks in advance for your help.
I have some metadata in Elasticsearch. When the metadata is updated, I need to sync the updates to MySQL. So I'm wondering: is there an ES watch/trigger that could do this automatically?
There isn't any direct Watcher action that calls MySQL (that I know of).
But you could create a simple API, called by Watcher through a webhook action, that updates your database.
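A rough sketch of such a watch (the metadata index, the updated_at field, and the sync-api.internal endpoint are all assumptions; that endpoint is what would actually write to MySQL):

PUT _watcher/watch/metadata_sync
{
  "trigger": { "schedule": { "interval": "1m" } },
  "input": {
    "search": {
      "request": {
        "indices": ["metadata"],
        "body": {
          "query": { "range": { "updated_at": { "gte": "now-1m" } } }
        }
      }
    }
  },
  "condition": {
    "compare": { "ctx.payload.hits.total": { "gt": 0 } }
  },
  "actions": {
    "push_to_sync_api": {
      "webhook": {
        "scheme": "http",
        "host": "sync-api.internal",
        "port": 8080,
        "method": "post",
        "path": "/sync",
        "body": "{{#toJson}}ctx.payload.hits.hits{{/toJson}}"
      }
    }
  }
}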
I have an application that performs SQL queries on a Spark DataFrame like this:
// Build a DataFrame from the parsed access-log beans (Spark 1.x API)
DataFrame sqlDataFrame = sqlContext.createDataFrame(accessLogs, ApacheAccessLog.class);
// Register it as a temp table and cache it for repeated queries
sqlDataFrame.registerTempTable("logs");
sqlContext.cacheTable("logs");
// Run an aggregate query and pull the single result row back to the driver
Row contentSizeStats = sqlContext
        .sql("SELECT SUM(contentSize), COUNT(*), MIN(contentSize), MAX(contentSize) FROM logs")
        .javaRDD()
        .collect()
        .get(0);
I can submit this application to Spark using spark-submit, and it works totally fine.
But now what I want is to develop a web application (using Spring or another framework): users write a SQL script in the front end, click the Query button, and the web server sends the SQL script to Apache Spark to execute, just like spark-submit did above. After the SQL execution I would like Spark to send the result back to the web server.
The official documentation mentions that we can use the Thrift JDBC/ODBC server, but it only shows how to connect to the Thrift server. There is no other information about how to perform the query action. Did I miss anything? Is there any example I can take a look at?
Thanks in advance!
Yes, Thrift JDBC/ODBC is the better option. You can use the HiveServer2 service.
Here is the code:
// SparkConnection.getHiveContext() is the answerer's own helper that builds a HiveContext
HiveContext hiveContext = SparkConnection.getHiveContext();
// Port and address the embedded Thrift server should listen on
hiveContext.setConf("hive.server2.thrift.port", "10002");
hiveContext.setConf("hive.server2.thrift.bind.host", "192.168.1.25");
// Start a HiveServer2-compatible Thrift server inside this Spark application
HiveThriftServer2.startWithContext(hiveContext);
It will open a JDBC port, and you can connect to it via the Hive JDBC driver.
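For example, a minimal JDBC client along these lines (assuming the Hive JDBC driver, org.apache.hive:hive-jdbc, is on the classpath; the host, port, and "logs" table match the server started above):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class SparkSqlClient {
    public static void main(String[] args) throws Exception {
        // Connects to the Thrift server started by HiveThriftServer2.startWithContext
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:hive2://192.168.1.25:10002/default", "", "");
             Statement stmt = conn.createStatement();
             // The SQL string could come straight from your web front end
             ResultSet rs = stmt.executeQuery(
                 "SELECT SUM(contentSize), COUNT(*) FROM logs")) {
            while (rs.next()) {
                System.out.println(rs.getLong(1) + "\t" + rs.getLong(2));
            }
        }
    }
}

Your web application can execute user queries the same way and serialize the ResultSet back to the browser.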