So I've been working on implementing Elasticsearch using the JDBC River plugin to get data from our SQL Server DB into Elasticsearch.
I've got it working fine using SQL Server credentials, but trying to use integrated security doesn't work. It creates the index, but there's no data in it.
The parameters I've been using are:
PUT /_river/test_river/_meta
{
"type":"jdbc",
"jdbc":
{
"driver":"com.microsoft.sqlserver.jdbc.SQLServerDriver",
"url":"jdbc:sqlserver://testServer:1433;databaseName=TestDb;
integratedSecurity=true;",
"user":"",
"password":"",
"sql": "select * from users",
"poll":"30s",
"index":"testindex",
"type":"testusers"
}
}
I've tried quite a few things, including removing the user and password fields completely and removing integratedSecurity=true, but they all gave the same result.
I've checked the river plugin's GitHub repo, and it says this issue was fixed back in January, but it still doesn't seem to be working.
Also, I'm using Elasticsearch version 1.5.1
and JDBC river plugin version 1.4.0.10.
Any help would be much appreciated.
Get rid of the user and password options. You're not gonna need them.
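For reference, a minimal sketch of the river definition without them, reusing the same values from your request (with the connection URL on a single line):
PUT /_river/test_river/_meta
{
    "type": "jdbc",
    "jdbc": {
        "driver": "com.microsoft.sqlserver.jdbc.SQLServerDriver",
        "url": "jdbc:sqlserver://testServer:1433;databaseName=TestDb;integratedSecurity=true;",
        "sql": "select * from users",
        "poll": "30s",
        "index": "testindex",
        "type": "testusers"
    }
}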
Check the console when running elasticsearch.bat; you should see an error message when it tries to update the river. I'm going to go out on a limb and assume you're probably seeing an error stating that the file sqljdbc_auth.dll can't be found. If this is the case, you can download this file from here and copy the x64 version of sqljdbc_auth.dll to your Java lib folder. For me, this folder is C:\ProgramData\Oracle\Java\javapath, but you can type echo %path% in a console window to find yours.
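For example, from a command prompt (the source path below is just a placeholder for wherever you extracted the Microsoft JDBC driver package, so adjust it to your setup):
copy "C:\path\to\sqljdbc\auth\x64\sqljdbc_auth.dll" "C:\ProgramData\Oracle\Java\javapath"
echo %path%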
Once you have followed these steps, restart elasticsearch.bat, and it should start processing your river. If not, post back with the output you're seeing when running elasticsearch.bat.
Related
I'm trying to integrate Elasticsearch with SuiteCRM.
I've followed the documentation at https://docs.suitecrm.com/admin/administration-panel/search/elasticsearch/
Then I tried to run a full index from SuiteCRM and got "index_not_found_exception", so I created the indexes manually in Elasticsearch.
Even after that, when I try to run the indexing, no logs show up in SuiteCRM or Elasticsearch, and search in Elasticsearch is not working.
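For reference, I created the index manually with a plain PUT request like the one below (the index name is just an example; I'm not sure what name SuiteCRM actually expects):
curl -X PUT "http://localhost:9200/suitecrm?pretty"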
SuiteCRM version: 7.12.5
Sugar version: 6.5.25 (Build 344)
Elasticsearch version: 7.17.5
Please advise. Thanks.
I didn't get any suspicious logs in suitecrm.log, so I enabled debug mode for the logs:
https://docs.suitecrm.com/developer/logging/
Then, when I clicked on full indexing, I found the log line below:
Elasticsearch trying to re-indexing a bean but this module is blacklisted: SchedulersJobs
I then followed this document: https://docs.suitecrm.com/blog/scheduler-jobs/
And lastly this step: Admin / Repairs / Quick Repair and Rebuild
After that it started working.
I have managed to process log files using the ELK stack and I can now see my logs in Kibana.
I have scoured the internet and can't seem to find a way to remove all the old logs from months ago that are viewable in Kibana (well, at least not an explanation that I understand). I just want to clear out Kibana and start fresh, loading new logs and having them be the only ones displayed. Does anyone know how I would do that?
Note: Even if I remove all the index patterns (in the Management section), the processed logs are still there.
Context: I have been looking at using ELK to analyse testing logs at work. For that reason, I am using Elasticsearch, Kibana and Logstash v5.4, and I am unable to download a newer version due to company restrictions.
Any help would be much appreciated!
Kibana screenshot displaying logs
Update:
I typed GET /_cat/indices/*?v&s=index into Dev Tools > Console and got a list of indices.
I initially used DELETE and it didn't appear to work. However, after restarting everything, it worked the second time, and I was able to remove all the existing indices, which subsequently removed all the logs being displayed in Kibana.
SUCCESS!
Kibana is just the visualization part of the Elastic Stack; your data is stored in Elasticsearch, so to get rid of it you need to delete your index.
Version 5.4 is very old and already past its EOL date. It does not have any UI to delete an index, so you will need to use the Elasticsearch REST API to delete it.
You can do it from Kibana: just click on Dev Tools. First you will need to list your indices using the cat indices endpoint.
GET /_cat/indices?v&s=index&pretty
After that you will need to use the delete index API endpoint to delete your index.
DELETE /name-of-your-index
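For example, if your indices follow the default Logstash daily naming pattern, you could remove them all at once with a wildcard (assuming wildcard deletes haven't been disabled, and keeping in mind this permanently removes the data):
DELETE /logstash-*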
On newer versions you can do this using the Index Management UI; you should try to talk to your company about getting a newer version.
I'm trying to use Elasticsearch with my Neo4j database for fast querying. I tried many sites, but they are all old articles, so I didn't get a clear idea. The steps I've followed until now:
Installed Neo4j
Installed Elasticsearch
Copy-pasted the Elasticsearch plugin into the Neo4j plugins folder
added these lines to the neo4j.properties file:
elasticsearch.host_name=http://localhost:9200
elasticsearch.index_spec=people:Person(first_name,last_name), places:Place(name)
My question here is:
How are Elasticsearch and Neo4j integrated? Please clarify this for me.
I followed this:
Link
You have to install the APOC procedures plugin (https://github.com/neo4j-contrib/neo4j-apoc-procedures). The documentation about the ES integration is here: ES Integration with APOC procedures
[edit]
download and drop apoc.jar into Neo4j's plugins directory, matching the targeted Neo4j version
restart Neo4j
in the Neo4j web browser, launch the following Cypher query to show all ES procedures:
CALL apoc.help("apoc.es")
Sample query for logs:
CALL apoc.es.getRaw("localhost","_search?q=level:ERROR",null)
YIELD value
UNWIND value.hits.hits as hits
RETURN hits LIMIT 100
The recommended way is to store the ES host in neo4j.conf by adding a key (restart Neo4j afterwards):
apoc.es.myKey.url=localhost
Then the query looks like:
CALL apoc.es.getRaw("myKey","_search?q=level:ERROR",null)
YIELD value
UNWIND value.hits.hits as hits
RETURN hits LIMIT 100
For those of you who already have the APOC plugin installed and accessible, but don't have access to the neo4j.properties file (or are more comfortable working with ES through curl), you can do this without using apoc.es.getRaw and can instead use the JSON returned with apoc.load.json:
WITH "http://myelasticurl:9200/my_index/_search?q=level:ERROR" as search_url
CALL apoc.load.json(search_url) YIELD value
UNWIND value.hits.hits as hit
WITH hit._source as source
// ... do work with each hit's source document ...
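For instance, the rest of the query might just return a few fields from each source document (the field names here are hypothetical and depend on what your documents contain):
RETURN source.message AS message, source.level AS level LIMIT 25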
Good Day.
I've been developing with MeteorJS, which uses MongoDB. No problems there. I've been using the mongo shell to access the database on my dev machine (OS X 10.11). This is my first project with Mongo; when the shell loaded, it would connect to db.test, and I'd always run show dbs to get the list of databases, then use myApp.
Yesterday, whenever I go into the shell and type show dbs, the only database shown is local 0.078GB. However, my app is still working and pulling and pushing data to the database.
I've checked the dbpath in mongod.conf and that seems OK. I'm not entirely sure about the exact order of things, but two things were different (I'm not sure if these happened before or after show dbs stopped showing everything, and I'm not sure which came first):
when loading the mongo shell I was getting this error:
WARNING: soft rlimits too low. Number of files is 256, should be at least 1000
I followed these directions, which seemed to stop that error from appearing (https://github.com/basho/basho_docs/issues/1402).
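(For completeness: the warning is about the open-files soft limit. On OS X you can check the current values with the command below; the exact steps to raise them depend on the OS version.)
launchctl limit maxfiles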
I use Meteor Toys, and for the first time I updated user.profile.companyName (which is a custom field within the standard profile) from within the Meteor Toys widget.
It's just odd that the app can still access the database and collections, but the mongo shell doesn't show them. I've updated mongod via brew upgrade mongodb from 3.0.2 to 3.0.7, to no avail.
Any ideas?
If you want to use the regular mongo console, you have to specify the port as 3001 for Meteor apps instead of the default 27017. Otherwise, it's much simpler to just type meteor mongo and connect that way. Then you can type show collections and it will show them all, just like normal.
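For example, assuming the app is running locally so its bundled MongoDB is listening on port 3001 (the dev database is, as far as I know, named meteor):
meteor mongo
or, with the plain mongo shell:
mongo localhost:3001/meteor
show collections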
MongoDB does not show a database in show dbs unless it contains at least one collection with at least one document in it.
Refer to this link
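A quick way to see this in the shell (the database, collection and field names here are just examples):
use myApp
db.test.insert({ hello: "world" })
show dbs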
I have a production database running DB2 10.1.2 Workgroup (openSUSE 12.2) and I have Full Text Search running pretty well there. Now I'm trying to build a test environment, but when I restore the production backup onto the test machine with 10.1.2 Express-C, FTS presents this error:
<message>IQQD0040E The client specified the wrong authentication token.
com.ibm.es.nuvo.inyo.common.InyoFactoryWrapper.authenticate(InyoFactoryWrapper.java:203)
com.ibm.es.nuvo.inyo.common.InyoFactoryWrapper.getHandler(InyoFactoryWrapper.java:85)
com.ibm.es.nuvo.inyo.common.InyoServer$InyoListener.run(InyoServer.java:425)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1121)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:614)
java.lang.Thread.run(Thread.java:769)</message>
The Redbook tells me that the cause of this error is: "Usually this error occurs when there are 2 or more text search instances configured with the same port number and one instance is already running".
I've already searched for other instances, but I've only found one, so "usually" does not apply to my situation.
Anyone know what else I can do to fix that?
Best regards,
jacker
I've found a solution. When the backup is moved to a new DB2 instance, the FTS application establishes its communication with a token. After the restore, we just need to go to the bin directory of FTS, commonly at /home/db2inst1/db2tss/bin, and run this command:
configTool generateToken -seed <username> -configPath ~/sqllib/db2tss/config
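For example, with the instance owner suggested by the path above (adjust the user and paths to your environment):
configTool generateToken -seed db2inst1 -configPath ~/sqllib/db2tss/config
If I remember correctly, the DB2 Text Search instance may also need to be restarted afterwards (db2ts "STOP FOR TEXT" / db2ts "START FOR TEXT") for the new token to take effect.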
Hope this helps anyone who runs into this problem.
Regards.