I'm trying to integrate Elasticsearch with SuiteCRM.
I've followed the documentation at https://docs.suitecrm.com/admin/administration-panel/search/elasticsearch/
Then I tried to run a full index from SuiteCRM and got an "index_not_found_exception", so I created the indices manually in Elasticsearch.
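By "manually" I mean a plain index-creation call, roughly along these lines (the index name below is just a placeholder, since SuiteCRM derives its own index name from its configuration):
curl -X PUT "localhost:9200/my-suitecrm-index?pretty"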
Even after that, when I run the indexing, no logs show up in SuiteCRM or Elasticsearch, and search in Elasticsearch still does not work.
SuiteCRM version
Version 7.12.5
Sugar Version 6.5.25 (Build 344)
Elasticsearch version
"number" : "7.17.5",
Please advise. Thanks.
I didn't see anything suspicious in suitecrm.log, so I enabled debug mode for the logs:
https://docs.suitecrm.com/developer/logging/
Then I clicked on full indexing and found the log line below:
Elasticsearch trying to re-indexing a bean but this module is blacklisted: SchedulersJobs
I then followed this document: https://docs.suitecrm.com/blog/scheduler-jobs/
And lastly this step: Admin / Repairs / Quick Repair and Rebuild.
After that it started working.
Can someone please assist me with which settings I can cross-check on the Fortinet side to ensure that the syslog output meets the Fortinet FortiGate logs integration requirements?
Current status:
The integration and all required assets are installed in Kibana.
No errors or warnings noticed in the Elastic Agent logs.
OLD question:
Could you please assist me with how I can add RFC3164 support to the logs-fortinet.firewall-1.7.2 ingest pipeline?
Also, is it possible to add RFC3164 parsing (using the syslog_pri filter or the kv filter)? If yes, please assist with some examples for parsing the data.
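For context, something along these lines is what I have in mind, using the grok and kv ingest processors; the pipeline name and field names below are made up for illustration and are not part of the Fortinet integration:
PUT _ingest/pipeline/fortinet-rfc3164-demo
{
  "description": "Illustrative only: split an RFC3164-style line into priority, timestamp, host and key=value pairs",
  "processors": [
    {
      "grok": {
        "field": "message",
        "patterns": [
          "<%{NONNEGINT:log.syslog.priority:int}>%{SYSLOGTIMESTAMP:tmp.timestamp} %{SYSLOGHOST:host.hostname} %{GREEDYDATA:tmp.payload}"
        ]
      }
    },
    {
      "kv": {
        "field": "tmp.payload",
        "field_split": " ",
        "value_split": "="
      }
    }
  ]
}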
I am trying to run the Upgrade Assistant in order to upgrade Elasticsearch from 5 to 6, but I am hitting the following error:
Deprecated script security settings changes
This issue must be resolved to upgrade. Read Documentation
Details: [node[instance-0000000093] used these script-security settings:[script.inline, script.stored, script.file, script.engine.painless.stored, script.engine.painless.file, script.engine.expression.stored, script.engine.expression.file, script.engine.mustache.inline, script.engine.mustache.stored, script.engine.mustache.file]]
I have looked over the settings and our Kibana config and cannot find any references to any scripts being used.
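The places I know to check from Dev Tools are the cluster-level overrides and the per-node settings, e.g.:
GET _cluster/settings?flat_settings=true
GET _nodes/settings
Settings that only exist in elasticsearch.yml (or, on Elastic Cloud, in the user settings YAML) would have to be removed there rather than through the API.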
Has anyone else run into a similar issue and how did you solve it?
Thanks.
I have managed to process log files using the ELK stack and I can now see my logs in Kibana.
I have scoured the internet and can't seem to find a way to remove all the old logs from months ago that are still viewable in Kibana (well, not an explanation that I understand). I just want to clear Kibana and start afresh by loading new logs and having them be the only ones displayed. Does anyone know how I would do that?
Note: Even if I remove all the Index Patterns (in Management section), the processed logs are still there.
Context: I have been looking at using ELK to analyse testing logs at work. For that reason, I am using Elasticsearch, Kibana and Logstash v5.4, and I am unable to download a newer version due to company restrictions.
Any help would be much appreciated!
Kibana screenshot displaying logs
Update:
I've typed "GET /_cat/indices/*?v&s=index" into the Dev Tools>Console and got a list of indices.
I initially used the "DELETE" function, and it didn't appear to be working. However, after restarting everything, it worked the seond time and I was able to remove all the existing indices which subsiquently removed all logs being displayed in Kibana.
SUCCESS!
Kibana is just the visualization part of the Elastic Stack; your data is stored in Elasticsearch, so to get rid of it you need to delete your index.
Version 5.4 is very old and already past its EOL date; it does not have any UI for deleting indices, so you will need to use the Elasticsearch REST API to do it.
You can do it from Kibana: just click on Dev Tools. First, list your indices using the cat indices endpoint:
GET "/_cat/indices?v&s=index&pretty"
After that, use the delete index API to delete your index:
DELETE /name-of-your-index
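If the old logs are spread across many time-based indices, a wildcard can clear them in one go; for example, assuming Logstash's default daily index names and that wildcard deletes haven't been disabled via action.destructive_requires_name:
DELETE /logstash-2017.*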
On newer versions you can do this through the Index Management UI; you should try to talk with your company about getting a newer version.
I recently upgraded Elasticsearch from v1.7 to 2.4. I'm working in Python and using the pyes library to communicate with Elasticsearch. In my code, I have this line in place to refresh the index:
con.indices.refresh()
This was working fine with ES 1.7; however, with ES 2.4 I'm getting this exception:
ElasticSearchException: Unknown exception type: 408
Refreshing via curl works just fine, i.e.:
curl localhost:9200/_refresh
Are there any changes in Elasticsearch 2.4 that are breaking this piece of code? Thanks.
Did you update the version of the pyes library? There is a strict version compatibility requirement between ES and pyes.
Refer to the documentation here:
http://pyelasticsearch.readthedocs.io/en/latest/migrate/
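As a cross-check that is independent of the client, the per-index form of the same REST call (the index name here is a placeholder) is:
curl -XPOST "localhost:9200/your-index/_refresh"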
So I've been working on implementing Elasticsearch using the JDBC River plugin to get data from our SQL Server DB into Elasticsearch.
I've got it working fine using the SQL Server credentials, but trying to use integrated security doesn't work. It will create the index, but it doesn't have data in it.
The parameters I've been using are:
PUT /_river/test_river/_meta
{
  "type": "jdbc",
  "jdbc": {
    "driver": "com.microsoft.sqlserver.jdbc.SQLServerDriver",
    "url": "jdbc:sqlserver://testServer:1433;databaseName=TestDb;integratedSecurity=true;",
    "user": "",
    "password": "",
    "sql": "select * from users",
    "poll": "30s",
    "index": "testindex",
    "type": "testusers"
  }
}
I've tried quite a few things, including removing the user and password fields completely and removing integratedSecurity=true, but it gave the same result.
I've checked the river plugin's GitHub and it says this issue was fixed back in January, but it still doesn't seem to be working.
Also, I'm using Elasticsearch version 1.5.1 and JDBC River plugin version 1.4.0.10.
Any help would be much appreciated
Get rid of the user and password options. You're not gonna need them.
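For example, the meta document from the question with those two options dropped and everything else unchanged (a sketch, not a tested config):
PUT /_river/test_river/_meta
{
  "type": "jdbc",
  "jdbc": {
    "driver": "com.microsoft.sqlserver.jdbc.SQLServerDriver",
    "url": "jdbc:sqlserver://testServer:1433;databaseName=TestDb;integratedSecurity=true;",
    "sql": "select * from users",
    "poll": "30s",
    "index": "testindex",
    "type": "testusers"
  }
}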
Check the console when running elasticsearch.bat; you should see an error message when it tries to update the river. I'm going to go out on a limb and assume you're probably seeing an error stating that the file sqljdbc_auth.dll can't be found. If this is the case, you can download this file from here and copy the x64 version of sqljdbc_auth.dll to your Java lib folder. For me, this folder is C:\ProgramData\Oracle\Java\javapath, but you can type echo %path% in a console window to find yours.
Once you have followed these steps, restart elasticsearch.bat, and it should start processing your river. If not, post back with the output you're seeing when running elasticsearch.bat.