Couldn't connect to Elasticsearch inside GetCandy - Laravel

I am following the documentation on the site here: https://getcandy.io/docs/master/guides/introduction/01-installation
but when I got to the point of running this command:
php artisan candy:search:index
I get the exception listed here:
Elastica\Exception\Connection\HttpException : Couldn't connect to host, Elasticsearch down?

It sounds most likely that Elasticsearch isn't running properly, rather than this being an issue with GetCandy.
If you run the following you should be able to determine if Elasticsearch is up.
curl localhost:9200
If you get a response with the Elasticsearch version etc., it is running. If it's not running, you'll need to check the Elasticsearch logs, normally found somewhere like /var/log/elasticsearch/.
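For example, a quick check might look like this (the service name and log path below are typical Linux defaults, so treat them as assumptions for your setup):
curl 'localhost:9200'                                      # should return JSON including the version number
curl 'localhost:9200/_cluster/health?pretty'               # quick cluster status check
sudo systemctl status elasticsearch                        # is the service running? (systemd-based installs)
sudo tail -n 50 /var/log/elasticsearch/elasticsearch.log   # recent log entries; the file name can vary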

Related

Akeneo PIM No alive nodes found in your cluster ERROR

I keep getting the same error when starting the Akeneo Community Edition! It seems to be an error caused by Elasticsearch, but I cannot figure out what is wrong.
The Error message:
[OK] Database schema created successfully!
Updating database schema...
37 queries were executed
[OK] Database schema updated successfully!
Reset elasticsearch indexes
In StaticNoPingConnectionPool.php line 50:
No alive nodes found in your cluster
I'm running on an Uberspace server without Docker, and I'm trying to start it as described here:
https://docs.akeneo.com/4.0/install_pim/manual/installation_ee_archive.html but with the Community Edition instead.
Has anyone had the same error and knows how to help me out?
Maybe it's a problem with the .env entry that points to Elasticsearch. My .env: APP_INDEX_HOSTS=localhost:9200
Can you verify that the Elasticsearch search server is available on localhost:9200 when accessing it via curl/Postman/Sense or something else?
That error usually means the node is either not running, or not running on the configured port.
Also pay attention that your server meets the system requirements - https://docs.akeneo.com/4.0/install_pim/manual/system_requirements/system_requirements.html
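A quick sanity check from the server itself might look like this (host and port taken from the APP_INDEX_HOSTS value above; ss may need to be replaced by netstat depending on what's installed):
curl -s 'http://localhost:9200/_cluster/health?pretty'   # Elasticsearch should answer with a cluster status
ss -tlnp | grep 9200                                      # confirm something is actually listening on port 9200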

Kibana unable to connect to Elasticsearch on Windows

I am running Elasticsearch 7.6 and it is working OK on http://localhost:9200/. I am able to use the REST API to add values to an index.
Now when I start up Kibana 7.6, I get the following error:
log [12:31:32.247] [info][plugins-service] Plugin "case" is disabled.
log [12:31:44.432] [info][plugins-system] Setting up [36] plugins: [taskManager,siem,licensing,infra,encryptedSavedObjects,code,timelion,features,security,usageCollection,metrics,canvas,apm_oss,translations,reporting,status_page,share,uiActions,data,navigation,newsfeed,kibana_legacy,management,dev_tools,home,spaces,cloud,graph,inspector,expressions,visualizations,embeddable,advancedUiActions,dashboard_embeddable_container,eui_utils,bfetch]
log [12:31:44.435] [info]
log [12:31:44.587] [info][savedobjects-service] Waiting until all Elasticsearch nodes are compatible with Kibana before starting saved objects migrations...
log [12:31:44.617] [info][savedobjects-service] Starting saved objects migrations
log [12:31:44.657] [info][savedobjects-service] Creating index .kibana_1.
log [12:31:44.663] [info][savedobjects-service] Creating index .kibana_task_manager_1.
log [12:32:14.663] [warning][savedobjects-service] Unable to connect to Elasticsearch. Error: Request Timeout after 30000ms
Unable to connect to Elasticsearch. Error: Request Timeout after 30000ms
I had the same problem as yours, and I solved it by switching from a cmd prompt window to a PowerShell window. It seems that the command prompt window is very sensitive. You may get some ideas here: https://discuss.elastic.co/t/kibana-7-4-0-on-windows-command-prompt-not-able-to-start/203877/7
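For example, launching Kibana from a PowerShell window instead of cmd (the install folder below is only an assumption; use wherever you extracted Kibana):
# run these in PowerShell, not the command prompt
cd C:\kibana-7.6.0-windows-x86_64
.\bin\kibana.bat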
BTW, if you get a warning when you restart Kibana, like:
log [06:27:47.136] [warning][savedobjects-service] Unable to connect to Elasticsearch. Error: [resource_already_exists_exception] index [.kibana_task_manager_1/EmPx77s1TLWbLQdqQ8iC0w] already exists, with { index_uuid="EmPx77s1TLWbLQdqQ8iC0w" & index=".kibana_task_manager_1" }
log [06:27:47.140] [warning][savedobjects-service] Another Kibana instance appears to be migrating the index. Waiting for that migration to complete. If no other Kibana instance is attempting migrations, you can get past this message by deleting index .kibana_task_manager_1 and restarting Kibana.
Just do what it tells you: delete the index .kibana_task_manager_1 and restart Kibana.
curl -XDELETE http://localhost:9200/.kibana_task_manager_1
Good Luck.

Logstash seemingly changes the Elasticsearch output URL

I have my Logstash configured with the following output:
output {
  elasticsearch {
    hosts => ["http://myhost/elasticsearch"]
  }
}
This is a valid URL, as I can send cURL commands to Elasticsearch with it; for example,
curl "http://myhost/elasticsearch/_cat/indices?v"
returns my created indices.
However, when Logstash attempts to create a template, it uses the following URL:
http://myhost/_template/logstash
when I would expect it to use
http://myhost/elasticsearch/_template/logstash
It appears that the /elasticsearch portion of my URL is being chopped off. What's going on here? Is "elasticsearch" a reserved word in the URL that is removed? As far as I can tell, when I issue http://myhost/elasticsearch/elasticsearch, it attempts to find an index named "elasticsearch" which leads me to believe it isn't reserved.
Upon changing the endpoint URL to be
http://myhost/myes
Logstash is still attempting to access
http://myhost/_template/logstash
What might be the problem?
EDIT
Both Logstash and Elasticsearch are v5.0.0
You have not specified which version of Logstash you are using. If you are using one of the 2.x versions, you need to use the path => '/myes/' parameter to specify that your ES instance is behind a proxy. In 2.x, the hosts parameter was just a list of hosts, not URIs.
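On a 2.x install that would look roughly like this (host and proxy prefix taken from the question; a sketch, not a drop-in config):
output {
  elasticsearch {
    hosts => ["myhost"]        # host only, no path component in 2.x
    path  => "/elasticsearch"  # the reverse-proxy prefix goes here instead
  }
}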

What is Nutch 1.10 crawl command for elasticsearch

I am a newbie with Nutch 1.10, and I am trying to learn how to crawl with it using Elasticsearch as my indexer. Not sure why, but I cannot get this crawl command to work:
bin/crawl -i --elastic -D elastic.server.url=http://localhost:9200/elastic/ urls elasticTestCrawl 1
UPDATE: I just used
bin/crawl -i -D elastic.server.url=http://localhost:9200/elastic/ urls/ elasticTestCrawl/ 2
and it almost succeeded; I received the following error when it came to the indexing part of the command:
Error running:
/home/david/apache-nutch-1.10/bin/nutch clean -Delastic.server.url=http://localhost:9200/elastic/ elasticTestCrawl//crawldb
Failed with exit value 255.
What is exit value 255 for Nutch 1.x? And why does the space get removed between "-D" and "elastic..."?
I have these Elasticsearch properties from here in my nutch-site.xml file:
If someone can point me to the error of my ways, that would be great!
Update
I just posted my own answer below; it's the second one. I had already accepted the first answer months ago when I initially got it working. My answer is simply clearer and more concise, to make it easier (and quicker) to get started with Nutch.
Unfortunately I can't tell you where you're going wrong as I'm in the same boat, although from what I can see you are running Nutch and Elasticsearch on the same box, whereas I've split it across two.
I've not got it to work, but according to a guide I found on integrating Nutch 1.7 with Elasticsearch it should just be
bin/crawl urls/ TestCrawl -depth 3 -topN 5
It may just be it isn't working for me because I've added the extra complication of networking.
I also assume you have created an index called elasticTestIndex in your elastic instance and launched it on the box before trying to run your crawl?
Should it be of help the guide I got that command from is
https://www.mind-it.info/integrating-nutch-1-7-elasticsearch/
Update:
I'm not sure I'm quite there yet but using your update I've got further than I had.
You are putting in port 9200, which is the web administration port, but you need to use port 9300 to interact with the service, so change the port to 9300.
I'm not sure, but I think the portion after the slash refers to the index, so in your example make sure you have "elastic" set up as an index, or change the URL to localhost:9300/[index name]/ (low rep score so I can't put in too many URLs) so that it uses an index you have created. If you haven't created one, you can do so from PuTTY with the following command.
curl -XPUT 'http://localhost:9200/[index name]/'
Using the command you supplied with the alternative port it did run although I've yet to extract the crawl data from elastic.
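If you want to confirm that documents actually landed in Elasticsearch, a query like this over the HTTP port should show them (the index name "elastic" is taken from the URL in the question; swap in your own):
curl 'http://localhost:9200/elastic/_search?pretty&size=5'   # list a few of the documents Nutch indexed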
Supplemental Update:
It's successfully dumping data crawled from Nutch into Elasticsearch for me, and having put a different index in on the command line, I can tell you it ignores that and uses whatever is in your nutch-site.xml.
To help anyone else get it working
Start off by reading this blog post to help you get Elasticsearch configured to work with Nutch.
After that read this Nutch doc to get familiar with the NEW cli command for running the crawl script. (Works for 1.9+)
Follow the example of the new Nutch crawl script command on that page. You have to change it a bit for Elasticsearch: change
solr.server.url=http://localhost:8983/solr/
to something like
elastic.server.url=http://localhost:9300/yourelasticindex/
So basically there are 2 steps:
1. Configure Elasticsearch to work with Nutch (click on the first link above).
2. Change the new CLI command for Solr to work with Elasticsearch (its default is Solr).
Hope that helps!
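Putting both steps together, the crawl command from the question would end up looking something like this (port 9300 and the index name follow the answers above, so treat them as assumptions to match against your own nutch-site.xml):
bin/crawl -i -D elastic.server.url=http://localhost:9300/yourelasticindex/ urls/ elasticTestCrawl/ 2   # seed dir, crawl dir, 2 rounds, as in the question's update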

Elastic Search JDBC River Plugin SQL Server Integrated Security

So I've been working on implementing elastic search using the JDBC River plugin to get data from our SQL Server DB into elastic search.
I've got it working fine using the SQL Server credentials, but trying to use integrated security doesn't work. It will create the index, but it doesn't have data in it.
The parameters I've been using are:
PUT /_river/test_river/_meta
{
  "type": "jdbc",
  "jdbc": {
    "driver": "com.microsoft.sqlserver.jdbc.SQLServerDriver",
    "url": "jdbc:sqlserver://testServer:1433;databaseName=TestDb;integratedSecurity=true;",
    "user": "",
    "password": "",
    "sql": "select * from users",
    "poll": "30s",
    "index": "testindex",
    "type": "testusers"
  }
}
I've tried quite a few things, including removing the user and password fields completely, removing integratedSecurity=true, but it gave the same result.
I've checked on their github for the river plugin and it says this issue was fixed back in January, but it still doesn't seem to be working.
Also, I'm using Elasticsearch version 1.5.1 and JDBC river plugin version 1.4.0.10.
Any help would be much appreciated!
Get rid of the user and password options. You're not gonna need them.
Check the console when running elasticsearch.bat; you should see an error message when it tries to update the river. I'm going to go out on a limb and assume you're probably seeing an error stating that the file sqljdbc_auth.dll can't be found. If this is the case, you can download this file from here and copy the x64 version of sqljdbc_auth.dll to your Java lib folder. For me, this folder is C:\ProgramData\Oracle\Java\javapath but you can type echo %path% in a console window to find yours.
Once you have followed these steps, restart elasticsearch.bat, and it should start processing your river. If not, post back with the output you're seeing when running elasticsearch.bat.
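A rough sketch of those steps from a Windows console (the source path for the DLL is a placeholder; point it at wherever you extracted the Microsoft JDBC driver download, and run the last command from your Elasticsearch install directory):
rem copy the 64-bit native auth library onto the Java path
copy "C:\path\to\x64\sqljdbc_auth.dll" "C:\ProgramData\Oracle\Java\javapath"
rem confirm that folder is actually on the PATH
echo %path%
rem then restart Elasticsearch
bin\elasticsearch.bat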
