Akeneo PIM No alive nodes found in your cluster ERROR - elasticsearch

I keep getting the same error when starting the Akeneo Community Edition! It seems to be caused by Elasticsearch, but I cannot figure out what is wrong.
The Error message:
[OK] Database schema created successfully!
Updating database schema...
37 queries were executed
[OK] Database schema updated successfully!
Reset elasticsearch indexes
In StaticNoPingConnectionPool.php line 50:
No alive nodes found in your cluster
I'm running on an Uberspace server without Docker, and I'm trying to install it as described here:
https://docs.akeneo.com/4.0/install_pim/manual/installation_ee_archive.html but with the Community Edition instead.
Has anyone had the same error and knows how to help me out?
Maybe it's a problem with the .env entry that points at Elasticsearch. My .env contains: APP_INDEX_HOSTS=localhost:9200

Can you verify that the Elasticsearch server is available on localhost:9200 when accessing it via curl/Postman/Sense or something else?
That error usually means the node is either not running, or not running on the configured port.
Also pay attention that your server meets the system requirements: https://docs.akeneo.com/4.0/install_pim/manual/system_requirements/system_requirements.html
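A quick way to check, using the host and port from the .env entry above:
curl http://localhost:9200
A running node answers with a small JSON document containing the node name, cluster name, and version; if the connection is refused, Elasticsearch is not listening on that address and Akeneo will keep reporting that no alive nodes were found.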

Related

couldn't connect to ElasticSearch inside GetCandy

I am following the documentation here: https://getcandy.io/docs/master/guides/introduction/01-installation
but when I got to the point of running this command:
php artisan candy:search:index
I get the exception listed here:
Elastica\Exception\Connection\HttpException : Couldn't connect to host, Elasticsearch down?
It sounds most likely that Elasticsearch isn't running properly, rather than this being an issue with GetCandy.
If you run the following you should be able to determine if Elasticsearch is up.
curl localhost:9200
If you get a response with the Elasticsearch version etc, it is running. If it's not running, you'll need to check the Elasticsearch logs, normally found somewhere like /var/log/elasticsearch/
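If it's down, checking the service state and tailing the log usually points at the cause. A sketch assuming a Linux package install managed by systemd (unit name and paths may differ on your machine):
sudo systemctl status elasticsearch
sudo tail -n 50 /var/log/elasticsearch/elasticsearch.log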

Search fails after upgrade to TFS 2018 Update 2

After performing an upgrade of a TFS server to 2018 Update 2, search and indexing seem to be broken in one of our environments.
Any search gives "We encountered an unexpected error when processing your request", and I have worked through all the troubleshooting docs to clean and reindex all collections. I also completely reinstalled the search package on the separate server we run for search and indexing, to make sure we have the correct version running.
In the event logs on the TFS App Server a large number of these exceptions are logged:
Events (81277) completed with status FailedAndRetry. Event 81277 completed with message 'BeginBulkIndex-PushEventNotification: The operation did not complete successfully because of exception
Microsoft.VisualStudio.Services.Search.Common.FeederException: Lots of files rejected by Elasticsearch, failing this job. Failure Reason:
Microsoft.VisualStudio.Services.Search.Common.SearchPlatformException: ES Exception: [HTTP Status Code: [200] BULK_API_ERROR: [ index returned 404 _index: codesearchshared_1_0 _type: SourceNoDedupeFileContractV3 _version: 0 error: Type: type_missing_exception Reason: "type[SourceNoDedupeFileContractV3] missing"
Another exception type, also logged many times, indicates a failure to index work items:
Microsoft.VisualStudio.Services.Search.Common.SearchPlatformException: ES Exception: [HTTP Status Code: [200] BULK_API_ERROR: [
update returned 404 _index: workitemsearchshared_0_2 _type: workItemContract _version: 0 error: Type: type_missing_exception Reason: "type[workItemContract] missing"
update returned 404 _index: workitemsearchshared_0_2 _type: workItemContract _version: 0 error: Type: type_missing_exception Reason: "type[workItemContract] missing"
The exceptions seem to indicate that some type registrations are missing, such as workItemContract and SourceNoDedupeFileContractV3, but I cannot find any errors in the search server installation logs.
Does anyone have suggestions on how to solve this and get Elasticsearch back into a working state?
We resolved the situation by completely uninstalling and then reinstalling everything related to search:
Uninstalled all Code/Work/Wiki search extensions from all collections via the extension management in the web admin
Removed the TFS Search Service feature from the TFS Admin Console
Uninstalled the Elasticsearch service from the separate search server, using the PowerShell script .\Configure-TFSSearch.ps1 -Operation remove
Restarted the TFS Job Agent service
Deleted old search-related database content from ALL collection databases using:
DELETE FROM [Search].[tbl_IndexingUnit]
DELETE FROM [Search].[tbl_IndexingUnitChangeEvent]
DELETE FROM [Search].[tbl_IndexingUnitChangeEventArchive]
DELETE FROM [Search].[tbl_JobYield]
DELETE FROM [Search].[tbl_TreeStore]
DELETE FROM [Search].[tbl_DisabledFiles]
DELETE FROM [Search].[tbl_ItemLevelFailures]
DELETE FROM [Search].[tbl_ResourceLockTable]
Restarted the TFS Job Agent service
Rebooted the search server
Ran the Configure Search feature wizard from the TFS Admin Console, using the existing search server
Installed the search package according to the instructions, using the PowerShell script:
.\Configure-TFSSearch.ps1 -Operation install -TFSSearchInstallPath D:\ES -TFSSearchIndexPath D:\ESDATA -Port 9200 -Verbose
Completed the Search configuration wizard from the TFS Admin Console, enabling code search for all existing collections
Checked that the services were running and tested searching from the web application: it works!
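As a verification step, the Elasticsearch catalog API on the search server can be queried directly (port 9200 as configured in the install command above; this assumes you run it on the search server itself):
curl "http://localhost:9200/_cat/indices?v"
Once reindexing kicks in, index names like codesearchshared_* and workitemsearchshared_* from the error messages above should reappear.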
According to your error info and TFS version, this issue is similar to Unable to start search after upgrade to TFS 2018 Update 2.
Try the solution in the question below:
It seemed that I had an invalid/problematic setting in the following registry key that the update/install did not fix:
HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Apache Software Foundation\Procrun 2.0\elasticsearch-service-x64\Parameters\Java\Options
Value contained:
-Xms0m
-Xmx0m
Changing both from '0m' to '1g' fixed the issue. As far as I understand, '0m' defaults to 4 GB, which collides with my server only having 2.8 GB of RAM. I will upgrade the server to meet the minimum requirements.
Maybe the configuration tool could warn about this problem or set the value to something that is actually possible.
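For reference, after that change the registry value would contain the two JVM heap options below (the 1g value is the one from the quote; size it to the RAM actually available on your server):
-Xms1g
-Xmx1g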
Also take a look at this article, which may be helpful: Elasticsearch 6.2.2 fails to run as a windows service
[This post refers to an Azure DevOps Server 2019 to 2020 upgrade]
I also got
"We encountered an unexpected error when processing your request"
for any search. This was after migrating Azure DevOps Server 2019 to 2020 on a new box (I was unable to perform the upgrade on the same box).
http://localhost:9200/_cat/health?v showed the only cluster, TFS_Search_AZURE-DEVOPS, in red status.
In my particular case everything runs on the same box: Windows Server 2019 + SQL Server 2019 + the search service.
I followed Per Salmi's instructions, but unfortunately that did not resolve the issue.
The solution in my case was rebuilding the Elasticsearch indexes (details and scripts below):
Reindex repo
To reindex a Git or TFVC repository, execute the appropriate version of the script Re-IndexingCodeRepository.ps1 for your Azure DevOps Server or TFS version with administrative privileges. You're prompted to enter: [my values in brackets]
The SQL server instance name where the Azure DevOps Server or TFS configuration database is [azure-devops]
The name of the Azure DevOps Server or TFS collection database [Tfs_DefaultCollection]
The name of the Azure DevOps Server or TFS configuration database [Tfs_Configuration]
The type of reindexing to execute. Can be one of the following values: Git_Repository TFVC_Repository [Git_Repository]
The name of the collection [DefaultCollection]
The name of the repository to reindex. [your repo name here]
Reindex collection
To reindex a collection, execute the script TriggerCollectionIndexing.ps1 with administrative privileges (see the invocation sketch after this list). You're prompted to enter: [my values in brackets]
The SQL server instance name where the Azure DevOps Server or TFS configuration database is [azure-devops]
The name of the Azure DevOps Server or TFS collection database [Tfs_DefaultCollection]
The name of the Azure DevOps Server or TFS configuration database [Tfs_Configuration]
The name of the collection [DefaultCollection]
The entities to reindex. Can be one of the following values: All Code WorkItem Wiki [All]
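Both scripts are run the same way from an elevated PowerShell prompt; a minimal sketch, assuming the downloaded scripts live in C:\Scripts (the prompt answers are the bracketed values from the lists above):
cd C:\Scripts
.\TriggerCollectionIndexing.ps1
Answer the prompts with, e.g., azure-devops, Tfs_DefaultCollection, Tfs_Configuration, DefaultCollection, All.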
The scripts above queue reindexing jobs that typically take a few minutes to complete.
Right after execution I was getting an encouraging "We are not able to show results because one or more projects in your organization are still being indexed".
After a few minutes results started to come in, with http://localhost:9200/_cat/health?v showing the only cluster, TFS_Search_AZURE-DEVOPS, in green status.
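The same health endpoint can be polled from a shell while the jobs run; the status column should move from red to green as indexing completes:
curl "http://localhost:9200/_cat/health?v"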

Mesos framework stays inactive due to "Authentication failed: EOF"

I'm currently trying to deploy Eremetic (version 0.28.0) on top of Marathon, using the configuration provided as an example. I was actually able to deploy it once, but after trying to redeploy it, the framework now stays inactive.
By inspecting the logs I noticed a constant attempt to connect to some service that apparently never succeeds because of some authentication problem.
2017/08/14 12:30:45 Connected to [REDACTED_MESOS_MASTER_ADDRESS]
2017/08/14 12:30:45 Authentication failed: EOF
It looks like the service returning the error is ZooKeeper, and more precisely the error can be traced back to this line in the Go ZooKeeper library. ZooKeeper itself, however, seems to work: I've queried it directly with zkCli and run a small Spark job (with the Mesos master given as a zk:// URL), and everything works.
Unfortunately I'm not able to diagnose the problem further. What could it be?
It turned out to be a configuration problem. The master URL was simply wrong, and this is just how the error was reported.
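For reference, the Mesos master is typically given either as host:port or as a ZooKeeper URL naming the ensemble and the znode path; a typo in either form can surface as an opaque EOF like the one above. The zk:// form looks like this (hostnames are placeholders):
zk://zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181/mesos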

Cassandra start error - Exception encountered during startup: The seed provider lists no seeds

I am thus far unable to successfully run Cassandra, and I have arrived at a point in my efforts where I believe it is more efficient to reach out for help.
Installation method: datastax-ddc-64bit-3.9.0.msi
OS: Windows 7
Symptoms:
cmd> net start DataStax-DDC-Server
results in cmd output 'service is starting' and 'service was started successfully'.
datastax_ddc_server-stdout.log has this subsequent output, which is likely relevant:
WARN 10:38:17 Seed provider couldn't lookup host *myIPaddress*
Exception (org.apache.cassandra.exceptions.ConfigurationException) encountered during startup: The seed provider lists no seeds.
ERROR 10:38:17 Exception encountered during startup: The seed provider lists no seeds.
cmd> nodetool status
results in the following error message:
Error: Failed to connect to '127.0.0.1:7199': Connection refused
I would also like to note that the Cassandra CQL Shell closes immediately after I open it. I think it quickly flashes an error similar to the one above.
Please be patient with me if I have included some useless information or am not approaching my issue from the correct perspective. I have not worked with Apache Cassandra before, nor have I configured a machine to facilitate an installation of any database engine.
Any help/insight is much appreciated,
Thanks!
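For what it's worth, the exception points at the seed_provider section of cassandra.yaml (in the conf directory of the DataStax installation). The "couldn't lookup host" warning suggests the configured seed address cannot be resolved, which leaves the provider with an empty seed list, exactly the startup exception reported. A minimal single-node sketch, assuming the node should seed from itself:
seed_provider:
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          - seeds: "127.0.0.1"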

DB2 Full Text Search IQQD0040E Error

I have a production database running DB2 10.1.2 Workgroup (OpenSuse 12.2) and Full Text Search is running pretty well there. Now I'm trying to build a test environment, but when I restore the production backup onto the test machine with 10.1.2 Express-C, FTS presents this error:
<message>IQQD0040E The client specified the wrong authentication token.
com.ibm.es.nuvo.inyo.common.InyoFactoryWrapper.authenticate(InyoFactoryWrapper.java:203)
com.ibm.es.nuvo.inyo.common.InyoFactoryWrapper.getHandler(InyoFactoryWrapper.java:85)
com.ibm.es.nuvo.inyo.common.InyoServer$InyoListener.run(InyoServer.java:425)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1121)
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:614)
java.lang.Thread.run(Thread.java:769)</message>
The Redbook tells me the cause of this error is: "Usually this error occurs when there are 2 or more text search instances configured with the same port number and one instance is already running".
I've already searched for other instances, but I've only found one. So "usually" does not apply to my situation.
Anyone know what else I can do to fix that?
Best regards,
jacker
I've found a solution. When the backup is moved to a new DB2 instance, the FTS application secures its communication with a token. After the restore, we just need to go to the FTS bin directory, commonly /home/db2inst1/db2tss/bin, and run this command:
configTool generateToken -seed <username> -configPath ~/sqllib/db2tss/config
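After regenerating the token, the text search instance usually has to be restarted so it picks up the new token; a hedged sketch using the standard db2ts tool:
db2ts "STOP FOR TEXT"
db2ts "START FOR TEXT"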
Hope this helps anyone who runs into this trouble.
Regards.
