I am getting this error message when reindexing, and I want to inspect the internal server error it reports.
[2014-12-11 10:37:43 +0000] Start Indexing
Error - RSolr::Error::Http - 500 Internal Server Error - retrying...
Error - RSolr::Error::Http - 500 Internal Server Error - ignoring...
[2014-12-11 10:37:44 +0000] Completed Indexing. Rows indexed 850. Rows/sec: 36.41987405255847 (Elapsed: 1.372876796 sec.)
Websolr support here. Websolr doesn't (yet) provide user-accessible logs. Your best bet is to send an email to support@websolr.com and we can get you the info you need.
Run something like:
locate solr.log
The log files are normally located in the root of your Solr installation, inside a folder called logs.
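If you are self-hosting, a minimal sketch for finding and then following the log (the paths below are assumptions and vary by Solr version and install layout):

find / -name "solr.log" 2>/dev/null          # search for the log file; sudo may be needed
tail -f /path/to/solr/logs/solr.log          # follow it while re-running the reindex to see the underlying 500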
My logs on a publicly accessible server are bloated with this error:
Error 2022-10-11 22:11:50 staging [0] "" on line 44 of file /var/www/html/vendor/laravel/framework/src/Illuminate/Routing/AbstractRouteCollection.php
That line just throws a \Symfony\Component\HttpKernel\Exception\NotFoundHttpException, basically a 404.
I suspect the huge volume is due to attempts at hacking the webserver (as reflected in the Apache access logs).
First of all, can I add information about the URL that's failing?
Second, can anyone suggest a way to reduce the number of these errors (especially since I use a monitoring app and get charged for these events)?
Thanks
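One possible way to recover the failing URLs without touching the application: the Laravel error line carries a timestamp but no URL, while the Apache access log carries both, so the two can be correlated. A minimal sketch, assuming a default combined log format and a typical log path (note the two logs use different timestamp formats):

grep '11/Oct/2022:22:11' /var/log/apache2/access.log \
  | awk '$9 == 404 {print $1, $7}' | sort | uniq -c | sort -rn | head
# lists requesting IP and path for every 404 logged in that minute; pick a minute matching an error burst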
I keep getting the same error when starting the Akeneo Community Edition! It seems to be caused by Elasticsearch, but I cannot figure out what is wrong.
The error message:
[OK] Database schema created successfully!
Updating database schema...
37 queries were executed
[OK] Database schema updated successfully!
Reset elasticsearch indexes
In StaticNoPingConnectionPool.php line 50:
No alive nodes found in your cluster
I'm running on an Uberspace server without Docker, and I'm trying to start it as described here:
https://docs.akeneo.com/4.0/install_pim/manual/installation_ee_archive.html but with the Community Edition instead.
Has anyone had the same error and knows how to help me out?
Maybe it's a problem with the .env entry for the Elasticsearch endpoint. My .env: APP_INDEX_HOSTS=localhost:9200
Can you verify that the Elasticsearch server is available on localhost:9200 by accessing it via curl, Postman, Sense, or something similar?
That error usually means the node is either not running, or not running on the configured port.
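For example, a quick check from the shell, using the host and port from the .env above:

curl -s http://localhost:9200                      # should return a JSON banner with the cluster name and version
curl -s http://localhost:9200/_cluster/health      # should report a green or yellow status
# If neither responds, the Elasticsearch process is probably not running or is listening on a different port.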
Also pay attention that your server follows the system requirements - https://docs.akeneo.com/4.0/install_pim/manual/system_requirements/system_requirements.html
I am running a load test in JMeter with 200 users. Around 10 percent of the requests sent for each sampler fail with a 404 - Not Found status code. However, if I run the test with a load of 100 users I do not encounter any 404 errors. Please advise me on what the issue could be and a possible solution.
This is purely a server-side issue; some applications handle errors under load in odd ways.
So you would need to:
analyze access logs (see the sketch after this list)
add monitoring and APM to diagnose
check error logs
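A minimal starting point for the access-log part (the log path and combined-log format are assumptions):

awk '{print $9}' /var/log/nginx/access.log | sort | uniq -c | sort -rn   # count responses per HTTP status code
# Compare the breakdown between the 100-user and 200-user runs; if the 404s only appear at the higher
# load, check which URLs they hit and whether the server or a proxy in front of it starts rejecting
# or mis-routing requests at that concurrency.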
I get this error sometimes when trying to save things to Parse or to fetch data from it.
This is not constant and appears once in a while, causing the operation to fail.
I have contacted Parse about it. Here is their answer:
Starting on 4/28/2016, apps that have not migrated their database may see a "428" error code if the request cannot be handled by the remaining shared pool of resources. If you see this error in your logs, we highly recommend migrating the database for your app without delay.
This means that, starting on that date, all apps are given low priority except those that have started the DB migration. So migrating the database should resolve it.
I am currently using Nutch 2.2.1 and HBase 0.90.4. I am expecting around 300K URLs from about 10 seed URLs; I have already generated that many with Nutch 1.6. Since I want to manipulate the data, I preferred to go the Nutch 2.2.1 + HBase route. But I get all sorts of weird errors and the crawl doesn't seem to progress.
Various errors such as:
zookeeper.ClientCnxn - Session for server null, unexpected error, closing socket connection and attempting reconnect. - I get this one most frequently
bin/crawl: line 164: killed - I get this from the fetch step, and the crawl gets killed all of a sudden.
RSS parse error
I am using an all-in-one crawl command: bin/crawl urls 1 http://localhost:8983/solr/ 10
bin/crawl <seed-dir> <crawl-id> <solr-url> <number-of-rounds>
Please suggest where I am going wrong. I have Nutch 2.2.1 and standalone HBase installed as per the quick start guide recommended on the Nutch site. I am not sure whether the HBase 0.90.4 standalone setup from the quick start guide is sufficient to handle 300K crawled URLs.
Edit # 1: RSS Parse Error - log information
Error tika.TikaParser - Error parsing http://www.###.###.##/###/abc.xml
org.apache.tika.exception.TikaException: RSS parse error
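Separately from the RSS parse error, the "Session for server null" message usually indicates that the crawl job cannot reach HBase's ZooKeeper. A quick sanity check, assuming standalone HBase with its bundled ZooKeeper on the default port 2181:

jps                              # a running standalone HBase should list an HMaster process
echo ruok | nc localhost 2181    # ZooKeeper should answer "imok" if it is up and listening
# If either check fails, start HBase (bin/start-hbase.sh) before launching the crawl, and make sure
# the hbase-site.xml on Nutch's classpath points at the same ZooKeeper quorum and client port.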