I have an "Elasticsearch + Kibana" instance which has a lot of data. I have another HAProxy instance which redirects connections to the Kibana dashboard.
I am having an issue where the Kibana dashboard isn't able to search (*); it takes too much time and eventually throws this:
I want to know why this is happening, and what exactly "Bad Gateway" means. Moreover, what can be done to solve this?
Found the solution: it was an Elasticsearch tuning issue. I had to allocate half of the memory to ES_HEAP_SIZE and make some tweaks in elasticsearch.yml so that the JVM heap cannot be swapped out.
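For reference, a minimal sketch of what that tuning looked like, assuming an 8 GB host and a 1.x/2.x-era Elasticsearch where ES_HEAP_SIZE and bootstrap.mlockall are the relevant knobs (newer releases use jvm.options and bootstrap.memory_lock instead):

# give Elasticsearch roughly half of the machine's RAM (hypothetical 8 GB host)
export ES_HEAP_SIZE=4g

# elasticsearch.yml -- lock the heap in memory so the OS cannot swap it out
bootstrap.mlockall: true

If mlockall is refused at startup, the memlock ulimit for the user running Elasticsearch usually has to be raised as well.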
I got the same problem; I restarted my browser and then it went back to normal again.
I am using ELK 7.1.1 with X-Pack installed.
I am trying to run a GET command in the Kibana Dev Console to get the list of all snapshots:
GET _cat/snapshots/<myrepositoryname>/
Output:
{
"statusCode": 504,
"error": "Gateway Time-out",
"message": "Client request timeout"
}
and I also tried
GET _cat/snapshots/<myrepositoryname>/?waitforcompletion=true
But it's not working. Please help me solve it.
Old question, but maybe it helps others. To increase the timeout, you can set the timeout parameter as a query parameter, like:
POST my_index/_search?timeout=9000s
The s stands for seconds; you can use other time units as well.
I am not aware of a way to set a query-specific timeout using the Elasticsearch Query DSL. Also, that option doesn't seem to be dynamically updatable at all (I got an illegal_argument_exception when I tried to update it via the _cluster/settings API).
So the only way I know of to increase the period to wait for a response is to increase the value of the timeout setting in your elasticsearch.yml configuration file.
However, I would suggest (1) checking whether the resources (RAM, CPU) you assigned to your cluster are sufficient, and (2) adopting a naming/lifecycle convention for your snapshots, so that you have a more fine-grained way to filter them than just grouping them by repository name (e.g., with a naming convention like <year>-<month>-<day_time>-snapshot you could narrow your search down to something like GET _cat/snapshots/<myrepositoryname>/2020-January-*).
PS: The wait_for_completion query parameter only blocks the request until a response from the server is received; it has nothing to do with the timeout.
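If the _cat endpoint does not accept a snapshot-name filter in your version, the regular snapshot API does support wildcards on the snapshot name; a minimal sketch with a hypothetical repository called my_backup:

# create a snapshot whose name encodes the date (hypothetical names; runs in the background by default)
PUT _snapshot/my_backup/2020-january-15-snapshot

# later, fetch only the January 2020 snapshots instead of listing the whole repository
GET _snapshot/my_backup/2020-january-*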
I tried to play with the LowCardinality setting, and I got a message saying that this is an experimental feature and that I have to SET allow_experimental_low_cardinality_type = 1 in order to use it.
I executed this command inside clickhouse-client and then I restarted the server. But I got
clickhouse-server.service: Unit entered failed state
Now I am trying to find out how to disable this setting and make my clickhouse-server start again.
Can you help me with this, please?
PS: The version I use is 18.12.17, installed on Ubuntu 16.04.
ClickHouse has different layers for settings. If you used SET <setting> = <value>, then you set it for the current session only; you don't need to restart ClickHouse for it to take effect. Please, take a look here.
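A minimal sketch of that session-level behaviour, using the setting name from the question (the system.settings query just shows what the current session sees):

-- applies only to the current clickhouse-client session; no restart needed
SET allow_experimental_low_cardinality_type = 1;

-- inspect the value the current session is using
SELECT name, value, changed FROM system.settings
WHERE name = 'allow_experimental_low_cardinality_type';

If you want the setting to survive across sessions, it would normally go into a settings profile in users.xml rather than a one-off SET statement.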
I suppose you ran into a different problem while starting your server; there are a bunch of possible reasons. So, first try to recall what was changed in the configs since the last restart (because by restarting the server you have just applied those changes).
Digging into the logs is also a good idea. Don't hesitate to check other similar issues on github.com, for example this one.
I am facing a strange problem with Solr. After running Solr for a few hours, the client starts reporting an error message saying that it is unable to contact Solr, although the Solr instance is up on the server.
I can't see any high traffic on the website, which is sometimes the reason for connection refusals.
This issue gets fixed after a Solr restart.
Any idea what is going wrong here?
The answer to most problems can be found in the logs. Thanks to D_K for reminding me. In my case the logs showed:
SEVERE: java.lang.OutOfMemoryError: Java heap space
java.lang.OutOfMemoryError: Java heap space
I have increased the heap size to fix this issue.
java -Xms<initial heap size> -Xmx<maximum heap size>
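For example, with hypothetical heap values and an old-style Solr start command (newer releases set the heap via bin/solr start -m instead):

# old-style start via the bundled Jetty, 512 MB initial / 2 GB maximum heap
java -Xms512m -Xmx2048m -jar start.jar

# roughly equivalent on newer Solr releases
bin/solr start -m 2g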
Also, we have reduced the document size by removing unnecessary information which we don't need to retain in Solr.
If you have a client with long running connection but low amount of traffic, you may have a firewall in between. Firewalls have limited-size routing tables, so they eventually drop the mapping for connections they haven't seen for a while.
Try sending a ping query every 30 minutes or so through that specific connection and see if the issue goes away. If you need to validate it, run Wireshark on the client and see whether the client is getting RST (reset) packets from an unexpected endpoint (that would be the firewall).
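Ideally the ping goes over the client's own pooled connection; as a rough stand-in, a periodic request against Solr's ping handler (hypothetical host and core name) at least shows whether the path between client and server is being dropped:

# crontab entry on the client machine: hit the ping handler every 30 minutes
*/30 * * * * curl -s "http://solr-host:8983/solr/mycore/admin/ping" > /dev/null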
You just need to add your collection in Solr by following the steps given in this URL ( https://drupal.stackexchange.com/questions/95897/apache-solr-4-6-0-insta... ), then select your collection from the Solr instance running on localhost or on the live site (http://localhost:8983/solr/) and go to the schema tab. On the schema tab you can see the schema file attached in the apachesolr module.
Now you just need your schema URL, which looks like this: http://localhost:8983/solr/your_core_name/. Add this URL in the apachesolr module.
It will then show that your site has contacted the Apache Solr server from your Drupal site.
This is a concern about the Admin console's performance in WebSphere Application Server.
I can log in smoothly without any problem, but the console becomes very slow to respond when doing operations such as showing node status by clicking "Nodes" under "System administration", or showing application server status by clicking "Application Servers" under "Servers". The funny thing is that the problem is more serious on the remote nodes than on the local nodes, which sit on the same box as the DmgrNode.
So I suspect it is a problem with the network communication between the DmgrNode and the remote nodes, but I don't know how to fix it.
Has anybody run into the same issue? Any ideas on how to figure it out? Any help would be much appreciated.
When the console gets the list of servers to display, it does make mbean calls to all of the servers, and if there is a network problem between the dmgr and the node, this could cause some delay in displaying the server page. The nodes page, however, should not have that issue. What is your topology? How many nodes/servers and how many are local/remote?
How can you tell the problem is more serious on the remote nodes than the local nodes?
Are other console operations slower, or only ones that display status? Do you have the same problem with wsadmin commands? The console issues a queryNames to search for the server mbeans. Does the following wsadmin command run much more slowly on the remote node than the local node? If so, how much more slowly?
print AdminControl.queryNames('WebSphere:type=Server,node=myNode')
Replace myNode with either the local or remote node name.
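For what it's worth, one way to run and time that from the deployment manager machine (myNode is the placeholder from the snippet above):

# run from the dmgr profile's bin directory; compare the runtime for a local vs. a remote node name
./wsadmin.sh -lang jython -c "print AdminControl.queryNames('WebSphere:type=Server,node=myNode')"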
I assume you are using IE to fire up the console. Fire the console from Chrome and see if it helps.
I started to mess around with an EC2 "Micro Instance" for a new site I'm working on. I put an Ubuntu LAMP server on it, loaded up our favorite PHP framework, and started along the coding path.
One frustrating thing I'm finding is that whenever I make a coding mistake (which is rare! j/k), it gives me a "Server Error 500" and won't display the PHP error line number or the helpful references to where the mistake might have happened.
Also, whenever an error does appear and I try to fix the mistake, it remains the same for a couple of minutes. It's like it's caching on my system or something. If I do something like this:
echo "test" //leaving off the semicolon
and refresh the browser, it comes up with the error. Then when I fix it:
echo "foo"; //corrected
I still get the Server Error 500. Not sure if anyone else has run into these issues. Maybe it's a php.ini configuration issue, an .htaccess configuration issue (I'm using Paul Irish's HTML5 Boilerplate .htaccess code), or a LAMP configuration issue. Any pointers to where the problem might lie would be a huge help.
Thanks! Steve
This has nothing to do with EC2.
See the PHP error display directives in /etc/php5/apache2/php.ini.
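A minimal sketch of the relevant php.ini changes for that Ubuntu/Apache setup (development settings only; display_errors should stay off in production), followed by an Apache restart so they take effect:

; /etc/php5/apache2/php.ini -- development settings
display_errors = On
error_reporting = E_ALL

# then reload Apache so the new settings are picked up
sudo service apache2 restart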