Elasticsearch allocates primary shards only on the first node in the cluster
I have a 5-node Elasticsearch cluster for storing logs (4 master-eligible data nodes and 1 master-only node),
and all newly created primary shards are allocated only on the first node in the cluster, so the first node is almost completely overloaded on CPU.
Why does this happen? The nodes have identical specs (CPU, RAM, etc.).
Here is my _cat/allocation output (I manually move shards off elastic-01 every morning, so elastic-01 holds fewer shards than the others, but the load on elastic-01 is still high):
shards disk.indices disk.used disk.avail disk.total disk.percent host ip node
354 4.9tb 4.9tb 804.7gb 5.7tb 86 10.0.5.22 10.0.5.22 elastic-01
435 4tb 4tb 1.7tb 5.7tb 70 10.0.5.23 10.0.5.23 elastic-02
434 4.6tb 4.6tb 1tb 5.7tb 80 10.0.5.27 10.0.5.27 elastic-06
434 4.7tb 4.7tb 1014.8gb 5.7tb 82 10.0.5.28 10.0.5.28 elastic-07
And here is _cat/nodeattrs:
node host ip attr value
elastic-07 10.0.5.28 10.0.5.28 ml.machine_memory 42002579456
elastic-07 10.0.5.28 10.0.5.28 ml.max_open_jobs 512
elastic-07 10.0.5.28 10.0.5.28 xpack.installed true
elastic-07 10.0.5.28 10.0.5.28 ml.max_jvm_size 27917287424
elastic-07 10.0.5.28 10.0.5.28 zone zone2
elastic-07 10.0.5.28 10.0.5.28 transform.node true
elastic-01 10.0.5.22 10.0.5.22 ml.machine_memory 42002583552
elastic-01 10.0.5.22 10.0.5.22 ml.max_open_jobs 512
elastic-01 10.0.5.22 10.0.5.22 xpack.installed true
elastic-01 10.0.5.22 10.0.5.22 ml.max_jvm_size 27917287424
elastic-01 10.0.5.22 10.0.5.22 zone zone1
elastic-01 10.0.5.22 10.0.5.22 transform.node true
elastic-02 10.0.5.23 10.0.5.23 ml.machine_memory 42407346176
elastic-02 10.0.5.23 10.0.5.23 ml.max_open_jobs 512
elastic-02 10.0.5.23 10.0.5.23 xpack.installed true
elastic-02 10.0.5.23 10.0.5.23 ml.max_jvm_size 27917287424
elastic-02 10.0.5.23 10.0.5.23 zone zone1
elastic-02 10.0.5.23 10.0.5.23 transform.node true
elastic-06 10.0.5.27 10.0.5.27 ml.machine_memory 42002579456
elastic-06 10.0.5.27 10.0.5.27 ml.max_open_jobs 512
elastic-06 10.0.5.27 10.0.5.27 xpack.installed true
elastic-06 10.0.5.27 10.0.5.27 ml.max_jvm_size 27917287424
elastic-06 10.0.5.27 10.0.5.27 zone zone2
elastic-06 10.0.5.27 10.0.5.27 transform.node true
elastic-11 10.0.25.24 10.0.25.24 ml.machine_memory 12428124160
elastic-11 10.0.25.24 10.0.25.24 ml.max_open_jobs 512
elastic-11 10.0.25.24 10.0.25.24 xpack.installed true
elastic-11 10.0.25.24 10.0.25.24 ml.max_jvm_size 2147483648
elastic-11 10.0.25.24 10.0.25.24 zone zone3
elastic-11 10.0.25.24 10.0.25.24 transform.node false
And here is the output from GET _ilm/policy:
{".alerts-ilm-policy":{"version":19,"modified_date":"2022-08-23T04:29:09.081Z","policy":{"phases":{"hot":{"min_age":"0ms","actions":{"rollover":{"max_primary_shard_size":"50gb","max_age":"30d"}}}}},"in_use_by":{"indices":[],"data_streams":[],"composable_templates":[]}},".deprecation-indexing-ilm-policy":{"version":1,"modified_date":"2022-01-12T18:09:22.265Z","policy":{"phases":{"hot":{"min_age":"0ms","actions":{"rollover":{"max_primary_shard_size":"10gb","max_age":"14d"}}}}},"in_use_by":{"indices":[".ds-.logs-deprecation.elasticsearch-default-2022.04.06-000007",".ds-.logs-deprecation.elasticsearch-default-2022.03.09-000005",".ds-.logs-deprecation.elasticsearch-default-2022.02.09-000003",".ds-.logs-deprecation.elasticsearch-default-2022.02.23-000004",".ds-.logs-deprecation.elasticsearch-default-2022.06.29-000013",".ds-.logs-deprecation.elasticsearch-default-2022.07.13-000014",".ds-.logs-deprecation.elasticsearch-default-2022.08.10-000016",".ds-.logs-deprecation.elasticsearch-default-2022.07.27-000015",".ds-.logs-deprecation.elasticsearch-default-2022.01.26-000002",".ds-.logs-deprecation.elasticsearch-default-2022.08.24-000017",".ds-.logs-deprecation.elasticsearch-default-2022.01.12-000001",".ds-.logs-deprecation.elasticsearch-default-2022.06.15-000012",".ds-.logs-deprecation.elasticsearch-default-2022.04.20-000008",".ds-.logs-deprecation.elasticsearch-default-2022.05.04-000009",".ds-.logs-deprecation.elasticsearch-default-2022.03.23-000006",".ds-.logs-deprecation.elasticsearch-default-2022.06.01-000011",".ds-.logs-deprecation.elasticsearch-default-2022.05.18-000010"],"data_streams":[".logs-deprecation.elasticsearch-default"],"composable_templates":[".deprecation-indexing-template"]}},".fleet-actions-results-ilm-policy":{"version":1,"modified_date":"2022-01-12T17:12:45.443Z","policy":{"phases":{"hot":{"min_age":"0ms","actions":{"rollover":{"max_size":"300gb","max_age":"30d"}}},"delete":{"min_age":"90d","actions":{"delete":{"delete_searchable_snapshot":true}}}}},"in_us
e_by":{"indices":[],"data_streams":[],"composable_templates":[]}},".items-forteforex":{"version":1,"modified_date":"2022-07-22T05:20:41.438Z","policy":{"phases":{"hot":{"min_age":"0ms","actions":{"rollover":{"max_size":"50gb"}}}}},"in_use_by":{"indices":[".items–000001"],"data_streams":[],"composable_templates":[]}},".lists":{"version":1,"modified_date":"2022-07-22T05:20:41.356Z","policy":{"phases":{"hot":{"min_age":"0ms","actions":{"rollover":{"max_size":"50gb"}}}}},"in_use_by":{"indices":[".lists-forteforex-000001"],"data_streams":[],"composable_templates":[]}},"180-days-default":{"version":1,"modified_date":"2022-01-12T18:09:22.121Z","policy":{"phases":{"warm":{"min_age":"2d","actions":{"forcemerge":{"max_num_segments":1},"shrink":{"number_of_shards":1}}},"cold":{"min_age":"30d","actions":{}},"hot":{"min_age":"0ms","actions":{"rollover":{"max_primary_shard_size":"50gb","max_age":"30d"}}},"delete":{"min_age":"180d","actions":{"delete":{"delete_searchable_snapshot":true}}}},"_meta":{"description":"built-in ILM policy using the hot, warm, and cold phases with a retention of 180 days","managed":true}},"in_use_by":{"indices":[],"data_streams":[],"composable_templates":[]}},"30-days-default":{"version":1,"modified_date":"2022-01-12T18:09:22.229Z","policy":{"phases":{"hot":{"min_age":"0ms","actions":{"rollover":{"max_primary_shard_size":"50gb","max_age":"30d"}}},"delete":{"min_age":"30d","actions":{"delete":{"delete_searchable_snapshot":true}}},"warm":{"min_age":"2d","actions":{"forcemerge":{"max_num_segments":1},"shrink":{"number_of_shards":1}}}},"_meta":{"description":"built-in ILM policy using the hot and warm phases with a retention of 30 
days","managed":true}},"in_use_by":{"indices":[],"data_streams":[],"composable_templates":[]}},"365-days-default":{"version":1,"modified_date":"2022-01-12T18:09:21.893Z","policy":{"phases":{"warm":{"min_age":"2d","actions":{"forcemerge":{"max_num_segments":1},"shrink":{"number_of_shards":1}}},"cold":{"min_age":"30d","actions":{}},"hot":{"min_age":"0ms","actions":{"rollover":{"max_primary_shard_size":"50gb","max_age":"30d"}}},"delete":{"min_age":"365d","actions":{"delete":{"delete_searchable_snapshot":true}}}},"_meta":{"description":"built-in ILM policy using the hot, warm, and cold phases with a retention of 365 days","managed":true}},"in_use_by":{"indices":[],"data_streams":[],"composable_templates":[]}},"7-days-default":{"version":1,"modified_date":"2022-01-12T18:09:22.046Z","policy":{"phases":{"hot":{"min_age":"0ms","actions":{"rollover":{"max_primary_shard_size":"50gb","max_age":"7d"}}},"delete":{"min_age":"7d","actions":{"delete":{"delete_searchable_snapshot":true}}},"warm":{"min_age":"2d","actions":{"forcemerge":{"max_num_segments":1},"shrink":{"number_of_shards":1}}}},"_meta":{"description":"built-in ILM policy using the hot and warm phases with a retention of 7 days","managed":true}},"in_use_by":{"indices":[],"data_streams":[],"composable_templates":[]}},"90-days-default":{"version":1,"modified_date":"2022-01-12T18:09:22.192Z","policy":{"phases":{"warm":{"min_age":"2d","actions":{"forcemerge":{"max_num_segments":1},"shrink":{"number_of_shards":1}}},"cold":{"min_age":"30d","actions":{}},"hot":{"min_age":"0ms","actions":{"rollover":{"max_primary_shard_size":"50gb","max_age":"30d"}}},"delete":{"min_age":"90d","actions":{"delete":{"delete_searchable_snapshot":true}}}},"_meta":{"description":"built-in ILM policy using the hot, warm, and cold phases with a retention of 90 
days","managed":true}},"in_use_by":{"indices":[],"data_streams":[],"composable_templates":[]}},"ilm-history-ilm-policy":{"version":1,"modified_date":"2021-04-16T05:52:27.071Z","policy":{"phases":{"hot":{"min_age":"0ms","actions":{"rollover":{"max_size":"50gb","max_age":"30d"}}},"delete":{"min_age":"90d","actions":{"delete":{"delete_searchable_snapshot":true}}}}},"in_use_by":{"indices":["ilm-history-2-000017","ilm-history-2-000016","ilm-history-2-000015","ilm-history-2-000014",".ds-ilm-history-5-2022.07.11-000010",".ds-ilm-history-5-2022.06.11-000008",".ds-ilm-history-5-2022.05.12-000006",".ds-ilm-history-5-2022.08.10-000012"],"data_streams":["ilm-history-5"],"composable_templates":["ilm-history"]}},"kibana-event-log-policy":{"version":1,"modified_date":"2021-04-23T06:56:32.286Z","policy":{"phases":{"hot":{"min_age":"0ms","actions":{"rollover":{"max_size":"50gb","max_age":"30d"}}},"delete":{"min_age":"90d","actions":{"delete":{"delete_searchable_snapshot":true}}}}},"in_use_by":{"indices":[".kibana-event-log-7.16.2-000005",".kibana-event-log-7.16.2-000007",".kibana-event-log-7.16.2-000006",".kibana-event-log-7.16.2-000008"],"data_streams":[],"composable_templates":[".kibana-event-log-7.16.2-template"]}},"kibana-reporting":{"version":1,"modified_date":"2022-01-12T18:12:47.764Z","policy":{"phases":{"hot":{"min_age":"0ms","actions":{}}}},"in_use_by":{"indices":[".reporting-2022-08-07",".reporting-2022-06-05",".reporting-2022-08-28",".reporting-2022-05-29",".reporting-2022-03-27",".reporting-2022-05-08",".reporting-2022-01-16",".reporting-2022-08-21",".reporting-2022-01-23",".reporting-2022-09-04",".reporting-2022-05-22"],"data_streams":[],"composable_templates":[]}},"logs":{"version":1,"modified_date":"2021-04-16T05:52:26.933Z","policy":{"phases":{"hot":{"min_age":"0ms","actions":{"rollover":{"max_size":"50gb","max_age":"30d"}}}}},"in_use_by":{"indices":[],"data_streams":[],"composable_templates":["logs"]}},"metrics":{"version":1,"modified_date":"2021-04-16T05:52:26.979Z
","policy":{"phases":{"hot":{"min_age":"0ms","actions":{"rollover":{"max_size":"50gb","max_age":"30d"}}}}},"in_use_by":{"indices":[],"data_streams":[],"composable_templates":["metrics"]}},"ml-size-based-ilm-policy":{"version":1,"modified_date":"2021-04-16T05:52:26.877Z","policy":{"phases":{"hot":{"min_age":"0ms","actions":{"rollover":{"max_size":"50gb"}}}}},"in_use_by":{"indices":[],"data_streams":[],"composable_templates":[".ml-state",".ml-stats"]}},"slm-history-ilm-policy":{"version":1,"modified_date":"2021-04-16T05:52:27.112Z","policy":{"phases":{"hot":{"min_age":"0ms","actions":{"rollover":{"max_size":"50gb","max_age":"30d"}}},"delete":{"min_age":"90d","actions":{"delete":{"delete_searchable_snapshot":true}}}}},"in_use_by":{"indices":[".slm-history-2-000014",".slm-history-2-000015",".slm-history-2-000016",".slm-history-2-000017"],"data_streams":[],"composable_templates":[".slm-history"]}},"synthetics":{"version":1,"modified_date":"2022-01-12T18:09:21.976Z","policy":{"phases":{"hot":{"min_age":"0ms","actions":{"rollover":{"max_primary_shard_size":"50gb","max_age":"30d"}}}},"_meta":{"description":"default policy for the synthetics index template installed by x-pack","managed":true}},"in_use_by":{"indices":[],"data_streams":[],"composable_templates":["synthetics"]}},"watch-history-ilm-policy":{"version":1,"modified_date":"2021-04-16T05:52:27.024Z","policy":{"phases":{"delete":{"min_age":"7d","actions":{"delete":{"delete_searchable_snapshot":true}}}}},"in_use_by":{"indices":[],"data_streams":[],"composable_templates":[".watch-history-13"]}}}
Nothing criminal there, right?
And yes, I have 50 indices with docs.deleted, and at the moment I am force-merging those indices one by one.
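For reference, a per-index force merge down to a single segment can be run like this (a sketch; the index name is just one example from the listing below, and it needs a live cluster to run against):

```shell
curl -X POST 'http://10.0.5.22:9200/lile-abz-2022.09.03/_forcemerge?max_num_segments=1'
```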
For example, here are the shards created on 2022-09-03:
lile-abz-2022.09.03 0 r STARTED 1818053 565.8mb 10.0.5.28 elastic-07
lile-abz-2022.09.03 0 p STARTED 1818053 545.6mb 10.0.5.22 elastic-01
pari-kurma-market-a-2022.09.03 0 r STARTED 9258217 5.7gb 10.0.5.27 elastic-06
pari-kurma-market-a-2022.09.03 0 p STARTED 9258217 5.7gb 10.0.5.22 elastic-01
gavn-pyati-2022.09.03 0 r STARTED 33761356 14.7gb 10.0.5.27 elastic-06
gavn-pyati-2022.09.03 0 p STARTED 33761356 14.8gb 10.0.5.22 elastic-01
gavn-pyati-3-2022.09.03 0 r STARTED 9768830 3.9gb 10.0.5.28 elastic-07
gavn-pyati-3-2022.09.03 0 p STARTED 9768830 3.9gb 10.0.5.23 elastic-02
lola-chik-a-2022.09.03 0 r STARTED 9581756 3.2gb 10.0.5.27 elastic-06
lola-chik-a-2022.09.03 0 p STARTED 9581756 3.2gb 10.0.5.22 elastic-01
lila-xyu-a-2022.09.03 0 r STARTED 1441592 605.5mb 10.0.5.27 elastic-06
lila-xyu-a-2022.09.03 0 p STARTED 1441592 603.3mb 10.0.5.23 elastic-02
gavn-pyati-1-2022.09.03 0 r STARTED 10179423 3.9gb 10.0.5.27 elastic-06
gavn-pyati-1-2022.09.03 0 p STARTED 10179423 3.9gb 10.0.5.22 elastic-01
gorilla-energ-a-2022.09.03 0 r STARTED 19341369 26.6gb 10.0.5.28 elastic-07
gorilla-energ-a-2022.09.03 0 p STARTED 19341369 26.4gb 10.0.5.23 elastic-02
chih-bimochk-a-2022.09.03 1 r STARTED 23188624 22.5gb 10.0.5.27 elastic-06
chih-bimochk-a-2022.09.03 1 p STARTED 23188624 22.4gb 10.0.5.22 elastic-01
chih-bimochk-a-2022.09.03 2 r STARTED 23181295 22.5gb 10.0.5.28 elastic-07
chih-bimochk-a-2022.09.03 2 p STARTED 23181295 22.4gb 10.0.5.22 elastic-01
chih-bimochk-a-2022.09.03 3 r STARTED 23175665 22.5gb 10.0.5.27 elastic-06
chih-bimochk-a-2022.09.03 3 p STARTED 23175665 22.4gb 10.0.5.22 elastic-01
chih-bimochk-a-2022.09.03 0 r STARTED 23182594 22.5gb 10.0.5.28 elastic-07
chih-bimochk-a-2022.09.03 0 p STARTED 23182594 22.4gb 10.0.5.22 elastic-01
prod-ca-a-2022.09.03 0 r STARTED 97339290 16.8gb 10.0.5.27 elastic-06
prod-ca-a-2022.09.03 0 p STARTED 97339290 16.8gb 10.0.5.22 elastic-01
jerusal-abracatd-a-2022.09.03 0 r STARTED 3647629 7.6gb 10.0.5.28 elastic-07
jerusal-abracatd-a-2022.09.03 0 p STARTED 3647629 7.6gb 10.0.5.22 elastic-01
prod-anstasia-a-2022.09.03 0 r STARTED 42794060 14.9gb 10.0.5.28 elastic-07
prod-anstasia-a-2022.09.03 0 p STARTED 42794060 14.9gb 10.0.5.22 elastic-01
sasha-log-2022.09.03 0 r STARTED 67602 6.5mb 10.0.5.28 elastic-07
sasha-log-2022.09.03 0 p STARTED 67602 6.5mb 10.0.5.23 elastic-02
prod-adedimitrious-a-2022.09.03 0 r STARTED 10874246 22.4gb 10.0.5.28 elastic-07
prod-adedimitrious-a-2022.09.03 0 p STARTED 10874246 22.4gb 10.0.5.22 elastic-01
As you can see, almost all primaries are located on elastic-01, the node with the highest disk utilization, and not on elastic-02 or the other nodes.
ADDED
I just created an index with 5 primary shards and 0 replicas, and all of the primaries were allocated on elastic-01:
curl -s -X GET http://10.0.5.22:9200/_cat/shards?v | grep swiss
i_want_swiss_passport 4 p STARTED 0 226b 10.0.5.22 elastic-01
i_want_swiss_passport 1 p STARTED 0 226b 10.0.5.22 elastic-01
i_want_swiss_passport 2 p STARTED 0 226b 10.0.5.22 elastic-01
i_want_swiss_passport 3 p STARTED 0 226b 10.0.5.22 elastic-01
i_want_swiss_passport 0 p STARTED 0 226b 10.0.5.22 elastic-01
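The allocator's reasoning for one of these shards can be inspected with the cluster allocation explain API; a sketch against the test index above (again, this needs the live cluster):

```shell
curl -s -X GET http://10.0.5.22:9200/_cluster/allocation/explain \
  -H 'Content-Type: application/json' \
  -d '{ "index": "i_want_swiss_passport", "shard": 0, "primary": true }'
```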
Related
Elasticsearch: one specific shard keeps initializing on different data nodes
I am getting an ElasticsearchStatusWarning saying that the cluster state is yellow. Running the health check API, I see:

curl -X GET http://localhost:9200/_cluster/health/
{"cluster_name":"my-elasticsearch","status":"yellow","timed_out":false,"number_of_nodes":8,"number_of_data_nodes":3,"active_primary_shards":220,"active_shards":438,"relocating_shards":0,"initializing_shards":2,"unassigned_shards":0,"delayed_unassigned_shards":0,"number_of_pending_tasks":0,"number_of_in_flight_fetch":0,"task_max_waiting_in_queue_millis":0,"active_shards_percent_as_number":99.54545454545455}

initializing_shards is 2, so I ran the call below:

curl -X GET 'http://localhost:9200/_cat/shards?h=index,shard,prirep,state,unassigned.reason' | grep INIT
graph_vertex_24_18549 0 r INITIALIZING ALLOCATION_FAILED

curl -X GET http://localhost:9200/_cat/shards/graph_vertex_24_18549
graph_vertex_24_18549 0 p STARTED 8373375 8.4gb IP1 elasticsearch-data-1
graph_vertex_24_18549 0 r INITIALIZING IP2 elasticsearch-data-2

Rerunning the same command a few minutes later shows the replica now initializing on elasticsearch-data-0:

graph_vertex_24_18549 0 p STARTED 8373375 8.4gb IP1 elasticsearch-data-1
graph_vertex_24_18549 0 r INITIALIZING IP0 elasticsearch-data-0

A few minutes later again it is back to initializing on elasticsearch-data-2. But it never reaches STARTED.

curl -X GET http://localhost:9200/_cat/allocation?v
shards disk.indices disk.used disk.avail disk.total disk.percent host ip node
147 162.2gb 183.8gb 308.1gb 492gb 37 IP1 IP1 elasticsearch-data-2
146 217.3gb 234.2gb 257.7gb 492gb 47 IP2 IP2 elasticsearch-data-1
147 216.6gb 231.2gb 260.7gb 492gb 47 IP3 IP3 elasticsearch-data-0

curl -X GET http://localhost:9200/_cat/nodes?v
ip heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
IP1 7 77 20 4.17 4.57 4.88 mi - elasticsearch-master-2
IP2 72 59 7 2.59 2.38 2.19 i - elasticsearch-5f4bd5b88f-4lvxz
IP3 57 49 3 0.75 1.13 1.09 di - elasticsearch-data-2
IP4 63 57 21 2.69 3.58 4.11 di - elasticsearch-data-0
IP5 5 59 7 2.59 2.38 2.19 mi - elasticsearch-master-0
IP6 69 53 13 4.67 4.60 4.66 di - elasticsearch-data-1
IP7 8 70 14 2.86 3.20 3.09 mi * elasticsearch-master-1
IP8 30 77 20 4.17 4.57 4.88 i - elasticsearch-5f4bd5b88f-wnrl4

curl -s -XGET http://localhost:9200/_cluster/allocation/explain -d '{ "index": "graph_vertex_24_18549", "shard": 0, "primary": false }' -H 'Content-type: application/json'
{"index":"graph_vertex_24_18549","shard":0,"primary":false,"current_state":"initializing","unassigned_info":{"reason":"ALLOCATION_FAILED","at":"2020-11-04T08:21:45.756Z","failed_allocation_attempts":1,"details":"failed shard on node [1XEXS92jTK-wwanNgQrxsA]: failed to perform indices:data/write/bulk[s] on replica [graph_vertex_24_18549][0], node[1XEXS92jTK-wwanNgQrxsA], [R], s[STARTED], a[id=RnTOlfQuQkOumVuw_NeuTw], failure RemoteTransportException[[elasticsearch-data-2][IP:9300][indices:data/write/bulk[s][r]]]; nested: CircuitBreakingException[[parent] Data too large, data for [<transport_request>] would be [4322682690/4gb], which is larger than the limit of [4005632409/3.7gb], real usage: [3646987112/3.3gb], new bytes reserved: [675695578/644.3mb]]; ","last_allocation_status":"no_attempt"},"current_node":{"id":"o_9jyrmOSca9T12J4bY0Nw","name":"elasticsearch-data-0","transport_address":"IP:9300"},"explanation":"the shard is in the process of initializing on node [elasticsearch-data-0], wait until initialization has completed"}

Earlier I was being alerted about unassigned shards due to the same exception: "CircuitBreakingException[[parent] Data too large, data for [<transport_request>] would be [4322682690/4gb], which is larger than the limit of [4005632409/3.7gb]". But back then the heap was only 2 GB, and I increased it to 4 GB. Now I am seeing the same error again, this time for initializing shards instead of unassigned shards. How can I remediate this?
Elasticsearch is using way too much disk space per document
I'm using Elasticsearch 1.7 and trying to index documents in it. With 23,000 documents my index size is 17 GB... That seems way too large for only 23K docs (each of my docs, as JSON, is around 13 KB). My docs have a lot of compound docs inside (the 13 KB covers doc + compound). I keep the _all field in my docs (and I need it). I use this nGram tokenizer (maybe 2 is too low?):

'min_gram' => 2,
'max_gram' => 20

How does 13 KB per document become 775 KB per doc after being added to Elasticsearch? Here is a sample of my shards, when I had indexed only 1K docs:

dyb-fr_fr 4 p STARTED 7188 263.1mb 10.20.40.29 Doyoubuzz
dyb-fr_fr 0 p STARTED 7675 258.6mb 10.20.40.29 Doyoubuzz
dyb-fr_fr 3 p STARTED 7268 258.5mb 10.20.40.29 Doyoubuzz
dyb-fr_fr 1 p STARTED 8560 300.1mb 10.20.40.29 Doyoubuzz
dyb-fr_fr 2 p STARTED 7287 244.3mb 10.20.40.29 Doyoubuzz

And going deeper into segments:

index shard prirep ip segment generation docs.count docs.deleted size size.memory committed searchable version compound
dyb-fr_fr 0 p 127.0.0.1 _2 2 291 0 9.7mb 173746 false true 4.10.4 true
dyb-fr_fr 0 p 127.0.0.1 _7 7 57 0 2.4mb 52650 false true 4.10.4 true
dyb-fr_fr 0 p 127.0.0.1 _d 13 43 0 2.8mb 71242 false true 4.10.4 true
dyb-fr_fr 0 p 127.0.0.1 _e 14 322 0 11.2mb 197706 false true 4.10.4 true
dyb-fr_fr 0 p 127.0.0.1 _f 15 1912 0 64.8mb 928522 false true 4.10.4 false
dyb-fr_fr 0 p 127.0.0.1 _k 20 64 0 1.4mb 43090 false true 4.10.4 true
dyb-fr_fr 0 p 127.0.0.1 _p 25 12 0 170.5kb 17794 false true 4.10.4 true
dyb-fr_fr 0 p 127.0.0.1 _t 29 1612 0 48.5mb 692322 false true 4.10.4 false
dyb-fr_fr 0 p 127.0.0.1 _y 34 228 0 6.9mb 128042 false true 4.10.4 true
dyb-fr_fr 0 p 127.0.0.1 _z 35 159 0 6.3mb 121266 false true 4.10.4 true
dyb-fr_fr 0 p 127.0.0.1 _13 39 232 0 6.4mb 125386 false true 4.10.4 true
dyb-fr_fr 0 p 127.0.0.1 _15 41 127 0 4.2mb 97738 false true 4.10.4 true
dyb-fr_fr 0 p 127.0.0.1 _16 42 1675 0 44.3mb 637266 false true 4.10.4 false
dyb-fr_fr 0 p 127.0.0.1 _17 43 203 0 4.3mb 92282 false true 4.10.4 true
dyb-fr_fr 0 p 127.0.0.1 _18 44 146 0 5.4mb 108730 false true 4.10.4 true dyb-fr_fr 0 p 127.0.0.1 _19 45 236 0 6.3mb 115474 false true 4.10.4 true dyb-fr_fr 0 p 127.0.0.1 _1a 46 63 0 2.1mb 52762 false true 4.10.4 true dyb-fr_fr 0 p 127.0.0.1 _1b 47 118 0 4.2mb 88050 false true 4.10.4 true dyb-fr_fr 0 p 127.0.0.1 _1c 48 175 0 5.4mb 105570 false true 4.10.4 true dyb-fr_fr 1 p 127.0.0.1 _2 2 89 0 2mb 56578 false true 4.10.4 true dyb-fr_fr 1 p 127.0.0.1 _7 7 49 0 1.9mb 49810 false true 4.10.4 true dyb-fr_fr 1 p 127.0.0.1 _9 9 314 0 10.6mb 184426 false true 4.10.4 true dyb-fr_fr 1 p 127.0.0.1 _b 11 139 0 2.7mb 66218 false true 4.10.4 true dyb-fr_fr 1 p 127.0.0.1 _f 15 1634 0 63.3mb 916226 false true 4.10.4 false dyb-fr_fr 1 p 127.0.0.1 _g 16 72 0 1.3mb 48850 false true 4.10.4 true dyb-fr_fr 1 p 127.0.0.1 _n 23 67 0 2.1mb 56826 false true 4.10.4 true dyb-fr_fr 1 p 127.0.0.1 _o 24 43 0 1009.2kb 32458 false true 4.10.4 true dyb-fr_fr 1 p 127.0.0.1 _u 30 2097 0 55.2mb 770266 false true 4.10.4 false dyb-fr_fr 1 p 127.0.0.1 _x 33 35 0 877.5kb 29978 false true 4.10.4 true dyb-fr_fr 1 p 127.0.0.1 _12 38 114 0 1.9mb 46818 false true 4.10.4 true dyb-fr_fr 1 p 127.0.0.1 _15 41 292 0 5.7mb 116850 false true 4.10.4 true dyb-fr_fr 1 p 127.0.0.1 _16 42 2264 0 64.6mb 923826 false true 4.10.4 false dyb-fr_fr 1 p 127.0.0.1 _17 43 193 0 5.4mb 110674 false true 4.10.4 true dyb-fr_fr 1 p 127.0.0.1 _18 44 79 0 1.8mb 44858 false true 4.10.4 true dyb-fr_fr 1 p 127.0.0.1 _19 45 198 0 7mb 136298 false true 4.10.4 true dyb-fr_fr 1 p 127.0.0.1 _1a 46 170 0 5.7mb 118922 false true 4.10.4 true dyb-fr_fr 1 p 127.0.0.1 _1b 47 173 0 6.8mb 130610 false true 4.10.4 true dyb-fr_fr 1 p 127.0.0.1 _1c 48 162 0 3.7mb 79610 false true 4.10.4 true dyb-fr_fr 1 p 127.0.0.1 _1d 49 205 0 7.2mb 130818 false true 4.10.4 true dyb-fr_fr 1 p 127.0.0.1 _1e 50 171 0 5.5mb 117946 false true 4.10.4 true dyb-fr_fr 2 p 127.0.0.1 _0 0 404 0 16mb 270562 true true 4.10.4 true dyb-fr_fr 2 p 127.0.0.1 _4 4 67 0 1.4mb 0 true false 
4.10.4 true dyb-fr_fr 2 p 127.0.0.1 _b 11 12 0 168.7kb 0 true false 4.10.4 true dyb-fr_fr 2 p 127.0.0.1 _f 15 24 0 245.3kb 0 true false 4.10.4 true dyb-fr_fr 2 p 127.0.0.1 _g 16 2003 0 57.6mb 808882 true true 4.10.4 false dyb-fr_fr 2 p 127.0.0.1 _j 19 67 0 3.4mb 0 true false 4.10.4 true dyb-fr_fr 2 p 127.0.0.1 _l 21 100 0 2mb 0 true false 4.10.4 true dyb-fr_fr 2 p 127.0.0.1 _o 24 136 0 3.4mb 0 true false 4.10.4 true dyb-fr_fr 2 p 127.0.0.1 _s 28 28 0 396.9kb 0 true false 4.10.4 true dyb-fr_fr 2 p 127.0.0.1 _t 29 2149 0 57.6mb 822498 true true 4.10.4 false dyb-fr_fr 2 p 127.0.0.1 _u 30 171 0 5.7mb 0 true false 4.10.4 true dyb-fr_fr 2 p 127.0.0.1 _v 31 144 0 7mb 0 true false 4.10.4 true dyb-fr_fr 2 p 127.0.0.1 _w 32 123 0 2.7mb 0 true false 4.10.4 true dyb-fr_fr 2 p 127.0.0.1 _x 33 147 0 6.2mb 0 true false 4.10.4 true dyb-fr_fr 2 p 127.0.0.1 _y 34 129 0 4.3mb 0 true false 4.10.4 true dyb-fr_fr 2 p 127.0.0.1 _13 39 32 0 1.8mb 44994 false true 4.10.4 true dyb-fr_fr 2 p 127.0.0.1 _14 40 614 0 14mb 233474 false true 4.10.4 true dyb-fr_fr 2 p 127.0.0.1 _1f 51 44 0 676.9kb 26306 false true 4.10.4 true dyb-fr_fr 2 p 127.0.0.1 _1g 52 42 0 1.3mb 35514 false true 4.10.4 true dyb-fr_fr 2 p 127.0.0.1 _1h 53 10 0 226.5kb 19578 false true 4.10.4 true dyb-fr_fr 2 p 127.0.0.1 _1j 55 30 0 1.1mb 43706 false true 4.10.4 true dyb-fr_fr 2 p 127.0.0.1 _1k 56 53 0 1.1mb 42610 false true 4.10.4 true dyb-fr_fr 2 p 127.0.0.1 _1o 60 234 0 5.7mb 113986 false true 4.10.4 true dyb-fr_fr 2 p 127.0.0.1 _1p 61 1366 0 43mb 626498 false true 4.10.4 false dyb-fr_fr 2 p 127.0.0.1 _1q 62 5 0 118.6kb 13162 false true 4.10.4 true dyb-fr_fr 2 p 127.0.0.1 _1r 63 78 0 7.7mb 144554 false true 4.10.4 true dyb-fr_fr 2 p 127.0.0.1 _1s 64 213 0 5.4mb 112610 false true 4.10.4 true dyb-fr_fr 2 p 127.0.0.1 _1t 65 10 0 314kb 18122 false true 4.10.4 true dyb-fr_fr 3 p 127.0.0.1 _e 14 1873 0 63.9mb 915586 true true 4.10.4 false dyb-fr_fr 3 p 127.0.0.1 _g 16 75 0 1.6mb 40210 true true 4.10.4 true dyb-fr_fr 3 p 127.0.0.1 
_k 20 20 0 242.4kb 20426 true true 4.10.4 true dyb-fr_fr 3 p 127.0.0.1 _o 24 74 0 2.1mb 58242 true true 4.10.4 true dyb-fr_fr 3 p 127.0.0.1 _p 25 13 0 324.8kb 22514 true true 4.10.4 true dyb-fr_fr 3 p 127.0.0.1 _s 28 1770 0 44.1mb 636786 true true 4.10.4 false dyb-fr_fr 3 p 127.0.0.1 _t 29 268 0 9.5mb 171306 true true 4.10.4 true dyb-fr_fr 3 p 127.0.0.1 _u 30 26 0 837.3kb 27474 true true 4.10.4 true dyb-fr_fr 3 p 127.0.0.1 _v 31 50 0 3mb 73322 true true 4.10.4 true dyb-fr_fr 3 p 127.0.0.1 _10 36 78 0 2.6mb 69178 true true 4.10.4 true dyb-fr_fr 3 p 127.0.0.1 _14 40 306 0 6.7mb 129666 true true 4.10.4 true dyb-fr_fr 3 p 127.0.0.1 _15 41 1751 0 58.8mb 849858 true true 4.10.4 false dyb-fr_fr 3 p 127.0.0.1 _16 42 138 0 6.3mb 134242 true true 4.10.4 true dyb-fr_fr 3 p 127.0.0.1 _17 43 37 0 512.7kb 24842 true true 4.10.4 true dyb-fr_fr 3 p 127.0.0.1 _18 44 58 0 1.7mb 52410 true true 4.10.4 true dyb-fr_fr 3 p 127.0.0.1 _19 45 286 0 5.3mb 99234 true true 4.10.4 true dyb-fr_fr 3 p 127.0.0.1 _1a 46 202 0 7.3mb 132914 true true 4.10.4 true dyb-fr_fr 3 p 127.0.0.1 _1b 47 53 0 1.6mb 43634 true true 4.10.4 true dyb-fr_fr 3 p 127.0.0.1 _1c 48 111 0 3.9mb 88362 true true 4.10.4 true dyb-fr_fr 3 p 127.0.0.1 _1d 49 79 0 1.1mb 33410 true true 4.10.4 true dyb-fr_fr 4 p 127.0.0.1 _4 4 66 0 1.6mb 41970 true true 4.10.4 true dyb-fr_fr 4 p 127.0.0.1 _a 10 66 0 1mb 43786 true true 4.10.4 true dyb-fr_fr 4 p 127.0.0.1 _c 12 269 0 17.4mb 296738 true true 4.10.4 true dyb-fr_fr 4 p 127.0.0.1 _e 14 53 0 1.1mb 33954 true true 4.10.4 true dyb-fr_fr 4 p 127.0.0.1 _f 15 2099 0 62.3mb 866978 true true 4.10.4 false dyb-fr_fr 4 p 127.0.0.1 _k 20 52 0 1mb 42882 true true 4.10.4 true dyb-fr_fr 4 p 127.0.0.1 _r 27 71 0 2.4mb 51186 true true 4.10.4 true dyb-fr_fr 4 p 127.0.0.1 _s 28 2016 0 48.8mb 689106 true true 4.10.4 false dyb-fr_fr 4 p 127.0.0.1 _x 33 36 0 1.6mb 48210 true true 4.10.4 true dyb-fr_fr 4 p 127.0.0.1 _10 36 247 0 2.8mb 56842 true true 4.10.4 true dyb-fr_fr 4 p 127.0.0.1 _12 38 258 0 10.9mb 
189882 true true 4.10.4 true dyb-fr_fr 4 p 127.0.0.1 _15 41 5 0 145.7kb 13386 true true 4.10.4 true dyb-fr_fr 4 p 127.0.0.1 _16 42 1950 0 53.3mb 749626 true true 4.10.4 false
ngrams produce a lot of tokens. If you index "abcdef", you will actually index: ab, bc, cd, de, ef, abc, bcd, cde, def, abcd, bcde, cdef, abcde, bcdef, abcdef. It gets even worse with very long strings, as the number of combinations explodes.
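To quantify the blow-up: a sliding-window tokenizer emits (n - g + 1) grams of each size g for a token of length n. A minimal sketch of that arithmetic (plain awk, not the ES analyzer itself):

```shell
# Count the n-grams a sliding-window tokenizer emits for one token:
# sum of (n - g + 1) over gram sizes g = min..max (capped at the token length).
ngram_count() {
  awk -v n="$1" -v min="$2" -v max="$3" 'BEGIN {
    top = (max < n) ? max : n
    s = 0
    for (g = min; g <= top; g++) s += n - g + 1
    print s
  }'
}

ngram_count 6 2 20    # "abcdef" -> prints 15, matching the token list above
ngram_count 20 2 20   # a single 20-character token -> prints 190
```

So with min_gram 2 and max_gram 20, every long token fans out into dozens or hundreds of indexed terms, which is consistent with a 13 KB document inflating far beyond its source size.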
sort the numbers in multiple lines in vim
I have a file formatted as such: ... [ strNADPplus ] 3443 3444 3445 3446 3447 3448 3449 3450 3451 3452 3453 3454 3455 3456 3457 3458 3459 3460 3461 3462 3463 3464 11153 11154 11155 11156 11157 11158 11159 11160 5255 5256 5257 5258 5259 5260 5261 5262 5263 5264 5265 5266 5267 5268 5269 5270 5271 5272 5273 5274 5275 5276 5277 12964 12965 12966 12967 12968 12969 12970 5360 13057 13058 13059 13060 13061 13062 13063 13064 13065 13066 13067 13068 13069 13070 5361 5362 5363 5364 5365 5366 5367 5368 5369 5370 5371 5372 5373 5374 5375 5400 5401 5402 5403 5404 5405 5406 5407 5408 5409 5410 5411 5412 5413 5414 5415 5416 5417 5418 5419 5420 5421 13110 13111 13112 13113 13114 13115 13116 13117 5464 5465 5466 5467 5468 5469 5470 5471 5472 5473 5474 5475 5476 5477 5478 5479 5480 5481 5482 5483 5484 5485 5486 13173 13174 13175 13176 13177 13178 13179 5860 5861 5862 5863 5864 5865 5866 5867 5868 5869 5870 13557 13558 13559 13560 5983 5984 5985 5986 5987 5988 5989 5990 5991 5992 5993 13683 13684 13685 13686 6021 6022 6023 6024 6025 6026 6027 6028 6029 13718 13719 13720 13721 13722 13723 6339 6340 6341 6342 6343 6344 6345 6346 6347 14044 14045 14046 14047 14048 14049 ... I want to sort the numbers in that block of lines to have something that looks like: 1 2 3 4 7 8 9 100 101 121 345 346 348 10232 16654 ... I first tried with :4707,4743%sort n (4707 and 4743 are the lines of that block), but I was only able to sort the first values of each line. I then tried to join the selection and sort the line: visual mode + J and :'<,'>sort n. But it doesn't sort correctly. 
3443 3444 3445 3446 3447 3448 3449 3450 3451 3452 3453 3454 3455 3456 3457 3458 3459 3460 3461 3462 3463 3464 11153 11154 11155 11156 11157 11158 11159 11160 5255 5256 5257 5258 5259 5260 5261 5262 5263 5264 5265 5266 5267 5268 5269 5270 5271 5272 5273 5274 5275 5276 5277 12964 12965 12966 12967 12968 12969 12970 5360 13057 13058 13059 13060 13061 13062 13063 13064 13065 13066 13067 13068 13069 13070 5361 5362 5363 5364 5365 5366 5367 5368 5369 5370 5371 5372 5373 5374 5375 5400 5401 5402 5403 5404 5405 5406 5407 5408 5409 5410 5411 5412 5413 5414 5415 5416 5417 5418 5419 5420 5421 13110 13111 13112 13113 13114 13115 13116 13117 5464 5465 5466 5467 5468 5469 5470 5471 5472 5473 5474 5475 5476 5477 5478 5479 5480 5481 5482 5483 5484 5485 5486 13173 13174 13175 13176 13177 13178 13179 5860 5861 5862 5863 5864 5865 5866 5867 5868 5869 5870 13557 13558 13559 13560 5983 5984 5985 5986 5987 5988 5989 5990 5991 5992 5993 13683 13684 13685 13686 6021 6022 6023 6024 6025 6026 6027 6028 6029 13718 13719 13720 13721 13722 13723 6339 6340 6341 6342 6343 6344 6345 6346 6347 14044 14045 14046 14047 14048 14049 7421 7422 7423 7424 7425 7426 7427 7428 7429 7430 7431 7432 7433 7434 7435 7436 7437 15124 15125 15126 15127 15128 15129 15130 15131 15132 15133 15134 15135 15136 7502 7503 7504 7505 7506 7507 7508 7509 15208 15209 15210 15211 15212 15213 15214 7677 7678 7679 7680 7681 7682 7683 7684 7685 7686 7687 15377 15378 15379 15380 11161 11162 11163 11164 11165 11166 11167 11168 11169 11170 11171 11172 11173 11174 5254 12971 12972 12973 12974 12975 12976 12977 12978 12979 12980 12981 12982 12983 12984 12985 12986 12987 5347 5348 5349 5350 5351 5352 5353 5354 5355 5356 5357 5358 5359 13071 13072 13073 13074 13075 13076 13077 13078 13079 13080 13081 13082 13083 13084 13085 13118 13119 13120 13121 13122 13123 13124 13125 13126 13127 13128 13129 13130 13131 5463 13180 13181 13182 13183 13184 13185 13186 13187 13188 13189 13190 13191 13192 13193 13194 13195 13196 5847 5848 5849 5850 5851 
5852 5853 5854 5855 5856 5857 5858 5859 13561 13562 13563 13564 13565 13566 13567 13568 13569 13570 13571 13572 13573 13574 13575 13576 13577 13578 13579 13580 5973 5974 5975 5976 5977 5978 5979 5980 5981 5982 13687 13688 13689 13690 13691 13692 13693 13694 13695 13696 13697 13698 13699 13700 13701 13702 13703 6008 6009 6010 6011 6012 6013 6014 6015 6016 6017 6018 6019 6020 13724 13725 13726 13727 13728 13729 13730 13731 13732 13733 13734 13735 13736 13737 13738 13739 6303 6304 6305 6306 6307 6308 6309 6310 6311 6312 6313 6314 14013 14014 14015 14016 14017 14018 14019 14020 14021 14022 14023 14024 6334 6335 6336 6337 6338 14050 14051 14052 14053 14054 14055 14056 14057 7414 7415 7416 7417 7418 7419 7420 15137 15138 15139 15140 15141 15142 15143 15144 15145 15146 15147 7498 7499 7500 7501 15215 15216 15217 15218 15219 7667 7668 7669 7670 7671 7672 7673 7674 7675 7676 15381 15382 15383 15384 15385 15386 15387 15388 15389 15390 15391 15392 15393 15394 15395 15396 15397 How do I sort everything and keep that layout?
I would simply filter the selection through standard external Unix tools:

:'<,'>!tr ' ' '\n' | sort -n | tr '\n' ' ' | fold -w 15 -s

This sorts the numbers and re-wraps the result at 15 characters per line (raise the fold width to match your original layout). Alternatively:

:'<,'>!tr ' ' '\n' | sort -n | paste -d' ' - - -

This wraps the sorted output to 3 numbers per line.
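The same filter pipeline can be tried directly in a shell before running it from Vim. A minimal sketch, using a throwaway input string instead of the data above:

```shell
# Sort space-separated numbers numerically, then re-wrap 3 per line,
# exactly as the Vim filter command does.
printf '12 3 7 1 9 5\n' \
  | tr ' ' '\n' \
  | sort -n \
  | paste -d' ' - - -
# → 1 3 5
#   7 9 12
```

Note that `sort -n` is needed because the default lexicographic sort would put 12 before 3.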
Elasticsearch cluster in yellow status
I have a two-node Elasticsearch cluster with Logstash and Kibana. The cluster was in green status until I started Logstash, at which point it went to yellow. Running curl -XGET http://localhost:9200/_cat/shards?pretty shows the following:

.kibana             0 p STARTED     2   8.2kb 172.17.0.1 Ereshkigal
.kibana             0 r UNASSIGNED
logstash-2015.10.18 4 r STARTED    69 101.9kb 172.17.0.1 Ereshkigal
logstash-2015.10.18 4 p STARTED    69 101.7kb 172.17.0.2 Hargen the Measurer
logstash-2015.10.18 0 r STARTED    62  65.7kb 172.17.0.1 Ereshkigal
logstash-2015.10.18 0 p STARTED    62  89.4kb 172.17.0.2 Hargen the Measurer
logstash-2015.10.18 3 p STARTED    76  48.1kb 172.17.0.1 Ereshkigal
logstash-2015.10.18 3 r UNASSIGNED
logstash-2015.10.18 1 p STARTED    74  78.8kb 172.17.0.1 Ereshkigal
logstash-2015.10.18 1 r UNASSIGNED
logstash-2015.10.18 2 r STARTED    79  56.8kb 172.17.0.1 Ereshkigal
logstash-2015.10.18 2 p STARTED    79  65.1kb 172.17.0.2 Hargen the Measurer
logstash-2015.10.19 4 p STARTED     7  43.4kb 172.17.0.1 Ereshkigal
logstash-2015.10.19 4 r UNASSIGNED
logstash-2015.10.19 0 r STARTED     7  50.8kb 172.17.0.1 Ereshkigal
logstash-2015.10.19 0 p STARTED     7  58.3kb 172.17.0.2 Hargen the Measurer
logstash-2015.10.19 3 r STARTED     9  67.4kb 172.17.0.1 Ereshkigal
logstash-2015.10.19 3 p STARTED     9  67.3kb 172.17.0.2 Hargen the Measurer
logstash-2015.10.19 1 r STARTED    12  76.4kb 172.17.0.1 Ereshkigal
logstash-2015.10.19 1 p STARTED    12  13.8kb 172.17.0.2 Hargen the Measurer
logstash-2015.10.19 2 p STARTED    13    78kb 172.17.0.1 Ereshkigal
logstash-2015.10.19 2 r UNASSIGNED

How can I get the cluster back to green status?
Yellow means you have unassigned shards, as your output shows (the UNASSIGNED rows are all replicas, so the data is still fully available on the primaries). First make sure shard allocation is enabled:

curl -XPUT localhost:9200/_cluster/settings -d '{
  "transient" : { "cluster.routing.allocation.enable" : "all" }
}'

You can also try forcing a shard onto a node with the reroute API: https://www.elastic.co/guide/en/elasticsearch/reference/current/cluster-reroute.html
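If allocation is already enabled, the reroute API linked above can force one of the unassigned replicas onto the second node. A hedged sketch of the request body (POSTed to localhost:9200/_cluster/reroute); the index, shard number, and node name are taken from the _cat/shards output in the question, and the command name varies by version — newer Elasticsearch releases use "allocate_replica" instead of "allocate", so check the docs for your version:

```json
{
  "commands": [
    {
      "allocate": {
        "index": "logstash-2015.10.18",
        "shard": 3,
        "node": "Hargen the Measurer",
        "allow_primary": false
      }
    }
  ]
}
```

Keeping "allow_primary" false is the safe choice here: the primaries are healthy, and you only want the replica re-placed.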
Passenger/Ruby memory usage goes out of control on Ubuntu
The last few days Passenger has been eating up loads of memory on my Slicehost VPS, and I can't seem to get it under control. It runs fine for a few hours, and then all of a sudden spawns tons of Ruby processes. I thought Apache was the problem, so I switched to Nginx, but the problem persists. Here's a dump of top:

 PID  USER    PR NI VIRT  RES  SHR  S %CPU %MEM TIME+   COMMAND
 5048 avishai 20  0 160m  43m  1192 S 0    10.9 0:00.77 ruby1.8
 5102 avishai 20  0 151m  41m  1392 S 0    10.6 0:00.07 ruby1.8
 5091 avishai 20  0 153m  30m  1400 D 0     7.6 0:00.27 ruby1.8
 5059 avishai 20  0 158m  27m  1344 D 0     7.0 0:00.64 ruby1.8
 4809 avishai 20  0 161m  27m  1208 D 0     6.9 0:06.65 ruby1.8
 4179 avishai 20  0 162m  23m  1140 D 0     5.9 0:25.25 ruby1.8
 5063 avishai 20  0 159m  23m  1200 D 0     5.9 0:00.65 ruby1.8
 5044 avishai 20  0 159m  12m  1172 S 0     3.3 0:00.79 ruby1.8
 5113 avishai 20  0 149m  9.8m 1576 D 0     2.5 0:00.00 ruby1.8
 5076 avishai 20  0 155m  9.8m 1128 S 0     2.5 0:00.33 ruby1.8
 3269 mysql   20  0 239m  5356 2156 S 0     1.3 0:00.35 mysqld
 3510 root    20  0 49948 3580 736  S 0     0.9 1:01.86 ruby1.8
 4792 root    20  0 98688 3560 644  S 0     0.9 0:00.84 ruby1.8
 4799 avishai 20  0 148m  2204 600  S 0     0.5 0:01.64 ruby1.8
 3508 root    20  0 295m  1972 1044 S 0     0.5 0:35.77 PassengerHelper
 3562 nobody  20  0 39776 964  524  D 0     0.2 0:00.82 nginx
 3561 nobody  20  0 39992 948  496  D 0     0.2 0:00.72 nginx
 4238 avishai 20  0 19144 668  456  R 0     0.2 0:00.06 top
 3293 syslog  20  0 123m  636  420  S 0     0.2 0:00.06 rsyslogd
 3350 nobody  20  0 139m  432  220  S 0     0.1 0:00.05 memcached
 3364 redis   20  0 50368 412  300  S 0     0.1 0:00.33 redis-server
 1575 avishai 20  0 51912 324  216  S 0     0.1 0:00.00 sshd
 3513 nobody  20  0 72272 192  160  S 0     0.0 0:00.02 PassengerLoggin
 3330 root    20  0 21012 180  124  S 0     0.0 0:00.00 cron
 3335 root    20  0 49184 152  144  S 0     0.0 0:00.01 sshd
    1 root    20  0 23500 92   88   S 0     0.0 0:00.08 init
 1573 root    20  0 51764 88   80   S 0     0.0 0:00.00 sshd
 3505 root    20  0 89044 84   80   S 0     0.0 0:00.00 PassengerWatchd
 3319 root    20  0 5996  68   64   S 0     0.0 0:00.00 getty
 3323 root    20  0 6000  68   64   S 0     0.0 0:00.00 getty
 3325 root    20  0 5996  68   64   S 0     0.0 0:00.00 getty
 3326 root    20  0 6000  68   64   S 0     0.0 0:00.00 getty
 3328 root    20  0 5996  68   64   S 0     0.0 0:00.00 getty
 3383 root    20  0 5996  68   64   S 0     0.0 0:00.01 getty

Here's my environment:

RubyGems Environment:
  - RUBYGEMS VERSION: 1.6.2
  - RUBY VERSION: 1.8.7 (2011-02-18 patchlevel 334) [x86_64-linux]
  - INSTALLATION DIRECTORY: /home/avishai/.rvm/gems/ruby-1.8.7-p334
  - RUBY EXECUTABLE: /home/avishai/.rvm/rubies/ruby-1.8.7-p334/bin/ruby
  - EXECUTABLE DIRECTORY: /home/avishai/.rvm/gems/ruby-1.8.7-p334/bin
  - RUBYGEMS PLATFORMS:
    - ruby
    - x86_64-linux
  - GEM PATHS:
    - /home/avishai/.rvm/gems/ruby-1.8.7-p334
    - /home/avishai/.rvm/gems/ruby-1.8.7-p334@global
  - GEM CONFIGURATION:
    - :update_sources => true
    - :verbose => true
    - :benchmark => false
    - :backtrace => false
    - :bulk_threshold => 1000
    - "gem" => "--no-ri --no-rdoc"
    - :sources => ["http://gems.rubyforge.org", "http://gems.github.com"]
  - REMOTE SOURCES:
    - http://gems.rubyforge.org
    - http://gems.github.com
It appears you have a lot of application instances running. Try limiting the pool size as appropriate for your system, e.g. in your Nginx configuration:

passenger_max_pool_size 2

I tend to allow one instance per 128MB of RAM. Full documentation: http://www.modrails.com/documentation/Users%20guide%20Nginx.html#PassengerMaxPoolSize
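The "one instance per 128MB" rule of thumb above can be turned into a number automatically. A small sketch, assuming a Linux host (it reads MemTotal from /proc/meminfo; the variable names are my own):

```shell
# Derive a passenger_max_pool_size value as roughly one app
# process per 128 MB of total RAM (a rule of thumb, not gospel).
total_mb=$(awk '/MemTotal/ {print int($2 / 1024)}' /proc/meminfo)
pool_size=$(( total_mb / 128 ))
# Never go below one process.
[ "$pool_size" -lt 1 ] && pool_size=1
echo "passenger_max_pool_size $pool_size"
```

On a 256MB Slicehost VPS this yields 2, matching the suggestion above; remember that each ruby1.8 worker in the top output is holding tens of megabytes resident, so err on the low side.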