I want my Redis key to never expire, so I set a very large expiry value:
await _redisDatabase.StringSetAsync(key, serializedObject, TimeSpan.FromSeconds(Int32.MaxValue), When.NotExists);
But every 30 minutes or so, when I try to retrieve values from the Redis cache, it shows multiple operations and takes time:
"subscribe" "Booksleeve_MasterChanged"
1572964169.258388 [0 10.0.1.123:37602] "get" "Booksleeve_TieBreak"
1572964169.258395 [0 10.0.1.123:37602] "get" "Booksleeve_TieBreak"
1572964169.258400 [0 10.0.1.123:37602] "info" "replication"
1572964169.258411 [0 10.0.1.123:37602] "ping"
1572964169.258413 [0 10.0.1.123:37602] "ping"
1572964169.279451 [0 10.0.1.123:37602] "get" "3118"
1572964171.281458 [0 10.0.1.123:37602] "get" "sr-dev-Countries"
1572964185.990889 [0 10.0.2.175:29181] "info" "replication"
1572964185.990960 [0 10.0.2.175:20613] "unsubscribe" "\xab\x1a\xfb\xa5\x8a]RB\x8
Can anybody help me understand what I am doing wrong?
If you don't want your key to expire, do not set a TTL at all. Redis doesn't require a TTL for keys.
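With StackExchange.Redis that simply means omitting the expiry argument; a minimal sketch based on the call from your question:

// Passing expiry: null (or leaving it out) stores the key with no TTL, so it
// persists until it is explicitly deleted or evicted by a maxmemory policy.
await _redisDatabase.StringSetAsync(key, serializedObject, expiry: null, when: When.NotExists);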
On my single test server with 8 GB of RAM (1955m allocated to the JVM), running Elasticsearch v7.4, I have 12 application indices plus a few system indices (.monitoring-es-7-2021.08.02, .monitoring-logstash-7-2021.08.02, .monitoring-kibana-7-2021.08.02) created daily. So on average Elasticsearch creates about 15 indices per day.
Today I can see that only two indices were created.
curl --silent -u elastic:xxxxx 'http://127.0.0.1:9200/_cat/indices?v' -u elastic | grep '2021.08.03'
Enter host password for user 'elastic':
yellow open metricbeat-7.4.0-2021.08.03 KMJbbJMHQ22EM5Hfw 1 1 110657 0 73.9mb 73.9mb
green open .monitoring-kibana-7-2021.08.03 98iEmlw8GAm2rj-xw 1 0 3 0 1.1mb 1.1mb
I think the reason for the above is the following.
While looking into the Elasticsearch logs, I found:
[2021-08-03T12:14:15,394][WARN ][o.e.x.m.e.l.LocalExporter] [elasticsearch_1] unexpected error while indexing monitoring document org.elasticsearch.xpack.monitoring.exporter.ExportException: org.elasticsearch.common.ValidationException: Validation Failed: 1: this action would add [1] total shards, but this cluster currently has [1000]/[1000] maximum shards open;
Logstash logs for the application index and the filebeat index:
[2021-08-03T05:18:05,246][WARN ][logstash.outputs.elasticsearch][main] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"ping_server-2021.08.03", :_type=>"_doc", :routing=>nil}, #LogStash::Event:0x44b98479], :response=>{"index"=>{"_index"=>"ping_server-2021.08.03", "_type"=>"_doc", "_id"=>nil, "status"=>400, "error"=>{"type"=>"validation_exception", "reason"=>"Validation Failed: 1: this action would add [2] total shards, but this cluster currently has [1000]/[1000] maximum shards open;"}}}}
[2021-08-03T05:17:38,230][WARN ][logstash.outputs.elasticsearch][main] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"filebeat-7.4.0-2021.08.03", :_type=>"_doc", :routing=>nil}, #LogStash::Event:0x1e2c70a8], :response=>{"index"=>{"_index"=>"filebeat-7.4.0-2021.08.03", "_type"=>"_doc", "_id"=>nil, "status"=>400, "error"=>{"type"=>"validation_exception", "reason"=>"Validation Failed: 1: this action would add [2] total shards, but this cluster currently has [1000]/[1000] maximum shards open;"}}}}
Active and unassigned shards add up to 1000:
"active_primary_shards" : 512,
"active_shards" : 512,
"relocating_shards" : 0,
"initializing_shards" : 0,
"unassigned_shards" : 488,
"delayed_unassigned_shards" : 0,
"active_shards_percent_as_number" : 51.2
If I check with the command below, I see that all the unassigned shards are replica shards:
curl --silent -XGET -u elastic:xxxx http://localhost:9200/_cat/shards | grep 'UNASSIGNED'
.
.
dev_app_server-2021.07.10 0 r UNASSIGNED
apm-7.4.0-span-000028 0 r UNASSIGNED
ping_server-2021.07.02 0 r UNASSIGNED
api_app_server-2021.07.17 0 r UNASSIGNED
consent_app_server-2021.07.15 0 r UNASSIGNED
Q. For now, can I safely delete the unassigned shards to free up shards, since it's a single-node cluster?
Q. Can I change the settings online, for each index, from allocating 2 shards (1 primary and 1 replica) to 1 primary shard only, since it's a single server?
Q. If I have to keep one year of indices, is the calculation below correct?
15 indices daily with one primary shard * 365 days = 5475 total shards (or say 6000, rounded up)
Q. Can I set 6000 shards as the shard limit for this node so that I never face this shard issue again?
Thanks,
You have a lot of unassigned shards (probably because you have a single node and all indices have replicas=1), so it's easy to get rid of all of them, and of the error at the same time, by running the following command:
PUT _all/_settings
{
  "index.number_of_replicas": 0
}
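If you prefer curl, as in your other commands, the same setting can be applied like this (host and credentials assumed from the question):
curl --silent -u elastic:xxxxx -X PUT 'http://127.0.0.1:9200/_all/_settings' \
  -H 'Content-Type: application/json' \
  -d '{"index.number_of_replicas": 0}'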
Regarding the index count, you probably don't need to create one index per day if those indices stay small (i.e. below 10GB each). The default limit of 1000 shards is then more than enough, without you having to change anything.
You should simply leverage Index Lifecycle Management in order to keep your index size at bay and not create too many small indices.
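For example, a minimal ILM policy sketch could roll an index over at 10GB or 30 days and delete it after a year (the policy name and thresholds below are illustrative, not taken from your setup):
PUT _ilm/policy/app_logs_policy
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": { "max_size": "10gb", "max_age": "30d" }
        }
      },
      "delete": {
        "min_age": "365d",
        "actions": { "delete": {} }
      }
    }
  }
}
The policy then has to be attached to your indices, typically via an index template, for new indices to pick it up.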
I'm new to using Elasticsearch. I use Elasticsearch to aggregate logs. My problem is with storage: I deleted all indices and now I have only one index.
When I call /_cat/allocation?v, disk.indices is 23.9mb but disk.used is 16.4gb. Why this difference? How can I remove unused data, or how do I properly delete indices?
I ran the command:
curl -XPOST "elasticsearch:9200/_forcemerge?only_expunge_deletes=true"
But I didn't see any improvement.
Output of _cat/allocation?v :
shards disk.indices disk.used disk.avail
12 24.3mb 16.4gb 22.7gb
Output of _cat/shards?v :
index shard prirep state docs store ip node
articles 0 p STARTED 3666 24.2mb 192.168.1.21 lW9hsd5
articles 0 r UNASSIGNED
storage_test 2 p STARTED 0 261b 192.168.1.21 lW9hsd5
storage_test 2 r UNASSIGNED
storage_test 3 p STARTED 0 261b 192.168.1.21 lW9hsd5
storage_test 3 r UNASSIGNED
storage_test 4 p STARTED 0 261b 192.168.1.21 lW9hsd5
storage_test 4 r UNASSIGNED
storage_test 1 p STARTED 0 261b 192.168.1.21 lW9hsd5
storage_test 1 r UNASSIGNED
storage_test 0 p STARTED 0 261b 192.168.1.21 lW9hsd5
storage_test 0 r UNASSIGNED
twitter 3 p STARTED 1 4.4kb 192.168.1.21 lW9hsd5
twitter 3 r UNASSIGNED
twitter 2 p STARTED 0 261b 192.168.1.21 lW9hsd5
twitter 2 r UNASSIGNED
twitter 4 p STARTED 0 261b 192.168.1.21 lW9hsd5
twitter 4 r UNASSIGNED
twitter 1 p STARTED 0 261b 192.168.1.21 lW9hsd5
twitter 1 r UNASSIGNED
twitter 0 p STARTED 0 261b 192.168.1.21 lW9hsd5
twitter 0 r UNASSIGNED
.kibana 0 p STARTED 4 26.4kb 192.168.1.21 lW9hsd5
Thanks
https://www.elastic.co/guide/en/elasticsearch/guide/current/delete-doc.html
As already mentioned in Updating a Whole Document, deleting a document
doesn’t immediately remove the document from disk; it just marks it as
deleted. Elasticsearch will clean up deleted documents in the
background as you continue to index more data.
You might be facing some side effects of a _forcemerge on a non-read-only index:
Warning: Force merge should only be called against read-only indices. Running force merge against a read-write index can cause very large segments to be produced (>5Gb per segment), and the merge policy will never consider it for merging again until it mostly consists of deleted docs. This can cause very large segments to remain in the shards.
In this case I would suggest first making the index read-only:
PUT your_index/_settings
{
  "index": {
    "blocks.read_only": true
  }
}
Then run the force merge again, and afterwards re-enable writes on the index:
PUT your_index/_settings
{
  "index": {
    "blocks.read_only": false
  }
}
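Put together with curl, the whole sequence might look like this (the index name and host are placeholders):
# 1) make the index read-only
curl -X PUT 'http://localhost:9200/your_index/_settings' \
  -H 'Content-Type: application/json' -d '{"index": {"blocks.read_only": true}}'
# 2) force merge, expunging deleted docs as in your original command
curl -X POST 'http://localhost:9200/your_index/_forcemerge?only_expunge_deletes=true'
# 3) re-enable writes
curl -X PUT 'http://localhost:9200/your_index/_settings' \
  -H 'Content-Type: application/json' -d '{"index": {"blocks.read_only": false}}'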
In case this does not work, you can do a reindex from an old index into a new index and then delete the old index.
Is there a better way of deleting old logs?
Looks like you want to delete old log messages. Although you could issue a delete-by-query, there is in fact a better way: using the Rollover API.
The idea is to create a new index every time the current one gets too big. Writes go through a fixed alias, and the Rollover API makes the alias point to a new index when the old one becomes too old or too big. Then, to delete old data, you only have to delete the old indices.
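As an illustration (the alias name and conditions are hypothetical, not from your cluster), a rollover call looks roughly like this:
POST logs-write/_rollover
{
  "conditions": {
    "max_age": "7d",
    "max_docs": 1000000
  }
}
Writes keep going to the logs-write alias; when a condition is met, a new backing index is created and the alias is switched to it, so old data can be removed by deleting whole indices.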
Hope that helps!
I intend to use a Prometheus Histogram vector to monitor the execution time of request handlers in Go.
I register it like so:
import "github.com/prometheus/client_golang/prometheus"

var RequestTimeHistogramVec = prometheus.NewHistogramVec(
    prometheus.HistogramOpts{
        Name:    "request_duration_seconds",
        Help:    "Request duration distribution",
        Buckets: []float64{0.125, 0.25, 0.5, 1, 1.5, 2, 3, 4, 5, 7.5, 10, 20},
    },
    []string{"endpoint"},
)

func init() {
    prometheus.MustRegister(RequestTimeHistogramVec)
}
I use it like so:
startTime := time.Now()
// handle request here
metrics.RequestTimeHistogramVec.WithLabelValues("get:" + endpointName).Observe(time.Since(startTime).Seconds())
When I do an HTTP GET to the /metrics endpoint after using my endpoint a couple of times, I get, amongst other things, the following:
# HELP request_duration_seconds Request duration distribution
# TYPE request_duration_seconds histogram
request_duration_seconds_bucket{endpoint="get:/position",le="0.125"} 6
request_duration_seconds_bucket{endpoint="get:/position",le="0.25"} 6
request_duration_seconds_bucket{endpoint="get:/position",le="0.5"} 6
request_duration_seconds_bucket{endpoint="get:/position",le="1"} 6
request_duration_seconds_bucket{endpoint="get:/position",le="1.5"} 6
request_duration_seconds_bucket{endpoint="get:/position",le="2"} 6
request_duration_seconds_bucket{endpoint="get:/position",le="3"} 6
request_duration_seconds_bucket{endpoint="get:/position",le="4"} 6
request_duration_seconds_bucket{endpoint="get:/position",le="5"} 6
request_duration_seconds_bucket{endpoint="get:/position",le="7.5"} 6
request_duration_seconds_bucket{endpoint="get:/position",le="10"} 6
request_duration_seconds_bucket{endpoint="get:/position",le="20"} 6
request_duration_seconds_bucket{endpoint="get:/position",le="+Inf"} 6
request_duration_seconds_sum{endpoint="get:/position"} 0.022002387
request_duration_seconds_count{endpoint="get:/position"} 6
From the looks of it, all buckets are filled by the same count, equal to the total number of times I used my endpoint (6 times).
Why does this happen and how may I fix it?
Prometheus histogram buckets are cumulative, so in this case all the requests took less than or equal to 125ms.
In this case your choice of buckets may not be the best; you might want to make some of the buckets smaller.
This is not an error. Notice that the rule for filling a bucket is le=..., meaning less than or equal. Since all 6 requests completed quickly, all buckets were filled.
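The sum above is about 0.022s over 6 observations (roughly 3-4ms per request), so millisecond-range buckets would spread the counts out. A minimal sketch; the exact boundaries are an assumption about your latency range, not something from your code:

import "github.com/prometheus/client_golang/prometheus"

// 12 exponential buckets from 1ms up to ~2s; slower requests still land in
// the implicit +Inf bucket.
var RequestTimeHistogramVec = prometheus.NewHistogramVec(
    prometheus.HistogramOpts{
        Name:    "request_duration_seconds",
        Help:    "Request duration distribution",
        Buckets: prometheus.ExponentialBuckets(0.001, 2, 12),
    },
    []string{"endpoint"},
)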
I'm working with Elasticsearch 5.2.2 and I would like to fully merge the segments of my index after an intensive indexing operation.
I'm using the following REST API in order to merge all the segments:
http://localhost:9200/my_index/_forcemerge
(I've also tried adding max_num_segments=1 to the POST request.)
And ES replies with:
{
  "_shards": {
    "total": 16,
    "successful": 16,
    "failed": 0
  }
}
Note that my_index is composed of 16 shards.
But when I ask for node stats (http://localhost:9200/_nodes/stats) it replies with:
segments: {
count: 64,
[...]
}
So it seems that all the shards are split into 4 segments (64/16 = 4). In fact, an "ls" on the data directory confirms that there are 4 segments per shard:
~# ls /var/lib/elasticsearch/nodes/0/indices/ym_5_99nQrmvTlR_2vicDA/0/index/
_0.cfe _0.cfs _0.si _1.cfe _1.cfs _1.si _2.cfe _2.cfs _2.si _5.cfe _5.cfs _5.si segments_6 write.lock
And no concurrent merges are running (http://localhost:9200/_nodes/stats):
merges: {
current: 0,
[...]
}
And all the force_merge requests have been completed (http://localhost:9200/_nodes/stats):
force_merge: {
threads: 1,
queue: 0,
active: 0,
rejected: 0,
largest: 1,
completed: 3
}
I didn't have this problem with ES 2.2.
Does anyone know how to fully merge these segments?
Thank you all!
I am not sure whether your problem has been solved; I'm just posting here to let other people know.
This appears to be a bug; see the following issue. Using an empty JSON body makes it work.
https://github.com/TravisTX/elasticsearch-head-chrome/issues/16
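For example, with curl that would be something like the following (host and index name taken from the question):
curl -X POST 'http://localhost:9200/my_index/_forcemerge?max_num_segments=1' \
  -H 'Content-Type: application/json' -d '{}'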
API call:
https://www.googleapis.com/youtube/analytics/v1/reports?ids=channel==UCs4uXj0TcstDHqhHDUWlINg&start-date=2016-10-02&end-date=2016-10-02&metrics=views%2ccomments%2clikes%2cdislikes%2cshares%2cestimatedMinutesWatched%2caverageViewDuration%2caverageViewPercentage%2cannotationClickThroughRate%2cannotationCloseRate%2csubscribersGained%2csubscribersLost
token:
ya29.OAJc4bDrDoA6XVEmCI9KZK6rfIz68aXjibhZFQowWZxHJx7tt0qyvpxUryxtPZtN8IrN
Observe the parameters I used in the reports.query Try-it tool:
The correct format in the ids textfield is 'channel=={YOUR_CHANNEL_ID}'
The response will correspond to the order you enumerated your metrics. For example, the response to the parameters above was:
"rows": [
[
201,
0,
9
]
]
meaning:
201 - views
0 - likes
9 - comments
because my metrics textfield was in this order -> views,likes,comments.
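For instance, a request restricted to those three metrics, in that order, would look like this (the channel ID is a placeholder):
https://www.googleapis.com/youtube/analytics/v1/reports?ids=channel=={YOUR_CHANNEL_ID}&start-date=2016-10-02&end-date=2016-10-02&metrics=views,likes,comments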