How to track the total number of rejected and dropped events - Elasticsearch

What is the correct way to track the number of dropped or rejected events in a managed Elasticsearch cluster?

You can use GET /_nodes/stats/thread_pool, which gives you something like:
"thread_pool": {
  "bulk": {
    "threads": 4,
    "queue": 0,
    "active": 0,
    "rejected": 0,
    "largest": 4,
    "completed": 42
  },
  ...
  "flush": {
    "threads": 0,
    "queue": 0,
    "active": 0,
    "rejected": 0,
    "largest": 0,
    "completed": 0
  },
  ...

Another way to get more concise, better-formatted information about thread pools (especially if you are dealing with several nodes) is to use the _cat thread_pool API:
$ curl -XGET 'localhost:9200/_cat/thread_pool?v'
host      ip         bulk.active  bulk.queue  bulk.rejected  index.active  index.queue  index.rejected  search.active  search.queue  search.rejected
10.10.1.1 10.10.1.1  1            10          0              2             0            0               10             0             0
10.10.1.2 10.10.1.2  2            0           1              4             0            0               4              10            2
10.10.1.3 10.10.1.3  1            0           0              1             0            0               5              0             0
UPDATE
You can also decide which thread pools to show and, for each thread pool, which fields to include in the output. For instance, below we're showing the following fields for the search thread pool:
sqs: The maximum number of search requests that can be queued before being rejected
sq: The number of search requests currently in the search queue
sa: The number of currently active search threads
sr: The number of rejected search requests (since the last restart)
sc: The number of completed search requests (since the last restart)
Here is the command:
curl -s -XGET 'localhost:9200/_cat/thread_pool?v&h=ip,sqs,sq,sa,sr,sc'
ip sqs sq sa sr sc
10.10.1.1 100 0 1 0 62636120
10.10.1.2 100 0 2 0 15528863
10.10.1.3 100 0 4 0 64647299
10.10.1.4 100 0 5 372 103014657
10.10.1.5 100 0 2 0 13947055
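Note that rejected (and completed) are cumulative counters kept per node and reset when a node restarts, so tracking a running total over time means sampling the counters and diffing them. Below is a minimal polling sketch, not part of the original answer, written in Python with the requests library against the /_nodes/stats/thread_pool endpoint shown above; the endpoint URL and poll interval are assumptions to adapt:

import time
import requests

ES_URL = "http://localhost:9200"   # assumption: same endpoint as the curl examples above
POLL_SECONDS = 60                  # assumption: how often to sample the counters

def fetch_rejected():
    """Return {(node_name, pool_name): rejected} from GET /_nodes/stats/thread_pool."""
    stats = requests.get(f"{ES_URL}/_nodes/stats/thread_pool").json()
    counts = {}
    for node in stats["nodes"].values():
        for pool, pool_stats in node["thread_pool"].items():
            counts[(node["name"], pool)] = pool_stats["rejected"]
    return counts

previous = fetch_rejected()
while True:
    time.sleep(POLL_SECONDS)
    current = fetch_rejected()
    for key, rejected in current.items():
        delta = rejected - previous.get(key, 0)
        if delta > 0:        # counter grew since the last sample
            print(f"{key[0]}/{key[1]}: +{delta} rejected")
        elif delta < 0:      # counter went backwards, i.e. the node restarted
            print(f"{key[0]}/{key[1]}: counter reset (node restart?)")
    previous = current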


Why are there so many free movable DMA32 blocks on the x86 64-bit platform?

As the name suggests, I assume this zone is used for DMA. But 730 free blocks of order 10 amount to more than 1 GB of memory, which seems like an awful lot (see the quick calculation after the output below).
cat /proc/pagetypeinfo says:
sudo cat /proc/pagetypeinfo
Page block order: 9
Pages per block: 512
Free pages count per migrate type at order       0      1      2      3      4      5      6      7      8      9     10
Node    0, zone      DMA, type    Unmovable      0      1      1      0      2      1      1      0      1      0      0
Node    0, zone      DMA, type      Movable      0      0      0      0      0      0      0      0      0      1      3
Node    0, zone      DMA, type  Reclaimable      0      0      0      0      0      0      0      0      0      0      0
Node    0, zone      DMA, type   HighAtomic      0      0      0      0      0      0      0      0      0      0      0
Node    0, zone      DMA, type      Isolate      0      0      0      0      0      0      0      0      0      0      0
Node    0, zone    DMA32, type    Unmovable      1      0      0      0      0      0      1      1      1      1      0
Node    0, zone    DMA32, type      Movable      3      4      5      4      2      3      4      4      1      2    730
Node    0, zone    DMA32, type  Reclaimable      0      0      0      0      0      0      0      0      0      0      0
Node    0, zone    DMA32, type   HighAtomic      0      0      0      0      0      0      0      0      0      0      0
Node    0, zone    DMA32, type      Isolate      0      0      0      0      0      0      0      0      0      0      0
Node    0, zone   Normal, type    Unmovable     17      2      2      1      0      0      0      1      2     13      0
Node    0, zone   Normal, type      Movable     15      4      0     15      4      1      1      0      0      0    934
Node    0, zone   Normal, type  Reclaimable      0      6     21      9      6      3      3      1      2      0      0
Node    0, zone   Normal, type   HighAtomic      0      0      0      0      0      0      0      0      0      0      0
Node    0, zone   Normal, type      Isolate      0      0      0      0      0      0      0      0      0      0      0

Number of blocks type     Unmovable      Movable  Reclaimable   HighAtomic      Isolate
Node 0, zone      DMA             1            7            0            0            0
Node 0, zone    DMA32             2         1526            0            0            0
Node 0, zone   Normal           160         2314           78            0            0
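For reference, here is the arithmetic behind the "more than 1 GB" estimate, as a small Python snippet (an addition to the question, assuming the usual 4 KiB page size on x86-64):

PAGE_SIZE = 4 * 1024   # bytes; assumption: standard 4 KiB pages on x86-64
ORDER = 10             # an order-10 block is 2**10 = 1024 contiguous pages
FREE_BLOCKS = 730      # free order-10 Movable blocks in zone DMA32, from the output above

free_bytes = FREE_BLOCKS * (2 ** ORDER) * PAGE_SIZE
print(free_bytes / 2 ** 30)   # ~2.85 GiB, so indeed well over 1 GB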

Lua (trAInsported): trying to implement Wavefront Algorithm, not working

I'm trying to implement a wavefront algorithm and I have a problem with the function that produces the map of gradients.
I've tried several different versions of the code below and none of them worked.
The starting point for the algorithm (the goal) is set to 1 beforehand, and from there each neighbour's gradient should be incremented (as in any wavefront algorithm), provided that gradient hasn't been altered yet.
originX and originY are the goal coordinates from which the algorithm should start. mapMatrix is a global variable.
mapMatrix looks like this:
0 0 0 0 0 0 0
0 0 N 0 0 0 0
0 0 0 0 0 0 0
0 0 0 0 0 0 0
0 N 0 0 N 0 N
N N 0 0 N 0 0
0 0 0 0 0 0 0
(0 for rails, N(nil) for obstacles)
expected output example:
7 6 5 4 3 4 5
6 5 N 3 2 3 4
5 4 3 2 1 2 3
6 5 4 3 2 3 3
7 N 5 4 N 4 N
N N 6 5 N 5 6
9 8 7 6 7 6 7
And with this code for example:
function pathFinder(originX, originY)
    northDir = originY - 1
    eastDir = originX + 1
    southDir = originY + 1
    westDir = originX - 1
    if northDir > 0 and mapMatrix[originX][northDir] == 0 then
        mapMatrix[originX][northDir] = mapMatrix[originX][originY] + 1
        pathFinder(originX, northDir)
    end
    if eastDir <= 7 and mapMatrix[eastDir][originY] == 0 then
        mapMatrix[eastDir][originY] = mapMatrix[originX][originY] + 1
        pathFinder(eastDir, originY)
    end
    if southDir <= 7 and mapMatrix[originX][southDir] == 0 then
        mapMatrix[originX][southDir] = mapMatrix[originX][originY] + 1
        pathFinder(originX, southDir)
    end
    if westDir > 0 and mapMatrix[westDir][originY] == 0 then
        mapMatrix[westDir][originY] = mapMatrix[originX][originY] + 1
        pathFinder(westDir, originY)
    end
end
I get this mapMatrix:
0 0 0 0 3 4 5
0 0 N 0 2 10 6
0 0 0 0 1 9 7
0 0 0 0 0 0 8
0 N 0 0 N 0 N
N N 0 0 N 0 0
0 0 0 0 0 0 0
And if I switch the if statements around, it produces a different version of mapMatrix.
EDIT: After making northDir etc. local, the output looks like this:
33 24 23 22 3 4 5
32 25 N 21 2 11 6
31 26 27 20 1 10 7
30 29 28 19 20 9 8
31 N 29 18 N 10 N
N N 30 17 N 11 12
33 32 31 16 15 14 13
If more code or information is needed, I'd be happy to provide it.
Your code is simply wrong overall. Because pathFinder recurses inside the first if, it keeps walking in that one direction until it hits an obstacle, then walks in the next direction, and so on: a depth-first traversal rather than an expanding wavefront.
BFS is actually a pretty simple algorithm. It can be easily implemented iteratively on a queue, without any recursion, as follows:
1. Put the initial node into a queue.
2. Pop the first node from the queue and process it.
3. Push its unprocessed adjacent nodes onto the end of the queue.
4. If the queue is not empty, go to step 2.
In Lua, on a rectangular matrix, it can be implemented in about two or three dozen lines:
function gradient(matrix, originX, originY)
    -- Create queue and put origin position and initial value to it.
    local queue = { { originX, originY, 1 } }
    repeat
        -- Pop first position and value from the queue.
        local x, y, value = unpack(table.remove(queue, 1))
        -- Mark this position in the matrix.
        matrix[y][x] = value
        -- Check position to the top.
        if y > 1 and matrix[y - 1][x] == 0 then
            -- If it is not already processed, push it to the queue.
            table.insert(queue, { x, y - 1, value + 1 })
        end
        -- Check position on the left.
        if x > 1 and matrix[y][x - 1] == 0 then
            table.insert(queue, { x - 1, y, value + 1 })
        end
        -- Check position to the bottom.
        if y < #matrix and matrix[y + 1][x] == 0 then
            table.insert(queue, { x, y + 1, value + 1 })
        end
        -- Check position on the right.
        if x < #matrix[y] and matrix[y][x + 1] == 0 then
            table.insert(queue, { x + 1, y, value + 1 })
        end
    -- Repeat until the queue is empty.
    until #queue == 0
end

-- Just a helper function to print a matrix.
function printMatrix(matrix)
    for _, row in pairs(matrix) do
        for _, value in pairs(row) do
            io.write(string.format("%2s", value))
        end
        io.write('\n')
    end
end

local mapMatrix = {
    { 0, 0, 0, 0, 0, 0, 0, },
    { 0, 0, 'N', 0, 0, 0, 0, },
    { 0, 0, 0, 0, 0, 0, 0, },
    { 0, 0, 0, 0, 0, 0, 0, },
    { 0, 'N', 0, 0, 'N', 0, 'N', },
    { 'N', 'N', 0, 0, 'N', 0, 0, },
    { 0, 0, 0, 0, 0, 0, 0, },
}

gradient(mapMatrix, 5, 3)
printMatrix(mapMatrix)
--[[
Produces:
7 6 5 4 3 4 5
6 5 N 3 2 3 4
5 4 3 2 1 2 3
6 5 4 3 2 3 4
7 N 5 4 N 4 N
N N 6 5 N 5 6
9 8 7 6 7 6 7
]]
This is a complete script, runnable in the console.
Although this code is kept very simple for illustrative purposes, it is not very efficient: each removal of the first item from the queue causes reindexing of all the remaining items. For production code you should implement a linked list or something similar for the queue.
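Purely as an illustration of that last point, and not part of the original answer, here is the same breadth-first gradient fill in Python, where collections.deque pops from the front in O(1) and cells are marked as soon as they are enqueued so nothing is queued twice:

from collections import deque

def gradient(matrix, origin_x, origin_y):
    # Breadth-first fill of distances from (origin_x, origin_y); 'N' cells are obstacles.
    matrix[origin_y][origin_x] = 1            # mark cells when they are enqueued,
    queue = deque([(origin_x, origin_y)])     # so no cell ends up in the queue twice
    while queue:
        x, y = queue.popleft()                # O(1): no reindexing of the remaining items
        for nx, ny in ((x, y - 1), (x - 1, y), (x, y + 1), (x + 1, y)):
            if 0 <= ny < len(matrix) and 0 <= nx < len(matrix[ny]) and matrix[ny][nx] == 0:
                matrix[ny][nx] = matrix[y][x] + 1
                queue.append((nx, ny))

map_matrix = [
    [0, 0, 0, 0, 0, 0, 0],
    [0, 0, 'N', 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0],
    [0, 'N', 0, 0, 'N', 0, 'N'],
    ['N', 'N', 0, 0, 'N', 0, 0],
    [0, 0, 0, 0, 0, 0, 0],
]

gradient(map_matrix, 4, 2)   # 0-based indices; same goal cell as gradient(mapMatrix, 5, 3) above
for row in map_matrix:
    print(' '.join(format(v, '>2') for v in row))   # should print the same grid as the Lua script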

How to create relational matrix?

I have the following data:
client_id <- c(1,2,3,1,2,3)
product_id <- c(10,10,10,20,20,20)
connected <- c(1,1,0,1,0,0)
clientID_productID <- paste0(client_id,";",product_id)
df <- data.frame(client_id, product_id,connected,clientID_productID)
client_id product_id connected clientID_productID
1 1 10 1 1;10
2 2 10 1 2;10
3 3 10 0 3;10
4 1 20 1 1;20
5 2 20 0 2;20
6 3 20 0 3;20
The goal is to produce a relational matrix:
client_id product_id clientID_productID client_pro_1_10 client_pro_2_10 client_pro_3_10 client_pro_1_20 client_pro_2_20 client_pro_3_20
1 1 10 1;10 0 1 0 0 0 0
2 2 10 2;10 1 0 0 0 0 0
3 3 10 3;10 0 0 0 0 0 0
4 1 20 1;20 0 0 0 0 0 0
5 2 20 2;20 0 0 0 0 0 0
6 3 20 3;20 0 0 0 0 0 0
In other words, when product_id equals 10, clients 1 and 2 are connected. Importantly, I do not want client 1 to be connected with herself. When product_id equals 20, only one client is connected, meaning there is no relation, so that block should contain only zeros.
To be more specific, all that I am trying to create is a square matrix of relations, with all the combinations of client/product in the columns. A client can only be connected with another if they bought the same product.
I have searched a bunch and played with other code. The difference between this problem and others already answered is that I want to keep on my table client number 3, even though she never bought any product. I want to show that she does not have a relationship with any other client. Right now, I am able to create the matrix by stacking the relationships by product (How to create relational matrix in R?), but I am struggling with a way to not stack them.
I apologize if the question is not specific enough, or too specific. Thank you anyway, stackoverflow is a lifesaver for beginners.
I believe I figured it out.
It is for sure not the most elegant answer, though.
library(dplyr)     # for inner_join
library(reshape2)  # for dcast

client_id <- c(1,2,3,1,2,3)
product_id <- c(10,10,10,20,20,20)
connected <- c(1,1,0,1,0,0)
clientID_productID <- paste0(client_id, ";", product_id)
df <- data.frame(client_id, product_id, connected, clientID_productID)

# Self-join on product_id and connected to pair up clients that bought the same product
df2 <- inner_join(df[c(1:3)], df[c(1:3)], by = c("product_id", "connected"))
df2$Source <- paste0(df2$client_id.x, "|", df2$product_id)
df2$Target <- paste0(df2$client_id.y, "|", df2$product_id)
df2 <- df2[order(df2$product_id), ]
indices <- unique(as.character(df2$Source))

# Cast to a wide client/product x client/product matrix and clear the diagonal,
# since a client should not be connected with herself
mtx <- as.matrix(dcast(df2, Source ~ Target, value.var = "connected", fill = 0))
rownames(mtx) <- mtx[, "Source"]
mtx <- mtx[, -1]
diag(mtx) <- 0
mtx <- as.data.frame(mtx)
mtx <- mtx[indices, indices]
I got the result I wanted:
1|10 2|10 3|10 1|20 2|20 3|20
1|10 0 1 0 0 0 0
2|10 1 0 0 0 0 0
3|10 0 0 0 0 0 0
1|20 0 0 0 0 0 0
2|20 0 0 0 0 0 0
3|20 0 0 0 0 0 0

Elasticsearch query cyclically returns different results

Background:
I've inherited an Elasticsearch project that is returning some very odd results, and I can't really determine what I need to do to properly fix this.
Based on my reading of the code, it appears that 4 queries are run against the index based on search terms - the first one being exact match, the second and subsequent searches allowing more "slop" and "fuzziness". The search results with the highest scores are then combined into a single return; duplicate matches with lower scores are discarded.
Problem:
The queries with "slop" and "fuzziness" seem to cycle through three different result sets as I rerun them. I determined this by looking for a specific unique item in the results: it only shows up in 1 out of every 3 runs of the query. This cycling happens for all 3 of the non-exact-match queries.
Additional information:
Based on the results of _cat/segments?v&index=[MY_INDEX_NAME], it appears that the index has only one shard, with copies spread across 3 machines. This gives me some hope that it explains why we get the expected result only 1 out of every 3 times, but it's still very confusing as to why this would happen.
Band-aid fix:
I've been able to get consistent results for these problematic queries by increasing the "size" parameter from 50 to 150. This does slow the query down by a small amount, but at least it works for now. I am pretty sure that this isn't the correct solution.
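For what it's worth (this is an addition, not part of the original post): since the index has a single shard with two replicas, Elasticsearch can serve each search from a different copy, and copies can carry different numbers of deleted documents, which skews the term statistics used for scoring. One quick way to test that hypothesis is to pin the query to a fixed copy with a custom preference string; identical preference values are routed to the same shard copies. A rough Python sketch, where the index name, field, and query text are placeholders:

import requests

ES_URL = "http://localhost:9200"   # assumption: cluster endpoint
INDEX = "MY_INDEX"                 # the redacted index name from the post

query = {
    "size": 50,
    "query": {"match": {"title": {"query": "example search terms", "fuzziness": "AUTO"}}},  # hypothetical
}

# Run the same fuzzy query several times: first letting Elasticsearch pick a shard copy per
# request, then pinned to one copy via a custom preference value.
for params in ({}, {"preference": "debug-session"}):
    orderings = set()
    for _ in range(6):
        resp = requests.post(f"{ES_URL}/{INDEX}/_search", params=params, json=query).json()
        orderings.add(tuple(hit["_id"] for hit in resp["hits"]["hits"]))
    print(params or "no preference", "-> distinct result orderings:", len(orderings))

# If replica rotation is the cause, the unpinned run typically shows up to 3 distinct
# orderings (one per copy) while the pinned run shows just 1.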
Topology:
/_cat/nodes:
ip heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
xx.x.xx.x 2 58 0 0.00 0.00 0.00 i - elastic-ingest-001
xx.x.xx.x 55 86 9 0.59 0.47 0.33 md - elastic-data-002
xx.x.xx.x 1 57 0 0.03 0.02 0.00 i - elastic-ingest-000
xx.x.xx.x 21 94 9 1.05 0.96 0.64 md - elastic-data-001
xx.x.xx.x 18 84 7 0.22 0.21 0.19 md * elastic-data-000
xx.x.xx.xx 7 58 0 0.00 0.00 0.00 i - elastic-ingest-002
/_cat/indices?v:
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
green open .watcher-history-7-2018.06.05 aCfGd37MT5W2fJfK6HZjsQ 1 1 7 0 203kb 101.5kb
green open .watcher-history-7-2018.06.29 WpTLI_WUSVeDUblRh59uvg 1 1 12114 0 25.2mb 12.7mb
green open .watcher-history-7-2018.06.22 vt-LYb9NRSaZ46eReuixXg 1 1 11953 0 23.7mb 11.9mb
green open .monitoring-es-6-2018.06.30 dnNVGu7pQ1GriAZLfIAQaA 1 1 458763 672 587.3mb 292mb
green open .watcher-history-7-2018.06.02 8zM5yosrQIGJiSfzvMEC_A 1 1 0 0 460b 230b
green open .triggered_watches J9SWF-w8R2yd0aYtPBigAg 1 1 2 61157 53.7mb 24.4mb
green open .watcher-history-7-2018.06.07 x0aT6E71RNCdXjIFEhHOPw 1 1 12094 0 24.3mb 12.3mb
green open .watcher-history-7-2018.06.28 1nhqH54JQJOj9g63ov_NPw 1 1 8909 0 21mb 10.5mb
green open .watcher-history-7-2018.06.26 _rpVOWKkS1mERWgFA5Myag 1 1 9144 0 22.1mb 11.1mb
green open .watcher-history-7-2018.06.17 8zK45nMcR8WmGda4wGW82Q 1 1 12219 0 24.7mb 12.3mb
green open [DIFFERENT_INDEX01] 0GCz0zu3R6SaRjNHiWOa6g 1 2 1818246 0 1.3gb 470.9mb
green open .watcher-history-7-2018.06.20 FGhBth4OTJW-xusT7gplaw 1 1 12180 0 24.2mb 12.2mb
green open .watcher-history-7-2018.06.27 -lK0pwYiTvi3a08dO7AoyQ 1 1 8955 0 20.9mb 10.4mb
green open .watcher-history-7-2018.07.03 JmTpXIY7SXqoVodSpKRtMA 1 1 11896 0 24.3mb 12mb
green open .watcher-history-7-2018.07.05 GMCpHn7MTc-D1HEtDa-Ydw 1 1 7853 0 16.5mb 8.3mb
green open .watcher-history-7-2018.06.04 GXgFHhDdS9GJDou4sBd6RA 1 1 0 0 460b 230b
green open .watches a3dbI5smSauUB7nSc8alTw 1 1 6 0 221.2kb 110.5kb
green open .watcher-history-7-2018.06.19 aCzHvUa5SJ6n6wzKoXBJwA 1 1 12026 0 24mb 12.1mb
green open .watcher-history-7-2018.06.09 56pGfAiWQmeNog8JVZtICw 1 1 11983 0 23.9mb 11.9mb
green open .watcher-history-7-2018.06.01 MRqAmVqmThaIF_6KK5AlRQ 1 1 0 0 460b 230b
green open .watcher-history-7-2018.07.02 Ij_8wgk4T-aJ6-PYAf9gqg 1 1 12015 0 24.4mb 12.2mb
green open .watcher-history-7-2018.06.18 oZViVas5SoWd1D2_naVr3w 1 1 11996 0 23.9mb 11.9mb
green open .watcher-history-7-2018.06.03 2_V6x656RCKGTe0IZyCkqA 1 1 0 0 460b 230b
green open .watcher-history-7-2018.06.11 F4STy7gFS9a7e8qOV81AOA 1 1 11780 0 23.8mb 11.9mb
green open .watcher-history-7-2018.06.10 MjxPItf4SOKtk4l0bPH7Tg 1 1 11909 0 23.7mb 12mb
green open .monitoring-es-6-2018.07.04 3FPHjJFfTvuZrb71X3hcZA 1 1 501436 212 608mb 306.2mb
green open .watcher-history-7-2018.06.12 STvls1wbSvCOU_kRerqckg 1 1 11897 0 24.1mb 12.1mb
green open .monitoring-es-6-2018.07.05 k0wjXw5tR2KaBqrmvJAgCg 1 1 336928 0 488.2mb 242.3mb
green open .security-6 ldkFJ1TkRVScBdJIpA0Aeg 1 2 1 0 10.4kb 3.4kb
green open [DIFFERENT_INDEX02] RAcmKwl3RuiXMgGiRlX1HQ 2 2 46436060 0 60.8gb 20.2gb
green open .monitoring-es-6-2018.07.03 nmBQmnnoTL2wZuF0O_pt1w 1 1 484715 306 593.1mb 305.2mb
green open .monitoring-es-6-2018.06.28 lZR6SssRRx-yPQXk_vfBsw 1 1 97451 192 124.2mb 62mb
green open .watcher-history-7-2018.07.04 8nDY3NoORYmWLOGpX5hb_g 1 1 12082 0 24.9mb 12.4mb
green open .watcher-history-7-2018.07.01 _hmho-_zSu-D9H90gCKzWg 1 1 12072 0 24.9mb 12.5mb
green open .watcher-history-7-2018.06.15 PGXkh70YTjOYhFLjK9a8pA 1 1 11946 0 24.3mb 12.1mb
green open .watcher-history-7-2018.06.21 BEPkxD46TKm2y3yEaGgHNQ 1 1 12077 0 24mb 12.1mb
green open .watcher-history-7-2018.06.14 Y74e7fY4SKS1aT8PK-S2vg 1 1 11907 0 23.9mb 12mb
green open .watcher-history-7-2018.06.06 7opzBsl1SF-mQ_O8Y_5sJg 1 1 1424 0 3.1mb 1.5mb
green open .monitoring-es-6-2018.07.01 AOG4_pk8RB-UanCjMM6GHg 1 1 467312 294 583.3mb 284.9mb
green open .watcher-history-7-2018.06.24 pYKR7RG3RuGdgw7naxn-5Q 1 1 11955 0 23.8mb 11.8mb
green open .watcher-history-7-2018.06.30 j4GdW5xhSNKeqT_c1376AQ 1 1 12125 0 25.1mb 12.7mb
green open [DIFFERENT_INDEX03] CDDpop1nTv6E3466IIhzCg 1 2 4591962 766253 9.4gb 2.6gb
green open .watcher-history-7-2018.06.08 5eP2tPteTwGnoGJhQ37HoA 1 1 11848 0 23.8mb 12.1mb
green open .watcher-history-7-2018.06.25 7xbkQaObSQWJhg93_PmFQw 1 1 12041 0 24.8mb 12.4mb
green open .monitoring-es-6-2018.07.02 HBRphDn_TcSEXIiFn0ZtQg 1 1 475272 300 593.9mb 295mb
green open .watcher-history-7-2018.06.13 CWOQnBuKTNa-DLGvo8XlMQ 1 1 11909 0 23.7mb 11.9mb
green open [MY_INDEX] NdA3qJ16RGa5hpxvKpsDsg 1 2 10171359 1260206 24.1gb 6.4gb
green open .monitoring-alerts-6 5HGKo73hQqa0dakVhdon6w 1 1 48 3 127.1kb 52.1kb
green open .watcher-history-7-2018.06.16 7xyor_rvTemap3DWx6vkqg 1 1 12015 0 24.2mb 12.1mb
green open .monitoring-es-6-2018.06.29 UfXjNo-ATjKKA0Hv5jZw-A 1 1 450751 0 580.1mb 287.3mb
green open .watcher-history-7-2018.06.23 MyZMWHeYSm65MDen6WSGkw 1 1 11919 0 23.8mb 11.9mb
/MY_INDEX/_search_shards
{
  "nodes": {
    "9cP8Z9B8SFqq9Plszz7-HQ": {
      "name": "elastic-data-000",
      "ephemeral_id": "5Mm87T8lR5CFoIjoFGLQGg",
      "transport_address": "removed",
      "attributes": {}
    },
    "gg6rbEX8QdqujYjuAu9kvw": {
      "name": "elastic-data-002",
      "ephemeral_id": "I6ZpdVLgTyigh-2f7gtMFQ",
      "transport_address": "removed",
      "attributes": {}
    },
    "JDakz0EGT6aib0m87CfiCg": {
      "name": "elastic-data-001",
      "ephemeral_id": "c-Z3VRmtTsubCbXiSfsyOg",
      "transport_address": "removed",
      "attributes": {}
    }
  },
  "indices": {
    "MY_INDEX": {}
  },
  "shards": [
    [
      {
        "state": "STARTED",
        "primary": true,
        "node": "9cP8Z9B8SFqq9Plszz7-HQ",
        "relocating_node": null,
        "shard": 0,
        "index": "MY_INDEX",
        "allocation_id": {
          "id": "IQzbKGCMR9O0BobnePiKpg"
        }
      },
      {
        "state": "STARTED",
        "primary": false,
        "node": "gg6rbEX8QdqujYjuAu9kvw",
        "relocating_node": null,
        "shard": 0,
        "index": "MY_INDEX",
        "allocation_id": {
          "id": "3fvXIyXGTa2NgAsb_uv78A"
        }
      },
      {
        "state": "STARTED",
        "primary": false,
        "node": "JDakz0EGT6aib0m87CfiCg",
        "relocating_node": null,
        "shard": 0,
        "index": "MY_INDEX",
        "allocation_id": {
          "id": "whHOsuxfTdSnQDi-9RAuKw"
        }
      }
    ]
  ]
}

StackExchange.Redis TIMEOUT growing unsent queue

Our test environment last weekend saw a number of VMs start logging timeouts where the unsent queue just kept growing:
Timeout performing GET 0:B:ac64ebd0-3640-4b7b-a108-7fd36f294640, inst:
0, mgr: ExecuteSelect, queue: 35199, qu: 35199, qs: 0, qc: 0, wr: 0,
wq: 0, in: 0, ar: 0, IOCP: (Busy=2,Free=398,Min=4,Max=400), WORKER:
(Busy=5,Free=395,Min=4,Max=400)
Timeout performing SETEX 0:B:pfed2b3f5-fbbf-4ed5-9a58-f1bd888f01,
inst: 0, mgr: ExecuteSelect, queue: 35193, qu: 35193, qs: 0, qc: 0,
wr: 0, wq: 0, in: 0, ar: 0, IOCP: (Busy=2,Free=398,Min=4,Max=400),
WORKER: (Busy=6,Free=394,Min=4,Max=400)
I've read quite a few posts on analyzing these timeouts, but most of them don't involve the unsent message queue growing. No connectivity errors were logged during this time, and an AppPool recycle resolved the issue. Has anyone else seen this issue before?
Some potentially relevant extra info:
Same timeouts seen on 1.0.450 and 1.0.481 versions of StackExchange.Redis nuget package
ASP.Net v4.5 Web API 1.x site was the one affected
Upgraded to Redis 3.0.4 (from 3.0.3) the same week the errors were encountered (but days prior)
Installed New Relic .NET APM v5.5.52.0 that includes some StackExchange.Redis instrumentation (https://docs.newrelic.com/docs/release-notes/agent-release-notes/net-release-notes/net-agent-55520), again, a couple days prior to the timeouts. We've rolled this back here just in case.
I'm encountering the same issue.
To investigate, we log the ConnectionCounters of the ConnectionMultiplexer every 10 seconds for monitoring.
It shows only pendingUnsentItems growing, which means StackExchange.Redis is not sending to or receiving from the socket.
completedAsynchronously completedSynchronously pendingUnsentItems responsesAwaitingAsyncCompletion sentItemsAwaitingResponse
1 10 4 0 0
1 10 28 0 0
1 10 36 0 0
1 10 51 0 0
1 10 65 0 0
1 10 72 0 0
1 10 85 0 0
1 10 104 0 0
1 10 126 0 0
1 10 149 0 0
1 10 169 0 0
1 10 190 0 0
1 10 207 0 0
1 10 230 0 0
1 10 277 0 0
1 10 296 0 0
...snip
1 10 19270 0 0
1 10 19281 0 0
1 10 19291 0 0
1 10 19302 0 0
1 10 19313 0 0
I guess the socket writer thread was stopped?
My environment is:
StackExchange.Redis 1.0.481
Windows Server 2012 R2
.NET Framework 4.5
ASP.NET MVC 5.2.3
Installed New Relic .NET APM v5.7.17.0
Looks like the issue was seen when using New Relic .NET APM between versions 5.5-5.7 and is fixed in 5.8.
