I am randomly getting RangeError: Maximum call stack size exceeded when sending data to connected client(s) via the Socket.IO room concept. I have gone through a few forums; they state that this exception may occur when the data object contains a self-referencing (circular) structure (Node.js + Socket.io Maximum call stack size exceeded), but in my code I get the exception with both plain string data and object data.
Below are sample code snippets.
Sending plain text
socket.emit('STATUS','OK');
Stack Trace
Error in SendPlainText : RangeError: Maximum call stack size exceeded
at TLSSocket.Socket._writeGeneric (net.js:1:1)
at TLSSocket.Socket._write (net.js:783:8)
at doWrite (_stream_writable.js:397:12)
at writeOrBuffer (_stream_writable.js:383:5)
at TLSSocket.Writable.write (_stream_writable.js:290:11)
at TLSSocket.Socket.write (net.js:707:40)
at Sender.sendFrame (/node_v0_10_36/node_modules/ws/lib/Sender.js:390:20)
at Sender.send (/node_v0_10_36/node_modules/ws/lib/Sender.js:312:12)
at WebSocket.send (/node_v0_10_36/node_modules/ws/lib/WebSocket.js:377:18)
at send (/node_v0_10_36/node_modules/engine.io/lib/transports/websocket.js:114:17)
Sending object data
var clients = socketio.sockets.adapter.rooms['ROOMID'];
if (clients != undefined && clients != null)
{
    console.log('Sending data to client');
    socketio.sockets.in('ROOMID').emit('DATA', data);
}
Stack Trace
Error in SendData : RangeError: Maximum call stack size exceeded
at /node_v0_10_36/node_modules/engine.io-parser/lib/index.js:236:12
at proxy (/node_v0_10_36/node_modules/after/index.js:23:13)
at /node_v0_10_36/node_modules/engine.io-parser/lib/index.js:255:7
at /node_v0_10_36/node_modules/engine.io-parser/lib/index.js:231:7
at Object.exports.encodePacket (/node_v0_10_36/node_modules/engine.io-parser/lib/index.js:79:10)
at encodeOne (/node_v0_10_36/node_modules/engine.io-parser/lib/index.js:230:13)
at map (/node_v0_10_36/node_modules/engine.io-parser/lib/index.js:253:5)
at Object.exports.encodePayload (/node_v0_10_36/node_modules/engine.io-parser/lib/index.js:235:3)
at XHR.Polling.send (/node_v0_10_36/node_modules/engine.io/lib/transports/polling.js:246:10)
at Socket.flush (/node_v0_10_36/node_modules/engine.io/lib/socket.js:431:20)
I am trying to bulk insert data from SQL into an Elasticsearch index. Below is the code I am using; the total number of records is around 1.5 million. I think it has something to do with a connection setting, but I am not able to figure it out. Can someone please help with this code or suggest a better way to do it?
public void InsertReceipts()
{
    IEnumerable<Receipts> receipts = GetFromDB(); // get receipts from SQL DB
    const string index = "receipts";
    var config = ConfigurationManager.AppSettings["ElasticSearchUri"];
    var node = new Uri(config);
    var settings = new ConnectionSettings(node).RequestTimeout(TimeSpan.FromMinutes(30));
    var client = new ElasticClient(settings);

    var bulkIndexer = new BulkDescriptor();

    foreach (var receiptBatch in receipts.Batch(20000)) //using MoreLinq for Batch
    {
        Parallel.ForEach(receiptBatch, (receipt) =>
        {
            bulkIndexer.Index<OfficeReceipt>(i => i
                .Document(receipt)
                .Id(receipt.TransactionGuid)
                .Index(index));
        });

        var response = client.Bulk(bulkIndexer);
        if (!response.IsValid)
        {
            _logger.LogError(response.ServerError.ToString());
        }

        bulkIndexer = new BulkDescriptor();
    }
}
The code works fine but takes around 10 minutes to complete. When I try to increase the batch size, it fails with the error below:
Invalid NEST response built from a unsuccessful low level call on
POST: /_bulk
Invalid Bulk items: OriginalException: System.Net.WebException: The
underlying connection was closed: An unexpected error occurred on a
send. ---> System.IO.IOException: Unable to write data to the
transport connection: An existing connection was forcibly closed by
the remote host. ---> System.Net.Sockets.SocketException: An existing
connection was forcibly closed by the remote host
A good place to start is with batches of 1,000 to 5,000 documents or, if your documents are very large, with even smaller batches.
It is often useful to keep an eye on the physical size of your bulk requests. One thousand 1KB documents is very different from one thousand 1MB documents. A good bulk size to start playing with is around 5-15MB in size.
I had a similar problem. It was solved by adding the following code before the ElasticClient connection is established:
System.Net.ServicePointManager.Expect100Continue = false;
I am writing a CAPL script for a diagnostic request and response. I can get a response if the data is up to 8 bytes, but if the data is multi-frame I am not getting a response, and the message on the trace is "Breaking connection between server and tester". How do I handle this? I know about the CAN TP frames, but in this case it should be handled by CAN/CANoe.
Please read about the CANoe ISO-TP protocol. In the case of a multi-frame response, the tester has to send a flow control frame (PCI type 0x30), which provides synchronization between sender and receiver. It also has fields for the block size of consecutive frames and the separation time. Try the CAPL code below.
variables
{
  message 0x710 msg = { dlc = 8, dir = rx };  // flow control frame on the tester's request ID
  byte check_byte0;
}

on message 0x718
{
  // Look at the PCI type in byte 0 of the response
  check_byte0 = this.byte(0) & 0x30;
  if (check_byte0 == 0x10)  // First Frame of a multi-frame response
  {
    // Build and send the flow control frame (0x30 = Clear To Send)
    msg.dword(0) = 0x30;
    msg.dword(4) = 0x00;
    output(msg);
  }
}
I was trying to send the request over a message ID in its most raw form, like 22 XX YY, which is a read-DID request. This works well if the response is less than 8 bytes; if the response is more than 8 bytes it won't work, so we need to use the diagnostic objects for the request and response as defined in the CDD (or any other description file) used in the project.
If you are not using a CDD, you need to use CCI (CAPL callback interfaces); that is mostly necessary for simulation setups.
I have a 4-node Elasticsearch cluster. I have a .NET console application that is designed to fill the cluster with data that comes from SQL. Everything works fine as long as I keep the rate of records being added (or deleted) fairly low. If I increase the number of threads, I eventually see timeout errors from my console app. The cluster has a total of 48 cores, and the average time it takes to index a record is about 0.1 seconds.
I have been able to get it to do about 7,000 records (documents) per second. I never see any exceptions thrown from Elasticsearch.Net that indicate low resources, I never see any of the indexing queues overloaded, and the servers never peak above about 10% CPU. It looks like the issue is not the cluster or its configuration but something in the NEST connection. Here is my code for the connection:
//set up the es client
Uri node = new Uri(ConfigurationManager.AppSettings["ESConnectionString"]);
var connectionPool = new SniffingConnectionPool(new[] { node });
ConnectionSettings settings = new ConnectionSettings(connectionPool);
settings.SetDefaultPropertyNameInferrer(p => p); //ditch the camelcase
settings.SniffOnConnectionFault(true);
settings.SniffOnStartup(true);
settings.SniffLifeSpan(TimeSpan.FromMinutes(1));
settings.SetPingTimeout(3000);
settings.SetTimeout(5000);
settings.MaximumRetries(5);
//settings.SetMaximumAsyncConnections(20);
settings.SetDefaultIndex("dummyindex");
settings.SetBasicAuthentication(ConfigurationManager.AppSettings["ESUser"], ConfigurationManager.AppSettings["ESPass"]);
ElasticClient client = new ElasticClient(settings);
I have the cluster set up with http.basic authentication, but I have tried with it turned on and off and there is no difference.
Here are some of the pertinent settings from the ES nodes:
discovery.zen.minimum_master_nodes: 2
discovery.zen.fd.ping_timeout: 30s
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["CACHE01","CACHE02","CACHE03","CACHE04"]
cluster.routing.allocation.node_concurrent_recoveries: 5
indices.recovery.max_bytes_per_sec: 50mb
http.basic.enabled: true
http.basic.user: "admin"
http.basic.password: "XXXXXXX"
At this point I can't figure out whether it's the .NET client or the servers that are the issue. Everything points to the client, but I'm at a loss for what to try next.
I don't think I can use the Bulk API, because I'm essentially just replicating changes from a SQL server, and in order to keep them in sync I execute each change as soon as it's received.
It seems that when I'm inserting new documents I can go at a much faster pace than when updating. I have read the update docs, and it almost reads like partial updates are better than full updates, but there is the whole get-update-delete-reindex cycle that seems to happen with every update.
According to the ES docs I'm not supposed to tweak the thread pools or the performance settings. I don't think I'm hitting any of those limits anyway, and the ES error logs don't indicate any issue either.
Anyone have advice on what I can do to track down the connection errors?
UPDATE:
This is the actual error:
Error: Unexpected result (SaveToES). Elasticsearch.Net.Exceptions.MaxRetryException: Sniffing known nodes in the cluster caused a maxretry exception of its own ---> Elasticsearch.Net.Exceptions.SniffException: Sniffing known nodes in the cluster caused a maxretry exception of its own ---> Elasticsearch.Net.Exceptions.MaxRetryException: Retry timeout 00:00:05 was hit after retrying 1 times: 'GET _nodes/_all/clear?timeout=3000'.
InnerException: WebException, InnerMessage: The operation has timed out, InnerStackTrace: at System.Net.HttpWebRequest.GetResponse()
at Elasticsearch.Net.Connection.HttpConnection.DoSynchronousRequest(HttpWebRequest request, Byte[] data, IRequestConfiguration requestSpecificConfig)
InnerException: WebException, InnerMessage: The operation has timed out, InnerStackTrace: at System.Net.HttpWebRequest.GetResponse()
at Elasticsearch.Net.Connection.HttpConnection.DoSynchronousRequest(HttpWebRequest request, Byte[] data, IRequestConfiguration requestSpecificConfig) ---> System.AggregateException: One or more errors occurred. ---> System.Net.WebException: The operation has timed out
at System.Net.HttpWebRequest.GetResponse()
at Elasticsearch.Net.Connection.HttpConnection.DoSynchronousRequest(HttpWebRequest request, Byte[] data, IRequestConfiguration requestSpecificConfig)
--- End of inner exception stack trace ---
--- End of inner exception stack trace ---
at Elasticsearch.Net.Connection.RequestHandlers.RequestHandlerBase.ThrowMaxRetryExceptionWhenNeeded[T](TransportRequestState`1 requestState, Int32 maxRetries)
at Elasticsearch.Net.Connection.RequestHandlers.RequestHandler.RetryRequest[T](TransportRequestState`1 requestState)
at Elasticsearch.Net.Connection.RequestHandlers.RequestHandler.DoRequest[T](TransportRequestState`1 requestState)
at Elasticsearch.Net.Connection.RequestHandlers.RequestHandler.RetryRequest[T](TransportRequestState`1 requestState)
at Elasticsearch.Net.Connection.RequestHandlers.RequestHandler.DoRequest[T](TransportRequestState`1 requestState)
at Elasticsearch.Net.Connection.RequestHandlers.RequestHandler.Request[T](TransportRequestState`1 requestState, Object data)
at Elasticsearch.Net.Connection.Transport.Elasticsearch.Net.Connection.ITransportDelegator.Sniff(ITransportRequestState ownerState)
--- End of inner exception stack trace ---
--- End of inner exception stack trace ---
at Elasticsearch.Net.Connection.Transport.Elasticsearch.Net.Connection.ITransportDelegator.Sniff(ITransportRequestState ownerState)
at Elasticsearch.Net.Connection.Transport.Elasticsearch.Net.Connection.ITransportDelegator.SniffClusterState(ITransportRequestState requestState)
at Elasticsearch.Net.Connection.Transport.Elasticsearch.Net.Connection.ITransportDelegator.SniffOnConnectionFailure(ITransportRequestState requestState)
at Elasticsearch.Net.Connection.RequestHandlers.RequestHandler.RetryRequest[T](TransportRequestState`1 requestState)
at Elasticsearch.Net.Connection.RequestHandlers.RequestHandler.DoRequest[T](TransportRequestState`1 requestState)
at Elasticsearch.Net.Connection.RequestHandlers.RequestHandler.Request[T](TransportRequestState`1 requestState, Object data)
at Elasticsearch.Net.Connection.Transport.DoRequest[T](String method, String path, Object data, IRequestParameters requestParameters)
at Elasticsearch.Net.ElasticsearchClient.DoRequest[T](String method, String path, Object data, IRequestParameters requestParameters)
at Elasticsearch.Net.ElasticsearchClient.IndicesCreatePost[T](String index, Object body, Func`2 requestParameters)
at Nest.RawDispatch.IndicesCreateDispatch[T](ElasticsearchPathInfo`1 pathInfo, Object body)
at Nest.ElasticClient.<CreateIndex>b__281_0(ElasticsearchPathInfo`1 p, ICreateIndexRequest d)
at Nest.ElasticClient.Nest.IHighLevelToLowLevelDispatcher.Dispatch[D,Q,R](D descriptor, Func`3 dispatch)
at Nest.ElasticClient.CreateIndex(Func`2 createIndexSelector)
at DCSCache.esvRepository.CreateIndex(String IndexName, String IndexVersion)
at DCSCache.esvRepository.Save(esv ItemToSave, String IndexName, String IndexVersion)
I have created a WebSocket server with the libwebsockets library, and the protocol list looks like this:
/* List of supported protocols and callbacks. */
static struct libwebsocket_protocols protocols[] = {
    { "plain-websocket-protocol" /* Custom name. */,
      callback_websocket,
      sizeof(struct websocket_client_real),
      0 },
    { NULL, NULL, 0, 0 } /* Terminator. */
};
When I use "html + javascript + chromium browser" as client to send websocket message bigger than 4096 bytes, the websocket server will receive the LWS_CALLBACK_RECEIVE callback more than one time, one message is splited to two or more, the max receive size is 4096.
How can I receive unlimited size websocket message on server side?
The lws_protocols struct now has an rx_buffer_size member, so you should be able to configure the 4096-byte size using this.
See the API doc for details: https://libwebsockets.org/libwebsockets-api-doc.html
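For example, here is a minimal sketch of the protocol list with that member set, assuming the newer struct lws_protocols layout; the 65536 value is only an illustrative buffer size, not something from the original answer:
/* Sketch only: newer lws API, rx_buffer_size set explicitly. */
static struct lws_protocols protocols[] = {
    { "plain-websocket-protocol",            /* Protocol name. */
      callback_websocket,                    /* Same callback as before. */
      sizeof(struct websocket_client_real),  /* Per-session data size. */
      65536 },                               /* rx_buffer_size: example value. */
    { NULL, NULL, 0, 0 }                     /* Terminator. */
};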
This answer will address this question:
How can I receive a WebSocket message of unlimited size on the server side?
It's relatively simple, actually, and you don't need to change your rx_buffer_size as was suggested before.
Check out the function size_t lws_remaining_packet_payload(struct lws *wsi), documented here: https://libwebsockets.org/libwebsockets-api-doc.html
You can use this function in your LWS_CALLBACK_RECEIVE callback handler to determine if the data your callback was given finishes a complete WebSocket "packet" (aka, message). If this function returns nonzero, then there is more data coming for this packet in a future callback. So your application should buffer this data until lws_remaining_packet_payload(wsi) returns 0. At that point, you have read a complete message and can handle the complete message as appropriate.
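As a rough illustration (not part of the original answer), here is a minimal sketch of such a receive handler using the newer lws API names. The per_session_data layout, its 64 KB size, and handle_complete_message() are hypothetical; only lws_remaining_packet_payload() is the documented library call. The protocol entry would set per_session_data_size to sizeof(struct per_session_data).
#include <string.h>
#include <libwebsockets.h>

struct per_session_data {
    char   buf[65536];   /* assumed maximum message size */
    size_t len;          /* bytes buffered so far */
};

static void handle_complete_message(const char *msg, size_t len);  /* your own handler */

static int callback_websocket(struct lws *wsi, enum lws_callback_reasons reason,
                              void *user, void *in, size_t len)
{
    struct per_session_data *pss = (struct per_session_data *)user;

    switch (reason) {
    case LWS_CALLBACK_RECEIVE:
        /* Append this chunk to the per-session buffer (data beyond the
         * assumed maximum is simply dropped in this sketch). */
        if (pss->len + len <= sizeof(pss->buf)) {
            memcpy(pss->buf + pss->len, in, len);
            pss->len += len;
        }
        /* Nonzero means more data for this message is still coming. */
        if (lws_remaining_packet_payload(wsi) == 0) {
            handle_complete_message(pss->buf, pss->len);
            pss->len = 0;
        }
        break;
    default:
        break;
    }
    return 0;
}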
I have a client/server application.
The client sends a question to the server and receives an answer.
This works great, but when I try to use the same socket again to send another question (without closing the socket after receiving the answer), the server doesn't get the second question.
Here's the code for sending and receiving the answer (this should run in a loop of some sort):
char* buf = "GET /count.htm HTTP/1.1\r\nHost: 127.0.0.1:666\r\nAccept: text/html,application/xhtml+xml\r\nAccept-Language: en-us\r\nAccept-Encoding: gzip, deflate\r\nUser-Agent: Mozilla/5.0\r\n\r\n";
int nBytesToSend= strlen(buf);
int iPos=0;
while(nBytesToSend)
{
int nSent=send(hClientSocket,buf,nBytesToSend,0);
assert(nSent!=SOCKET_ERROR);
nBytesToSend-=nSent;
iPos+=nSent;
}
//prepare buffer for incoming data
char serverBuff[256];
int nLeft=sizeof(serverBuff);
iPos=0;
do //loop till there are no more data
{
int nNumBytes=recv(hClientSocket,serverBuff+iPos,nLeft,0);
//check if cleint closed connection
if(!nNumBytes)
break;
assert(nNumBytes!=SOCKET_ERROR);
//update free space and pointer to next byte
nLeft-=nNumBytes;
iPos+=nNumBytes;
}while(1);
With this code it is impossible ever to send the second question, as you can never get out of the loop that reads the reply (except when you get a segmentation violation because your offset overflows the buffer, or when the peer closes the connection, in which case you can't send and it can't receive either).
And just asserting the absence of an error is never adequate. If you get an error you need to see what it was and react accordingly; at the very least you need to print it.
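For illustration only, here is a minimal sketch of how that receive loop could report errors and terminate cleanly. It assumes the same Winsock socket (hClientSocket) and buffer as in the question, plus the usual <winsock2.h>/<stdio.h> includes, and it treats the peer closing the connection (or the buffer filling up) as the end of the reply:
//prepare buffer for incoming data
char serverBuff[256];
int nLeft = sizeof(serverBuff);
int iPos = 0;

while (nLeft > 0)
{
    int nNumBytes = recv(hClientSocket, serverBuff + iPos, nLeft, 0);
    if (nNumBytes == 0)            //peer closed the connection: the reply is complete
        break;
    if (nNumBytes == SOCKET_ERROR) //report the actual error instead of just asserting
    {
        fprintf(stderr, "recv failed: %d\n", WSAGetLastError());
        break;
    }
    //update free space and offset
    nLeft -= nNumBytes;
    iPos += nNumBytes;
}
If the connection is meant to stay open for a second question (for example HTTP keep-alive), you would instead have to work out where the reply ends from the response itself, such as by parsing its Content-Length header, rather than waiting for the peer to close.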