Maximum call stack size exceeded when syncing ArrayBuffer - dexie

I have a database which stores images (a thumbnail and a full size version) as ArrayBuffers.
When the dexie-cloud addon tries to sync the changes, it throws an error.
Thumbnails work fine because they have been resampled down to around 50-60 KB in size. However, the full size image (which can be 100-150 KB) fails in the b64encode function of base64.js:
Failed to sync client changes RangeError: Maximum call stack size exceeded
at b64encode (base64.js:22:41)
at b64LexEncode (b64lex.js:3:21)
at Object.replace (ArrayBuffer.js:6:16)
at Object.<anonymous> (TypesonSimplified.js:37:31)
at JSON.stringify (<anonymous>)
at Object.stringify (TypesonSimplified.js:33:31)
at syncWithServer.ts:60:16
at Generator.next (<anonymous>)
at fulfilled (tslib.es6.js:73:58)
specifically at this line:
return btoa(String.fromCharCode.apply(null, ArrayBuffer.isView(b) ? b : new Uint8Array(b)));
Are there any workarounds for larger ArrayBuffers?

Thanks for the bug report! I've filed it as an issue on GitHub: https://github.com/dexie/Dexie.js/issues/1643
There's no workaround for the moment. Please subscribe to the issue on GitHub to get notified when it is resolved or when there is progress.
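For context on why only the larger buffers fail: String.fromCharCode.apply(null, bytes) passes every byte as a separate function argument, and JavaScript engines cap how many arguments a single call may receive, which is what raises the RangeError. The usual general-purpose technique is to encode the buffer in slices, roughly as in the sketch below (illustrative only, not a patched dexie-cloud base64.js):
function b64encodeChunked(b) {
  // Normalize to a Uint8Array, mirroring what the original b64encode does
  var bytes = ArrayBuffer.isView(b)
    ? new Uint8Array(b.buffer, b.byteOffset, b.byteLength)
    : new Uint8Array(b);
  var CHUNK = 0x8000;  // 32 K arguments per call stays well under engine limits
  var pieces = [];
  for (var i = 0; i < bytes.length; i += CHUNK) {
    // each slice is small enough for fromCharCode.apply to handle
    pieces.push(String.fromCharCode.apply(null, bytes.subarray(i, i + CHUNK)));
  }
  return btoa(pieces.join(''));
}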

Related

I have a separate machine for Elasticsearch. It is 500 GB, but logs are consuming the full memory within 24 hours. How do I compress it and free memory?

[2019-08-01T13:20:48,015][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 403 ({"type"=>"cluster_block_exception", "reason"=>"index [metricbea...delete (api)];"})
Your log message is cut off. Is it by any chance actually this one or close to it?
[logstash.outputs.elasticsearch] retrying failed action with response code: 403 ({"type"=>"cluster_block_exception", "reason"=>"blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"})
That would mean your disk is full (hit the floodstage watermark, which is at 95% by default). I can't really see anything related to memory in your log message.
To clear the floodstage: add disk space (or delete old data), and then you will need to unlock all affected indices with something like this:
PUT /_all/_settings
{
  "index.blocks.read_only_allow_delete": null
}

Draw a graph using D3 (v3) in a WebWorker

The goal is to draw a graph using D3 (v3) in a WebWorker (Rickshaw would be even better).
Requirement #1:
The storage space for the entire project should not exceed 1 MB.
Requirement #2:
Internet Explorer 10 should be supported
I already tried to pass the DOM element to the Web Worker.
This produced the following error message:
DOMException: Failed to execute 'postMessage' on 'Worker': HTMLDivElement object could not be cloned.
var worker = new Worker( 'worker.js' );
worker.postMessage( {
  'chart': document.querySelector('#chart').cloneNode(true)
} );
The GitHub user chrisahardie has made...
a small proof of concept showing how to generate a d3 SVG chart in a web worker and pass it back to the main UI thread to be injected into a webpage.
https://github.com/chrisahardie/d3-svg-chart-in-web-worker
He integrated jsdom into the browser with Browserify.
The problem:
The script is almost 5 MB, which is far too large for the application's 1 MB storage requirement.
So my question:
Does anyone have experience solving this problem, or any idea how it can be solved while still meeting the requirements?
Web Workers don't have access to the following JavaScript objects: the window object, the document object, and the parent object. So all we could do on the worker side would be to build something that can be used to quickly create the DOM afterwards. The worker(s) could, for example, process the datasets and do all the heavy computations, then pass the result back to the main thread as a set of arrays, as in the sketch below. For more details, you could check this article and this sample.
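A minimal sketch of that split, assuming D3 v3 is loaded on the main thread and plain arrays cross the worker boundary; the file names, message shape, and sample data here are made up for illustration:
// worker.js — heavy computation off the UI thread; no DOM access in here
self.onmessage = function (e) {
  var values = e.data.values;            // raw samples sent from the main thread
  var points = [];
  for (var i = 0; i < values.length; i++) {
    // any expensive transform (aggregation, smoothing, ...) would go here
    points.push({ x: i, y: values[i] });
  }
  self.postMessage({ points: points });  // plain objects/arrays clone fine across threads
};
// main.js — only the D3 v3 / SVG work stays on the UI thread
var worker = new Worker('worker.js');
worker.onmessage = function (e) {
  var data = e.data.points;
  var w = 400, h = 200;
  var svg = d3.select('#chart').append('svg').attr('width', w).attr('height', h);
  var x = d3.scale.linear().domain([0, d3.max(data, function (d) { return d.x; })]).range([0, w]);
  var y = d3.scale.linear().domain([0, d3.max(data, function (d) { return d.y; })]).range([h, 0]);
  var line = d3.svg.line()
    .x(function (d) { return x(d.x); })
    .y(function (d) { return y(d.y); });
  svg.append('path').attr('d', line(data)).attr('fill', 'none').attr('stroke', 'steelblue');
};
worker.postMessage({ values: [3, 1, 4, 1, 5, 9, 2, 6] });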

How can I resolve HTTPSConnectionPool(host='www.googleapis.com', port=443) Max retries exceeded with url (Google Cloud Storage)

I have created API using Django Rest Framework.
The API communicates with GCP Cloud Storage to store profile images (around 1 MB per picture).
While performing load testing (around 1000 requests/s) against that server, I encountered the following error.
It seems to be a GCP Cloud Storage max request-rate issue, but I am unable to figure out a solution.
Exception Type: SSLError at /api/v1/users
Exception Value: HTTPSConnectionPool(host='www.googleapis.com', port=443): Max retries exceeded with url: /storage/v1/b/<gcp-bucket-name>?projection=noAcl (Caused by SSLError(SSLError("bad handshake: SysCallError(-1, 'Unexpected EOF')",),))
Looks like you have the answer to your question here:
"...buckets have an initial IO capacity of around 1000 write requests
per second...As the request rate for a given bucket grows, Cloud
Storage automatically increases the IO capacity for that bucket"
Therefore it auto-scales automatically. The only thing is that you need to increase the requests/s gradually, as described here:
"If your request rate is expected to go over these thresholds, you should start with a request rate below or near the thresholds and then double the request rate no faster than every 20 minutes"
It looks like your bucket should get an increase of I/O capacity, which will help going forward. You are right at the edge (1000 req/s), and I suspect that is what is causing your error.
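If the load generator itself controls the traffic, that ramp-up schedule can be baked into it. A minimal sketch, assuming a JavaScript-based load driver and a hypothetical fireRequest() that performs one profile-image upload (both the driver and the helper name are illustrative, not part of the original setup):
var ratePerSecond = 400;                   // start below the ~1000 req/s threshold
var target = 1000;
function fireBatch() {
  for (var i = 0; i < ratePerSecond; i++) {
    fireRequest();                         // hypothetical: one upload against the API
  }
}
setInterval(fireBatch, 1000);              // issue `ratePerSecond` requests each second
setInterval(function () {
  // double the rate no faster than every 20 minutes, per the Cloud Storage docs
  ratePerSecond = Math.min(target, ratePerSecond * 2);
}, 20 * 60 * 1000);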

ruby aws-sdk multi-part upload doesn't honor requested storage class "STANDARD_IA"

I'm uploading fairly large objects (~500 MB) using the v2 aws-sdk gem as follows:
object = bucket.object("#{prefix}/#{object_name}")
raise RuntimeError, "failed to upload: #{object_name}" unless object.upload_file("#{object_name}", storage_class: "STANDARD_IA")
The uploads succeed and I can see the new objects in the console, but they all have a storage class of "Standard".
When I run this same code with smaller objects they're correctly created with storage class = "STANDARD_IA".
Is this a factor of the file size? Or the fact that it's a multi-part upload? Or something else? I didn't see anything in the documentation, but it's pretty "expansive" so I may just have missed it.
This was caused by a bug in aws-sdk-ruby. Pull request:
https://github.com/aws/aws-sdk-ruby/pull/1108

AppFabric QuotaExceededException

When trying to insert a large item into the AppFabric cache, I get an error:
Microsoft.ApplicationServer.Caching.DataCacheException:ErrorCode<ERRCA0016>:SubStatus<ES0001>:The connection was terminated, possibly due to server or network problems or serialized Object size is greater than MaxBufferSize on server. Result of the request is unknown. ---> System.ServiceModel.CommunicationException: The maximum message size quota for incoming messages (183886080) has been exceeded. To increase the quota, use the MaxReceivedMessageSize property on the appropriate binding element. ---> System.ServiceModel.QuotaExceededException: The maximum message size quota for incoming messages (183886080) has been exceeded. To increase the quota, use the MaxReceivedMessageSize property on the appropriate binding element.
--- End of inner exception stack trace ---
at System.Runtime.AsyncResult.End[TAsyncResult](IAsyncResult result)
at System.ServiceModel.Channels.TransportDuplexSessionChannel.EndReceive(IAsyncResult result)
at Microsoft.ApplicationServer.Caching.WcfClientChannel.CompleteProcessing(IAsyncResult result)
The problem is that I can find very little documentation on this issue.
I can see various links discussing it, all of which point to sites that no longer exist.
e.g.
http://www.biztalkgurus.com/appfabric/b/appfabric-syn/archive/2011/04/19/understanding-the-windows-azure-appfabric-service-bus-quotaexceededexception.aspx
I've also found the following which discusses setting the MaxReceivedMessageSize property.
http://msdn.microsoft.com/en-us/library/ee677250(v=azure.10).aspx
However, on my install of AppFabric 1.1 on Windows Server, I don't have the Set-ASAppServiceEndpoint cmdlet and cannot find where to get it.
The error is not in AppFabric itself. The service needs a larger message size quota to transmit the message to the other endpoint. Remember that this configuration must be the same at both endpoints.
