I used this command:
elasticdump --input=/opt/index_5.json --output=http://esserver:9200/index_5 --limit=5000 --transform="doc._source=Object.assign({},doc)"
I get the error below while importing the data:
<--- JS stacktrace --->
==== JS stack trace =========================================
Security context: 0x3b9faf49e6e9
1: stringSlice(aka stringSlice) [0x8c113e13429] [buffer.js:~589] [pc=0x3cfe067fcdcf](this=0x34873cd026f1 ,buf=0x15dd55450ef1 ,encoding=0x3b9faf4bdd31 ,start=0,end=8)
2: write [0x2bf9d6645199] [/usr/lib/node_modules/elasticdump/node_modules/jsonparse/jsonparse.js:~127] [pc=0x3cfe06d95bbd](th...
FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory
1: 0x8fa0c0 node::Abort() [node]
2: 0x8fa10c [node]
Aborted
In my case, downgrading to elasticsearch 6.10 solved a similar memory issue. See https://github.com/taskrabbit/elasticsearch-dump/issues/628
I think the root cause of this out-of-memory error is the huge value you passed to the limit parameter, --limit=5000 (the default is 100). Sometimes even the default is too much; when I hit the same issue I just lower this value, to --limit=10 for example.
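For example, the same export with a small batch size and, if memory is still tight, a larger Node.js heap (the 4096 MB value is just an illustration; NODE_OPTIONS requires Node 8+):

NODE_OPTIONS="--max-old-space-size=4096" elasticdump \
  --input=/opt/index_5.json \
  --output=http://esserver:9200/index_5 \
  --limit=10 \
  --transform="doc._source=Object.assign({},doc)"

Each batch is buffered in memory before it is written, so a smaller --limit keeps less data in the heap at once.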
I am working with Infinispan. While adding a cache entry through REST,
POST /rest/v2/caches/{cacheName}/{cacheKey}
(and also through NiFi), I get the following error:
12:17:55,695 ERROR [org.infinispan.server.hotrod.BaseRequestProcessor] (HotRod-ServerIO-5-1) ISPN005003: Exception reported: org.infinispan.server.hotrod.InvalidMagicIdException: Error reading magic byte or message id: 10
at org.infinispan.server.hotrod:ispn-10.0#10.0.0.Beta3//org.infinispan.server.hotrod.HotRodDecoder.switch0(HotRodDecoder.java:208)
at org.infinispan.server.hotrod:ispn-10.0#10.0.0.Beta3//org.infinispan.server.hotrod.HotRodDecoder.switch1_0(HotRodDecoder.java:153)
at org.infinispan.server.hotrod:ispn-10.0#10.0.0.Beta3//org.infinispan.server.hotrod.HotRodDecoder.decode(HotRodDecoder.java:143)
at io.netty:ispn-10.0#4.1.30.Final//io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:502)
at io.netty:ispn-10.0#4.1.30.Final//io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:441)
at io.netty:ispn-10.0#4.1.30.Final//io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:278)
I am using the jboss/infinispan-server Docker image (docker pull jboss/infinispan-server), versions 10.0.0.beta and 10.0.0.CR1-3.
I am not getting any clue on how to trace the issue.
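For reference, this is roughly how I invoke the endpoint (curl shown here as a stand-in for NiFi; the host, port, cache name, and key are placeholders I chose):

curl -X POST -H "Content-Type: text/plain" \
  --data "test-value" \
  http://localhost:11222/rest/v2/caches/mycache/mykey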
We were getting the error:
OOM command not allowed when used memory > 'maxmemory'
and now I am trying to reproduce the error on my local system, but sadly failing :|
redis-cli info gives me this:
maxmemory_human:10.00M
maxmemory_policy:noeviction
and I am trying to force the cache to overflow, but used memory always stays around 10 MB:
used_memory_human:10.01M
What am I doing wrong?
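For completeness, this is roughly how I flood the cache (the value size and key names are just what I picked for the test; values are kept under the shell's per-argument limit):

# apply the limit and policy under test
redis-cli config set maxmemory 10mb
redis-cli config set maxmemory-policy noeviction

# write ~100 KB values until the 10 MB limit should be well exceeded
for i in $(seq 1 200); do
  redis-cli set "key:$i" "$(head -c 100000 /dev/zero | tr '\0' 'x')"
done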
When trying to query a GPDB (Greenplum) cluster, I am getting an out-of-memory error with SQL state 53400.
System-related information:
TOTAL RAM = 30 GB
SWAP = 15 GB
gp_vmem_protect_limit = 8192 MB
TOTAL segments = 8 primary + 8 mirror = 16
SEGMENT HOSTS = 2
Getting this error:
ERROR: Out of memory (seg2 slice109 datanode01:40002 pid=21691)
SQL state: 53400
Detail: VM protect failed to allocate 8388608 bytes from system, VM Protect 4161 MB available
We tried:
gpconfig -c gp_vmem_protect_limit -v 4114
vm.overcommit_ratio = 95
Then we get this error:
ERROR: XX000: Canceling query because of high VMEM usage. Used: 3704MB, available 410MB, red zone: 3702MB
Also, the runaway detector is configured as follows:
Prod=# show runaway_detector_activation_percent;
runaway_detector_activation_percent
-------------------------------------
90
(1 row)
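For reference, I also worked through the sizing formula from the Greenplum documentation with the numbers above (the formula is from the docs; the arithmetic and the per-host primary counts are mine, so please double-check):

gp_vmem = ((SWAP + RAM) - (7.5GB + 0.05 * RAM)) / 1.7
        = ((15GB + 30GB) - (7.5GB + 1.5GB)) / 1.7
        = 36GB / 1.7
        ≈ 21GB

# acting primaries per host: 8 primaries / 2 hosts = 4 normally,
# up to 8 on one host if every mirror there is promoted
gp_vmem_protect_limit = gp_vmem / max_acting_primary_segments
                      ≈ 21GB / 8 ≈ 2700MB   (failover worst case)
                      ≈ 21GB / 4 ≈ 5400MB   (no failover)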
Please suggest what the correct settings would be in this case.
Also, what is the root cause of the OOM error?
Any help would be much appreciated.
I've been working on a medium-sized Laravel project for quite a long time now, and of course I use the framework's debugger. But now, from time to time, I get the error page that says just "Whoops, something went wrong" without any details about the error. I see it a lot in AJAX requests, but when I refresh the page it's gone!
Finally the error showed up again, and I could see it in my terminal with the tail command. This is what I got:
[2016-12-28 14:54:04] production.ERROR: exception 'RuntimeException' with message 'No supported encrypter found. The cipher and / or key length are invalid.' in D:\shop\tess\vendor\laravel\framework\src\Illuminate\Encryption\EncryptionServiceProvider.php:45
Stack trace:
#0 D:\shop\tess\vendor\laravel\framework\src\Illuminate\Encryption\EncryptionServiceProvider.php(25): Illuminate\Encryption\EncryptionServiceProvider->getEncrypterForKeyAndCipher(NULL, 'AES-256-CBC')
#1 D:\shop\tess\vendor\laravel\framework\src\Illuminate\Container\Container.php(731): Illuminate\Encryption\EncryptionServiceProvider->Illuminate\Encryption\{closure}(Object(Illuminate\Foundation\Application), Array)
I've found this on GitHub and it helped: https://github.com/orchestral/testbench/issues/93
Make sure APP_DEBUG is set to true in your .env file.
You can check the errors with the following command: tail -f storage/logs/laravel.log. It could be a different error each time.
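In that issue the fix comes down to generating a valid application key. A minimal sketch, run from the project root of a Laravel 5.x install:

php artisan key:generate   # writes a fresh APP_KEY into .env
php artisan config:clear   # drop cached config that may still hold the old key (if your version has this command)

After that the "No supported encrypter found" message should go away, since the AES-256-CBC cipher then gets a key of the right length instead of NULL.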
Following the official Titan DB guide here, I am trying to run this command:
graph = TitanFactory.open('conf/titan-cassandra-es.properties')
I got this error:
Backend shorthand unknown: conf/titan-cassandra-es.properties
Obviously, the reason is the incorrect path to the titan-cassandra-es.properties file, so I changed it to:
graph = TitanFactory.open('../conf/titan-cassandra-es.properties')
and got this error:
Encountered unregistered class ID: 141.
The error happens in the following version:
titan-0.5.4-hadoop2
On titan-1.0.0-hadoop2 instead of this error message I get this one:
Invalid import definition: 'com.thinkaurelius.titan.hadoop.MapReduceIndexManagement'; reason: startup failed: script14747941661821834264593.groovy: 1: unable to resolve class com.thinkaurelius.titan.hadoop.MapReduceIndexManagement @ line 1, column 1. import com.thinkaurelius.titan.hadoop.MapReduceIndexManagement ^
1 error
And on titan-1.0.0-hadoop2 I get this one:
The input line is too long.
The syntax of the command is incorrect.
Does anyone know how to handle this issue?
It seems like you have not even managed to get Titan 1 to start up yet.
I do not believe Titan 1 supports Windows out of the box, i.e. the downloadable package will not just work on Windows.
That said, I have managed to get Titan DB 1 to work on Windows. To do so, all you have to do is install Cassandra 2.x on Windows; this guide may help you out. Start Cassandra and enable thrift connections.
With that done, you should be able to get Titan doing basic operations on Windows. From there you may find dealing with your current errors easier.
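A rough sketch of those steps, assuming default install locations (the C:\ paths are illustrative):

REM start Cassandra 2.x; thrift must be enabled (start_rpc: true in conf\cassandra.yaml)
C:\cassandra\bin\cassandra.bat

REM then launch the Gremlin console from the Titan directory
C:\titan\bin\gremlin.bat
gremlin> graph = TitanFactory.open('conf/titan-cassandra-es.properties')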
Side note: Windows support for Titan 0.5.x may be more substantial, so you could look into that as well.