I'm running MATLAB R2015b on my Mac with OS X 10.11 El Capitan. When starting MATLAB, changing the directory in the file explorer, or saving a file / running an unsaved file, a yellow "processing" box appears in the bottom left and MATLAB consumes more and more resources. When the process is not cancelled, MATLAB uses more than 400% CPU (with low memory usage) and does not respond for about 10 minutes or even longer. Most of the time there is no feedback about what happened. Today I got the following error after startup and the "processing" phase:
Exception in thread "Explorer NavigationContext request queue" java.lang.OutOfMemoryError: GC overhead limit exceeded
at java.lang.String.substring(String.java:1913)
at java.util.StringTokenizer.nextToken(StringTokenizer.java:352)
at com.mathworks.matlab.api.explorer.FileLocation.getTokens(FileLocation.java:268)
at com.mathworks.matlab.api.explorer.FileLocation.getParent(FileLocation.java:123)
at com.mathworks.mlwidgets.explorer.model.vfs.VirtualFileSystem.resolveLocation(VirtualFileSystem.java:285)
at com.mathworks.mlwidgets.explorer.model.vfs.VirtualFileSystem.getTarget(VirtualFileSystem.java:276)
at com.mathworks.mlwidgets.explorer.model.vfs.VirtualFileSystem.resolveLocation(VirtualFileSystem.java:285)
at com.mathworks.mlwidgets.explorer.model.vfs.VirtualFileSystem.getTarget(VirtualFileSystem.java:276)
at com.mathworks.mlwidgets.explorer.model.vfs.VirtualFileSystem.resolveLocation(VirtualFileSystem.java:285)
at com.mathworks.mlwidgets.explorer.model.vfs.VirtualFileSystem.getTarget(VirtualFileSystem.java:276)
at com.mathworks.mlwidgets.explorer.model.vfs.VirtualFileSystem.resolveLocation(VirtualFileSystem.java:285)
at com.mathworks.mlwidgets.explorer.model.vfs.VirtualFileSystem.getTarget(VirtualFileSystem.java:276)
at com.mathworks.mlwidgets.explorer.model.vfs.VirtualFileSystem.isMountPoint(VirtualFileSystem.java:239)
at com.mathworks.mlwidgets.explorer.model.vfs.VirtualFileSystem.toExternalEntry(VirtualFileSystem.java:324)
at com.mathworks.mlwidgets.explorer.model.vfs.VirtualFileSystem.toExternalEntry(VirtualFileSystem.java:319)
at com.mathworks.mlwidgets.explorer.model.vfs.VirtualFileList$MountingReceiver.receive(VirtualFileList.java:101)
at com.mathworks.mlwidgets.explorer.model.vfs.VirtualFileList$MountingReceiver.receive(VirtualFileList.java:90)
at com.mathworks.util.AsyncReceiverUtils$3.receive(AsyncReceiverUtils.java:77)
at com.mathworks.mlwidgets.explorer.model.realfs.StatToEntryAdapter.receive(StatToEntryAdapter.java:55)
at com.mathworks.mlwidgets.explorer.model.realfs.StatToEntryAdapter.receive(StatToEntryAdapter.java:16)
at com.mathworks.util.NativeJava.listFiles(Native Method)
at com.mathworks.mlwidgets.explorer.model.realfs.RealFileList.readFilesAndFolders(RealFileList.java:50)
at com.mathworks.mlwidgets.explorer.model.overlayfs.OverlayFileList.readFilesAndFolders(OverlayFileList.java:42)
at com.mathworks.mlwidgets.explorer.model.vfs.VirtualFileList.readFilesAndFolders(VirtualFileList.java:52)
at com.mathworks.mlwidgets.explorer.model.table.UiFileList.readAndUpdateCache(UiFileList.java:358)
at com.mathworks.mlwidgets.explorer.model.table.UiFileList.access$500(UiFileList.java:43)
at com.mathworks.mlwidgets.explorer.model.table.UiFileList$6.run(UiFileList.java:323)
at com.mathworks.util.RequestQueue.execute(RequestQueue.java:129)
at com.mathworks.util.RequestQueue.access$000(RequestQueue.java:25)
at com.mathworks.util.RequestQueue$2.run(RequestQueue.java:79)
at java.lang.Thread.run(Thread.java:745)
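One thing I plan to try, though I'm not sure it addresses the Explorer recursion shown above: "GC overhead limit exceeded" means MATLAB's embedded JVM is running out of Java heap, and the heap can be enlarged under Preferences > General > Java Heap Memory, or with a java.opts file in the MATLAB startup folder (or matlabroot/bin/maci64), for example:

    # Rough sketch (heap value is arbitrary): write a java.opts file that raises
    # the JVM's maximum heap, then restart MATLAB so it picks the option up.
    echo "-Xmx1024m" > java.opts

If the VirtualFileSystem.resolveLocation recursion keeps growing no matter how much heap is available, a looping symlink or an unreachable network mount on the current path would be another thing to rule out.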
I downloaded the Stanford segmenter and I am following the instructions, but I am getting a memory error; the full message is here:
Exception in thread "main" java.lang.OutOfMemoryError: GC overhead limit exceeded
at java.util.regex.Pattern.matcher(Pattern.java:1093)
at edu.stanford.nlp.wordseg.Sighan2005DocumentReaderAndWriter.shapeOf(Sighan2005DocumentReaderAndWriter.java:230)
at edu.stanford.nlp.wordseg.Sighan2005DocumentReaderAndWriter.access$300(Sighan2005DocumentReaderAndWriter.java:49)
at edu.stanford.nlp.wordseg.Sighan2005DocumentReaderAndWriter$CTBDocumentParser.apply(Sighan2005DocumentReaderAndWriter.java:169)
at edu.stanford.nlp.wordseg.Sighan2005DocumentReaderAndWriter$CTBDocumentParser.apply(Sighan2005DocumentReaderAndWriter.java:114)
at edu.stanford.nlp.objectbank.LineIterator.setNext(LineIterator.java:42)
at edu.stanford.nlp.objectbank.LineIterator.<init>(LineIterator.java:31)
at edu.stanford.nlp.objectbank.LineIterator$LineIteratorFactory.getIterator(LineIterator.java:108)
at edu.stanford.nlp.wordseg.Sighan2005DocumentReaderAndWriter.getIterator(Sighan2005DocumentReaderAndWriter.java:86)
at edu.stanford.nlp.objectbank.ObjectBank$OBIterator.setNextObjectHelper(ObjectBank.java:435)
at edu.stanford.nlp.objectbank.ObjectBank$OBIterator.setNextObject(ObjectBank.java:419)
at edu.stanford.nlp.objectbank.ObjectBank$OBIterator.<init>(ObjectBank.java:412)
at edu.stanford.nlp.objectbank.ObjectBank.iterator(ObjectBank.java:250)
at edu.stanford.nlp.sequences.ObjectBankWrapper.iterator(ObjectBankWrapper.java:45)
at edu.stanford.nlp.ie.AbstractSequenceClassifier.classifyAndWriteAnswers(AbstractSequenceClassifier.java:1193)
at edu.stanford.nlp.ie.AbstractSequenceClassifier.classifyAndWriteAnswers(AbstractSequenceClassifier.java:1137)
at edu.stanford.nlp.ie.AbstractSequenceClassifier.classifyAndWriteAnswers(AbstractSequenceClassifier.java:1091)
at edu.stanford.nlp.ie.crf.CRFClassifier.main(CRFClassifier.java:3023)
Before executing the file I tried increasing the heap space by doing export JAVA_OPTS=-Xmx4000m. I also tried splitting the file but still got the same error; I split it into 8 chunks of around 15 MB each. What should I do to fix the memory problem?
The segment.sh script that ships with the segmenter limits the memory to 2G, which is probably the cause of the error. Editing that file will hopefully fix the issue for you.
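To be concrete, something along these lines should do it; the exact java line differs between segmenter releases, so check your copy of segment.sh before editing (and note that exporting JAVA_OPTS has no effect here, because the script passes its own heap flag straight to java):

    # Rough sketch: segment.sh calls java with a hard-coded heap cap, roughly
    #   java -mx2g -cp "$BASEDIR/*" edu.stanford.nlp.ie.crf.CRFClassifier ...
    # Bumping that flag (here to 4 GB) gives the segmenter more memory.
    sed -i 's/-mx2g/-mx4g/' segment.sh    # BSD/macOS sed: sed -i '' 's/-mx2g/-mx4g/' segment.sh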
In a VB6 application I use the WaveInRecorder class to record and monitor the audio input of the sound card. With this application I record files of 1 hour each; then the recording stops and a new recording starts. The application runs 24/7. When I look in the Task Manager on Windows 10, I see that the memory usage is higher every day, so my conclusion is that there is a memory leak in my application.
I found out that it happens when I call clsRecorder.StartRecord. I tried for a couple of hours to find where it happens, but I can't find it.
I tested it this way: every second I call clsRecorder.StartRecord and I see the memory usage increase. I think something goes wrong in 'InitWaveInWnd'.
What can cause a memory leak? All objects are set to Nothing. Can it be caused by a variable? Does anybody have an idea?
To start the recording I use:
If Not clsRecorder.StartRecord("44100", Val(ini.Stereo) + 1) Then
MsgBox "Unable to start monitoring!", vbExclamation
End If
To stop the recording I use:
clsRecorder.StopRecord
Download the WaveInRecorder class:
https://www.vbforums.com/attachment.php?attachmentid=82714&d=1298433000
Status: the problem has improved, but compared to other users' reports it persists.
I have moved to UE 4.27.0 and the startup time dropped from 11 minutes (v4.26.2) to 6 minutes! (RAM usage dropped too.) But that still doesn't compare to the "almost instant" startup other people report...
It is not compiling anything, not even shaders; this is about the 6th time I have run it for this project.
Should I try disabling plugins? I'm new to UE and don't want to make it harder to use. Though, for example, I have nothing VR-related to test, so those plugins could really be disabled from the start.
HD READ SPEED? NO
I tested moving the whole UE4Editor engine directory (100 GB) to a 3x SSD stripe, but the UE4Editor startup time stayed the same. The HDD where it normally lives is fast too, though not as fast as the 3x SSD stripe.
CPU USAGE? MAYBE. If it could use all 4 cores, would that solve it?
UE4Editor startup uses A SINGLE CORE ONLY. I can confirm this with htop and System Monitor: only a single core is being used at 100%, and which core it is changes among the 4, so only one is at 100% at a time.
I tested the command-line parameter -USEALLAVAILABLECORES after the project path for UE4Editor (see the sketch below), but nothing changed. I read that option is ignored on some machines, so maybe if I patch how it is handled it could work on mine?
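For clarity, this is roughly the invocation I tested (the binary location and project path below are placeholders, not my exact paths):

    # Rough sketch: launch the editor with a project plus the multi-core flag.
    ./Engine/Binaries/Linux/UE4Editor /path/to/MyProject/MyProject.uproject -USEALLAVAILABLECORES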
GPU? no?
A report about a (weak) integrated graphics card says it doesn't affect the startup time.
LOG for UE4Editor v4.27.0 with the new biggest intervals ("..." means omitted log lines, to make it easier to read; "!(interval in seconds)" is just an annotation to make the gaps easier to see, no lines are omitted there):
[2021.09.15-23.38.20:677][ 0]LogHAL: Linux SourceCodeAccessSettings: NullSourceCodeAccessor
!22s
[2021.09.15-23.38.42:780][ 0]LogTcpMessaging: Initializing TcpMessaging bridge
[2021.09.15-23.38.42:782][ 0]LogUdpMessaging: Initializing bridge on interface 0.0.0.0:0 to multicast group 230.0.0.1:6666.
!16s
[2021.09.15-23.38.58:158][ 0]LogPython: Using Python 3.7.7
...
[2021.09.15-23.39.01:817][ 0]LogImageWrapper: Warning: PNG Warning: Duplicate iCCP chunk
!75s
[2021.09.15-23.40.16:951][ 0]SourceControl: Source control is disabled
...
[2021.09.15-23.40.26:867][ 0]LogAndroidPermission: UAndroidPermissionCallbackProxy::GetInstance
!16s
[2021.09.15-23.40.42:325][ 0]LogAudioCaptureCore: Display: No Audio Capture implementations found. Audio input will be silent.
...
[2021.09.15-23.41.08:207][ 0]LogInit: Transaction tracking system initialized
!9s
[2021.09.15-23.41.17:513][ 0]BlueprintLog: New page: Editor Load
!23s
[2021.09.15-23.41.40:396][ 0]LocalizationService: Localization service is disabled
...
[2021.09.15-23.41.45:457][ 0]MemoryProfiler: OnSessionChanged
!13s
[2021.09.15-23.41.58:497][ 0]LogCook: Display: CookSettings for Memory: MemoryMaxUsedVirtual 0MiB, MemoryMaxUsedPhysical 16384MiB, MemoryMinFreeVirtual 0MiB, MemoryMinFreePhysical 1024MiB
SPECS:
I'm using Ubuntu 20.04.
My CPU has 4 cores at 3.6 GHz.
GeForce GT 710 1GB.
Related question but for older UE4: https://answers.unrealengine.com/questions/987852/view.html
Unreal Engine needs a high-end PC with a lot of RAM, fast SSDs, a good CPU and a mid-range graphics card. First of all, there are always some shaders that need to be compiled by the engine, and a lot of assets to load at startup. Since you're on Linux, you are probably using a self-compiled Unreal Engine version... not the best thing to do as a newcomer, because this can cause several problems with load time, startup, compiling and a lot of other things. If it's your first time using Unreal, try using it on Windows; everything is easier there.
[ERROR] NVAPI error (C:\code\rtsdk\adobe-ae\cc13.1\src\Util\DriverInfo.cpp:78):
[LOG 4] CPU fallback enabled
Getting this error while executing the following command (in an After Effects-enabled environment):
C:\Program Files\Adobe\Adobe After Effects CC 2015.3\Support Files>aerender -project D:\proj\01.aep
Please reply ASAP.
Disable Ray-traced 3D in target comps
I mean, it was rendering at 40 seconds per frame, which came to about 4 hours for a minute of footage.
After spending quite some time on it, it turned out this was what was triggering the "CPU fallback enabled" message.
And, for some reason, it only affected comps that had cameras in them.
I have a Core Data iOS app that uses private queue concurrency in a background process. I'm getting a deadlock that makes the UI freeze up from time to time (fairly regularly, to be honest), but all the info I get from the debugger (LLDB) is that it is stuck on pthread_mutex_lock. The stack trace is no longer than that, which makes debugging nearly impossible:
thread #1: tid = 0x2503, 0x3b5060fc libsystem_kernel.dylib`__psynch_mutexwait + 24, stop reason = signal SIGSTOP
frame #0: 0x3b5060fc libsystem_kernel.dylib`__psynch_mutexwait + 24
frame #1: 0x3b44f128 libsystem_c.dylib`pthread_mutex_lock + 392
The Xcode process pane similarly shows only those two entries on the stack.
I'm quite new to this multithreading stuff so am at a total loss where to begin with fixing the issue. Any suggestions for how to go about debugging this?
Your stack is obviously longer than two frames; you can't start a thread with pthread_mutex_lock, so the truncation of the stack is pretty clearly just a bug in the lldb unwinder. If you have an ADC account, please file a bug about this at bugreporter.apple.com. Also, if you're not using the most recent version of lldb you can get your hands on, you might want to try that; maybe it fixed whatever bug you are seeing. You can install multiple Xcodes side by side, so you don't have to remove the one you are currently using in order to try a newer one.
You might also try another tool that will give you a backtrace (e.g. the Instruments time profiler) when your app gets into this state, since it uses a different unwinder. That will at least let you see what the full backtrace is.
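If you can reproduce the hang in the iOS Simulator, the macOS sample command-line tool is another quick way to capture every thread's backtrace with a different unwinder (the process name below is just a placeholder for your app):

    # Rough sketch: sample the hung process for 5 seconds and write the call
    # stacks of all threads to a text file for inspection.
    sample MyApp 5 -file ~/Desktop/MyApp-hang.txt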