RefreshDatabase is extremely slow - laravel-5

I am using Laravel v5.6.26, PHPUnit 6.5.8, and PHP 7.2.9.
This is my full Test Class:
class ExampleTest extends TestCase
{
    use RefreshDatabase;

    /** @test */
    public function basicTest()
    {
        $this->assertTrue(true);
        $this->assertFalse(false);
    }
}
I run PHPUnit from Homestead.
Without use RefreshDatabase this takes 513 milliseconds. With use RefreshDatabase it takes 17.29 seconds. I currently have 72 tables.
I only want to test one model that is associated with one table. It seems that re-migrating the 72 empty tables is what takes so much time. I tried removing all tables except the one I need, but use RefreshDatabase always re-migrates all the other tables.
How can I speed this up?
I don't think that the hardware is the issue here. That's my PC:
RAM 32 GB, Intel® Core™ i7-7700K CPU @ 4.20GHz × 8, GeForce GTX 1070/PCIe/SSE2, running Ubuntu 18.04.1 LTS 64-bit.

Unless you use SQLite, migrating the database for each test is really slow.
What you can do is use the DatabaseTransactions trait, which starts a transaction at the beginning of each test and rolls it back at the end.
You can read more about it in the docs and this blog.
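For reference, a minimal sketch of the same test class using DatabaseTransactions instead; only the trait changes, and it assumes the schema has already been migrated once with php artisan migrate:

use Illuminate\Foundation\Testing\DatabaseTransactions;

class ExampleTest extends TestCase
{
    // Each test runs inside a transaction that is rolled back afterwards,
    // so the 72 tables are never dropped and re-migrated between tests.
    use DatabaseTransactions;

    /** @test */
    public function basicTest()
    {
        $this->assertTrue(true);
        $this->assertFalse(false);
    }
}

If you do need fresh migrations on every run, pointing the test connection at an in-memory SQLite database (DB_CONNECTION=sqlite and DB_DATABASE=:memory: in phpunit.xml) usually cuts the RefreshDatabase time dramatically.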

Related

Laravel 8 Memory Leak

I am currently setting up feature testing for the API of a project I am working on. However, I keep running into memory issues when running the tests in Bitbucket Pipelines, which seem to be caused by memory leaks.
I have done the following research but cannot seem to find a solution.
I used roave/no-leaks to find which tests were causing memory leaks. It reported that all my tests were leaking.
This led me to believe the memory leak is in my application, not in the actual tests.
I found an article, "Laravel: Fixing memory leaks on tests", addressing the issue.
Using the information in the article, I set up the following test to measure my memory leakage:
public function test_memory_leak()
{
    // Drop the app instance PHPUnit created for this test.
    $this->app->flush();
    $this->app = null;

    // Repeatedly boot and flush fresh app instances, printing memory usage.
    for ($i = 1; $i <= 250; ++$i) {
        $this->createApplication()->flush();
        echo 'Using ' . ((int) (memory_get_usage(true) / (1024 * 1024))) . 'MB after ' . $i . " iterations.\n";
    }

    // Restore an app instance so PHPUnit can tear down normally.
    $this->app = $this->createApplication();
}
This test creates and flushes an app instance 250 times, echoing the memory usage after each iteration. When starting the test I use around 28MB of memory; after 250 app instances it is 74MB. I am not using any setUp or tearDown methods with this test.
After some investigation I found that not using Route::group() in my routing files decreased the total memory usage to 56MB. Does anybody have experience or tips for finding out why Route::group() causes such a memory leak and how to find the remaining memory leaks?
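For anyone reproducing this, here is an illustrative sketch of the two routing styles being compared (the routes and UserController are made up for illustration, not the project's actual files):

use App\Http\Controllers\UserController;
use Illuminate\Support\Facades\Route;

// Grouped style: each Route::group() call registers a closure with the router.
Route::group(['prefix' => 'users', 'middleware' => 'auth:api'], function () {
    Route::get('/', [UserController::class, 'index']);
    Route::post('/', [UserController::class, 'store']);
});

// Flat style: the same routes without Route::group(), which is what
// lowered the total usage in the test above from 74MB to 56MB.
Route::get('users', [UserController::class, 'index'])->middleware('auth:api');
Route::post('users', [UserController::class, 'store'])->middleware('auth:api');

Route::group() takes an attribute array and a closure; the flat style registers the same routes without the wrapping closures.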
The project is set up in Laravel 8 and the testing uses PHPUnit 9.5.20. I am not using RefreshDatabase, DatabaseTransactions, or similar traits. I am developing on Windows but have access to Linux if needed for debugging software.
Thanks in advance for any help!
Kind Regards,
Simon

What can be done to lower UE4Editor startup time?

Status: the problem has improved, but compared to other users' reports it persists.
I have moved to UE4.27.0 and the startup time dropped from 11 minutes (v4.26.2) to 6 minutes (the RAM usage dropped too!). But that is nowhere near the "almost instant" startup other people report...
It is not compiling anything, not even shaders; this is about the 6th time I have run it for this one project.
Should I try disabling plugins? I'm new to UE and don't want to complicate my workflow. Though, for example, I have nothing VR-related to test, so that could safely be disabled from the start.
HD READ SPEED? NO
I tested moving the whole UE4Editor engine path (100GB) to a 3xSSD (striped) volume, but the UE4Editor startup time remained the same. The HDD it currently lives on is fast too, just not as fast as the 3xSSD.
CPU USAGE? MAYBE. Could using all 4 cores solve it?
UE4Editor startup uses A SINGLE CORE ONLY. I can confirm with htop and System Monitor: only a single core is used at 100%, and which of the 4 cores it is keeps changing, so only one runs at 100% at a time.
I tested the command-line parameter -USEALLAVAILABLECORES after the project path for UE4Editor, but nothing changed. I read that this option is ignored on some machines, so maybe if I patch its handling it could work on mine?
GPU? NO?
A report about a (weak) integrated graphics card says the GPU doesn't affect the startup time.
LOG for UE4Editor v4.27.0 with the new biggest intervals ("..." means log lines omitted for readability; "!Ns" marks an interval of N seconds between the adjacent lines, with nothing omitted there):
[2021.09.15-23.38.20:677][ 0]LogHAL: Linux SourceCodeAccessSettings: NullSourceCodeAccessor
!22s
[2021.09.15-23.38.42:780][ 0]LogTcpMessaging: Initializing TcpMessaging bridge
[2021.09.15-23.38.42:782][ 0]LogUdpMessaging: Initializing bridge on interface 0.0.0.0:0 to multicast group 230.0.0.1:6666.
!16s
[2021.09.15-23.38.58:158][ 0]LogPython: Using Python 3.7.7
...
[2021.09.15-23.39.01:817][ 0]LogImageWrapper: Warning: PNG Warning: Duplicate iCCP chunk
!75s
[2021.09.15-23.40.16:951][ 0]SourceControl: Source control is disabled
...
[2021.09.15-23.40.26:867][ 0]LogAndroidPermission: UAndroidPermissionCallbackProxy::GetInstance
!16s
[2021.09.15-23.40.42:325][ 0]LogAudioCaptureCore: Display: No Audio Capture implementations found. Audio input will be silent.
...
[2021.09.15-23.41.08:207][ 0]LogInit: Transaction tracking system initialized
!9s
[2021.09.15-23.41.17:513][ 0]BlueprintLog: New page: Editor Load
!23s
[2021.09.15-23.41.40:396][ 0]LocalizationService: Localization service is disabled
...
[2021.09.15-23.41.45:457][ 0]MemoryProfiler: OnSessionChanged
!13s
[2021.09.15-23.41.58:497][ 0]LogCook: Display: CookSettings for Memory: MemoryMaxUsedVirtual 0MiB, MemoryMaxUsedPhysical 16384MiB, MemoryMinFreeVirtual 0MiB, MemoryMinFreePhysical 1024MiB
SPECS:
I'm using Ubuntu 20.04.
My CPU has 4 cores at 3.6GHz.
GeForce GT 710 1GB.
Related question but for older UE4: https://answers.unrealengine.com/questions/987852/view.html
Unreal Engine needs a high-end PC: a lot of RAM, fast SSDs, a good CPU, and at least a mid-range graphics card. First of all, there are always some shaders that need to be compiled by the engine, and a lot of assets to be loaded at startup. Since you're on Linux, you are probably using a self-compiled Unreal Engine version, which is not the best thing to do for a newbie, because it can cause several problems with load time, startup, compiling, and a lot of other stuff. If this is one of your first times using Unreal, try using it on Windows; it's all easier there.

How to run a prediction on GPU?

I am using h2o4gpu, and these are the parameters I have set on the h2o4gpu.solvers.xgboost.RandomForestClassifier model:
XGBClassifier(base_score=0.5, booster='gbtree', colsample_bylevel=1,
colsample_bytree=1.0, gamma=0, learning_rate=0.1, max_delta_step=0,
max_depth=8, min_child_weight=1, missing=nan, n_estimators=100,
n_gpus=1, n_jobs=-1, nthread=None, num_parallel_tree=1, num_round=1,
objective='binary:logistic', predictor='gpu_predictor',
random_state=123, reg_alpha=0, reg_lambda=1, scale_pos_weight=1,
seed=None, silent=False, subsample=1.0, tree_method='gpu_hist')
When I train this model and then predict, everything runs on the GPU.
However, when I save my model with pickle, load it back in another notebook, and run a prediction through predict_proba, everything runs on the CPU.
Why is my prediction not running on the GPU?
The predictions are meant to run on the CPU, so you don't need a GPU to actually use the model.

Performance issues mongo on windows

We have built a system where videos are stored in MongoDB. The videos are each a couple of hundred megabytes in size. The system is built in Python 3 using mongoengine. The C extensions of pymongo and bson are installed.
The definition of the mongoengine documents is:
import pickle

from mongoengine import (DateTimeField, Document, EmbeddedDocument,
                         EmbeddedDocumentListField, FileField)

class SingleFrame(EmbeddedDocument):
    frame = FileField()

class VideoStore(Document, GeneralMixin):  # GeneralMixin is project-specific
    video = EmbeddedDocumentListField(SingleFrame)
    mutdat = DateTimeField()
    _collection = 'VideoStore'

    def gen_video(self):
        for one_frame in self.video:
            yield self._get_single_frame(one_frame)

    def _get_single_frame(self, one_frame):
        # Rewind the GridFS proxy before reading, then unpickle the frame.
        if one_frame.frame.tell() != 0:
            one_frame.frame.seek(0)
        return pickle.loads(one_frame.frame.read())
Reading a video on Linux takes about 3 to 4 seconds. Running the same code on Windows takes 13 to 17 seconds.
Does anyone out there have any experience with this problem and any kind of solution?
I have thought of and tested (to no avail):
increasing the chunksize
reading the video as a single blob without using yield
storing the file as a single blob (so no storing of separate frames)
Use Linux; Windows is poorly supported. Among other things, MongoDB's use of "infinite" virtual memory causes issues on Windows variants. This thread elaborates further:
Why Mongodb performance better on Linux than on Windows?

Unexpected Heap Dumps for Hello World Android APP

I am learning about memory utilization using MAT in Eclipse, though I have run into a strange problem. Leaving aside heavy apps, I began with the most benign: the "Hello World" app. This is what I get as heap stats on a Nexus 5, ART runtime, Lollipop 5.0.1.
ID: 1
Heap Size: 25.429 MB
Allocated: 15.257 MB
Free: 10.172 MB
% Used: 60%
# Objects: 43487
My Heap dump gives me 3 Memory Leak suspects:
Overview
"Can't post the Pie Chart because of low reputation."
Problem Suspect 1
The class "android.content.res.Resources", loaded by "<system class loader>", occupies 10,166,936 (38.00%) bytes. The memory is accumulated in one instance of "android.util.LongSparseArray[]" loaded by "<system class loader>".
Keywords: android.util.LongSparseArray[] android.content.res.Resources
Problem Suspect 2
209 instances of "android.graphics.NinePatch", loaded by "<system class loader>", occupy 5,679,088 (21.22%) bytes. These instances are referenced from one instance of "java.lang.Object[]", loaded by "<system class loader>".
Keywords: java.lang.Object[] android.graphics.NinePatch
Problem Suspect 3
8 instances of "java.lang.reflect.ArtMethod[]", loaded by "<system class loader>", occupy 3,630,376 (13.57%) bytes. Biggest instances:
• java.lang.reflect.ArtMethod[62114] @ 0x70b19178 - 1,888,776 (7.06%) bytes.
• java.lang.reflect.ArtMethod[21798] @ 0x706f5a78 - 782,800 (2.93%) bytes.
• java.lang.reflect.ArtMethod[24079] @ 0x70a9db88 - 546,976 (2.04%) bytes.
Keywords: java.lang.reflect.ArtMethod[]
All this comes from a simple piece of code:

import android.app.Activity;
import android.os.Bundle;

public class MainActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
    }
}
Questions
Why are the heap numbers so big?
Also, as a side note: the app was consuming 52 MB of RAM in the system.
Where are these 209 instances of NinePatch coming from? I merely created the project by doing a "Create a new Project" in Eclipse.
The first leak suspect, Resources, comes up all the time in my analysis of apps. Is it really a suspect?
What is ArtMethod? Does it have something to do with the ART runtime?
In Lollipop the default runtime is ART, i.e. Android Runtime, which replaces the old Dalvik runtime (DRT) used in older Android versions.
In KitKat, Google released an experimental version of ART to get feedback from users.
Dalvik uses JIT (just-in-time) compilation, which means the DEX code is converted to object code only when you open the application.
In ART, however, the DEX code is converted to object code during installation itself (AOT, ahead-of-time compilation). This object code is bigger than the DEX code, so ART needs more RAM than DRT. The advantage of ART is that ART apps have better response times than DRT apps.
Yesterday I faced this problem too. In your log the key word is "NinePatch". In my case the cause was a "fake" shadow: a tiny picture with an alpha channel that triggered a resource leak. It cost me about 60MB of leaked memory.
