Why is pgAdmin 4 so slow? - performance

The pgAdmin 4 GUI for PostgreSQL is very slow. It takes too much time even to expand a server tree or a database tree; each took almost 30 seconds to expand. It also hangs while creating a new database or table. Even after loading, it took more than a minute just to create and save a new database. This happens almost every time I load pgAdmin. Is this problem faced just by me, or is something wrong?
My system specifications: PostgreSQL 12.3, Firefox 77.0, Windows 10 64-bit, 8th-gen quad-core i5-8250U processor, 8 GB RAM and 2 GB dedicated graphics memory.
In the screenshots you can see:
The database tree is still loading.
The right-click menu and the Create Database window hung.
It hung on clicking Save; it took more than a minute to save a new database.

I had the very same issue. It seems to be related to Windows 10 preferring IPv6 over IPv4.
The following fix worked for me:
https://dba.stackexchange.com/questions/201646/
Modify the listen_addresses setting in the postgresql.conf file, usually located under {installation folder}/data/postgresql.conf (unless another data folder was specified), to:
listen_addresses = '127.0.0.1,::1'
The default value for listen_addresses is localhost.
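Note that the server only reads listen_addresses at startup, so restart the PostgreSQL service after the edit. A sketch from an elevated prompt; the service name below assumes a default Windows install of PostgreSQL 12, so adjust it to your version:

net stop postgresql-x64-12
net start postgresql-x64-12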

Click the title bar of pgAdmin and drag it around (slowly, otherwise Windows' Aero Shake will minimize your other windows). I don't know how, but it works for me; everything loads faster.

I had the same problem. It turned out that the PostgreSQL binary path was pointing to $DIR/../runtime.
So I had to set the binary path manually and restart pgAdmin; that's it, everything works as expected.

pgAdmin 4 is just a frontend for PostgreSQL; there are around 25 frontends like this that you can use with Postgres. I personally use DBeaver for PostgreSQL. I also had many issues with pgAdmin 4: it's super slow and will give you trouble if you're working with Node.js.
Install DBeaver, go to "New Database Connection" and connect with PostgreSQL.
Thank you.

FILE >> PREFERENCES >> BINARY PATHS:
I moved the '$DIR/runtime' path from the PostgreSQL 14 row to the PostgreSQL 13 row.
The GUI is much faster now.

I also had the same issue when I installed pgAdmin 4. Server tree loading and query execution took longer than in the older versions.
I tried changing many things, but what worked was changing the binary path. I had added the PostgreSQL bin folder, i.e. C:\Program Files\PostgreSQL\14\bin, as the binary path for PostgreSQL 14. I changed that and added the path under PostgreSQL 13 instead.
Open the Preferences dialog from 'File' in the menu bar, then go to the Binary paths tab.
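After the change, the Binary paths tab simply holds plain directory paths; with a default Windows install the relevant rows would look roughly like this (adjust to your own install locations):

PostgreSQL 13:    C:\Program Files\PostgreSQL\13\bin
PostgreSQL 14:    (left empty)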
Actually PostgreSQL 14 is the latest one and it should work, but I don't know why it doesn't. Maybe the developers will solve this issue in the future, but for now I hope this solution helps you.

Programming languages are not made equal.
Python, when no efficient external libraries are used, is roughly four times slower than C++ or Java.
The current pgAdmin 4 has been rewritten in Python, as you can check in the pgadmin4 repository on GitHub and on the pgAdmin download page. The older pgAdmin 3 was written in C++, as you can check in its source code. Part of the user interface has also moved to JavaScript.
Change to different data structures
Each programming language comes with its own default data structures. A change to a different programming language can mean that less efficient structures are used, and this can cause a many-fold decrease in performance if great care is not taken. For example, repeatedly removing the first element of a dynamic array costs O(n) per removal, while a proper queue structure does it in O(1).
These changes could explain why pgAdmin 4 is slow.

Related

Laravel PHPUnit testing takes long in Windows Docker

I am working on Laravel with Docker.
If I run the PHPUnit tests on macOS, they take a few seconds.
However, on Windows 10 they take a few minutes.
Is there any way to fix this problem?
Thanks.
If you're running on a non-Linux OS, Docker has to virtualise your file system, and this requires a certain amount of time per file. For programs that are compiled into one executable, this is less of a problem at runtime (but clearly with its own compile-time implications), but for scripting languages like PHP it can mean that every request runs very slowly, since every file that is used has to be 'translated' every time it is read.
This is also a problem on Docker for Mac (so you're actually experiencing it there too, just less so, since at least it's a Linux-like system under the hood). On Windows, Linux is, I believe, completely virtualised, which is going to add even more time.
This Reddit discusses the problem to an extent:
https://www.reddit.com/r/docker/comments/7xvlye/docker_for_macwindows_performances_vs_linux/
With this being particularly interesting (I have not tried it myself):
https://nickjanetakis.com/blog/setting-up-docker-for-windows-and-wsl-to-work-flawlessly
There is also a good community-created solution which we have used to solve our Docker for Mac problem. I don't see why their Windows options wouldn't work similarly well in your case. You can find it here:
https://github.com/EugenMayer/docker-sync/wiki/docker-sync-on-Windows
It basically sets up an intermediate service that copies all the files over into an intermediate volume (which uses the 'correct' filesystem) only when a file is updated, thereby speeding things up immensely at run time.
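As a rough sketch, the configuration lives in a docker-sync.yml next to your docker-compose.yml; the volume name and source path below are made up for illustration, so check the wiki above for the options that suit your platform:

version: "2"
syncs:
  app-code-sync:             # becomes an external volume that your compose file mounts
    src: './'                # host path whose changes get copied into the volume
    sync_strategy: 'unison'  # strategy differs per platform; see the docker-sync docs

Your docker-compose.yml then mounts app-code-sync as an external volume in place of the direct bind mount.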
I know it looks like quite an intimidating process, but this problem is fundamental, so you're going to have to do a certain amount of work to fix things!
FWIW I had that working on Docker 4 Mac, but it added a layer of complexity to our dev process that I found annoying, so in the end I've got myself a Linux box for work. To be honest, installing Linux as dual boot on my Windows machine (which has been my at-home solution) was probably easier than tweaking Docker 4 Mac to my satisfaction, so you might want to consider that. I have used this page twice:
https://itsfoss.com/install-ubuntu-1404-dual-boot-mode-windows-8-81-uefi/
And it's worked fine each time. One caveat - it suggests a low amount of disk for your root (/) volume, but Docker gets mounted on root so give it around 100G (not the 10-20G that page recommends.)

FoxPro program freezes while it is running

I am using FoxPro apps under Windows 7. During the compilation of one of my programs it suddenly freezes until I move the mouse or press a key, and this happens all the time while I am working with the program.
It happens only when I move the data to a mapped directory on the host. If my application, FoxPro and the data are all in the same directory on the virtual machine, there is no problem.
In short, it happens whenever my data is not on the virtual machine.
Can it be a caching issue?
Change the registry:
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\WOW]
Save an export of the key as a backup, then change the value "DefaultSeparateVDM" to "yes".
If you are on 64-bit Windows, you also need to launch 16-bit applications in a separate memory space using the internal start command:
start /separate <your command>
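If you prefer, the same registry change can be applied from a .reg file (a sketch; export the key first as a backup, as noted above):

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\WOW]
"DefaultSeparateVDM"="yes"

A shortcut or batch file can then launch the application in its own memory space, for example start /separate C:\apps\myfoxapp.exe (the path is only an example).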
Also have a look at this article: http://www.reddit.com/r/Database/comments/2kz0x5/dbf_file_getting_corrupt/
It describes a somewhat similar problem; who knows, maybe it will be helpful for you.
I have run into something similar in the past, and it doesn't have to do with the VFP application and data residing in the same folder. What has happened to me is down to the debugger. Your mention of "...while I am working with the prog." tells me you are in the VFP development environment, not run-time from the app itself. If your debugger has breakpoints or other flags that have become corrupted somehow, I have done
CLEAR DEBUGGER
But that is something going back several years and MIGHT be what you are encountering.

Need documentation for a Progress application upgrade to 11.3?

We are upgrading our Progress application from 9.1D to 11.3. Is there any sample document we should look at for our migration?
Currently we have built a new server where we are installing OpenEdge Enterprise RDBMS 11.3.
Can we back up the current database and dump it to the new version?
Any suggestions/documents ?
Generally Progress is very "kind" when upgrading, but you have to keep in mind that moving from 9.1D to 11.3 (11.4 is out soon, by the way) is moving from 2002 to 2013. A lot has changed since then.
If you have program logic that relies on disk layout or OS utilities (for example via UNIX, DOS or OS-COMMAND), those might have changed as well, so an upgrade might break even if the files compile without errors. You need to test everything!
You cannot directly back up and restore from 9.1D to 11.3; you need to dump & load.
What you need to do:
Back everything up! Don't skip this, and make sure you save a copy of the backup. Back up the database, scripts and program files (.p, .i, .r, .cls, etc). Everything! This is vital! Make sure you always have an untouched version of the backup left so you can restart if things go bad. Progress has built-in utilities for backing up the database; OS utilities can also be used. Be aware that OS utilities cannot be used to create online backups - the backed-up database will most likely be corrupt. Shut down the database before backing up when using OS utilities.
Dump your current database, data as well as schema. Don't forget to check for sequences etc. (A command sketch follows this list.)
Build a new database on your new server with the schema from the old db.
If possible, move to Type 2 Storage Areas when doing an upgrade like this; it will increase performance. Check the documentation and KnowledgeBase for the required settings.
Load the dumped data.
Copy the program files from the old server to the new one.
Recompile.
Create startup scripts etc. for starting the databases as well as the clients. Old parameters might not fit your new server; you most likely have more memory, faster CPUs, larger disks etc.
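As a very rough sketch of the dump & load cycle from the command line (database, table and directory names below are placeholders, and the schema .df is normally dumped and loaded through the Data Administration tool):

probkup olddb olddb.bck                  # back up the old 9.1D database first
proutil olddb -C dump customer /dump     # binary dump, repeated for each table
prodb newdb empty                        # create the new empty 11.3 database
# ...load the dumped schema (.df) into newdb via Data Administration...
proutil newdb -C load /dump/customer.bd  # binary load, repeated for each table
proutil newdb -C idxbuild all            # rebuild all indexes after a binary load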
All steps have several substeps. I suggest you dive into the documentation found at community.progress.com. You can also search the KnowledgeBase (knowledgebase.progress.com).
Also, if you run into problems, you can ask more specific questions here (but tag accordingly, for example with openedge).
11.3 Documentation
9.1D Documentation
KnowledgeBase

Program runs slow on just a couple of computers

I have a program that I run on multiple network PCs. When I compiled the most recent version, it ran extremely slowly on 2 PCs on the network, but runs fine for everyone else.
This used to happen with my old dev PC when I had an additional 2 GB of RAM installed. When I would remove the additional 2 GB and recompile, it would then work fine for everyone.
Now, I am on a completely new machine and am having the same issue. I've tried to rebuild the project after rebooting, but still have the same issue.
For all other PCs, the program loads in about 3-5 seconds. On these 2 PCs, it takes anywhere from 45 seconds to 1.5 mins to load...
One of the PCs is an older Dell Dimension 8200, but the other is a newer OptiPlex that is identical to several other PCs on the network, which is what is really making this so confusing.
For now, I've had to revert to the old version so it will run correctly for everyone.
Does anyone have any idea of anything to try?
Thanks in advance!!!
Edit:
Ok, it was an exhausting day yesterday trying various things to solve this issue. Here is what I tried and where the problem begins:
Using the new program
Went back to old versions of all updated components, but still had the same issue
Using the old program
I decided to go back to the drawing board and start from the old version of the application and incrementally add the new features a small piece at a time.
Recompiled the old version using the old components - program works fine
Updated to new DevExpress components - program works fine
Updated to new ESBPCS components - program works fine
Updated to new DeepSoftware components - program works fine
Ok, so now we know there is nothing wrong with the component sets I've updated...
Added 1 image to each of 2 image lists - program works fine
Added new database table - program works fine
Added code to open and close the new table - program works fine
Added new action to action list and added a menu item and toolbar button to new action (action does nothing at this point) - program works fine
Added a new BLANK form to the application and added code to open the new form - BAM!!!
So, adding just one form to the application is what's causing the issue! I removed all the code for the opening of the form, commented out the uses clauses and removed the uses entry from the project source and everything is back to normal!
Anybody have any idea about this?
Thanks!
Edit 2:
For @Warren P - here is my .DPR source:
program Scheduler;

uses
  ExceptionLog,
  Forms,
  SchedulerMainUnit in 'SchedulerMainUnit.pas' {FrmMain},
  SchedulerDBInfoUnit in 'SchedulerDBInfoUnit.pas' {FrmDBInfo},
  SchedulerHistoryUnit in 'SchedulerHistoryUnit.pas' {FrmHistory},
  SchedulerOptionsUnit in 'SchedulerOptionsUnit.pas' {FrmOptions},
  SchedulerExtVersionUnit in 'SchedulerExtVersionUnit.pas' {FrmExtVersion},
  SchedulerSplashUnit in 'SchedulerSplashUnit.pas' {FrmSplash},
  SchedulerInfoUnit in 'SchedulerInfoUnit.pas' {FrmInfo},
  SchedulerShippedUnit in 'SchedulerShippedUnit.pas' {FrmShipped}; {<-- This is the new form with the issue}

{$R *.res}

begin
  Application.Initialize;
  Application.Title := 'SmartWool WIP Scheduling Assistant';
  Application.CreateForm(TFrmMain, FrmMain);
  Application.CreateForm(TFrmDBInfo, FrmDBInfo);
  Application.CreateForm(TFrmHistory, FrmHistory);
  Application.CreateForm(TFrmOptions, FrmOptions);
  Application.CreateForm(TFrmExtVersion, FrmExtVersion);
  Application.Run;
end.
And here is the initialization section of the main form that creates the splash:
initialization
  FrmSplash := TFrmSplash.Create(Application);
  FrmSplash.Show;
  FrmSplash.Refresh;
Edit 3:
Anybody??? Please?
It could be that the program is waiting for timeouts when trying to access resources that are not available on that machine such as network drives or Internet hosts.
Try running Process Monitor when starting up your program and look for file open calls. Filter the output so it only shows your process.
http://technet.microsoft.com/en-us/sysinternals/bb896645
Performance problems can seem very daunting at first.
I have been on many teams where people have tried to guess at a reason for performance problems. This sometimes works, but is far less effective than actually measuring the code.
When reproducible on a development machine, I would recommend a profiler.
There was a previous question that asked about Delphi profiling tools, and it lists several possible tools you could use.
When you can't reproduce the problem on your development machine, then it becomes a bit more difficult, but not impossible. Typically I have found that problems are related to an application dependency that is different, and not performing well. Understanding the external influences on your application can help pinpoint the problem.
Specifically, common external problems in some of my applications:
Network
Database
Application Servers
Installation or Data File Location (i.e. Disk Performance)
Virus and Malware Scanners
Other applications interfering with yours, such as a virus.
To monitor for items related to the network (i.e. database, web services, etc.), I typically use Wireshark, which allows me to see whether resources are responding in the expected times. My most common problem is poorly performing DNS, which can be found using Wireshark.
You can use the AutoRuns program to determine everything that starts up when your computer does; it's useful for determining differences between machines.
But most of all, I have logging that can be turned on in my applications, which allows me to isolate the problem to a specific area of code. Narrowing it down to a specific section reduces the guessing and lets you focus on a few possible problems.
I created a log function for this that I call at specific places (in your case, especially during startup). It adds a timestamp to each log text and stores the entries in a TMemo that is saved regularly. Not only is this very helpful when debugging, it may also shed some light on your problem.
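A minimal sketch of such a log routine (MemoLog and the method name are placeholders; FormatDateTime comes from SysUtils):

procedure TFrmMain.Log(const Msg: string);
begin
  // Prefix each entry with a millisecond timestamp so gaps between steps stand out
  MemoLog.Lines.Add(FormatDateTime('hh:nn:ss.zzz', Now) + '  ' + Msg);
  // Save regularly so a hang doesn't lose the trail, e.g.:
  // MemoLog.Lines.SaveToFile('startup.log');
end;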
Are you using code signing, i.e. Microsoft Authenticode? If so, outdated certificate authorities on the computers can cause significant delays at startup.
First, I would try defragmenting the hard disk. If it's still slow, I would check the power supply; maybe your hard disk is getting insufficient power.
Check whether the same antivirus software is on those 2 problematic computers. If so, your Delphi application may match a byte pattern used in some virus made in Delphi. Update the virus definitions, report the false alarm to the antivirus company, or change antivirus software.
Check whether there is no printer installed on those 2 problematic computers. If so, add any printer and try again.
Idea 1:
One reason I have seen for very slow application load times is when printing or reporting components, such as the Developer Express printing components, are in your application.
The problem I saw when using the Developer Express printing components is that I had an offline or non-responsive network printer in my list of printers (check the Control Panel printers applet). Some of those components seem to read some information from each printer you have installed, and the solution was to go to those clients and delete from their Control Panel any old printers that were no longer being used. Each non-responding network printer added up to 60 seconds of TCP timeout to the startup time of my application.
Update - Idea 2:
Download MS DebugView and install it on the machine that runs slowly. Now go back to your main development PC, open the IDE, and open your main project file (right-click the project, View Project Source); this shows you the contents of your main project source file (.dpr). Go to the main begin...end. block, set a breakpoint on the main begin statement, and single-step INTO (not OVER); you will see all the module initialization sections. In each one, add this: OutputDebugString('ModuleName').
Now when you run this inside the Delphi IDE you will see the messages, see how far apart they come in, and understand what is taking a long time to initialize. Instead of installing the Delphi IDE on the machine that runs slowly, you run DebugView (a single executable under 400 KB) there, and it will show you the same debug messages, along with a nice time display (##.# seconds) for each message.
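For example, in each unit (the unit name below is just an illustration; OutputDebugString is declared in the Windows unit):

initialization
  // Shows up in DebugView, which timestamps every message
  OutputDebugString('SchedulerShippedUnit: initialization');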
MS Debug view is here.
Are you allowing the forms to be constructed on initialization within the DPR source? If so, you may do well to consider whether you want those forms taking up memory the entire time, and moreover whether you want them wasting the application's time at load.
A rule of thumb: if the form is used a LOT during the application's execution, allow it to be constructed when the application loads (this will work out faster overall than constructing the instance "on demand").
If the form is not used very often at all (for example, a dialog or an About box), delete the "Application.CreateForm" line from the DPR source, and instead construct your instance on request...
var
  LForm: TfrmAbout;
begin
  // Note: the form class comes first, then the variable receiving the instance
  Application.CreateForm(TfrmAbout, LForm);
  try
    LForm.ShowModal;
  finally
    LForm.Free;
  end;
end;
Now that form (which may not even be displayed during the program's execution) is not sucking up system resources, and will not slow down the application's load time.
It may not solve your problem 100%, but it should certainly help!

How do I improve Windows Subversion client update performance?

How do I improve Subversion client update performance? It appears to be disk bound on the client.
Details:
CollabNet Windows client version 1.6.2 (r37639)
Windows XP SP2
3 GB RAM with PF Usage around 1 GB and System Cache of 1.1 GB.
Disk has write caching enabled
Update takes 7-15 minutes (even when there is very little to update).
Checkout has 36,083 directories/files (from svn list)
Repository has 58,750 revisions.
Checkout takes about 2.7 GB
Perf monitor shows % Disk Write time stays near 90% during update.
Max Disk Read Bytes/sec got up to 12.8M and write got up to 5.2M
CPU, paging file usage, and network usage are all low.
Watching the server performance seems to show that it isn't a bottleneck.
I'm especially interested in answers besides getting a faster disk (especially configuration changes).
Updates from some of the suggestions:
I need the whole thing so sparse directories won't work.
Another client (TortoiseSVN) takes 7 minutes also
TortoiseSVN icon overlays have been configured so they don't cause the problem.
Anti-virus is configured to skip that directory, so it isn't causing the problem.
I experience exactly the same thing. We recently replaced Perforce with svn, but if we cannot overcome the performance problems on Windows we must consider another tool.
Using svn 1.6.6, Win XP and Vista clients, RedHat server.
My observations matches yours:
Huge disk-write activity.
Antivirus not a bottleneck.
No matter which svn clients are used.
No server or network bottleneck.
Complementary info
Operations are more than 3 times faster on:
Linux (Ubuntu).
Linux (Ubuntu) running on VirtualBox at Win Vista host.
Win XP running on VMWare at RedHat host.
Do you need every bit of the repository on your working copy? If you truly only care about particular portions of the tree, look into Subversion's Sparse Directories (a.k.a. "Sparse Checkouts") feature. It allows you to manipulate your working copy so it only contains those directories of interest.
Just as an example, you might use this to prune documentation, installer-related files, etc. Depending on what you truly need on your local machine, embracing this approach could make a serious dent in your wait times.
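A sketch of how that looks with the command-line client (the repository URL and paths are placeholders):

svn checkout --depth immediates http://server/repo/trunk wc   # top level only
cd wc
svn update --set-depth infinity src                           # pull down just the subtrees you need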
Try svn client version 1.5. It helped me on my Vista laptop. The 1.6 versions are extremely slow.
This is more likely to be your network and the amount of data moved, as well as your client. Are you using Tortoise? I find it a bit slow myself when moving that much data!
Are you using TortoiseSVN? If so, the Icon Overlays do slow down operations. If you go to TortoiseSVN Settings/Icon Overlays there are several settings you can tweak to control the level to which you want to use the Overlays, including turning them off completely. See if that affects your performance.
Do you run a virus checker that uses on-access scanning? That can really make it crawl. If so, turn it off and see if that helps. Most scanners will have a way to exclude specific directories if that helps.
Nobody seems to be pointing out the one reason that I often consider a design flaw: Subversion creates a second "pristine" copy of the checkout for offline operations. If you're checking out 4 GB of files, it's actually writing 8 GB to disk.
Compare a checkout to an export. That will show you the massive difference when writing those second copies.
There's nothing you can do about that.
Upgrade to svn 1.7
From Discussion of Slow Performance of SVN Update:
The update process in svn 1.6 goes something like this:
search the entire working copy, to see what's there at the moment, and locking it so no one changes the answer during the next steps
tell that to the server
receive from the server whatever new stuff you need, applying the changes to the files as you go
recurse over the entire working copy again, unlocking it
If there are many directories and files, steps 1 and 4 can take up a
lot of time. This would be consistent with your observation of long
delays with no network traffic.
The working copy format was changed in svn 1.7. All meta information is now stored in an SQLite database in the root folder of the working copy, and there is no longer any need to perform steps 1 and 4, which consumed most of the time during svn update.
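Note that after moving to a 1.7 client, each existing working copy has to be converted once to the new format before it can be used:

svn upgrade   # run once in the root of each existing working copy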
