OpenLiteSpeed and cache

I am testing a CyberPanel + OpenLiteSpeed web server with a PrestaShop site.
The default config for the LiteSpeed cache module is:
checkPrivateCache 1
checkPublicCache 1
maxCacheObjSize 10000000
maxStaleAge 200
qsCache 1
reqCookieCache 1
respCookieCache 1
ignoreReqCacheCtrl 1
ignoreRespCacheCtrl 0
enableCache 0
expireInSeconds 3600
enablePrivateCache 0
privateExpireInSeconds 3600
I've seen that if I change enableCache to 1, the site loads faster, but the back office doesn't work and doesn't save changes.
Is the only way to make PrestaShop work with the OLS cache to use LiteSpeed Enterprise with the PrestaShop module?
Thanks
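One approach that may keep enableCache 1 viable without Enterprise is to exclude the back office and session-carrying requests from the cache with rewrite rules; OpenLiteSpeed honors a Cache-Control environment variable set this way. A rough .htaccess sketch — the admin directory name and cookie prefix below are assumptions, adjust them to your shop:
<IfModule LiteSpeed>
RewriteEngine On
# never cache the back office (directory name is an assumption; use your renamed admin dir)
RewriteRule ^admin123/ - [E=Cache-Control:no-cache]
# skip the cache for visitors carrying PrestaShop session cookies (cookie prefix is an assumption)
RewriteCond %{HTTP_COOKIE} PrestaShop-
RewriteRule .* - [E=Cache-Control:no-cache]
</IfModule>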

Related

Performance issue in a Spring Boot REST API web service

In our organization we have started an integration through a REST API web service, but we have a strange performance problem.
Data:
We have a virtual machine (VMware) with 4 cores / 8 GB RAM and sufficient remote storage.
Ubuntu server 18.04
openjdk 11.0.7 2020-04-14
JAVA_OPTS='-Djava.awt.headless=true -Xms512m -Xmx2048m -XX:MaxPermSize=256m'
MySQL: version 5.7.30-0ubuntu0.18.04.1 (it runs locally, but the app connects by host name).
APP: Spring boot 2.1.3 (tomcat & spring data jpa & hikari & hibernate) All parameters by default.
top - 15:09:15 up 2 days, 14:21, 1 user, load average: 0.03, 0.01, 0.00
Tasks: 189 total, 1 running, 100 sleeping, 0 stopped, 0 zombie
%Cpu(s): 0.3 us, 0.2 sy, 0.0 ni, 99.5 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem : 8168140 total, 148740 free, 7590936 used, 428464 buff/cache
KiB Swap: 2097148 total, 1352428 free, 744720 used. 332048 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
2383 app 20 0 41920 3944 3220 R 0.7 0.0 0:00.53 top
2698 app 20 0 5835612 402424 15312 S 0.7 4.9 23:13.92 java
1786 mysql 20 0 2680528 321892 8108 S 0.3 3.9 20:38.32 mysqld
2677 app 20 0 5850152 441440 15824 S 0.3 5.4 28:01.41 java <------
2769 app 20 0 5868308 977.2m 16868 S 0.3 12.3 49:25.72 java
ps -eaf | grep java
app 2677 2676 0 Jul07 ? 00:28:01 java -Dserver.port=4560 -jar app-ws-1.0.0-SNAPSHOT.jar <------
app 2698 2696 0 Jul07 ? 00:23:14 java -Dserver.port=4561 -jar app-ws-1.0.0-SNAPSHOT.jar
app 2769 2768 1 Jul07 ? 00:49:26 java -jar app-gui-1.0.0-SNAPSHOT.jar
We have 2 web services, one in production (2677) and the other in testing (2698), plus a web app (2769).
We have a problem with the first one. The first call takes >30 s, causing a timeout in the calling system, but subsequent calls are processed in <5 s.
The call volume is minimal, 10 per day at most and never concurrent. The timeout can also recur if several hours pass without calls (>5 h).
We have checked the code and checked VMware/Ubuntu (suspension options), and we haven't seen anything in the monitoring.
We have been told that it could be a JVM/GC problem, but I'm not very familiar with that area and I haven't seen anything with Memory Analyzer.
Later on we implemented in the app itself a dummy call (localhost) every 10 minutes to "warm up the machine", but even so the first call still takes >30 s while the rest do not. The dummy call only answers "ok".
We don't know what the cause could be, and we don't know how to rule out options, since it is a production environment that doesn't allow many changes.
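Two observations worth checking. First, the top output shows ~744 MB of swap in use with little memory available, so parts of the idle JVM may be getting paged out; that alone can explain a slow first call after hours of inactivity. Second, a dummy call that "only answers ok" never touches the persistence layer, so the first real call still pays for Hibernate initialization and a cold Hikari connection. A minimal sketch of a warm-up that goes through the same layers instead, assuming Spring's @Scheduled (requires @EnableScheduling on a configuration class; CustomerRepository is a hypothetical stand-in for a repository the real endpoint uses):
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class WarmUpTask {

    private final CustomerRepository repository; // hypothetical repository from the real call path

    public WarmUpTask(CustomerRepository repository) {
        this.repository = repository;
    }

    @Scheduled(fixedRate = 600_000) // every 10 minutes
    public void keepWarm() {
        // count() goes through Hibernate and checks out a Hikari connection,
        // unlike a dummy HTTP endpoint that only returns "ok"
        repository.count();
    }
}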

How to optimize the memory occupied by Ruby with GitLab

Running top shows:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
13960 git 20 0 2032080 336220 13304 S 1.0 16.3 0:31.50 ruby
14284 git 20 0 554792 300168 10844 S 0.0 14.5 0:04.27 ruby
14287 git 20 0 546056 291068 10652 S 0.0 14.1 0:03.13 ruby
2705 mysql 20 0 1082876 287544 380 S 0.0 13.9 0:01.70 mysqld
14104 git 20 0 524072 276016 13324 S 0.0 13.4 0:24.69 ruby
14281 git 20 0 524072 267504 4812 S 0.0 13.0 0:00.00 ruby
13978 gitlab-+ 20 0 579824 39872 39280 S 0.0 1.9 0:00.12 postgres
1404 www 20 0 142196 31304 820 S 0.0 1.5 0:00.05 nginx
1405 www 20 0 142196 31304 820 S 0.0 1.5 0:00.05 nginx
1403 www 20 0 142196 30992 508 S 0.0 1.5 0:00.04 nginx
My machine only has 2GB of memory.
Is there a way to optimize the configuration and reduce the memory consumption?
Not really: see GitLab Requirements for memory
You need at least 8GB of addressable memory (RAM + swap) to install and use GitLab!
The operating system and any other running applications will also be using memory so keep in mind that you need at least 4GB available before running GitLab. With less memory GitLab will give strange errors during the reconfigure run and 500 errors during usage.
We recommend having at least 2GB of swap on your server, even if you currently have enough available RAM. Having swap will help reduce the chance of errors occurring if your available memory changes.
We also recommend configuring the kernel’s swappiness setting to a low value like 10 to make the most of your RAM while still having the swap available when needed.
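If you still want to try squeezing into 2 GB, a hedged sketch of the usual knobs (the gitlab.rb key names have varied across versions; check the template shipped with your install):
# make the most of RAM while keeping swap usable (matches the docs' advice)
echo 'vm.swappiness = 10' | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
# in /etc/gitlab/gitlab.rb, fewer workers means less resident memory
# (example values only; too few workers will slow GitLab down):
#   unicorn['worker_processes'] = 2
#   sidekiq['concurrency'] = 9
# then apply:
sudo gitlab-ctl reconfigure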

Hyperlink Error on Concrete 8.2.0 & Windows Server 2012

Currently having issues with hyperlinks: throughout the day, hyperlinks stop working because www is not added to the URL, and then, randomly, 20-30 minutes later the issue fixes itself.
My Concrete Environment:
Core Version - 8.2.0
Version Installed - 8.2.0
Database Version - 20170711151953
concrete5 Cache Settings
Block Cache - On
Overrides Cache - On
Full Page Caching - On - If blocks on the particular page allow it.
Full Page Cache Lifetime - Every 6 hours (default setting).
Server Software
Microsoft-IIS/8.5
Server API
cgi-fcgi
PHP Version
7.1.8
PHP Extensions
bcmath, calendar, cgi-fcgi, Core, ctype, curl, date, dom, fileinfo, filter, gd, hash, iconv, intl, json, ldap, libxml, mbstring, mcrypt, mysqli, mysqlnd, odbc, openssl, pcre, PDO, pdo_mysql, Phar, readline, Reflection, session, SimpleXML, SPL, standard, tokenizer, wddx, wincache, xml, xmlreader, xmlwriter, zip, zlib
PHP Settings
max_execution_time - 10240
log_errors_max_len - 1024
max_file_uploads - 500
max_input_nesting_level - 64
max_input_time - 60
max_input_vars - 1000
memory_limit - 4096M
post_max_size - 1000M
sql.safe_mode - Off
upload_max_filesize - 1000M
ldap.max_links - Unlimited
mysqli.max_links - Unlimited
mysqli.max_persistent - Unlimited
odbc.max_links - Unlimited
odbc.max_persistent - Unlimited
pcre.backtrack_limit - 1000000
pcre.recursion_limit - 100000
session.cache_limiter - no value
session.gc_maxlifetime - 7200
wincache.maxfilesize - 2048
wincache.ttlmax - 1200
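Since Full Page Caching is on, one thing worth ruling out is pages being cached while generated under the bare (non-www) host, so that cached links lack the www prefix until the entry expires. Pinning a canonical host at the IIS level prevents that class of problem. A sketch using the IIS URL Rewrite module (assumes the module is installed; example.com is a placeholder for the real domain):
<rewrite>
  <rules>
    <!-- redirect bare-domain requests to the www host before anything is cached -->
    <rule name="Canonical www" stopProcessing="true">
      <match url="(.*)" />
      <conditions>
        <add input="{HTTP_HOST}" pattern="^example\.com$" />
      </conditions>
      <action type="Redirect" url="http://www.example.com/{R:1}" redirectType="Permanent" />
    </rule>
  </rules>
</rewrite>
This goes inside <system.webServer> in web.config; it is also worth aligning concrete5's own canonical URL setting with whichever host you pick.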

dotnet core 2 long build time because of long restore time

I noticed that building in .NET Core 2 seemed a lot slower.
But the timing shown after the build was always 'only' 15 seconds.
I couldn't believe that, so I timed it with time.
> time dotnet build
Microsoft (R) Build Engine version 15.3.409.57025 for .NET Core
Copyright (C) Microsoft Corporation. All rights reserved.
hrm -> /Users/r/dev/hrm/bin/Debug/netcoreapp2.0/hrm.dll
Build succeeded.
0 Warning(s)
0 Error(s)
Time Elapsed 00:00:15.45
real 0m52.366s
user 0m36.851s
sys 0m15.458s
That seemed more correct. Almost a minute.
I then tried without restore and it was a lot faster:
> time dotnet build --no-restore
Microsoft (R) Build Engine version 15.3.409.57025 for .NET Core
Copyright (C) Microsoft Corporation. All rights reserved.
hrm -> /Users/r/dev/hrm/bin/Debug/netcoreapp2.0/hrm.dll
Build succeeded.
0 Warning(s)
0 Error(s)
Time Elapsed 00:00:15.39
real 0m15.795s
user 0m11.397s
sys 0m4.238s
But dotnet also shows 15 seconds.
Could it be that only the build itself is counted in the timings?
I'm not sure why the restore is always slow when everything is already restored.
Are there other ways I could speed up the build process? Disable telemetry? (I'm on macOS and my environment is set to development.)
I prefer to use dotnet watch run, but that seems even slower.
Even running dotnet watch with no arguments, just to print its usage, takes 12 seconds.
> time dotnet watch
Microsoft DotNet File Watcher 2.0.0-rtm-26452
Usage: dotnet watch [options] [[--] <arg>...]
Options:
....
real 0m12.631s
user 0m8.880s
sys 0m3.816s
Is this only on my system?
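Regarding the telemetry question: opting out is a single documented environment variable, so it is cheap to rule out, though unlikely to account for 30+ seconds on its own:
# documented .NET CLI opt-out; add to your shell profile to persist
export DOTNET_CLI_TELEMETRY_OPTOUT=1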
Update:
Here is the result from dotnet restore /clp:PerformanceSummary
> dotnet restore /clp:PerformanceSummary
Restore completed in 43.95 ms for /Users/roeland/dev/hrm/hrm.csproj.
Restore completed in 52.73 ms for /Users/roeland/dev/hrm/hrm.csproj.
Restore completed in 38.48 ms for /Users/roeland/dev/hrm/hrm.csproj.
Project Evaluation Performance Summary:
36252 ms /Users/roeland/dev/hrm/hrm.csproj 3 calls
Project Performance Summary:
36424 ms /Users/roeland/dev/hrm/hrm.csproj 9 calls
24359 ms Restore 1 calls
1 ms _IsProjectRestoreSupported 2 calls
12011 ms _GenerateRestoreProjectPathWalk 1 calls
1 ms _GenerateRestoreProjectPathItemsPerFramework 1 calls
43 ms _GenerateRestoreGraphProjectEntry 1 calls
0 ms _GetRestoreSettingsPerFramework 1 calls
6 ms _GenerateProjectRestoreGraph 1 calls
3 ms _GenerateProjectRestoreGraphPerFramework 1 calls
Target Performance Summary:
0 ms _GenerateRestoreGraphProjectEntry 1 calls
0 ms _GenerateProjectRestoreGraph 1 calls
0 ms _GetRestoreTargetFrameworksAsItems 1 calls
0 ms _GetRestoreProjectStyle 2 calls
0 ms CheckForImplicitPackageReferenceOverridesBeforeRestore 2 calls
0 ms _CheckForUnsupportedNETCoreVersion 1 calls
0 ms _IsProjectRestoreSupported 1 calls
0 ms _GetRestoreSettingsPerFramework 1 calls
0 ms _GetProjectJsonPath 2 calls
0 ms _GetRestoreSettingsOverrides 1 calls
1 ms _GenerateRestoreProjectPathWalk 1 calls
1 ms _GenerateRestoreProjectPathItemsPerFramework 1 calls
1 ms _GenerateRestoreSpecs 1 calls
1 ms _GenerateRestoreProjectSpec 1 calls
2 ms _GenerateProjectRestoreGraphPerFramework 1 calls
2 ms _GetRestoreTargetFrameworksOutput 1 calls
5 ms _GenerateRestoreDependencies 1 calls
10 ms _LoadRestoreGraphEntryPoints 1 calls
20 ms _GenerateDotnetCliToolReferenceSpecs 1 calls
21 ms _GetRestoreSettings 1 calls
54 ms _GenerateRestoreGraph 1 calls
216 ms Restore 1 calls
12007 ms _GenerateRestoreProjectPathItems 1 calls
12014 ms _GetAllRestoreProjectPathItems 1 calls
12058 ms _FilterRestoreGraphProjectInputItems 1 calls
Task Performance Summary:
1 ms Message 3 calls
1 ms ConvertToAbsolutePath 2 calls
1 ms GetRestorePackageReferencesTask 1 calls
1 ms GetRestoreProjectReferencesTask 1 calls
2 ms GetRestoreProjectFrameworks 1 calls
3 ms RemoveDuplicates 5 calls
4 ms WarnForInvalidProjectsTask 1 calls
18 ms GetRestoreSettingsTask 1 calls
20 ms GetRestoreDotnetCliToolsTask 1 calls
216 ms RestoreTask 1 calls
36121 ms MsBuild 9 calls
Long story short: MSBuild scans the entire folder structure based on glob patterns defined by the SDK used. This is done for each project evaluation and the NuGet restore seems to trigger at least three full evaluations.
Since it is slow to scan large directories, the SDKs define globbing patterns used to exclude some known large directories that are usually not wanted as part of the project anyway (node_modules, bower_components etc.).
It is known that special circumstances can circumvent these optimisations or even trigger performance bugs in the include/exclude glob pattern expansion and matching.
As a precaution, add all folders known to be excluded to the DefaultItemExcludes property (inside of a <PropertyGroup> element):
<DefaultItemExcludes>custom\node_modules\**;$(DefaultItemExcludes)</DefaultItemExcludes>
For me, excluding the .git folder made the build around 10x faster:
<PropertyGroup>
<DefaultItemExcludes>.git\**;$(DefaultItemExcludes)</DefaultItemExcludes>
</PropertyGroup>
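Combining both excludes, the relevant part of the .csproj might look like this (the folder names are examples; list whatever large directories live under your project):
<PropertyGroup>
  <!-- keeping $(DefaultItemExcludes) at the end preserves the SDK's built-in excludes -->
  <DefaultItemExcludes>.git\**;node_modules\**;$(DefaultItemExcludes)</DefaultItemExcludes>
</PropertyGroup>
Re-running time dotnet build afterwards should show the wall-clock time drop toward MSBuild's reported "Time Elapsed".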

Ambari dashboard retrieving no statistics

I have a fresh install of Hortonworks Data Platform 2.2 installed on a small cluster (4 machines) but when I login to the Ambari GUI, the majority of dashboard stats boxes (HDFS disk usage, Network usage, Memory usage etc) are not populated with any statistics, instead they show the message:
"No data. There was no data available. Possible reasons include inaccessible Ganglia service."
Clicking on the HDFS service link gives the following summary:
NameNode Started
SNameNode Started
DataNodes 4/4 DataNodes Live
NameNode Uptime Not Running
NameNode Heap n/a / n/a (0.0% used)
DataNodes Status 4 live / 0 dead / 0 decommissioning
Disk Usage (DFS Used) n/a / n/a (0%)
Disk Usage (Non DFS Used) n/a / n/a (0%)
Disk Usage (Remaining) n/a / n/a (0%)
Blocks (total) n/a
Block Errors n/a corrupt / n/a missing / n/a under replicated
Total Files + Directories n/a
Upgrade Status Upgrade not finalized
Safe Mode Status n/a
The Alerts and Health Checks box to the right of the screen is not displaying any information but if I click on the settings icon this opens the Nagios frontend and again, everything looks healthy here!
The install went smoothly (CentOS 6.5) and everything looks good as far as services are concerned (all started, with a green tick next to each service name). Some stats are displayed on the dashboard: 4/4 DataNodes are live, 1/1 NodeManagers live, and 1/1 Supervisors are live. I can write files to HDFS, so it looks like a Ganglia issue?
The Ganglia daemon seems to be working ok:
ps -ef | grep gmond
nobody 1720 1 0 12:54 ? 00:00:44 /usr/sbin/gmond --conf=/etc/ganglia/hdp/HDPHistoryServer/gmond.core.conf --pid-file=/var/run/ganglia/hdp/HDPHistoryServer/gmond.pid
nobody 1753 1 0 12:54 ? 00:00:44 /usr/sbin/gmond --conf=/etc/ganglia/hdp/HDPFlumeServer/gmond.core.conf --pid-file=/var/run/ganglia/hdp/HDPFlumeServer/gmond.pid
nobody 1790 1 0 12:54 ? 00:00:48 /usr/sbin/gmond --conf=/etc/ganglia/hdp/HDPHBaseMaster/gmond.core.conf --pid-file=/var/run/ganglia/hdp/HDPHBaseMaster/gmond.pid
nobody 1821 1 1 12:54 ? 00:00:57 /usr/sbin/gmond --conf=/etc/ganglia/hdp/HDPKafka/gmond.core.conf --pid-file=/var/run/ganglia/hdp/HDPKafka/gmond.pid
nobody 1850 1 0 12:54 ? 00:00:44 /usr/sbin/gmond --conf=/etc/ganglia/hdp/HDPSupervisor/gmond.core.conf --pid-file=/var/run/ganglia/hdp/HDPSupervisor/gmond.pid
nobody 1879 1 0 12:54 ? 00:00:45 /usr/sbin/gmond --conf=/etc/ganglia/hdp/HDPSlaves/gmond.core.conf --pid-file=/var/run/ganglia/hdp/HDPSlaves/gmond.pid
nobody 1909 1 0 12:54 ? 00:00:48 /usr/sbin/gmond --conf=/etc/ganglia/hdp/HDPResourceManager/gmond.core.conf --pid-file=/var/run/ganglia/hdp/HDPResourceManager/gmond.pid
nobody 1938 1 0 12:54 ? 00:00:50 /usr/sbin/gmond --conf=/etc/ganglia/hdp/HDPNameNode/gmond.core.conf --pid-file=/var/run/ganglia/hdp/HDPNameNode/gmond.pid
nobody 1967 1 0 12:54 ? 00:00:47 /usr/sbin/gmond --conf=/etc/ganglia/hdp/HDPNodeManager/gmond.core.conf --pid-file=/var/run/ganglia/hdp/HDPNodeManager/gmond.pid
nobody 1996 1 0 12:54 ? 00:00:44 /usr/sbin/gmond --conf=/etc/ganglia/hdp/HDPNimbus/gmond.core.conf --pid-file=/var/run/ganglia/hdp/HDPNimbus/gmond.pid
nobody 2028 1 1 12:54 ? 00:00:58 /usr/sbin/gmond --conf=/etc/ganglia/hdp/HDPDataNode/gmond.core.conf --pid-file=/var/run/ganglia/hdp/HDPDataNode/gmond.pid
nobody 2057 1 0 12:54 ? 00:00:51 /usr/sbin/gmond --conf=/etc/ganglia/hdp/HDPHBaseRegionServer/gmond.core.conf --pid-file=/var/run/ganglia/hdp/HDPHBaseRegionServer/gmond.pid
I have checked the Ganglia service on each node; the processes are running as expected:
ps -ef | grep gmetad
nobody 2807 1 2 12:55 ? 00:01:59 /usr/sbin/gmetad --conf=/etc/ganglia/hdp/gmetad.conf --pid-file=/var/run/ganglia/hdp/gmetad.pid
I have tried restarting the Ganglia services with no luck, and restarted all services, but it's still the same. Does anyone have any ideas how to get the dashboard working properly? Thank you.
It turned out to be a proxy issue: to access the internet, I had to add my proxy details to the file /var/lib/ambari-server/ambari-env.sh
export AMBARI_JVM_ARGS=$AMBARI_JVM_ARGS' -Xms512m -Xmx2048m -Dhttp.proxyHost=theproxy -Dhttp.proxyPort=80 -Djava.security.auth.login.config=/etc/ambari-server/conf/krb5JAASLogin.conf -Djava.security.krb5.conf=/etc/krb5.conf -Djavax.security.auth.useSubjectCredsOnly=false'
When Ganglia tried to access each node in the cluster, the request went via the proxy and never resolved. To overcome the issue, I added my nodes to the exclude list (the -Dhttp.nonProxyHosts flag) like so:
export AMBARI_JVM_ARGS=$AMBARI_JVM_ARGS' -Xms512m -Xmx2048m -Dhttp.proxyHost=theproxy -Dhttp.proxyPort=80 -Dhttp.nonProxyHosts="localhost|node1.dms|node2.dms|node3.dms|etc" -Djava.security.auth.login.config=/etc/ambari-server/conf/krb5JAASLogin.conf -Djava.security.krb5.conf=/etc/krb5.conf -Djavax.security.auth.useSubjectCredsOnly=false'
After adding the exclude list the stats were retrieved as expected!
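For anyone repeating this: ambari-env.sh is only read at startup, so restart the server for the new JVM args to take effect:
# standard Ambari command; picks up the edited AMBARI_JVM_ARGS
ambari-server restart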
