I have the following setup:
Joomla! 3.2.2, Bootstrap 3.1 with LESS.
I tested my page loading time and got a bad result, mainly because of LESS.
The overall loading time was 7.96 s; the wait for the Bootstrap/LESS files alone was 5.3 s!
It loads all the Bootstrap LESS files:
template.less ~.com/templates/c~/less/ 9.1 kB
bootstrap.less 1.3 kB
variables.less 19.6 kB
mixins.less 24.2 kB
normalize.less 7.4 kB
print.less 1.9 kB
scaffolding.less 2.1 kB
type.less 5.3 kB
code.less 1.3 kB
grid.less 2.0 kB
tables.less 4.5 kB
forms.less 9.2 kB
buttons.less 3.7 kB
component-animations.less 786 B
glyphicons.less 14.8 kB
dropdowns.less 4.0 kB
button-groups.less 5.1 kB
input-groups.less 3.5 kB
navs.less 5.1 kB
navbar.less 13.8 kB
breadcrumbs.less 817 B
pagination.less 2.1 kB
pager.less 1.1 kB
labels.less 1.3 kB
badges.less 1.3 kB
jumbotron.less 1.2 kB
thumbnails.less 1.0 kB
alerts.less 1.7 kB
progress-bars.less 1.8 kB
media.less 1.1 kB
list-group.less 2.1 kB
panels.less 4.2 kB
wells.less 812 B
close.less 96 B
modals.less 3.3 kB
tooltip.less 2.8 kB
popovers.less 3.5 kB
carousel.less 4.8 kB
utilities.less 1.0 kB
responsive-utilities.less 5.0 kB
That's fine by me. I won't even comment many of them out to improve performance, because I need most of them.
But is there any way to speed up the page load?
It's basically not the rendering of the styles but the waiting time for the server while loading these files that makes it slow. pingdom.com says: "The web browser is waiting for data from the server".
If you are concerned about performance, you want to compile the LESS files ahead of time and upload the resulting CSS to the server. Compiling on the server is convenient for development, but it is not such a great idea in production.
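As a minimal sketch of such an offline build step, assuming Node.js and the less npm package are available locally (the input and output paths below are placeholders, not the template's real paths):

// build-css.js – compile the template's LESS once, locally
const fs = require('fs');
const less = require('less');

const input = fs.readFileSync('less/template.less', 'utf8');

less.render(input, { paths: ['less'] })
  .then(output => fs.writeFileSync('css/template.css', output.css))
  .catch(err => console.error(err));

Run it locally (node build-css.js), upload the generated css/template.css, and have the template reference that single compiled stylesheet instead of the individual .less files.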
I have an issue I'm having trouble investigating: I have a production (compiled) Next.js server at https://preprod.weally.org/ (served through nginx).
I opened port 3000 so I can reach the Next.js server directly (http://preprod.weally.org:3000/).
My issue is that when I log in, I'm supposed to see different content from a non-logged-in user. Editing an article should also show the updated article to all other users immediately.
I noticed the returned page is correct only on its first display; after that, the same content is returned for every browser session (it only refreshes when I restart the server).
After blaming nginx caching for hours, I realized it was Next.js that was not calling the /api/graphql endpoint but serving the content from its own cache.
My next.config.js is supposed to avoid caching (and the build output below shows /api/graphql as server-side rendered):
Note: when I run the server in dev mode (yarn dev), everything works as expected
const { i18n } = require('./next-i18next.config');
module.exports = {
i18n,
generateEtags: false,
compress: false,
};
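To make the expectation concrete: as far as I understand, a page only gets the λ marker (rendered on every request) when it exports getServerSideProps or getInitialProps. A minimal sketch of such a page, with an explicit no-store header to rule out any cache sitting in front of Next.js (the file name and the loadDataForUser helper are placeholders, not my real code):

// pages/example.js – illustrative only
async function loadDataForUser() {
  // placeholder: the real page queries /api/graphql here
  return { loggedIn: false };
}

export async function getServerSideProps({ res }) {
  // λ pages run this on every request; an explicit no-store header
  // rules out response caching by nginx or the browser
  res.setHeader('Cache-Control', 'no-store');
  const data = await loadDataForUser();
  return { props: { data } };
}

export default function Example({ data }) {
  return <pre>{JSON.stringify(data, null, 2)}</pre>;
}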
My question is: what is the next step in my investigation? I don't feel comfortable debugging the Next.js build process.
Here's the "yarn build" output:
Page Size First Load JS
┌ λ / 3.46 kB 288 kB
├ /_app 0 B 167 kB
├ ○ /404 195 B 167 kB
├ ○ /about (1049 ms) 1.69 kB 169 kB
├ λ /api/auth/[...nextauth] 0 B 167 kB
├ λ /api/auth/callback/ignore-facebook 0 B 167 kB
├ λ /api/graphql 0 B 167 kB
├ λ /api/rest/[...restEndpoint] 0 B 167 kB
├ λ /business/[businessId]/detail 10.2 kB 295 kB
├ ○ /legal/data_deletion (2485 ms) 2.06 kB 169 kB
├ ○ /legal/privacy_policy (2510 ms) 8.6 kB 176 kB
├ ○ /legal/terms (2478 ms) 14.8 kB 182 kB
├ λ /request/[issueId]/detail 608 B 285 kB
├ λ /request/[issueId]/edit 46.5 kB 284 kB
├ ○ /signin (1143 ms) 1.94 kB 172 kB
├ ○ /signout (1111 ms) 2.46 kB 169 kB
└ ○ /signup (1094 ms) 1.73 kB 172 kB
+ First Load JS shared by all 167 kB
├ chunks/framework.3af989.js 42.6 kB
├ chunks/main.fa7488.js 24.5 kB
├ chunks/pages/_app.f19732.js 98.1 kB
├ chunks/webpack.32a7fa.js 1.72 kB
└ css/5d6d9ddf7d6a4cfee084.css 1.56 kB
λ (Server) server-side renders at runtime (uses getInitialProps or getServerSideProps)
○ (Static) automatically rendered as static HTML (uses no initial props)
● (SSG) automatically generated as static HTML + JSON (uses getStaticProps)
(ISR) incremental static regeneration (uses revalidate in getStaticProps)
I have a problem when I try to run ng deploy on a recently created Angular project. The version I'm using is 9.1.6 (I just upgraded from 9.1.5, where I got the same error). ng serve works properly.
(base) paul@Pauls-MacBook-Pro must-have % ng deploy
📦 Building "must-have"
Another process, with id 4426, is currently running ngcc.
Waiting up to 250s for it to finish.
Generating ES5 bundles for differential loading...
ES5 bundle generation complete.
chunk {0} runtime-es2015.1eba213af0b233498d9d.js (runtime) 1.45 kB [entry] [rendered]
chunk {0} runtime-es5.1eba213af0b233498d9d.js (runtime) 1.45 kB [entry] [rendered]
chunk {2} polyfills-es2015.690002c25ea8557bb4b0.js (polyfills) 36.1 kB [initial] [rendered]
chunk {3} polyfills-es5.9e286f6d9247438cbb02.js (polyfills-es5) 129 kB [initial] [rendered]
chunk {1} main-es2015.658a3e52de1922463b54.js (main) 467 kB [initial] [rendered]
chunk {1} main-es5.658a3e52de1922463b54.js (main) 545 kB [initial] [rendered]
chunk {4} styles.c86817c326e37bf011e3.css (styles) 62 kB [initial] [rendered]
Date: 2020-05-16T16:06:16.041Z - Hash: a4ca944a427d0323601e - Time: 68781ms
Cannot read property 'printf' of undefined
(base) paul@Pauls-MacBook-Pro must-have %
My versions:
Package Version
-----------------------------------------------------------
@angular-devkit/architect 0.900.7
@angular-devkit/build-angular 0.901.6
@angular-devkit/build-optimizer 0.901.6
@angular-devkit/build-webpack 0.901.6
@angular-devkit/core 9.1.6
@angular-devkit/schematics 9.1.6
@angular/cdk 9.2.3
@angular/cli 9.1.6
@angular/fire 6.0.0
@angular/material 9.2.3
@ngtools/webpack 9.1.6
@schematics/angular 9.1.6
@schematics/update 0.901.6
rxjs 6.5.5
typescript 3.8.3
webpack 4.42.0
The fix is available in version 6.0.0-rc.2 of @angular/fire. Upgrade to 6.0.0-rc.2 by
ng add @angular/fire@next
I had this problem too, and ng add @angular/fire@next did not help.
I am using Angular 8, @angular/fire and Firebase Hosting. So I built the project using ng build and then deployed it to Firebase Hosting using firebase deploy.
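For reference, a sketch of wiring those two steps into package.json scripts (the script names and the --only hosting flag are my own choice, assuming the Firebase CLI is already initialised for the project; other package.json fields are omitted):

{
  "scripts": {
    "build": "ng build --prod",
    "deploy": "ng build --prod && firebase deploy --only hosting"
  }
}

npm run deploy then produces a production build and pushes the output configured in firebase.json to Firebase Hosting.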
I'm trying to get the automatic tree-shaking functionality provided by the Nuxt.js / Vuetify module working. In my nuxt.config.js I have:
buildModules: [
['@nuxtjs/vuetify', { treeShake: true }]
],
I'm only using one or two components at the moment, but I'm still getting a very large vendors.app bundle (adding the treeShake option had no effect on its size).
Hash: 9ab07d7e13cc875194be
Version: webpack 4.41.2
Time: 18845ms
Built at: 12/10/2019 11:04:48 AM
Asset Size Chunks Chunk Names
../server/client.manifest.json 12.2 KiB [emitted]
5384010d9cdd9c2188ab.js 155 KiB 1 [emitted] [immutable] commons.app
706a50a7b04fc7741c9f.js 2.35 KiB 4 [emitted] [immutable] runtime
8d5a3837a62a2930b94f.js 34.7 KiB 0 [emitted] [immutable] app
9d5a4d22f4d1df95d7a7.js 1.95 KiB 3 [emitted] [immutable] pages/login
LICENSES 389 bytes [emitted]
a0699603e56c5e67b811.js 170 KiB 6 [emitted] [immutable] vendors.pages/login
b1019b7a0578a5af9559.js 265 KiB 5 [emitted] [immutable] [big] vendors.app
b327d22dbda68a34a081.js 3.04 KiB 2 [emitted] [immutable] pages/index
+ 1 hidden asset
Entrypoint app = 706a50a7b04fc7741c9f.js 5384010d9cdd9c2188ab.js b1019b7a0578a5af9559.js 8d5a3837a62a2930b94f.js
WARNING in asset size limit: The following asset(s) exceed the recommended size limit (244 KiB).
This can impact web performance.
Assets:
b1019b7a0578a5af9559.js (265 KiB)
ℹ Generating pages 11:04:48
✔ Generated / 11:04:48
✔ Generated /login
Notice the line indicating the large vendors.app bundle:
b1019b7a0578a5af9559.js 265 KiB 5 [emitted] [immutable] [big] vendors.app
Can you please advise?
My mistake, the above configuration is working correctly. The real issue is the size of the CSS (for all components) that is being included in the build.
For people suffering from the same problem, adding build: { analyze: true } to nuxt.config.js shows where the problem files are (a bundle report automatically opens in a browser window when running npm run build).
Clearly main.sass is the issue here. I will ask how to get Nuxt/webpack to include only the CSS for the relevant components in another question. The article here only shows how to do it with the CLI, not Nuxt.
Edit: I've now added the extractCSS: true property to my Nuxt config and the file size is reduced to a few kB.
build: {
  analyze: true,
  extractCSS: true
}
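For completeness, the relevant parts of nuxt.config.js now look roughly like this (only the options discussed above; the rest of the file is omitted):

// nuxt.config.js
export default {
  buildModules: [
    ['@nuxtjs/vuetify', { treeShake: true }]
  ],
  build: {
    analyze: true,     // opens a webpack-bundle-analyzer report during `npm run build`
    extractCSS: true   // extracts component CSS into separate files instead of the JS bundles
  }
}

(analyze: true is only needed while inspecting the bundle and can be removed afterwards.)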
I am using Elasticsearch "1.4.2" with the JDBC river plugin [plugin=org.xbib.elasticsearch.plugin.jdbc.river.JDBCRiverPlugin version=1.4.0.4] on an AWS instance with 8 GB RAM. Everything was working fine for a week, but then the river plugin stopped working and I was also not able to log in to the server via SSH. After a server restart, the SSH login worked fine, and when I checked the Elasticsearch logs I found this error:
[2015-01-29 09:00:59,001][WARN ][river.jdbc.SimpleRiverFlow] no river mouth
[2015-01-29 09:00:59,001][ERROR][river.jdbc.RiverThread ] java.lang.OutOfMemoryError: unable to create new native thread
java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError: unable to create new native thread
After restarting the service, everything works normally, but after a certain interval the same thing happens. Can anyone tell me what the reason and the solution could be? If any other details are required, please let me know.
When I checked the number of open file descriptors using
sudo ls /proc/1503/fd/ | wc -l
I could see that it keeps increasing over time; it was 320 and has now reached 360. And
sudo grep -E "^Max open files" /proc/1503/limits
this shows 65535
processor info
vendor_id : GenuineIntel
cpu family : 6
model : 62
model name : Intel(R) Xeon(R) CPU E5-2670 v2 @ 2.50GHz
stepping : 4
microcode : 0x415
cpu MHz : 2500.096
cache size : 25600 KB
siblings : 8
cpu cores : 4
memory
MemTotal: 62916320 kB
MemFree: 57404812 kB
Buffers: 102952 kB
Cached: 3067564 kB
SwapCached: 0 kB
Active: 2472032 kB
Inactive: 2479576 kB
Active(anon): 1781216 kB
Inactive(anon): 528 kB
Active(file): 690816 kB
Inactive(file): 2479048 kB
Do the following
Run the following two commands as root:
ulimit -l unlimited
ulimit -n 64000
In /etc/elasticsearch/elasticsearch.yml make sure you uncomment or add a line that says:
bootstrap.mlockall: true
In /etc/default/elasticsearch, uncomment (or add) the line MAX_LOCKED_MEMORY=unlimited and also set the ES_HEAP_SIZE line to a reasonable number. Make sure it's a high enough amount of memory that you don't starve Elasticsearch, but generally it should not be higher than half the memory on your system and definitely not higher than ~30 GB. I have it set to 8g on my data nodes.
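Concretely, the relevant lines in /etc/default/elasticsearch would end up looking something like this (8g is just the value I use on my data nodes; pick whatever fits your machine):

# /etc/default/elasticsearch
ES_HEAP_SIZE=8g
MAX_LOCKED_MEMORY=unlimited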
One way or another, the process is obviously being starved of resources. Give your system plenty of memory and give Elasticsearch a good part of it.
I think you need to analyze your server logs, maybe in /var/log/messages.
I have read somewhere that Ruby's fork is COW (copy-on-write) friendly.
OK, here is the link.
But then, when I happened to google around for more info on it, I found out that Ruby does not support COW.
Now I'm actually a bit confused about whether Ruby actually supports COW functionality or not.
I'm also aware that REE and Rubinius have a COW-friendly GC, so does that mean REE and Rubinius support COW functionality?
If yes, I'm dying to test it. If Ruby does support COW functionality, can anyone suggest how to write some sample code to test the COW concept in Ruby?
Thanks
fork being copy-on-write is a property of your operating system kernel, not Ruby. On most UNIX-like systems, it is.
On Linux, for example, you can look at /proc/<pid>/smaps and check how much of the heap mapping is shared. Here is an example from bash doing a fork:
02020000-023cd000 rw-p 00000000 00:00 0 [heap]
Size: 3764 kB
Rss: 3716 kB
Pss: 1282 kB
Shared_Clean: 0 kB
Shared_Dirty: 3652 kB
Private_Clean: 0 kB
Private_Dirty: 64 kB
Referenced: 144 kB
Anonymous: 3716 kB
AnonHugePages: 0 kB
Swap: 0 kB
KernelPageSize: 4 kB
MMUPageSize: 4 kB
Locked: 0 kB
So, out of its 3764k heap, 3652k is shared. See proc.txt for documentation on the files in /proc.
Of course, Ruby may have something that causes the COW pages to be copied (e.g., maybe its garbage collector writes to each page), but you'll be able to see that by the shared count going to 0.