Jekyll serve on Mac - slow loading - macos

When I run Jekyll on my Mac via the terminal ((base) cXXX-macpro:website cXXX$ jekyll serve), I receive the following output:
Configuration file: /xxx/website/_config.yml
Source: /xxx/website
Destination: /xxx/website/_site
Incremental build: disabled. Enable with --incremental
Generating...
AutoPages: Disabled/Not configured in site.config.
Pagination: Complete, processed 1 pagination page(s)
done in 3.629 seconds.
Auto-regeneration: enabled for '/xxx/website'
Server address: http://127.0.0.1:4000
Server running... press ctrl-c to stop.
Then, when I try to load the website in the browser, it takes up to 2-3 minutes (sic!) to load. And once the site has finally loaded and I click one of its links, it again takes 2-3 minutes to load the respective page.
Can anyone tell me why this is the case and how to solve this problem?

I finally solved the problem, after two weeks of searching for a solution and posting this question yesterday.
I found the solution here: Page loading suddenly super slow #372.
There seems to be a problem with https://gitcdn.xyz/... in the file head.html in the _includes folder. Just change the URL to https://gitcdn.link/ and it works!
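If you prefer to patch the include from the command line, a one-line search-and-replace does it. The snippet below reproduces the fix on a sample file (the /tmp path and file contents are assumptions for demonstration; point the sed at your theme's actual head.html):

```shell
# Set up a minimal head.html that references the dead CDN (sample data only)
mkdir -p /tmp/demo/_includes
echo '<script src="https://gitcdn.xyz/repo/user/raw/master/lib.js"></script>' > /tmp/demo/_includes/head.html

# Swap the dead gitcdn.xyz host for the working gitcdn.link host
# (on macOS's BSD sed, write: sed -i '' 's|...|...|g' file)
sed -i 's|https://gitcdn.xyz/|https://gitcdn.link/|g' /tmp/demo/_includes/head.html

# Show the patched line
cat /tmp/demo/_includes/head.html
```

After editing the real file, restart jekyll serve so the regenerated site picks up the new URL.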

Related

How to disable Phusion Passenger for hosting on Plesk? When I refresh a web app in VueJS I always get an error

My server runs Plesk. I have mydomain.com with a Vue frontend, and a Laravel backend at api.mydomain.com. Previously, when I went to mydomain.com/clients and refreshed the page, I got a Phusion Passenger error. I disabled Phusion Passenger on that domain, but I don't remember how I did it.
Now I have the same problem with the same frontend/backend setup. The only difference is that both domains are subdomains, i.e. front.domain.es and api.domain.es.
If I open front.domain.com it works perfectly, and navigating via the app's menu works fine. But if I am on front.domain.com/clients, for example, and refresh the page, I get a Phusion Passenger error.
To be clear: the system works perfectly; the error only appears when refreshing a page. The first setup I mentioned now works perfectly without Passenger and I can refresh without problems, but I don't remember what I changed.
I want to disable Phusion Passenger for this hosting in Plesk, on the domain or wherever it needs to be done. I hope you can help me!
Thank you very much!
Log File:
[ E 2023-02-02 08:44:42.0486 476333/Tf age/Cor/App/Implementation.cpp:221 ]: Could not spawn process for application /var/www/vhosts/domain.com/front.domain.com: The application process exited prematurely.
Error ID: dbcc318c
Error details saved to: /tmp/passenger-error-BX09me.html
Go to Tools & Settings >> PHP Settings.
At the top of the page you should find this: "Select the PHP handlers you want to make available. You can install additional handlers using the Plesk Installer."
Follow that link and it will take you to the Plesk Installer, where you can enable or disable modules, including Phusion Passenger.
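Alternatively, if you only want to turn Passenger off for a single domain rather than uninstalling the module server-wide, Passenger's documented PassengerEnabled directive can be scoped per virtual host. In Plesk this would go into the domain's "Apache & nginx Settings" under additional Apache directives (a sketch, assuming the Apache-based Passenger integration):

```apache
# Disable Phusion Passenger for this virtual host only
PassengerEnabled off
```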

Google re-captcha stop loading suddenly

We have implemented Google reCAPTCHA and it was working fine. Now the file https://www.gstatic.com/recaptcha/releases/CHIHFAf1bjFPOjwwi5Xa4cWR/recaptcha__en.js is giving a 404.
On our staging environment we downloaded https://www.google.com/recaptcha/api.js and loaded it locally; once everything looked fine, we moved the changes to the live instance. Until yesterday everything looked fine and the captcha was loading and working, but since today it is not visible on either environment (live and staging), and the recaptcha__en.js file mentioned above is giving a 404.
1st question - Can we download https://www.google.com/recaptcha/api.js and serve it from our own base URL?
2nd question - What is 'CHIHFAf1bjFPOjwwi5Xa4cWR' in https://www.gstatic.com/recaptcha/releases/CHIHFAf1bjFPOjwwi5Xa4cWR/recaptcha__en.js?
Found the error: you should not download https://www.google.com/recaptcha/api.js and serve it from your local setup. If you open https://www.google.com/recaptcha/api.js in a browser, it even says at the top of the file: /* PLEASE DO NOT COPY AND PASTE THIS CODE. */ - and I missed it.
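For reference, the documented way to embed reCAPTCHA v2 is to load api.js from Google's servers on every page load and render the widget with your site key; a minimal sketch (YOUR_SITE_KEY is a placeholder for the key from the reCAPTCHA admin console):

```html
<!-- Always load api.js from Google; never self-host a saved copy -->
<script src="https://www.google.com/recaptcha/api.js" async defer></script>

<!-- The widget container; data-sitekey identifies your registered site -->
<div class="g-recaptcha" data-sitekey="YOUR_SITE_KEY"></div>
```

This also answers the second question: the path segment like 'CHIHFAf1bjFPOjwwi5Xa4cWR' is a release/build identifier that Google rotates as it ships new reCAPTCHA versions, which is exactly why a locally saved api.js eventually points at a release that no longer exists and starts returning 404s.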

Sulu: strange hash-related error when trying to save a page / post?

Until recently everything was working well. Now, when I try to save (create or update) any page or post, I get the error message "Error - There was an error when trying to save the form" at the top of the form.
In error log I see this error:
"Uncaught PHP Exception Sulu\Component\Rest\Exception\InvalidHashException: "The given hash for the entity of type "Sulu\Bundle\ArticleBundle\Document\ArticleDocument" with the id "9e0720a7-5565-4a6f-a735-8a186b8fef9b" does not match the current hash. The entity has probably been edited in the mean time." at /var/www/html/vendor/sulu/sulu/src/Sulu/Component/Hash/RequestHashChecker.php line 53"
I tried clearing the Symfony cache, clearing the website cache from the admin, and restarting the Docker containers.
I'm not aware of having done anything to cause this error. Please help.
Update: a strange thing I just noticed. When I try to save an article and get that error, then go back to the overview page (where e.g. all articles of that type are listed), I see the unchanged article title. But when I click to edit it, I see the changed title?!? As if the title on the overview page and the title on the edit page do not come from the same place. How is that possible?
Update:
Now, even after setting up the project once more from scratch, saving articles causes that error. Some more info:
In the stack trace, the last call executed is:
in vendor/elasticsearch/elasticsearch/src/Elasticsearch/ConnectionPool/StaticNoPingConnectionPool.php (line 64)
and it throws "No alive nodes found in your cluster".
And while I'm setting up the project when executing:
php bin/console ongr:es:index:create
I get error:
{"error":{"root_cause":[{"type":"resource_already_exists_exception","reason":"index [su_articles/sWs5F1uzSFO8bFiZqF1Egw] already exists","index_uuid":"sWs5F1uzSFO8bFiZqF1Egw","index":"su_articles"}],"type":"resource_already_exists_exception","reason":"index [su_articles/sWs5F1uzSFO8bFiZqF1Egw] already exists","index_uuid":"sWs5F1uzSFO8bFiZqF1Egw","index":"su_articles"},"status":400}
And when I run:
php bin/console ongr:es:index:create --manager=live
I get a similar error:
In Connection.php line 675:
{"error":{"root_cause":[{"type":"resource_already_exists_exception","reason":"index [su_articles_live/Pissm9ycRj-o79K4wrrDAA] already exists","index_uuid":"Pissm9ycRj-o79K4wrrDAA","index":"su_articles_live"}],"type":"resource_already_exists_exception","reason":"index [su_articles_live/Pissm9ycRj-o79K4wrrDAA] already exists","index_uuid":"Pissm9ycRj-o79K4wrrDAA","index":"su_articles_live"},"status":400}
Also worth mentioning: saving pages now works, but saving articles doesn't.
This solved the issue on ElasticSearch index creation for me:
php bin/console ongr:es:index:drop --force
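Putting the thread's commands together, a full reset of both article indexes would look like this (a sketch: the --manager=live flag on the drop command is an assumption mirroring the create command, and all of these must run inside the Sulu project environment):

```
php bin/console ongr:es:index:drop --force
php bin/console ongr:es:index:create
php bin/console ongr:es:index:drop --force --manager=live
php bin/console ongr:es:index:create --manager=live
```

Dropping first clears the resource_already_exists_exception, so the subsequent create commands can rebuild both the default and the live index cleanly.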
The error can happen in the following cases:
Expected case: somebody else edited the same article as you and saved it.
Unexpected case: your PHPCR cache is out of sync.
Unexpected case: you have a multi-server setup but your cache.app is not configured to use a central cache.
So if it is one of the unexpected cases, you should first clear your cache pools with:
bin/console cache:pool:prune
If you have a multi-server setup, make sure you configure a central cache. Most people use a Redis server for this, which you configure in your cache.yaml, e.g.:
# config/packages/prod/cache.yaml
framework:
    cache:
        default_redis_provider: "%env(resolve:REDIS_DSN)%"
        app: cache.adapter.redis
Also make sure you use the latest version, and maybe update your PHPCR cache configuration based on the sulu/skeleton: https://github.com/sulu/skeleton/blob/2.x/config/packages/prod/sulu_document_manager.yaml. There you could also disable the PHPCR cache if performance doesn't matter in your case, but I would not recommend that.

Laravel app is very slow - over 3 seconds to boot and 2 seconds to load

The app is not even big, but it takes over 5 seconds to reload some pages. I don't think that is normal; even though I've read that Laravel is pretty slow, this is unusable. I've installed the debugbar and it shows that booting takes over 3.5 seconds while loading the app takes over 2.5 seconds. I've been following a course and the instructor's app loads instantly. Can someone tell me what affects the booting and load time?
Since this is a general question without specific detail here's a general answer:
Your first step should be to install the Laravel debug bar (which you say you have) and then look at the query time or controller time and narrow down the culprit. Based on that, you can ask more pointed questions on StackOverflow with the details of the specific queries that are slow, or if it's a controller that's slow, you can post the contents of that controller file. From there we can make recommendations in terms of what changes you can make.
One other thing to try is on the same machine try out a vanilla Laravel app and see what the baseline load times are. Maybe there's nothing wrong with your app at all and instead it has something to do with whatever is serving it.
Alright, I think I've got it fixed. I was using Xdebug for PHP lectures at college, and that is why the app was running so slowly, so I've disabled it.
If anyone else has a similar problem, here is the solution, but be aware that it will disable your Xdebug:
Open XAMPP -> click the Config button for Apache -> open PHP (php.ini).
Inside that file, look for "[xDebug]" and comment out all the lines that enable it (put ";" and a space in front of each line). It should look like this:
; [xDebug]
; zend_extension = C:\xampp\php\ext\php_xdebug-2.9.8-7.2-vc15-x86_64.dll
; xdebug.remote_enable = 1
; xdebug.remote_autostart=on
Then restart the server.
If you still need Xdebug, check out this post - PHP on Windows with XAMPP running 100 times too slow

Magento install stops creating database

I've tried Windows XP and 7
Apache 2.2.19
php-5.3.6 (tried php 5.1.x)
Mysql 5.1.44
The install process runs fine until the database creation screen.
After one minute, the process stops at:
http://127.0.0.1/magento/index.php/install/wizard/installDb/
with a BLANK page.
The database has only 199 tables (sometimes it stops with fewer tables).
If I refresh the page, sometimes more tables are created, but then I get a database error.
I tried to import the database manually and start the install, but I get an error at the same step!
I also retried the install (deleting the cached data in the Magento folder).
What am I doing wrong?
Thanks.
I used to get the same behavior. Another observation: you do not get the blank screen if you run the installation wizard with the Magento sample data already imported into the database.
I did some googling and found the following page, which resolved my issue:
http://www.magentocommerce.com/boards/viewthread/76240/
That is, modify Apache's httpd.conf and add the following lines:
Options FollowSymLinks
AllowOverride All
Order allow,deny
Allow from all
I would suggest trying to install via the Command Line Interface:
http://www.magentocommerce.com/wiki/groups/227/command_line_installation_wizard
Open the index.php file (at the root) and add the following line right after the opening <?php tag:
set_time_limit(0);
Empty the tables in your database and try again. The white page appears because the script times out; when you refresh the same page, Magento shows an error saying the table already exists, because the SQL script that creates the tables does not check whether a table already exists before creating it.
A blank screen in PHP is a strong sign of running out of memory. Check the memory limit according to the requirements and then maybe add a bit more too.
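If memory is the culprit, raising the limit in php.ini is the usual fix (the value below is an example, not an official minimum; check Magento's stated requirements for your version):

```ini
; php.ini - give the Magento installer more headroom
memory_limit = 512M
```

Restart Apache after changing php.ini so the new limit takes effect.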
Change the localhost name to a custom domain such as www.example123.com in
c:\Windows\System32\drivers\etc\hosts
by adding the line:
127.0.0.1 www.example123.com
Afterwards, follow the steps at this link:
http://tinyurl.com/3r6dpop
After installing, some images may not upload.
I got a blank screen too, and the following error:
[09-Feb-2012 15:27:43] PHP Fatal error: Maximum execution time of 30
seconds exceeded in D:\creation\software
developer\projects\magento\document root\lib\Zend\Db\Statement\Pdo.php
on line 228
I'm pretty sure that you are hitting an execution timeout too.
You have to edit your php.ini and increase max_execution_time, and in IIS Manager you have to increase the FastCGI request and activity timeouts as well.
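Concretely, the php.ini side of that change looks like this (300 seconds is an example value; the FastCGI request and activity timeouts are configured separately in IIS Manager and must be raised to at least the same figure):

```ini
; php.ini - allow long-running install scripts to finish
max_execution_time = 300
```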
