I succeeded in setting up a Jekyll site, but there is one thing I would like to optimize. In my config file I have the value http://mydomain.com/ as baseurl. Everything works fine, except that every time I want to use the 'serve' command to run a local development server as a preview for a post I am currently writing, I have to manually set the baseurl to '/' to make it work. Otherwise the server address would be http://0.0.0.0:4000http://mydomain.com/ and non-working.
Is there an easy workaround for this, or am I doing something wrong?
Thanks.
Just found out that this command solves my problem:
jekyll serve --baseurl '/'
What do you mean by "manually" ?
jekyll serve --baseurl '/'
Should work (and your terminal will remember the command). And if you are using GitHub Pages, you don't really need the baseurl anyway.
The baseurl is the path after the host part of the URL.
So your local parameters should be:
url: http://mydomain.com
baseurl: ""
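Another option (a sketch, relying on the fact that Jekyll's --config flag accepts several files and later files override earlier ones) is to keep the production value in _config.yml and override it with a small dev-only config; the file name _config_dev.yml here is arbitrary:

```shell
# _config_dev.yml holds only the local override:
printf 'baseurl: ""\n' > _config_dev.yml
# Later files on --config override earlier ones, so locally run:
# jekyll serve --config _config.yml,_config_dev.yml
```

This way you never have to touch the production _config.yml for a local preview.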
I'm quite new to Laravel and the concept of CI/CD, but I have invested the last 24 hours to get something up and running. I'm using gitlab.com as the repo, where I have configured the CI/CD functionality.
Deployments should go to SRV1, which has a corresponding user configured with a cert. SRV1 then clones the necessary files from the GitLab repo using Deployer. The GitLab repo also has the public key of the SRV1 user. This chain works quite well.
The problem is that after deploying I need to restart php-fpm so that it reinitializes its symlinks and updates its absolute path cache.
I saw various methods to overcome this by setting some cgi settings in php-fpm, but these didn't work for me since they all use nginx, while I'm using Apache.
Is there any other way to tell php-fpm with Apache to reinitialize its paths or reload after changes?
The method of adding the deployer user to the sudoers list and calling service php-fpm restart looks quite hacky to me...
Thanks
UPDATE1:
Actually I found this : https://github.com/lorisleiva/laravel-deployer/blob/master/docs/how-to-reload-fpm.md
It looks like Deployer has a technique for this, but it requires the deployer user to have access to the php-fpm reload. Looks a bit unsafe to me.
I didn't find any other solutions. There are some for nginx that tell it to always re-evaluate the real path; for Apache the equivalent should apparently be FollowSymLinks, but it was not working.
For now I have created a bash script running under root that checks the "current" symlink for changes every 10 seconds; if there was a change, it reloads php-fpm. Not nice, quite ugly in fact, but it should work.
Still open for other proposals.
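The watcher described above could look roughly like this (a sketch; the symlink path, the 10-second interval, and the php-fpm service name are assumptions you would adjust for your setup):

```shell
#!/bin/sh
# Watch a deployment symlink and reload php-fpm when it changes.
# /var/www/app/current is a placeholder for your deployer symlink.
TARGET=${TARGET:-/var/www/app/current}

link_changed() {
    # true when the symlink now points somewhere other than "$1"
    [ "$(readlink "$TARGET")" != "$1" ]
}

watch_symlink() {
    last=$(readlink "$TARGET")
    while true; do
        if link_changed "$last"; then
            last=$(readlink "$TARGET")
            service php-fpm reload   # or: systemctl reload php-fpm
        fi
        sleep 10
    done
}
```

Call watch_symlink from a root shell or wrap it in a systemd unit so it survives reboots.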
I solved this issue on my server by adding a PHP file that clears APCu & OPcache:
<?php
// Only allow requests from localhost
if (in_array(@$_SERVER['REMOTE_ADDR'], ['127.0.0.1', '::1']))
{
    apcu_clear_cache(); // clear the APCu user cache
    opcache_reset();    // reset the opcode cache
    echo "Cache cleared";
}
else
{
    die("You can't clear cache");
}
Then you have to call it with curl after you have updated your symlink:
/usr/bin/curl --silent https://domain.ext/clear_apc_cache.php
I use GitLab CI/CD and it works for me now.
So I'm using Laravel with Kubernetes and everything works great, except that when I access the website it takes too long to load. I troubleshot it and found out that some CSS and JS files are loaded using the private IP (the one that starts with 10: 10.244.xx.xx).
I have no idea what's going on. Is it some kind of NGINX setting that messes it up? I am using the default NGINX Ingress for the cluster, and I repeat: everything works great except for this particular thing.
Edit: It seems like the route:cache command messes everything up. I don't know why.
Never use secure_asset() over asset() unless you know what it can do.
I had to replace all my secure_asset() calls with asset().
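To find the remaining calls, something like this does the job (assuming Laravel's default resources/views template path):

```shell
# List every Blade template still calling secure_asset()
# (grep exits non-zero when nothing is found, hence the || true):
grep -rn "secure_asset(" resources/views/ || true
```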
Make sure you got the domain right in the .env file. In the root folder of your application, run:
sudo nano .env
find the APP_URL parameter and configure it correctly, then run:
php artisan config:cache
So, I found it. It seems I had to run route:cache between config:cache and view:cache on each pod deployment.
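The per-pod deploy step could then be sketched like this (the function name and the hook you call it from are assumptions; only the command order comes from the answer above):

```shell
#!/bin/sh
set -e
# Order matters: route:cache must run between config:cache and view:cache.
refresh_caches() {
    php artisan config:cache
    php artisan route:cache
    php artisan view:cache
}
```

Call refresh_caches from whatever post-deploy hook your pods run.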
I want to deploy my local Laravel website online with ngrok.
I'm using Laragon with an Apache server, and I use this command:
ngrok http -host-header=rewrite site.dev:80
It almost works, but the asset files (like CSS/images) still link to my local server (site.dev). And it's the same for my links; the Laravel routing helper
{{ route('ngo') }} returns site.dev/ngo instead of my online tunnel (http://number.ngrok.io/ngo).
I've tried to:
Edit the http.conf (https://forum.laragon.org/topic/88/allow-outside-other-devices-phones-tablets-to-access-your-local-server-using-ngrok)
Change my Laravel App url in config/app.php
Change my url in .env file
Nothing works.
I ran into this problem myself just now, but also found a way to fix it:
Run the ngrok command without the -host-header=rewrite part, resulting in:
ngrok http site.dev:80
After this, edit your http.conf file and add the ngrok domain as a server alias. For example:
ServerAlias nd3428do.ngrok.io
The problem is that Laravel's route helper uses the HOST header, which is rewritten to site.dev. Without the -host-header part, the header isn't rewritten and now it works.
Configuration: OSX 10.9.1, ruby 2.1.1 via RVM
I have created a new Jekyll site via the command jekyll new sitename.
I then enter that directory and issue the command jekyll serve.
I get the following notice:
Configuration file: /Users/George/sitename/_config.yml
Source: /Users/George/sitename
Destination: /Users/George/sitename/_site
Generating... done.
Server address: http://0.0.0.0:4000
Server running... press ctrl-c to stop.
However when attempting to visit http://localhost:4000/ or http://0.0.0.0:4000/ my browser endlessly attempts to load the page.
I have checked and the site was built correctly including an index.html in /Users/George/sitename/_site/.
The _config.yml looks like so:
name: sitename
markdown: maruku
pygments: true
Does anyone know why Jekyll could be failing to serve the site but not throwing any errors?
Edit: If I change the port to 6000 and attempt to serve, my browser instantly gives a page not found, so there must be something interfering with port 4000. However, the issue still stands.
It's likely that something else is using port 4000 on your computer. You can find out by running the following command:
sudo lsof -i :4000
Since you can run the server on another port, we'll move on to the next issue: setting the baseurl. In your _config.yml you can specify which baseurl to use; this will be the URL from which all pages are served, so if you had baseurl: http://myawesomesite.com, Jekyll would expect pages to be accessed from that URL.
As you're working locally, you have two options:
Set the baseurl to / in your _config.yml:
baseurl: /
Launch Jekyll with the --baseurl flag:
$ jekyll serve -w --baseurl '/'
Additionally, you could specify the port as a flag:
$ jekyll serve -w --baseurl '/' --port 4000
If you want further information about what's going on when you run jekyll, you can run the command with the --trace flag:
$ jekyll serve -w --trace
It is probably a simple question, but I don't know how to solve my problem.
There is a proxy on the network I am using. I execute PHP scripts on the command line (composer.phar, to name it). The script, obviously using curl, tries to download files from HTTP URLs. I cannot download them because of the proxy.
So my question is this: where should I configure proxy settings for a tool like composer.phar?
Maybe the command line is not the problem, and I would have the same problem if I were using curl in an application.
Thank you
I had to add an HTTP_PROXY environment variable to make it work.
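For example (proxy.example.com:3128 is a placeholder for your network's proxy):

```shell
# curl and Composer both honor these variables (some tools read the
# lowercase http_proxy/https_proxy variants instead):
export HTTP_PROXY=http://proxy.example.com:3128
export HTTPS_PROXY=http://proxy.example.com:3128
```

Then run php composer.phar install in the same shell; Composer picks the proxy up from the environment.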