In a product I'm fighting with, I found an .htaccess file at the application root which basically rewrites requests for non-existing files to a central processing script.
For performance reasons I now want to move that rule to my server (virtual host) configuration.
The simplest way to do that is to literally copy the rules into a <Directory> section, since these are interpreted much like .htaccess contexts, right? Well, it works.
Would I have any benefit from modifying the rules and moving them to server/toplevel context instead of a directory context?
EDIT: It seems I wasn't clear enough. By 'directory context' I do NOT mean a .htaccess file, but a <Directory> section within my server configuration file.
Here's an interesting blog post about that:
http://www.fubra.com/blog/2008/01/htaccess-vs-httpdconf/
Seems they concluded that it's only about 6.6% faster to have the rules in httpd.conf
So you get a bit of a performance gain, but you also give up some flexibility to change rules per directory or without restarting the server.
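The question doesn't show the actual rules, but a typical "send everything that isn't a real file to a central script" rewrite, copied from .htaccess into the virtual host's <Directory> section, would look roughly like this (the paths and script name are made up for illustration):

<VirtualHost *:80>
    ServerName example.com
    DocumentRoot /var/www/app

    <Directory /var/www/app>
        RewriteEngine On
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^ index.php [L]
    </Directory>
</VirtualHost>

Inside <Directory> the rules behave essentially as they do in .htaccess (per-directory matching, no leading slash). If you moved them to server or virtual-host context outside any <Directory>, the patterns would match the full URL path including the leading slash and would usually need adjusting, which is why literally copying them into <Directory> is the path of least resistance.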
The server configuration file is read and parsed only once, when the server is started. .htaccess files, on the other hand, are read and parsed every time a request is made. Furthermore, every .htaccess file on the way down to the directory where the requested file is located is read and parsed. If you have a structure like this:
htdocs/
    foo/
        .htaccess
        bar/
            .htaccess
            baz/
                .htaccess
                somefile.html
and you request /foo/bar/baz/somefile.html, all the .htaccess files down the file hierarchy to somefile.html are read and parsed. You can judge for yourself whether that amounts to overhead or not.
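Related to that: once the rules live in the server configuration, you can tell Apache not to look for .htaccess files at all, which removes the per-request overhead described above. A minimal sketch, assuming the document root is /var/www/htdocs:

<Directory /var/www/htdocs>
    AllowOverride None
</Directory>

With AllowOverride None, Apache can skip checking for and parsing .htaccess files in that tree on each request.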
Related
I am about to learn rewrite rules on lighttpd.
I have a question: is it normal behaviour that lighttpd ignores .htaccess files?
I have some rules in .htaccess and they were ignored; when I write them into lighttpd.conf they are executed correctly.
How can I enable mod_rewrite to read the .htaccess files?
Quoting Lighttpd FAQ:
Do you support .htaccess files?
No. Lighty's design does not permit implementing this functionality, as config files are loaded at startup time and .htaccess would need to be parsed at request time.
[…]
Furthermore, .htaccess files are Apache config files. We would need to write a parser and it might not even be possible to map all functionality to lighty logic.
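So the rules have to go into lighttpd.conf itself. As a rough illustration (the pattern here is just an example, not taken from the question), a "rewrite to a central script if the file doesn't exist" rule in lighttpd could look something like this:

server.modules += ( "mod_rewrite" )
url.rewrite-if-not-file = ( "^/(.*)$" => "/index.php/$1" )

url.rewrite-if-not-file only applies when the requested path doesn't match an existing file, which covers the common front-controller case that .htaccess rewrites are often used for.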
I was starting to implement mod_rewrite rules on my site when I came across some weird behaviour. I removed my htaccess file for this test, to take it out of the equation.
My local dev site is at http://dev.mydomain.com and is a virtual host.
If I go to, e.g., "http://dev.mydomain.com/blog/", that folder doesn't exist, but Apache finds a matching PHP file, "blog.php", and displays that instead.
This only happens when there is a matching PHP file; when there isn't, e.g. "http://dev.mydomain.com/barfblurg/", I just get a 404.
It's as if there were some extra rewrites going on above the level where the site's .htaccess would sit: when /file/ can't be resolved, Apache searches for other matching files and serves one of those instead. But there are no other .htaccess files that could have an effect, so presumably this is a config thing? I can't see anything in apache.conf or php.ini that would cause this behaviour.
(This also doesn't happen on my live host elsewhere, so it's definitely a config thing.)
Can anyone point me to where to turn that behaviour off? It's interfering with the URL rewrites I want to do.
(Apache2, OSX, 10.10.5)
This behavior is due to the MultiViews option being enabled.
MultiViews is used by Apache's content negotiation module (mod_negotiation), which runs before mod_rewrite and makes Apache match files by extension: the URL can say /file, but the server will serve /file.php.
To turn this off use:
Options -MultiViews
at the top of your .htaccess or in your Apache config/vhost file.
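In the virtual host configuration that could look like this (the directory path is just an example):

<Directory /var/www/dev.mydomain.com>
    Options -MultiViews
</Directory>

Remember that, unlike .htaccess changes, changes to the server config only take effect after a reload or restart of Apache.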
Currently I've put my Laravel site online (just for testing). But when I go to, for example, www.mysite.nl/.env, it shows my database password etc. How can I prevent this?
It should be mentioned that use of .env files is intended to be for development only, not production.
Once you're ready to take the site live, the values that you put in the .env file should be moved to the server environment variables.
This should be more secure for two reasons:
The problem you've discovered, that the .env file is accessible, will no longer apply, since there will be no more .env file. Plus, this won't require any server configuration changes (.htaccess files or similar) to restrict access to the .env file.
Server environment variables will not be accessible to anyone without shell access to the server.
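How you set those server environment variables depends on your stack. As one example, with Apache and mod_php you could put them in the virtual host; the names and values below are placeholders:

SetEnv APP_ENV production
SetEnv DB_DATABASE mydatabase
SetEnv DB_USERNAME myuser
SetEnv DB_PASSWORD secret

With PHP-FPM you would instead use env[...] entries in the pool configuration, or the system environment of the FPM service.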
Also keep in mind that the .env file should not be reachable by users; only the content of the public/ folder should be reachable. Set up your server configuration to do that (not always possible, though).
Otherwise, for your production environment you can simply omit the .env file and define all the settings directly in app/config/.
Some hosting providers also provision their servers with Forge.
Remember to always put your .env file into the .gitignore file if you are using it.
Have a nice day
Removing .env alone doesn't guarantee that your code is safe. The only reason this happens in the first place is that you've ignored the default recommended structure, which is to give the web server access to the public directory only.
Now, let's say you removed the .env file. Are you sure that:
Nobody can access storage/logs/laravel.log (or the daily rotating log files)?
Your cache and session data are safe, if you're using the file-based drivers?
Nobody can peek at your compiled Blade views under storage/framework/views?
Don't ever skip or compromise security just to work around a web hosting limitation.
To hide .env on an Apache server, add the code below at the top of your .htaccess file. It will also disable the directory listing and hide .gitignore, the webpack config and more. Hope that helps.
# Disable directory listing
Options -Indexes

# Block files which need to be hidden; specify the file extensions here
<Files ~ "\.(env|json|config.js|md|gitignore|gitattributes|lock)$">
    Order allow,deny
    Deny from all
</Files>

# Specify full file names here, separated by '|'
<Files ~ "(artisan)$">
    Order allow,deny
    Deny from all
</Files>
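Note that Order/Deny are the older Apache 2.2 access-control directives; on Apache 2.4 they only keep working if mod_access_compat is loaded. The 2.4-native equivalent, using the same file pattern, would be:

<Files ~ "\.(env|json|config.js|md|gitignore|gitattributes|lock)$">
    Require all denied
</Files>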
Be careful, the accepted answer skips over an important detail. Even if the .env file is removed, the directory that it resides in should not be accessible in the first place.
Instead of attempting to deny access to a list of files or folders as some people suggest, the web server's Document Root should be set to the /public folder in a Laravel application. All requests will then be mapped safely starting from /public, which means that the source files of your application are not accessible from the web unless you explicitly give access to them.
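A minimal sketch of such a virtual host (the filesystem path is just an example):

<VirtualHost *:80>
    ServerName www.mysite.nl
    DocumentRoot /var/www/laravel/public

    <Directory /var/www/laravel/public>
        AllowOverride All
        Require all granted
    </Directory>
</VirtualHost>

With the document root set to public/, files such as .env, the storage/ directory and the rest of the application source are simply not addressable by URL, so there is nothing left to block.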
I noticed Joomla, Wordpress and other CMSs have blank index.html files in ALL their sub folders to prevent people from peeking into the folder structure. My question is why can't they forbid folder viewing using the .htaccess file instead of putting a blank index.html file into all the folders. What's the difference and why have they chosen index.html?
Turning off folder 'Indexes' is best done in the master httpd.conf or vhosts.conf file rather than in local .htaccess files.
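For example, in the vhost or httpd.conf that could look like this (the path is just an example):

<Directory /var/www/joomla>
    Options -Indexes
</Directory>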
Joomla (and Mambo before it) has been around quite a while and is used widely on shared hosting servers. The decision was taken to use index.html files as a safe fallback given the 'mixed' nature of shared hosting.
Obviously .htaccess files are Apache-only, and they can cause server 500 errors if they are present on servers not expecting them or if they contain directives not supported by a particular server setup. Consequently, Joomla doesn't ship with a .htaccess file as such by default. There is a htaccess.txt file which the user needs to put in place manually if they activate certain features. It is assumed that a user knowledgeable enough to put the file in place will understand the consequences, and if it does kill their site they will immediately recognise the cause, since the error follows directly from their own action.
As server setups have advanced there is (sometimes heated) discussion about the current validity of the use of index.html files - but for now the policy is that all add-ons should ship with 'blank' index.html files in all folders.
I am trying to find the best way of displaying content that resides under a different server location.
So I have a domain where the main site content is located at:
/home/user/my_site/www/
and accessed at:
www.example.com
I have another site (a blog) located at:
/home/user/the_blog/www/
I wish to get the blog content to appear at:
www.example.com/news
I was planning on using an .htaccess file at my_site to set the rules for the path:
/news
However, the content for the blog resides outside the document root that the .htaccess file applies to, so although I can set a rule there, it won't be able to access this content.
Is it possible to change the document root somewhere higher up the chain?
Or is it possible to just create a symlink for the /news folder? Is this even advisable?
Thanks in advance
Tom
You could set an alias to that location:
Alias /news /home/user/the_blog/www
But that can only be set in the server or virtual host configuration context and not in a .htaccess file.
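On Apache 2.4 you also have to grant access to the aliased directory, since it lies outside the DocumentRoot. A sketch for the virtual host, using the paths from the question:

Alias /news /home/user/the_blog/www

<Directory /home/user/the_blog/www>
    Require all granted
</Directory>

(On Apache 2.2 the equivalent inside the <Directory> block would be "Order allow,deny" followed by "Allow from all".)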
Since the blog directory isn't under your DocumentRoot, I don't see how mod_rewrite can work here. And I don't think anyone would recommend symlinking. The way I see it, there are only two ways out of this: either change your DocumentRoot or move the blog directory into the current DocumentRoot.