I am using Let's Encrypt and get this error:
urn:acme:error:unauthorized :: The client lacks sufficient authorization :: Error parsing key authorization file: Invalid key authorization: malformed token
I tried: sudo service nginx stop
but I get the error: nginx service not loaded
So I had a lot of trouble with this stuff. Basically, the error means that certbot was unable to find the file it was looking for when testing that you own the site. This has a number of potential causes, so I'll try to summarize, because I encountered most of them when I set this up. For more reference material, I found the GitHub README much more useful than the docs.
First thing to note is that the nginx service needs to be running for the ACME authorization to work. It sounds like yours is not, so start by spinning that up.
sudo service nginx start
With that going, everything here depends on the file location of the website you're trying to create a certificate for. If you don't know where that is, it will be in the relevant configuration file under /etc/nginx. The exact file depends largely on your version of NGINX, but it is usually /etc/nginx/nginx.conf, /etc/nginx/sites-enabled/[site-name], or /etc/nginx/conf/[something].conf. Note that the configuration file (or at least its directory) should be referenced from /etc/nginx/nginx.conf, so you might start there.
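If you're not sure which file defines your site, dumping the fully merged configuration and searching it can help. A minimal sketch (nginx -T needs a reasonably recent NGINX; on older versions just grep the files under /etc/nginx directly):
# print the complete resolved configuration and look for server names and roots
sudo nginx -T | grep -E 'server_name|root'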
This is an important folder, because this is the folder that certbot needs to modify. It creates some files in a nested folder structure there, so that the URL it then tries to read returns the contents of those files. The folder it tries to create sits under the root directory you give it, at:
/.well-known/acme-challenge
It will then try to create a file with an obscure name (I think it's a GUID), and read that file from the URL. Something like:
http://example.com/.well-known/acme-challenge/abcdefgh12345678
This is important, because if your root directory is poorly configured, the URL will not map to the folder and the authorization will fail. And if certbot does not have write permission to the folders when you run it, the file will not be created, so the authorization will also fail. I encountered both of these issues.
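One way to rule out a path mismatch before involving certbot is to stage a test file yourself and fetch it over plain HTTP. A rough sketch, assuming a webroot of /home/example and the domain example.com (substitute your own values):
# create the challenge folder under the webroot and drop a test file into it
sudo mkdir -p /home/example/.well-known/acme-challenge
echo "test" | sudo tee /home/example/.well-known/acme-challenge/test.txt
# if this prints "test", the URL-to-folder mapping is correct
curl http://example.com/.well-known/acme-challenge/test.txt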
Additionally, you may have noticed that the above URL is http, not https. This is also important. I was using an existing encryption tool, so I had to configure NGINX to serve the /.well-known folder tree over port 80 instead of 443 while still keeping the rest of my data behind the secure https URL. These two things make for a somewhat complicated NGINX file, so here is an example configuration to reference.
server {
    listen 80;
    server_name example.com;

    location /.well-known/acme-challenge {
        default_type "text/plain";
        root /home/example;
    }

    location / {
        return 301 https://$server_name$request_uri;
    }
}
This allows port 80 for everything related to the certbot challenges, while retaining security for the rest of my website. You can modify the directory permissions to ensure that certbot has access to write the files, or simply run it as root:
sudo ./certbot-auto certonly
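If you prefer to be explicit rather than rely on the interactive prompts, certbot's webroot plugin can be pointed straight at the directory from the configuration above. A sketch, again assuming /home/example and example.com:
sudo ./certbot-auto certonly --webroot -w /home/example -d example.com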
After you get the certificate, you'll have to set it up in your config as well, but that's outside the scope of this question, so here's a link.
After multiple failed attempts to correctly deploy my Laravel app through Beanstalk, I decided to follow the basic AWS tutorial on how to create a Laravel app and deploy it to Beanstalk (in order to rule my app out as the cause).
It launched, so I then added the following endpoint in routes/web.php:
Route::get('/hello', function () {
    return 'hi';
});
It failed.
I then discovered that Beanstalk switched its server from Apache to nginx just a couple of months ago! No mention of this in the Laravel tutorial despite it meaning that the two are no longer compatible in their default states.
After doing a bit of digging, I found a link to another AWS tutorial which apparently resolves the issues: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/platforms-linux-extend.html
The solution looked very simple: create a custom configuration file and store it in a '.platform' (dot platform) directory located in the root of the app.
I created a custom file called laravel.config with the following inside:
location / {
    try_files $uri $uri/ /index.php?$query_string;
    gzip_static on;
}
The path to the file is:
~/my-app/.platform/nginx/conf.d/elasticbeanstalk/
I re-deployed. The API still doesn't work...
I connected to the instance through SSH and noticed that the .platform directory isn't there. I guess this makes sense, since it's only used at the point of deployment...
One apparent issue is the wrong extension on your nginx config file.
The file should be *.conf (as shown in the docs), not *.config.
A reason why your .platform folder is being ignored could be that you have it listed in your .gitignore or .ebignore file.
Please note that even if you fix the extension, the configuration itself may still be incorrect, which I can't verify.
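Putting that together, a corrected layout would look something like this, reusing the location block from the question; the file name laravel.conf is arbitrary, only the .conf extension matters:
# ~/my-app/.platform/nginx/conf.d/elasticbeanstalk/laravel.conf
location / {
    try_files $uri $uri/ /index.php?$query_string;
    gzip_static on;
}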
I followed the instructions on the official Dockerhub repo for IIS (https://hub.docker.com/_/microsoft-windows-servercore-iis), but running into "Site can't be reached" when trying to access via the IP of the container.
I get 403 Forbidden when I try http://localhost:8000.
I copied a test.html page into C:/inetpub/wwwroot and verified by logging into the container as well.
The result of appcmd list site is as follows:
SITE "Default Web Site" (id:1,bindings:http/*:80:,state:Started)
A 403 typically indicates that the address being requested maps to a directory rather than to a file the site can serve from its root.
I doubt whether any files have actually been copied into the remote Docker container.
Please make sure that the directory containing the Dockerfile also contains the content folder with all of the site files.
WORKDIR /inetpub/wwwroot
COPY content/ .
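For context, a minimal complete Dockerfile along those lines might look like the sketch below; the base image name follows the linked Docker Hub page, and content/ is assumed to be a folder next to the Dockerfile that holds your site files:
# Windows Server Core image with IIS preinstalled
FROM mcr.microsoft.com/windows/servercore/iis
# copy the local content/ folder into the IIS web root
WORKDIR /inetpub/wwwroot
COPY content/ .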
Feel free to let me know if the problem persists.
You are right in your analysis. What I didn't realize is that IIS serves index.html (among its other default documents) by default, and my file was called helloworld.html, which obviously wasn't going to be served when I accessed localhost:8000; it works when I try localhost:8000/helloworld.html.
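If you want the page served at the bare URL without renaming it, the site's default-document list can be extended. A hedged sketch using appcmd inside the container (helloworld.html comes from the question; the quoting may need adjusting for your shell):
C:\Windows\System32\inetsrv\appcmd.exe set config /section:defaultDocument /+"files.[value='helloworld.html']"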
I have a virtual /xmlrpc.php route on my Drupal site. It's for legacy compatibility. With the default DDEV configuration, nginx returns "No input file specified." when I visit https://mysite.ddev.local/xmlrpc.php.
How can I make it ask Drupal to handle the request instead?
This answer assumes the use of DDEV 1.8.0+.
Create a new file in the nginx subfolder of your project's .ddev directory, e.g. .ddev/nginx/xmlrpc.conf. (The file can be named anything as long as it ends in .conf.)
Paste in the following:
# pass the PHP scripts to the FastCGI server listening on the socket
location = /xmlrpc.php {
    try_files $uri @rewrite;
}
Run ddev start to recreate the web container.
This pattern, also used for things like handling /system/files paths (for Drupal private files), will prefer a real xmlrpc.php file if it exists, and otherwise will ask Drupal's index.php (and routing system) to handle the request.
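For reference, the /system/files case mentioned above can be handled with the same shape of override. This is a hypothetical sketch that mirrors the snippet above and assumes the stock config defines the @rewrite named location; it is not copied from the DDEV docs:
# hand Drupal private file paths to index.php unless a real file exists
location ^~ /system/files/ {
    try_files $uri @rewrite;
}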
I've tried to push my Laravel 5.1 project to a server. I want to make a symlink to the public_html folder so I can access it as normal.
ln -sv app/crowd/public public_html
I've tried that one, but the symlink points at the public folder itself instead of its contents.
ln -sv ~/app/crowd/public ~/public_html
I've tried that one too, but it still fails.
Any suggestions on how to make it work?
If "the symlink is taking the public folder instead of it's content" means that your browser is giving a 403 or saying you can't access the folder, it may mean that you don't have index.php in your DirectoryIndex directive (if you're using Apache - not sure what the equivalent is on nginx).
When you enter a location in your browser and don't include a filename (www.host.com/page.html), the server looks at a list of default pages and sees if one of those is present. If not, it tries to serve the whole directory. Try browsing to www.host.com/index.php and see what you get. If it loads, you just need to update your config. If not, please explain what you see.
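For what it's worth, the nginx counterpart to Apache's DirectoryIndex is the index directive. A minimal sketch of both (the file list is just an example):
# Apache: in the vhost or .htaccess
DirectoryIndex index.php index.html
# nginx: inside the relevant server or location block
index index.php index.html;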
I am trying to deploy my Laravel 5 site to my VPS using Envoyer. I changed the document root in the site's Apache settings to /current/public (settings below); when I do this, I receive a generic Apache 500 error. If I use the old public directory, everything loads properly.
I also tried chmod 777 -R storage, no luck. There are no log entries in the Laravel log, everything deploys fine without errors.
I did notice that if I create a plain HTML document and deploy it via Envoyer, I am able to access it directly with the /current/public document root; anything related to Laravel (and only when using current/public) results in the 500.
Ideas? Would a symlink be a possible solution? Oddly, my Forge configuration on my other Envoyer site has the document root set to public, yet there is no symlink to current/public that I can see. It may be set to current/public and just not displaying that for some reason.
customlog:
  -
    format: combined
    target: /usr/local/apache/domlogs/mydomain.org
  -
    format: "\"%{%s}t %I .\\n%{%s}t %O .\""
    target: /usr/local/apache/domlogs/mydomain.org-bytes_log
documentroot: /home/eyf/current/public
group: eyf
hascgi: 1
homedir: /home/eyf
ifmoduleconcurrentphpc: {}
ifmodulemodsuphpc:
  group: eyf
ip: MY.IP.ADDR
owner: root
phpopenbasedirprotect: 1
port: 80
scriptalias:
  -
    path: /home/eyf/public/cgi-bin
    url: /cgi-bin/
  -
    path: /home/eyf/public/cgi-bin/
    url: /cgi-bin/
serveradmin: webmaster#mydomain.org
serveralias: www.mydomain.org
servername: mydomain.org
usecanonicalname: 'Off'
user: eyf
userdirprotect: ''
Okay, so I encountered two separate problems here.
The first problem was that I was deploying code as root while trying to access a site owned by a cPanel user (eyf in this case). Because the files and directories were deployed as root, the resulting ownership problem caused the generic 500 error page.
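For anyone hitting the same thing, the ownership side can be fixed by re-owning the deployed tree to the site user. A rough sketch using the user and home directory from this question (this re-owns the whole home directory, which may be broader than you need):
# give the cPanel user ownership of everything that was deployed as root
sudo chown -R eyf:eyf /home/eyf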
I then tried to connect via Envoyer as eyf, and there was some sort of SSH key issue: even though I added the key to eyf via cPanel, it did not seem to take. Repeated attempts to connect from Envoyer eventually led to the IP address being blacklisted.
In response to this, Envoyer simply said "Failed" when trying to connect to the server. Immediately after saying "Failed," a warning message would appear saying that there was a problem with PHP-FPM.
Taylor says that this PHP-FPM warning message appears because the connection was unsuccessful and Envoyer could not connect to PHP-FPM. Well, this is totally misleading, because I do not have PHP-FPM installed on this server and it has absolutely nothing to do with why the connection failed (it was an SSH authentication problem).
I asked him to please improve the warnings/errors for things like this; it stretched what should have been a quick fix into a several-hour tail-chasing session. Dploy.io, a competitor, clearly showed an SSH connection issue when I first attempted to connect and had forgotten the SSH key: "d'oh! Let me fix that," and the problem was solved in less than a minute.
Anyway, back to Envoyer bliss - just a bit ticked. ;) The IP addresses were whitelisted, I added the SSH key manually for the cPanel user (/.ssh/id_rsa), and now everything works.
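For completeness, the usual way to add an SSH key manually for a user is to append the public key to that user's authorized_keys and keep the permissions strict; a hypothetical sketch (the key file name is a placeholder):
# append the deployment service's public key and tighten permissions for sshd
cat envoyer_key.pub >> /home/eyf/.ssh/authorized_keys
chmod 700 /home/eyf/.ssh && chmod 600 /home/eyf/.ssh/authorized_keys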