Protect Jenkins with nginx http auth except callback url - proxy

I installed Jenkins on my server and I want to protect it with nginx HTTP auth so that requests to:
http://my_domain.com:8080
http://ci.my_domain.com
will be protected, except for one location:
http://ci.my_domain.com/job/my_job/build
which is needed to trigger builds. I am fairly new to nginx, so I'm stuck on the nginx config for this.
upstream jenkins {
    server 127.0.0.1:8080;
}

server {
    listen x.x.x.x:8080;
    server_name *.*;

    location / {
        proxy_pass http://jenkins;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;

        auth_basic "Restricted";
        auth_basic_user_file /path/.htpasswd;
    }
}
I tried something like the config above, but when I visit http://my_domain.com:8080 there is no HTTP auth.

Finally I figured out how to solve this problem. First, uncheck the "Enable security" option on the Manage Jenkins page. With security disabled we can trigger jobs with requests like http://ci.your_domain.com/job/job_name/build.
If you want to add a token to the trigger URL, enable security, choose "Project-based Matrix Authorization Strategy" and give Admin rights to the Anonymous user. After that, the Configure page of your project will show a "Trigger builds remotely" option where you can specify a token, so your request will look like JENKINS_URL/job/onru/build?token=TOKEN_NAME
So with security disabled we need to protect http://ci.your_domain.com with nginx HTTP auth, except for URLs like /job/job_name/build.
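A minimal sketch of that nginx setup (the server name, .htpasswd path and the exact trigger-URL pattern are placeholders; adjust them to your installation):
upstream jenkins {
    server 127.0.0.1:8080;
}

server {
    listen 80;
    server_name ci.your_domain.com;

    # Everything is behind basic auth by default
    auth_basic "Restricted";
    auth_basic_user_file /path/.htpasswd;

    location / {
        proxy_pass http://jenkins;
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_redirect off;
    }

    # Build-trigger URLs stay reachable without credentials
    location ~ ^/job/[^/]+/build$ {
        auth_basic off;
        proxy_pass http://jenkins;
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_redirect off;
    }
}
If you use the token variant, the query string (?token=TOKEN_NAME) is not part of nginx location matching, so the same block still applies.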
And of course we need to hide port 8080 from external requests. Since my server runs Ubuntu I can use the iptables firewall:
iptables -A INPUT -p tcp --dport 8080 -s localhost -j ACCEPT
iptables -A INPUT -p tcp --dport 8080 -j DROP
But! On Ubuntu (I am not sure about other Linux distributions) these iptables rules will disappear after a reboot, so we need to save them with:
iptables-save
And that is not the end: this command only dumps the current rules, so we still need something to load them on startup. The easiest way is the 'iptables-persistent' package:
sudo apt-get install iptables-persistent
iptables-save > /etc/iptables/rules
Take a closer look at iptables if needed: https://help.ubuntu.com/community/IptablesHowTo#Saving_iptables. And good luck with Jenkins!
There is also a good example of running Jenkins on a subdomain of your server: https://wiki.jenkins-ci.org/display/JENKINS/Running+Hudson+behind+Nginx

Related

Devilbox (docker) + Laravel Websockets

I'm trying to get the two to work together. Is there something I'm missing, or a way to debug why it's not working?
I edited .devilbox/nginx.yml as suggested here, although I'm trying to contain it to the path /wsapp:
---
###
### Basic vHost skeleton
###
vhost: |
  server {
      listen       __PORT____DEFAULT_VHOST__;
      server_name  __VHOST_NAME__ *.__VHOST_NAME__;

      access_log   "__ACCESS_LOG__" combined;
      error_log    "__ERROR_LOG__" warn;

      # Reverse proxy definition (adjust the port if needed, currently '6001')
      location /wsapp/ {
          proxy_http_version 1.1;
          proxy_set_header Upgrade $http_upgrade;
          proxy_set_header Connection "Upgrade";
          proxy_set_header Host $host;
          proxy_set_header X-Real-IP $remote_addr;
          proxy_pass http://php:6001;
      }

      __REDIRECT__
      __SSL__
      __VHOST_DOCROOT__
      __VHOST_RPROXY__
      __PHP_FPM__
      __ALIASES__
      __DENIES__
      __SERVER_STATUS__

      # Custom directives
      __CUSTOM__
  }
I installed laravel-websockets and configured it to use '/wsapp'.
Visiting the dashboard to test:
https://example.local/laravel-websockets
But the console shows an error:
Firefox can’t establish a connection to the server at
wss://example.local:6001/wsapp/app/a558686cac00228eb003?protocol=7&client=js&version=4.3.1&flash=false.
2 pusher.min.js:8:6335 The connection to
wss://example.local:6001/wsapp/app/a558686cac00228eb003?protocol=7&client=js&version=4.3.1&flash=false
was interrupted while the page was loading. pusher.min.js:8:6335
I've created a setup that works.
First you need 2 domains in Devilbox:
one for your Laravel app (example.local)
one for your Laravel websocket (socket.example.local)
In the socket.example.local directory, create htdocs and .devilbox; the .devilbox folder is where you'll add your nginx.yml file.
When you connect to your socket, don't use the port anymore, and don't isolate the socket to /wsapp anymore.
Use socket.example.local as the PUSHER_HOST value in .env.
Run your Laravel websocket server from example.local.
Visit the /laravel-websockets dashboard, remove the port value, then click connect.
I don't suggest serving your socket under /wsapp because it's hard to configure nginx to serve 2 apps (it's hard for me; maybe someone more expert in nginx can suggest something for this setup).
But that's my solution; if something is unclear, please comment.
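For reference, the broadcasting values in .env might then look roughly like this (a sketch; apart from PUSHER_HOST, which is mentioned above, these are the usual Laravel Pusher settings and your actual credentials will differ):
# .env on example.local (sketch)
BROADCAST_DRIVER=pusher
PUSHER_APP_ID=local                # placeholder, use your laravel-websockets app credentials
PUSHER_APP_KEY=local
PUSHER_APP_SECRET=local
PUSHER_HOST=socket.example.local   # the second Devilbox vhost, no port and no /wsapp path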

install wildcard lets encrypt ssl certificate on laravel sail

I created a SaaS app using Laravel 8 with the first-party Laravel Sail (Docker) package and the Tenancy for Laravel package for the SaaS part.
I need to install a wildcard Let's Encrypt SSL certificate on the main app so that all tenant apps are served over HTTPS.
I tried to install the certbot image like this:
certbot:
    image: certbot/certbot:latest
The image installed, but I do not know what to do after that.
I also tried without Docker, following the certbot instructions; everything installed successfully, but the website doesn't open and all requests time out.
Update:
This is the ports section in my docker-compose.yml file:
ports:
    - '443:443'
I ran docker ps and all services are up and running.
I ran sudo ufw status and this is the result
TLDR: Laravel Sail is not meant for production. Use a different Docker configuration; if you need an example you can find one here: https://github.com/thomasmoors/laravel-docker
Also, wildcard certificates are not achievable with HTTP-01 challenges; you need a DNS-01 challenge, which works by adding a TXT record to your DNS config.
Wildcard certificates from Let's Encrypt are only possible with a DNS-01 challenge. This, however, requires you to add a TXT record to your DNS zone, so wildcards are a no-go unless you have an API to change your DNS. It might be worth a look at https://stackexchange.github.io/dnscontrol/
However I do not know if your domain provider supports this.
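For reference, a one-off manual DNS-01 request with certbot looks roughly like this (certbot will tell you which TXT record to create; for automated renewal you would need one of certbot's DNS plugins for your provider):
certbot certonly --manual --preferred-challenges dns \
    -d example.com -d '*.example.com'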
For regular (non-wildcard) certificates:
By default Laravel Sail serves the app with the built-in php artisan serve web server, which has no support for SSL certificates, so you need to add a reverse proxy like nginx in front of it. Because of this I believe Sail is neither production-ready nor intended for production. I have made an example of a non-Sail docker-compose config for Laravel: https://github.com/thomasmoors/laravel-docker
Certbot works by placing a file on your web server, which is then retrieved for the challenge. However, it looks like your current configuration does not share a volume between your web server and Certbot. You also need to allow Certbot to modify your nginx config.
The default location for your code is /var/www/html, so you should enable Certbot to write to that directory by adding a volume for the Certbot service as well; the docker-compose snippet for that follows the nginx example below. For reference, an nginx reverse-proxy server block with the Certbot-managed SSL lines looks like this:
upstream sentry_docker {
    server 192.168.1.94:9005;
}

server {
    server_name example.dev;

    location / {
        proxy_pass http://sentry_docker;
        proxy_set_header Host $host;
    }

    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/example.dev/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/example.dev/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}

server {
    if ($host = example.dev) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    server_name example.dev;
    listen 80;
    return 404; # managed by Certbot
}
certbot:
    image: certbot/certbot:latest
    volumes:
        - .:/var/www/html
        - ./data/nginx:/etc/nginx/conf.d
There is not enough information to help you directly, but I can suggest checking out this guide https://github.com/Daanra/laravel-lets-encrypt and double-checking your configuration.
If the website doesn't show up, then, judging by the error, the problem might be related to the network (a firewall) or to something else (the application not running and binding to port 80 for HTTP requests and 443 for HTTPS).
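If the firewall is the culprit, a quick check on Ubuntu with ufw (assuming ufw is what manages your firewall) would be something like:
sudo ufw status verbose    # confirm whether 80 and 443 are allowed
sudo ufw allow 80/tcp      # HTTP, also needed for the HTTP-01 challenge
sudo ufw allow 443/tcp     # HTTPS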

Hostname to docker containers mapping?

When Apache is installed directly on the host, I add internal hostnames to "C:\Windows\System32\drivers\etc\hosts" and use virtual hosts to easily access different projects locally, say http://foo.test and http://bar.test.
Using a Docker container for each project, I can access each project by assigning a host port in the docker-compose file.
I hope Docker has some internal tooling to provide access to containers via hostname.
Using a reverse proxy can be a solution as described in these relatively old but brilliant articles.
https://www.alexecollins.com/developing-with-docker-proxy-container/
http://jasonwilder.com/blog/2014/03/25/automated-nginx-reverse-proxy-for-docker/
But because I believe this is a very common development requirement, I hope Docker has something built in to address it.
My approach to this problem is the following. Consider I have containers A and B, both running a web server. I simply add a reverse proxy on my local machine which looks at the hostname and then proxies to the respective container.
But instead of proxying to hard-coded IP addresses, I proxy through local ports. So instead of binding both your containers to port 80, bind each to a random local port (e.g., 4041) and proxy over that. That way you decouple the container IP from your host.
My nginx config file then looks like this:
server {
    server_name example.com;  # Add "<host lan ip> example.com" to your /etc/hosts file

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;   # These two lines ensure that
        proxy_set_header Connection "Upgrade";    # a WebSocket upgrade is used
        proxy_pass http://localhost:4041/;
    }
<snip>
Adding multiple containers then just means you have to edit one nginx proxy file and bind one more port on your local machine. There is no coupling between Docker IPs and your local hosts file.
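As an illustration, the container side of that is just a local-only port mapping in docker-compose (service names and images here are hypothetical):
services:
    webapp-a:
        image: my-webapp-a          # hypothetical image
        ports:
            - "127.0.0.1:4041:80"   # host port 4041 -> container port 80, local only
    webapp-b:
        image: my-webapp-b
        ports:
            - "127.0.0.1:4042:80"
nginx then proxies example.com to http://localhost:4041/ and a second hostname to http://localhost:4042/, so the container IPs never appear in your config.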

How can I use "let's encrypt" without stopping nginx?

I am adding HTTPS support to our servers. How can I add Let's Encrypt support without stopping nginx?
Add this block to your server configuration (depending on your setup you can use a path other than /var/www/html):
location ~ /.well-known {
    root /var/www/html;
    allow all;
}
Reload nginx, run certbot as follows:
certbot certonly -a webroot --webroot-path=/var/www/html -d yourdomain.example
Apply the generated certificate in your server configuration:
ssl_certificate /etc/letsencrypt/live/yourdomain.example/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/yourdomain.example/privkey.pem;
Make sure the server block is configured to listen on port 443 with SSL:
listen 443 ssl;
Reload nginx again. Before each reload, you can make sure the configuration has no syntax errors by running nginx -t.
Contrary to the other answers, you can run certbot in nginx mode; just read the docs for it.
All you have to do is install the additional certbot nginx plugin and follow the certbot docs.
That plugin will even hot-reload the cached certificates in nginx's RAM as soon as they get updated.
https://certbot.eff.org/instructions
Or go to the nginx docs instead: https://www.nginx.com/blog/using-free-ssltls-certificates-from-lets-encrypt-with-nginx/
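On Debian/Ubuntu that boils down to something like the following (the package name can differ on other distros):
sudo apt install python3-certbot-nginx
sudo certbot --nginx -d yourdomain.example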
You can use Docker for that (the certbot/certbot image on Docker Hub).
For example:
Create a certbot.sh script. To do that, run in your CLI:
touch certbot.sh && chmod +x ./certbot.sh
Then write the following into the file:
#!/usr/bin/env bash
docker run --rm -v /etc/letsencrypt:/etc/letsencrypt -v /var/lib/letsencrypt:/var/lib/letsencrypt certbot/certbot "$@"
and run like this:
./certbot.sh --webroot -w /var/www/example -d example.com -d www.example.com -w /var/www/thing -d thing.is -d m.thing.is
OR
./certbot.sh renew
And you can call this script from crontab for renewal:
0 0 1 * * /<PATH_TO_FILE>/certbot.sh renew

easy way to make an elasticsearch server read-only

It's really easy to just upload a bunch of JSON data to an Elasticsearch server to get a basic query API with lots of options.
I'd just like to know if there's an easy way to publish it all while preventing people from modifying it.
With the default settings, the server is open to receiving DELETE or PUT HTTP requests that would modify the data.
Is there some kind of setting to configure it to be read-only? Or should I configure some kind of HTTP proxy to achieve this?
(I'm an elasticsearch newbie)
If you want to expose the Elasticsearch API as read-only, I think the best way is to put Nginx in front of it, and deny all requests except GET. An example configuration looks like this:
# Run me with:
#
#     $ nginx -c path/to/this/file
#
# All requests except GET are denied.

worker_processes 1;
pid nginx.pid;

events {
    worker_connections 1024;
}

http {
    server {
        listen 8080;
        server_name search.example.com;

        error_log elasticsearch-errors.log;
        access_log elasticsearch.log;

        location / {
            if ($request_method !~ "GET") {
                return 403;
                break;
            }
            proxy_pass http://localhost:9200;
            proxy_redirect off;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
        }
    }
}
Then:
curl -i -X GET http://localhost:8080/_search -d '{"query":{"match_all":{}}}'
HTTP/1.1 200 OK
curl -i -X POST http://localhost:8080/test/test/1 -d '{"foo":"bar"}'
HTTP/1.1 403 Forbidden
curl -i -X DELETE http://localhost:8080/test/
HTTP/1.1 403 Forbidden
Note that a malicious user could still mess up your server, for instance by sending malformed script payloads that would make Elasticsearch get stuck, but for most purposes this approach is fine.
If you need more control over the proxying, you can either use a more complex Nginx configuration or write a dedicated proxy, e.g. in Ruby or Node.js.
See this example for a more complex Ruby-based proxy.
You can set a read-only flag on your index. This does limit some operations, though, so you will need to see if that's acceptable.
curl -XPUT http://<ip-address>:9200/<index name>/_settings -d'
{
    "index": {
        "blocks": {
            "read_only": true
        }
    }
}'
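If you later need to write to the index again, clearing the flag goes through the same settings endpoint:
curl -XPUT http://<ip-address>:9200/<index name>/_settings -d'
{
    "index": {
        "blocks": {
            "read_only": false
        }
    }
}'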
As mentioned in one of the other answers, really you should have ES running in a trusted environment, where you can control access to it.
More information on index settings here : http://www.elasticsearch.org/guide/reference/api/admin-indices-update-settings/
I know it's an old topic; I encountered the same problem and put ES behind Nginx in order to make it read-only while still allowing Kibana to access it.
The only ES request that Kibana needs in my case is "url_public/_all/_search".
So I allowed it in my Nginx conf.
Here is my conf file:
server {
    listen port_es;
    server_name ip_es;

    rewrite ^/(.*) /$1 break;
    proxy_ignore_client_abort on;
    proxy_redirect url_es url_public;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;

    location ~ ^/(_all/_search) {
        limit_except GET POST OPTIONS {
            deny all;
        }
        proxy_pass url_es;
    }

    location / {
        limit_except GET {
            deny all;
        }
        proxy_pass url_es;
    }
}
So only GET requests are allowed, unless the request is _all/_search. It is simple to allow other requests if needed.
I use this elasticsearch plugin:
https://github.com/sscarduzio/elasticsearch-readonlyrest-plugin
It is very simple and easy to install and configure. The GitHub project page has a config example that shows how to limit requests to the HTTP GET method only, which will not change any data in Elasticsearch. If you want only whitelisted IPs (or none) to use the other methods (PUT/DELETE/etc.) that can change data, it has you covered as well.
Something like this goes into your elasticsearch config file (/etc/elasticsearch/elasticsearch.yml or equivalent), adapted from the GitHub page:
readonlyrest:
    enable: true
    response_if_req_forbidden: Sorry, your request is forbidden

    # Default policy is to forbid everything, let's define a whitelist
    access_control_rules:

    # from these IP addresses, accept any method, any URI, any HTTP body
    #- name: full access to internal servers
    #  type: allow
    #  hosts: [127.0.0.1, 10.0.0.10]

    # From external hosts, accept only GET and OPTION methods only if the HTTP request body is empty
    - name: restricted access to all other hosts
      type: allow
      methods: [OPTIONS,GET]
      maxBodyLength: 0
Elasticsearch is meant to be used in a trusted environment and by itself doesn't have any access control mechanism. So, the best way to deploy elasticsearch is with a web server in front of it that would be responsible for controlling access and type of the queries that can reach elasticsearch. Saying that, it's possible to limit access to elasticsearch by using elasticsearch-jetty plugin.
With either Elastic or Solr, it's not a good idea to depend on the search engine for your security. You should be using security in your container, or even putting the container behind something really bulletproof like Apache HTTPD, and then setting up the security to forbid the things you want to forbid.
If you have a public-facing ES instance behind nginx which is updated internally, these blocks should make it read-only and only allow the _search endpoints:
limit_except GET POST OPTIONS {
    allow 127.0.0.1;
    deny all;
}

if ($request_uri !~ .*search.*) {
    set $sc fail;
}
if ($remote_addr = 127.0.0.1) {
    set $sc pass;
}
if ($sc = fail) {
    return 404;
}
