How can I add basic authentication to the MailHog service in DDEV-Local - ddev

I have an unusual setup where I want to provide some authentication on the MailHog feature of DDEV-Local. How can I add basic authentication?

Since it turns out that MailHog supports basic auth and DDEV-Local provides the ability to add extra files into the container at build time, you can do this (updated for DDEV v1.19.0):
Add these four files to .ddev/web-build in your DDEV-Local project:
mailhog.conf:
[program:mailhog]
command=/usr/local/bin/mailhog -auth-file=/etc/mailhog-auth.txt
autorestart=true
startretries=10
mailhog-auth.txt:
test:$2a$04$qxRo.ftFoNep7ld/5jfKtuBTnGqff/fZVyj53mUC5sVf9dtDLAi/S
Dockerfile:
ARG BASE_IMAGE
FROM $BASE_IMAGE
ADD mailhog-auth.txt /etc
ADD mailhog.conf /etc/supervisor/conf.d
ADD healthcheck.sh /
healthcheck.sh: (See gist - it's a little long to quote here.)
Now you can ddev start and the MailHog auth will be "test":"test". The MailHog Auth page gives more detail about how to generate a better password, which just goes into mailhog-auth.txt.
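If you want credentials other than test:test, MailHog itself can generate the bcrypt hash (it has a bcrypt subcommand, per its Auth documentation). A rough sketch, assuming the binary path used in mailhog.conf above:
# open a shell inside the DDEV web container
ddev ssh
# generate a bcrypt hash for your new password (subcommand per MailHog's Auth docs)
/usr/local/bin/mailhog bcrypt 'my-new-password'
# put the output into mailhog-auth.txt as a "username:<hash>" line, then exit and restart the project
exit
ddev restart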

As a follow-up to this issue: after an upgrade to DDEV v1.19.0, a project with basic auth configured for MailHog using the instructions listed here would no longer spin up. It was only after deleting the entire DDEV setup for the project and re-implementing one customization at a time that we isolated this as the problem. Needless to say, it was an afternoon wasted.
I'm not sure what changed with the web image for v1.19.0, but this solution, which was working fine in DDEV v1.18.2, no longer works.
Leaving this here for anyone else who may be wrestling with the same issue.

Related

Laravel setup on HostGator VPS

I want to deploy my Laravel app on a VPS hosting plan.
I have WHM, but I have no experience deploying my app and configuring the server.
I don't have a domain, so I want to test my app using an IP address (like on DigitalOcean).
Any help?
Edit:
I've completed these steps in my WHM:
Have SSH access to the VPS
Have a sudo user and set up some kind of firewall (for example ufw)
Install required software (nginx, MySQL, PHP, Composer, npm) and additional PHP modules if necessary.
I've created a cPanel account and I've completed these steps:
Create a database
Checkout your application using VCS like Git
Configure your .env file.
Install your composer packages, run npm, or anything you would like to do
The cPanel account provides an IP-based address that looks like http://xxx.xxx.x.xx/~cpanel-account-name/.
I can access the website, however all images are broken and even the Laravel routes are not found (404). I know the issue is the ~cpanel-account-name/ at the end of the URL.
But how can I fix it?
Since this is quite a broad topic that consists of multiple questions, perhaps you could elaborate on steps you have already taken or the step you are stuck at / need help with?
In short, you need to do the following:
Have SSH access to the VPS
Have a sudo user and set up some kind of firewall (for example ufw)
Install required software (nginx, MySQL, PHP, Composer, npm) and additional PHP modules if necessary.
Create a database
Checkout your application using VCS like Git
Configure your .env file.
Install your composer packages, run npm or anything you would like to do
Set up nginx
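A very rough sketch of what the database, checkout, .env and dependency steps above could look like on the server; every name, path and credential here is just a placeholder:
# on the VPS, as your sudo user
mysql -u root -p -e "CREATE DATABASE myapp"
cd /var/www
git clone https://github.com/your-user/my-app.git
cd my-app
cp .env.example .env          # then edit the DB credentials, APP_URL, etc.
composer install --no-dev
php artisan key:generate
php artisan migrate
npm install && npm run prod   # or npm run build, depending on your asset setup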
If this seems daunting, I would advise tackling it one step at a time and researching each step along the way. This might be challenging and time-consuming, but it will be very rewarding!
Alternatively, a paid solution like Laravel Forge can help you take care of server management.

How to setup multiple visual studio solutions working together using docker compose (with debugging)

Lots of questions seem to have been asked about getting multiple projects inside a single solution working with docker compose but none that address multiple solutions.
To set the scene, we have multiple .NET Core APIs each as a separate VS 2019 solution. They all need to be able to use (as a minimum) the same RabbitMQ container running locally as this deals with all of the communication between the services.
I have been able to get this setup working for a single solution by:
Adding 'Container orchestration support' for an API project.
This created a new docker-compose project in the solution I did it for.
Updating the docker-compose.yml to include both a RabbitMQ and a MongoDB image (see image below - sorry I couldn't get it to paste correctly as text/code):
Now when I launch, new RabbitMQ and MongoDB containers are created.
I then did exactly the same thing with another solution and, unsurprisingly, it wasn't able to start because the RabbitMQ ports were already in use (i.e. it tried to create another new RabbitMQ container).
I kind of expected this but don't know the best/right way to properly configure this and any help or advice would be greatly appreciated.
I have been able to compose multiple services from multiple solutions by setting the value of context to the appropriate relative path. Using your docker-compose example and adding my-other-api-project you end up with something like:
services:
  my-api-project:
    <as you have it currently>
  my-other-api-project:
    image: ${DOCKER_REGISTRY-}my-other-api-project
    build:
      context: ../my-other-api-project/   # or whatever the relative path is to your other project
      dockerfile: my-other-api-project/Dockerfile
    ports:
      - <your port mappings>
    depends_on:
      - some-mongo
      - some-rabbit
  some-rabbit:
    <as you have it currently>
  some-mongo:
    <as you have it currently>
So I thought I would answer my own question as I think I eventually found a good (not perfect) solution. I did the following steps:
Created a custom docker network.
Created a single docker-compose.yml for my RabbitMQ, SQL Server and MongoDB containers (using my custom network).
Set up docker-compose container orchestration support for each service (right click on the API project and choose add container orchestration).
The above step creates the docker-compose project in the solution with docker-compose.yml and docker-compose.override.yml.
I then edited the docker-compose.yml so that the containers use my custom docker network and also explicitly specify the port numbers (so they're consistently the same).
I edited the docker-compose.override.yml environment variables so that my connection strings point to the relevant container names on my docker network (i.e. RabbitMQ, SQL Server and MongoDB). No more worrying about IPs, and when I set the solution to start up using the docker-compose project in debug mode, my debug containers can access those services.
Now I can close the VS solution, go to the command line, navigate to the solution folder and run 'docker-compose up' to start the containers.
I set up each VS solution as per steps 3-7 and can start up any/all services locally without the need to open VS anymore (provided I don't need to debug).
When I need to debug/change a service, I stop the specific container (i.e. 'docker container stop containerId'), then open the solution in VS and start it in debug mode, make changes, etc.
If I pull down changes made by anyone else, I rebuild the relevant container on the command line by going to the solution folder and running 'docker-compose build'.
As a bonus, I wrote a PowerShell script to start all of my containers using each docker-compose file, as well as one to build them all, so when I turn on my laptop I simply run that and my full dev environment of 10 services is up and running.
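For the first two steps (the custom network and the shared infrastructure compose file), a rough sketch of what that looks like from the command line; the network name is just an example:
# one-off: create the shared network that all services will join
docker network create my-dev-network

# from the folder holding the shared infrastructure docker-compose.yml
docker-compose up -d      # starts the RabbitMQ, SQL Server and MongoDB containers
docker-compose ps         # confirm they are running and which ports they expose
Each solution's docker-compose.yml then has to join that same network (for example by declaring it as an external network) so the API containers can reach the RabbitMQ, SQL Server and MongoDB containers by name.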
For the most part this works great but with some caveats:
I use https and dev-certs, and sometimes things don't play well: Kestrel throws errors because it expects the certificate to have a certain name and to be trusted, so I have to clean the certs and re-trust them. I'm working on improving this, but you could always not use https locally in dev.
If you're using your own NuGet server like me, you'll need to add a NuGet.config file and copy it in as part of your Dockerfiles.

Secure first time setup of nextcloud

I want to set up Nextcloud on my personal VPS. To do the first-time setup, I have to access the webserver via my browser, and the docs say I should do it over http://localhost/nextcloud/ (Nextcloud Installation Wizard, right at the beginning), but this does not work for me because the VPS is not my local machine. So I have to open up the setup website to the public web, and everybody who knew the IP of my VPS could do the first-time setup.
I read tutorials for other web applications (for example the Confluence Installation Documentation, point 4.2) where this is the common way of setting things up the first time.
Is there another, more secure way to do this in general when setting up a web app for the first time? Firewall? VPN? How do you guys do it?
Thank you for your help.
Yes, this is the common way to set it up. In the unlikely case that somebody else runs the setup in the short time between placing the files and running the installer, you could also remove config/config.php and do the setup again.
If you don't want to do the web-based setup, you can use the CLI tool to run the installation. It either asks for the setup parameters interactively, or all of them can be provided via CLI options.
See https://docs.nextcloud.com/server/12/admin_manual/installation/command_line_installation.html for more details on the CLI installation method.
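As a rough sketch of that CLI route (the exact options are in the linked documentation; the database and admin credentials below are placeholders), run from the Nextcloud folder on the VPS:
cd /var/www/nextcloud
# run occ as the web server user; adjust the user and paths for your distribution
sudo -u www-data php occ maintenance:install \
  --database "mysql" --database-name "nextcloud" \
  --database-user "nextcloud" --database-pass "db-password" \
  --admin-user "admin" --admin-pass "admin-password"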

What's the simplest mac docker setup for archival purposes?

I've got a bunch of old sites (wordpress/mysql/php, ruby 1.x/sqlite, etc) on my computer that I'd like to ensure easy access to in the future without having to muck up my environment.
Docker seems like the perfect candidate for this task, but I have tried to wrap my head around it many times and have realized it's time to ask for professional help (which brings me here).
I have wasted so much time messing with this thing, and have been slightly overwhelmed by its depth. At first (in the case of the wp/mysql problem), I was trying to create two different images (a wordpress/php one and a mysql one) and link them together, which appeals to my programmer mentality of Doing It The Right Way™.
But my UX mentality has won out and I'm abandoning the Right Way™ in favor of getting this thing working in the simplest way so that future me (who will forget all this docker knowledge as soon as I complete this task) might be able to figure it out again.
So here's what I want: a Docker setup that I can put in a folder along with an exported MySQL database and a WordPress site, so when I start that bad boy up, boom, I'm browsing some old site that made a lot of sense in 2005 and makes no sense now.
Know what I'm saying? What's the simplest, future-proof way I can get this done? Is it possible to keep the data/files outside of the containers in the case I wanted to edit them? I'm using Docker For Mac.
It sounds like you want something really generic that's going to work for all the sites you have. That might be tricky because Docker is inherently not generic. You don't typically have a docker image with all the tools (PHP, Ruby, etc) to run everything. You typically build only what you need into an image.
With that being said, it might still be possible to do something like what you're asking for, and I think I can get you pointed in the right direction. The official Wordpress Docker image should be able to run your Wordpress sites. You were actually on the right track with a separate MySQL image, and this is easy to achieve with docker-compose.
Your docker-compose.yml file would look something like this:
version: '3'
services:
  wordpress:
    image: wordpress:4-php5.6-apache
    ports:
      - "8080:80"
    volumes:
      - ./:/var/www/html
    environment:
      WORDPRESS_DB_HOST: mysql
      WORDPRESS_DB_USER: root
      WORDPRESS_DB_PASSWORD: password
      WORDPRESS_DB_NAME: wordpress
  mysql:
    image: mariadb:10.1
    ports:
      - "3306:3306"
    environment:
      MYSQL_ROOT_PASSWORD: password
      MYSQL_DATABASE: wordpress
A brief summary of what that does:
Creates 2 Docker containers, one for Wordpress and one for MySQL.
Forwards requests from 127.0.0.1:8080 to the Wordpress container on port 80.
Mounts ./ to /var/www/html in the wordpress container. Meaning it will serve Wordpress from your current directory.
Configures environment variables so it knows how to connect to the database.
Prepares a MySQL Docker container.
Forwards 127.0.0.1:3306 to 3306 on the container. So you can do mysql -u root -ppassword -h 127.0.0.1.
Now, if you create a file called docker-compose.yml similar to the one above in the Wordpress directory you want to serve, you can simply run docker-compose up in that directory and you'll be running Wordpress in a container. If you need to restore a database dump, you can cat dump.sql | mysql -u root -ppassword -h 127.0.0.1 wordpress (the database name from the compose file above). And you can access the site at localhost:8080. By putting a docker-compose.yml file like this in your projects, it would be pretty quick to spin up a container for them.
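Put together, the day-to-day workflow could look roughly like this (credentials, database name and port are the ones from the compose file above; the folder path is a placeholder):
cd ~/sites/old-wordpress-site        # the folder containing docker-compose.yml and the WordPress files
docker-compose up -d                 # start the wordpress and mysql containers in the background
cat dump.sql | mysql -u root -ppassword -h 127.0.0.1 wordpress   # restore the old database once
open http://localhost:8080           # browse the archived site
docker-compose down                  # stop everything when you're done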
Now, because the scope of a StackOverflow answer like this is pretty limited, you'll probably run into a couple snags here and there that I didn't cover in this answer. But if you're willing to invest a little time learning more about Docker (it's really not that complex), this could be a great solution for you.
Docker for Mac and lots of PHP files (WordPress, Symfony) as mounted resources don't like each other, unless you resort to fancy tricks with docker-sync plus a few other workarounds (databases may be slow as well).
My suggestion, if you want an easy fire-and-forget setup, is to use Vagrant to set up one virtual machine which will serve all the sites. You can use puphpet.com to easily click your machine together. There is even the possibility to mark SQL files for import; looking at the generated config.yml you will see how to add additional hosts in the future, and then you just run vagrant provision to have them set up as well.
Problems will occur if you require some old PHP version or different environments/setups per site. Unfortunately, Docker for Mac isn't a foolproof fire-and-forget solution right now.

laravel 5.2 run composer update at hosting

I developed my website and published it on my host. While editing it on the host, I added a package to my composer file so I can use it.
My question:
How do I run composer update on the host, and how do I get a CMD screen?
Or
How do I add a package on the host?
Please, can anyone help me?
This highly depends on your host. The usual "cheap" hosters are shared hosters, which might or might not give you access to the command line. Most probably you do not have root access, and that makes a lot of things really hard.
One thing you can always do is run the commands directly from your code. You could put this code behind a specific route, for example, and remove it after the update.
exec('composer require vendor/my-package');
// or
shell_exec('composer require vendor/my-package');
You should look for SSH / shell access in your hoster's FAQ. However, I highly recommend hosting Laravel applications on your own server, or at least a virtual server, so that you have full access to everything.
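If your hoster does provide SSH access, you can run Composer there yourself instead of the exec() workaround. A rough sketch (host, path and package name are placeholders; some shared hosts only ship composer.phar rather than a global composer):
ssh username@your-host.example.com
cd ~/public_html/your-laravel-app
composer require vendor/my-package
# or, if only the phar is available:
php composer.phar require vendor/my-package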
