DDEV - Configure multiple environments - ddev

Is there a way to have multiple DDEV configurations for the same project? For example, we need to cover the case when we have several servers:
production with Apache + PHP 7.3 + Composer 1;
staging server with Apache + PHP 7.3, but a different set of domains;
development with Nginx + PHP 7.4 + Elasticsearch + Redis + Composer 2, where we're working on the system upgrade.
The dev team needs to emulate at least the development and staging environments. Some features/hotfixes for production are under development and should be released before the big upgrade. This is a typical situation for, for example, Magento 2 projects with heavy customizations.
Is there a way to have multiple different environments like .ddev-prod, .ddev-dev etc., and somehow pass env name to ddev or configure it?
What comes to my mind is that we can create multiple configurations and add some information to Readme.md like:
"To start dev env: copy .ddev-dev to .ddev and run ddev start".
From your experience, what is the best way to maintain multiple environments?
Regards,
Max

The first thing I'd try would just be three different DDEV projects with different configuration (but the same code and database). That way you could easily see any differences. That would certainly solve the problem with mimicking the development server.
But it actually sounds like you need 3 branches of your code. Each branch could have its own DDEV configuration, and there you'd go. Use a different project name for each.
Another approach would be to have a config.prod.yaml, config.staging.yaml and config.dev.yaml and copy just the one you want into the .ddev directory when you need it. But I think you'll be much happier with 3 branches of your code and 3 different project names.
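To make the copy-the-config approach concrete, a dev-variant DDEV config might look roughly like the sketch below (the project name, docroot, and hostname are made up; Elasticsearch and Redis would come from extra docker-compose.*.yaml files in .ddev/, which are not shown). The prod variant would differ mainly in webserver_type: apache-fpm, php_version: "7.3", and its hostnames.
# config.dev.yaml -- illustrative sketch; copy over .ddev/config.yaml when working on the dev environment
name: myproject-dev
type: magento2
docroot: pub
php_version: "7.4"
webserver_type: nginx-fpm
additional_hostnames:
  - dev.myproject
After copying the variant you want, run ddev start as usual; with the three-branch/three-project approach instead, each branch simply keeps its own .ddev/config.yaml.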

Related

Using Helm For Deploying Spring Boot Microservice to K8s

We have built a few microservices (MS) which have been deployed to our company's K8s clusters.
Currently, each of our MSs is built as a Docker image and then deployed manually using the following steps, and it works fine:
Create Configmap
Installing a Service.yaml
Installing a Deployment.yaml
Installing an Ingress.yaml
I'm now looking at Helm v3 to simplify and encapsulate these deployments. I've read a lot of the Helm v3 documentation, but I still haven't found the answer to some simple questions and I hope to get an answer here before absorbing the entire doc along with Go and SPRIG and then finding out it won't fit our needs.
Our Spring MS has 5 separate application.properties files that are specific to each of our 5 environments. These properties files are simple multi-line key=value format with some comments preceded by #.
# environment based values
key1=value1
key2=value2
Using helm create, I created a chart called ./deploy in the root directory, which auto-created ./templates and a values.yaml.
The problem is that I need to access the application.properties files outside of the Chart's ./deploy directory.
From helm, I'd like to reference these 2 files from within my configmap.yaml's Data: section.
./src/main/resource/dev/application.properties
./src/main/resources/logback.xml
And I want to keep these files in their current format, not rewrite them to JSON/YAML format.
Does Helm v3 allow this?
Putting this as an answer as there's not enough space in the comments!
Check the 12-factor app link I shared above, in particular the section on configuration. The explanation there is not great, but the idea behind it is to build one container and deploy that same container in any environment without having to modify it, plus to have the ability to change the configuration without creating a new release (the latter isn't possible if the config is baked into the container). This allows you, for example, to change a DB connection pool size (or any other config parameter) without a release. It's also good from a security point of view, as you might not want the container running in your lower environments (dev/test/whatnot) to have production configuration (passwords, API keys, etc.). This approach is similar to the Continuous Delivery principle of build once, deploy anywhere.
I assume that when you run the app locally you only need access to one set of configuration, so you can keep that in a separate file (e.g. application.dev.properties) and put the parameters that change between environments in Helm environment variables. I know you mentioned you don't want to do this, but it is considered good practice nowadays (it might be considered otherwise in the future...).
I also think it's important to be pragmatic: if in your case you don't feel the need to have the configuration outside of the container, then don't do it, and the suggestion I gave of changing a command-line parameter to pick the config file will probably work well. At the same time, keep the 12-factor approach in mind in case you find out you do need it in the future.
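On the specific question of embedding the properties files as-is: Helm can inline a plain-text file into a ConfigMap with .Files.Get, without converting it to YAML, but .Files can only read files that live inside the chart directory, so the properties file would need to be copied (or packaged) under ./deploy first. A minimal sketch, assuming the file has been copied to deploy/config/application.properties:
# deploy/templates/configmap.yaml -- illustrative only, not the asker's actual chart
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-app-config
data:
  application.properties: |-
{{ .Files.Get "config/application.properties" | indent 4 }}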

sqitch: deploying changes across multiple environments

In looking at the sqitch docs, there’s a situation I don’t immediately understand how to deal with.
Like probably many organizations, we progress changes through several environments before they reach production. In our situation, we have a different DBA user on a different Oracle server for each environment, each with its own credentials.
As I understand it, sqitch uses database tables to track what changes have been applied to a server. Maybe I’m dumb, but it just doesn’t jump out at me how sqitch can tell me if a change has been applied to a UAT server, but not yet to a production server.
So basically, I’d like to organize a repository to move changes from one DB environment to the next. Might this be what “sqitch target” and plan files are for? Are there examples I can look at?
If I were you, I would create a centralized DB with DB links pointing to each database. After that, I would create a UNION of all the repositories and a view (with the PIVOT function) to see the deployment path of each patch.
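A rough sketch of that idea, assuming Oracle DB links named UAT_LINK and PROD_LINK and the default sqitch registry schema (all of these names are illustrative):
-- changes deployed to UAT but not yet to PROD
SELECT change_id, change
  FROM sqitch.changes@UAT_LINK
MINUS
SELECT change_id, change
  FROM sqitch.changes@PROD_LINK;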
How you deploy to multiple environments depends on how you're running your sqitch deploy command.
You use the sqitch.conf to declare various targets.
eg.
[core]
engine = oracle
top_dir = SQL
[engine "snowflake"]
reworked_dir = SQL/rework
[target "DEV"]
uri = "db:oracle:DEV_DB"
[target "QA"]
uri = "db:oracle:QA_DB"
[target "PROD"]
uri = "db:oracle:PROD_DB"
With that sqitch.conf setup you can now run deploy to target the required environments.
eg.
sqitch deploy --target DEV
sqitch deploy --target QA
sqitch deploy --target PROD
You won't be able to compare deployments from one environment to another unfortunately.
You can use the sqitch check --target <xxx> command to check for divergences between the planned and deployed changes as stated here: https://sqitch.org/docs/manual/sqitch-check/.
However, I've found this not to work properly at times. I haven't been able to determine the exact cause yet, but you're welcome to run the command and check.

Docker based development environment for multiple projects

I am wondering about the best architecture for a Docker based development environment with a LAMP stack.
Requirements
Working on multiple projects in parallel
Most projects are using the same LAMP stack (for simplicity let's assume that all projects are sharing the same stack and configuration)
The host is running Windows + VBox + Docker Toolbox (i.e. Boot2Docker)
Current architecture
One shared development environment running multiple containers (web, db, persistent data..) with vhosts configuration per site
Using scripts / a Jenkins container to set up new projects (new DB, vhost configuration..)
Running custom Samba container to share the data with the Windows machine (IDE is running on Windows)
As always there are pros and cons: while this is quite easy to maintain, we are unable to deploy a specific project with a dedicated docker-compose.yml file, and we also miss out on the benefits of "micro services", like swapping the PHP / MySQL version for a specific site.
The question is how can we use a per project docker-compose.yml file, but still have multiple projects running simultaneously (since all projects are using port 80).
Will it be better (and is it even possible?) to use random ports per project and run a proxy layer on top of those web containers?
Any other options or common design patterns for this use case?
Thanks.
The short answer is yes. Docker by default assigns random ports if no port is specified. For the mapping I would use: https://github.com/jwilder/nginx-proxy
You can have something like project1.yml, project2.yml, ... and starting the containers would be something like:
docker-compose -f project1.yml up
However, I'm not sure why you would want to do it that way. You could use something like Rancher and split your development host into multiple small development environments.
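To sketch how the nginx-proxy approach could look, a per-project compose file might be something like the following (the image names and the project1.local hostname are assumptions, and the proxy and project containers must share a Docker network so the proxy can reach them). The proxy owns port 80 once for all projects and routes requests by the VIRTUAL_HOST environment variable:
# project1.yml -- illustrative sketch
version: "2"
services:
  web:
    image: php:7.4-apache
    environment:
      - VIRTUAL_HOST=project1.local
    volumes:
      - ./project1:/var/www/html
  db:
    image: mysql:5.7
    environment:
      - MYSQL_ROOT_PASSWORD=secret
# the proxy is started separately, once, for all projects:
# docker run -d -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy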

How to create a Docker image/Dockerfile of a Laravel application from an existing Docker environment for Laravel

I have a dockervel environment (a Docker container setup for Laravel), and I really struggled to make it work on my machine.
The way I used the dockervel image is described here http://www.spiralout.eu/2015/12/dockervel-laravel-development.html
I have developed an application in this environment, including Behat for BDD and PHPUnit testing, and now I have to make an image from it and explain how to use it. I am confused about how to create the Dockerfile.
Any help is appreciated.
This is NOT the answer to your question, but I had a bit of a struggle with dockervel as well and thought of simplifying the Docker implementation for Laravel projects.
The result is https://github.com/purinda/docker-laravel
This gives you the ability to easily run Laravel CLI commands such as artisan and to run unit tests through phpunit in the Docker environment, using a multi-container setup.
First of all, dockervel is not a single image. It consists of several images orchestrated together by docker-compose (see docker-compose.yml). The reason for that is that you don't need all the parts every time (e.g. you don't load nodejs when all you want is to work with artisan). You can also swap parts (e.g. you can replace MySQL with Postgres).
If you want to share your project, the easy way is to share the entire dockervel folder (please omit the node_modules folder, which you can recreate with dcomposer install, as well as the database folders).
If you really want to make it a single Dockerfile (not such a good idea), you would have to combine each Dockerfile into one, and you would end up with a huge and inflexible container.
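For reference, a multi-container layout in the spirit of what dockervel does, though not a copy of its actual docker-compose.yml, might look roughly like this (image versions and paths are illustrative):
# illustrative docker-compose.yml for a Laravel project, not dockervel's actual file
version: "2"
services:
  web:
    image: nginx:alpine
    ports:
      - "80:80"
    volumes:
      - ./:/var/www
      - ./nginx.conf:/etc/nginx/conf.d/default.conf
  php:
    image: php:7.4-fpm
    volumes:
      - ./:/var/www
  mysql:
    image: mysql:5.7
    environment:
      - MYSQL_ROOT_PASSWORD=secret
  node:
    image: node:lts
    working_dir: /var/www
    volumes:
      - ./:/var/www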

How to easily switch between dev and prod environments

What is the best way to get dev and test browsers to resolve our production domain name to dev and test environments? Say our production domain is widgets.com. In the past, we've used internal DNS for devwidgets.com, testwidgets.com, demowidgets.com, etc. But this is proving to be a big pain. It seems better to have a hosts file or proxy server setup so each client can choose to resolve widgets.com to each pre-prod environment. Ideas? How have others solved this problem?
You can run different versions on different ports (easiest for internal and external setup) or on different cnames (for external setup):
dev.widgets.com:81
dev.widgets.com:82
...
dev1.widgets.com
dev2.widgets.com
...
This means that the different environments can be configured centrally through the web server rather than having to manage lots of different host files.
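As a rough example of managing this centrally on the web server (assuming nginx; the hostnames and backend ports are made up), each environment simply gets its own server block:
# one server block per environment, all managed in one place
server {
    listen 80;
    server_name dev.widgets.com;
    location / {
        proxy_pass http://127.0.0.1:8081;   # dev instance
    }
}
server {
    listen 80;
    server_name test.widgets.com;
    location / {
        proxy_pass http://127.0.0.1:8082;   # test instance
    }
}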
We have solved it by using internal DNS, like you said. Each developer has their own environment, so I can go to www.ordomain.com.branch2.environment10, where environment10 is my specific environment and branch2 refers to a specific checkout, in case I have multiple checkouts because I'm working on different projects simultaneously. Just the different environments may suffice for you.
In another situation I've configured a different CNAME, using dev.widgets.com to reach my development environment remotely. The disadvantage is that anyone can reach it, so you should password-protect it or use an IP filter.
I wouldn't recommend using hosts files. They are hard to maintain, and you can't reach the live environment from your development PC.
