How to easily switch between dev and prod environments - proxy

What is the best way to get dev and test browsers to resolve our production domain name to dev and test environments? Say our production domain is widgets.com. In the past, we've used internal DNS for devwidgets.com, testwidgets.com, demowidgets.com, etc., but this is proving to be a big pain. It seems better to have a hosts-file or proxy-server setup so each client can choose to resolve widgets.com to any pre-prod environment. Ideas? How have others solved this problem?

You can run different versions on different ports (easiest for internal and external setups) or on different CNAMEs (for external setups):
dev.widgets.com:81
dev.widgets.com:82
...
dev1.widgets.com
dev2.widgets.com
...
This means that the different environments can be configured centrally through the web server rather than having to manage lots of different host files.
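For example, with nginx as the central web server (Apache vhosts work the same way), each CNAME can be mapped to its backend port. A minimal sketch, with illustrative hostnames and ports:

server {
    listen 80;
    server_name dev1.widgets.com;
    location / {
        proxy_pass http://127.0.0.1:81;   # the dev1 instance
    }
}
server {
    listen 80;
    server_name dev2.widgets.com;
    location / {
        proxy_pass http://127.0.0.1:82;   # the dev2 instance
    }
}

Adding an environment then means adding one server block centrally instead of touching every client.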

We have solved it by using internal DNS, like you said. Each developer has his own environment, so I can go to www.ourdomain.com.branch2.environment10, where environment10 is my specific environment and branch2 refers to a specific checkout, in case I have multiple checkouts because I'm working on different projects simultaneously. Just the different environments may suffice for you.
In another situation I configured a different CNAME, dev.widgets.com, for reaching my development environment remotely. The disadvantage is that anyone can reach it, so you should password-protect it or use an IP filter.
I wouldn't recommend using hosts files. They are hard to maintain, and you can't reach the live environment from your development PC.
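For reference, a hosts-file override is just a line like this (the IP is made up):

# C:\Windows\System32\drivers\etc\hosts (or /etc/hosts on Unix)
10.0.0.42    widgets.com

While that line is active, widgets.com can only ever mean that one test box from that PC, which is exactly why the live environment becomes unreachable.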

Related

Docker and casual work/dev on virtual machine and IDE

I have a general question about good practices and, let's say, the way of working between Docker and an IDE.
Right now I am learning Docker and Docker Compose, and I must admit that I like the idea of containers! I've deployed my whole Spring Boot microservices architecture in containers, and everything is working really well!
The thing is that everywhere my properties declare a localhost address, I was forced to change localhost to the custom container name, for example localhost:8888 --> naming-server:8888. That is fine for running in containers, but obviously when I try to run the services from the IDE, it fails. I like working on, optimizing, and debugging microservices in the IDE, but I don't want to rebuild the image and rerun the whole docker-compose setup every time I make a tiny change.
What does it look like in real dev?
Regards!
In my day job there are at least four environments my code can run in: my desktop development environment, a developer-oriented container environment, and pre-production and production container environments. All four of these environments can have different values for things like host names. That means they must be configurable in some way.
If you've hard-coded localhost as a hostname in your application source code, it will not run in any environment other than your development system, and it needs to be changed to a configuration option.
From a pure-Docker point of view, making these configurable via environment variables is easiest (and Spring can set property values from environment variables). Spring also has the notion of a profile, which in principle matches the concept of having different settings for different environments, but injecting a whole profile configuration can be a little more complex at deployment time.
The other practice I've found helpful is to have the environment variable settings default to reasonable things for developers. The pre-production and production deployments are all heavily scripted, so there's a reasonably strong guarantee that they will have all of the correct environment variables set. If $PGHOST defaults to localhost, that's right for a non-Docker developer, and all of the container-based setups can set an appropriate value for their environment at deploy time.
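A minimal sketch of that pattern in Spring, assuming a Postgres-backed service (the property and database names here are examples, not your app's): application.properties can resolve a placeholder from the environment with a fallback:

# application.properties
# ${PGHOST:localhost} reads the PGHOST environment variable and falls
# back to localhost for a non-Docker developer
spring.datasource.url=jdbc:postgresql://${PGHOST:localhost}:5432/app

The container-based environments then just set PGHOST (for example, to the Compose service name) at deploy time, and nothing in the source changes.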
Even though our actual deployment system is based on containers (via Kubernetes), I do my day-to-day development in a mostly non-Docker environment. I can run an individual microservice by launching it from a shell prompt, possibly setting some environment variables, and services have unit tests that can run on just the checked-out source tree, without needing any Docker at all. A second step is to build an image and deploy it into the development environment, and our CI system runs integration tests against the images it builds.
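Concretely, running one service from a shell prompt might look like this (the variable name and the Maven wrapper are assumptions about your project; Spring's relaxed binding maps NAMING_SERVER_URL onto a naming.server.url property):

NAMING_SERVER_URL=http://localhost:8888 ./mvnw spring-boot:run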

How can I have separate APIs for staging and production environments on Heroku?

I was just checking on how pipelines work in Heroku. I want the staging and production apps to be the same except that they should access different API endpoints.
How could I achieve that?
Heroku encourages getting configuration from the environment:
A single app always runs in multiple environments, including at least on your development machine and in production on Heroku. An open-source app might be deployed to hundreds of different environments.
Although these environments might all run the same code, they usually have environment-specific configurations. For example, an app’s staging and production environments might use different Amazon S3 buckets, meaning they also need different credentials for those buckets.
An app’s environment-specific configuration should be stored in environment variables (not in the app’s source code). This lets you modify each environment’s configuration in isolation, and prevents secure credentials from being stored in version control. Learn more about storing config in the environment.
On a traditional host or when working locally, you often set environment variables in your .bashrc file. On Heroku, you use config vars.
In this instance you might use an environment variable called API_BASE that gets set to the base URL of your staging API on your staging instance and to the base URL of your production API in production.
Exactly how you read those values depends on the technology you're using, but if you look for "environment variables" in your language's documentation you should be able to get started.
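As a sketch with the Heroku CLI (the app names and URLs are made up), you would set the config var once per app in the pipeline:

heroku config:set API_BASE=https://api-staging.example.com --app myapp-staging
heroku config:set API_BASE=https://api.example.com --app myapp-production

Because config vars live on the app rather than in the build, the same slug can be promoted from staging to production unchanged.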

Docker based development environment for multiple projects

I am wondering about the best architecture for a Docker based development environment with a LAMP stack.
Requirements
Working on multiple projects in parallel
Most projects are using the same LAMP stack (for simplicity let's assume that all projects are sharing the same stack and configuration)
The host is running Windows + VBox + Docker Toolbox (i.e. Boot2Docker)
Current architecture
One shared development environment running multiple containers (web, db, persistent data..) with vhosts configuration per site
Using scripts / a Jenkins container to set up new projects (new DB, vhost configuration..)
Running custom Samba container to share the data with the Windows machine (IDE is running on Windows)
As always there are pros and cons: while this is quite easy to maintain, we are unable to deploy a specific project with a dedicated docker-compose.yml file, and we also miss out on the benefits of "microservices", like replacing the PHP / MySQL version for a specific site.
The question is how can we use a per project docker-compose.yml file, but still have multiple projects running simultaneously (since all projects are using port 80).
Would it be better (and is it even possible?) to use random ports per project and run a proxy layer on top of those web containers?
Any other options or common design patterns for this use case?
Thanks.
The short answer is yes. Docker by default assigns random host ports if no port is specified. For the mapping I would use: https://github.com/jwilder/nginx-proxy
You can have something like project1.yml, project2.yml, ..., and starting the containers would be something like:
docker-compose -f project1.yml up
However, I'm not sure why you would try to do that. You could use something like Rancher and split your development host into multiple small development environments.
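A rough sketch of the nginx-proxy approach (the hostname is illustrative): start the proxy once on port 80, then give each project's web container a VIRTUAL_HOST variable that nginx-proxy routes on:

docker run -d -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy

# project1.yml -- each project declares only its hostname
web:
  image: php:5.6-apache
  environment:
    - VIRTUAL_HOST=project1.dev.local

Each project keeps its own compose file and random host ports; only the proxy binds port 80.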

How do you replicate rules between SonarQube servers?

We currently have two SonarQube servers (v4.5.1) running on two separate Windows 2012 servers, each with its own MS SQL database server. One is our Development server and the other is our Production server. The idea is that we test all rule changes on the Development server first; once we are happy that they are correct, we port them to the Production server.
When we first set up the two servers we simply took a backup of the Development server database and restored it on the Production server. At this point both systems were in sync.
We have recently made some modifications to the Development rules set, however when we tried the same approach to move these to the production server it did not work.
The production box seemed to remember the previous rule set. There seems to be a cache of the previous rules that we can't work out how to clear.
Before restarting SonarQube with the new DB in place, we deleted the temp folder, as that appears to keep a cached H2 database, but that did not solve the issue. We also tried starting it up and using the /setup URL, but this did not appear to work either.
Is there a way to completely reset the SonarQube server prior to restoring the database so that it has no knowledge of the previous rule set?
Alternatively is there a better way to export and re-import the entire rule set between two servers?
We looked at exporting the rule profile, but this did not appear to contain the full detail of the rules.
Thanks
Pete
For the moment, it is not possible to fully synchronize rules and quality profiles between 2 servers because of SONAR-5366. You can watch and vote for this ticket.
Concerning the cache that you seem to have, this is probably the Elasticsearch (E/S) indexes, which are located in the <install_dir>/data/es folder. What you can do is:
stop your server
fully delete the <install_dir>/data folder
restart the server: your rules should be in sync with the DB
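On a Windows install run as a service, that might look like the following (a sketch assuming the standard wrapper scripts and a default layout; adjust the paths to your install directory):

cd C:\sonarqube
bin\windows-x86-64\StopNTService.bat
rmdir /s /q data
REM restore the Development database backup at this point
bin\windows-x86-64\StartNTService.bat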

How to pull from a fellow developers repository using Mercurial

I'm trying to set up Mercurial on developer workstations so that they can pull from each other.
I don't want to push.
I know each workstation needs to run
hg serve
The format of the pull command is
hg pull ssh:[SOURCE]
What I'm having problems with is defining SOURCE, along with any permission issues.
I believe that SOURCE ends with the name of the repository being pulled from. What I don't know is how to form the host name. Can I use IPs instead?
What permission issues do I need to look out for?
SOURCE == //<hostname>/<repository>
All developers or test stations are running Windows 7 or Windows XP.
I have searched for this answer and have come up empty. I did look at all the questions suggested by SO as I typed this question.
This is probably a simple Windows concept, but I'm not an expert in simple Windows concepts. :)
The hg help urls output has these examples:
Valid URLs are of the form:
local/filesystem/path[#revision]
file://local/filesystem/path[#revision]
http://[user[:pass]@]host[:port]/[path][#revision]
https://[user[:pass]@]host[:port]/[path][#revision]
ssh://[user@]host[:port]/[path][#revision]
and a lot of info about what can be used for each component (host can be anything that your DNS resolver resolves, or an IPv4 or IPv6 address). I believe that on Windows systems UNC paths count.
Also, you appear to have some confusion about when you can use ssh. You can use ssh:// URLs to access repositories on the file systems of machines that are running ssh servers. If they're running hg serve, then you can access them using the http:// URL that hg serve prints when you start it. hg serve is usually used for quick "here, grab this from me and see if you can tell me what I'm doing wrong" situations rather than for all-the-time sharing.
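Putting that together (the IP address and path are made up): on the workstation that owns the repository, serve it over HTTP, then pull from the other machine:

# on the serving workstation (e.g. 192.168.1.20)
cd C:\work\myrepo
hg serve          # listens on port 8000 by default

# on the pulling workstation
hg pull http://192.168.1.20:8000/

This also answers the IP question: the host part can be a plain IPv4 address, so no DNS setup is required.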
