We have three environments running in Jelastic in a standard OTAP setup: test, acceptance, production.
Every Tomcat in these environments has a fixed IP address.
What I would like to do is swap the addresses of production and acceptance, so that after a successful test on acceptance we can swap acc and prod.
Is this possible? If so, how?
At the moment, the way to achieve this is to use a proxy (an NGINX load balancer) and manually adjust which Tomcat it points to according to your needs.
The load balancer will hold your public IP, so that IP will not change.
Unfortunately you cannot currently create an environment containing only the load balancer, so you will need to place it within an environment that is running all the time.
UPDATE:
It's now possible to move a public IP between nodes (and between environments) using the Jelastic API or CLI tool. The command is ~/jelastic/environment/control/swapextips (the necessary parameters are listed in the help output).
The API method is also in the same location if you prefer to use your own API client instead.
See http://blog.layershift.com/php-7-jelastic-paas/#portable-ip for more details.
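A rough sketch of how that looks with the CLI tool (the parameter names in the second command are placeholders for illustration, not confirmed flags; the help output is authoritative):

    # List the required parameters for the swap command
    ~/jelastic/environment/control/swapextips --help

    # Hypothetical invocation: swap the external IPs of the acc and prod nodes
    # (--sourceEnvName, --targetEnvName etc. are assumed names, check the help output)
    ~/jelastic/environment/control/swapextips --sourceEnvName acc --sourceNodeId 1234 --targetEnvName prod --targetNodeId 5678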
I want to set up Nextcloud on my personal VPS. For the first-time setup, I have to access the web server via my browser, and the instructions say to do it via http://localhost/nextcloud/ (Nextcloud Installation Wizard, right at the beginning). But this does not work for me because the VPS is not my local machine. So I would have to expose the setup page to the public web, and anybody who knew the IP of my VPS could do the first-time setup.
I have read tutorials for other web applications (for example Confluence; see the Confluence Installation Documentation, point 4.2) where this is the common way of setting things up the first time.
Is there another, more secure way to do the first-time setup of a web app in general? A firewall? A VPN? How do you do it?
Thank you for your help
Yes - this is the common way to set it up. In the unlikely case that somebody else runs the installer in the short window between placing the files and running the setup yourself, you can simply remove config/config.php and do the setup again.
If you don't want to do the web-based setup, you can use the CLI tool to run the installation. It either asks interactively how to set up Nextcloud, or all the parameters can be provided via CLI options.
See https://docs.nextcloud.com/server/12/admin_manual/installation/command_line_installation.html for more details on the CLI installation method.
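For example, a non-interactive install might look like this (the database credentials, admin account, and web server user here are placeholders; adjust them and the path for your setup):

    cd /var/www/nextcloud
    sudo -u www-data php occ maintenance:install \
      --database "mysql" \
      --database-name "nextcloud" \
      --database-user "ncuser" \
      --database-pass "secret" \
      --admin-user "admin" \
      --admin-pass "changeme"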
I am wondering about the best architecture for a Docker-based development environment with a LAMP stack.
Requirements
Working on multiple projects in parallel
Most projects use the same LAMP stack (for simplicity, let's assume all projects share the same stack and configuration)
The host is running Windows + VBox + Docker Toolbox (i.e. Boot2Docker)
Current architecture
One shared development environment running multiple containers (web, db, persistent data, ...) with a vhost configuration per site
Using scripts / a Jenkins container to set up each new project (new DB, vhost configuration, ...)
Running a custom Samba container to share the data with the Windows machine (the IDE runs on Windows)
As always there are pros and cons: while this is quite easy to maintain, we are unable to deploy a specific project with a dedicated docker-compose.yml file, and we also miss out on the benefits of "microservices", such as swapping the PHP / MySQL version for a specific site.
The question is: how can we use a per-project docker-compose.yml file but still have multiple projects running simultaneously (since all projects use port 80)?
Would it be better (and is it even possible?) to use random ports per project and run a proxy layer on top of those web containers?
Any other options or common design patterns for this use case?
Thanks.
The short answer is yes. Docker by default assigns random host ports if no port is specified. For the mapping I would use: https://github.com/jwilder/nginx-proxy
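A minimal sketch of that setup (the web image name and the VIRTUAL_HOST value are examples):

    # Run the proxy once; it watches the Docker socket and routes requests
    # to containers based on the Host header
    docker run -d -p 80:80 \
      -v /var/run/docker.sock:/tmp/docker.sock:ro \
      jwilder/nginx-proxy

    # Each project's web container just declares the hostname it answers to
    docker run -d -e VIRTUAL_HOST=project1.local my-lamp-web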
You can have something like project1.yml, project2.yml, ..., and starting a project's containers would look like:
docker-compose -f project1.yml up
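A per-project file could then set the VIRTUAL_HOST that the proxy above routes on (a sketch; the images and credentials are placeholders):

    # project1.yml - a minimal per-project LAMP-ish stack
    version: "2"
    services:
      web:
        image: php:7-apache            # placeholder web image
        environment:
          - VIRTUAL_HOST=project1.local
      db:
        image: mysql:5.7
        environment:
          - MYSQL_ROOT_PASSWORD=secret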
However, I'm not sure why you would want to do it that way. You could use something like Rancher and split your development host into multiple small development environments.
I'm trying to do some clustering testing and I am setting up multiple RabbitMQ services on a single Windows machine. I am able to set the environment variables RABBITMQ_NODENAME, RABBITMQ_SERVICENAME, and RABBITMQ_NODE_PORT then run RabbitMQ-Service Install to have a new RabbitMQ service installed under a different name.
My question is regarding the configuration file. Based on what I read on the RabbitMQ site, the configuration file defaults to the %AppData%\RabbitMQ directory.
I'm just having trouble understanding how it should be set up so that I can have 3 instances of the service running, each with its own configuration.
Do I run the installation under a different local or domain account so it gets placed under a different %AppData%\RabbitMQ directory, or can I add a directive to the service so it looks in a particular directory for that particular service's configuration file?
Also, how does RABBITMQ_BASE come into play? Is that only for data and log files, or does it also apply to the configuration file? I'm not sure whether, once I have the service set up with BASE defined as a specific path, I can place a new rabbitmq.config under the root of that path.
Please confirm and provide any additional assistance. Thank you in advance!
For now I'm testing on Windows, but I plan on converting to Linux once I have this all working correctly and understood. Unfortunately, I've inherited the current environment, and it's already installed and running on Windows servers. They just want me to set up clustering for it, so I'm trying to simulate the cluster on my workstation.
Never mind, I found what I needed: the environment variable RABBITMQ_CONFIG_FILE can be used to override the location of the default config file.
http://www.rabbitmq.com/relocate.html
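For anyone else doing this, a sketch of installing a second instance as a Windows service (the node name, port, and path are examples; note that RABBITMQ_CONFIG_FILE is given without the .config extension, which RabbitMQ appends itself):

    rem Run in an elevated command prompt
    set RABBITMQ_NODENAME=rabbit2
    set RABBITMQ_SERVICENAME=RabbitMQ2
    set RABBITMQ_NODE_PORT=5673
    set RABBITMQ_CONFIG_FILE=C:\rabbit2\rabbitmq
    rabbitmq-service install
    rabbitmq-service start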
You can run multiple RabbitMQ instances on one machine without clustering. You just need to change the ports and the node name in the rabbitmq-defaults, rabbitmq-env and config files. If you want them to run as services, you can create the services from the already-configured instances.
HERE is a detailed guide on how to do that. It's pretty easy and straightforward.
THE MISSION:
I have a development environment running on an Amazon AWS EC2 virtual server which I want to have tested by third parties.
THE PROBLEM:
I do NOT trust the companies who will test it not to sabotage the environment and/or steal code. Therefore, I don't want them to know URLs or permanent IPs, or even to be able to access the web pages, which they could eventually find with a crawler.
My environment includes web applications and socket servers. I do NOT want to expose the web applications; I want to give access only to the socket servers.
THE CONCEPT:
I have opted to use a secondary, impermanent Elastic IP pointing to the environment. This IP will be released after 1 or 2 days, once the basic tests have run. Subject to change (depending on suggestions from this thread).
THE QUESTION:
Can I create a secondary Elastic IP instance that allows access only to ports 5000-5100? If so, how?
THE ALTERNATIVE: In case this is not the most efficient procedure, what alternative would you propose?
MY SOLUTIONS: I followed the FAQ "Launching an Instance from a Backup":
1. create a snapshot
2. create an image from the snapshot (Snapshots menu - "Create Image" option)
3. Instances - Launch Instance
4. choose the image created from the snapshot as your root volume
5. edit security groups (open the port range for the sockets only, no web; see the sketch below)
6. delete all web code from this instance
7. after 2 days, delete the instance
I also followed "Create Image from Instance":
1. select (exclusively) the running instance you wish to mirror
2. right-click the selected instance
3. choose "Create Image" from the dropdown
4. to 7.: same as above
This second solution seems to be more stable (especially regarding status checks and connectivity issues).
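For step 5, a sketch of opening only the socket range with the AWS CLI (the security group ID and CIDR are placeholders; the same rule can be set in the console):

    # Allow inbound TCP 5000-5100 only; with no rule for 80/443 there is no web access
    aws ec2 authorize-security-group-ingress \
      --group-id sg-0123456789abcdef0 \
      --protocol tcp \
      --port 5000-5100 \
      --cidr 0.0.0.0/0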
any better solutions? thanx!
What is the best way to get dev and test browsers to resolve our production domain name to the dev and test environments? Say our production domain is widgets.com. In the past, we've used internal DNS for devwidgets.com, testwidgets.com, demowidgets.com, etc., but this is proving to be a big pain. It seems better to have a hosts file or proxy server setup, so each client can choose to resolve widgets.com to each pre-prod environment. Ideas? How have others solved this problem?
You can run different versions on different ports (easiest for internal and external setups) or on different CNAMEs (for external setups):
dev.widgets.com:81
dev.widgets.com:82
...
dev1.widgets.com
dev2.widgets.com
...
This means that the different environments can be configured centrally through the web server rather than having to manage lots of different hosts files.
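For example, with nginx that central configuration could be a couple of server blocks (the ports and document roots are illustrative):

    # One server block per environment, all managed in one place
    server {
        listen 81;
        server_name dev.widgets.com;
        root /var/www/widgets-dev1;
    }
    server {
        listen 82;
        server_name dev.widgets.com;
        root /var/www/widgets-dev2;
    }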
We have solved it by using internal DNS, like you said. Each developer has his own environment, so I can go to www.ourdomain.com.branch2.environment10, where environment10 is my specific environment and branch2 refers to a specific checkout, in case I have multiple checkouts because I'm working on different projects simultaneously. Just the different environments may suffice for you.
In another situation I've configured a different CNAME, using dev.widgets.com to reach my development environment remotely. The disadvantage is that anyone can reach it, so you should password-protect it or use an IP filter.
I wouldn't recommend using hosts files. They are hard to maintain, and you can't reach the live environment from your development PC.
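For reference, the hosts-file approach under discussion is a per-client override like this (the IP is a placeholder), and it's exactly this per-machine editing that becomes hard to maintain:

    # /etc/hosts (or C:\Windows\System32\drivers\etc\hosts)
    # Point widgets.com at the test environment for this client only
    10.0.1.20   widgets.com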