Problem: I want to access environment variables through HHVM that aren't typically exposed to a default PHP setup.
Context: I rely on a couple of system variables to provide dynamic configuration options to a Laravel 4 project running in a Docker container. I want to connect to a MySQL DB running in another Docker container that exposes a random IP address on startup. This IP address is passed into the Laravel 4 container using Docker's --link option and automatically exposed as a system variable inside that container.
Previous approach: When using php-fpm, I could expose system variables created by Docker to PHP via the www.conf file, and then just use getenv('VAR_NAME') to get the variable in my PHP code.
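For reference, the php-fpm pool config accepts env[] directives for exactly this (a minimal sketch of www.conf; the variable name mirrors what Docker's --link creates and is illustrative):

    ; www.conf: pass selected host environment variables through to PHP
    ; clear_env requires a reasonably recent php-fpm; alternatively,
    ; whitelist each variable individually with env[] as below
    clear_env = no
    env[DB_PORT_3306_TCP_ADDR] = $DB_PORT_3306_TCP_ADDR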
However, with HHVM, I cannot figure out how to access a "non-standard" environment variable. There seems to be no equivalent to www.conf that I can locate. Has anyone attempted this before? Is it possible to access system variables that are external to PHP using HHVM? Is there something specific to HHVM's configuration and I just can't find it in the docs?
Additional Info: I am behind Nginx here. I don't think fastcgi_param directives will work in my case, but I may just be doing it wrong. If anyone has accomplished what I'm trying to do using fastcgi_param, I'm fine with that approach too.
fastcgi_param should work exactly the same way with HHVM (assuming you are using HHVM as a FastCGI server, which you should be).
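A minimal sketch of the Nginx side, assuming HHVM listens as a FastCGI server on 127.0.0.1:9000; the variable name mirrors what Docker's --link creates and is illustrative:

    location ~ \.php$ {
        fastcgi_pass   127.0.0.1:9000;
        fastcgi_index  index.php;
        include        fastcgi_params;
        fastcgi_param  SCRIPT_FILENAME $document_root$fastcgi_script_name;
        # nginx does not expand OS environment variables here, so the value
        # must be a literal; template it in when the container starts
        # (e.g. sed/envsubst over this file in your entrypoint)
        fastcgi_param  DB_PORT_3306_TCP_ADDR 172.17.0.5;
    }

The value then shows up in $_SERVER['DB_PORT_3306_TCP_ADDR'] on the PHP side; whether getenv() also sees FastCGI params varies by runtime, so reading $_SERVER is the safer bet.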
Related
I am wondering about the best architecture for a Docker-based development environment with a LAMP stack.
Requirements
Working on multiple projects in parallel
Most projects are using the same LAMP stack (for simplicity let's assume that all projects are sharing the same stack and configuration)
The host is running Windows + VBox + Docker Toolbox (i.e. Boot2Docker)
Current architecture
One shared development environment running multiple containers (web, db, persistent data..) with vhosts configuration per site
Using scripts / Jenkins container to setup new project (new DB, vhost configuration..)
Running custom Samba container to share the data with the Windows machine (IDE is running on Windows)
As always there are pros and cons. While this setup is quite easy to maintain, we are unable to deploy a specific project with a dedicated docker-compose.yml file, and we also miss out on the benefits of "microservices", like swapping the PHP / MySQL version for a specific site.
The question is: how can we use a per-project docker-compose.yml file but still have multiple projects running simultaneously (since all projects use port 80)?
Would it be better (and is it even possible) to use random ports per project and run a proxy layer on top of those web containers?
Any other options or common design patterns for this use case?
Thanks.
The short answer is yes. Docker assigns random host ports by default if no explicit host port is specified in the mapping. For the hostname-to-container routing I would use: https://github.com/jwilder/nginx-proxy
You can have something like project1.yml, project2.yml, ... and starting the containers would then be something like:

    docker-compose -f project1.yml up
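A minimal sketch of what one of those per-project files could look like with nginx-proxy in front (compose v1 syntax from that era; image tags, hostname, and password are illustrative). The proxy itself runs once, claims port 80, and routes requests by Host header to whichever container declares the matching VIRTUAL_HOST:

    # start the proxy once (command from the nginx-proxy README):
    #   docker run -d -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy

    # project1.yml
    web:
      image: php:5.6-apache
      links:
        - db
      environment:
        - VIRTUAL_HOST=project1.dev   # nginx-proxy routes this hostname here
    db:
      image: mysql:5.6
      environment:
        - MYSQL_ROOT_PASSWORD=secret

Each project then answers on its own hostname while only the proxy binds port 80, which is essentially the "random ports plus a proxy layer" pattern asked about above.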
However, I'm not sure why you would want to do it that way. You could use something like Rancher and split your development host into multiple small development environments.
I'm trying to do some clustering testing and I am setting up multiple RabbitMQ services on a single Windows machine. I am able to set the environment variables RABBITMQ_NODENAME, RABBITMQ_SERVICENAME, and RABBITMQ_NODE_PORT, then run rabbitmq-service install to have a new RabbitMQ service installed under a different name.
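For context, that sequence looks something like this (Windows cmd; the node names and port are illustrative, and the commands assume RabbitMQ's sbin directory is on the PATH):

    set RABBITMQ_NODENAME=rabbit2
    set RABBITMQ_SERVICENAME=RabbitMQ2
    set RABBITMQ_NODE_PORT=5673
    rabbitmq-service install
    rabbitmq-service start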
My question is regarding the configuration file. Based on what I read on the RabbitMQ site, the configuration file defaults to the %AppData%\RabbitMQ directory.
I'm just having trouble understanding how it should be set up so I can have 3 instances of the service running, each with its own configuration.
Do I run the installation under a different local or domain account so it gets placed under a different %AppData%\RabbitMQ directory or can I add a directive to the service to look in a particular directory for the configuration file for that particular service?
Also, how does RABBITMQ_BASE come into play? Is that only for data and log files or does that also apply to the configuration file? I'm not sure if once I have the service setup with BASE defined as a specific path I can place a new rabbitmq.config under the root of that path.
Please confirm and provide any additional assistance. Thank you in advance!
For now I'm testing on Windows, but I plan on converting to Linux once I have this all working correctly and understood. Unfortunately, I've inherited the current environment and it's already installed and running using Windows servers. They just wanted me to set up clustering for it, so I'm trying to simulate the cluster on my workstation.
Never mind, I found what I needed: the environment variable RABBITMQ_CONFIG_FILE can be used to override the location of the default config file.
http://www.rabbitmq.com/relocate.html
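So each instance can be pointed at its own config (and data directory) before its service is installed. A sketch with illustrative paths; note that RABBITMQ_CONFIG_FILE takes the path without the .config extension:

    set RABBITMQ_BASE=C:\rabbit2
    set RABBITMQ_CONFIG_FILE=C:\rabbit2\rabbitmq
    rem RabbitMQ appends the extension and reads C:\rabbit2\rabbitmq.config
    rabbitmq-service install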
You can run multiple RabbitMQ instances on one machine without clustering. You just need to change the ports and the node name in the rabbitmq-defaults, rabbitmq-env, and config files. If you want them as services, you can just create them from the already-configured instances.
HERE is a detailed guide on how to do that. It's pretty easy and straightforward.
Laravel has a really nice remote SSH connection configuration file: app/config/remote.php (see: http://laravel.com/docs/ssh). It integrates well with the SSH class and things like ./artisan tail, and generally makes it easy to work with SSH connections from within a Laravel application.
It also now has Envoy to make it easy to define complex tasks to run on remote systems via SSH, for deployment etc. It's similar to the SSH class, only you use the blade syntax to easily define commands, rather than having to build a manual artisan command.
However, Envoy seems to be a completely separate, manually maintained config, independent of the Laravel configuration files. This means you need to configure hostnames and paths in two different places.
Is there a nice way to load the Laravel remote config into the Envoy file, so there is a single source for the information?
You should be able to use the Config class to get your configuration values out of remote.php.
For example, to get your production connection, it would be as easy as $connection = Config::get('remote.connections.production');.
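One caveat worth hedging: Envoy runs as a standalone script, so the Config facade is only available if you bootstrap the framework yourself. Since app/config/remote.php simply returns a PHP array, requiring it directly is simpler. A sketch of an Envoy.blade.php along those lines, assuming your Envoy version supports the @setup block; the connection keys follow the stock remote.php layout and the deploy commands are placeholders:

    @setup
        // load Laravel's remote config directly; the file just returns an array
        $remote = require __DIR__.'/app/config/remote.php';
        $prod = $remote['connections']['production'];
    @endsetup

    @servers(['production' => $prod['username'].'@'.$prod['host']])

    @task('deploy', ['on' => 'production'])
        cd {{ $prod['root'] }}
        git pull origin master
    @endtask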
You know how Laravel allows for environment-based configurations? Where config files in "app/config/local" override those in "app/config". All my config files in the "local" directory override as expected, except for one config file: "database.php".
I want to be able to specify different database connections for local and production environment. But when I do, and run "artisan migrate --env=local" it still attempts to use the configuration in the production folder, not the "local" folder.
This sometimes gets a bit confusing on local environments. I normally use the hostname in bootstrap/start.php as opposed to the IP.
For example, my VirtualBox VM's hostname is "debian"... just type hostname in your terminal to get the hostname.
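In Laravel 4 that maps to the detectEnvironment() call in bootstrap/start.php; a minimal sketch, with 'debian' standing in for whatever your own hostname command prints:

    $env = $app->detectEnvironment(array(
        // machine hostnames that should load app/config/local/*
        'local' => array('debian'),
    ));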
This should work. However, since you're using environment config folders (which I always do), I would remove the database settings from app/config/* entirely; you should never need them there, since each of your servers will have its own settings in app/config/yourenv.
Hope this helps
We have 3 environments running in Jelastic in a standard OTAP way: test, acceptance, production.
Every Tomcat in those environments has a fixed IP address.
What I would like to do is swap the addresses of production and acceptance, so that after a successful test on acceptance we swap acc and prod.
Is this possible? If so how?
At the moment, the way to achieve this is to use a proxy (an NGINX load balancer) and manually adjust which Tomcat it points to according to your needs.
The load balancer holds your public IP, so that IP will not change.
Unfortunately you cannot currently create an environment with only the load balancer and nothing else, so you will need to put it inside an environment that is running all the time.
UPDATE:
It's now possible to move a public IP between nodes (and between environments), using the Jelastic API or CLI tool. The command is ~/jelastic/environment/control/swapextips (the necessary params are stated in the help output).
The API method is also in the same location if you prefer to use your own API client instead.
See http://blog.layershift.com/php-7-jelastic-paas/#portable-ip for more details.