Laravel has a really nice remote SSH connection configuration file: app/config/remote.php (see: http://laravel.com/docs/ssh). It integrates well with the SSH class and things like ./artisan tail, and generally makes it easy to work with SSH connections from within a Laravel application.
It also now has Envoy, which makes it easy to define complex tasks to run on remote systems via SSH, for deployment etc. It's similar to the SSH class, except you use Blade syntax to define commands rather than building an Artisan command by hand.
However, Envoy appears to use its own configuration, entirely independent of the Laravel remote configuration file. This means hostnames and paths have to be configured in two different places.
Is there a nice way to load the Laravel remote config into the Envoy file, so there is a single source for the information?
You should be able to use the Config class to get your configuration values out of remote.php.
For example, to get your production connection, it would be as easy as $connection = Config::get('remote.connections.production');.
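As a concrete illustration, here is a hedged sketch of an Envoy.blade.php that pulls the same values in its @setup block (the connection name, root path, and task body are assumptions). Reading the file directly with include is shown because the standalone envoy binary may not have the framework booted; inside a booted application, Config::get('remote.connections.production') returns the same array:

@setup
    // Load the same file the SSH class reads (Laravel 4 layout assumed)
    $remote = include __DIR__.'/app/config/remote.php';
    $production = $remote['connections']['production'];
@endsetup

@servers(['web' => $production['username'].'@'.$production['host']])

@task('deploy', ['on' => 'web'])
    cd {{ $production['root'] }}
    git pull origin master
@endtask

With this, changing the host or path in app/config/remote.php is picked up by both the SSH class and Envoy.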
I have several roles that run actions against a remote database, executing statements for user and privilege creation.
I have seen Molecule used to test playbooks that run against a single host, but I am unsure how to set up a second container running the database on the same network as the Molecule-managed container (similar to a docker-compose setup). I have not been able to find a setup like this in the documentation.
Is there a recommended way to run Molecule tests with external dependencies, or should I just use docker-compose or similar to run my tests?
There is a 'prepare' stage in Molecule specifically for that. You need to separate two questions:
1. Where is the external resource (the database) run?
2. Why and how is it configured?
Those are very separate, and mixing them together is a bad idea.
For question 1 there are different answers:
The resource already exists (out of the blue, configured by other people). Use non-managed hosts in molecule.yml.
We are OK with running it on the same host where our code runs. Shovel the installation into the 'prepare' stage.
We want it to be on a separate server. Add an additional host to platforms in a different group and configure it in the prepare stage (see the sketch below).
If you find your driver is not good enough, you can always opt for the 'delegated' driver. In that case you need to write playbooks for create/destroy of hosts. It's relatively easy; the main trick is to use the 'platforms' variable to get information about the content of molecule.yml's platforms section.
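As an illustration of the "separate host in a different group" option, here is a hedged sketch of the relevant pieces (the image names, the db group, and the prepare task are all assumptions; adapt them to your driver):

# molecule.yml (excerpt)
platforms:
  - name: instance
    image: your-usual-test-image     # the host your role converges on
  - name: database
    image: your-db-image             # assumed; any image that stays running
    groups:
      - db
provisioner:
  name: ansible
  playbooks:
    prepare: prepare.yml

# prepare.yml - runs once, before converge, against the db group
- hosts: db
  gather_facts: false
  tasks:
    - name: Bring the database into the state the roles expect
      debug:
        msg: "install, start, and seed the database here"

The converge playbook then only exercises your roles; everything about getting the database into place lives in prepare.yml.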
Background
At my company, we use Bitbucket to host our git repos. All traffic to the server flows through a custom, non-standard port. Cloning from our repos looks something like git clone ssh://git@stash.company.com:9999/repo/path/name.git.
The problem
I would like to create Go modules hosted on this server and managed by go mod, however, the fact that traffic has to flow through port 9999 makes this very difficult. This is because go mod operates on the standard ports and doesn't seem to provide a way to customise ports for different modules.
My question
Is it possible to use go mod to manage Go modules hosted on a private git server with a non-standard port?
Attempted solutions
Vendoring
This seems to be the closest to offering a solution. First I go mod vendor the Go application that wants to use these Go modules, then I git submodule the Go module in the vendor/ directory. This works perfectly up to the point that a module needs to be updated or added. go mod tidy will keep failing to download or update the other Go modules because it cannot access the "git URL" of the custom Go module. Even when the -e flag is set.
Editing .gitconfig
Editing the .gitconfig to replace the URL without the port with the URL that includes the port is a solution that works, but it is a very dirty hack. Firstly, these edits have to be made for every new module and by every individual developer. Secondly, this might break other git processes when working on these repositories.
The go tool uses git under the hood, so you'd want to configure git in your environment to use an alternate URL. Something like:
git config --global url."ssh://git@stash.company.com:9999/".insteadOf "https://stash.company.com"
Though Bitbucket/Stash sometimes adds an extra path suffix, so you might need to do something like this:
git config --global url."ssh://git@stash.company.com:9999/".insteadOf "https://stash.company.com/scm/"
ADDITIONAL EDIT
User bcmills mentioned below that you can also serve the go-import metadata over HTTPS and use whatever vanity URL you like, provided you control the domain resolution. This can be done with varying degrees of sophistication, from a simple nginx rule, to static content generators, dedicated vanity services, or even running your own module proxy with Athens.
This still doesn't completely solve the problem of build environment configuration, however, since you'll want the user to set GOPRIVATE or GOPROXY or both, depending on your configuration.
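For example (a hedged sketch: the domain is the company host from this question, and the proxy URL is purely an assumption), each developer or CI job would run something like:

go env -w GOPRIVATE=stash.company.com
go env -w GOPROXY=https://athens.example.com,direct   # only if you run your own module proxy

GOPRIVATE also keeps the go tool from sending your private module paths to the public checksum database.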
Also, if your chosen domain is potentially globally resolvable, you might want to consider registering it anyway to keep it from being registered by a potentially-malicious third party.
I'm trying to do some clustering testing and I am setting up multiple RabbitMQ services on a single Windows machine. I am able to set the environment variables RABBITMQ_NODENAME, RABBITMQ_SERVICENAME, and RABBITMQ_NODE_PORT then run RabbitMQ-Service Install to have a new RabbitMQ service installed under a different name.
My question is regarding the configuration file. Based on what I read on the RabbitMQ site, the configuration file defaults to the %AppData%\RabbitMQ directory.
I'm just having trouble understanding how it should be set up so I can have three instances of the service running, each with its own configuration.
Do I run the installation under a different local or domain account so that the configuration gets placed under a different %AppData%\RabbitMQ directory, or can I add a directive to the service so that it looks in a particular directory for that particular service's configuration file?
Also, how does RABBITMQ_BASE come into play? Is that only for data and log files, or does it also apply to the configuration file? I'm not sure whether, once I have the service set up with BASE defined as a specific path, I can place a new rabbitmq.config under the root of that path.
Please confirm and provide any additional assistance. Thank you in advance!
For now I'm testing on Windows, but I plan on converting to Linux once I have this all working correctly and understood. Unfortunately, I've inherited the current environment and it's already installed and running on Windows servers. They just wanted me to set up clustering for it, so I'm trying to simulate the cluster on my workstation.
Never mind, I found out what I needed: the environment variable RABBITMQ_CONFIG_FILE can be used to override the location of the default config file.
http://www.rabbitmq.com/relocate.html
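For example, installing a second instance as its own Windows service might look like the following (a hedged sketch: the node name, service name, port, and paths are all assumptions, and RABBITMQ_CONFIG_FILE is traditionally given without the .config extension):

rem Run from an elevated RabbitMQ command prompt; values are examples only
set RABBITMQ_NODENAME=rabbit2
set RABBITMQ_SERVICENAME=RabbitMQ2
set RABBITMQ_NODE_PORT=5673
set RABBITMQ_BASE=C:\RabbitMQData\rabbit2
set RABBITMQ_CONFIG_FILE=C:\RabbitMQData\rabbit2\rabbitmq
rabbitmq-service install
rabbitmq-service start

Repeating this with a third node name, service name, and port gives each instance its own config file and data directory.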
You can run multiple RabbitMQ instances on one machine without clustering. You just need to change the ports and the node name in the rabbitmq-defaults, rabbitmq-env, and config files. If you want them as services, you can create the services from the already configured instances.
HERE is a detailed guide on how to do that. It's pretty easy and straightforward.
You know how Laravel allows for environment-based configurations, where config files in "app/config/local" override those in "app/config"? All my config files in the "local" directory override as expected, except one: "database.php".
I want to be able to specify different database connections for local and production environment. But when I do, and run "artisan migrate --env=local" it still attempts to use the configuration in the production folder, not the "local" folder.
This sometimes gets a bit confusing in local environments. I normally use the hostname in bootstrap/start.php rather than the IP.
For example, my VirtualBox VM's hostname is "debian"; just type hostname in your terminal to get the hostname of your machine.
This should work. However, since you're using environment config folders (which I always do), I would remove those settings from app/config/*, since you should never need them there; your other servers will have their own settings in app/config/yourenv.
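A hedged sketch of the relevant piece of bootstrap/start.php (the hostnames are examples; "debian" is whatever running hostname prints on the local box, and the production name is an assumption):

$env = $app->detectEnvironment(array(
    'local'      => array('debian'),         // hostname of the local VM
    'production' => array('prod-web-01'),    // assumed production hostname
));

With the environment detected by hostname, artisan migrate --env=local and a plain artisan migrate on the server each pick up their own database.php.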
Hope this helps
Problem: I want to access environment variables through HHVM that aren't typically exposed to a default PHP setup.
Context: I rely on a couple of system variables to provide dynamic configuration options to a Laravel 4 project running in a Docker container. I want to connect to a MySQL DB running in another Docker container that exposes a random IP address on startup. This IP address is passed into the Laravel 4 container using --link options for Docker and automatically exposed as a system variable in that container.
Previous approach: When using php-fpm, I could expose system variables created by Docker to PHP via the www.conf file, and then just use getenv('VAR_NAME') to get the variable in my PHP code.
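For reference, that php-fpm mechanism is typically an env[...] entry in the pool configuration (a hedged reconstruction; the variable name is the one Docker's --link generates for a linked mysql container):

; www.conf (php-fpm pool config), excerpt
env[DB_PORT_3306_TCP_ADDR] = $DB_PORT_3306_TCP_ADDR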
However, with HHVM, I cannot figure out how to access a "non-standard" environment variable. There seems to be no equivalent to www.conf that I can locate. Has anyone attempted this before? Is it possible to access system variables that are external to PHP using HHVM? Is there something specific to HHVM's configuration and I just can't find it in the docs?
Additional info: I am behind Nginx here. I don't think fastcgi_param directives will work in my case, but I may just be doing it wrong. If anyone has accomplished what I'm trying to do using fastcgi_param, I'm fine with that approach as well.
fastcgi_param should work the exact same way in HHVM (assuming you are using HHVM as a fastcgi server, which you should be).
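A hedged sketch of what that looks like in the Nginx config (the upstream address, variable name, and IP are assumptions; since Nginx does not expand OS environment variables in its config, the value is usually templated in when the container starts, e.g. with envsubst or a small entrypoint script):

location ~ \.php$ {
    fastcgi_pass 127.0.0.1:9000;    # HHVM running as a FastCGI server
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    # Substituted at container startup from the Docker-provided variable;
    # it then shows up in $_SERVER and, typically, getenv() in the app.
    fastcgi_param DB_PORT_3306_TCP_ADDR 172.17.0.5;
}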