Vagrant - Global Setup among many sites + domain aliases

Coming from a MAMP Pro background, I loved the ability to have a "base" folder (/Sites in this case), have all of my projects underneath it, and set custom server names/aliases for them. With Vagrant, it looks like I can accomplish the name/alias part with vagrant-hostsupdater, but if I really did just want to keep the Vagrant files in /Sites and have all of the projects use the same config, what's the best way to specify a subfolder disk location for each of those custom host names?
I'm most likely over-thinking this, have just been a sucker for GUI interfaces and would love to know how to accomplish this. Thanks as always!
Clarification
What I'm used to
I used to use MAMP Pro, which allows you to set up custom host entries in its GUI. Within my ~/Sites directory I have several different projects going on, all in subfolders. From that central location I can set a server name and specify a disk location for each project.
What I'd like to do with Vagrant
Now, I do know of (and have used) vagrant-hostsupdater, but what I was wondering is whether I can put my Vagrantfile in my ~/Sites directory (which is kind of like the root of the server, since all of my projects require the same setup) and then have individual host names set up for each project. So instead of having to access a subfolder like local.dev/project-1 or local.dev/project-2, I could set up server names such as local.project-1.com and local.project-2.com from within that top-level Vagrantfile and specify the subfolder each of those names should point to.
The reason I'd like to do this is so I only have to run one vagrant up and can then access all of my projects from one Vagrant instance, as well as only keep track of one Vagrantfile. Thanks!

You need to tell vagrant what hostnames you would like to use.
Directory based hostnames
Assuming you would like to set your hostnames based on the directory names, you can collect all of the hostnames with Ruby and pass them to the hostsupdater configuration.
SITES_DIR = File.expand_path("~/Sites")  # Dir.glob does not expand "~" on its own
# one hostname per project directory, e.g. ~/Sites/project-1 => project-1
config.hostsupdater.aliases = Dir["#{SITES_DIR}/*/"].map { |d| File.basename(d) }
Configuration based hostnames
Alternatively, you can mock up whatever configuration format suits what you are trying to do and evaluate/process it in Ruby within the Vagrantfile.
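A rough sketch of that idea, where everything named here is an assumption for illustration (the sites.yml file, the example box, the private-network IP and the /var/www mount prefix are not part of the original answer): keep a small YAML file next to the top-level Vagrantfile that maps each hostname to its project subfolder, then loop over it.

# Vagrantfile in ~/Sites (sketch)
require 'yaml'

# sites.yml might contain, for example:
#   local.project-1.com: project-1
#   local.project-2.com: project-2
SITES = YAML.load_file(File.expand_path('sites.yml', File.dirname(__FILE__)))

Vagrant.configure('2') do |config|
  config.vm.box = 'ubuntu/focal64'                           # any box you already use
  config.vm.network 'private_network', ip: '192.168.56.10'   # hostsupdater maps the names to this address
  config.hostsupdater.aliases = SITES.keys                   # one hosts-file entry per project

  SITES.each do |hostname, folder|
    # mount each project subfolder inside the VM; the web server vhost
    # for `hostname` should use this path as its document root
    config.vm.synced_folder folder, "/var/www/#{folder}"
  end
end

You would still have to provision a web server vhost per hostname (hostsupdater only takes care of the hosts file), but this keeps a single vagrant up and a single Vagrantfile for all projects, as asked.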

Related

disable nfs pruning in vagrant

Most recently (not sure why) Vagrant (1.8.1) started asking for a root password.
However, at work no root privileges are given to us (no sudoer).
I am looking for a way to tell Vagrant to stop the NFS pruning altogether.
Sadly the documentation does not say how to modify this particular flag, and I don't know Ruby much.
The code gives away that there should be a flag, but I can't figure out how to put the "false" in there.
I intend to disable NFS or skip that part altogether, so both would be welcome.
My starting point is my ~/.vagrant.d/Vagrantfile:
Vagrant.configure('2') do |config|
config.vagrant.host :nfs_prune => false
end
The error message is: Pruning invalid NFS exports. Administrator privileges will be required...
PS: no, I do not use NFS in my shared folders.
You should be able to disable it by setting config.nfs.functional = false.
functional (bool) - Defaults to true. If false, then NFS will not be
used as a synced folder type. If a synced folder specifically requests
NFS, it will error.
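Since this should apply to every project on your machine, a minimal sketch would be to put the setting in the user-level Vagrantfile you already mentioned (~/.vagrant.d/Vagrantfile), so it merges into every project:

Vagrant.configure('2') do |config|
  # treat NFS as non-functional on this machine: Vagrant will never use it
  # as a synced folder type, which should also skip the pruning step that
  # asks for administrator privileges
  config.nfs.functional = false
end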
A Vagrantfile can be loaded from multiple sources; see LOAD ORDER AND MERGING:
Vagrant actually loads a series of Vagrantfiles, merging the settings
as it goes. This allows Vagrantfiles of varying level of specificity
to override prior settings. Vagrantfiles are loaded in the order shown
below. Note that if a Vagrantfile is not found at any step, Vagrant
continues with the next step.
1. Vagrantfile packaged with the box that is to be used for a given machine.
2. Vagrantfile in your Vagrant home directory (defaults to ~/.vagrant.d). This lets you specify some defaults for your system user.
3. Vagrantfile from the project directory. This is the Vagrantfile that you will be modifying most of the time.
As you mentioned, you have already checked points 2 and 3, so check the Vagrantfile packaged with the particular box you are using (if any).

Sharing the domain/subdomain URLs created on a Vagrant machine?

What I have done till now:
1- I have used the Vagrant box below:
https://scotch.io/bar-talk/introducing-scotch-box-a-vagrant-lamp-stack-that-just-works
2- After that I created virtual subdomains/domains on the Vagrant machine, each pointing to a different folder in the code directory:
-- say abc.def.com pointing to /var/www/public/pmtool
-- and aaa.def.com pointing to /var/www/public/pmtool2
These domains are enabled on the virtual machine and running fine;
that is to say, http://abc.def.com points to the proper directory.
3- Now when I issue the vagrant share command it provides me a URL that points to the /var/www/public directory.
What I need to know is how I can get URL aliases for these folders (domains/subdomains), i.e. a URL alias pointing to each of these directories.
You shouldn't feel too bad as other people have had this issue as well. The most relevant is this SO question, with the currently most upvoted answer being:
Change your WhateverItIs.conf file as follows, by adding a ServerAlias:
ServerName WhateverItIs.com
ServerAlias *.vagrantshare.com
and now you are good to go.
Another way to look at it is that Vagrant gives you a URL in the vagrantshare.com domain, and what you want is to keep using abc.def.com and aaa.def.com. The DNS change that would make this possible would be for you to own the def.com domain and add CNAME records to it for both abc.def.com and aaa.def.com, both pointing to the vagrantshare.com hostname generated for you by Vagrant.
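To make the first answer concrete, a sketch of the two name-based vhosts could look like this (the document roots and hostnames are taken from the question; the ServerAlias line is the only addition):

<VirtualHost *:80>
    ServerName abc.def.com
    ServerAlias *.vagrantshare.com
    DocumentRoot /var/www/public/pmtool
</VirtualHost>

<VirtualHost *:80>
    ServerName aaa.def.com
    DocumentRoot /var/www/public/pmtool2
</VirtualHost>

Since vagrant share hands out a single hostname, the wildcard alias can only live on one vhost at a time; move it to whichever site you currently want the shared URL to serve, or use the CNAME approach described above to keep both of your own names working.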

Clone development environment on an office server to use locally

Situation:
As a developer I'd like to "clone" our development environment (on an office server) so we can use it locally (for example when no/limited internet access is available). We've decided to give Vagrant a try.
What did I do?
First I used PuPHPet to create a basic config including nginx, PHP (incl. modules), Composer, Git, memcached etc. You can find my config here. I also added an nginx vhost for our website.dev. This is where I ran into the first problem.
We add a few additional config settings to the location block: a rewrite, a fastcgi_pass and an include. These are not available, so I searched a lot online and found out I could use the following statement (it was more try/fail/retry):
location_cfg_append:
  { rewrite: ".* /dispatch.php break", include: "fastcgi-params.conf", fastcgi_pass: "127.0.0.1:9000" }
First question:
This does work, however is this the way to do this? I'm not sure if I should be editing this config file (the file generated by PuPHPet) directly.
Second question:
How should I 'upload' the fastcgi-params.conf file I want to include? I did not find a way to do this in the config.yaml, but there is a way to run some scripts. For now I've added an echo [contents] > /etc/nginx/fastcgi-params.conf, which does work. However...
Third question:
When the VM is provisioned, the nginx config is created. When that is done, nginx is restarted. However, at that moment the fastcgi-params.conf file does not exist yet (it is created AFTER the provisioning).
When nginx reloads it will fail, trigger an error, and the machine cannot finish the provision sequence (so it will never create the config file).
I can create this file on the next boot (and then nginx will work), but this cannot be the correct way to do it. So: how can I create / deploy a file to the VM before the nginx 'installation'? Or, more generically (question 2): how can I upload a file to the VM?
If this is totally not the way to go, please let me know! These are our first steps toward creating a local development machine, so other/better methods are welcome.
First question: This does work, however is this the way to do this? I'm not sure if I should be editing this config file (the file generated by PuPHPet) directly.
Yes, I encourage this.
Second question: How should I 'upload' the fastcgi-params.conf file I want to include?
Place it inside one of your shared folders. It'll be available within the VM and you can reference it that way.
Third question
The above answer fixes this issue.
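In other words, a sketch of how that could look for the include in question (assuming the synced folder in your config.yaml is mounted at /var/www inside the VM; use whatever target path yours actually defines): keep fastcgi-params.conf in the project folder on the host and reference it by its path inside the VM.

location_cfg_append:
  { rewrite: ".* /dispatch.php break", include: "/var/www/fastcgi-params.conf", fastcgi_pass: "127.0.0.1:9000" }

Because synced folders are mounted before provisioning runs, the file already exists when nginx is first configured and restarted, which also resolves the third question.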

Multiple iDempiere instances in one server

I need to install multiple iDempiere instances on one server. The customized packages differ in build and in the database they use. Is there any way to deploy both of them on one server and access them like localhost:8080/client1 and localhost:8080/client2? Any help appreciated.
When I want to run several application servers I copy the installation to different paths
and change the database name and port of each application:
/opt/idempiere-server-production/ (on port 8080, for example) for production
and
/opt/idempiere-server-test/ (on port 8081, for example) for test.
The way you describe is not possible, because the iDempiere web app is always served at
http://hostname:port/webui
Running multiple instances of idempiere on a single server is not too difficult.
Here is what you need to take care of:
Install the instances into different directories. The instances do not need to share any common files. So you are just fine making a full installation for each instance.
Make sure each instance uses its own data base. Use different names for the instance data bases.
Make sure the idempiere server instances use different tcp ports.
If you really need to use a single port to access all of the instances, you can use an HTTP server like Apache or nginx to define virtual hosts. Proxying or rewrite rules will then allow you to do the desired redirections. (I am using subdomains and Apache mod_proxy to do the job; see the sketch after this list.)
There is another benefit to using subdomains for browser access: If all your server instances use the same host name the client browser will sometimes not be able to keep cookies from different instances apart, which can lead to a blocked session as discussed here in the idempiere google group.
Use different DB user names. The docs advise not to change the default user name Adempiere and this is ok for a single instance installation. Still if you use a single DB user for all of your instances you will run into trouble once you need to restore a database from a backup file. The RUN_DBRestore.sh will delete and recreate the DB user which is not possible when the user owns more than one DB.
You can run all of your instances as services in parallel. Before the installation of another instance, rename the service script: sudo mv /etc/init.d/idempiere /etc/init.d/idempiere-theInstance. Of course you will need to do some bookkeeping work with the service controller of your OS to ensure that the renamed services are started as desired.
The service controller talks to the iDempiere server via the OSGI console. For this to work without problems in a multi instance environment you need to assign a different telnet port number to each of the instances: in the editor of your choice open the file /etc/init.d/iDempiere. Find the line export TELNET_PORT=12612 and change the port number to something else.
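To illustrate the subdomain + mod_proxy item from the list above, a minimal Apache sketch could look like the following (the hostnames prod.example.com / test.example.com and the back-end ports 8080/8081 are assumptions matching the example installations earlier in this thread; mod_proxy and mod_proxy_http need to be enabled):

<VirtualHost *:80>
    ServerName prod.example.com
    # forward everything to the production iDempiere instance
    ProxyPreserveHost On
    ProxyPass        / http://localhost:8080/
    ProxyPassReverse / http://localhost:8080/
</VirtualHost>

<VirtualHost *:80>
    ServerName test.example.com
    # forward everything to the test iDempiere instance
    ProxyPreserveHost On
    ProxyPass        / http://localhost:8081/
    ProxyPassReverse / http://localhost:8081/
</VirtualHost>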
Please Note:
OS-specific descriptions in this guide are for Ubuntu 16/18 or Debian; if you are on another OS you will need to do some research.
I have been using the described approach to host idempiere versions 5 and 6 for some time now and did not have any problems so far. Still make sure you do your own thorough tests if you want to go that route.
If you run into any problems (and maybe even manage to solve them) please report back to the community. (by giving your own answer to this question or by posting to the idempiere google group) Thanks!
You can have as many setups on your server as you like. When you run the setup to create your properties, simply choose other web ports for each installation. You may also need to slightly change the web servers' configuration if they have some default ports.

Windows - Private hosts file for a certain environment

I have an application running on a dev server and connecting to a dev-db host running an Oracle instance.
Now I'm deploying the application on a prod/prod-db machine.
Since the dev-db URL is hardcoded inside the Java code, the just-copied binaries still point to dev-db. As a quick workaround I added a line to the Windows hosts file on prod so that dev-db now points to the prod-db IP address. It works, but I'm not very satisfied with this global-scope solution.
I was wondering if there is a way to make a hosts file "private" to a certain environment, i.e. only valid in the scope of my running application.
No, there's no way to do this, and it's a bad approach anyway.
You should instead fix the real problem, which is the hard-coding of the address inside your java code. Put such things in a properties file, and use a different properties file for production.
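For example, a minimal sketch of that idea (the file names, the property key and the JDBC URLs are made up for illustration):

# dev.properties, shipped with the dev deployment
db.url=jdbc:oracle:thin:@dev-db:1521/ORCL

# prod.properties, shipped with the prod deployment
db.url=jdbc:oracle:thin:@prod-db:1521/ORCL

The application then picks the file for its environment at startup (for instance via a system property or an environment variable) and reads db.url from it, so each deployment points at its own database without touching the hosts file.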
