I have a few directories with different Mercurial histories that I am working on in parallel. They all have the same Vagrantfile, so it would be natural to use just one Vagrant instance for all of them.
But when I run "vagrant up" in a new directory, it starts from scratch: importing the existing box, setting up the environment, and so on.
How do I share the Vagrant instance between different directories?
UPDATE: my directory structure:
.
├── Vagrantfile
├── puppet
│   └── *.pp
├── support
│   ├── nginx.conf
│   └── uwsgi.development.ini
└── other_repo_related_files_and_dirs
Well, if you want to share directories with the same Vagrant instance, you can configure that in the Vagrantfile.
This is an example with two VMs (app and web) using the same box (ubuntu-12.04) and the same Vagrantfile. Each VM has its own synced folder (one folder per VM).
# -*- mode: ruby -*-
# vi: set ft=ruby :

# Vagrantfile API/syntax version. Don't touch unless you know what you're doing!
VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|

  config.vm.define 'app' do |app_config|
    app_config.vm.box = 'ubuntu-12.04'
    app_config.vm.host_name = 'app'
    app_config.vm.network "private_network", ip: "192.168.33.33"
    app_config.vm.synced_folder "app_config", "/app_config"
  end

  config.vm.define 'web' do |web_config|
    web_config.vm.box = 'ubuntu-12.04'
    web_config.vm.host_name = 'web'
    web_config.vm.network "private_network", ip: "192.168.33.34"
    web_config.vm.synced_folder "web_config", "/web_config"
  end
end
The app machine has an app_config folder and the web machine has a web_config folder (these folders live at the same level as the Vagrantfile).
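With a multi-machine Vagrantfile like this, each VM can be brought up and managed individually; for example:

vagrant up app   # boot only the app machine
vagrant up web   # boot only the web machine
vagrant status   # show the state of both machines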
When you enter each VM with the vagrant ssh command, you can see its folder.
This is inside the app machine:
roberto@rcisla-pc:~/Desktop/multiple$ vagrant ssh app
Welcome to Ubuntu 12.04 LTS (GNU/Linux 3.2.0-23-generic-pae i686)
 * Documentation:  https://help.ubuntu.com/
Welcome to your Vagrant-built virtual machine.
Last login: Mon Jan 27 13:46:36 2014 from 10.0.2.2
vagrant@app:~$ cd /app_config/
vagrant@app:/app_config$ ls
app_config_file
This is inside the web machine:
roberto@rcisla-pc:~/Desktop/multiple$ vagrant ssh web
Welcome to Ubuntu 12.04 LTS (GNU/Linux 3.2.0-23-generic-pae i686)
 * Documentation:  https://help.ubuntu.com/
Welcome to your Vagrant-built virtual machine.
Last login: Mon Jan 27 13:47:12 2014 from 10.0.2.2
vagrant@web:~$ cd /web_config/
vagrant@web:/web_config$ ls
web_config_file
vagrant@web:/web_config$
And this is my directory structure:
.
├── **app_config**
│   └── *app_config_file*
├── attributes
├── Berksfile
├── Berksfile.lock
├── chefignore
├── definitions
├── files
│   └── default
├── Gemfile
├── libraries
├── LICENSE
├── metadata.rb
├── providers
├── README.md
├── recipes
│   └── default.rb
├── resources
├── templates
│   └── default
├── test
│   └── integration
│       └── default
├── Thorfile
├── Vagrantfile
├── Vagrantfile~
└── **web_config**
    └── *web_config_file*
I hope this helps you.
Just thinking out loud here; I'm not sure if this solution meets your needs.
If you set up a directory structure like this:
/Main
  /projects
    /mercurial_history_1
    /mercurial_history_2
    /mercurial_history_3
  /puppet
    /modules
    /manifests
      default.pp
  Vagrantfile
I'm not sure what kind of projects you are running, but if you are running an Apache webserver, for example, you could specify a separate vhost for every Mercurial project inside the VM and point each DocumentRoot at the specific Mercurial project.
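A minimal sketch of such a vhost (the server name and path are hypothetical, assuming /Main is synced to /vagrant inside the VM):

<VirtualHost *:80>
    ServerName mercurial_history_1.local
    DocumentRoot /vagrant/projects/mercurial_history_1
</VirtualHost>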
For this solution you have to add the following line to the Vagrantfile:
config.vm.network "private_network", ip: "22.22.22.11" # just an example IP
Then, on your host machine, you can update the hosts file with the IP and the corresponding vhost server names, as shown below. It's a little bit more work, but you can add vhosts using a provisioner to make life easier ;)
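The hosts file entries might look like this (hostnames are hypothetical and match the example vhost above):

22.22.22.11  mercurial_history_1.local
22.22.22.11  mercurial_history_2.local
22.22.22.11  mercurial_history_3.local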
This way you only have one VM running that serves all your Mercurial projects.
Related
I'm working on a repo which serves a create-react-app from a Node endpoint. So, the React app is nested as a child directory:
.
├── Procfile
├── frontend
│   ├── README.md
│   ├── build
│   ├── package.json   <---- "proxy": "http://localhost:$PORT"
│   ├── public
│   ├── src
│   │   ├── App.css
│   │   ├── App.js
│   │   └── // etc...
│   └── .env           <----- frontend env file, removed PORT value from here
├── package.json
├── src
│   ├── app.js
│   ├── server.js
│   └── // etc...
├── .env               <--- backend env file, PORT=9000 for node
├── static.json
└── yarn.lock
With the port value removed from the .env file, CRA runs on port 3000. If I hardcode port 9000 instead of $PORT, the proxy works properly in development.
However, when deploying to production, I want the frontend to proxy to Heroku's dynamically assigned port. For example, Heroku seems to ignore the port value even if I intentionally set it to 9000 in the environment settings on their website.
My question is: how do I define the proxy on the frontend without having CRA start at that port number, e.g. apply PORT=9000 in the frontend .env but have CRA load at port 3000?
I've tried defining the port number in the start script, while making sure that PORT=9000 is defined in the frontend env:
"scripts": {
  "start": "export PORT=3000 && react-scripts start",
CRA will load at 3000, but I get a proxy error.
Heroku doesn't let you choose your port, but rather allocates a port for your app to use as an environment variable. Read more here:
Each web process simply binds to a port, and listens for requests coming in on that port. The port to bind to is assigned by Heroku as the PORT environment variable.
Remove all hardcoded PORT variables
It's not ideal to use $PORT in your package.json file, as you cannot add logic to it. In your Node.js app, read the port variable like so:
const PORT = process.env.PORT || 3000
This will set the port variable to whatever is in the PORT environment variable and, if it is not set, default to 3000.
It is not efficient to serve a production app with CRA
Don't run two servers for React and Node.js; instead, use your Node.js app to serve a production-built React app:
const express = require('express')
const path = require('path')
const app = express()
// All your other routes go here
app.use('/', express.static(path.join(__dirname,'client/build'))) // this must be the last one
NOTE: This is assuming your react app is built inside client/build relative to your project root
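Tying these pieces together, a minimal sketch of the server entry point (assuming the client/build layout from the note above) serves the build and binds to the environment-provided port:

// server sketch: serve the built React app and bind to Heroku's port
const express = require('express')
const path = require('path')
const app = express()

// API routes would go here

app.use('/', express.static(path.join(__dirname, 'client/build')))

const PORT = process.env.PORT || 3000
app.listen(PORT, () => console.log(`Listening on ${PORT}`))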
The proxy setting is only for development convenience and will not work if the app is not served by CRA
Make Heroku build your React app during buildtime with:
npm --prefix client run build # or if you use yarn
yarn --cwd client build
in your outer package.json file's build script
Your start script is going to run your Node.js server:
"scripts": {
"start": "node src/server.js",
"build": "npm --prefix client run build"
}
Don't commit your .env files to Heroku; instead, set environment variables directly using heroku config:set KEY=VALUE if you have the Heroku CLI, or use the dashboard settings.
NOTE: Do this before pushing your code so these variables are accessible during the buildtime of the React app.
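For example (the variable name is hypothetical; note that CRA only embeds variables prefixed with REACT_APP_ into the frontend build):

heroku config:set REACT_APP_API_URL=https://api.example.com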
Question: how do I create a simple nginx config that treats the folder structure as domains (test.local, myblog.local) and serves the pages from those folders, including PHP?
Information:
Windows 10 x64 build
Vagrant 1.9.5
VirtualBox 5.0.22 (latest)
Guest OS: Ubuntu Xenial x64 latest
So, I want to create a simple nginx config that mirrors the folder structure. See
my config file on pastebin.
Also, here is a Vagrantfile config, which uses SMB to mount a folder.
The structure of folders:
├───devhost.local
│   ├───log
│   └───public
│           index.html
│           index.php
│
└───test.local
    ├───log
    └───public
            index.html
The permissions for the devhost files and folders:
ubuntu@ubuntu-xenial:~$ ls -la /var/www/html/devhost.local/
total 4
drwxr-xr-x 2 ubuntu www-data    0 Jun  7 11:17 .
drwxr-xr-x 2 ubuntu www-data 4096 Jun  7 12:44 ..
drwxr-xr-x 2 ubuntu www-data    0 Jun  7 11:17 log
drwxr-xr-x 2 ubuntu www-data    0 Jun  6 14:13 public
My hosts file in Windows:
192.168.33.10 devhost.local
So, when I have the default config in my sites-enabled folder, I can open the guest machine through 192.168.33.10 and I see the nginx HTML page. But when I remove this default config and enable my wildcard config (see the link to my config file), I cannot access my domains. sudo nginx -t says that everything is OK. I also tried restarting the guest machine and reloading/restarting the nginx service, and I disabled the Windows 10 Firewall (I don't know if it's fully disabled, but it says it is). Also, the log files are empty and not even created, both the access log and the error log.
Where is my mistake? If you need more information, please ask and I will provide it.
Thanks a lot!
The following nginx setup should help.
server {
    listen 80 default_server;

    root /var/www/html/$host;
    index index.html index.php;

    location ~ \.php {
        # ... fastcgi details
    }
}
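The fastcgi details depend on how PHP is installed; with PHP-FPM on Ubuntu Xenial, the location block might look like this (the socket path is an assumption and should match the installed PHP version):

location ~ \.php$ {
    include snippets/fastcgi-php.conf;           # shipped with the Ubuntu nginx package
    fastcgi_pass unix:/run/php/php7.0-fpm.sock;  # PHP-FPM socket, assuming php7.0-fpm
}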
I found the solution.
First of all, when I keep only one config file, nginx doesn't listen on port 80; I checked with sudo netstat -ntlp | grep LISTEN and port 80 wasn't there. So I googled and found another question on Stack Overflow (see the link at the end).
Solution: recreate the symlink to my config file. After that, when I ran sudo nginx -t, I saw a few errors. It seems the file was empty before or something like that, but I didn't notice because I was editing the file directly in the sites-available folder.
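Recreating the symlink looks something like this (the config file name is hypothetical):

sudo rm /etc/nginx/sites-enabled/wildcard.conf
sudo ln -s /etc/nginx/sites-available/wildcard.conf /etc/nginx/sites-enabled/wildcard.conf
sudo nginx -t && sudo service nginx reload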
Thanks to everybody!
This question helped me solve the problem: nginx not listening to port 80
I'm attempting to manage a Windows web server with the Chef 'iis' cookbook and Vagrant.
When I attempt to run chef-solo, it throws an error that the cookbook is not found.
The error message:
sowens-MBP:vagrant-windows sowen$ vagrant provision
==> default: Running provisioner: chef_solo...
==> default: Vagrant does not support detecting whether Chef is installed
==> default: for the guest OS running in the machine. Vagrant will assume it is
==> default: installed and attempt to continue.
Generating chef JSON and uploading...
==> default: Running chef-solo...
==> default: [2015-02-10T16:18:24-08:00] INFO: *** Chef 12.0.3 ***
==> default: [2015-02-10T16:18:24-08:00] INFO: Chef-client pid: 2508
==> default: [2015-02-10T16:18:30-08:00] INFO: Setting the run_list to ["recipe[example-webserver2012]"] from CLI options
==> default:
==> default: [2015-02-10T16:18:30-08:00] INFO: Run List is [recipe[example-webserver2012]]
==> default: [2015-02-10T16:18:30-08:00] INFO: Run List expands to [example-webserver2012]
==> default: [2015-02-10T16:18:30-08:00] INFO: Starting Chef Run for vagrant-2012-r2.nv.com
==> default: [2015-02-10T16:18:30-08:00] INFO: Running start handlers
==> default: [2015-02-10T16:18:30-08:00] INFO: Start handlers complete.
==> default: [2015-02-10T16:18:30-08:00] ERROR: Running exception handlers
==> default:
==> default: [2015-02-10T16:18:30-08:00] ERROR: Exception handlers complete
==> default: [2015-02-10T16:18:30-08:00] FATAL: Stacktrace dumped to C:/var/chef/cache/chef-stacktrace.out
==> default: [2015-02-10T16:18:30-08:00] FATAL: Chef::Exceptions::CookbookNotFound: Cookbook iis not found. If you're loading iis from another cookbook, make sure you configure the dependency in your metadata
Chef never successfully completed! Any errors should be visible in the
output above. Please fix your recipes so that they properly complete.
Vagrantfile:
# -*- mode: ruby -*-
# vi: set ft=ruby :

VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(2) do |config|
  config.vm.box = "lmayorga1980/windows-2012r2"
  config.vm.communicator = "winrm"

  config.vm.network "forwarded_port", host: 3389, guest: 3389

  config.vm.provider "virtualbox" do |v|
    v.cpus = 2
    v.memory = 2048
  end

  # Provisioning
  config.vm.provision "chef_solo" do |chef|
    # chef.cookbooks_path = ['C:\vagrant']
    chef.add_recipe "example-webserver2012"
  end
end
Berksfile
source "https://supermarket.chef.io"
metadata
cookbook 'iis'
metadata.rb
name 'foobar'
...
depends 'iis'
recipes/default.rb
iis_site 'Default Web Site' do
action [:stop, :delete]
end
The entire directory structure looks like this:
The cookbook was created with berks cookbook example-webserver2012.
There are 2 Vagrantfiles; I'm using the one at the top level.
$ tree vagrant-windows/
vagrant-windows/
├── Vagrantfile
└── cookbooks
    └── example-webserver2012
        ├── Berksfile
        ├── Berksfile.lock
        ├── CHANGELOG.md
        ├── Gemfile
        ├── Gemfile.lock
        ├── LICENSE
        ├── README.md
        ├── Thorfile
        ├── Vagrantfile
        ├── attributes
        ├── chefignore
        ├── files
        │   └── default
        ├── libraries
        ├── metadata.rb
        ├── providers
        ├── recipes
        │   └── default.rb
        ├── resources
        ├── templates
        │   └── default
        └── test
            └── integration
Why is cookbook 'iis' not found?
The reason the IIS cookbook isn't found is that your wrapper cookbook, example-webserver2012, declares a dependency on the IIS cookbook with Berkshelf. Unfortunately, Vagrant with the Chef Solo provisioner does not know how to resolve Berkshelf dependencies out of the box. You have a couple of options here:
Use berks vendor to create a folder containing all the resolved cookbooks, and point the Vagrantfile at that folder (see the sketch after this list).
Use the vagrant-berkshelf plugin to do Berkshelf dependency resolution when you run vagrant provision.
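For the first option, a minimal sketch (the vendored folder name is an assumption). From the cookbook directory, where the Berksfile lives:

berks vendor ../../vendored-cookbooks

Then point the chef_solo provisioner at that folder in the top-level Vagrantfile:

config.vm.provision "chef_solo" do |chef|
  chef.cookbooks_path = "vendored-cookbooks"
  chef.add_recipe "example-webserver2012"
end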
From a workflow perspective, I find the vagrant-berkshelf plugin very useful.
P.S. In your Berksfile, you don't need to declare a dependency on the IIS cookbook; that dependency will be picked up from the metadata.rb because of the metadata line in your Berksfile.
I'm managing a few web services residing on different fixed hosts over SSH. I wanted to use Vagrant so that I can edit local files and have them synced automagically.
However, I'm having problems since I'm not using any provider or box; it's a fixed host, and it feels like I'm going against Vagrant's aim.
Here's my Vagrantfile:
# -*- mode: ruby -*-
# vi: set ft=ruby :

# Vagrantfile API/syntax version. Don't touch unless you know what you're doing!
VAGRANTFILE_API_VERSION = "2"

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.ssh.host = ...
  config.ssh.username = ...
  config.ssh.private_key_path = ".ssh/id_rsa"

  config.vm.synced_folder "src/", "..."

  config.vm.box = "myhost"

  config.vm.provision :shell, :path => "bootstrap.sh"
end
And here's my bootstrap.sh file:
pip install flask sqlalchemy
But I can't make Vagrant skip the provider step (VirtualBox or the like).
Well, as it always turns out, fighting against your tool to force it to do things it's not designed for is a bad idea.
There was probably a way to make Vagrant use an empty box, but Vagrant is too much for just keeping two directories synced. I found a nice tool that does exactly what Vagrant does for syncing, just without all the provider/provisioning machinery.
I'm trying to start Elasticsearch in a cluster with 2 nodes.
I run the command:
service elasticsearch start
Then I run additional instances of elasticsearch, intending them to join the cluster, with:
/bin/elasticsearch
But when I check the head plugin at localhost:9200/_plugin/head/, the cluster health status is yellow and the nodes haven't joined the cluster.
How can I configure the two nodes to make them join the cluster?
Thanks.
EDIT:
This is what I get:
root@vmi17663:~# curl -XGET 'http://localhost:9200/_cluster/nodes?pretty=true'
{
  "ok" : true,
  "cluster_name" : "nearCluster",
  "nodes" : {
    "aHUjm3SjQa6MbRoWCnL4pQ" : {
      "name" : "Primary node",
      "transport_address" : "inet[/ip@dress:9300]",
      "hostname" : "HOSTNAME",
      "version" : "0.90.5",
      "http_address" : "inet[/ip@dress:9200]"
    }
  }
}
root@vmi17663:~# curl -XGET 'http://localhost:9201/_cluster/nodes?pretty=true'
{
  "ok" : true,
  "cluster_name" : "nearCluster",
  "nodes" : {
    "pz7dfIABSbKRc92xYCbtgQ" : {
      "name" : "Second Node",
      "transport_address" : "inet[/ip@dress:9301]",
      "hostname" : "HOSTNAME",
      "version" : "0.90.5",
      "http_address" : "inet[/ip@dress:9201]"
    }
  }
}
I made it work!
As expected, it was an iptables problem. I added this rule:
-A INPUT -m pkttype --pkt-type multicast -j ACCEPT
and everything went smoothly (Elasticsearch 0.90 uses multicast for node discovery by default, so the firewall was blocking discovery).
Make sure you have a different elasticsearch.yml file for each node.
Make sure each is configured to join the same cluster via cluster.name: "mycluster"
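A minimal pair of per-node configs might look like this (node names and ports are assumptions):

# elasticsearch-1/config/elasticsearch.yml
cluster.name: mycluster
node.name: "node-1"
http.port: 9200
transport.tcp.port: 9300

# elasticsearch-2/config/elasticsearch.yml
cluster.name: mycluster
node.name: "node-2"
http.port: 9201
transport.tcp.port: 9301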
You can start additional nodes (new JVM processes) off the same code install like this:
<es home>/bin/elasticsearch -d -Des.config=<wherever>/elasticsearch-1/config/elasticsearch.yml
<es home>/bin/elasticsearch -d -Des.config=<wherever>/elasticsearch-2/config/elasticsearch.yml
My setup looks like this:
elasticsearch-1.0.0.RC1
├── LICENSE.txt
├── NOTICE.txt
├── README.textile
├── bin
├── config
├── data
├── lib
├── logs
└── plugins
elasticsearch-2
├── config
├── data
├── logs
├── run
└── work
elasticsearch-3
├── config
├── data
├── logs
├── run
└── work
elasticsearch-1
├── config
├── data
├── logs
├── run
└── work
I start all three with aliases like this:
alias startes1='/usr/local/elasticsearch-1.0.0.RC1/bin/elasticsearch -d -Des.config=/usr/local/elasticsearch-1/config/elasticsearch.yml'
alias startes2='/usr/local/elasticsearch-1.0.0.RC1/bin/elasticsearch -d -Des.config=/usr/local/elasticsearch-2/config/elasticsearch.yml'
alias startes3='/usr/local/elasticsearch-1.0.0.RC1/bin/elasticsearch -d -Des.config=/usr/local/elasticsearch-3/config/elasticsearch.yml'
If your nodes don't join, then you need to check your cluster.name setting, and make sure that the nodes can communicate with each other via port 9300 (9200 is for incoming client traffic, and 9300 is for node-to-node traffic).
So, as @mcolin mentioned, make sure your cluster name is the same for each node. To do so, open your /etc/elasticsearch/elasticsearch.yml file on your 1st server, find the line that says "cluster.name", and note what it is set to. Then go to your other servers and make sure they are set to the exact same thing.
To do this, you could run this command:
sudo vim /etc/elasticsearch/elasticsearch.yml
and set the following line to be something like:
cluster.name: my_cluster_name
Additionally, your nodes might not be able to talk to each other. My nodes are running on AWS, so I went to my EC2 panel and made sure my instances were in the same security group. Then I set my security group to allow all instances within it to talk to each other by creating a rule like this:
Custom TCP Rule TCP 9300 dev-elasticsearch
(or to be wild and dangerous, set this:)
All traffic All All dev-elasticsearch
Within a minute of setting this I checked my cluster status and all was well:
curl -XGET 'http://127.0.0.1:9200/_cluster/health?pretty=true'