Moving data from a web app on one Vagrant box to another - vagrant

I've got a web app running on one Vagrant box. However, I've started building a different app to do the same job in another Vagrant box. I'd like to keep the original box working as-is so that I can compare results between the two, or in case I need to reimport the data into the second box. What is the best way to connect the two together?

Store the application data on the host machine. The /vagrant folder always points to the folder containing the Vagrantfile on your host machine.
Then copy/move the necessary data, or use rsync to sync the data across two folders on the host.
Alternative:
Give both Vagrant boxes a static IP
Use rsync to sync data selectively between both machines
Write bash scripts for automatic execution; a sketch of both approaches follows the link below.
https://www.digitalocean.com/community/tutorials/how-to-use-rsync-to-sync-local-and-remote-directories-on-a-vps
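For example, a minimal sketch of both approaches (all paths, IPs, and folder names below are hypothetical):

    # Approach 1: both boxes mount /vagrant from these host folders, so a
    # host-side rsync effectively moves data from one box to the other.
    rsync -av --delete ~/projects/old-app/data/ ~/projects/new-app/data/

    # Approach 2: with static IPs assigned in each Vagrantfile (the
    # 192.168.50.x addresses are made-up examples), sync directly from
    # inside one box into the other over SSH:
    rsync -avz -e ssh /var/www/app/data/ vagrant@192.168.50.5:/var/www/app/data/

For approach 2 you may need to set up SSH keys between the boxes first, since the default vagrant user normally authenticates with a per-box insecure key.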

Related

Vagrant DB box - what are the best practices?

Our production server setup is quite standard:
API + WEB + DB servers.
The API is mainly the one to access the DB, but the WEB does that also in certain cases.
I want to create a similar local setup using Vagrant.
This is where I got so far:
I have 2 git projects, a WEB and an API.
I turned them into Vagrant projects by putting a Vagrantfile in both main directories. Each Vagrantfile points to a dedicated box which includes all the server dependencies.
Both VM's take the code from the mounted vagrant folder. So far - it works like a charm.
Now, I've got to the point where I need to create a VM for the DB. The thing is, I obviously don't have a DB git project - so where do I put the Vagrantfile in this case? It's very convenient that the Vagrantfile is part of the code.
What are the best practices?
I hope my question makes sense.
Thanks a lot.
I see 2 possibilities:
Create another Vagrantfile just for the DB. Even if you have no code associated with it, you can still have a git project containing only the Vagrantfile.
The downside is that you need to start Vagrant from 3 different files, so it's not ideal.
Put the DB VM into either the API or the WEB project (WEB would probably make more sense, but it depends on your project), so you will start 2 VMs from the same Vagrantfile; see the sketch below.
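A minimal multi-machine sketch of the second option (the box name and IPs are hypothetical), written into the WEB project so that one vagrant up starts both VMs:

    # Hedged sketch: one multi-machine Vagrantfile in the WEB project
    # defining both the WEB and the DB VM.
    cat > Vagrantfile <<'EOF'
    Vagrant.configure("2") do |config|
      config.vm.define "web" do |web|
        web.vm.box = "precise64"
        web.vm.network "private_network", ip: "192.168.50.4"
      end
      config.vm.define "db" do |db|
        db.vm.box = "precise64"
        db.vm.network "private_network", ip: "192.168.50.5"
      end
    end
    EOF
    vagrant up        # brings up both VMs
    vagrant ssh db    # SSH into only the DB VM

The private-network IPs also let the WEB VM reach the DB VM directly, which mirrors the production topology.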

Amazon AWS EC2 - Elastic IP - can I mirror my site while closing certain ports?

THE MISSION:
I have a development environment running on an Amazon AWS EC2 virtual server which I want to have tested by third parties.
THE PROBLEM:
I do NOT trust the companies who will test it not to sabotage the environment and/or steal code. Therefore, I don't want them to know URLs or permanent IPs, or even to access the web pages, which they could eventually find with a crawler.
My environment includes web applications and socket servers. I do NOT want to expose the web applications, while giving access only to socket servers.
THE CONCEPT:
I have opted to use a secondary, impermanent Elastic IP pointing to the environment. This IP will be destroyed after 1 or 2 days, once basic tests have run. Subject to change (depending on suggestions from this thread).
THE QUESTION:
Can I create a secondary Elastic IP instance that allows access only to ports 5000-5100? If so, how?
THE ALTERNATIVE: In case this is not the most efficient procedure, what alternative would you propose?
MY SOLUTIONS: followed FAQ Launching Instance From Backup
create snapshot
create image from snapshot (snapshot menu - create image tag)
instances - launch instance
choose image created from snapshot as your root volume
edit security groups (open the port range for sockets only, no web; see the CLI sketch after this list)
delete all web code from this instance
after 2 days, delete the instance
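A hedged AWS CLI sketch of the security-group step (group name and CIDR are hypothetical). Note that the port filtering lives in the security group attached to the instance, not in the Elastic IP itself:

    # Create a security group that exposes only the socket port range.
    aws ec2 create-security-group \
        --group-name sockets-only \
        --description "Socket servers only, no web"

    # Allow inbound TCP 5000-5100 and nothing else (no 80/443 rules).
    aws ec2 authorize-security-group-ingress \
        --group-name sockets-only \
        --protocol tcp \
        --port 5000-5100 \
        --cidr 0.0.0.0/0

Launch the mirrored instance into this group and the web ports stay closed even if some web code were still present.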
followed Create Image From Instance
select (only) the running instance you wish to mirror
right click on the selected instance
choose create image from the dropdown
steps 4 to 7: same as above
This second solution seems to be more stable (especially re: status check and connectivity issues).
Any better solutions? Thanks!

How to create a partition in a remote ApacheDS LDAP server?

I know how to create a partition in a local ApacheDS instance from this article. The current problem is that I don't know how to create a partition in a remote ApacheDS.
I am accessing the remote ApacheDS server (on CentOS) from Apache Directory Studio (on Windows).
Any help would be appreciated.
ApacheDS
Version: 2.0.0-M14
Apache Directory Studio
Version: 2.0.0.v20130517
I don't know if your problem is that you can't access the remote instance, or something else.
But if you want to create a partition, follow this "guide".
ApacheDS seems to have a very bad tutorial.
Contrary to the other answers, here I explain the real problem. The sad truth is the following:
You can't manipulate the partitions of a non-local Apache Directory Server with Apache Directory Studio.
You can't even do this with a locally running one. The only partitions you can manipulate are those of the Apache Directory Servers running inside your Apache Directory Studio.
However, there is a workaround for the problem. It is particularly useful if you are using Linux, or at least have Cygwin at hand.
The Apache Directory Server has a complex directory structure, full of small files, containing partially binary and partially text data.
This data structure doesn't contain any filesystem references, so you can freely clone it.
Create an LDAP server inside your Apache Directory Studio. Open its properties. You get a popup form. Inside this form, you will see something like this:
Location /your/home/directory/.ApacheDirectoryStudio/.metadata/.plugins/org.apache.directory.studio.ldapservers/servers/e56640c7-70ed-4eed-921c-75c475117a11
This is what you want!
This is the directory structure, where your local ApacheDS is running!
And you can now easily synchronize this data structure, ideally with a simple rsync command, to your server and back!
So,
You create the new Apache Directory Server instance inside the Apache Directory Studio
You check its properties
You stop it, and synchronize the server-side instance directory into this local one! For example: rsync -va --delete you@your.server.com:/srv/apacheds/instance/ /your/home/directory/.ApacheDirectoryStudio/.metadata/.plugins/org.apache.directory.studio.ldapservers/servers/e56640c7-70ed-4eed-921c-75c475117a11
You play with the partitions as you wish
You synchronize it back.
Of course, if you are manipulating the Apache Directory Server file structure at such a low, file-system level, the server needs to be stopped!
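Condensed into a script, the round trip looks roughly like this (the instance UUID and server path reuse the examples above):

    # Stop both the remote server and the local Studio-managed one first.
    LOCAL=~/.ApacheDirectoryStudio/.metadata/.plugins/org.apache.directory.studio.ldapservers/servers/e56640c7-70ed-4eed-921c-75c475117a11
    REMOTE=you@your.server.com:/srv/apacheds/instance

    # Pull the remote instance into the local Studio server directory...
    rsync -va --delete "$REMOTE/" "$LOCAL/"

    # ...edit the partitions in Apache Directory Studio, stop the local
    # server again, then push the modified instance back:
    rsync -va --delete "$LOCAL/" "$REMOTE/"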

best practices for uploading many files to a live server while updating the database

I have roughly 200 files that I need to push to our live server after business hours. In addition to this push, I have a few database updates that I need to run in conjunction with this rollout.
What has been done in the past on this system is to create a directory on the server containing the updated files, create a cron script to copy those files over their previous versions on the server, and then execute the calls to the database.
Here are the problems I am trying to work around:
1) There is no staging server.
2) There is no easy way to push from our version control (svn) to our live server.
3) There are a lot of files and the directory structure is deep, so setting up a copy of the directories to be copied over on the server seems precarious and time-consuming.
What's the best way to do this?
The way I've done similar things in the past is to have a cron job run a script on an administrative machine that:
1) checks out the files I need on my production server onto some sort of staging machine
2) rsyncs the files onto the server
3) runs a post-rsync script on the server (say, via ssh'ing to the server)
However, you specify that you have no ability to use a staging machine, by which I assume you mean that you have no administrative machine at all, and that you cannot check out your repository on the server either. That makes doing this cleanly far harder. Are you sure you can't at least use your workstation or some similar box as an administrative or staging machine here?
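For what it's worth, a minimal sketch of the three steps above as a single cron-driven script (repository URL, hostnames, and paths are all hypothetical):

    #!/bin/sh
    set -e

    STAGE=/tmp/deploy-stage
    SERVER=deploy@live.example.com
    DOCROOT=/var/www/site

    # 1) Export the files from svn onto the staging/admin machine.
    svn export --force https://svn.example.com/repo/trunk "$STAGE"

    # 2) rsync the exported tree onto the live server.
    rsync -avz --delete "$STAGE/" "$SERVER:$DOCROOT/"

    # 3) Run the post-rsync step, e.g. the database updates, over ssh.
    ssh "$SERVER" "mysql mydb < $DOCROOT/db/updates.sql"

Using svn export rather than checkout avoids shipping .svn metadata to the live server.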

Windows Azure - Persistence of OS Settings when using WebRoles

I've been watching some videos from the build conference re: Inside Windows Azure etc.
My takeaway from one of them was that, unless I loaded a preconfigured VHD into a virtual machine role, I would lose any system settings that I might have made should the instance be brought down or recycled.
So for instance, I have a single account with 2 Web Roles running multiple (small) websites. To make that happen I had to adjust the settings in the hosts file. I know my websites will be carried over in the event of failure because they are defined in the ServiceConfiguration.cscfg, but will my hosts file settings also carry over to a fresh instance in the event of a failure?
i.e. how deep/comprehensive is my "template" with a web role?
The hosts file will be reconstructed on any full redeployment or reimage.
In general, you should avoid relying on changes to any file that is created by the operating system. If your application is migrated to another server it will be running on a new virtual machine with its own new copy of Windows, and so the changes will suddenly appear to have vanished.
The same will happen if you perform a deployment to the Azure "staging" environment and then perform a "swap VIP": the "staging" environment will not have the changes made to the operating system file.
Microsoft intentionally doesn't publish inner details of what Azure images look like, as they will most likely change in the future, but currently:
drive C: holds the boot partition, logs, temporary data and is small
drive D: holds a Windows image
drive E: or F: holds your application
On a full deployment, or a re-image, you receive a new virtual machine so all three drives are re-created. On an upgrade, the virtual machine continues to run but the load balancer migrates traffic away while the new version of the application is deployed to drive F:. Drive E: is then removed.
So, answering your question directly, the "template" is for drive E: -- anything else is subject to change without your knowledge, and can't be relied on.
Azure provides Startup Scripts so that you can make configuration changes on instance startup. Often these are used to install additional OS components or make IIS-configuration changes (like disabling idle timeouts).
See http://blogs.msdn.com/b/lucascan/archive/2011/09/30/using-a-windows-azure-startup-script-to-prevent-your-site-from-being-shutdown.aspx for an example.
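For illustration, the startup task's .cmd file along the lines of the linked post might look like this (the file name is an example; the appcmd call disables the IIS application pool idle timeout so idle sites aren't shut down):

    REM startup.cmd - run as a startup task registered in ServiceDefinition.csdef.
    %windir%\system32\inetsrv\appcmd set config -section:applicationPools -applicationPoolDefaults.processModel.idleTimeout:00:00:00
    exit /b 0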
The existing answers are technically correct and answer the question, but hosting multiple web sites in a single web role doesn't require editing the hosts file at all. Just define multiple web sites (with different host headers) in your ServiceDefinition.csdef. See http://msdn.microsoft.com/en-us/library/gg433110.aspx
