So I just started working with Vagrant and I'm wondering what the most effective structure or workflow is. Should I create a separate VM for every project, or should I use one VM for many projects? For example, I have 5 WP projects that are very different, from a landing page to WooCommerce; should I separate them or put them all in one VM? I think putting many projects into one VM defeats the purpose of Vagrant; on the other hand, putting every project into a separate VM seems like overkill. Or is this normal practice?
Here is a visual example of what I'm talking about:
VM per project
VM with many projects/vhosts
So which one is better? Or does it depend on the situation, with no single correct answer?
Or does it depend on the situation, with no single correct answer?
I think that's your correct answer!
The best approach is always to isolate projects from each other, especially if they have no dependencies on each other. If they need different PHP versions and so on, it's best to isolate them into separate VMs.
On the other hand, do you need to work on all 5 projects at the same time? Starting 5 VMs on your host is overkill and you'll quickly run into performance issues (unless you have 64+ GB of RAM).
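As a rough sketch of the layouts above (project names, box, and IPs are placeholders I made up, not anything from the question), a single multi-machine Vagrantfile lets you keep one VM per project while only starting the ones you are actually working on:

```ruby
# Vagrantfile -- minimal multi-machine sketch; names, box and IPs are hypothetical
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/focal64"

  # One VM per project: `vagrant up landing` starts only this machine
  config.vm.define "landing" do |m|
    m.vm.network "private_network", ip: "192.168.56.10"
    m.vm.synced_folder "./landing", "/var/www/landing"
  end

  # A heavier project (e.g. the WooCommerce one) gets its own machine and more memory
  config.vm.define "shop" do |m|
    m.vm.network "private_network", ip: "192.168.56.11"
    m.vm.synced_folder "./shop", "/var/www/shop"
    m.vm.provider "virtualbox" do |vb|
      vb.memory = 2048
    end
  end
end
```

With this kind of layout you get isolation per project but only pay the RAM cost of the machines you actually have running; `vagrant halt shop` frees the resources again when you switch projects.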
Related
I am aware of the process to install WAS 8.5.5.x and 9.0.x using IM response file(s), but I would like to know the best practices and recommendations for performing WAS installations and upgrades on more than one server, to avoid manual errors and reduce time.
I am open to using Ansible, Puppet, or other orchestration tools as well, but I would also like to know the possible options if we are not allowed to use such tools.
The ultimate goal is to automate most of the setup/upgrade steps, if not all of them, since we are dealing with a bunch of servers.
Thanks
Assuming you are referring to WebSphere Application Server traditional, take a look at the approaches described here, https://www.ibm.com/support/knowledgecenter/SSEQTP_9.0.0/com.ibm.websphere.installation.base.doc/ae/tins_enterprise_install.html, especially if you are working with larger scale deployments.
Consider creating master images and distributing them in a swinging profile-type setup. They make it easier and faster to install and apply updates since you only need to create images once and distribute many times. You have consistency across systems too.
You can then automate with your preferred automation technology.
We use Ansible; it's simple and effective.
True, you must of course develop a playbook that will be able to do all this.
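As a rough sketch of what such a playbook might look like (the host group, file names, and install paths are hypothetical placeholders; the imcl location shown is just the usual Installation Manager default), driving an existing IM response file across many hosts could be as simple as:

```yaml
# install_was.yml -- hypothetical sketch; adjust paths and the response file to your environment
- hosts: was_servers                  # assumed inventory group of target servers
  become: true
  tasks:
    - name: Copy the prepared IM response file to the target host
      copy:
        src: files/was90_install.xml
        dest: /tmp/was90_install.xml

    - name: Run Installation Manager silently with the response file
      command: >
        /opt/IBM/InstallationManager/eclipse/tools/imcl
        input /tmp/was90_install.xml
        -acceptLicense -log /tmp/was90_install.log
      args:
        creates: /opt/IBM/WebSphere/AppServer/bin/versionInfo.sh   # skip if WAS is already installed
```

The same pattern (copy the response file, run imcl silently) also fits fix pack upgrades, and the `creates` guard keeps the play idempotent across repeated runs.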
I'm having some problems with the principle that "the development environment should be as close as possible to the production environment".
(Production machine's operating system is Linux.)
My understanding of development steps (roughly):
code, compile, test/run, repeat
"Normally" I would go through these on my own machine, then push the code to CI for testing, and possibly deploy. The CI would be responsible for running the tests in an environment that matches production, this way if the tests pass, it's safe to assume that the code works in production as well.
The problem of a larger environment
☑ Database - of some kind.
☑ Job Processing Pool - for some long-running background tasks.
☑ User Account Management - used by other systems as well.
☑ Centralized Logging - for sanity.
☑ Forward Proxy - to tie individual http-accessible services under the same url but different paths.
☐ And possible other services or collections of services.
Solutions?
All on my own machine? No way in hell.
All on a virtual machine? Maybe, but security-wise, if this setup is supposed to mirror the production environment, and the production environment really looked like this, well... that might not be such a good idea in case of a breach.
Divide by responsibility and set them up on multiple virtual machines? Who's gonna manage all those machines? I think it's possible to do better than this.
Use containers such as Docker, or slap something similar together yourself? Sounds good: (possibly) very fast iteration cycles, separation of concerns, some security through separation, and easy reproducibility.
For the sake of simplicity, let's say that our containerization tooling of choice is Docker, and we are not going to build one ourselves with libvirt / lxc tooling / direct kernel calls.
So Docker it is, possibly with CoreOS or Project Atomic. Now there is a container for an application (or for multiple applications) that is separated from the rest of the system and can be brought up nearly identically anywhere.
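As a rough illustration (the services and images below are hypothetical placeholders mapped onto the checklist above, not the poster's actual stack), a Compose file can tie a few of those pieces together so the whole set comes up with one command:

```yaml
# docker-compose.yml -- hypothetical sketch of part of the environment described above
services:
  db:                          # "Database - of some kind"
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example

  app:                         # the application under development
    build: .
    volumes:
      - ./src:/app/src         # in development, bind-mount the source so local edits are visible
    depends_on:
      - db

  proxy:                       # "Forward Proxy" tying services under one URL but different paths
    image: nginx:alpine
    ports:
      - "8080:80"
```

`docker compose up` then brings the whole set up nearly identically on any machine that runs Docker.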
Solution number 1: Production environment is pretty and elegant.
Problem number 1: This is not a development environment.
The development environment
Whatever I choose in order to avoid sprinkling the production environment onto my own machine, the problem remains the same:
Even though the production environment is correctly set up, I still have to run compilation and testing somewhere before I can deploy (be it to another round of testing by CI or whatever).
How do I solve this?
Can it really be that the proper way to solve this is to write code on my own machine and have it synchronized/directly visible in a virtualized, production-like mirror environment that automates running the tests?
What happens when I don't want to run all the tests, but only the part I'm writing right now? Do I edit the automated build process every time? What about remote debugging, since multiple systems must be orchestrated to run in the correct way and the debugger must attach to one of the programs in the middle of all this? Not to mention the speed of the "code, test" cycle, which would be _very_ slow.
This sounds a hell of a lot like CI, but multiple developers can't all use and modify the same CI instance, so they would probably each have to have this setup on their own machine.
I was also thinking that the developers could each use a completely virtualized OS that contained all the development tools and mirrored the production environment, but that would force veteran users to adopt the tooling of the virtual development environment, which doesn't sound like such a good idea.
Imagine you're going to manage a number of servers running a number of different services that are used by a number of people. Now say you want to reconfigure or replace some software on one of those servers. Obviously you don't want to do that work on servers that are in production.
If this was a code change, as a developer, I would make the change on my local development machine, test it locally and commit the change to a version control system. The changes could then be deployed in a staging environment, tested further and finally deployed in a production environment. It would also be easy for me to roll back, if necessary.
Generally, or specifically, how do you achieve this in system administration?
(The first thing that comes to mind is to use virtual machines and put virtual machine images in version control, but I'm sure there is a lot of literature and clever solutions I'm not presently aware of.)
Use Chef or Puppet to enforce machine configurations, and place their cookbooks and recipes under version control. Yes, VMs would make things easier, but even physical server provisioning can be controlled by Kickstart or preseed, which can again be version controlled.
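For a feel of what "configuration under version control" looks like in practice, here is a minimal Chef-style sketch (the cookbook, package, and template names are purely illustrative); the recipe lives in Git, gets reviewed like any code change, and is applied to staging before production:

```ruby
# cookbooks/webserver/recipes/default.rb -- illustrative sketch, not a real cookbook
package 'nginx'                       # ensure the package is installed

template '/etc/nginx/nginx.conf' do   # config rendered from a versioned template
  source 'nginx.conf.erb'
  notifies :reload, 'service[nginx]'  # reload only when the rendered file actually changes
end

service 'nginx' do
  action [:enable, :start]            # keep the service enabled and running
end
```

Rolling back is then a matter of reverting the commit and re-running the client, the same way you would revert an application change.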
Will it be slow if I set this up?
I have both running on my machine and I wanted to set up CI with TFS 2010, so that every time I check in code it kicks off a build. Will coding while a build is running make my computer really slow?
I just want to test everything else before investing in a separate machine for the builds and stuff.
Slow yes, and from a build quality point of view, I'd be concerned. Developer machines (mine included) have all sorts of ugly things installed on them, and hacks to make things work. I'm a really big fan of having a dedicated build machine (virtual or real).
Yes, it will be slow. Especially if your machine will build when others check in too. If you are the only one making commits, it'll probably be just about bearable.
One of the advantages that a build server brings is preventing the "works fine on my box" arguments. So I'd consider using a VM in the first phase to show the benefits of CI to the executives. After that, making the case for a dedicated build server will be easier.
I am experimenting with different open source projects just to see which one I can work with, since I am a beginner. Of course, many projects have different dependencies and programs that you must install. I want to keep things organized and I don't want to pollute my main Windows account, since I also use this machine for everyday computing.
Will creating a separate Windows account on my computer help separate the dependencies for the projects? Are there any better alternatives (other than using virtual machines)?
Thanks
Although you asked for alternatives, virtual machines are actually the best option if you're planning to work with a lot of different projects and don't want to pollute your environment too much. If you build them right, you can keep a baseline VM that is simple to revert to whenever the environment gets too polluted and you want to start from scratch.
The problem with only using a separate account is that many installable tools and libraries are still made available to all users on the machine, so it doesn't keep things clean. For example, Visual Studio tools typically apply to all users on the machine, COM dependencies aren't user-specific, and some products install Windows services that need to be running most of the time but that you don't use unless you're developing against them (like SQL Server Reporting Services).
How about installing several different operating systems?
If you have enough money, you can also buy a computer just for experimenting.
Virtual machines are definitely the way to go; most non-trivial software modifies machine state beyond HKEY_CURRENT_USER. If you don't want full-blown virtual machines (but oh, they're sweet, especially the ones supporting state snapshots!), you could look at something like Sandboxie.