Performance issue with local dev environment - microservices

I have a large project with a large set of microservices. When I am working locally, my computer barely handles running all of them plus the IDE, MySQL, and Mongo. This is blocking me from adding more microservices.
Is there a known practice for working in this kind of environment?

Related

Trouble developing on mirrored, but separate, production environment

I'm having some problems with the principle that the "development environment should be as close as possible to the production environment".
(Production machine's operating system is Linux.)
My understanding of development steps (roughly):
code, compile, test/run, repeat
"Normally" I would go through these on my own machine, then push the code to CI for testing, and possibly deploy. The CI would be responsible for running the tests in an environment that matches production, this way if the tests pass, it's safe to assume that the code works in production as well.
The problem of a larger environment
☑ Database - of some kind.
☑ Job Processing Pool - for some long-running background tasks.
☑ User Account Management - used by other systems as well.
☑ Centralized Logging - for sanity.
☑ Forward Proxy - to tie individual http-accessible services under the same url but different paths.
☐ And possible other services or collections of services.
Solutions?
All on my own machine? No way in hell.
All on a virtual machine? Maybe, but security-wise, if this setup is supposed to mirror the production environment, and the production environment were really built like this, well... that might not be such a good idea in case of a breach.
Divide by responsibility and set them up on multiple virtual machines? Who's gonna manage all those machines? I think it's possible to do better than this.
Use containers such as Docker, or slap something similar together yourself? Sounds good: (possibly) very fast iteration cycles, separation of concerns, some security by separation, and easy reproducibility.
For the sake of simplicity, let's say that our containerization tooling of choice is Docker, and we are not going to build one ourselves with libvirt / lxc tooling / direct kernel calls.
So Docker it is, possibly with CoreOS or Project Atomic. So now there is a container for an application (or multiple applications) that has been separated from the rest of the system, and can be brought up nearly identically anywhere.
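To make this concrete, I imagine the per-service containers being wired up with something roughly like the following (the image names, network, and ports are hypothetical placeholders, not an actual setup):

    # one private network so the containers can reach each other by name
    docker network create devnet

    # database and job-processing pool as separate containers
    docker run -d --name db --network devnet -e MYSQL_ROOT_PASSWORD=dev mysql:5.6
    docker run -d --name jobs --network devnet example/job-pool:latest

    # the forward proxy ties the individual http-accessible services together under one url
    docker run -d --name proxy --network devnet -p 80:80 example/front-proxy:latest

Each of these could then be rebuilt and brought up nearly identically on any machine that runs Docker.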
Solution number 1: The production environment is pretty and elegant.
Problem number 1: This is not a development environment.
The development environment
Whatever the choice, so as not to have to sprinkle the production environment onto my own machine, the problem remains the same:
Even though the production environment is correctly set up, I have to run the compilation and testing somewhere, before being able to deploy (be it to another testing round by CI or whatever).
How do I solve this?
Can it really be that the proper way to solve this is to write code on my own machine and have it synchronized / directly visible in a virtualized, mirrored, production-like environment, which automates the running of the tests?
What happens when I don't want to run all the tests, but only the portion that I'm writing right now? Do I edit the automated compilation process every time? What about remote debugging, since multiple systems must be orchestrated to run in the correct way, and the debugger must attach to one of the programs in between? Not to mention the speed of the "code, test" cycle, which would be _very_ slow.
This sounds a hell of a lot like CI, but multiple developers can't all use and modify the same CI setup, so they probably have to have this setup on their own machines.
I was also thinking that the developers could each use a completely virtualized OS that contained all the development tools and mirrored the production environment, but that would force veteran users to adopt the tooling of the virtual development environment, which doesn't sound like such a good idea.
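To make the question concrete, the sync-and-test idea above might boil down to something like this, assuming Docker and a Maven project (the image name, paths, and test class are made-up placeholders):

    # mount the working copy from the host into a container built from the production-like image,
    # and run only the test class I am working on right now
    docker run --rm -v "$(pwd)":/src -w /src example/build-env:latest \
        mvn -Dtest=InvoiceServiceTest test

    # remote debugging could work by exposing the JVM debug port from the container
    docker run --rm -v "$(pwd)":/src -w /src -p 5005:5005 example/build-env:latest \
        mvn -Dmaven.surefire.debug test

But even then, every "code, test" round goes through a container start, which is exactly the slowness I am worried about.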

Number of deployed portlets - influence on performance

I am wondering: how does the number of deployed portlets affect the performance of Liferay?
If I have deployed 50 WAR projects (about 80 portlets in total), can that be the cause of very slow performance?
Or will it affect Liferay's performance only insignificantly?
The number of deployed projects, extracted into tomcat/webapps, will certainly have a dramatic effect on your server startup time.
Depending on the portlets' scope and tasks, they might add some overhead to your browsing performance, but I don't think it's a big deal as long as you don't have lots of them rendering on the same page.
Now, things are a lot harder on a development machine (the Tomcat server is controlled by Eclipse and portlets auto-deploy to that server each time you compile or change them). A portlet's auto-deployment can cause all the other webapps to redeploy too. Also, in development mode, you might experience PermGen errors quite regularly, and in that case, since you have to restart the server each time, the huge startup time can become a major pain.
If you're in a development environment, you should really consider removing the webapps that are not being tested or needed.
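For example, the cleanup can be as simple as parking the unused webapps outside the deploy directory (the portlet names and Tomcat path below are hypothetical, and the exact layout depends on your Liferay bundle):

    # with the server stopped, move the portlets you are not working on out of webapps
    cd /opt/liferay/tomcat-7.0.42
    mkdir -p ../webapps-parked
    mv webapps/reporting-portlet ../webapps-parked/
    mv webapps/archive-portlet ../webapps-parked/
    # on the next startup only the remaining webapps get deployed

Move them back when you actually need to test them.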

How could Database Projects and TFS help our dev team?

I have just started working with a new company in a very small IT department, and an even smaller dev team (just two developers including me). We mainly develop in-house web applications for the company.
My background is in desktop applications, so this job comes with a slight learning curve for me, having never developed ASP.NET web applications before. Currently we do not use TFS; however, I have made the suggestion, and it is something we are going to be adopting soon.
I am also considering recommending that we move our SQL databases into database projects. Currently we do nightly backups to protect the data, but all updates are manual: we connect to the database and execute queries, etc.
I'm not a DBA, but in my last job we were in the process of migrating our databases to database projects, and the DBAs seemed to love the idea. What would the benefits and potential downfalls of this be? Would it aid us with updating databases in our live environment after development has been done? Obviously we don't want to lose any data, but just update tables / stored procs, etc.
As a side question, I have very limited knowledge of TFS, and although we are going to be using it to handle our version control, is it possible to use TFS to update our live websites automatically once development has finished?
Sorry if this is quite a broad question. I am attempting to research this myself, but I would like to hear from people who actually use these products and do these things.
Thanks
About database projects: I, and several DBAs I know, have had mixed experiences with them. I'm not sure they are exactly where they should be at this time, but it may simply be a function of how I work. The deployment model is... difficult and can result in some unexpected behavior. If you go this route, test, test, test to make sure you understand exactly what's happening.
If you are just trying to get version control for the database, you might consider SQL Source Control from Red Gate. It looks pretty nice and hooks into TFS. I used one of the early versions (beta and 1.0) for a while and was very happy with it. Now that I think about it, I'm not sure why I don't have it here... ;)
As far as deploying out of TFS, you can absolutely do this. We have a build server set up so that whenever code is checked in, the build server automatically spins up to compile the code and deploy it to one of our testing sites. Look here for a primer to get you started. This does require some configuration on your web server to support it properly, and the documentation is spotty at best.
Once you are happy with the test area, you can hook something into changes to the Build Quality so that it pushes the code to a staging or even a production server... or simply have a build set up to recompile and deploy out there. I don't recommend doing production pushes this way, though; not because of a technical issue, but rather a timing one. It's usually much faster to just copy/paste from one location to production when necessary, thereby limiting downtime.
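For what it's worth, a check-in-triggered build of this kind usually boils down to a single MSBuild call along these lines (the solution name and publish profile are placeholders, and the exact properties depend on your Visual Studio / Web Deploy version):

    rem compile the web solution and push it to the test site via Web Deploy
    msbuild OurWebApp.sln /p:Configuration=Release /p:DeployOnBuild=true /p:PublishProfile=TestSite

The build server just runs a command like that whenever TFS hands it a new check-in.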

Set up a development environment for MVC2 as it would be for a development team

So at work we have our environment set up as most of you probably do as well. We have a centralized code base (controlled through SVN), which runs off a database on the same server (Integration). We bring this code base down and copy the database locally to work with on our machines.
This is what I need to figure out how to set up. I want to set up a database in SQL Server 2008 locally, have it connected to my MVC 2 app, and also have the app set up in local IIS so I can test it without going into the debugger and running it in the VS2010 Development Server every time.
So far my searching hasn't really turned up any articles that tell how to set this up, even though I feel like it is the most common thing to do (as most software shops are set up this way).
Any sources or directions would be awesome.
Thanks!
I am running Windows 7 Ultimate, Visual Studio 2010, SQL Server 2008, and IIS (whichever version comes with Windows 7).
The answer is "it depends". Although most software shops are set up like this, there are tweaks in the setup, especially when it comes to the database.
I have found two cases:
In most places where I worked, there is a DEV database server where the developers are given access to a particular database which the entire team works on. They install SQL Server Management Studio on the dev machines for connecting to the server/database.
Some shops have SQL Express set up on each developer machine, where they maintain a local copy of the database (same as yours)... This comes with the additional headache of syncing copies of multiple databases. We used it with Visual Studio Database Projects in the past, and it worked like a charm in many cases where we would get "delta" updates and apply them to the server database. Obviously, these updates were done by someone who knew the VS DB Pro features and was given some dedicated hours to perform the sync.
I still prefer a "controlled environment", as opposed to #2, where the schema changes are controlled by only a few...
Just my 2 cents...
A lot of this really depends on the app and local details, but we do the same sorts of things all the time. First and foremost, you'll want to develop some standardization and/or conventions about the environment -- it makes life a lot easier if everyone agrees that they should be running the local test DB at .\SQLEXPRESS and if they can agree on what the local URLs should be.
Perhaps the tallest pole in the tent is automating the database setup -- there are some real challenges there, especially if your app needs a significant amount of data to be usable. I haven't found a perfect solution here; typically we use a combination of a utility called sseutil to create database instances and a database migration framework to make schema changes. Something like RoundhousE looks compelling here.
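As a rough sketch of where that convention-plus-automation lands, a per-developer setup script might look roughly like this (the database name, site name, port, and paths are hypothetical, and it uses plain sqlcmd rather than sseutil, purely for illustration):

    rem create the local database on the agreed-upon .\SQLEXPRESS instance
    sqlcmd -S .\SQLEXPRESS -Q "CREATE DATABASE MyAppDev"

    rem apply the schema / migration scripts to it
    sqlcmd -S .\SQLEXPRESS -d MyAppDev -i create-schema.sql

    rem register the MVC app in local IIS so it runs outside the VS development server
    %windir%\system32\inetsrv\appcmd add apppool /name:MyAppPool /managedRuntimeVersion:v4.0
    %windir%\system32\inetsrv\appcmd add site /name:MyAppDev /bindings:http/*:8081: /physicalPath:C:\src\MyApp\MyApp.Web
    %windir%\system32\inetsrv\appcmd set app "MyAppDev/" /applicationPool:MyAppPool

Once every developer runs the same script, the agreed local URL (http://localhost:8081/ in this sketch) and the DB location stay consistent across machines.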

Does CI need a CI server?

Is a CI server required for continuous integration?
In order to facilitate continuous integration, you need to automate the build, distribution, and deployment processes. Each of these steps is possible without any specialized CI server. Coordinating these activities can be done through file notifications and other low-level mechanisms; however, a database-driven backend (a CI server) coordinating these steps greatly enhances the reliability, scalability, and maintainability of your systems.
You don't need a dedicated server, but a build machine of some kind is invaluable; otherwise there is no single central place where the code is always being built and tested. Although you can mimic this effect using a developer machine, there's the risk of overlap with the code that is being changed on that machine.
BTW, I use Hudson, which is pretty lightweight -- it doesn't need much to get it going.
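For reference, getting it going can be as simple as running the WAR standalone with its built-in servlet container (the port number is arbitrary -- whatever you have free):

    # start Hudson standalone; only a JRE and the downloaded hudson.war are needed
    java -jar hudson.war --httpPort=8081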
It's important to use a dedicated machine so that you get independent verification, without corruption.
For small projects, it can be a pretty basic machine, so don't let hardware costs get you down. You probably have an old machine in a closet that is good enough.
You can also avoid dedicated hardware by using a virtual machine. Best bet is to find a server that is doing something else but is underloaded, and put the VM on it.
Before I ever heard the term "continuous integration" (this was back in 2002 or 2003), I wrote a nightly build script that connected to CVS, grabbed a clean copy of the main project and the five smaller sub-projects, built all the JARs via Ant, then built and redeployed a WAR file via a second Ant script that used the Tomcat Ant tasks.
It ran via cron at 7 pm and sent an email with a bunch of attached output files. We used it for the entire 7 months of the project, and it stayed in use for the next 20 months of maintenance and improvements.
It worked fine, but I would prefer Hudson over bash scripts, cron, and Ant.
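For the curious, the whole arrangement amounted to roughly the following kind of script plus a crontab entry such as "0 19 * * * /home/build/nightly-build.sh" (the module names, paths, and Ant targets here are made up, not the originals):

    #!/bin/bash
    # nightly-build.sh -- check out, build, redeploy, and mail the output
    BUILD_DIR=/home/build/nightly
    rm -rf "$BUILD_DIR" && mkdir -p "$BUILD_DIR" && cd "$BUILD_DIR" || exit 1

    # grab a clean copy of the main project from CVS (the sub-projects were checked out the same way)
    cvs -d :pserver:build@cvs.example.com:/cvsroot checkout mainproject > build.log 2>&1
    cd mainproject

    # build the jars, then build and redeploy the WAR via the second Ant script
    ant -f build.xml jars >> ../build.log 2>&1
    ant -f deploy.xml redeploy >> ../build.log 2>&1

    # mail the collected output to the team
    mail -s "Nightly build $(date +%F)" dev-team@example.com < ../build.log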
A separate machine is really necessary if you have more than one developer on the project.
If you're using the .NET technology stack, here are some pointers:
CruiseControl.NET is fairly lightweight. That's what we use. You could probably run it on your development machine without too much trouble.
You don't need to install or run Visual Studio unless you have Visual Studio Setup Projects. Instead, you can use the free command-line build tool MSBuild.
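For example, a plain compile step on the build machine can be as simple as the following command, run against a checked-out working copy (the solution name is a placeholder):

    rem compile the whole solution in Release mode; no Visual Studio install required
    msbuild OurProduct.sln /p:Configuration=Release /verbosity:minimal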
