Is Go efficient in a cloud environment like Kubernetes?

Go has mechanical sympathy. So does that mean I need to modify my code based on the hardware I am running on, for the best possible performance? How does that work in a cloud environment like K8s, where a developer doesn't care about the hardware?

Go compiles across all relevant architectures; you do not have to modify your code for different platforms, because the compiler and runtime take care of the platform-specific details. In that sense "mechanical sympathy" is a property of the language and its toolchain, not something you re-tune per machine.
In cloud environments (like Kubernetes, for example) you usually ship a Docker image or drop in your binary.
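As a quick illustration, here is a minimal sketch (file and binary names are arbitrary): the same source builds for any supported platform just by changing the GOOS/GOARCH environment variables at build time.

```go
// main.go: identical source for every target; only the build command changes:
//
//	GOOS=linux   GOARCH=amd64 go build -o app-linux-amd64 .
//	GOOS=linux   GOARCH=arm64 go build -o app-linux-arm64 .
//	GOOS=windows GOARCH=amd64 go build -o app-windows-amd64.exe .
package main

import (
	"fmt"
	"runtime"
)

func main() {
	// The runtime package reports the platform this binary was compiled for.
	fmt.Printf("built for %s/%s\n", runtime.GOOS, runtime.GOARCH)
}
```

A Linux build of such a binary is what typically ends up inside the container image, so the cluster's underlying hardware never forces a source change.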

Related

docker-compose with similar images

I currently have Docker running on a Pi 3+ with the following images in separate containers:
lsioarmhf/sonarr
lsioarmhf/radarr
lsioarmhf/jackett
As these three images share a lot of common libraries (e.g. Mono), I am wondering whether there is a way to reduce their memory and CPU usage footprint.
In order to do this I was looking at two possibilities:
1) building and maintaining my own image (based on the ones by lsioarmhf on GitHub) that combines the three services
2) using docker-compose
Can anyone please tell me whether docker-compose would reduce the memory footprint of the common elements of similar images? Would it be the same as running three separate containers?
Thanks,
No; docker-compose orchestrates your containers, it doesn't combine their runtime resources in any way. For simple setups it's virtually the same as starting all three manually.
There is actually no way to do that with Docker at all. Images built from the same base layers can share disk space, but the runtime resources have to be separate, because the containers are different instances.
Since it looks like you're using the Pi 3+ as a dedicated board for this project, you might be better off not using Docker at all. If you're worried about isolation and need the board for another project later, another microSD card is inexpensive enough to start that project from scratch.

Possible alternatives to vagrant-vCenter plugin?

I'm currently looking for a provisioning solution to deploy, configure, and customize VMs in a vSphere/vCenter environment automatically. In doing so, I would like to apply some changes to each VM individually (e.g. by using different license keys for different software products).
During my research, I found that Vagrant in combination with the vagrant-vcenter plugin, with the help of Puppet and Git (using different branches for different kinds of VMs), does exactly what I want to achieve:
https://github.com/gosddc/vagrant-vcenter
Unfortunately, this plugin is immature and still in a beta state.
Does anyone of you know a suitable alternative (could also be commercial)?
I searched for the same thing for a few months. There is currently no way to do this other than using the plugin and modifying it according to your requirements; no commercial solution is in place either. I used both the vagrant-vcenter and vagrant-vcloud plugins and was able to do the basic provisioning without issues. You won't hit problems until you want to customize a lot during deployment. Some of that customization can be achieved through scripts that Puppet runs after the VM is provisioned on vCenter.
There are several libraries out there for interacting with vCenter. The one I use is https://github.com/rlane/rbvmomi. The code base is somewhat old, but so is vCenter. The Vagrant implementation is great for dev, but it has several issues when moving to a full-blown staging or production environment. For the latter, a library that talks to vCenter's underlying API, such as rbvmomi, is preferable.
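As a side note for Go users (rbvmomi is Ruby): VMware also maintains a Go SDK for the same API, govmomi (https://github.com/vmware/govmomi). Below is a minimal connection sketch with placeholder host and credentials; it only illustrates the SDK's entry point, not a full provisioning workflow.

```go
// Connect to vCenter via govmomi and print the endpoint's identity.
// Host, user, and password are placeholders.
package main

import (
	"context"
	"fmt"
	"log"
	"net/url"

	"github.com/vmware/govmomi"
)

func main() {
	ctx := context.Background()

	u, err := url.Parse("https://vcenter.example.com/sdk")
	if err != nil {
		log.Fatal(err)
	}
	u.User = url.UserPassword("user", "password") // placeholder credentials

	// The final argument allows self-signed certificates, common in labs.
	client, err := govmomi.NewClient(ctx, u, true)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Logout(ctx)

	// Prints the server's product identity string.
	fmt.Println(client.ServiceContent.About.FullName)
}
```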

Trouble developing on mirrored, but separate, production environment

I'm having some problems with the principle that "the development environment should be as close as possible to the production environment".
(Production machine's operating system is Linux.)
My understanding of development steps (roughly):
code, compile, test/run, repeat
"Normally" I would go through these on my own machine, then push the code to CI for testing, and possibly deploy. The CI would be responsible for running the tests in an environment that matches production, this way if the tests pass, it's safe to assume that the code works in production as well.
The problem of a larger environment
☑ Database - of some kind.
☑ Job Processing Pool - for some long-running background tasks.
☑ User Account Management - used by other systems as well.
☑ Centralized Logging - for sanity.
☑ Forward Proxy - to tie individual HTTP-accessible services under the same URL but on different paths.
☐ And possible other services or collections of services.
Solutions?
All on my own machine? No way in hell.
All on a virtual machine? Maybe, but security-wise, if this setup is supposed to mirror the production environment, and the production environment actually looked like this, a single breach would expose everything. That might not be such a good idea.
Divide by responsibility and set them up on multiple virtual machines? Who's gonna manage all those machines? I think it's possible to do better than this.
Use containers such as Docker, or slap something similar together yourself? Sounds good: (possibly) very fast iteration cycles, separation of concerns, some security through separation, and easy reproducibility.
For the sake of simplicity, let's say that our containerization tooling of choice is Docker, and we are not going to build one ourselves with libvirt / lxc tooling / direct kernel calls.
So Docker it is, possibly with CoreOS or Project Atomic. So now there is a container for an application (or multiple applications) that has been separated from the rest of the system, and can be brought up nearly identically anywhere.
Solution number 1: Production environment is pretty and elegant.
Problem number 1: This is not a development environment.
The development environment
Whatever I choose in order to avoid sprinkling the production environment onto my own machine, the problem remains the same:
Even though the production environment is correctly set up, I have to run the compilation and testing somewhere, before being able to deploy (be it to another testing round by CI or whatever).
How do I solve this?
Can it really be that the proper way to solve this is to write code on my own machine and have it synchronized to, or directly visible in, a virtualized, mirrored, production-like environment that automates the running of the tests?
What happens when I don't want to run all the tests, but only the portion I'm writing right now? Do I edit the automated compilation process every time? (A sketch after this question shows what selective test runs can look like.) What about remote debugging, since multiple systems must be orchestrated to run in the correct way, and the debugger must attach in between to one of the programs? Not to mention the speed of the "code, test" cycle, which would be _very_ slow.
This sounds a hell of a lot like CI, but multiple developers can't all use and modify the same CI, so they would probably each need this setup on their own machines.
I was also thinking that each developer could use a completely virtualized OS that contained all the development tools and mirrored the production environment, but that would force veteran users to adopt the tooling of the virtual development environment, which doesn't sound like such a good idea.
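On the selective-testing question above, here is the promised sketch. Most toolchains can run a single test in place without editing any automation; taking Go as an example (package and test names are hypothetical):

```go
// parser_test.go: a hypothetical test file. The interesting part is the
// invocation, which runs only this test while CI still runs everything:
//
//	go test -run TestSplitKeyValue ./parser/   (just this test, locally)
//	go test ./...                              (the full suite, as CI would)
package parser

import (
	"strings"
	"testing"
)

// splitKeyValue stands in for whatever unit is being edited right now.
func splitKeyValue(s string) (key, value string) {
	k, v, _ := strings.Cut(s, "=")
	return strings.TrimSpace(k), strings.TrimSpace(v)
}

func TestSplitKeyValue(t *testing.T) {
	k, v := splitKeyValue("timeout = 30")
	if k != "timeout" || v != "30" {
		t.Errorf("got (%q, %q), want (timeout, 30)", k, v)
	}
}
```

That leaves remote debugging and multi-system orchestration as the genuinely hard parts; selective test runs, at least, don't require touching the pipeline.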

Deployment/build tool between Ant and Chef

So I've been agonizing over embracing a deployment/configuration management tool like Chef or Puppet for a good long while. Not because I have any hesitation about them in general, but because I don't think they are a good fit for our specific scenario.
As far as I can see, these types of tools are targeted at frequent, wide-scale deployments, where you need to roll out software to tens or thousands of systems. In our environment, we have a collection of ~25 different web services spread across half a dozen runtimes, with 1-8 production deployments of each at present. Our big deployment problem is that each service has a different deployment story, and it's entirely manual, so it tends to be time-consuming and error-prone. Another wrinkle is that different production instances may run different versions of the software, so we may need to support multiple deployment stories for a single service concurrently.
So I feel like we need something more like Ant/Maven/Rake, customized for each service. However, my experience with those tools is that they are generally focused on local operations and specific to a given language/runtime.
Is there a runtime-agnostic framework for describing and orchestrating building/testing/deployment in the manner I'm interested in?
I'm sure that if I beat on them long enough, I could get Rake or Puppet to do this for me, but I'm looking for something built for the purpose.
(Oh, and to make things worse, everything runs on Windows)
Thanks!
Here's another alternative you might want to consider: Kwatee (I'm affiliated) is a free, lightweight deployment tool which, besides having a web management interface, can also integrate with Ant (or Maven, or anything else, via its Python CLI) to automate build & deployment on dev/test environments, for instance.
One of the nice things is the web configuration interface, which makes it pretty easy to quickly configure your deployment stories, i.e. which software/version goes on which server. It's often necessary to set different parameters in configuration files depending on the target server. For that you can "templatize" your packages using Kwatee variables (similar to environment variables), which are configured with different values for each server.
Software must be registered in Kwatee's repository in the form of a folder of files, an archive (zip, tar, tar.gz, bzip2, war), or a single file (e.g. an exe). MSIs are not supported. To deploy on Windows, Kwatee needs the servers to have either telnet/ftp or ssh/scp (there are free tools out there).

Which do I select - Windows Azure or Amazon EC2 - for hosting unmanaged C++ code?

We have a server solution written entirely in unmanaged Visual C++. It contains complicated methods for really heavy data processing.
The whole thing contains millions of lines of code, so rewriting it all in some other language is not an option. We could write some extra code or make isolated changes, but rewriting everything is out of the question.
Now we'd like to put it in the cloud. Which platform do we choose - Amazon EC2 or Windows Azure - and why?
Does it require administrative rights on the box (e.g. writing to the registry, changing machine configuration, installing components, etc.)? If it does, you can't use Windows Azure today.
If it doesn't require admin privileges, then the other things you need to think about are:
What is the architecture? How does it interact with the world? Files? Databases?
What dependencies do you have?
What is the usage pattern (bursty? continuous?)
What would be the cost, based on your usage and the pricing of the two offerings?
That should give you some more data points to help you make a decision.
That depends largely on how you think about costs, the future value of the platforms, and so on. Azure is still in its very early stages; there are definitely more people using EC2 today. I would recommend computing the costs on the two platforms as a starting point, given your estimated usage. Do you want features that one platform has and the other doesn't? How does your app benchmark on the two platforms? Do you want to take advantage of spot pricing?
In either case I would recommend adding a thin shim layer to abstract you from whichever you choose and enable you to move in the future if you need to.
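To make the shim idea concrete, here is a sketch of the pattern, written in Go for consistency with the other examples on this page (the question's codebase is C++, and every name below is hypothetical): the application depends on one narrow interface, and each provider gets a small adapter.

```go
// Package shim sketches a thin provider-abstraction layer: the rest of the
// application sees only BlobStore, so moving between EC2/S3 and Azure means
// writing one new adapter instead of chasing provider calls everywhere.
package shim

import (
	"context"
	"fmt"
)

// BlobStore is the narrow, provider-neutral surface the app codes against.
type BlobStore interface {
	Put(ctx context.Context, key string, data []byte) error
	Get(ctx context.Context, key string) ([]byte, error)
}

// s3Store would wrap the real AWS SDK; bodies are stubbed because the
// point here is the seam, not the SDK calls.
type s3Store struct{}

func (s *s3Store) Put(ctx context.Context, key string, data []byte) error { return nil }
func (s *s3Store) Get(ctx context.Context, key string) ([]byte, error)    { return nil, nil }

// azureStore would wrap the Azure SDK in the same way.
type azureStore struct{}

func (a *azureStore) Put(ctx context.Context, key string, data []byte) error { return nil }
func (a *azureStore) Get(ctx context.Context, key string) ([]byte, error)    { return nil, nil }

// NewBlobStore selects the adapter from configuration, keeping the provider
// decision in exactly one place.
func NewBlobStore(provider string) (BlobStore, error) {
	switch provider {
	case "s3":
		return &s3Store{}, nil
	case "azure":
		return &azureStore{}, nil
	default:
		return nil, fmt.Errorf("unknown provider %q", provider)
	}
}
```

Porting then means writing one new adapter and flipping a configuration value, which also makes benchmarking the app on both platforms practical.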
This is like Windows vs. Linux... there are no universal right answers, only opinions.
