Continuous integration always fails with VERR_VMX_NO_VMX

I am developing an application for a class, and I use Packer, Vagrant, and VirtualBox to build and deploy it.
For a fresh install, the build steps (from the dev's point of view) look like:
Build VirtualBox images with packer build {...}.json
Provision and run VirtualBox images with vagrant up
Hey! Web application!
The repository is here: https://github.com/HenryFBP/2019-team-07f-mirror
Long story short, I'm trying to add continuous integration to test each commit so I (and my team members) don't have to.
I've tried both AppVeyor and Travis so far, but each time the build stops with VirtualBox reporting VERR_VMX_NO_VMX, which looks like whatever machine VBoxManage is running on doesn't have VT-x enabled.
Short of rewriting my provisioning steps in Docker or self-hosting, how can I find a CI provider that offers VT-x?
Should I modify my Vagrantfile to tell Vagrant to use Docker instead of VirtualBox? Should I switch CI providers? Should I use a self-hosted option like Concourse?
I'm hesitant to try another CI provider because I don't want to just rediscover that they don't have VT-x.
Error from Travis: https://travis-ci.org/HenryFBP/2019-team-07f-mirror/builds/507577619
Error from Appveyor: https://ci.appveyor.com/project/HenryFBP/2019-team-07f-mirror/build/job/piku3n5eav1o1ioe
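One way to avoid rediscovering the missing feature on each new provider is to probe for VT-x as the very first CI step and fail fast. A minimal sketch for a Linux runner, written as a Travis-style config snippet (the step placement is an assumption, not from my actual config):

    # Fail fast if the CI host exposes neither VT-x (vmx) nor AMD-V (svm).
    before_install:
      - egrep -c '(vmx|svm)' /proc/cpuinfo || { echo 'No VT-x/AMD-V on this runner'; exit 1; }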
Thanks!
EDIT #1
I am now using Jenkins and it rocks! It's self-hosted and super great: I can run whatever I want on bare metal. It's the solution for me, and it took literally one afternoon to set up.
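For the record, the whole pipeline is just the two commands from the build steps above; a minimal declarative Jenkinsfile sketch (the 'baremetal' label and template.json name are hypothetical placeholders):

    // Declarative pipeline sketch; 'baremetal' and template.json are placeholders.
    pipeline {
        agent { label 'baremetal' }   // a self-hosted node with VT-x enabled
        stages {
            stage('Build images') {
                steps { sh 'packer build template.json' }
            }
            stage('Provision and run') {
                steps { sh 'vagrant up' }
            }
        }
    }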

Related

Should I use a Docker container as a GitLab Runner?

Newbie to GitLab CI/CD here.
I'd like to use a Linux-based container as the Runner. The reason is that I only have access to a Windows VM, and I'd rather not have to write PowerShell scripts to build/test/deploy. Also, the build uses Spring Boot + Maven, and I'm doubtful that the build scripts provided by Spring will run on Windows.
To complicate matters, the Maven + Spring build itself spins up a container to execute the build. In other words, my build would run a container inside a container, which seems feasible based on this blog.
Any feedback is much appreciated.
Edit based on #JoSSte's feedback:
Are there any better approaches to setting up a runner besides a container inside a container? For instance, if WSL enables running bash scripts, could the Windows VM act as the build server?
And to satisfy #JoSSte's request to make this question less opinion-based: are there any best practices for approaching a problem like this?
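For what it's worth, container-inside-container is a documented GitLab pattern using the docker:dind service on a Docker-executor runner; a minimal .gitlab-ci.yml sketch (the image tags and job name are assumptions, not from the question):

    build:
      image: maven:3-jdk-11
      services:
        - docker:dind            # sidecar daemon for the build's own containers
      variables:
        DOCKER_HOST: tcp://docker:2375
        DOCKER_TLS_CERTDIR: ""   # plain TCP to the sidecar daemon
      script:
        - mvn -B verify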

How do I continuously deploy my latest NodeJS/TypeScript branch commit to a Windows machine?

I have a Windows machine that is supposed to continuously pull the latest commit from a specific GitLab branch, then build and deploy that commit.
Basically, I want something like Heroku, but self-hosted on a Windows machine.
I tried looking into AppVeyor and Jenkins, but I'm unsure which of them meets the requirements just mentioned. I get the basics of git, but I have no idea how to deploy the application now that it is finished. I'm sorry if the question is not specific or detailed enough.
Thanks a lot.
You can install GitLab Runner on the Windows machine and configure it to automatically build and deploy the latest changes.
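A hypothetical .gitlab-ci.yml for such a setup, assuming a shell-executor runner registered with a windows tag (the script names are placeholders, not from the question):

    deploy:
      stage: deploy
      tags:
        - windows        # route the job to the Windows runner
      only:
        - main           # the branch you want built and deployed
      script:
        - npm ci
        - npm run build
        - npm run deploy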

Using docker-compose with a compile step

I want to deploy a Maven application with a Docker container, and if possible also test it with Docker, but I have some problems.
Because I'm using Java, I need to compile my application before using it.
The compile step also runs the unit tests, which need a database connection.
For testing, I used a database container started by hand that runs on localhost:5432.
If I start docker-compose now, this causes an error because the container can't reach localhost:5432 any more. If I write postgres:5432 in my application.properties instead, it doesn't compile because the host postgres is unknown at build time.
How do I handle this? Is there a way to start one container with Maven and one with Postgres at build time?
As you can see, I'm new to docker-compose and don't have a workflow yet.
Thanks for your help!
You should use your existing desktop-oriented build process to build and test the application and only use Docker to build the final deployment artifact. If you are hard-coding the database location in your source code, there is lurking trouble there of exactly the sort you describe (what will you do if you have separate staging and production databases hosted by your cloud provider?) and you should make that configurable.
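A minimal compose sketch of that idea, with the database location injected through the environment instead of hard-coded (the service and variable names are assumptions):

    version: "3"
    services:
      db:
        image: postgres:11
      app:
        build: .
        environment:
          # Spring Boot's relaxed binding maps this to spring.datasource.url
          SPRING_DATASOURCE_URL: jdbc:postgresql://db:5432/postgres
        depends_on:
          - db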
During the docker build phase there's no way to guarantee that any particular network environment, external services, or DNS names will be present, so you can't do things like run integration tests that depend on an external database. Fortunately, that's a problem the software engineering community spent decades addressing before Docker existed. While many Docker setups are very enthusiastic about mounting application source code directly into containers, that's much less useful for compiled languages and not really appropriate for controlled production deployments.
In short: run Maven the same way you did before you had Docker, and then just have your Dockerfile COPY the resulting (fully-tested) .jar file into the image.
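Concretely, the Dockerfile then shrinks to something like this sketch (the base image and jar name are assumptions):

    # Package only the pre-built, pre-tested artifact.
    FROM openjdk:11-jre-slim
    COPY target/app.jar /app.jar
    ENTRYPOINT ["java", "-jar", "/app.jar"]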

Using MongoDB (in a container?) in Visual Studio Team Services pipelines

I have a node.js server that communicates with a MongoDB database. As part of the continuous-integration process I'd like to spin up a MongoDB database and run my tests against the server + DB.
With bitbucket pipelines I can spin up a container that has both node.js and MongoDB. I then run my tests against this setup.
What would be the best way to achieve this with Visual Studio Team Services? Some options that come to mind:
1) Hosted pipelines seem easiest but they don't have MongoDB on them. I could use Tool Installers, but there's no mention of a MongoDB installer, and in fact I don't see any tool installer in my list of available tasks. Also, it is mentioned that there is no admin access to the hosted pipeline machines and I believe MongoDB requires admin access. Lastly, downloading and installing Mongo takes quite a bit of time.
2) Set up my own private pipeline - i.e. a VM with Node + Mongo, and install the pipeline agent on it. Do I have to spin up a dedicated Azure instance for this? Will this instance be torn down and set up again on each test run, or will it remain up between test runs (meaning I have to take extra care to clean it up)?
3) Magically use a container in the pipeline through an option that I haven't yet discovered...?
I'd really like to use a container to run my tests because then I can use the same container locally during the development process, rather than having to maintain multiple environments. Can this be done?
So as it turns out, VSTS now has Docker support in its pipeline (when I wrote my question it was in beta and I didn't find it for whatever reason). It can be found at https://marketplace.visualstudio.com/items?itemName=ms-vscs-rm.docker.
This task lets you spin up a container of your choice and run a single command in it. If the command is to run synchronously as part of the pipeline, then Run in Background needs to be unchecked (which will be the case for regular build commands, I guess). I ended up pushing a build script into my git repository and running it in a container.
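In today's YAML-based Azure Pipelines, the same setup can also be declared as a service container; a hedged sketch, where the image tag, pool, and variable name are assumptions rather than my original configuration:

    resources:
      containers:
        - container: mongo
          image: mongo:4
          ports:
            - 27017:27017

    jobs:
      - job: test
        pool:
          vmImage: ubuntu-latest
        services:
          mongo: mongo          # the DB is reachable on localhost:27017
        steps:
          - script: npm install && npm test
            env:
              MONGO_URL: mongodb://localhost:27017/test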
And re. my question in (2) above - machines in private pipelines aren't cleaned up between pipeline runs.

Is it possible to keep the workspace on an Amazon EC2 server between runs?

We develop software for Linux and Mac using C++ and Python. So far we have installed all required packages into a virtualenv using pip. The third-party libraries take a substantial amount of time to compile, and we want to speed up the build process on the build servers.
One way is to not wipe out the build agent workspace between builds. Is that possible when using Amazon EC2 servers?
The following Jenkins plugin can be used to copy files into the slave workspace.
https://wiki.jenkins-ci.org/display/JENKINS/Copy+To+Slave+Plugin
Once you get the instance into its desired base state, you can use it to create an AMI. If you then launch future instances from that AMI, all the libraries should already be in place, and at that point you can do any additional bootstrapping you need.
Note that the image will keep your existing SSH key unless you prep the instance, before creating the AMI, to use the key provided at launch.
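A hedged sketch of the imaging step with the AWS CLI (the instance ID and image name are placeholders):

    # Bake the prepared build agent into a reusable AMI.
    # --no-reboot keeps the instance running, at the cost of strict
    # filesystem-consistency guarantees during the snapshot.
    aws ec2 create-image \
        --instance-id i-0123456789abcdef0 \
        --name build-agent-prebuilt-libs \
        --no-reboot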
