I want to install Mastodon without needing to use the terminal after the Docker image has started running.
I can't find out whether it's possible to skip the first-time setup wizard.
The best solution for me would be to use environment variables passed to the Docker container.
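To illustrate the kind of thing I mean, here is a rough docker-compose sketch; the variable names come from Mastodon's .env.production.sample, but I haven't confirmed that supplying them this way actually bypasses the wizard, and the values are placeholders:

    # Sketch only -- variable names are from .env.production.sample; values are placeholders.
    services:
      web:
        image: tootsuite/mastodon
        environment:
          - LOCAL_DOMAIN=social.example.com
          - SECRET_KEY_BASE=<output of `rake secret`>
          - OTP_SECRET=<output of `rake secret`>
          - DB_HOST=db
          - DB_USER=mastodon
          - DB_NAME=mastodon_production
          - REDIS_HOST=redis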
Newbie to GitLab CI/CD here.
I'd like to use a Linux-based container as the Runner. The reason is that I only have access to a Windows VM, and I'd rather not have to write PowerShell scripts to build/test/deploy. Also, the build uses Spring Boot + Maven, and I'm doubtful that the build scripts provided by Spring will run on Windows.
To complicate matters, the build done by Maven + Spring spins up a container to execute the build. In other words, my build would run a container inside a container, which seems feasible based on this blog; a rough sketch of what I imagine is below.
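Roughly, the .gitlab-ci.yml I have in mind looks like this (the image tag and job name are placeholders, and newer docker:dind images may need the TLS settings adjusted differently):

    # Sketch: Maven image for the job, docker-in-docker as a service so the
    # Spring/Maven build can start its own containers.
    image: maven:3-jdk-11

    services:
      - docker:dind

    variables:
      DOCKER_HOST: tcp://docker:2375
      DOCKER_TLS_CERTDIR: ""

    build:
      stage: build
      script:
        - mvn -B clean verify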
Any feedback is much appreciated.
Edit based on #JoSSte's feedback:
Are there any better approaches to set up a runner besides needing a container inside a container? For instance, if WSL enables running bash scripts, can the Windows VM act as the build server?
And to satisfy #JoSSte's request to make this question less opinion-based: are there any best practices for approaching a problem like this?
I am developing an application for a class and use Packer, Vagrant, and VirtualBox to build and deploy it.
For a fresh install, the build steps (from the dev's point of view) look like:
Build VirtualBox images with packer build {...}.json
Provision and run VirtualBox images with vagrant up
Hey! Web application!
The repository is here: https://github.com/HenryFBP/2019-team-07f-mirror
Long story short, I'm trying to add continuous integration to test each commit so I (and my team members) don't have to.
I've so far tried both AppVeyor and Travis, but each time the build stops with VirtualBox reporting VERR_VMX_NO_VMX, and it just looks like whatever machine I'm running VBoxManage on doesn't have VT-x enabled.
Short of rewriting my provisioning steps in Docker or self-hosting, how can I get a CI provider that offers VT-x?
Should I actually just tell Vagrant to use Docker instead of VirtualBox, i.e. modify my Vagrantfile to use the Docker provider (a rough sketch of what that might look like is below)? Should I switch CI providers? Should I use a self-hosted option like Concourse?
I'm hesitant to try another CI provider because I don't want to just rediscover that they don't have VT-x.
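For context, my understanding is that switching to the Docker provider would mean a Vagrantfile along these lines; the image and provisioning script here are placeholders, not my actual setup:

    # Hypothetical sketch of a Vagrantfile using the Docker provider instead of VirtualBox.
    Vagrant.configure("2") do |config|
      config.vm.provider "docker" do |d|
        d.image   = "ubuntu:18.04"   # placeholder; the image must run an SSH server for provisioning
        d.has_ssh = true
      end

      # Reuse the existing shell provisioning scripts (path is illustrative).
      config.vm.provision "shell", path: "provision/setup.sh"
    end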
Error from Travis: https://travis-ci.org/HenryFBP/2019-team-07f-mirror/builds/507577619
Error from Appveyor: https://ci.appveyor.com/project/HenryFBP/2019-team-07f-mirror/build/job/piku3n5eav1o1ioe
Thanks!
EDIT #1
I am using Jenkins and it rocks! It's self-hosted and super great. I can run whatever I want on bare metal. It's the solution for me and took literally one afternoon to set up.
Right now I'm running Jenkins as a Windows service. Most likely because of that, I'm unable to spawn the test application's window; it just appears as a background process. I already tried the Log On option to allow the service to interact with the desktop, but that doesn't seem to change anything in this case.
I'd like to try running Jenkins via jenkins.war instead; however, the first thing I noticed is that it starts the whole first-run configuration, prompting about plugins etc. The application is already configured and has all the builds in place.
Could someone explain whether it's necessary to reconfigure everything, or does Jenkins just keep all the settings in a different location when running as a service? There was no dedicated user; the service ran under the Local System account.
So the solution was fairly simple. The main problem was that when Jenkins ran as a service, the entire configuration was saved in its install directory, whereas running jenkins.war looked for the configuration in the current user's home directory, so it behaved like a fresh installation.
What helped was the ThinBackup plugin. I ran a backup while the service instance was running, then restored it into the new instance, which brought back all plugins, configuration and builds. After that, the source-control passwords also had to be updated (the restored ones didn't work, probably because of a per-instance encryption key).
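For anyone hitting the same issue: the difference in where the configuration lives comes down to JENKINS_HOME. A minimal sketch of pointing a jenkins.war launch at an existing home (the path below is a placeholder; check where your service instance actually stored its data):

    :: Windows batch sketch -- the JENKINS_HOME path is a placeholder.
    set JENKINS_HOME=C:\Jenkins
    java -jar jenkins.war --httpPort=8080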
I have a node.js server that communicates with a MongoDB database. As part of the continuous-integration process I'd like to spin up a MongoDB database and run my tests against the server + DB.
With bitbucket pipelines I can spin up a container that has both node.js and MongoDB. I then run my tests against this setup.
What would be the best way to achieve this with Visual Studio Team Services? Some options that come to mind:
1) Hosted pipelines seem easiest but they don't have MongoDB on them. I could use Tool Installers, but there's no mention of a MongoDB installer, and in fact I don't see any tool installer in my list of available tasks. Also, it is mentioned that there is no admin access to the hosted pipeline machines and I believe MongoDB requires admin access. Lastly, downloading and installing Mongo takes quite a bit of time.
2) Set up my own private pipeline - i.e. a VM with Node + Mongo, and install the pipeline agent on it. Do I have to spin up a dedicated Azure instance for this? Will this instance be torn down and set up again on each test run, or will it remain up between test runs (meaning I have to take extra care to clean it up)?
3) Magically use a container in the pipeline through an option that I haven't yet discovered...?
I'd really like to use a container to run my tests because then I can use the same container locally during the development process, rather than having to maintain multiple environments. Can this be done?
So as it turns out, VSTS now has Docker support in its pipeline (when I wrote my question it was in beta and I didn't find it for whatever reason). It can be found at https://marketplace.visualstudio.com/items?itemName=ms-vscs-rm.docker.
This task allows you to spin up a container of your choice and run a single command in it. If the command should run synchronously as part of the pipeline, then Run in Background needs to be unchecked (which will be the case for regular build commands, I guess). I ended up pushing a build script into my git repository and running it in a container.
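For reference, the idea boils down to something like the following (image versions and the script name are illustrative, not my exact setup): start a MongoDB container, then run the repository's build script in a Node container on the same network.

    docker network create ci-net
    docker run -d --name mongo --network ci-net mongo:4
    docker run --rm --network ci-net -v "$(pwd)":/app -w /app \
        -e MONGO_URL=mongodb://mongo:27017/test \
        node:10 bash ./build-and-test.sh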
And re. my question in (2) above - machines in private pipelines aren't cleaned up between pipeline runs.
What is the best way to use Docker in a Continuous Delivery pipeline?
Should the build artefact be a Docker image rather than a jar/war? If so, how would that work? I'm struggling to work out how to seamlessly use Docker in development (on a laptop) and then have the CI server use the same base image to build the artefact.
Well, there are of course multiple best practices and many approaches on how to do this. One approach that I have found successful is the following:
Separate the deployable code (jars/wars etc.) from the docker containers by keeping them in separate VCS repos (we used two different Git repos in my latest project). This means that the docker images you deploy your code on are built in a separate step, a docker-build so to say. Here you can e.g. build the docker images for your database, application server, Redis cache or similar. When a `Dockerfile` or similar changes in your VCS, Jenkins or whatever you use can trigger a build of the docker images. These images should be tagged and pushed to some registry (be it Docker Hub or some local registry).
The deployable code (jars/wars etc) should be built as usual using Jenkins and commit-hooks. In one of my projects we actually ran Jenkins in a Docker container as described here.
All docker containers that use dynamic data (such as the storage for a database, the war files for a Tomcat/Jetty, or configuration files that are part of the code base) should mount these files as data volumes or as data volume containers.
The test servers, or whatever steps are part of your pipeline, should be set up according to a spec that is known to your build server. We used a descriptor that connected the newly built tag from the code base to the tag on the docker containers. The Jenkins pipeline plugin could then run a script that first moved the build artifacts to the host server, pulled the correct docker images from the local registry, and then started all processes using data volume containers (we used Fig for managing the docker lifecycle).
With this approach, we were also able to run our local processes (databases etc) as Docker containers. These containers were of course based on the same images as the ones in production and could also be developed on the dev machines. The only real difference between local dev environment and the production environment was the operating system. The dev machines typically ran Mac OS X/Boot2Docker and prod ran on Linux.
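To make that concrete, a descriptor in that spirit might look roughly like this (service names, image tags and paths are illustrative, not the actual project's setup):

    # Fig/docker-compose-style sketch: images pulled from the registry, with the
    # built war and the database storage mounted as data volumes.
    app:
      image: registry.example.com/myapp-tomcat:1.2.3   # tag produced by the docker-build
      volumes:
        - ./build/myapp.war:/usr/local/tomcat/webapps/myapp.war
      links:
        - db
    db:
      image: registry.example.com/myapp-postgres:1.2.3
      volumes:
        - ./data/postgres:/var/lib/postgresql/data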
Yes, you should shift from jar/war files to images as your build artefacts. The process #wassgren describes should work well; however, I found the following to fit our use case better, especially for development:
1- Make a base image. It looks like you're a Java shop, so as an example, let's pretend your base image is FROM ubuntu:14.04 and installs the JDK and some of the more common libs. Let's call it myjava.
2- During development, use fig to bring up your container(s) locally and mount your dev code to the right spot. Fig uses the myjava base image and doesn't care about the Dockerfile.
3- When you build the project for deployment, it uses the Dockerfile, which somewhere does a COPY of the code/build artefacts into the right place. That image then gets tagged appropriately and deployed (a sketch of both Dockerfiles follows).
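Roughly, the two Dockerfiles could look like this (package names and paths are just placeholders):

    # Base image ("myjava") -- built once, shared by dev and CI.
    FROM ubuntu:14.04
    RUN apt-get update && apt-get install -y openjdk-7-jdk && rm -rf /var/lib/apt/lists/*

    # Deployment image -- built at release time, copies the build artefact in.
    FROM myjava
    COPY target/myapp.jar /opt/myapp/myapp.jar
    CMD ["java", "-jar", "/opt/myapp/myapp.jar"]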
Simple steps to follow:
1) Install Jenkins in a container.
2) Install the framework tool in the same container (I used SBT).
3) Create a project in Jenkins with the necessary plugins to pull from GitLab and package all jars into a compressed archive (say build.tgz).
4) This build.tgz can be moved anywhere and run, but all of its environment dependencies must be satisfied (for example, say it requires MySQL).
5) Now we create a base environment image (in our case, with MySQL installed).
6) With every build that is triggered, we trigger a Dockerfile build on the server which combines build.tgz and the environment image.
7) Now we have our build.tgz with its environment satisfied. This image should be pushed to the registry. This is our final image; it is portable and can be deployed anywhere.
8) This final image can be run as a container with the necessary mount points to get outputs, and a script (script.sh) can be triggered by setting it as the ENTRYPOINT command in the Dockerfile (see the sketch after the notes below).
9) This script.sh will be inside the image (base image) and will be configured to do whatever our purpose requires.
10) Before bringing this container up, you need to stop the previously running container.
Important notes:
An image is created every time you build and is stored in the registry.
Thus it can be reused. This image can be pushed to the main production server and run there.
This helps to maintain a clean environment every time, because we are using the base image.
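As a rough illustration of steps 6-9 (the base image tag and paths are placeholders, and script.sh is assumed to already live in the base image):

    # Dockerfile sketch that combines build.tgz with the environment image.
    FROM myorg/base-env-mysql:latest
    ADD build.tgz /opt/app/              # ADD unpacks the tarball into /opt/app
    ENTRYPOINT ["/opt/script.sh"]        # script.sh ships with the base image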
You can also create a stable CD pipeline with Bamboo and Docker. Bamboo Docker integrations come in both a build agent form and as a bundled task within the application. You might find this article helpful: http://blogs.atlassian.com/2015/06/docker-containers-bamboo-winning-continuous-delivery/
You can use the task to build a Docker image that you can use in another environment or deploy your application to a container.
Good luck!