Gitlab CI and Xamarin build fails

I've created a completely new Xamarin.Forms app project in Visual Studio for Mac and added it to a GitLab repository. After that I created a .gitlab-ci.yml file to set up my CI build. But the problem is that I get error messages:
error MSB4019: The imported project "/usr/lib/mono/xbuild/Xamarin/iOS/Xamarin.iOS.CSharp.targets" was not found. Confirm that the expression in the Import declaration "/usr/lib/mono/xbuild/Xamarin/iOS/Xamarin.iOS.CSharp.targets" is correct, and that the file exists on disk.
This error also pops up for Xamarin.Android.CSharp.targets.
My YML file looks like this:
image: mono:latest

stages:
  - build

build:
  stage: build
  before_script:
    - msbuild -version
    - 'echo BUILDING'
    - 'echo NuGet Restore'
    - nuget restore 'XamarinFormsTestApp.sln'
  script:
    - 'echo Cleaning'
    - MONO_IOMAP=case msbuild 'XamarinFormsTestApp.sln' $BUILD_VERBOSITY /t:Clean /p:Configuration=Debug
Some help would be appreciated ;)

You will need a macOS host to build a Xamarin.iOS application, and AFAIK that isn't available on GitLab's shared runners yet. You can find the discussion here and the private beta here. For now, I would recommend getting your own macOS host and registering a GitLab runner on that host:
https://docs.gitlab.com/runner/
You can set up the host wherever you want (a VM or a physical device), install the GitLab runner and the Xamarin environment there, tag it, and use it with GitLab pipelines like any other shared runner.
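As a rough sketch (URL, token, and tag name are placeholders, and the registration-token flow may differ on newer runner versions), registering the runner on the macOS host could look like this:

# On the macOS host, after installing gitlab-runner and the Xamarin toolchain:
gitlab-runner register \
  --url https://gitlab.com/ \
  --registration-token <your-project-registration-token> \
  --executor shell \
  --tag-list xamarin-mac

In the pipeline you would then route the build job to that runner via its tag, for example:

build:
  stage: build
  tags:
    - xamarin-mac
  script:
    - msbuild 'XamarinFormsTestApp.sln' /p:Configuration=Debug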

From the comments on your question, it looks like Xamarin isn't available in the mono:latest image, but that's OK because you can create your own Docker images to use in GitLab CI. You will need access to a registry, but if you use gitlab.com (as opposed to a self-hosted instance) the registry is enabled for all users. You can find more information on that in the docs: https://docs.gitlab.com/ee/user/packages/container_registry/
If you are using self-hosted, the registry is still available (even for free versions) but it has to be enabled by an admin (docs here: https://docs.gitlab.com/ee/administration/packages/container_registry.html).
Another option is to use Docker's own registry, Docker Hub. It doesn't matter what registry you use, but you'll have to have access to one of them so your runners can pull down your image. This is especially true if you're using shared runners that you (or your admins) don't have direct control over. If you can directly control your runners, another option is to build the docker image on all of your runners that need it.
I'm not familiar with Xamarin, but here's how you can create a new Docker image based on mono:latest:
# ./mono-xamarin/Dockerfile

# Build off of an existing image rather than starting from scratch.
# Everything in the mono:latest image will be available in this image.
FROM mono:latest

# Run whatever you need to in order to install Xamarin or anything else you need.
RUN ./install_xamarin.sh

# Just an example of installing an extra tool.
RUN apt-get update && apt-get install -y git
Once your Dockerfile is written, you can build it like this (note the trailing . for the build context):
docker build --file path/to/Dockerfile --tag mono-xamarin:latest .
If you build the image on your runners, you can use it immediately like:
# .gitlab-ci.yml
image: mono-xamarin:latest

stages:
  - build
...
Otherwise you can now push it to whichever registry you want to use.
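For example, pushing the image to the GitLab registry could look roughly like this (the <group>/<project> path is a placeholder for your own namespace):

# Log in once, then tag the locally built image with the registry path and push it.
docker login registry.gitlab.com
docker tag mono-xamarin:latest registry.gitlab.com/<group>/<project>/mono-xamarin:latest
docker push registry.gitlab.com/<group>/<project>/mono-xamarin:latest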

Related

Gitlab-CI multi-project-pipeline

Currently I'm trying to understand the GitLab CI multi-project pipeline.
I want to run a pipeline after another pipeline has finished.
Example:
I have one project, nginx, saved in the namespace baseimages, which contains some configuration like fastcgi_params. The CI file looks like this:
stages:
  - release
  - notify

variables:
  DOCKER_HOST: "tcp://localhost:2375"
  DOCKER_REGISTRY: "registry.mydomain.de"
  SERVICE_NAME: "nginx"
  DOCKER_DRIVER: "overlay2"

release:
  stage: release
  image: docker:git
  services:
    - docker:dind
  script:
    - docker build -t $SERVICE_NAME:latest .
    - docker tag $SERVICE_NAME:latest $DOCKER_REGISTRY/$SERVICE_NAME:latest
    - docker push $DOCKER_REGISTRY/$SERVICE_NAME:latest
  only:
    - master

notify:
  stage: notify
  image: appropriate/curl:latest
  script:
    - curl -X POST -F token=$CI_JOB_TOKEN -F ref=master https://gitlab.mydomain.de/api/v4/projects/1/trigger/pipeline
  only:
    - master
Now I want to have multiple projects rely on this image and have them rebuild when my base image changes, e.g. for a new nginx version.
            baseimage
                |
    ---------------------------
    |           |             |
project1    project2     project3
If I add a trigger to the other project and insert the generated token as $GITLAB_CI_TOKEN, the other pipeline starts, but there is no combined graph as shown in the documentation (https://docs.gitlab.com/ee/ci/multi_project_pipelines.html).
How is it possible to show the full pipeline graph?
Do I have to add every project which relies on my base image to the CI file of the base image, or is it possible to subscribe to the base image pipeline in each project?
Multi-project pipelines are a paid-for feature introduced in GitLab Premium 9.3, and can only be accessed using GitLab's Premium or Silver tiers.
You can see this indicated to the right of the document title in the GitLab docs.
After some more digging into the documentation, I found a sentence which states that GitLab CE provides the features marked as Core.
We have 50+ Gitlab packages where this is needed. What we used to do was push a commit to a downstream package, wait for the CI to finish, then push another commit to the upstream package, wait for the CI to finish, etc. This was very time consuming.
The other thing you can do is manually trigger builds and you can manually determine the order.
If none of this works for you or you want a better way, I built a tool to help do this called Gitlab Pipes. I used it internally for many months and realized that people need something like this, so I did the work to make it public.
Basically it listens to GitLab notifications, and when it sees a commit to a package, it reads the .gitlab-pipes.yml file to determine that project's dependencies. It can construct a dependency graph of your projects and build the consumer packages on downstream commits.
The documentation is here, it sort of tells you how it works. And then the primary app website is here.
If you click the version history ... on the multi_project_pipelines docs page, it reveals:
Made available in all tiers in GitLab 12.8.
Multi-project pipeline visualization as of 13.10-pre is marked as Premium; however, in my EE version the visualizations for downstream/upstream links are functional.
So see Triggering a downstream pipeline using a bridge job:
Before GitLab 11.8, it was necessary to implement a pipeline job that was responsible for making the API request to trigger a pipeline in a different project.
In GitLab 11.8, GitLab provides a new CI/CD configuration syntax to make this task easier, and avoid needing GitLab Runner for triggering cross-project pipelines. The following illustrates configuring a bridge job:
rspec:
  stage: test
  script: bundle exec rspec

staging:
  variables:
    ENVIRONMENT: staging
  stage: deploy
  trigger: my/deployment

Build docker image without docker installed

Is it somehow possible to build images without having Docker installed? On a Maven build of my project I'd like to produce a Docker image, but I don't want to force others to install Docker on their machines.
I can think of a VirtualBox image with Docker installed, but that is a kind of heavy solution. Is there some way to build the image with just a Maven plugin, some Go code, or an already prepared VirtualBox image for exactly this purpose?
It boils down to the question of how to use Docker without forcing users to install anything, either just for the build or even for running Docker images.
UPDATE
There are some, not really up-to-date, Maven plugins for virtual machine provisioning with Vagrant or with VirtualBox. I have also found an article about building Docker images without Docker using Bazel.
So far I see two options: either I somehow build the images only, or I run some VM with a Docker daemon inside (which can be used not only for builds, but also for integration tests).
We can create a Docker image without Docker being installed.
Jib Maven and Gradle Plugins
Google has an open source tool called Jib that is relatively new, but quite interesting for a number of reasons. Probably the most interesting thing is that you don't need docker to run it - it builds the image using the same standard output as you get from docker build but doesn't use docker unless you ask it to - so it works in environments where docker is not installed (not uncommon in build servers). You also don't need a Dockerfile (it would be ignored anyway), or anything in your pom.xml to get an image built in Maven (Gradle would require you to at least install the plugin in build.gradle).
Another interesting feature of Jib is that it is opinionated about layers, and it optimizes them in a slightly different way than the multi-layer Dockerfile created above. Just like in the fat jar, Jib separates local application resources from dependencies, but it goes a step further and also puts snapshot dependencies into a separate layer, since they are more likely to change. There are configuration options for customizing the layout further.
Please refer to this link: https://cloud.google.com/blog/products/gcp/introducing-jib-build-java-docker-images-better
For an example with Spring Boot, refer to https://spring.io/blog/2018/11/08/spring-boot-in-a-container
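As a minimal sketch (the plugin version and target image are placeholders; check the Jib docs for a current release), the Maven side is just the jib-maven-plugin in your pom.xml pointed at a registry:

<plugin>
  <groupId>com.google.cloud.tools</groupId>
  <artifactId>jib-maven-plugin</artifactId>
  <version>3.4.0</version>
  <configuration>
    <to>
      <!-- Placeholder target image; replace with your own registry/namespace. -->
      <image>registry.example.com/my-group/my-app</image>
    </to>
  </configuration>
</plugin>

Running mvn compile jib:build then builds and pushes the image without a local Docker daemon; mvn compile jib:dockerBuild builds into a local daemon if you do have one.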
Have a look at the following tools:
Fabric8-maven-plugin - http://maven.fabric8.io/ - good maven integration, uses a remote docker (openshift) cluster for the builds.
Buildah - https://github.com/containers/buildah - builds without a docker daemon but does have other pre-requisites.
Fabric8-maven-plugin
The fabric8-maven-plugin brings your Java applications onto Kubernetes and OpenShift. It provides tight integration with Maven and benefits from the build configuration already provided. This plugin focuses on two tasks: building Docker images and creating Kubernetes and OpenShift resource descriptors.
fabric8-maven-plugin seems particularly appropriate if you have a Kubernetes / Openshift cluster available. It uses the Openshift APIs to build and optionally deploy an image directly to your cluster.
I was able to build and deploy their zero-config spring-boot example extremely quickly, no Dockerfile necessary, just write your application code and it takes care of all the boilerplate.
Assuming you have the basic setup to connect to OpenShift from your desktop already, it will package up the project .jar in a container and start it on Openshift. The minimum maven configuration is to add the plugin to your pom.xml build/plugins section:
<plugin>
  <groupId>io.fabric8</groupId>
  <artifactId>fabric8-maven-plugin</artifactId>
  <version>3.5.41</version>
</plugin>
then build+deploy using
$ mvn fabric8:deploy
If you require more control and prefer to manage your own Dockerfile, it can handle this too; this is shown in samples/secret-config.
Buildah
Buildah is a tool that facilitates building Open Container Initiative (OCI) container images. The package provides a command line tool that can be used to:
- create a working container, either from scratch or using an image as a starting point
- create an image, either from a working container or via the instructions in a Dockerfile
- build images in either the OCI image format or the traditional upstream docker image format
- mount a working container's root filesystem for manipulation
- unmount a working container's root filesystem
- use the updated contents of a container's root filesystem as a filesystem layer to create a new image
- delete a working container or an image
- rename a local container
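A minimal sketch of both styles (image and container names are just examples):

# Build from an existing Dockerfile, no Docker daemon required:
buildah bud -t my-image:latest .

# Or build up an image step by step from a working container:
ctr=$(buildah from mono:latest)
buildah run "$ctr" -- apt-get update
buildah commit "$ctr" my-image:latest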
I don't want to force others to install docker on their machines.
If by "without Docker installed" you mean without having to install Docker locally on every machine running the build, you can leverage the Docker Engine API which allow you to call a Docker Daemon from a distant host.
The Docker Engine API is a RESTful API accessed by an HTTP client such as wget or curl, or the HTTP library which is part of most modern programming languages.
For example, the Fabric8 Docker Maven Plugin does just that using the DOCKER_HOST parameter. You'll need a recent Docker version and you'll have to configure at least one Docker daemon properly so it can securely accept remote requests (there are a lot of resources on this subject, such as the official docs, here or here). From then on, your Docker build can be done remotely without having to install Docker locally.
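A rough sketch of what that looks like from the client side (host name, port, and cert path are placeholders; 2376 is the conventional TLS port):

# Point the local docker CLI at a remote daemon secured with TLS.
export DOCKER_HOST=tcp://docker-host.example.com:2376
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH=~/.docker/remote-certs

# This build now runs on the remote daemon, not on the local machine.
docker build -t my-app:latest .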
Google has released Kaniko for this purpose. It should be run as a container, whether in Kubernetes, Docker or gVisor.
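A minimal sketch of a GitLab CI job using kaniko, roughly following the pattern in the GitLab docs (adjust the destination tag to your needs):

build-image:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  script:
    - mkdir -p /kaniko/.docker
    - echo "{\"auths\":{\"$CI_REGISTRY\":{\"username\":\"$CI_REGISTRY_USER\",\"password\":\"$CI_REGISTRY_PASSWORD\"}}}" > /kaniko/.docker/config.json
    - /kaniko/executor --context "$CI_PROJECT_DIR" --dockerfile "$CI_PROJECT_DIR/Dockerfile" --destination "$CI_REGISTRY_IMAGE:latest"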
I was running into the same problems and did not find any solution, so I developed odagrun, a runner for GitLab with an integrated registry API, DockerHub and MicroBadger updates, etc.
It is open source and has an MIT license.
It is ideal for creating a Docker image on the fly, without the need for a Docker daemon, a root account, or any base image at all (image: scratch will do). It is currently still in development, but I use it every day.
Requirements:
- a project repository on GitLab
- an OpenShift cluster (an openshift-online-starter will do for most medium/small projects)
Here is an extract showing how the Docker image for this project was created:
# create and push image to ImageStream:
build_rootfs:
  image: centos
  stage: build-image
  dependencies:
    - build
  before_script:
    - mkdir -pv rootfs
    - cp -v output/oc-* rootfs/
    - mkdir -pv rootfs/etc/pki/tls/certs
    - mkdir -pv rootfs/bin-runner
    - cp -v /etc/pki/tls/certs/ca-bundle.crt rootfs/etc/pki/tls/certs/ca-bundle.crt
    - chmod -Rv 777 rootfs
  tags:
    - oc-runner-shared
  script:
    - registry_push --rootfs --name=test-$CI_PIPELINE_ID --ISR --config

Is it possible to do Docker-compose pull from Gitlab CI?

I have images in my docker-compose.yml file that I would like to pull when my CI runner is executing the build.
Every time I try, I run into this:
Pulling web (registry.gitlab.com/xxxxx/xxxxx/crm:latest)...
Pulling repository registry.gitlab.com/xxxxx/xxxxx/crm
Error: image xxxxx/xxxxx/crm:latest not found
You have to log in to the registry first:
- docker login -u "gitlab-ci-token" -p "$CI_JOB_TOKEN" $CI_REGISTRY
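In the .gitlab-ci.yml that could look roughly like this (the job name and compose invocation are placeholders, and it assumes docker-compose is available in the job image):

deploy:
  image: docker:latest
  services:
    - docker:dind
  before_script:
    - docker login -u "gitlab-ci-token" -p "$CI_JOB_TOKEN" $CI_REGISTRY
  script:
    - docker-compose pull
    - docker-compose up -d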
To build on #llya-kuchaev's answer, the related GitLab docs are here: https://about.gitlab.com/2016/05/23/gitlab-container-registry/
Your version of GitLab might change the environment variables you need to use. In version 9+ use $CI_JOB_TOKEN; in earlier versions use $CI_BUILD_TOKEN. See https://docs.gitlab.com/ee/ci/variables/ for all the changes that were made in v9 (there were loads).

How to get app Incremental deployment to kubernetes

I tried to set up Continuous Deployment using Jenkins for my own microservice project, which is organized as a multi-module Maven project (each submodule representing a microservice). I use "Incremental build - only build changed modules" in Jenkins to avoid unnecessary building, and then use docker-maven-plugin to build the Docker images. However, how can I redeploy only the changed images to the Kubernetes cluster?
You can use a local Docker image registry:
docker run -d -p 5000:5000 --restart=always --name registry registry:2
You can then push the development images to this registry as a build step and make your kubernetes containers use this registry.
After you are ready, push the image to your production image registry and adjust the container manifests to use the proper registry.
More info on private registry server: https://docs.docker.com/registry/deploying/
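For example, a build step pushing a development image to that local registry could look like this (the image name is a placeholder):

# Tag the freshly built image for the local registry and push it.
docker tag my-service:latest localhost:5000/my-service:latest
docker push localhost:5000/my-service:latest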
Currently Kubernetes does not provide a proper solution for this, but there are a few workarounds mentioned here: https://github.com/kubernetes/kubernetes/issues/33664
I like this one: 'Fake a change to the Deployment by changing something other than the image'. We can do it this way:
Define an environment variable, say "TIMESTAMP", with any value in the deployment manifest. In the CI/CD pipeline we set the value to the current timestamp and then pass this updated manifest to kubectl apply. This way we are faking a change, and Kubernetes will pull the latest image and deploy it to the cluster. Please make sure that imagePullPolicy: Always is set.
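A minimal sketch of that trick (the names and the __TIMESTAMP__ placeholder token are illustrative):

# deployment.yaml (fragment)
spec:
  template:
    spec:
      containers:
        - name: my-service
          image: registry.example.com/my-service:latest
          imagePullPolicy: Always
          env:
            - name: TIMESTAMP
              value: "__TIMESTAMP__"

and in the CI/CD pipeline:

# Replace the placeholder with the current timestamp, then apply the manifest.
sed -i "s/__TIMESTAMP__/$(date +%s)/" deployment.yaml
kubectl apply -f deployment.yaml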

Simple docker deployment tactics

Hey guys, so I've spent the past few days really digging into Docker and I've learned a ton. I'm getting to the point where I'd like to deploy to a DigitalOcean droplet, but I'm starting to wonder about the strategy of building/deploying an image.
I have a perfect Dev setup where I've created a file volume tied to my app.
docker run -d -p 80:3000 --name pug_web -v $DIR/app:/Development test_web
I'd hate to have to run the app in production out of the /Development folder, where I'm actually building the app. This is a nodejs/express app and I'd love to concat/minify/etc. into a local dist folder and add that build folder to a new dist-ready image.
I guess what I'm asking is: A) Can I have different Dockerfiles, one for dev and one for dist? If not, B) can I have if statements in my Dockerfile that would do something like... if ENV == 'dist', add /dist... etc.
I'm struggling to figure out how to move this from a Dev environment locally to a tightened up production ready image without any conditionals.
I do both.
My Dockerfile checks out the code for the application from Git. During development I mount a volume over the top of this folder with the version of the code I'm working on. When I'm ready to deploy to production, I just check into Git and re-build the image.
I also have a script that is executed from the ENTRYPOINT command. The script looks at the environment variable "ENV" and if it is set to "DEV" it will start my development server with debugging turned on, otherwise it will launch the production version of the server.
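A sketch of such an entrypoint script (the server commands are placeholders for whatever your app actually uses):

#!/bin/sh
# entrypoint.sh -- pick the server mode based on the ENV variable.
if [ "$ENV" = "DEV" ]; then
    # Development server with debugging turned on (placeholder command).
    exec npm run dev
else
    # Production server (placeholder command).
    exec node dist/server.js
fi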
Alternatively, you can avoid using Docker in development and instead have a Dockerfile at the root of your repo. You can then use your CI server to build the image (in our case Jenkins, but Docker Hub also allows for automated build repositories that can do that for you, if you're a small team or don't have access to a dedicated build server).
Then you can just pull the image and run it on your production box.
