Install GitLab on Windows with Docker

I searched a lot and found that GitLab Community Edition cannot be installed natively on Windows, so now I want to install it with the help of Docker. I do not know whether this is possible or how to do it.

You need to install Docker for Windows.
Share a drive with Docker (in Docker's settings > Shared Drives). For example, drive E:.
Then create 3 directories on drive E: (e:\gitlab\config, e:\gitlab\logs, e:\gitlab\data).
From the Command Prompt, run:
docker run --detach --hostname gitlab.yourdomain.ru ^
  --publish 443:443 --publish 80:80 --publish 22:22 --name gitlab ^
  --restart always --volume e:\gitlab\config:/etc/gitlab ^
  --volume e:\gitlab\logs:/var/log/gitlab ^
  --volume e:\gitlab\data:/var/opt/gitlab gitlab/gitlab-ce:latest
That's it! You have now successfully run the GitLab image.
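As a rough sketch, the directory setup above can also be scripted from a Unix-style shell on Windows (Git Bash or WSL). The `./gitlab` base used here is a safe stand-in for `e:\gitlab`, which those shells see as `/e/gitlab`:

```shell
#!/bin/sh
# Create the three host directories GitLab expects before the first run.
# BASE stands in for e:\gitlab (seen as /e/gitlab from Git Bash / WSL).
BASE="./gitlab"
for d in config logs data; do
  mkdir -p "$BASE/$d"
done
ls "$BASE"
```

Once the directories exist, the `docker run` command above can bind-mount them as `/etc/gitlab`, `/var/log/gitlab`, and `/var/opt/gitlab`.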

Yes, you can run gitlab-ce on Windows using Docker. First, make sure Docker is installed on Windows; if not, install it.
Detailed documentation on how to run GitLab using Docker can be found under GitLab Docker images, including how to access the web interface.

You can check the GitLab documentation, in particular the "Expose GitLab on different ports" section.
Before starting the installation, create 3 folders named "config", "data", and "logs" inside a "gitlab" folder, and run your gitlab-ce image with the docker run command. GitLab has to be running first.
Note that I will use port 8082 for the GitLab server. You can change it to any other port number.
1- Open cmd and display your IP address. You need to look for the IPv4 Address of your network adapter:
ipconfig
2- Run your gitlab-ce image with this command:
docker run --detach --hostname YOUR-IP-ADDRESS --publish 8082:8082 --publish 2282:22 --name gitlab --restart always --volume D:\DevOps\Gitlab\config:/etc/gitlab --volume D:\DevOps\Gitlab\logs:/var/log/gitlab --volume D:\DevOps\Gitlab\data:/var/opt/gitlab gitlab/gitlab-ce:latest
3- In the container's terminal (in the Docker GUI application, press the "CLI" button), go to the configuration directory:
cd /etc/gitlab
nano gitlab.rb
4- Go to the end of the gitlab.rb file and add these lines:
external_url "http://your-ip-address:8082"
gitlab_rails['gitlab_shell_ssh_port'] = 2282
5- After saving and closing the gitlab.rb file, run this command to reconfigure:
gitlab-ctl reconfigure
6- Remove your docker container and run it again with this command:
docker run --detach --hostname YOUR-IP-ADDRESS --publish 8082:8082 --publish 2282:22 --name gitlab --restart always --volume D:\DevOps\Gitlab\config:/etc/gitlab --volume D:\DevOps\Gitlab\logs:/var/log/gitlab --volume D:\DevOps\Gitlab\data:/var/opt/gitlab gitlab/gitlab-ce:latest
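For reference, step 4's edit can also be done non-interactively instead of with nano. This sketch appends the same two lines with a heredoc; it writes to a local `./gitlab.rb` by default so it is safe to try anywhere, while inside the container the real file is `/etc/gitlab/gitlab.rb`:

```shell
#!/bin/sh
# Append the external_url and SSH-port settings from step 4 to gitlab.rb.
# GITLAB_RB defaults to a local file for safety; inside the container,
# point it at /etc/gitlab/gitlab.rb instead.
GITLAB_RB="${GITLAB_RB:-./gitlab.rb}"
cat >> "$GITLAB_RB" <<'EOF'
external_url "http://your-ip-address:8082"
gitlab_rails['gitlab_shell_ssh_port'] = 2282
EOF
```

After appending, `gitlab-ctl reconfigure` picks the settings up, exactly as in step 5.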

I found the solution here; there is an issue related to volumes when installing on Docker for Windows:
https://gitlab.com/gitlab-org/omnibus-gitlab/-/issues/2280

Use the following docker-compose file:
web:
  image: 'gitlab/gitlab-ce:13.7.1-ce'
  restart: always
  hostname: 'localhost'
  environment:
    GITLAB_OMNIBUS_CONFIG: |
      #KO gitlab_rails['initial_root_password'] = 'adminadmin'
      gitlab_rails['gitlab_shell_ssh_port'] = 2222
      external_url 'http://localhost'
  ports:
    - '8185:80'
    - '1443:443'
    - '2222:22'
  volumes:
    - '/srv/gitlab/config:/etc/gitlab'
    - '/srv/gitlab/logs:/var/log/gitlab'
    # important: do not mount /var/opt/gitlab but /var/opt, as stated here:
    # https://gitlab.com/gitlab-org/omnibus-gitlab/-/issues/2280
    - '/srv/gitlab/data:/var/opt'
Run (cygwin) docker ps | grep gitlab until the status (healthy) is shown, then open a browser at http://localhost:8185.
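That wait-until-healthy polling can be sketched as a small shell function; the container name `gitlab` and the `(healthy)` status text come from the setup above, while the retry count and interval are arbitrary choices:

```shell
#!/bin/sh
# Poll `docker ps` until the named container reports (healthy), or give up.
wait_until_healthy() {
  name="$1"; tries="${2:-60}"
  i=0
  while [ "$i" -lt "$tries" ]; do
    status=$(docker ps --filter "name=$name" --format '{{.Status}}')
    case "$status" in
      *"(healthy)"*) echo healthy; return 0 ;;
    esac
    i=$((i + 1))
    sleep 5
  done
  echo timeout; return 1
}
# Usage: wait_until_healthy gitlab && echo "open http://localhost:8185"
```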
If you are not asked to change the root password on first login,
set it like this (cygwin):
docker exec -it $(docker ps | grep gitlab | awk '{print $1}') bash
root@dev:/# gitlab-rails console -e production
--------------------------------------------------------------------------------
Ruby: ruby 2.7.2p137 (2020-10-01 revision 5445e04352) [x86_64-linux]
GitLab: 13.12.5 (f37a805b0b4) FOSS
GitLab Shell: 13.18.0
PostgreSQL: 12.6
--------------------------------------------------------------------------------
Loading production environment (Rails 6.0.3.6)
irb(main):001:0> user = User.where(id: 1).first => #<User id:1 @root>
irb(main):002:0> user.password = 'adminadmin' => "adminadmin"
irb(main):003:0> user.password_confirmation = 'adminadmin' => "adminadmin"
irb(main):004:0> user.save Enqueued ActionMailer::MailDeliveryJob (Job ID: d5dce701-2a79-4bed-b0a4-2abb877c2081) to Sidekiq(mailers) with arguments: "DeviseMailer", "password_change", "deliver_now", {:args=>[#<GlobalID:0x00007f621582b210 @uri=#<URI::GID gid://gitlab/User/1>>]}
=> true
irb(main):005:0> exit
Then log in, create a user, give them a first password, log in with it and update the password, create a project, and use the project's git URL rather than the http one, since the use of a non-standard port seems to cause some trouble with the http URL. Generating a public/private key pair and registering the public one in GitLab might be required.
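The key-pair step can be sketched as follows; port 2222 and localhost come from the compose file above, while the key filename and the group/project path are made up for illustration:

```shell
#!/bin/sh
# Generate a key pair for GitLab and show the ssh:// remote form that a
# non-standard SSH port requires (plain git@host:path syntax cannot carry a port).
ssh-keygen -t ed25519 -f ./gitlab_key -N '' -C 'gitlab-demo' -q
cat ./gitlab_key.pub   # paste this into GitLab under Preferences > SSH Keys
echo 'ssh://git@localhost:2222/mygroup/myproject.git'
```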

Related

Docker compose to have executed 'command: bash' and keep container open

The docker compose yml file below keeps the container open after I run docker compose up -d, but command: bash does not get executed:
version: "3.8"
services:
  service:
    container_name: pw-cont
    image: mcr.microsoft.com/playwright:v1.30.0-focal
    stdin_open: true # -i
    tty: true # -t
    network_mode: host # --network host
    volumes: # Ensures that updates in local/container are in sync
      - $PWD:/playwright/
    working_dir: /playwright
    command: bash
After I spin the container up, I want to visit Docker Desktop > the running container's terminal.
Expectation: Since the file has command: bash, I expect that in Docker Desktop, when I go to the running container's terminal, it will show root@docker-desktop:/playwright#.
Actual: The container's terminal in Docker Desktop shows #, and I still need to type bash to see root@docker-desktop:/playwright#.
Can the yml file be updated so that bash gets auto executed when spinning up the container?
docker compose doesn't provide that sort of interactive connection. Your docker-compose.yaml file is fine; once your container is running, you can attach to it using docker attach pw-cont to access the container's stdin/stdout.
$ docker compose up -d
[+] Running 1/1
⠿ Container pw-cont Started 0.1s
$ docker attach pw-cont
root@rocket:/playwright#
root@rocket:/playwright#
I'm not sure what you are trying to achieve, but using the run command
docker-compose run service
gives me the prompt you expect.

how to run docker with keycloak image as a daemon in prod environment

I am using Docker to run my Keycloak server in an AWS production environment. The problem is that Keycloak uses WildFly, which runs continuously, so I cannot close the shell. I am trying to find a way to run the container as a daemon.
The command I use to run the container:
docker run -p 8080:8080 jboss/keycloak
Just use Docker's detach option -d:
docker run -p 8080:8080 -d jboss/keycloak

Execute docker commands in jenkins (in docker container)

With docker compose I launch a Jenkins container, and I want to be able to execute docker commands from it (Docker is installed on the server).
But when I tried a simple test, running the hello-world image, I got the following error:
docker: Cannot connect to the Docker daemon. Is the docker daemon running on this host?.
I added the user to the docker group; what's wrong with my docker compose file?
In another post I saw that if I add this line:
/var/run/docker.sock:/var/run/docker.sock
my Jenkins container can communicate with Docker.
My docker compose file:
jenkins:
  image: jenkins:2.32.3
  ports:
    - 8088:8080
    - 50000:50000
  volumes:
    - /home/my-user-name/docker-jenkins/jenkins_home:/var/jenkins_home
    - /var/run/docker.sock:/var/run/docker.sock
    - /usr/bin/docker:/usr/bin/docker
    - /tmp:/tmp
To access the docker.sock file, you must run with a user that has filesystem access to read and write to this socket. By default that's with the root user and/or the docker group on the host system.
When you mount this file into the container, the mount keeps the same uid/gid permissions on the file, but those IDs may map to different users inside your container. Therefore, you should create a group inside the container, as part of your Dockerfile, with the same gid that exists on the host, and assign your jenkins user to this group so that it has access to the docker.sock. Here's an example from a Dockerfile where I do this:
...
ARG DOCKER_GID=993
RUN groupadd -g ${DOCKER_GID} docker \
&& useradd -m -d /home/jenkins -s /bin/sh jenkins \
&& usermod -aG docker jenkins
...
In the above example, 993 is the docker gid on my host.
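To look that gid up on your own host (993 above is machine-specific), a helper along these lines works; on a real Docker host you would ask for the `docker` group, and `root` (gid 0) is used here only because it exists everywhere:

```shell
#!/bin/sh
# Print the gid of a named group, i.e. the value to pass as DOCKER_GID.
gid_of() {
  getent group "$1" | cut -d: -f3
}
gid_of "${1:-docker}"
```

The printed number is what you pass as the `DOCKER_GID` build argument (`docker build --build-arg DOCKER_GID=...`).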

How to mount windows folder using docker compose volumes?

How to mount windows folder using docker compose volumes?
I am trying to set up docker container using docker-compose.
My docker-compose.yml file looks as follow:
php-fpm:
build: php-fpm
container_name: php-fpm
volumes:
- ../project:/var/www/dev
When I enter the container like this:
docker exec -it php-fpm bash
And display the contents with the ls command, the /var/www/dev directory is empty.
Does anyone know the solution for this?
$ docker -v
Docker version 1.12.0, build 8eab29e
$ docker-compose -v
docker-compose version 1.8.0, build d988a55
I have Windows 10 and docker is installed via Docker ToolBox 1.12.0
Edit:
The mounted directory is also empty under a Linux environment.
I fixed it by going to Local Security Policy > Network List Manager Policies, double-clicking Unidentified Networks, changing the location type to Private, and restarting Docker. Source

Docker containers on travis-ci

I have a Dockerfile that I am attempting to test using RSpec, serverspec and docker-api. Locally (using boot2docker, as I am on OS X) this works great and all my tests pass, but on travis-ci none of the tests pass. My .travis.yml file is as such:
language: ruby
rvm:
- "2.2.0"
sudo: required
cache: bundler
services:
- docker
before_install:
- docker build -t tomasbasham/nginx .
- docker run -d -p 80:80 -p 443:443 --name nginx -v $(pwd)/nginx.conf:/etc/nginx/nginx.conf tomasbasham/nginx
script: bundle exec rspec
Am I doing something noticeably wrong here? The repository I have created and run on travis-ci is on GitHub. There may be something else amiss that I am unaware of.
tl;dr
A container MUST run its program in the foreground.
Your Dockerfile is the issue. From the GitHub repository you provided, the Dockerfile content is:
# Dockerfile for nginx with configurable persistent volumes
# Select nginx as the base image
FROM nginx
MAINTAINER Tomas Basham <me@tomasbasham.co.uk>
# Install net-tools
RUN apt-get update -q && \
apt-get install -qy net-tools && \
apt-get clean
# Mount configurable persistent volumes
VOLUME ["/etc/nginx/sites-enabled", "/etc/nginx/certs", "/etc/nginx/conf.d", "/var/log/nginx", "/var/www/html"]
# Expose both HTTP and HTTPS ports
EXPOSE 80 443
# ENTRYPOINT
ENTRYPOINT ["service", "nginx", "start"]
Before debugging your RSpec/serverspec tests, you should make sure the docker image is able to run a container at all.
Type these commands from the directory containing the Dockerfile:
docker build -t tmp .
docker run --rm -it tmp
If you get your shell prompt back, that means your container stopped running. If your container isn't staying up, your test suite will fail.
What's wrong with the Dockerfile
The entrypoint you defined, ENTRYPOINT ["service", "nginx", "start"], executes a command that will, in turn, start the nginx program in the background. This means the process initially run by docker (/bin/service) terminates, and docker detects that and stops your container.
To run nginx in the foreground, one must run nginx -g 'daemon off;', as you can see in the Dockerfile for the official nginx image. But since you put daemon off; in your nginx.conf file, you should be fine with just nginx.
I suggest you remove the entrypoint from your Dockerfile (and also remove the daemon off; from your nginx config) and it should work fine.
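With that fix applied (the ENTRYPOINT line dropped, and daemon off; removed from nginx.conf), the Dockerfile would reduce to roughly this sketch; the base image's default command (nginx -g 'daemon off;') then keeps the container in the foreground:

```dockerfile
# Sketch of the corrected Dockerfile: no ENTRYPOINT, so the nginx base
# image's default foreground command takes over.
FROM nginx
RUN apt-get update -q && \
    apt-get install -qy net-tools && \
    apt-get clean
VOLUME ["/etc/nginx/sites-enabled", "/etc/nginx/certs", "/etc/nginx/conf.d", "/var/log/nginx", "/var/www/html"]
EXPOSE 80 443
```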
serverspec
Once you get a container which runs, you will have to focus on the serverspec part on which I'm not experienced with.
