I am trying to start 3 servers (and an Envoy load balancer) and then run a client, but how do you do that in a Makefile?
I have this Makefile:
envoy:
	envoy -c envoy.yaml

server1:
	go run ./server1/main.go

server2:
	go run ./server2/main.go

server3:
	go run ./server3/main.go

loadbalancer: envoy server1 server2 server3
	go run ./client/main.go
But it gets stuck on the stdout from the envoy command - I also need to start the 3 servers. Do you know how?
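One possible direction (a sketch, not part of the original question; it assumes the processes can simply be started in the background): each target's recipe blocks until its command exits, so the long-running processes could be backgrounded, for example:

# Sketch: each prerequisite backgrounds its process (&) so its recipe returns
# immediately; alternatively the targets could be run in parallel with make -j.
envoy:
	envoy -c envoy.yaml &

server1:
	go run ./server1/main.go &

server2:
	go run ./server2/main.go &

server3:
	go run ./server3/main.go &

loadbalancer: envoy server1 server2 server3
	go run ./client/main.go

With this, make loadbalancer launches Envoy and the three servers in the background and then runs the client; a short sleep or readiness check before the client may still be needed.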
Thanks
The issue is that I get the curl: (7) Failed to connect to localhost port 8090: Connection refused error in GitLab CI, but this does not happen on my laptop, where I get the source HTML of the webpage. The .gitlab-ci.yml below is a simple reproduction of the issue. I have spent numerous hours trying to figure this out - I'm sure someone else has too.
Aside: this isn't the same as a similar question, since that one doesn't offer a solution.
GitLab Repo: https://gitlab.com/mudassir-ahmed/wordpress-testing-with-gitlab-ci/tree/another-approach but the only file it contains is the .gitlab-ci.yml shown below...
image: docker:stable

variables:
  # When using dind service we need to instruct docker, to talk with the
  # daemon started inside of the service. The daemon is available with
  # a network connection instead of the default /var/run/docker.sock socket.
  #
  # The 'docker' hostname is the alias of the service container as described at
  # https://docs.gitlab.com/ee/ci/docker/using_docker_images.html#accessing-the-services
  #
  # Note that if you're using the Kubernetes executor, the variable should be set to
  # tcp://localhost:2375/ because of how the Kubernetes executor connects services
  # to the job container
  # DOCKER_HOST: tcp://localhost:2375/
  #
  # For non-Kubernetes executors, we use tcp://docker:2375/
  DOCKER_HOST: tcp://docker:2375/
  # When using dind, it's wise to use the overlayfs driver for
  # improved performance.
  DOCKER_DRIVER: overlay2

services:
  - docker:dind

before_script:
  - docker info

build:
  stage: build
  script:
    - apk update
    - apk add curl
    #- hostname -i
    - docker container ls
    - docker run -d -p 8090:80 --name nginx-server kitematic/hello-world-nginx
    - curl localhost:8090 # This works on my laptop but not on a GitLab runner.
Referring to the answer from here: gitlab-ci.yml & docker-in-docker (dind) & curl returns connection refused on shared runner
There are two ways to fix this:
Option 1: Replace localhost in curl localhost:8090 with docker, i.e. curl docker:8090
Option 2: Give the dind service a localhost alias:

services:
  - name: docker:dind
    alias: localhost

With the alias in place, the original script works unchanged:

docker run -d -p 8090:80 --name nginx-server kitematic/hello-world-nginx
curl localhost:8090
Assuming that is your code, I think you should add some kind of wait between docker run and curl.
I had similar issues some time ago: after starting a docker container on a GitLab runner machine I wasn't able to access my URL either. When I added a command that checks whether the container is running (for about one minute), it resolved my problem:
docker inspect -f '{{.State.Running}}' <containerName>
but in order to do that check, you have to add some additional script.
We are using Jenkins to restart our servers. The order of execution is as follows:
Stop server1
Stop server2
Start server2
Start server1
I have created a Jenkins 'Freestyle project', where server1 and server2 stop as expected and server2 starts as expected.
But the console hangs after server2 has started, so it never goes on to start server1.
There are no space-related issues in our Jenkins application.
Can anyone let me know how to resolve this issue?
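For illustration only (a sketch with hypothetical script names, not from the original post): if a start script stays in the foreground, the build step blocks at that point, which matches the symptom above. A shell build step that detaches the servers might look like:

# Hypothetical Jenkins "Execute shell" step; stop scripts run in the foreground,
# start scripts are detached so the console does not hang after starting server2.
./stop_server1.sh
./stop_server2.sh
BUILD_ID=dontKillMe nohup ./start_server2.sh > server2.log 2>&1 &
BUILD_ID=dontKillMe nohup ./start_server1.sh > server1.log 2>&1 &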
I want to set up redis configuration in docker.
I have my own redis.conf under D:/redis/redis.conf, in which I have set bind 127.0.0.1 and uncommented requirepass foobared.
Then I used this command to load this configuration in Docker:
docker run --volume D:/redis/redis.conf:/usr/local/etc/redis/redis.conf --name myredis redis redis-server /usr/local/etc/redis/redis.conf
Next,
I have a docker-compose.yml in my Maven project under src/resources.
I have the following in my docker-compose.yml:
redis:
  image: redis
  ports:
    - "6379:6379"
And I execute the command:
docker-compose up
The server runs, but when I check with the command:
docker ps -a
it shows that the redis image runs at 0.0.0.0:6379.
I want it to run at 127.0.0.1.
How do I get that?
Isn't my configuration file loading, or is it wrong? Or are my commands wrong?
Any suggestions are of great help.
PS: I am using Windows.
Thanks
Try to execute:
docker inspect <container_id>
And use "NetworkSettings"->"Gateway" (it must be 172.17.0.1) value instead of 127.0.0.1.
You can't use 127.0.0.1 as your Redis was run in the isolated environment.
Or you can link your containers.
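For example, the gateway can be read directly (a sketch; this field applies to the default bridge network and may differ with other network setups):

docker inspect -f '{{.NetworkSettings.Gateway}}' <container_id>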
So first of all, you should not be worried about redis saying it is listening on 0.0.0.0:6379, because redis is running inside the container, and if it didn't listen on 0.0.0.0 you wouldn't be able to make any connections at all.
Next, if you want redis to be reachable only on localhost of the host machine, then you need to use the below:

redis:
  image: redis
  ports:
    - "127.0.0.1:6379:6379"

PS: I have not run a container or Docker for Windows with a 127.0.0.1 port mapping, so you will have to see if it works, because host networking on Windows, Mac and Linux is different and it may not work this way.
I am able to run OpenWhisk on my local dev machine. I would like to extend this to a production environment. Is there any concept of an OpenWhisk cluster? I am not able to find good documentation on this. How is auto load balancing achieved, etc.?
OpenWhisk is deployed via ansible and as such can be distributed across multiple VMs in a straightforward way.
Check the README on distributed deployments for further information and guidance.
OpenWhisk is deployed with ansible.
I followed the approach below for my distributed setup.
First, ensure password-less SSH connectivity to all the servers.
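For example (a generic sketch, not from the original answer; it assumes an ubuntu user on each node):

# Generate a key pair once on the bootstrapper machine, then copy the public key to every node
ssh-keygen -t rsa
ssh-copy-id ubuntu@<node-ip>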
git clone https://github.com/apache/incubator-openwhisk.git
Add the remote_user and private_key_file values to the defaults section of the ansible.cfg file. The remote_user value sets the default ssh user. The private_key_file is required when using a private key that is not in the default ~/.ssh folder:
[defaults]
remote_user = ubuntu
private_key_file=/path/to/file.pem
Go to tools/ubuntu-setup and run all.sh to install all the required software.
Now modify the inventory file (hosts) for your first node; this can become your bootstrapper VM.
Check if you are able to ping the hosts: ansible all -i environments/distributed/hosts -m ping
If the ping is fine, run the next command to generate the config files: ansible-playbook -i environments/distributed/hosts setup.yml
To install the prerequisites: ansible-playbook -i environments/distributed prereq_build.yml
Deploy registry: ansible-playbook -i environments/distributed registry.yml
Go to the openwhisk home directory and run the following command to build OpenWhisk:
./gradlew distDocker -PdockerHost=:4243 -PdockerRegistry=:5000
Once the build is successful, run the following commands from the ansible folder:
ansible-playbook -i environments/distributed/hosts couchdb.yml
ansible-playbook -i environments/distributed/hosts initdb.yml
ansible-playbook -i environments/distributed/hosts wipe.yml
ansible-playbook -i environments/distributed/hosts openwhisk.yml
ansible-playbook -i environments/distributed/hosts postdeploy.yml
Now edit the hosts file for the other hosts and repeat steps 7-8 and 12.
This will create the setup on all the nodes. Once done, you can use a load balancer to balance across them. For sync between DB instances I am using CouchDB continuous replication.
I have built a docker image and started the container using ansible. I'm running into an issue trying to create a dynamic connection to the container from the docker host to set some environment variables and execute some scripts. I know ansible does not use ssh to connect to the container, where I could use the expect module to run the command "ssh root@localhost -p 12345". How do I add and maintain a connection to the container using the ansible docker connection plugin, or by pointing directly to the docker host? This is all running on an AWS EC2 instance.
I think I need to run ansible as an equivalent to this command used to connect to the container host: "docker exec -i -t container_host_server /bin/bash".
- name: Create a data container
  docker_container:
    name: packageserver
    image: my_image/image_name
    tty: yes
    state: started
    detach: yes
    volumes:
      - /var/www/html
    published_ports:
      - "12345:22"
      - "80:80"
  register: container
Thanks in Advance,
DT
To set environment variables you can use the "env" parameter in your docker_container task.
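For example (a sketch based on the task in the question; MY_VAR and its value are hypothetical):

docker_container:
  name: packageserver
  image: my_image/image_name
  state: started
  # env takes a dictionary of variable names to string values
  env:
    MY_VAR: "some_value"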
In the docker_container task you can also add the "command" parameter to override the command defined as CMD in the Dockerfile of your image, something like:
command: PathToYourScript && sleep infinity
In your example you expose container port 22, so it seems you want to run sshd inside the container. Although it's not a best practice in Docker, if you want sshd running you have to start it using the command parameter in the docker_container task:
command: ['/usr/sbin/sshd', '-D']
Doing that (and having defined a user in the container), you'll be able to connect to your container with
ssh -p 12345 user@dockerServer
or, as in your example, "ssh -p 12345 root@localhost" if your image already defines a root user and you are working on localhost.
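As an alternative to running sshd, the docker connection plugin mentioned in the question can target the running container directly. A minimal sketch (it assumes the container name packageserver from the task above; the script path and environment variable are hypothetical):

# Playbook sketch: add the container to the in-memory inventory, then run tasks inside it
- hosts: localhost
  gather_facts: no
  tasks:
    - name: Register the running container as a host that uses the docker connection plugin
      add_host:
        name: packageserver
        ansible_connection: docker

- hosts: packageserver
  gather_facts: no
  tasks:
    - name: Run a script inside the container with an environment variable set
      shell: /opt/scripts/configure.sh
      environment:
        APP_ENV: production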