Single instance of a marathon application creates multiple docker containers - mesos

I have followed the sample JSON app config at https://mesosphere.github.io/marathon/docs/native-docker.html#bridged-networking-mode (modified the instance count to 1 instead of 2) on the following local setup on my MacBook Pro running macOS:
mesos-1.9.0 (downloaded source and built locally)
zookeeper-3.4.8 (packaged as a 3rd party framework with mesos-1.9.0 above)
marathon-1.5.0-96 (downloaded source from mesosphere github and built locally)
With a single instance of the bridged Python webapp, I have observed that multiple Docker containers are created:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
120bf164b817 python:3 "/bin/sh -c 'python3…" 20 seconds ago Up 19 seconds 0.0.0.0:31532->161/udp, 0.0.0.0:31531->8080/tcp mesos-eb3765bb-2c98-4cdf-8dbe-95ef33bdd58b
eee64f6d845b python:3 "/bin/sh -c 'python3…" About a minute ago Up About a minute 0.0.0.0:31733->161/udp, 0.0.0.0:31732->8080/tcp mesos-c17f8df0-f7a3-4352-a266-c2bf74c211fa
5dc28a7457e2 python:3 "/bin/sh -c 'python3…" 2 minutes ago Up 2 minutes 0.0.0.0:31811->161/udp, 0.0.0.0:31810->8080/tcp mesos-d44f0ff6-73a1-4609-bc9a-2a32330fc37e
I don't think this is the expected behavior; for a single Marathon app instance, only one Docker container should be created.
Please help me fix this if my observation is true, or correct my understanding otherwise.
TIA.

I assume you can use the UNIQUE operator of constraints:
https://mesosphere.github.io/marathon/docs/constraints.html
e.g.:
$ curl -X POST -H "Content-type: application/json" localhost:8080/v2/apps -d '{
"id": "sleep-unique",
"cmd": "sleep 60",
"instances": 3,
"constraints": [["hostname", "UNIQUE"]]
}'
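It is also worth comparing what Marathon itself thinks is running against the docker ps output. As a rough check (assuming the app id bridged-webapp from the sample config; substitute your own), query the app from the Marathon API and look at its tasks array. If Marathon reports one task while docker ps shows three containers, the extra containers are likely leftovers from earlier task launches that failed or were replaced:
curl -s http://localhost:8080/v2/apps/bridged-webapp | python -m json.tool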

Related

Trouble creating a new container in Docker. Error response from daemon: Conflict. The container name is already in use by container

I'm running the intro tutorial for Docker on Mac and am getting an error as follows:
docker run -d -p 80:80 --name docker-tutorial docker101tutorial
docker: Error response from daemon: Conflict. The container name "/docker-tutorial" is already in use by container "c5a91ef51a529a00dcbef180560dc2b392f3d9ab05b8c29fa1bf640d64271de7". You have to remove (or rename) that container to be able to reuse that name. See 'docker run --help'.
Can you advise on this error? It seems that I would need to delete a prior container, but I don't believe I created one.
Can anyone please advise how to troubleshoot this issue, as I am not very proficient in the terminal and am new to Docker.
When I type docker ps -a, I get:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f5ed32612a0a ubuntu "bash" 27 minutes ago Exited (129) 22 minutes ago happy_tesla
b179c651b8d7 hello-world "/hello" 40 minutes ago Exited (0) 40 minutes ago mystifying_rubin
c5a91ef51a52 docker101tutorial "/docker-entrypoint.…" 42 minutes ago Created docker-tutorial
916e57976203 hello-world "/hello" 48 minutes ago Exited (0) 48 minutes ago exciting_dewdney
To make it short, the reason this is happening is that when you name containers (with the flag --name foo), you have to make sure the name is unique among all the containers on your host.
Now, regarding your statement:
Can you advise on this error - it seems that I would need to delete a prior container? But I don't believe I created one
Reading your docker ps -a output, this is not true: you did create one, 42 minutes ago. See the very last bit of the line below? That is the name of an existing container, docker-tutorial:
c5a91ef51a52 docker101tutorial "/docker-entrypoint.…" 42 minutes ago Created docker-tutorial
Just run:
docker rm docker-tutorial
Then you should be able to go back to your tutorial.
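If you would rather keep that container around, the other option the error message offers works too: renaming the old container frees up the name (docker-tutorial-old here is just an arbitrary new name):
docker rename docker-tutorial docker-tutorial-old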
For the sake of completeness, since it can be surprising on first use: the docker rm command echoes back the name of the container it just deleted:
$ docker rm I-do-exist
I-do-exist
And if you do not have such a named container, then it will output a clear error:
$ docker rm I-do-not-exist
Error: No such container: I-do-not-exist
Given that the command is docker run and not run, I suspect there might be a typo, maybe a non-printable character.
Try typing the complete command from a fresh prompt.
Please post the command you are running again, with the backslash removed.
Please also post the output of docker ps -a; it will show which containers exist and whether they are running or stopped.
You can solve your problem with just two commands.
In your terminal, type:
docker ps -a
to find the name you gave your Docker container and check that its status says 'Exited' (e.g. a container called 'Zen-wu').
Then take that container's ID and remove it, as in the example below:
docker rm 828a52b426f2
Optional:
If you want to remove all the exited Docker containers, use the following command (it attempts to remove every container, and running ones are simply refused):
docker rm $(docker ps -qa)
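On Docker 1.13 and later there is also a purpose-built command that removes only stopped containers and asks for confirmation first:
docker container prune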

Typing two letters at the same time causes docker exec -it shell to exit abruptly

I'm running Docker Toolbox on VirtualBox on Windows 10.
I'm having an annoying issue where, if I docker exec -it mycontainer sh into a container to inspect things, the shell will abruptly exit back to the host shell at random while I'm typing commands. Some experimenting reveals that pressing two letters at the same time (as is common when touch typing) causes the exit.
The container will still be running.
Any ideas what this is?
More details
Here's the minimal Docker image I'm running inside. Essentially, I'm trying to deploy Kubernetes clusters to AWS via kops, but because I'm on Windows, I have to use a container to run the kops commands.
FROM alpine:3.5

# install aws-cli
RUN apk add --no-cache \
    bind-tools \
    python \
    python-dev \
    py-pip \
    curl
RUN pip install awscli

# install kubectl
RUN curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
RUN chmod +x ./kubectl
RUN mv ./kubectl /usr/local/bin/kubectl

# install kops
RUN curl -LO https://github.com/kubernetes/kops/releases/download/$(curl -s https://api.github.com/repos/kubernetes/kops/releases/latest | grep tag_name | cut -d '"' -f 4)/kops-linux-amd64
RUN chmod +x kops-linux-amd64
RUN mv kops-linux-amd64 /usr/local/bin/kops
I build this image:
docker build -t mykube .
I run this in the working directory of the project I'm trying to deploy:
docker run -dit -v "${PWD}":/app mykube
I exec into the shell:
docker exec -it $containerid sh
Inside the shell, I start running AWS commands as per here.
Here's some example output:
##output of previous dig command
;; Query time: 343 msec
;; SERVER: 10.0.2.3#53(10.0.2.3)
;; WHEN: Wed Feb 14 21:32:16 UTC 2018
;; MSG SIZE rcvd: 188
##me entering a command
/ # aws s3 mb s3://clus
##shell exits abruptly to host shell while I'm writing
DavidJ@DavidJ-PC001 MINGW64 ~/git-workspace/webpack-react-express (master)
##container is still running
$ docker ps --all
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
37a341cfde83 mykube "/bin/sh" 5 minutes ago Up 3 minutes gifted_bhaskara
##nothing in docker logs
$ docker logs --details 37a341cfde83
A more useful update
Adding the -D flag gives an important clue:
$ docker -D exec -it 04eef8107e91 sh -x
DEBU[0000] Error resize: Error response from daemon: no such exec
/ #
/ #
/ #
/ #
/ # sdfsdfjskfdDEBU[0006] [hijack] End of stdin
DEBU[0006] [hijack] End of stdout
Also, I've ascertained that what specifically is causing the issue is pressing two letters at the same time (which is quite common when I'm touch typing).
There appears to be a GitHub issue for this here, though that one is for Docker for Windows, not Docker Toolbox.
This issue appears to be a bug with Docker and Windows. See the GitHub issue here.
As a workaround, prefix your docker exec command with winpty, which comes with Git Bash.
e.g.
winpty docker exec -it mycontainer sh
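If you exec into containers often, a small alias in your Git Bash profile saves typing the prefix each time (drun is just an arbitrary name for this sketch):
alias drun='winpty docker exec -it'
drun mycontainer sh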
Check which USER you are logged in as when doing docker exec -it yourContainer sh.
Its .bashrc, .bash_profile or .profile might include a command that would explain why the session abruptly quits.
Also check the logs associated with that container (docker logs --details yourContainer) to see whether the closed session wrote anything to stderr.
Reasons I can think of for a process to be killed in your container include:
PID 1 exiting in the container. This would cause the container to go into a stopped state, but a restart policy could have restarted it. Check your docker container inspect output to see if this is happening (see the sketch after this list). This is the most common cause I've seen.
Out of memory on the OS, where the kernel would then kill processes. View your system logs and dmesg to see if this is happening.
Exceeding the container memory limit, where Docker would kill the container, possibly restarting it depending on your policy. Again, docker container inspect will show this, but the status will have different details.
The process being killed on the host, potentially by a security tool.
Perhaps a selinux or apparmor policy being violated.
Networking issues. I've never encountered it myself, but since Docker has a client/server design, a network disconnect could drop the exec session.
The server itself failing, in which case you'd see various logs in syslog / dmesg indicating problems it can't recover from.
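For the first and third possibilities, a format string like the following (field names taken from the docker container inspect JSON) condenses the relevant state into one line:
docker container inspect -f 'status={{.State.Status}} exit={{.State.ExitCode}} oom={{.State.OOMKilled}} restarts={{.RestartCount}}' yourContainer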

Hyperledger Composer Error Identity has not been registered once issued after restart

I am using Hyperledger Composer 0.16.0 and I want to persist data to a database so that the data survives a restart, so I am using loopback-connector-mongodb.
Context
I have been following this tutorial and I am able to complete it.
I have set up Fabric by issuing the steps below:
cd ${HOME}/fabric-tools/
./stopFabric.sh
./teardownFabric.sh
./downloadFabric.sh
./startFabric.sh
cd ${HOME}/tmt/Profile/
composer card create -p connection.json -u PeerAdmin -c Admin@org1.example.com-cert.pem -k 114aab0e76bf0c78308f89efc4b8c9423e31568da0c340ca187a9b17aa9a4457_sk -r PeerAdmin -r ChannelAdmin
composer card import -f PeerAdmin@fabric-network.card
composer runtime install -c PeerAdmin@fabric-network -n dam-network
cd ../dam-network/
# added model.cto file below
composer archive create -t dir -n .
composer network start -c PeerAdmin@fabric-network -a dam-network@0.0.1.bna -A admin -S adminpw
composer card import -f admin@dam-network.card
composer network ping -c admin@dam-network
chmod -R 777 ${HOME}/.composer
## one-time setup: npm install -g loopback-connector-mongodb
docker run -d --name mongo --network composer_default -p 27017:27017 mongo
cd ${HOME}/tmt/docker
docker build -t myorg/my-composer-rest-server .
#Which is attached below
source envvars.txt
docker run \
-d \
-e COMPOSER_CARD=${COMPOSER_CARD} \
-e COMPOSER_NAMESPACES=${COMPOSER_NAMESPACES} \
-e COMPOSER_AUTHENTICATION=${COMPOSER_AUTHENTICATION} \
-e COMPOSER_MULTIUSER=${COMPOSER_MULTIUSER} \
-e COMPOSER_PROVIDERS="${COMPOSER_PROVIDERS}" \
-e COMPOSER_DATASOURCES="${COMPOSER_DATASOURCES}" \
-v ~/.composer:/home/composer/.composer \
--name rest \
--network composer_default \
-p 3000:3000 \
myorg/my-composer-rest-server
I issue a new identity to an existing participant and create a business card for this identity with the following commands:
composer participant add -c admin@dam-network -d ' {"$class": "com.asset.tmt.User","userId": "tmtadmin","email": "tmtadmin@gmail.com","firstName": "TMT","lastName": "Admin","userGroup": "peerAdmin"} '
composer identity issue -u tmtadmin -a com.asset.tmt.User#tmtadmin -c admin@dam-network
composer card import -f tmtadmin@dam-network.card
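A quick sanity check at this point, reusing the ping pattern from the setup steps above, confirms the new card works before handing it to the REST server:
composer network ping -c tmtadmin@dam-network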
Then, I import that business card via POST /wallet/import and I am able to call different REST API operations. After that, I stop the composer-rest-server, and after a few minutes I start it again with the commands:
cd ${HOME}/fabric-tools/
./startFabric.sh
docker start mongo rest
Issuing the above command does not work, so I am killing the rest container and then running it again with the commands below. Correct me if I am wrong:
docker stop rest
docker rm rest
docker run \
-d \
-e COMPOSER_CARD=${COMPOSER_CARD} \
-e COMPOSER_NAMESPACES=${COMPOSER_NAMESPACES} \
-e COMPOSER_AUTHENTICATION=${COMPOSER_AUTHENTICATION} \
-e COMPOSER_MULTIUSER=${COMPOSER_MULTIUSER} \
-e COMPOSER_PROVIDERS="${COMPOSER_PROVIDERS}" \
-e COMPOSER_DATASOURCES="${COMPOSER_DATASOURCES}" \
-v ~/.composer:/home/composer/.composer \
--name rest \
--network composer_default \
-p 3000:3000 \
myorg/my-composer-rest-server
Then, I authenticate to the REST API using the configured authentication mechanism (in my case the passport-github strategy). If I try to call a REST API operation, it throws an 'A business network card has not been specified' error message, so I import the previous business card via POST /wallet/import, which returns no content, as expected.
Finally, when I try to call another REST API operation I get the following error:
{
  "error": {
    "statusCode": 500,
    "name": "Error",
    "message": "Error trying login and get user Context. Error: Error trying to enroll user or load channel configuration. Error: Enrollment failed with errors [[{\"code\":400,\"message\":\"Authorization failure\"}]]",
    "stack": "Error: Error trying login and get user Context. Error: Error trying to enroll user or load channel configuration. Error: Enrollment failed with errors [[{\"code\":400,\"message\":\"Authorization failure\"}]]\n at client.getUserContext.then.then.catch (/home/composer/.npm-global/lib/node_modules/composer-rest-server/node_modules/composer-connector-hlfv1/lib/hlfconnection.js:305:34)\n at <anonymous>\n at process._tickDomainCallback (internal/process/next_tick.js:228:7)"
  }
}
Expected Behavior
This should work even after a restart.
Actual Behavior
This is the main issue, I don't know why my identity is not being recognized by the REST API if I used it previously to call some operations.
Your Environment
* Version used: 0.16.0
* Environment name and version (e.g. Chrome 39, node.js 5.4): chrome latest and node.js 8.9.1
* Operating System and version (desktop or mobile): Ubuntu desktop
My envvars.txt
COMPOSER_CARD=admin@dam-network
COMPOSER_NAMESPACES=never
COMPOSER_AUTHENTICATION=true
COMPOSER_MULTIUSER=true
COMPOSER_PROVIDERS='{
"github": {
"provider": "github",
"module": "passport-github",
"clientID": "xxxxxxxxxxxxx",
"clientSecret": "xxxxxxxxxxxxxxxxxxxxx",
"authPath": "/auth/github",
"callbackURL": "/auth/github/callback",
"successRedirect": "/",
"failureRedirect": "/"
}
}'
COMPOSER_DATASOURCES='{
"db": {
"name": "db",
"connector": "mongodb",
"host": "10.142.0.10"
}
}'
model.cto
/**
* Model Definitions
*/
namespace com.asset.tmt
participant User identified by userId {
o String userId
o String email
o String firstName
o String lastName
o String userGroup
}
asset Asset identified by assetId {
o String assetId
o String name
o String creationDate
o String expiryDate
}
transaction ChangeAssetValue {
o String expiryDate
o String assetId
o String userId
}
Update:
After following what @R Thatcher suggested: when I issue the command docker-compose start, it starts the Fabric network but not the business network that was deployed earlier.
tmt@blockchain:~/tmt/dam-network$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
8a6833bd7d3a myorg/my-composer-rest-server "pm2-docker compos..." 17 hours ago Exited (0) 10 hours ago rest
9bffab63a048 mongo "docker-entrypoint..." 17 hours ago Exited (0) 10 hours ago mongo
5bafb4dd5662 dev-peer0.org1.example.com-dam-network-0.16.0-4a77c4c8eabde9e440464f91b1655a48c6c5e0dac908e36a7b437034152bf141 "chaincode -peer.a..." 17 hours ago Exited (0) 4 minutes ago dev-peer0.org1.example.com-dam-network-0.16.0
4bfc67f13811 hyperledger/fabric-peer:x86_64-1.0.4 "peer node start -..." 17 hours ago Up 6 minutes 0.0.0.0:7051->7051/tcp, 0.0.0.0:7053->7053/tcp peer0.org1.example.com
762a42bc0eb7 hyperledger/fabric-orderer:x86_64-1.0.4 "orderer" 17 hours ago Up 6 minutes 0.0.0.0:7050->7050/tcp orderer.example.com
49c925a8cc43 hyperledger/fabric-couchdb:x86_64-1.0.4 "tini -- /docker-e..." 17 hours ago Up 6 minutes 4369/tcp, 9100/tcp, 0.0.0.0:5984->5984/tcp couchdb
cee51891308f hyperledger/fabric-ca:x86_64-1.0.4 "sh -c 'fabric-ca-..." 17 hours ago Up 6 minutes 0.0.0.0:7054->7054/tcp ca.org1.example.com
What is the correct way to bring it up?
1) When I try to start the network by issuing the command below:
tmt@blockchain:~/tmt/dam-network$ composer network start -c PeerAdmin@fabric-network -a dam-network@0.0.1.bna -A admin -S adminpw
Starting business network from archive: dam-network@0.0.1.bna
Business network definition:
Identifier: dam-network@0.0.1
Description: Blockchain dam integration
Processing these Network Admins:
userName: admin
✖ Starting business network definition. This may take a minute...
Error: Error trying to instantiate composer runtime. Error: No valid responses from any peers.
Response from attempted peer comms was an error: Error: chaincode error (status: 500, message: chaincode exists dam-network)
Command failed
2) When I try to start the Docker container manually by issuing docker start <container>, I still see that it is not up.
The startFabric.sh script does more than just start the Fabric: it actually removes your containers and recreates new containers from the Docker images. The impact of this is that you lose all your data and your business network from the Fabric.
If you want to stop and start your Fabric after you have created it, you need to change to the directory containing the docker-compose.yml file (in my case /home/rob/fabric-tools/fabric-scripts/hlfv1/composer).
Run docker-compose stop to stop the Fabric containers and docker-compose start to restart where you left off, as in the sketch below. It is necessary to be in the correct folder before using the docker-compose command.
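A minimal sketch of that sequence, assuming the default fabric-tools layout (adjust the path to your own install):
cd ~/fabric-tools/fabric-scripts/hlfv1/composer
docker-compose stop
# ...later, to resume where you left off:
docker-compose start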

Could not find entity for dockerfile/elasticsearch

I'm new to Docker and I'm having an issue.
I'm trying to start the pblittle/docker-logstash container using this command:
sudo docker run -d -e LOGSTASH_CONFIG_URL=pathtomyconfig --link dockerfile/elasticsearch:es -p 9292:9292 -p 9200:9200 pblittle/docker-logstash
and I am getting the following error:
Error: Could not find entity for dockerfile/elasticsearch
If I do sudo docker ps, I get the following:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0021bd567d95 dockerfile/elasticsearch:latest /elasticsearch/bin/e 7 minutes ago Up 7 minutes 0.0.0.0:9200->9200/tcp, 0.0.0.0:9300->9300/tcp cranky_meitner2
e900cbb758a5 dockerfile/redis:latest redis-server /etc/re 12 hours ago Up 7 minutes 0.0.0.0:6379->6379/tcp redis
592a4f8d97f2 dockerfile/redis:latest redis-server /etc/re 12 hours ago Up 7 minutes 6379/tcp cocky_thompson4
What the devil am I doing wrong? How can I work out what is going on?
You got the value for the --link option wrong. The --link option of the docker run command takes a container name or id as its value, followed by :, followed by an alias (which can be whatever you want).
What you did wrong with --link dockerfile/elasticsearch:es was to pass in a Docker image name instead of a Docker container name/id.
First, you need a running container for Elasticsearch:
sudo docker run -d -p 9300:9300 --name myelasticsearch dockerfile/elasticsearch
Then you can run your logstash container linking to the container named myelasticsearch:
sudo docker run -d -e LOGSTASH_CONFIG_URL=pathtomyconfig --link myelasticsearch:es -p 9292:9292 -p 9200:9200 pblittle/docker-logstash
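To confirm the link took effect, you can inspect the environment variables that legacy links inject into the logstash container; with the alias es, their names start with ES_ (the container id placeholder below is yours to fill in):
sudo docker exec <logstash-container-id> env | grep '^ES_'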

How to run cloud-init manually?

I'm writing a CloudFormation template and I'm trying to debug the user-data script I provide in the template. How can I run cloud-init manually and make it perform the same actions it performs when starting a new instance?
You can just run it like this:
/usr/bin/cloud-init -d init
This runs the cloud-init setup with the initial modules. (The -d option is for debug.) If you want to run all the modules, you have to run:
/usr/bin/cloud-init -d modules
Keep in mind that the second time you run these commands they don't do much, since cloud-init has already run at boot time. To force a full run after boot, clear the cloud-init state first:
( cd /var/lib/cloud/ && sudo rm -rf * )
In older versions the equivalent of cloud-init init is:
/usr/bin/cloud-init start
You may also find this question useful although it applies to the older versions of cloud-init: How do I make cloud-init startup scripts run every time my EC2 instance boots?
The cloud-init documentation here only gives examples. It doesn't explain the command-line options or each of the modules, so you have to play around with different values in the config to get your desired results. Of course, you can also look at the code.
rm -f /var/log/cloud-init.log \
&& rm -Rf /var/lib/cloud/* \
&& cloud-init -d init \
&& cloud-init -d modules --mode final
Kudos to @Rico, and also: if you want to run a single module, either for testing or because your distro doesn't enable a module by default (hi Precise!), you can run:
/usr/bin/cloud-init -d single -n <module-name>
For example, when my distro doesn't run write_files by default (like a lot of old distros), I use this at the top of runcmd:
runcmd:
- /usr/bin/cloud-init -d single -n write-files
[I know it's not really an answer to the OP, but when looking to solve my problem this question was one of the top results, so I figure other people might find this useful]
As documented, you can simply run
sudo cloud-init clean
and add --logs to remove the log files as well.
It will redo everything when you reboot.
On most Linux distros (including CentOS and Ubuntu), you can restart the cloud-init service using systemctl:
systemctl restart cloud-init
And then check the output of the journal to see the results:
journalctl -f -u cloud-init
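On recent cloud-init releases, you can also ask cloud-init itself whether the boot run has finished and how it ended:
cloud-init status --long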
On Amazon Linux 2, we figured out that cloud-init is run after initial launch and then removed. This caused a problem when we built custom AMIs with Packer and then wanted to launch them with user-data scripts. Here is the Packer shell provisioner (HCL2 format) we use at the end of a build to reset cloud-init:
provisioner "shell" {
inline = [
"echo 'Waiting for cloud-init'; while [ ! -f /var/lib/cloud/instance/boot-finished ]; do sleep 1; done; echo 'Done'",
"sudo yum install cloud-init -y",
"sudo cloud-init clean",
]
}
AMIs built with templates that have this will launch with cloud-init support.
