mc: <error> while trying to run bitnami/minio-client, the container exits within seconds - docker-machine

docker run -it --name mc3 dockerhub:5000/bitnami/minio-client
08:05:31.13
08:05:31.14 Welcome to the Bitnami minio-client container
08:05:31.14 Subscribe to project updates by watching https://github.com/bitnami/containers
08:05:31.14 Submit issues and feature requests at https://github.com/bitnami/containers/issues
08:05:31.15
08:05:31.15 INFO  ==> ** Starting MinIO Client setup **
08:05:31.16 INFO  ==> ** MinIO Client setup finished! ** mc: Configuration written to /.mc/config.json. Please update your access credentials.
mc: Successfully created /.mc/share.
mc: Initialized share uploads /.mc/share/uploads.json file.
mc: Initialized share downloads /.mc/share/downloads.json file.
mc: /opt/bitnami/scripts/minio-client/run.sh is not a recognized command. Get help using --help flag.
dockerhub:5000/bitnami/minio-client - name of the image
It would be great if someone could help me figure out how to solve this issue, as I've been stuck on it for more than two days.

MinIO has two components:
Server
Client
The Server runs continuously, as it should, so it can serve the data.
The client, on the other hand, which you are trying to run, is used to perform operations on a running server. So it's expected for it to run and then immediately exit: it's not a daemon and isn't meant to run forever.
What you want to do is to first launch the server container in background (using -d flag)
$ docker run -d --name minio-server \
--env MINIO_ROOT_USER="minio-root-user" \
--env MINIO_ROOT_PASSWORD="minio-root-password" \
minio/minio:latest
Then launch the client container to perform some operation, for example creating a bucket. It performs the operation on the server and exits immediately, after which the client container is cleaned up (thanks to the --rm flag).
$ docker run --rm --name minio-client \
--env MINIO_SERVER_HOST="minio-server" \
--env MINIO_SERVER_ACCESS_KEY="minio-root-user" \
--env MINIO_SERVER_SECRET_KEY="minio-root-password" \
minio/mc \
mb minio/my-bucket
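One detail the commands above leave implicit: MINIO_SERVER_HOST="minio-server" can only resolve if both containers are attached to the same user-defined Docker network. A sketch using the Bitnami images (whose setup scripts read the MINIO_SERVER_* variables); the network name minio-net is arbitrary:

```shell
# Create a user-defined network so the client can resolve the server by name.
docker network create minio-net

# Server: runs continuously, attached to the shared network.
docker run -d --name minio-server \
  --network minio-net \
  --env MINIO_ROOT_USER="minio-root-user" \
  --env MINIO_ROOT_PASSWORD="minio-root-password" \
  bitnami/minio:latest

# Client: creates a bucket, then exits and is removed (--rm), as expected.
docker run --rm --name minio-client \
  --network minio-net \
  --env MINIO_SERVER_HOST="minio-server" \
  --env MINIO_SERVER_ACCESS_KEY="minio-root-user" \
  --env MINIO_SERVER_SECRET_KEY="minio-root-password" \
  bitnami/minio-client mb minio/my-bucket
```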
For more information, please check out the docs:
Server: https://min.io/docs/minio/container/operations/installation.html
Client: https://min.io/docs/minio/linux/reference/minio-mc.html

Related

Passing Java APM Agent settings in Docker

I monitor my jar using the Elastic APM Agent; I run these commands manually:
java -javaagent:../infrastructure/agent/apm-agent.jar \
-Delastic.apm.service_name=server \
-Delastic.apm.server_urls=http://${APM_HOST}:8200 \
-Delastic.apm.application_packages=package.com \
-jar ./target/server-0.0.1-SNAPSHOT.jar &
Now I want to pass these parameters using docker run. I create the image and try to pass these settings with this command, but the application does not start:
docker run --name app -e CATALINA_OPTS='-Dspring.config.location=/usr/local/tomcat/application-recette.properties,/usr/local/tomcat/application.yml' \
-e CATALINA_OPTS='-Delastic.apm.service_name=server' \
-e CATALINA_OPTS='-Delastic.apm.server_urls=http://10.128.0.4:8200' \
-e CATALINA_OPTS='-Delastic.apm.application_packages=package.com' \
-d -p 9000:8080 image:v1
Any idea how to resolve this?
Thanks
There are actually many reasons why your app might not start, depending on how you set up and configured your ELK stack, but the following works fine for me:
I ship application.jar and apm-agent.jar via the Dockerfile and run them inside the container:
FROM openjdk:8-jre-alpine
COPY javaProjects/test-apm/target/test-apm-0.0.1-SNAPSHOT.jar /app.jar
COPY elastic-apm-agent-1.19.0.jar /apm-agent.jar
CMD ["/usr/bin/java", "-javaagent:/apm-agent.jar", "-Delastic.apm.service_name=my-cool-service", "-Delastic.apm.application_packages=main.java", "-Delastic.apm.server_urls=http://localhost:8200", "-jar", "/app.jar"]
Create an image from this Dockerfile:
docker build -t test-apm:latest ./
Run the created image:
docker run --network host -p 8080:8080 test-apm:latest
Note that my apm-server and ELK stack were running on my host machine, hence --network host (with host networking, the -p flag is ignored and the app is reachable on the host's own ports).
I think if you do the same, with small changes to match your environment, it should work fine.
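One likely culprit in the question's docker run command, worth noting: repeated -e CATALINA_OPTS=... flags do not accumulate; each one overwrites the previous, so only the last -D property ever reaches the JVM. A minimal sketch of building the whole option string once and passing it with a single -e (values taken from the question):

```shell
# Build CATALINA_OPTS incrementally; only one -e flag is needed at the end.
CATALINA_OPTS='-Dspring.config.location=/usr/local/tomcat/application-recette.properties,/usr/local/tomcat/application.yml'
CATALINA_OPTS="$CATALINA_OPTS -Delastic.apm.service_name=server"
CATALINA_OPTS="$CATALINA_OPTS -Delastic.apm.server_urls=http://10.128.0.4:8200"
CATALINA_OPTS="$CATALINA_OPTS -Delastic.apm.application_packages=package.com"
echo "$CATALINA_OPTS"

# Then pass the whole string once:
# docker run --name app -e CATALINA_OPTS="$CATALINA_OPTS" -d -p 9000:8080 image:v1
```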

Error "Docker: invalid publish opts format" running Graphviz docker container on macOS

I'm completely new to Docker and am using it for the first time.
I have installed Docker Desktop for macOS and run the 'hello-world' container successfully. I am now trying to run 'omerio/graphviz-server' from https://hub.docker.com/r/omerio/graphviz-server (which is what I really want Docker for). Although the 'docker pull omerio/graphviz-server' command completes successfully:
devops$ docker pull omerio/graphviz-server
Using default tag: latest
latest: Pulling from omerio/graphviz-server
863735b9fd15: Pull complete
4fbaa2f403df: Pull complete
44be94a95984: Pull complete
a3ed95caeb02: Pull complete
ae092b5d3a08: Pull complete
d0edb8269c6a: Pull complete
Digest: sha256:02cd3e2355526a927e951a0e24d63231a79b192d4716e82999ff80e0893c4adc
Status: Downloaded newer image for omerio/graphviz-server:latest
the command to start the container (given on https://hub.docker.com/r/omerio/graphviz-server), 'docker run -d -p : omerio/graphviz-server', gives me the error message:
devops$ docker run -d -p : omerio/graphviz-server
docker: invalid publish opts format (should be name=value but got ':').
See 'docker run --help'.
Searching for this error message returns no information at all. I see that the container in question was last updated over 3 years ago - could it be an old format that Docker no longer supports?
The -p option of the docker run command binds ports between the host and the container (see the docs), and its usage is most often the following:
docker run <other options> \
-p <port on the host>:<port in the container> \
<my_image> <args>
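To make the expected format concrete, here is a minimal shell sketch (ports 8081/8080 are just example values) of how the <host>:<container> pair is read; a bare ":" supplies neither port, which is exactly what "invalid publish opts format" complains about:

```shell
# A publish spec must have a port on each side of the colon.
mapping="8081:8080"
host_port="${mapping%%:*}"       # 8081: the port you browse to on the host
container_port="${mapping##*:}"  # 8080: the port the app listens on inside
echo "host=$host_port container=$container_port"
```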
As for your example: it seems that running the image requires an argument (the port inside the container). Let's choose 8080, meaning the application inside the container will listen on port 8080.
If you want to access it directly from your host (via localhost), bind that container port 8080 to any available port on your host (say, 8081), like this:
docker run \
-p 8081:8080 \
omerio/graphviz-server 8080
You should now be able to access the application (port 8080 of the application running in the container) from your host via localhost:8081.

RabbitMQ console returned 431

I start RabbitMQ in Docker with this command:
docker run -d --hostname my-rabbit --name rabbit-fox -p 5672:5672 -p 8090:15672 rabbitmq:3-management
It runs fine and I can log in to the console, but later, in the Chrome browser, I get an HTTP 431 response and can no longer use the console.
Clearing the browser's cache and memory did the trick in my case.
After facing the same issue, I tried these steps: I re-ran the container, and at one point even re-installed the RabbitMQ server image, without any result.
The problem was solved simply by clearing the browser's cache and memory. (HTTP 431 means "Request Header Fields Too Large", which typically comes from oversized cookies or other stale header data.)

Volume trouble with GitLab docker image on Windows

I'm trying to run the official image gitlab/gitlab-ce:latest with Docker on my Windows 10.
First I tried to run it like below, and it worked:
docker run --detach \
--hostname gitlab.example.com \
--publish 443:443 --publish 8080:80 --publish 22:22 \
--name gitlab \
--restart always \
--volume /srv/gitlab/config:/etc/gitlab \
--volume /srv/gitlab/logs:/var/log/gitlab \
--volume /srv/gitlab/data:/var/opt/gitlab \
gitlab/gitlab-ce:latest
But the problem was that changes in the container were not being saved. I found that the issue is with the volumes: this kind of mount only works inside the Boot2Docker VM. So I shared my C: drive from the host (Windows) in the Docker settings (desktop application) and tested it - the Windows folder is shared, and I can see its files in a test container.
Now I'm trying to run the GitLab image like this:
docker run --detach \
--hostname gitlab.example.com \
--publish 443:443 --publish 8080:80 --publish 22:22 \
--name gitlab \
--restart always \
--volume C:\Users\Public\Gitlab\config:/etc/gitlab \
--volume C:\Users\Public\Gitlab\logs:/var/log/gitlab \
--volume C:\Users\Public\Gitlab\data:/var/opt/gitlab \
gitlab/gitlab-ce:latest
And got this error on container:
# Logfile created on 2017-06-21 16:33:44 +0000 by logger.rb/56438
[2017-06-21T16:33:45+00:00] INFO: Started chef-zero at chefzero://localhost:8889 with repository at /opt/gitlab/embedded
One version per cookbook
[2017-06-21T16:33:45+00:00] INFO: Forking chef instance to converge...
[2017-06-21T16:33:45+00:00] INFO: *** Chef 12.12.15 ***
[2017-06-21T16:33:45+00:00] INFO: Platform: x86_64-linux
[2017-06-21T16:33:45+00:00] INFO: Chef-client pid: 26
[2017-06-21T16:33:45+00:00] WARN: unable to detect ipaddress
[2017-06-21T16:33:46+00:00] INFO: HTTP Request Returned 404 Not Found: Object not found: chefzero://localhost:8889/nodes/9ca249ba6250
[2017-06-21T16:33:46+00:00] INFO: Setting the run_list to ["recipe[gitlab]"] from CLI options
[2017-06-21T16:33:46+00:00] INFO: Run List is [recipe[gitlab]]
[2017-06-21T16:33:46+00:00] INFO: Run List expands to [gitlab]
[2017-06-21T16:33:46+00:00] INFO: Starting Chef Run for 9ca249ba6250
[2017-06-21T16:33:46+00:00] INFO: Running start handlers
[2017-06-21T16:33:46+00:00] INFO: Start handlers complete.
[2017-06-21T16:33:46+00:00] INFO: HTTP Request Returned 404 Not Found: Object not found:
[2017-06-21T16:33:47+00:00] INFO: Loading cookbooks [gitlab#0.0.1, runit#0.14.2, package#0.0.0]
[2017-06-21T16:33:47+00:00] INFO: directory[/etc/gitlab] mode changed to 775
[2017-06-21T16:33:47+00:00] WARN: Skipped selecting an init system because it looks like we are running in a container
[2017-06-21T16:33:48+00:00] INFO: template[/var/opt/gitlab/.gitconfig] owner changed to 998
[2017-06-21T16:33:48+00:00] INFO: template[/var/opt/gitlab/.gitconfig] group changed to 998
[2017-06-21T16:33:48+00:00] INFO: template[/var/opt/gitlab/.gitconfig] mode changed to 644
[2017-06-21T16:33:48+00:00] INFO: Running queued delayed notifications before re-raising exception
[2017-06-21T16:33:48+00:00] ERROR: Running exception handlers
[2017-06-21T16:33:48+00:00] ERROR: Exception handlers complete
[2017-06-21T16:33:48+00:00] FATAL: Stacktrace dumped to /opt/gitlab/embedded/cookbooks/cache/chef-stacktrace.out
[2017-06-21T16:33:48+00:00] FATAL: Please provide the contents of the stacktrace.out file if you file a bug report
[2017-06-21T16:33:48+00:00] ERROR: ruby_block[directory resource: /var/opt/gitlab/git-data] (gitlab::gitlab-shell line 26) had an error: Mixlib::ShellOut::ShellCommandFailed: Failed asserting that ownership of "/var/opt/gitlab/git-data" was git
---- Begin output of set -x && [ "$(stat --printf='%U' $(readlink -f /var/opt/gitlab/git-data))" = 'git' ] ----
STDOUT:
STDERR: + readlink -f /var/opt/gitlab/git-data
+ stat --printf=%U /var/opt/gitlab/git-data
+ [ root = git ]
---- End output of set -x && [ "$(stat --printf='%U' $(readlink -f /var/opt/gitlab/git-data))" = 'git' ] ----
Ran set -x && [ "$(stat --printf='%U' $(readlink -f /var/opt/gitlab/git-data))" = 'git' ] returned 1
[2017-06-21T16:33:48+00:00] FATAL: Chef::Exceptions::ChildConvergeError: Chef run process exited unsuccessfully (exit code 1)
Please help, what did I do wrong?
I think I have a solution for running GitLab from Docker on Windows 10; it appears to be working for me so far.
For all of the Powershell, you'll need an elevated prompt.
Init
This first part gets the folders/volumes set up, then creates and starts the GitLab container. (Remember that Docker Desktop must be running and that you have told it to share the C drive.)
mkdir c:\GitlabConfig
mkdir c:\GitlabConfig\backups
docker volume create gitlab-logs
docker volume create gitlab-data
docker run --detach `
--name gitlab `
--restart always `
--hostname gitlab.local `
--publish 4443:443 --publish 4480:80 --publish 8222:22 `
--volume C:\GitlabConfig:/etc/gitlab `
--volume gitlab-logs:/var/log/gitlab `
--volume gitlab-data:/var/opt/gitlab `
gitlab/gitlab-ce
Wait a few minutes for Gitlab to finish initializing; just keep refreshing "localhost:4480/" in a browser until the web page comes up.
You can now use Gitlab however you were planning.
Backups
Edit the c:\GitlabConfig\gitlab.rb file. Find the two settings indicated, uncomment them, and set them like so (this is what you'll end up with):
gitlab_rails['manage_backup_path'] = false
gitlab_rails['backup_path'] = "/etc/gitlab/backups"
Note that the "backups" folder is the same one created on the host back at the beginning, just how it's known inside the container.
Next, restart the container.
docker restart gitlab
Now you can back up GitLab, and the backup will show up on the Windows host:
docker exec -it gitlab gitlab-rake gitlab:backup:create
You will see a c:\GitlabConfig\backups\{prefix}_gitlab_backup.tar file in Windows after the process completes.
Restores
Once everything's ready for a restore, you can just run:
docker exec -it gitlab gitlab-rake gitlab:backup:restore BACKUP={prefix}
where {prefix} is everything that comes before "_gitlab_backup.tar" in the filename you want to restore from. The restore functionality looks in the folder you configured earlier in gitlab.rb.
With this approach, I've been able to set up a running GitLab container on Windows 10. I can back up that main c:\GitlabConfig folder using any method I like.
Additionally, you can nuke the container and the two Docker volumes and start from scratch with just that folder's contents. If you start the new container pointing at your saved config folder, it'll have most of your stuff right out of the gate. After it finishes booting, restore a backup and you'll be right back where you were. The docs presently indicate that there are problems with restoring when GitLab is running in a container, but I didn't hit any; if the restore runs into trouble, it should tell you what to fix before trying again.
The Hyper-V file-sharing mechanism does not support Unix-style file permissions. Because of this, the application hits an error when it tries to assert that ownership is what it expects. My guess is that it attempts a chown, followed by the shell check shown in your output:
[ "$(stat --printf='%U' $(readlink -f /var/opt/gitlab/git-data))" = 'git' ]
The error message indicates that it expects the owner to be 'git', but gets 'root' instead.
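The failing check can be reproduced on any Linux shell; a sketch using a temporary directory in place of /var/opt/gitlab/git-data:

```shell
# Same assertion Chef runs: resolve the path, print its owner, compare to 'git'.
dir=$(mktemp -d)   # stand-in for /var/opt/gitlab/git-data
owner=$(stat --printf='%U' "$(readlink -f "$dir")")
if [ "$owner" = 'git' ]; then
    echo "ownership ok"
else
    echo "owned by $owner, not git"   # the case the GitLab recipe aborts on
fi
rmdir "$dir"
```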
You have two main options. You could try to configure or change the process running inside your container to handle the situation. That might be as simple as changing the configuration so that it expects 'root' instead of 'git', or it may end up being much more involved. Your mileage may vary.
Your other option is to use a file system that properly supports Unix-style permissions and ownership. That means switching to a named volume, or back to a host volume inside the Hyper-V VM that Docker for Windows sets up. Either way, your files will live in the VM instead of on your host's file system.
I would recommend using a named volume in this situation. Just keep in mind that if you reset your Docker For Windows VM, any data in a named volume will be reset as well.

How do I run the Hetionet v1.0 docker container?

I'm trying to run the Hetionet v1.0 docker container mentioned in this SO post.
I've set up a DigitalOcean droplet with Docker
I ran docker pull dhimmel/hetionet and it worked
Now I run docker run dhimmel/hetionet and the following happens (and never returns to the interactive shell prompt).
If that completed successfully, I think the last thing I'm supposed to do is run sh ~/run-docker.sh. Furthermore, nothing is live at my droplet's ip_address:7474.
The error in the screenshot above looks a lot like it could be related to some redundant @Path("/") annotation, as described in a comment on this SO post, buried in the docker container, but I'm not sure.
Is the output from running docker run dhimmel/hetionet supposed to hang my shell? I'm running a 2 GB Memory / 40 GB Disk Droplet on Ubuntu 16.04 with Docker 1.12.5.
Thanks for your interest in the Hetionet Docker.
The output in step 3 is expected. It looks like the Docker container successfully launched, downloaded the Hetionet database, and started the Neo4j server. I'll look into fixing the warnings, but they're not errors; Neo4j is still launching.
For production, we use a more advanced Docker run command. Depending on your use case, you may want to use the development docker run command:
docker run \
--publish=7474:7474 \
--publish=7687:7687 \
--volume=$HOME/neo4j/hetionet-data:/data \
--volume=$HOME/neo4j/hetionet-logs:/var/lib/neo4j/logs \
dhimmel/hetionet
Both the production and development command map ports. This will make it so the Neo4j server running inside your Docker container is available at http://localhost:7474/. This is most likely what you want. If you're doing this on DigitalOcean, you would replace http://localhost with the IP address of your droplet.
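If the foreground run tying up your shell is the concern, the same development command can be detached; a sketch, with hetionet as an assumed container name:

```shell
# Detached variant of the development command above: --detach returns the
# prompt immediately while Neo4j keeps starting in the background.
docker run --detach --name hetionet \
  --publish=7474:7474 \
  --publish=7687:7687 \
  --volume=$HOME/neo4j/hetionet-data:/data \
  --volume=$HOME/neo4j/hetionet-logs:/var/lib/neo4j/logs \
  dhimmel/hetionet

# Follow the startup logs until the server reports it is ready.
docker logs --follow hetionet
```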
For an interactive shell session in a dhimmel/hetionet container, you can use:
docker run --interactive --tty dhimmel/hetionet bash
However, that command does not launch the Neo4j server; it just lets you explore the image.
Does this clear things up?
