Starting mongod service in Windows

I am trying to start the mongod service with authentication. It asks me to give the database path, so I pass a config parameter as follows, keeping in mind that I am already in the mongod service directory C:\Program Files\MongoDB\Server\4.2\bin:
mongod --auth --config "C:\Program Files\MongoDB\Server\4.2\bin\mongod.cfg"
The configuration file has the dbPath, but cmd gets stuck.
Here is the configuration file:
# mongod.conf
# for documentation of all options, see:
# http://docs.mongodb.org/manual/reference/configuration-options/

# Where and how to store data.
storage:
  dbPath: C:\Program Files\MongoDB\Server\4.2\data
  journal:
    enabled: true
#  engine:
#  mmapv1:
#  wiredTiger:

# where to write logging data.
systemLog:
  destination: file
  logAppend: true
  path: C:\Program Files\MongoDB\Server\4.2\log\mongod.log

# network interfaces
net:
  port: 27017
  bindIp: 127.0.0.1

#processManagement:
#security:
#operationProfiling:
#replication:
#sharding:

## Enterprise-Only Options:
#auditLog:
#snmp:

Add these to your config file:
processManagement:
  windowsService:
    serviceName: MongoDB
    displayName: MongoDB
    description: MongoDB Server - Standalone DB
security:
  authorization: enabled
Then you can install the service with this command:
mongod.exe --config "C:\Program Files\MongoDB\Server\4.2\bin\mongod.cfg" --install
After you have created the service, you can start it either from the "Services" console or from the command line with
net start MongoDB
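Since authorization is now enabled, you will also need at least one user to log in with. A minimal sketch (user name and password are placeholders): connect once from the same machine via the localhost exception and create an admin user.
mongo
use admin
db.createUser({ user: "admin", pwd: "changeMe", roles: [ { role: "root", db: "admin" } ] })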
In order to stop/remove a service (entirely!) run
net stop MongoDB
sc config MongoDB start=disabled
mongod.exe --config "C:\Program Files\MongoDB\Server\4.2\bin\mongod.cfg" --remove
rmdir /S "C:\Program Files\MongoDB\Server\4.2\data"

Related

Cannot open Minio in browser after dockerizing it in Spring Boot App

I have a problem opening MinIO in the browser. I have just created a Spring Boot app that uses it.
Here is my application.yaml file shown below.
server:
  port: 8085
spring:
  application:
    name: springboot-minio
minio:
  endpoint: http://127.0.0.1:9000
  port: 9000
  accessKey: minioadmin   # Login Account
  secretKey: minioadmin   # Login Password
  secure: false
  bucket-name: commons    # Bucket Name
  image-size: 10485760    # Maximum size of picture file
  file-size: 1073741824   # Maximum file size
Here is my docker-compose.yaml file shown below.
version: '3.8'
services:
  minio:
    image: minio/minio:latest
    container_name: minio
    environment:
      MINIO_ROOT_USER: "minioadmin"
      MINIO_ROOT_PASSWORD: "minioadmin"
    volumes:
      - ./data:/data
    ports:
      - 9000:9000
      - 9001:9001
I run it with these commands shown below.
1 ) docker-compose up -d
2 ) docker ps -a
3 ) docker run minio/minio:latest
Here is the result shown below.
C:\Users\host\IdeaProjects\SpringBootMinio>docker run minio/minio:latest
NAME:
  minio - High Performance Object Storage

DESCRIPTION:
  Build high performance data infrastructure for machine learning, analytics and application data workloads with MinIO

USAGE:
  minio [FLAGS] COMMAND [ARGS...]

COMMANDS:
  server   start object storage server
  gateway  start object storage gateway

FLAGS:
  --certs-dir value, -S value  path to certs directory (default: "/root/.minio/certs")
  --quiet                      disable startup information
  --anonymous                  hide sensitive information from logging
  --json                       output server logs and startup information in json format
  --help, -h                   show help
  --version, -v                print the version

VERSION:
  RELEASE.2022-01-08T03-11-54Z
When I enter 127.0.0.1:9000 in the browser, I can't open the MinIO login page.
How can I fix my issue?
The MinIO documentation includes a MinIO Docker Quickstart Guide that has some recipes for starting the container. The important thing here is that you cannot just docker run minio/minio; it needs a command to run, probably server. This also needs to be translated into your Compose setup.
The first example on that page breaks down like so:
docker run \
  -p 9000:9000 -p 9001:9001 \             # publish ports
  -e "MINIO_ROOT_USER=..." \              # set environment variables
  -e "MINIO_ROOT_PASSWORD=..." \
  quay.io/minio/minio \                   # image name
  server /data --console-address ":9001"  # command to run
That final command is important. In your example where you just docker run the image and get a help message, it's because you omitted the command. In the Compose setup you also don't have a command: line; if you look at docker-compose ps I expect you'll see the container is exited, and docker-compose logs minio will probably show the same help message.
You can include that command in your Compose setup with command::
version: '3.8'
services:
  minio:
    image: minio/minio:latest
    environment:
      MINIO_ROOT_USER: "..."
      MINIO_ROOT_PASSWORD: "..."
    volumes:
      - ./data:/data
    ports:
      - 9000:9000
      - 9001:9001
    command: server /data --console-address :9001  # <-- add this
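With the command in place, a quick way to verify the fix (a sketch, assuming the Compose file above) is to recreate the container and check its logs; you should now see the server's startup banner instead of the help text, and the console should answer on port 9001:
docker-compose up -d --force-recreate minio
docker-compose logs minio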

why does dotnet publish command work on Windows git bash terminal but not in Dockerfile?

I have a file named Dockerfile-dev with this content:
#See https://aka.ms/containerfastmode to understand how Visual Studio uses this Dockerfile to build your images for faster debugging.
FROM mcr.microsoft.com/dotnet/core/sdk:3.1.102 AS build-env
WORKDIR /app
COPY . ./
RUN export DOTNET_SYSTEM_NET_HTTP_USESOCKETSHTTPHANDLER=0
# RUN dotnet restore
RUN dotnet publish -c Release -o out
# Build runtime image
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1.2
WORKDIR /app
COPY --from=build-env /app/out .
ENTRYPOINT ["dotnet", "AspNetCore.dll"]
Running docker build -f Dockerfile-dev . fails on the dotnet publish command:
Step 5/9 : RUN dotnet publish -c Release -o out
---> Running in c20e3f3e8110
Microsoft (R) Build Engine version 16.4.0+e901037fe for .NET Core
Copyright (C) Microsoft Corporation. All rights reserved.
/usr/share/dotnet/sdk/3.1.102/NuGet.targets(123,5): error : Unable to load the service index for source https://api.nuget.org/v3/index.json. [/app/AspNetCore.sln]
/usr/share/dotnet/sdk/3.1.102/NuGet.targets(123,5): error : The SSL connection could not be established, see inner exception. [/app/AspNetCore.sln]
/usr/share/dotnet/sdk/3.1.102/NuGet.targets(123,5): error : The remote certificate is invalid according to the validation procedure. [/app/AspNetCore.sln]
The command '/bin/sh -c dotnet publish -c Release -o out' returned a non-zero code: 1
However, when I run dotnet publish -c Release -o out directly from the Git Bash terminal, it completes successfully. What could be causing this? Is there any additional command I need to include in the Dockerfile to address permissions?
Here's the output from running docker info if it helps reveal anything:
Client:
Debug Mode: false
Server:
Containers: 7
Running: 0
Paused: 0
Stopped: 7
Images: 35
Server Version: 19.03.12
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 7ad184331fa3e55e52b890ea95e65ba581ae3429
runc version: dc9208a3303feef5b3839f4323d9beb36df0a9dd
init version: fec3683
Security Options:
seccomp
Profile: default
Kernel Version: 4.19.76-linuxkit
Operating System: Docker Desktop
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 1.945GiB
Name: docker-desktop
ID: YSLA:6VCF:UOAI:D5AI:QWRE:XE55:IHAU:347O:VOOL:ISH6:WO3G:UEZH
Docker Root Dir: /var/lib/docker
Debug Mode: true
File Descriptors: 40
Goroutines: 52
System Time: 2020-08-12T01:31:50.272361169Z
EventsListeners: 3
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
Product License: Community Engine
I don't know what the issue is, but I might be able to help you solve it.
Here are the steps I would go through.
1 ) Comment out everything in your Dockerfile from RUN dotnet publish... onwards.
2 ) Build the image: docker build -t temp .
3 ) Run the container in interactive mode: docker run -it temp /bin/bash. You're now in the container and can test your commands in real time.
4 ) Run dotnet publish -c Release -o out in the container. What happens? As someone else mentioned, it could be proxy related. Try setting the http_proxy and https_proxy environment variables, e.g. export http_proxy=http://example.role:1234, and run dotnet publish -c Release -o out again.
If the above doesn't work, at least you're in a container terminal where the command is failing, so you can experiment a bit...
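A consolidated sketch of that loop (the proxy address is the same placeholder used above):
docker build -t temp .                        # Dockerfile with the failing RUN lines commented out
docker run -it temp /bin/bash
# inside the container:
export http_proxy=http://example.role:1234    # placeholder proxy
export https_proxy=http://example.role:1234
dotnet publish -c Release -o out              # re-run the failing command interactively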
If it is a proxy issue, you can add the following to your Dockerfile:
ARG HTTP_PROXY
ENV http_proxy $HTTP_PROXY
ENV https_proxy $HTTP_PROXY
ENV no_proxy localhost,127.0.0.1
and then pass --build-arg HTTP_PROXY=http://example.role:1234 when you build your image.
This could be related to docker/for-win issue 4858 which mentions:
I used wrong certificate. The certificate to be used is the certificate authority certificate (root certificate) but I used the certificate issued to the system.
I generated the root certificate from the chain and imported to container.
Resolution: the CA certificate is named ca-cert.crt. Added the following lines to the Dockerfile:
COPY ca-cert.crt /usr/local/share/ca-certificates/ca-cert.crt
RUN chmod 644 /usr/local/share/ca-certificates/ca-cert.crt && update-ca-certificates
(similar to this answer)
You can see here examples using volumes and secrets, but you might not need them in your case.
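To confirm the certificate actually took effect before the publish step, a quick probe can be added to the build stage (a sketch; it assumes curl is present in the SDK image, which is not guaranteed):
RUN curl -fsS https://api.nuget.org/v3/index.json > /dev/null  # fails the build early if TLS is still broken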

docker-compose Error Cannot start service mongo: driver failed programming external connectivity on endpoint

I'm setting up GrandNode with MongoDB in Docker using Docker Compose.
docker-compose.yml
version: "3.6"
services:
mongo:
image: mongo:3.6
volumes:
- mongo_data_db:/data/db
- mongo_data_configdb:/data/configdb
ports:
- 27017:27017
grandnode:
image: grandnode/grandnode:4.10
ports:
- 8080:8080
depends_on:
- mongo
volumes:
mongo_data_db:
external: true
mongo_data_configdb:
external: true
I get the error below when running docker-compose:
E:\docker\grandnode>docker-compose up
Creating network "grandnode_default" with the default driver
Creating grandnode_mongo_1 ... error
ERROR: for grandnode_mongo_1 Cannot start service mongo: driver failed programming external connectivity on endpoint grandnode_mongo_1 (1e54342c07b093e32189aad487927f226b3ed0d1b6bdf7413588377b0e99bc2c): Error starting userland proxy: mkdir /port/tcp:0.0.0.0:27017:tcp:172.20.0.2:27017: input/output error
ERROR: for mongo Cannot start service mongo: driver failed programming external connectivity on endpoint grandnode_mongo_1 (1e54342c07b093e32189aad487927f226b3ed0d1b6bdf7413588377b0e99bc2c): Error starting userland proxy: mkdir /port/tcp:0.0.0.0:27017:tcp:172.20.0.2:27017: input/output error
ERROR: Encountered errors while bringing up the project.
This happened to me on Xubuntu 20.04.
The problem was that I had mongod running on my computer.
Stopping mongod was the solution for me.
I did this:
sudo systemctl stop mongod
Check that mongod was stopped with:
systemctl status mongod | grep Active
The output of this command should be:
Active: inactive (dead)
Then I ran this again:
docker-compose up -d
Everything worked as expected.
Unless you want to connect to your MongoDB instance from your local host, you don't need that port mapping "27017:27017".
Both services are on the same network and will see each other anyway; GrandNode can connect to MongoDB at mongo:27017.
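If you do need host access while a local mongod is already listening on 27017, remapping only the host side of the binding avoids the clash (a sketch; 27018 is just an arbitrary free port):
ports:
  - 27018:27017   # host port 27018 -> container port 27017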
The problem was that the Shared Drives were unchecked in Docker Desktop's settings.
Check the drives required
Click Apply
Restart Docker
This will fix the issue.
Stop your MongoDB server from your OS. For Linux:
sudo systemctl stop mongod
If this still doesn't work, uninstall MongoDB from the local machine and run Docker Compose once again:
sudo docker-compose up -d

Docker volumes + mongodb (-v) - how to make them to work in Windows 10?

I'm trying to create a mongodb container:
docker run --name mydb -d -p 27017:27999 -e MONGO_INITDB_DATABASE=mydb -v /myproject/dbtest:/data/db -v /myproject/docker/mongodb:/etc/mongod mongo:3.6.5-jessie --config /etc/mongod/mongo.conf
I have a problem with the -v flag: no matter what I try, it's not mapping my \myproject\dbtest and \myproject\docker\mongodb folders.
I'm trying to create a Docker container for my project. Since I already have a working mongod on my system, I want to map the container to a different port (27999).
I tried also creating it using a docker file:
FROM mongo:3.6.5-jessie
ADD mongod.conf /etc/mongod.conf
ENTRYPOINT ["/usr/bin/mongod","--config","/etc/mongod.conf"]
This time it managed to find the configuration, but I can't manage to connect to the db from outside of the container.
I tried:
127.0.0.1:2799
<the docker ip of the container>:27017
<the docker ip of the container>:27999
Here's my mongod.conf:
storage:
  dbPath: /data/db
  journal:
    enabled: true
systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongodb/mongod.log
net:
  port: 27017
  bindIp: 0.0.0.0
processManagement:
  timeZoneInfo: /usr/share/zoneinfo
Has anyone managed to find out how to make this work on Windows 10?
I'm using Docker CE v18.0.3
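For what it's worth, docker run -p maps host:container, so with mongod listening on 27017 inside the container (as in the mongod.conf above), publishing it on host port 27999 would look like this sketch:
docker run --name mydb -d -p 27999:27017 \
  -e MONGO_INITDB_DATABASE=mydb \
  -v /myproject/dbtest:/data/db \
  -v /myproject/docker/mongodb:/etc/mongod \
  mongo:3.6.5-jessie --config /etc/mongod/mongo.conf
# then connect from the host at 127.0.0.1:27999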

Consul - Deploy different config for different hosts

I am trying to deploy a consul cluster. I have the following machines:
consul-server01
consul-server02
consul-server03
web01
database01
I have three separate config files, one on each type of server:
/etc/consul.d/server/config.json
/etc/consul.d/web/config.json
/etc/consul.d/database/config.json
If I add a new server (say web02), how can I have it automatically adopt the web server config?
Does consul support configuration discovery, or do I need to use chef/puppet/ansible/salt to deploy the web config to the web server?
Resources:
https://www.digitalocean.com/community/tutorials/how-to-configure-consul-in-a-production-environment-on-ubuntu-14-04
You can load your configurations into the initial Consul instance's or cluster's key/value store and then use consul-template to configure the additional nodes.
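A minimal sketch of that approach (the KV path config/web and the file names are assumptions): load a role's config into the KV store once, then let each new node of that role render it locally with consul-template.
# load the web role's config into the KV store (run once, from any node):
consul kv put config/web @/etc/consul.d/web/config.json
# on a new web node, with a template file containing just {{ key "config/web" }}:
consul-template \
  -template "/etc/consul-template/web.json.tpl:/etc/consul.d/web/config.json" \
  -once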
Create a data container deriving from consul and mount a named volume onto /data, called myconfig.
Create a small ruby/whatever script, generate_key.rb, which generates a key into /data/consul/encrypt.json if it does not yet exist. The file ends up looking like this:
{ "encrypt": "<some key generated by consul keygen>" }
To generate a key, use: consul keygen
Start this script on container start (ENTRYPOINT or CMD).
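A shell sketch of such a generator (the paths follow the steps above; this is an illustration, not the author's actual script):
#!/bin/sh
# generate the gossip key once, on first start, if the placeholder is missing or empty
if [ ! -s /data/consul/encrypt.json ]; then
  KEY=$(consul keygen)
  printf '{ "encrypt": "%s" }\n' "$KEY" > /data/consul/encrypt.json
fi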
Setting up the consul-server
In the Dockerfile:
FROM consul
VOLUME /data/consul
# create a placeholder for the optional gossip key
RUN mkdir -p /data/consul && \
    echo "{}" > /data/consul/encrypt.json && \
    mkdir -p /consul/config && \
    ln -s /data/consul/encrypt.json /consul/config/encrypt.json
# your server config
COPY consul-config.json /consul/config/server_config.json
CMD ["agent","-server"]
Your consul-config.json should look similar to this:
{
  "datacenter": "stable",
  "acl_datacenter": "stable",
  "data_dir": "/consul/data",
  "ui": true,
  "dns_config": {
    "allow_stale": false
  },
  "log_level": "INFO",
  "node_name": "consul",
  "client_addr": "0.0.0.0",
  "server": true,
  "bootstrap": true
}
For every consul client
Create the same placeholder symlink:
RUN mkdir -p /data/consul && \
    echo "{}" > /data/consul/encrypt.json && \
    mkdir -p /consul/config && \
    ln -s /data/consul/encrypt.json /consul/config/encrypt.json
Why those symlinks and dummy files?
This ensures that when we mount the data volume, the encrypt key gets replaced by the one generated by the config container, and if not, the server starts without it. Consul needs a proper JSON file; it must not be missing or empty.
docker-compose example
version: "2"
services:
someconsuleclient:
image: mymongodb
container_name: someconsuleclient
depends_on:
- consul
volumes_from:
- dwconfig:ro
consul:
container_name: consul
image: myconsulimage
depends_on:
- config
volumes_from:
- config:ro
config:
image: myconfigimage
container_name: config
volumes:
- config:/data/
volumes:
config:
driver: local
So we have a config service to generate the encrypt.json, a consul server, and an example consul client. Now you can add new Consul nodes very easily while keeping gossip encryption.
Of course, you can additionally put arbitrary per-client configuration in /data/consul/custom_client.json in the bootstrap of your config container and share it across all clients. All .json files in the Consul config dir are merged, so you can easily build "additions".
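For example, such a shared fragment might look like this (a sketch; the retry_join target and log level are illustrative values, not from the answer):
{
  "retry_join": ["consul"],
  "log_level": "DEBUG"
}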
