How to get or set the clustered database username and password in Jelastic JPS - jelastic

I am trying to set up a Jelastic clustered database as described in Setting Up Auto-Clusterization with Cloud Scripting but I don't see documentation there that describes how to either set or retrieve the cluster username and password.
I did try passing db_user and db_pass to the cluster (names I found in some of the sample JPS files), as well as having those as settings, but the credentials were still just the Jelastic-generated ones.
Here is the JPS I am trying to use; it includes a simple Debian container that requires the database credentials as environment variables. In this case the Docker container includes just the MariaDB client for testing purposes; the real environment is a bit more complex than that, running startup scripts that need the database connection.
{
  "version": "1.5",
  "type": "install",
  "name": "Database test",
  "skipNodeEmails": true,
  "globals": {
    "MYSQL_ROOT_USERNAME": "root",
    "MYSQL_ROOT_PASSWORD": "${fn.password(20)}",
    "MYSQL_USERNAME": "username",
    "MYSQL_PASSWORD": "${fn.password(20)}",
    "MYSQL_DATABASE": "database",
    "MYSQL_HOSTNAME": "ProxySQL"
  },
  "nodes": [
    {
      "image": "mireiawen/debian-sql",
      "count": 1,
      "cloudlets": 8,
      "nodeGroup": "vds",
      "displayName": "SQL worker",
      "env": {
        "MYSQL_ROOT_USERNAME": "${globals.MYSQL_ROOT_USERNAME}",
        "MYSQL_ROOT_PASSWORD": "${globals.MYSQL_ROOT_PASSWORD}",
        "MYSQL_USERNAME": "${globals.MYSQL_USERNAME}",
        "MYSQL_PASSWORD": "${globals.MYSQL_PASSWORD}",
        "MYSQL_DATABASE": "${globals.MYSQL_DATABASE}",
        "MYSQL_HOSTNAME": "${globals.MYSQL_HOSTNAME}"
      }
    },
    {
      "nodeType": "mariadb-dockerized",
      "nodeGroup": "sqldb",
      "count": "2",
      "cloudlets": 16,
      "cluster": {
        "scheme": "master"
      }
    }
  ]
}
This JPS seems to launch the MariaDB master-master cluster correctly, with ProxySQL included; what I am missing is documentation on how to either provide the database credentials to the cluster, or retrieve the generated ones as JPS variables so I can pass them to the containers.

The mechanism has been improved, so now you can pass custom credentials to the cluster using either environment variables or cluster settings:
type: install
name: env. variables
nodes:
  nodeType: mariadb-dockerized
  nodeGroup: sqldb
  count: 2
  cloudlets: 8
  env:
    DB_USER: customuser
    DB_PASS: custompass
  cluster:
    scheme: master
or
type: install
name: cluster settings
nodes:
  nodeType: mariadb-dockerized
  nodeGroup: sqldb
  count: 2
  cloudlets: 8
  cluster:
    scheme: master
    db_user: customuser
    db_pass: custompass

Thank you for the good question. The mechanism for passing custom credentials should be, and will be, improved soon. At the moment you can use the example below; in short, we disable automated clusterization and enable it again with a custom username and password.
---
version: 1.5
type: install
name: Database test
skipNodeEmails: true
baseUrl: https://raw.githubusercontent.com/jelastic-jps/mysql-cluster/master
globals:
  logic_jps: ${baseUrl}/addons/auto-clustering/scripts/auto-cluster-logic.jps
  MYSQL_USERNAME: username
  MYSQL_PASSWORD: ${fn.password(20)}
nodes:
  - image: mireiawen/debian-sql
    count: 1
    cloudlets: 8
    nodeGroup: extra
    displayName: SQL worker
    env:
      MYSQL_USERNAME: ${globals.MYSQL_USERNAME}
      MYSQL_PASSWORD: ${globals.MYSQL_PASSWORD}
  - nodeType: mariadb-dockerized
    nodeGroup: sqldb
    count: 2
    cloudlets: 16
    cluster: false
onInstall:
  install:
    jps: ${globals.logic_jps}
    envName: ${env.envName}
    nodeGroup: sqldb
    settings:
      path: ${baseUrl}
      scheme: master
      logic_jps: ${globals.logic_jps}
      db_user: ${globals.MYSQL_USERNAME}
      db_pass: ${globals.MYSQL_PASSWORD}
      repl_user: repl-${fn.random}
      repl_pass: ${fn.password(20)}
After the environment is ready, you can test the connection by executing the following command in your Docker container:
mysql -h proxy -u $MYSQL_USERNAME -p$MYSQL_PASSWORD
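If you also want that check to run automatically at deployment time, one option is to turn onInstall in the manifest above into a list and append a cmd action against the worker group. This is only a sketch, assuming the extra node group from the example and Cloud Scripting's cmd action; adapt it to your real startup scripts:
onInstall:
  - install:
      # install action as above (settings abridged)
      jps: ${globals.logic_jps}
      envName: ${env.envName}
      nodeGroup: sqldb
      settings:
        path: ${baseUrl}
        scheme: master
        db_user: ${globals.MYSQL_USERNAME}
        db_pass: ${globals.MYSQL_PASSWORD}
  # hypothetical extra step: verify the credentials from the worker node
  - cmd [extra]:
      - mysql -h proxy -u ${globals.MYSQL_USERNAME} -p${globals.MYSQL_PASSWORD} -e "SELECT 1"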

Related

How to connect java app in docker container with database in another container?

I have a simple Spring Boot app connecting to two different SQL Server databases. When all of them are hosted locally, I have no issues. But I need to have each of them in a separate Docker container, and when I do this I get an SQLServerException at the start of my Spring Boot app telling me:
com.microsoft.sqlserver.jdbc.SQLServerException: The TCP/IP connection to the host '172.21.0.3', port 1434 has failed. Error: "'172.21.0.3'. Verify the connection properties.
Where 172.21.0.3 is the IP of one of my databases and 1434 its port.
I use a docker network called network_gls (which doesn't seem to work) to connect my containers (gls_app, mssql_1 & mssql_2) together. When I execute:
docker inspect network_gls
(NOTE: this command is executed after the Spring Boot app container starts and before its error appears)
I get the following result:
[
    {
        "Name": "network_gls",
        "Id": "88895acb2247b3b63b0cc29656fcb6d1a0d4a8192a8c7c1bb7b79362509e0742",
        "Created": "2020-09-28T15:21:39.995019917Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "172.21.0.0/16",
                    "Gateway": "172.21.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "0754d8766736806549e99500c143420c556e9370c14f897f6beb82c24a3c1124": {
                "Name": "mssql_1",
                "EndpointID": "6d886cf8f2aed256d8cbc7141d9ea5242f7ce61d95ae5412c16905d1b490f133",
                "MacAddress": "02:42:ac:15:00:02",
                "IPv4Address": "172.21.0.2/16",
                "IPv6Address": ""
            },
            "54d20a9a409053eaf53eb5c7e73e340ab29c12ceaf8ac20b109d1403cba0c3d3": {
                "Name": "mssql_2",
                "EndpointID": "e675f72fc6c737201a31dd485496e749d386165eaa90a6647e0bf13507683028",
                "MacAddress": "02:42:ac:15:00:03",
                "IPv4Address": "172.21.0.3/16",
                "IPv6Address": ""
            },
            "7e4ae1a46358fe9081c5277cb52ec49681b44631d6d9c1cdcaf6116326277d37": {
                "Name": "gls_app",
                "EndpointID": "d9051cd0134f5074b2b756b44b60cced85d2cac2fd04653e0f52ddb9ada339b9",
                "MacAddress": "02:42:ac:15:00:04",
                "IPv4Address": "172.21.0.4/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]
And in my Spring Boot application, my connection string looks like this (example for the database in mssql_2):
jdbc:sqlserver://172.21.0.3:1433;DatabaseName=gls
The Docker networking aspect is new to me, so tell me if I'm missing important information in this question.
Thanks in advance
In my case it works when I use the container name instead of the IP address.
So instead of:
jdbc:sqlserver://172.21.0.3:1433;DatabaseName=gls
try this:
jdbc:sqlserver://mssql_2:1433;DatabaseName=gls
You can also try publishing your ports if you want to check whether the problem is with your networking.
https://docs.docker.com/config/containers/container-networking/
Edit: Thanks to David for pointing out that container_name is not required; you can connect using the service name.
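For reference, here is a minimal docker-compose sketch in which the app reaches the database by service name. The service and network names are taken from the question; the image tags, password and environment variable are placeholders rather than the original setup:
version: "3.8"
services:
  mssql_2:
    image: mcr.microsoft.com/mssql/server:2019-latest   # placeholder image
    environment:
      ACCEPT_EULA: "Y"
      SA_PASSWORD: "YourStrong!Passw0rd"                # placeholder password
    networks:
      - network_gls
  gls_app:
    image: gls_app:latest                               # placeholder image tag
    environment:
      # Docker's embedded DNS resolves the service name "mssql_2"
      SPRING_DATASOURCE_URL: "jdbc:sqlserver://mssql_2:1433;DatabaseName=gls"
    depends_on:
      - mssql_2
    networks:
      - network_gls
networks:
  network_gls:
    external: true   # the network already exists in the question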
You can create a Docker Compose file and use it to start your DB. Below is an example of the docker-compose file and application.properties you could use.
You can use the Docker service name when connecting to the DB from another container. To connect from the host, the port is in most cases published as port:port, so the DB can be reached via localhost.
docker-compose build web
docker-compose up db
docker-compose up web
or
docker-compose up
You can use localhost when not accessing it from a container.
Docker Compose File
version: "3.3"
services:
web:
build:
context: ./
dockerfile: Dockerfile
image: web:latest
ports:
- 8080:8080
environment:
POSTGRES_JDBC_USER: UASENAME
POSTGRES_JDBC_PASS: PASSWORD
SPRING_DATASOURCE_URL: "jdbc:postgresql://db:5432/DATABASE"
SPRING_PROFILES_ACTIVE: dev
command: mvn spring-boot:run -Dspring.profiles.active=dev
depends_on:
- db
- rabbitmq
db:
image: "postgres:9.6-alpine"
ports:
- 5432:5432
expose:
- 5432
volumes:
- postgres:/var/lib/postgresql/data
environment:
POSTGRES_USER: USERNAME
POSTGRES_PASSWORD: PASSWORD
POSTGRES_DB: DATABASE
volumes:
postgres:
app:
These are the application properties (for local development):
spring.jpa.database-platform=org.hibernate.dialect.PostgreSQLDialect
spring.datasource.url=jdbc:postgresql://localhost:5432/database
spring.datasource.username=USERNAME
spring.datasource.password=PASSWORD
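For the containerised profile (activated via SPRING_PROFILES_ACTIVE=dev in the compose file above), the equivalent settings could live in an application-dev.yml instead of the environment variable. This is just a sketch with the same placeholder names, where the host is the compose service name db rather than localhost:
spring:
  datasource:
    url: jdbc:postgresql://db:5432/DATABASE
    username: USERNAME
    password: PASSWORD
  jpa:
    database-platform: org.hibernate.dialect.PostgreSQLDialect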
Hope this answers your question.

How to store Key/Value values in a config for Consul

I am using the Consul Docker image from Docker Hub. I wanted to know if there is a way to store the Key/Value settings in a config file that the Docker image can load on boot. I understand that the image has the /consul/config and /consul/data volumes that can be used, but I have not found a way to achieve this.
The following is how I run Consul:
version: '3.4'
services:
  consul:
    container_name: consul
    image: consul:latest
    ports:
      - "8500:8500"
      - "8300:8300"
    volumes:
      - ./consul:/consul/config
In my host's consul directory I have a file called config.json which contains the following:
{
  "node_name": "consul_server",
  "data_dir": "/data",
  "log_level": "INFO",
  "client_addr": "0.0.0.0",
  "bind_addr": "0.0.0.0",
  "ui": true,
  "server": true,
  "bootstrap_expect": 1
}
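One possible approach (a sketch, not taken from the Consul documentation verbatim): keep the desired keys in a consul kv export-style JSON file and load them with consul kv import from a one-shot companion service once the server is up, for example by adding something like this under the existing services: block:
  consul-kv-seed:
    image: consul:latest
    depends_on:
      - consul
    environment:
      # point the CLI at the running server (assumed reachable as "consul")
      CONSUL_HTTP_ADDR: http://consul:8500
    volumes:
      # kv.json is assumed to be in `consul kv export` format
      - ./consul/kv.json:/tmp/kv.json:ro
    entrypoint: ["/bin/sh", "-c", "sleep 5 && consul kv import @/tmp/kv.json"]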

How to monitor docker services using elastic stack

I have a Docker swarm running a number of services. I'm using the Elastic Stack (Kibana, Elasticsearch, Filebeat, etc.) for monitoring.
For the business logic I'm writing logs and using Filebeat to ship them to Logstash, then analyzing the data in Kibana.
But I'm having trouble monitoring the liveness of my Docker services. Some of them are deployed globally (like Filebeat) and some of them have a number of replicas. I want to be able to see in Kibana that the number of running containers is equal to the number the service should have. I'm trying to use Metricbeat with the docker module; the most useful metricset I've found is container, but it doesn't seem to contain enough information for me to display or analyze the number of instances of a service.
I'd appreciate any advice on how to achieve this.
The metricbeat config
metricbeat.autodiscover:
  providers:
    - type: docker
      hints.enabled: true
metricbeat.modules:
  - module: docker
    enabled: true
    metricsets:
      - container
      - healthcheck
      - info
    period: 10s
    hosts: ["unix:///var/run/docker.sock"]
processors:
  - add_docker_metadata: ~
  - add_locale:
      format: offset
output.logstash:
  hosts: ["mylogstash.com"]
The metricset container log data (the relevant docker part)
...
"docker": {
  "container": {
    "id": "60983ad304e13cb0245a589ce843100da82c5fv9e093aad68abb439cdc2f3044",
    "status": "Up 3 weeks",
    "command": "./entrypoint.sh",
    "image": "registry.com/myimage",
    "created": "2019-04-08T11:38:10.000Z",
    "name": "mystack_myservice.wuiqep73p99hcbto2kgv6vhr2.mufs70y24k5388jxv782in18f",
    "ip_addresses": ["10.0.0.148"],
    "labels": {
      "com_docker_swarm_node_id": "wuiqep73p99hcbto2kgv6vhr2",
      "com_docker_swarm_task_name": "stack_service.wuiqep73p99hcbto2kgv6vhr2.mufs70y24k5388jxv782in18f",
      "com_docker_swarm_service_id": "kxm5dk43yzyzpemcbz23s21xo",
      "com_docker_swarm_task_id": "mufs70y24k5388jxv782in18f",
      "com_docker_swarm_task": "",
      "com_docker_stack_namespace": "mystack",
      "com_docker_swarm_service_name": "mystack_myservice"
    },
    "size": {
      "rw": 0,
      "root_fs": 0
    }
  }
}
...
For future reference:
I wrote a bash script which runs at an interval and writes a JSON log for each of the swarm services. The script is wrapped in the image docker service logger.
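A deployment for such a logger could look roughly like the stack snippet below. The image name and log path are placeholders for the script mentioned above; it is pinned to a manager node so it can query the swarm services, and the JSON logs land in a directory Filebeat can pick up:
version: "3.3"
services:
  swarm-service-logger:
    image: my/docker-service-logger:latest                  # placeholder for the script image
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro        # lets the script call `docker service ls`
      - /var/log/swarm-services:/var/log/swarm-services     # JSON logs shipped by Filebeat
    deploy:
      placement:
        constraints:
          - node.role == manager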

Config Server: native property source is ignored

This is the content of my bootstrap.yml file:
server.port: 8888
spring:
  application:
    name: configserver
  profiles:
    active: native, git, vault
  cloud:
    config:
      enabled: false
      server:
        native:
          searchLocations: classpath:config/
          # searchLocations: file://${native_location}
          order: 3
        git:
          uri: file:///home/jcabre/projects/wsec-sccs/server/repo
          order: 2
        vault:
          host: ${vault_server_host:localhost}
          port: ${vault_server_port:8200}
          scheme: ${vault_server_scheme:https}
          backend: ${vault_backend:configserver}
          profileSeparator: /
          order: 1
As you can see, I've set up three backends: native, git and vault.
So the classpath:/config/application.yml content is:
foo: FROM NATIVE APPLICATION
/home/jcabre/projects/wsec-sccs/server/repo/application.yml content:
foo: FROM GIT
And Vault:
$ vault kv get configserver/configclient/
=== Data ===
Key    Value
---    -----
foo    FROM VAULT

$ vault kv get configserver/configclient/dev
=== Data ===
Key    Value
---    -----
foo    FROM DEV VAULT
When I try to get the foo config key using curl:
$ curl -sS -X GET http://localhost:8888/configclient/default -H "X-Config-Token: ${vault_token}" | jq .
{
  "name": "configclient",
  "profiles": [
    "default"
  ],
  "label": null,
  "version": null,
  "state": null,
  "propertySources": [
    {
      "name": "vault:configclient",
      "source": {
        "foo": "FROM VAULT"
      }
    },
    {
      "name": "file:///home/jcabre/projects/wsec-sccs/server/repo/application.yml",
      "source": {
        "foo": "FROM GIT APPLICATION"
      }
    }
  ]
}
I only get the git and vault property sources; the native one is never returned.
How can this be happening?
Any ideas?
Not sure if you ever got an answer to this, but I had a similar problem (no native profile when Vault was enabled) so I looked through the code (latest in GitHub).
It would appear that the NativeEnvironmentRepository is only enabled if the native profile is present AND no other environment repositories are configured. So it doesn't look like you are able to do what you want in the question.
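If you really do need all three sources served together, one thing that may be worth checking is the composite environment repository. The sketch below assumes a Spring Cloud Config version that supports it, and the exact property names may vary between releases:
spring:
  profiles:
    active: composite
  cloud:
    config:
      server:
        composite:
          - type: vault
            host: ${vault_server_host:localhost}
            port: ${vault_server_port:8200}
            scheme: ${vault_server_scheme:https}
            backend: ${vault_backend:configserver}
          - type: git
            uri: file:///home/jcabre/projects/wsec-sccs/server/repo
          - type: native
            search-locations: classpath:config/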

docker stack network issue

I have created the docker stack file below. It created the 3 services as expected, but I am unable to access them from outside the host, and no ports are being published. I have created an overlay network called test01; when I create this setup manually via the command line it works perfectly.
version: '3.0'
networks:
  default:
    external:
      name: test01
services:
  mssql:
    image: microsoft/mssql-server-windows-developer
    environment:
      - SA_PASSWORD=Password1
      - ACCEPT_EULA=Y
    ports:
      - 1433:1433
    volumes:
      - c:\Databases:c:\Databases
    deploy:
      placement:
        constraints: [node.labels.os==Windows]
  web:
    image: iiswithdb:latest
    ports:
      - 8080:8080
    deploy:
      replicas: 3
  lbs:
    image: nginx:latest
    ports:
      - 80:80
    deploy:
      placement:
        constraints: [node.labels.os==Windows]
Your services need to explicitly join the network you are defining. You can do this in the compose file. Otherwise they will use the default network created by the stack/compose. https://docs.docker.com/compose/compose-file/#networks
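Applied to the stack file above, that would look roughly like the sketch below; everything except the image and network lines is omitted for brevity, and the other settings stay as in the original file:
version: '3.0'
networks:
  test01:
    external: true
services:
  mssql:
    image: microsoft/mssql-server-windows-developer
    networks:
      - test01
  web:
    image: iiswithdb:latest
    networks:
      - test01
  lbs:
    image: nginx:latest
    networks:
      - test01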
c:\Program Files\docker>docker network inspect test01
[
    {
        "Name": "test01",
        "Id": "8ffz8xihux13gx1uuhalub5by",
        "Created": "2017-09-11T12:30:35.7747711+05:30",
        "Scope": "swarm",
        "Driver": "overlay",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "10.0.0.0/24",
                    "Gateway": "10.0.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "2f283e7c21608d09a57a7cdef25a836d77c0ceb8030ae15796ff692e43b0eb73": {
                "Name": "test_web.1.jti1pyrgxv3v4yet9m9cpk0i4",
                "EndpointID": "bed2a5e0d077fcf48ab2d6fe419a8a69a45c3033e1a8602cf6395f93bec405b8",
                "MacAddress": "00:15:5d:f3:aa:1a",
                "IPv4Address": "10.0.0.5/24",
                "IPv6Address": ""
            },
            "8c55fad8ad54e5286bb7fc54da52ad1958854bceacbf0260400e7dc3c00c1c45": {
                "Name": "test_mssql.1.mn31bwoh8iwg5sge5rllh7gc9",
                "EndpointID": "00c6e68d6a22ee0dc5ad90cda7ab958323a0b07206ce4583f11baa8b3476de8f",
                "MacAddress": "00:15:5d:f3:aa:23",
                "IPv4Address": "10.0.0.3/24",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.driver.overlay.vxlanid_list": "4097",
            "com.docker.network.windowsshim.hnsid": "b76fa7e3-530d-4133-b72a-1d1818cd3c16"
        },
        "Labels": {},
        "Peers": [
            {
                "Name": "node2-f3dedf0e26d9",
                "IP": "10.30.50.10"
            },
            {
                "Name": "node3-2e1ad7fb91be",
                "IP": "10.30.50.13"
            }
        ]
    }
]
Below is the output
c:\Program Files\docker>docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
bo9uovidd4z3 test_web replicated 3/3 iiswithdb:latest *:8080->8080/tcp
sujwg53gjnp3 test_lbs replicated 0/1 nginx:latest *:80->80/tcp
vyxyoaji8jkd test_mssql replicated 1/1 microsoft/mssql-server-windows-developer:latest *:1433->1433/tcp
c:\Program Files\docker>docker service ps test_mssql
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
mn31bwoh8iwg test_mssql.1 microsoft/mssql-server-windows-developer:latest node2 Running Running 6 minutes ago
When I inspect the SQL Server container I can't find any port tagged.
c:\Program Files\docker>docker service ps test_lbs
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
j4x806u1ucdr test_lbs.1 nginx:latest Running Pending 32 minutes ago
c:\Program Files\docker>docker service ps test_web
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
jti1pyrgxv3v test_web.1 iiswithdb:latest node2 Running Running 22 minutes ago
1gudznmi9ufz \_ test_web.1 iiswithdb:latest node2 Shutdown Failed 27 minutes ago "task: non-zero exit (21479434…"
xxkr98na4qsy test_web.2 iiswithdb:latest node3 Running Running 29 minutes ago
7j1y6vc90qvf test_web.3 iiswithdb:latest node3 Running Running 29 minutes ago
C:\Users\Administrator>docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
19qeljqt3wuf test_mssql replicated 1/1 microsoft/mssql-server-windows-developer:latest *:1433->1433/tcp
48gamfl4j4rl test_web replicated 3/3 iiswithdb:latest *:8080->8080/tcp
nxycxrigmz4u test_lbs replicated 1/1 nginx:latest *:80->80/tcp
C:\Users\Administrator>docker service ps test_lbs
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
81fm4xplekig test_lbs.1 nginx:latest node2 Running Running 25 minutes ago
C:\Users\Administrator>docker service ps test_web
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
aivzt7eagf4f test_web.1 iiswithdb:latest node1 Running Running about an hour ago
sny1zf7osibq test_web.2 iiswithdb:latest node2 Running Running about an hour ago
lwzlpaks1b4t \_ test_web.2 iiswithdb:latest node2 Shutdown Failed about an hour ago "task: non-zero exit (21479434…"
iav5mxqdbzoy test_web.3 iiswithdb:latest node3 Running Running about an hour ago
C:\Users\Administrator>docker service ps test_mssql
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
pfu8qyw7vqxp test_mssql.1 microsoft/mssql-server-windows-developer:latest node2 Running Running 26 minutes ago
