How to configure a MongoDB cluster which supports sessions? - spring

I want to explore the new transaction feature of MongoDB and use Spring Data MongoDB. However, I get the exception message "Sessions are not supported by the MongoDB cluster to which this client is connected". Any hint regarding the config of MongoDB 3.7.9 is appreciated.
The stacktrace starts with:
com.mongodb.MongoClientException: Sessions are not supported by the MongoDB cluster to which this client is connected
at com.mongodb.MongoClient.startSession(MongoClient.java:555) ~[mongodb-driver-3.8.0-beta2.jar:na]
at org.springframework.data.mongodb.core.SimpleMongoDbFactory.getSession(SimpleMongoDbFactory.java:163) ~[spring-data-mongodb-2.1.0.DATAMONGO-1920-SNAPSHOT.jar:2.1.0.DATAMONGO-1920-SNAPSHOT]

I was having the same issue when trying to connect to a single standalone mongo instance. However, as written in the official documentation, MongoDB supports the transaction feature only on a replica set. So I created a replica set with all members on MongoDB 4.0.0, and I was able to execute the code successfully.
So, start a replica set (3 members), then try to execute the code again; the issue will be resolved.
NB: you can configure a replica set on the same machine for testing: https://docs.mongodb.com/manual/tutorial/deploy-replica-set-for-testing/
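For a quick local test, a minimal sketch (the data directories, log paths and ports below are just examples, not taken from the original answer) could look like this on Linux/macOS:
# create one data directory per member (example paths)
mkdir -p /tmp/rs0-0 /tmp/rs0-1 /tmp/rs0-2
# start three mongod processes that belong to the same replica set
mongod --replSet rs0 --port 27017 --dbpath /tmp/rs0-0 --fork --logpath /tmp/rs0-0.log
mongod --replSet rs0 --port 27018 --dbpath /tmp/rs0-1 --fork --logpath /tmp/rs0-1.log
mongod --replSet rs0 --port 27019 --dbpath /tmp/rs0-2 --fork --logpath /tmp/rs0-2.log
# initiate the replica set from the mongo shell
mongo --port 27017 --eval 'rs.initiate({
  _id: "rs0",
  members: [
    { _id: 0, host: "localhost:27017" },
    { _id: 1, host: "localhost:27018" },
    { _id: 2, host: "localhost:27019" }
  ]
})'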

We were able to configure it locally as below.
On Linux, a default /etc/mongod.conf configuration file is included when using a package manager to install MongoDB.
On Windows, a default <install directory>/bin/mongod.cfg configuration file is included during the installation.
On macOS, a default /usr/local/etc/mongod.conf configuration file is included when installing from MongoDB's official Homebrew tap.
Add the following config:
replication:
  oplogSizeMB: 128
  replSetName: "rs0"
  enableMajorityReadConcern: true
sudo service mongod restart;
mongo;
rs.initiate({
  _id: "rs0",
  version: 1,
  members: [
    { _id: 0, host: "localhost:27017" }
  ]
})
Check that the replica set config is enabled:
rs.conf()
We can then use the connection URL:
mongodb://localhost/default?ssl=false&replicaSet=rs0&readPreference=primary
docs: config-options single-instance-replication
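To sanity-check that URL before wiring it into Spring (same database name default and replica set rs0 as above), something like the following from a shell should report an initiated, primary-holding set:
mongo "mongodb://localhost/default?ssl=false&replicaSet=rs0&readPreference=primary" --eval 'rs.status().ok'
# prints 1 once the single member has been elected primary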

A replica set is indeed the resolution for this issue, but a replica set of 3 nodes is not mandatory.
Solution 1 (for a standalone setup)
For a standalone mongo installation you can skip configuring the 2nd or 3rd node, as described in the official mongo documentation here.
You will still need to set a replSetName in the configuration:
replication:
  oplogSizeMB: <int>
  replSetName: <string>
  enableMajorityReadConcern: <boolean>
and then run (details of which are here):
rs.initiate()
After this, the connection string would look like the one below:
mongodb://localhost:27017/<database_name>?replicaSet=<replSet_Name>
Keys above that you need to replace:
database_name = name of the database
replSet_Name = name of the replica set you set up in the above configuration
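As a concrete sketch with example values only (database mydb, replica set rs0 as configured above), the initiation and a quick connectivity check from a shell would be:
mongo --eval 'rs.initiate()'
# run once against the standalone instance, then verify the replica-set URI answers:
mongo "mongodb://localhost:27017/mydb?replicaSet=rs0" --eval 'db.runCommand({ ping: 1 })'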
Solution 2 (only for a docker-based setup)
An example Docker image with a single-node replica set acting as the primary node for a development environment is below.
I have hosted the docker image on Docker Hub:
docker pull krnbr/mongo:latest
The contents of the Dockerfile are below:
FROM mongo
RUN echo "rs.initiate({'_id':'rs0','members':[{'_id':0,'host':'127.0.0.1:27017'}]});" > /docker-entrypoint-initdb.d/replica-init.js
RUN cat /docker-entrypoint-initdb.d/replica-init.js
CMD [ "--bind_ip_all", "--replSet", "rs0" ]
Docker run command (replace <Image Name> with the image name that you built yourself, or use the one shared above, i.e. krnbr/mongo):
without volume
docker run -d --name mongo -p 27017:27017 <Image Name> mongod --replSet rs0 --port 27017
with volume
docker run -d --name mongo -p 27017:27017 -v ~/.mongodb:/data/db <Image Name> mongod --replSet rs0 --port 27017
for supporting binding of any IP:
docker run -d --name mongo -p 27017:27017 -v ~/.mongodb:/data/db <Image Name> mongod --bind_ip_all --replSet rs0 --port 27017
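Once the container is up, a quick way to confirm that the single-node set has elected itself primary (container name mongo as in the commands above; newer versions of the official image ship mongosh instead of the mongo shell):
docker exec mongo mongo --quiet --eval 'rs.status().members[0].stateStr'
# should print PRIMARY; the application can then connect with
# mongodb://localhost:27017/<database_name>?replicaSet=rs0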

I disabled TLS (inside Spring Data MongoDB), and now the transaction feature with the development release 3.7.9 works fine.

With reference to the answer given by @kakabali, I had a slightly different scenario and configured it as follows.
I was configuring mongo with Spring Boot and trying to use transaction management, and I was getting the error:
com.mongodb.MongoClientException: Sessions are not supported by the MongoDB cluster to which this client is connected
I followed a few of the steps given in the answer above and added a few more:
Change the mongo.cfg and add this:
replication:
  oplogSizeMB: 128
  replSetName: "rs0"
  enableMajorityReadConcern: true
Restart the service (I am using Windows 10).
Open the mongo console and run rs.initiate()
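For reference, the same restart-and-initiate steps from a Windows command prompt, assuming MongoDB was installed as the default "MongoDB" Windows service (the service name may differ on your machine):
net stop MongoDB && net start MongoDB
mongo --eval "rs.initiate()"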

Make sure you're using the updated API - for example:
MongoClient mongoClient = MongoClients.create(); // com.mongodb.client.MongoClients, the newer client API
MongoDatabase dataBase = mongoClient.getDatabase("mainDatabase");
MongoCollection<Document> collection = dataBase.getCollection("entities");
Also make sure you have mongo.exe running.

Related

M1 mac cannot run jboss/keycloak docker image

Switched to an M1 mac a week ago and I cannot get my application up and running with docker because the jboss/keycloak image is not working as expected. I'm getting the following message from the container when trying to access localhost:8080:
12:08:12,456 ERROR [org.jboss.msc.service.fail] (MSC service thread 1-5) MSC000001: Failed to start service org.wildfly.network.interface.private: org.jboss.msc.service.StartException in service org.wildfly.network.interface.private: WFLYSRV0082: failed to resolve interface private
12:08:12,526 ERROR [org.jboss.as.controller.management-operation] (Controller Boot Thread) WFLYCTL0013: Operation ("add") failed - address: ([("interface" => "private")]) - failure description: {"WFLYCTL0080: Failed services" => {"org.wildfly.network.interface.private" => "WFLYSRV0082: failed to resolve interface private"}}
12:08:13,463 ERROR [org.jboss.as] (Controller Boot Thread) WFLYSRV0026: Keycloak 12.0.4 (WildFly Core 13.0.3.Final) started (with errors) in 20826ms - Started 483 of 925 services (54 services failed or missing dependencies, 684 services are lazy, passive or on-demand)
Tried with all image versions and all behave the same. Has anyone managed to run this image without issues? Thanks
Alternatively, you can build the keycloak docker image locally; I was able to start keycloak after doing that. Here are the steps I followed:
Clone the Keycloak containers repository: git clone git@github.com:keycloak/keycloak-containers.git
Open the server directory (cd keycloak-containers/server)
Check out the desired version, e.g. git checkout 12.0.4
Build docker image docker build -t jboss/keycloak:12.0.4 .
Run Keycloak docker run --rm -p 9080:8080 -e KEYCLOAK_USER=admin -e KEYCLOAK_PASSWORD=admin jboss/keycloak:12.0.4
Using this image, I am now able to start up keycloak: https://hub.docker.com/r/wizzn/keycloak
For Keycloak 16, docker 20.10 and docker-compose 1.29, this image works flawlessly: https://hub.docker.com/r/sleighzy/keycloak - as suggested by @zakjan.
A service like:
keycloak:
  image: sleighzy/keycloak
  environment:
    ... your Keycloak config
Should be enough to get up and running.
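If you don't use compose, the plain docker equivalent is roughly the following (the KEYCLOAK_USER/KEYCLOAK_PASSWORD variables are an assumption based on the standard WildFly-based Keycloak images; check the image's README for the variables it actually supports):
docker run -d --name keycloak -p 8080:8080 \
  -e KEYCLOAK_USER=admin \
  -e KEYCLOAK_PASSWORD=admin \
  sleighzy/keycloak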
I'm on an m1 and I ran this and it worked.
docker run --platform=linux/amd64 -p 8080:8080 -e KEYCLOAK_ADMIN=admin -e KEYCLOAK_ADMIN_PASSWORD=admin quay.io/keycloak/keycloak:17.0.0 start-dev
I merely added --platform=linux/amd64 to the docker command I found in https://www.keycloak.org/getting-started/getting-started-docker
The location for building a Quarkus version of Keycloak has changed, so this method will not work anymore for any major release greater than 16. But the following script will. Just save it as a .sh file and execute it in your terminal. By uncommenting the last line, this will also directly start an instance of Keycloak.
The version number can be changed, but this is only tested for M1 chips and version 17.0.0.
VERSION=17.0.0 # set version here
cd /tmp
git clone git@github.com:keycloak/keycloak.git
cd keycloak/quarkus/container
git checkout $VERSION
docker build -t "quarkus-keycloak:$VERSION" .
#docker run -p 8080:8080 -e KEYCLOAK_ADMIN=admin -e KEYCLOAK_ADMIN_PASSWORD=admin "quarkus-keycloak:$VERSION" start-dev --http-relative-path /auth
There is an update to this issue - images for AMD64 and ARM64 architectures are now available and can be found here: https://quay.io/repository/keycloak/keycloak?tab=tags.
See the discussions on GitHub (https://github.com/keycloak/keycloak-containers/issues/341 and https://github.com/keycloak/keycloak/issues/8825).
jboss/keycloak does not support arm64 for now.
But you can use this image from Docker Hub: mihaibob/keycloak
https://hub.docker.com/r/mihaibob/keycloak
I'm using this and haven't noticed a difference.
I don't have a mac but I just started working with jboss/keycloak lately and have been able to get it to start.
Essentially what I did (assuming docker is installed):
docker pull jboss/keycloak:16.1.0
docker run --env-file targetDB.txt -p 8080:8080 jboss/keycloak:16.1.0
You might have to run those commands with sudo.
This pulls the jboss/keycloak image from Docker Hub and then runs it, exposing port 8080 within the container to the host machine. It also uses the environment variables in the .txt file (which contains info on the database endpoint you wish to connect keycloak to in order to persist data).
If you don't specify --env-file <text file>, I believe keycloak uses its default H2 database, which isn't the best.
I have my local jboss/keycloak pointing to a postgres db I have in an AWS RDS environment, so the contents of targetDB.txt for me are:
DB_VENDOR=postgres
DB_ADDR=<my postgres aws rds endpoint>:5432
DB_DATABASE=<name of the database>
DB_USER=<db username to connect to postgres instance>
DB_PASSWORD=<password associated with db username to connect>
If I'm not mistaken, the database named in the DB_DATABASE field must already exist, so you'll need to create it before running the docker run command.
After you run the docker run command above and the logs show it starting up, you should be able to access the keycloak admin console in your local browser:
http://localhost:8080/auth
If this is the first time you're running keycloak you have to create a master/admin user before you can log in.
To add a master user, run these commands (while your keycloak is already running):
docker exec <container id or container name> /opt/jboss/keycloak/bin/add-user-keycloak.sh -u <USERNAME> -p <PASSWORD>
then you need to restart your keycloak container:
docker restart <container id or container name>
Again you might have to do those commands with sudo.
After that's done, go back to your local web browser at http://localhost:8080/auth and you can now access the login page and actually log in with the username and password you created above.

NEAR Mainnet Archival Node Setup

I tried setting up the NEAR mainnet archival node using docker by following this documentation - https://github.com/near/nearup#building-the-docker-image. The docker run command in the document does not specify any port.
So I also ran docker run without any port, but when I checked with docker ps it did not show any port, although the neard node runs.
I did not find any docs on the node APIs; can we use the archival APIs - https://docs.near.org/docs/api/rpc - to query the node?
Docker run command used to set up archival mainnet node:
sudo docker run -d -v $PWD:/root/.near --name nearup nearprotocol/nearup run mainnet
JSON RPC on nearcore is exposed on port 3030.
As for running an archival node, you might be interested in this doc page: https://docs.near.org/docs/roles/integrator/exchange-integration#steps-to-start-archive-node
P.S. nearup is considered oldish, though still in use.
I have updated the documentation for nearup to specify the port binding for RPC now: https://github.com/near/nearup#building-the-docker-image
You can use the following command:
docker run -v $HOME/.near:/root/.near -p 3030:3030 --name nearup nearprotocol/nearup run mainnet
And you can validate nearup is running and the RPC /status endpoint is available by running:
docker exec nearup nearup logs
and
curl 0.0.0.0:3030/status
Also please make sure that you have changed the ~/.near/mainnet/config.json to contain the variable:
{
  ...
  "archive": true,
  ...
}
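A one-liner to flip that flag in place (assumes jq is installed; keep a backup of the file if unsure):
jq '.archive = true' ~/.near/mainnet/config.json > /tmp/config.json \
  && mv /tmp/config.json ~/.near/mainnet/config.json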

How to get embedded Redis metrics?

I have used Embedded Redis for caching in my springboot application. Redis runs on localhost and the default port 6379 on application startup.
Is there a way to get metrics (memory-used, keyspace_hits, keyspace_misses, etc.) for embedded redis from outside the application, maybe via the command line or any API?
PS: I have used Redisson as client to perform cache operations with redis.
Thanks.
Redis provides a command line interface, redis-cli, to interact with it and get the metrics. redis-cli can be used with embedded redis as well.
Install the command line interface:
npm install -g redis-cli
Connect to redis running locally (cmd: rdcli -h host -p port -a password):
rdcli -h localhost
Use any redis commands:
localhost:6379> info memory
# Memory
used_memory:4384744
used_memory_human:4.18M
used_memory_rss:4351856
used_memory_peak:4385608
used_memory_peak_human:4.18M
used_memory_lua:35840
mem_fragmentation_ratio:0.99
mem_allocator:dlmalloc-2.8
Ref: "Installing and running Node.js redis-cli" section of this post https://redislabs.com/blog/get-redis-cli-without-installing-redis-server

New servers aren't displayed in docker weblogic 12.2.1

I'm trying to create new servers linked to the adminServer in weblogic. So I followed this documentation.
I managed to successfully create the wlsadmin (container name) server, and then I tried to create two other servers:
docker run -d --link wlsadmin:wlsadmin -p 7002:7001 -p 7003:7002 1221-domain createServer.sh
docker run -d --link wlsadmin:wlsadmin -p 7004:7001 1221-domain createServer.sh
They get created successfully, but in the admin console under Environment/Servers they aren't displayed at all, while under Environment\Machines two new machines are created.
docker network inspect bridge shows me that the containers are in the same network, and docker ps shows me that the containers are running (I can also get inside them).
The docker logs don't show any errors.
This means I cannot install any .war files.
Any idea what is wrong with the setup?
Weblogic Version: 12.2.1
Docker Version: 17.03.1-ce
I found an alternative solution: add the servers manually (previously the servers were added automatically).
Steps:
go to Environment\Machines
select a machine and go to the Servers tab
add a new server (check the IP with docker inspect <container>)
click on the server and add it to the Docker cluster
go to Environment\Servers and click on the Control tab
start the new server

Using composer-wallet-redis , where to see the card in redis service?

I followed the guideline.
Install composer-wallet-redis image and start the container.
export NODE_CONFIG={"composer":{"wallet":{"type":"#ampretia/composer-wallet-redis","desc":"Uses a local redis instance","options":{}}}}
composer card import admin#test-network.card
I found the card is still stored on my local machine at path ~/.composer/card/
How can I check whether the card exist in the redis server?
How to import the business network cards into the cloud custom wallet?
The primary issue (which I will correct in the README) is that the module name should be composer-wallet-redis. The @ampretia scope was a temporary repo.
Assuming that redis is started on the default port, you can run the redis CLI like this
docker run -it --link composer-wallet-redis:redis --rm redis redis-cli -h redis -p 6379
You can then issue redis-cli commands to look at the data, though it is not recommended to view or modify the data; it is just useful to confirm to yourself that it's working. The KEYS * command will display everything, but this should only be used in a development context. See the warnings on the redis docs pages.
export NODE_ENVIRONMENT
start the docker container
composer card import
execute the docker run *** command followed by KEYS *, which returns "empty list or set".
@Calanais
