How to get embedded Redis metrics? - spring-boot

I have used embedded Redis for caching in my Spring Boot application. Redis runs on localhost on the default port 6379 at application startup.
Is there a way to get metrics (memory used, keyspace_hits, keyspace_misses, etc.) for the embedded Redis from outside the application, maybe via the command line or an API?
PS: I have used Redisson as the client to perform cache operations with Redis.
Thanks.

Redis provides a command-line interface, redis-cli, to interact with it and fetch metrics. redis-cli can be used with embedded Redis as well.
Install the command-line interface:
npm install -g redis-cli
Connect to Redis running locally (usage: rdcli -h host -p port -a password):
rdcli -h localhost
Use any Redis commands:
localhost:6379> info memory
# Memory
used_memory:4384744
used_memory_human:4.18M
used_memory_rss:4351856
used_memory_peak:4385608
used_memory_peak_human:4.18M
used_memory_lua:35840
mem_fragmentation_ratio:0.99
mem_allocator:dlmalloc-2.8
Ref: the "Installing and running Node.js redis-cli" section of this post: https://redislabs.com/blog/get-redis-cli-without-installing-redis-server
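Since the INFO reply is plain key:value text, it is also easy to consume programmatically, for example if you want to surface these numbers through your own endpoint. A minimal sketch (the class name is illustrative; the sample values are taken from the output above):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class RedisInfoParser {
    // Parse the "key:value" lines of a Redis INFO section into a map,
    // skipping blank lines and "#" section headers.
    static Map<String, String> parse(String info) {
        Map<String, String> metrics = new LinkedHashMap<>();
        for (String line : info.split("\r?\n")) {
            line = line.trim();
            if (line.isEmpty() || line.startsWith("#")) continue;
            int sep = line.indexOf(':');
            if (sep > 0) metrics.put(line.substring(0, sep), line.substring(sep + 1));
        }
        return metrics;
    }

    public static void main(String[] args) {
        String sample = "# Memory\nused_memory:4384744\nused_memory_human:4.18M\nmem_fragmentation_ratio:0.99\n";
        Map<String, String> m = parse(sample);
        System.out.println(m.get("used_memory"));       // prints 4384744
        System.out.println(m.get("used_memory_human")); // prints 4.18M
    }
}
```

The same parsing works for any INFO section (keyspace_hits and keyspace_misses live in the Stats section, fetched with info stats).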

Related

How can I interact with a Corda node via RPC, using curl?

Hope all are safe and well! I asked this question on Slack, but it was suggested that I ask here.
I have a Corda 4.3 compatibility zone setup using the bootstrapper, and I have setup my node.conf file user section as below:
rpcUsers = [
    {
        username=user1,
        password=password1,
        permissions=[ ALL ]
    }
]
My RPC settings are:
rpcSettings {
    address="localhost:10201"
    adminAddress="localhost:10202"
}
And I can see that the port is open:
# nc -v localhost 10201
localhost (127.0.0.1:10201) open
^Cpunt!
My questions are:
Is it possible to connect to a Corda node and execute API commands using RPC?
By API commands I mean the same as if I were connected to the Corda shell; is this the case?
Thanks,
Viv
SSH is disabled by default; you can enable it with the settings below in the node.conf file.
sshd {
    port = <portNumber>
}
Once enabled, you can connect to the node using SSH and execute all the commands that you could normally execute from the node's shell.
Use the below command to connect to the node:
ssh -p [portNumber] [host] -l [user]
For more details on the node shell, refer to the docs here: https://docs.corda.net/docs/corda-os/4.4/shell.html
You can create a Spring Boot webserver like in this example:
You create an RPC connection, which uses the RPC user credentials that you defined in your node.conf.
The RPC connection gets injected into the controller where you define your APIs.
The injected RPC connection exposes a proxy that you can use for many things, including starting flows and querying the vault. Have a look at the StandardController example to see various RPC interactions with the node. You can add your own APIs to the template CustomController.
The webserver is a simple Spring Boot application.
When you start the webserver with this Gradle task, it will inject the RPC connection into the controller and expose your APIs on the port that you supply in the application.properties file.
Now that the webserver is running, you can call your APIs using either curl or Postman.
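For illustration, the same call you would make with curl can also be made from Java's built-in HTTP client. The port 8080 and the /custom/example path here are assumptions; substitute the port from your application.properties and the mapping of your own controller:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class CallNodeApi {
    public static void main(String[] args) {
        // Hypothetical endpoint: the port and path come from your own
        // application.properties and controller mappings.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/custom/example"))
                .GET()
                .build();
        System.out.println(request.uri().getPort()); // prints 8080

        // With the webserver running, sending it is the equivalent of
        // `curl http://localhost:8080/custom/example`:
        // HttpResponse<String> resp = HttpClient.newHttpClient()
        //         .send(request, HttpResponse.BodyHandlers.ofString());
        // System.out.println(resp.body());
    }
}
```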

How to configure a MongoDB cluster which supports sessions?

I want to explore the new transaction feature of MongoDB and use Spring Data MongoDB. However, I get the exception message "Sessions are not supported by the MongoDB cluster to which this client is connected". Any hint regarding the config of MongoDB 3.7.9 is appreciated.
The stacktrace starts with:
com.mongodb.MongoClientException: Sessions are not supported by the
MongoDB cluster to which this client is connected
at com.mongodb.MongoClient.startSession(MongoClient.java:555) ~[mongodb-driver-3.8.0-beta2.jar:na]
at org.springframework.data.mongodb.core.SimpleMongoDbFactory.getSession(SimpleMongoDbFactory.java:163)
~[spring-data-mongodb-2.1.0.DATAMONGO-1920-SNAPSHOT.jar:2.1.0.DATAMONGO-1920-SNAPSHOT]
I was having the same issue when trying to connect to a single standalone Mongo instance. However, as written in the official documentation, Mongo supports the transaction feature only for replica sets. So I created a replica set with all instances on MongoDB 4.0.0, and I was able to successfully execute the code.
So: start a replica set (3 members), then try to execute the code, and the issue will be resolved.
NB: you can configure a replica set on the same machine for tests: https://docs.mongodb.com/manual/tutorial/deploy-replica-set-for-testing/
We were able to configure it locally as below.
On Linux, a default /etc/mongod.conf configuration file is included when using a package manager to install MongoDB.
On Windows, a default <install directory>/bin/mongod.cfg configuration file is included during the installation.
On macOS, a default /usr/local/etc/mongod.conf configuration file is included when installing from MongoDB's official Homebrew tap.
Add the following config:
replication:
  oplogSizeMB: 128
  replSetName: "rs0"
  enableMajorityReadConcern: true
sudo service mongod restart;
mongo;
rs.initiate({
  _id: "rs0",
  version: 1,
  members: [
    { _id: 0, host: "localhost:27017" }
  ]
})
Check that the config is enabled:
rs.conf()
We can use the connection URL as:
mongodb://localhost/default?ssl=false&replicaSet=rs0&readPreference=primary
Docs: config-options, single-instance-replication
A replica set is definitely the resolution for the issue, but a replica set of 3 nodes is not mandatory.
Solution 1 (for a standalone setup)
For a standalone Mongo installation, you can skip configuring the 2nd or 3rd node, as described in the official Mongo documentation here.
You'll need to set a replSetName in the configuration:
replication:
  oplogSizeMB: <int>
  replSetName: <string>
  enableMajorityReadConcern: <boolean>
and then run the following (details of which are here):
rs.initiate()
After this, the connection string would be as below:
mongodb://localhost:27017/<database_name>?replicaSet=<replSet_Name>
Keys above that you need to replace:
database_name = name of the database
replSet_Name = name of the replica set you set up in the above configuration
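The substitution described above can be sketched as a small helper (the database and replica set names here are illustrative):

```java
public class MongoUri {
    // Build the standalone-replica-set connection string from the two
    // placeholders described above: <database_name> and <replSet_Name>.
    static String connectionString(String databaseName, String replSetName) {
        return "mongodb://localhost:27017/" + databaseName + "?replicaSet=" + replSetName;
    }

    public static void main(String[] args) {
        System.out.println(connectionString("mydb", "rs0"));
        // prints mongodb://localhost:27017/mydb?replicaSet=rs0
    }
}
```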
Solution 2 (only for a Docker-based requirement)
An example Docker image with a single-node replica set acting as the primary node for a development environment is below.
I have hosted the Docker image on Docker Hub:
docker pull krnbr/mongo:latest
The contents of the Dockerfile are below:
FROM mongo
RUN echo "rs.initiate({'_id':'rs0','members':[{'_id':0,'host':'127.0.0.1:27017'}]});" > /docker-entrypoint-initdb.d/replica-init.js
RUN cat /docker-entrypoint-initdb.d/replica-init.js
CMD [ "--bind_ip_all", "--replSet", "rs0" ]
Docker run commands (replace <Image Name> with the image name that you built yourself, or use the one shared above, i.e. krnbr/mongo):
without volume
docker run -d --name mongo -p 27017:27017 <Image Name> mongod --replSet rs0 --port 27017
with volume
docker run -d --name mongo -p 27017:27017 -v ~/.mongodb:/data/db <Image Name> mongod --replSet rs0 --port 27017
for supporting the binding of any IP
docker run -d --name mongo -p 27017:27017 -v ~/.mongodb:/data/db <Image Name> mongod --bind_ip_all --replSet rs0 --port 27017
I disabled TLS (inside Spring Data MongoDB), and now the transaction feature with the development release 3.7.9 works fine.
With reference to the answer given by #kakabali, I had a slightly different scenario and configured it as follows.
I was configuring Mongo with Spring Boot, trying to use transaction management, and getting the error:
com.mongodb.MongoClientException: Sessions are not supported by the
MongoDB cluster to which this client is connected at
I followed a few of the steps given in the above answer and added a few more:
Change mongo.cfg and add this:
replication:
  oplogSizeMB: 128
  replSetName: "rs0"
  enableMajorityReadConcern: true
Restart the service (I am using Windows 10).
Open the mongo console and run rs.initiate().
Make sure you're using the updated API - for example:
MongoClient mongoClient = MongoClients.create();
MongoDatabase dataBase = mongoClient.getDatabase("mainDatabase");
MongoCollection<Document> collection = dataBase.getCollection("entities");
Also make sure you have mongo.exe open.

Using composer-wallet-redis , where to see the card in redis service?

I followed the guideline.
I installed the composer-wallet-redis image and started the container.
export NODE_CONFIG={"composer":{"wallet":{"type":"#ampretia/composer-wallet-redis","desc":"Uses a local redis instance","options":{}}}}
composer card import admin#test-network.card
I found the card is still stored on my local machine at the path ~/.composer/card/.
How can I check whether the card exists in the redis server?
How to import the business network cards into the cloud custom wallet?
The primary issue (which I will correct in the README) is that the module name should be composer-wallet-redis. The #ampretia was a temporary repo.
Assuming that redis is started on the default port, you can run the redis CLI like this:
docker run -it --link composer-wallet-redis:redis --rm redis redis-cli -h redis -p 6379
You can then issue redis CLI commands to look at the data. It is not recommended to view or modify the data this way, but it is useful to confirm to yourself that it's working. The KEYS * command will display everything, but it should only be used in a development context; see the warnings on the redis docs pages.
I export NODE_ENVIRONMENT,
start the docker container,
run composer card import,
then execute the docker run *** command followed by KEYS *, and get (empty list or set).
#Calanais

Hooking up into running heroku phoenix application

The previous night I was tinkering with Elixir, running code on both of my machines at home, but when I woke up I asked myself: can I actually do the same using the heroku run command?
I think theoretically it should be entirely possible if set up properly. Obviously heroku run iex --sname name executes and gives me access to a shell (without a functioning backspace, which is irritating), but I haven't accessed my app from it yet.
Each time I executed the command it gave me a different machine; I guess that's how Heroku achieves sandboxing. I was also trying to find a way to determine the address of my app's machine, but haven't had any luck yet.
Can I actually connect to the dyno running the code and evaluate expressions on it, like you would with iex -S mix phoenix.server locally?
Unfortunately, it's not possible.
To interconnect Erlang VM nodes you'd need the EPMD port (4369) to be open, and Heroku doesn't allow opening custom ports, so it's not possible.
In case you'd want to establish a connection between your Phoenix server and an Elixir node, you'd have to do the following:
Two nodes on the same machine:
Start Phoenix using iex --name phoenix#127.0.0.1 -S mix phoenix.server
Start iex --name other_node#127.0.0.1
Establish a connection using Node.ping from other_node:
iex(other_node#127.0.0.1)1> Node.ping(:'phoenix#127.0.0.1')
(should return :pong not :pang)
Two nodes on different machines
Start Phoenix using some external address
iex --name phoenix#195.20.2.2 --cookie someword -S mix phoenix.server
Start second node
iex --name other_node#195.20.2.10 --cookie someword
Establish a connection using Node.ping from other_node:
iex(other_node#195.20.2.10)1> Node.ping(:'phoenix#195.20.2.2')
(should return :pong not :pang)
Both nodes should contact each other on the addresses they usually see each other at on the network (the full external IP when on different networks, 192.168.X.X when in the same local network, 127.0.0.1 when on the same machine).
If they're on different machines, they must also have the same cookie value set, because by default each node takes the automatically generated cookie in your home directory. You can check it by running:
cat ~/.erlang.cookie
The last thing you've got to do is make sure that the EPMD port 4369 is open, because the Erlang VM uses it for internode data exchange.
As a sidenote, if you leave it open, make sure to keep your cookie as private as possible, because if someone knows it, they can have absolute power over your machine.
When you execute heroku run it will start a new one-off dyno which is a temporary instance that is deprovisioned when you finish the heroku run session. This dyno is not a web dyno and cannot receive inbound HTTP requests through Heroku's routing layer.
From the docs:
One-off dynos can never receive HTTP traffic, since the routers only route traffic to dynos named web.N.
https://devcenter.heroku.com/articles/one-off-dynos#formation-dynos-vs-one-off-dynos
If you want your phoenix application to receive HTTP requests you will have to set it up to run on a web dyno.
It has been a while since you asked the question, but someone might find this answer valuable.
As of 2021, Heroku allows forwarding multiple ports, which makes it possible to remsh into a running Erlang VM node. It depends on how you deploy your application, but in general you will need to:
Give your node a name and a cookie (i.e. --name "myapp#127.0.0.1" --cookie "secret")
Tell exactly which port a node should bind to, so you know which port to forward (i.e. --erl "-kernel inet_dist_listen_min 9000 -kernel inet_dist_listen_max 9000")
Forward EPMD and Node ports by running heroku ps:forward 9001:4369,9000
Remsh into your node: ERL_EPMD_PORT=9001 iex --cookie "secret" --name console#127.0.0.1 --remsh "myapp#127.0.0.1"
Eventually you should start your server with something like this (if you are still using Mix tool): MIX_ENV=prod elixir --name "myapp#127.0.0.1" --cookie "secret" --erl "-kernel inet_dist_listen_min 9000 -kernel inet_dist_listen_max 9000" -S mix phx.server --no-halt
If you are using Releases, most of the setup has already been done for you by the Elixir team.
To verify that the EPMD port has been forwarded correctly, try running epmd -port 9001 -names. The output should be:
epmd: up and running on port 4369 with data:
name myapp#127.0.0.1 at port 9000
You may follow my notes on how I do it for Dockerized releases (there is a bit more hassle): https://paveltyk.medium.com/elixir-remote-shell-to-a-dockerized-release-on-heroku-cc6b1196c6ad

Session doesn't get saved in symfony with Memcached

I can't store sessions in the Memcached server!
I installed Memcached for PHP and the server itself.
I run the server with this command:
memcached -u root -d -m 64 -l 127.0.0.1 -p 11211
I have this in php.ini in fpm and cli
extension=memcached.so
session.save_handler = memcached
session.save_path = unix:/tmp/memcached.sock
I followed this for Symfony2:
https://gist.github.com/K-Phoen/4327229
You think everything is good? You are wrong, because I don't know why the sessions are not stored in memcached!
PS: I don't run the memcached server with service memcached start because that would start the server on a different port with nobody as the user.
Help me debug this, please.
You appear to be telling PHP to connect to the daemon via a socket, but the command you start Memcached with doesn't include the -s <file> parameter to have it create the socket you want to use. Given the path in your php.ini, that would be:
memcached -u root -d -m 64 -s /tmp/memcached.sock
See: MemcacheD sockets.