How to access Spring Boot JMX remotely

I know that Spring automatically exposes JMX beans. I was able to access them locally using VisualVM.
However, in production, how can I connect remotely to the app's JMX beans? Is there a default port, or do I need to define anything in addition?
Thanks,
ray.

By default JMX is automatically accessible locally, so running jconsole locally would detect all your local Java apps without any port exposure.
To access an app via JMX remotely you have to specify an RMI registry port. The thing to know is that when connecting, JMX initializes on that port and then establishes a data connection back on a random high port, which is a huge problem if you have a firewall in the middle. ("Hey sysadmins, just open up everything, mkay?")
To force JMX to use a fixed port for that data connection, you have the following options. Note: you can use different ports for JMX and RMI, or the same port for both.
Option 1: Command line
-Dcom.sun.management.jmxremote.port=$JMX_REGISTRY_PORT
-Dcom.sun.management.jmxremote.rmi.port=$RMI_SERVER_PORT
If you're using Spring Boot, you can put these in the (appname).conf file that lives alongside your (appname).jar deployment.
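For example, a minimal sketch of such a .conf file, assuming the deployment is myapp.jar; the port and hostname are placeholders you still have to choose:

# myapp.conf -- sourced by the Spring Boot launch script before it starts myapp.jar
JAVA_OPTS="-Dcom.sun.management.jmxremote.port=10001 -Dcom.sun.management.jmxremote.rmi.port=10001 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Djava.rmi.server.hostname=<HOST_IP>"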
Option 2: Tomcat/Tomee configuration
Configure a JmxRemoteLifecycleListener:
Maven Jar:
<dependency>
    <groupId>org.apache.tomcat</groupId>
    <artifactId>tomcat-catalina-jmx-remote</artifactId>
    <version>8.5.9</version>
    <type>jar</type>
</dependency>
Configure your server.xml:
<Listener className="org.apache.catalina.mbeans.JmxRemoteLifecycleListener"
rmiRegistryPortPlatform="10001" rmiServerPortPlatform="10002" />
Option 3: configure programmatically
@Configuration
public class ConfigureRMI {

    @Value("${jmx.rmi.host:localhost}")
    private String rmiHost;

    @Value("${jmx.rmi.port:1099}")
    private Integer rmiPort;

    @Bean
    public RmiRegistryFactoryBean rmiRegistry() {
        final RmiRegistryFactoryBean rmiRegistryFactoryBean = new RmiRegistryFactoryBean();
        rmiRegistryFactoryBean.setPort(rmiPort);
        rmiRegistryFactoryBean.setAlwaysCreate(true);
        return rmiRegistryFactoryBean;
    }

    @Bean
    @DependsOn("rmiRegistry")
    public ConnectorServerFactoryBean connectorServerFactoryBean() throws Exception {
        final ConnectorServerFactoryBean connectorServerFactoryBean = new ConnectorServerFactoryBean();
        connectorServerFactoryBean.setObjectName("connector:name=rmi");
        connectorServerFactoryBean.setServiceUrl(String.format("service:jmx:rmi://%s:%s/jndi/rmi://%s:%s/jmxrmi", rmiHost, rmiPort, rmiHost, rmiPort));
        return connectorServerFactoryBean;
    }
}
The trick, you'll see, is the serviceUrl, in which you specify both the jmx:rmi host/port and the jndi:rmi host/port. If you specify both, you won't get the random high-port problem.
Edit: For JMX remoting to work, you'll need to decide how to handle authentication. It's better to do it in 3 distinct steps: 1) get a basic setup working with -Dcom.sun.management.jmxremote.authenticate=false and -Dcom.sun.management.jmxremote.ssl=false; 2) add a password file (-Dcom.sun.management.jmxremote.password.file) and re-enable authentication (see the JDK documentation for instructions); and then 3) set up SSL.

Add the following JVM properties to "$JAVA_OPTS" (in your application):
-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=<PORT_NUMBER> -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Djava.rmi.server.hostname=<HOST'S_IP>
In JConsole/VisualVM, use the following to connect:
service:jmx:rmi:///jndi/rmi://<HOST'S_IP>:<PORT_NUMBER>/jmxrmi
This doesn't enable security, but it will let you connect to the remote server.
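jconsole also accepts the same service URL (or a plain host:port) as a command-line argument, so from a remote machine you can connect with, for example:

jconsole service:jmx:rmi:///jndi/rmi://<HOST'S_IP>:<PORT_NUMBER>/jmxrmi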

An approach tested on Java 1.8.0_71 and Spring Boot 1.3.3.RELEASE.
Append the parameters below to the JVM arguments of the monitored JVM.
-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=12348 -Dcom.sun.management.jmxremote.authenticate=true -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.rmi.port=12349 -Dcom.sun.management.jmxremote.password.file=/somewhere/jmxremote.password -Dcom.sun.management.jmxremote.access.file=/somewhere/jmx/jmxremote.access
com.sun.management.jmxremote.port defines the fixed RMI registry port, and com.sun.management.jmxremote.rmi.port instructs the JVM to use a fixed RMI server port instead of a random one.
With these settings, I am able to connect a JMX client from a remote host to the monitored JVM through a firewall by opening only ports 12348 and 12349.
I tested with java -jar cmdline-jmxclient-0.10.3.jar user:pwd hostip:12348 on a remote machine, which generates the output below (shortened for demonstration).
java.lang:type=Runtime
java.lang:name=PS Scavenge,type=GarbageCollector
Tomcat:J2EEApplication=none,J2EEServer=none,WebModule=//localhost/,j2eeType=Filter,name=requestContextFilter
java.nio:name=mapped,type=BufferPool
Tomcat:host=localhost,type=Host
java.lang:name=Compressed Class Space,type=MemoryPool
.......
The jar can be downloaded from here.

Another alternative
This one uses jmxremote.password and jmxremote.access files for authentication.
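For reference, both files follow the standard JMX format; the role name and password below are just placeholders, and the password file must be readable only by its owner (e.g. chmod 600):

# /tmp/jmxremote.password -- one "role password" pair per line
monitorRole s3cr3t

# /tmp/jmxremote.access -- one "role access-level" pair per line (readonly or readwrite)
monitorRole readonly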
import java.util.HashMap;
import java.util.Map;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.DependsOn;
import org.springframework.jmx.support.ConnectorServerFactoryBean;
import org.springframework.remoting.rmi.RmiRegistryFactoryBean;
@Configuration
public class ConfigureRMI {

    @Value("${jmx.rmi.password.file:/tmp/jmxremote.password}")
    private String passwordFile;

    @Value("${jmx.rmi.access.file:/tmp/jmxremote.access}")
    private String accessFile;

    @Value("${jmx.rmi.port:19999}")
    private Integer rmiPort;

    @Bean
    public RmiRegistryFactoryBean rmiRegistry() {
        final RmiRegistryFactoryBean rmiRegistryFactoryBean = new RmiRegistryFactoryBean();
        rmiRegistryFactoryBean.setPort(rmiPort);
        rmiRegistryFactoryBean.setAlwaysCreate(true);
        return rmiRegistryFactoryBean;
    }

    @Bean
    @DependsOn("rmiRegistry")
    public ConnectorServerFactoryBean connectorServerFactoryBean() throws Exception {
        final ConnectorServerFactoryBean connectorServerFactoryBean = new ConnectorServerFactoryBean();
        connectorServerFactoryBean.setObjectName("connector:name=rmi");
        Map<String, Object> properties = new HashMap<>();
        properties.put("jmx.remote.x.password.file", passwordFile);
        properties.put("jmx.remote.x.access.file", accessFile);
        connectorServerFactoryBean.setEnvironmentMap(properties);
        connectorServerFactoryBean.setServiceUrl(String.format("service:jmx:rmi:///jndi/rmi://:%s/jmxrmi", rmiPort));
        return connectorServerFactoryBean;
    }
}

Related

Spring Boot 2.7.1 LetsEncrypt PEM keystore throws Resource location must not be null

So I read that Spring Boot supports PEM files since 2.7.0:
https://docs.spring.io/spring-boot/docs/2.7.0-SNAPSHOT/reference/htmlsingle/#howto.webserver.configure-ssl 17.3.7. Configure SSL
So I am using PEM generated by certbot.
My application.properties
spring.jpa.generate-ddl=true
spring.jpa.hibernate.ddl-auto=update
spring.jpa.show-sql=false
spring.jpa.properties.hibernate.format_sql=false
server.port=443
server.ssl.certificate=fullchain1.pem
server.ssl.certificate.certificate-private-key=privkey1.pem
server.ssl.trust-certificate=fullchain1.pem
When I launch I get
org.springframework.context.ApplicationContextException: Unable to start web server; nested exception is org.springframework.boot.web.server.WebServerException: Could not load key store 'null'
Caused by: org.springframework.boot.web.server.WebServerException: Could not load key store 'null'
Caused by: java.lang.IllegalArgumentException: Resource location must not be null
As per the Spring Boot SSL configuration documentation.
UPDATE: Adding the content from the link directly into the answer, as the link can get updated.
SSL can be configured declaratively by setting the various server.ssl.* properties, typically in application.properties or application.yml. The following example shows setting SSL properties using a Java KeyStore file:
server.port=8443
server.ssl.key-store=classpath:keystore.jks
server.ssl.key-store-password=secret
server.ssl.key-password=another-secret
The following example shows setting SSL properties using PEM-encoded certificate and private key files:
server.port=8443
server.ssl.certificate=classpath:my-cert.crt
server.ssl.certificate-private-key=classpath:my-cert.key
server.ssl.trust-certificate=classpath:ca-cert.crt
Your properties are not declared correctly:
server.ssl.certificate.certificate-private-key=privkey1.pem should be changed to server.ssl.certificate-private-key=privkey1.pem
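With that one change, the SSL part of the question's application.properties becomes:

server.port=443
server.ssl.certificate=fullchain1.pem
server.ssl.certificate-private-key=privkey1.pem
server.ssl.trust-certificate=fullchain1.pem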
So this workaround works:
import org.springframework.boot.web.embedded.tomcat.TomcatServletWebServerFactory;
import org.springframework.boot.web.server.Ssl;
import org.springframework.boot.web.servlet.server.ConfigurableServletWebServerFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class SSLConfig {

    @Bean
    public ConfigurableServletWebServerFactory webServerFactory() throws Exception {
        TomcatServletWebServerFactory factory = new TomcatServletWebServerFactory();
        Ssl ssl = new Ssl();
        ssl.setEnabled(true);
        ssl.setCertificate("cert1.pem");
        ssl.setCertificatePrivateKey("privkey1.pem");
        ssl.setKeyStoreType("PKCS12");
        ssl.setKeyStorePassword(""); // without this, decryption fails
        factory.setSsl(ssl);
        factory.setPort(443);
        return factory;
    }
}
server.ssl.key-store=file:///Users/...
Have you tried setting the path this way? First make sure your application starts up with the correct path, then dig into the next step.

Cannot connect to Postgres database, there is an authentication issue [duplicate]

I have recently tried my hand at Postgres and installed it locally (PostgreSQL 13.0).
I created a Maven project and used Spring Data JPA, which works just fine. However, when I tried a Gradle project, I am not able to connect to the DB and keep getting the following error:
org.postgresql.util.PSQLException: The authentication type 10 is not supported. Check that you have configured the pg_hba.conf file to include the client's IP address or subnet, and that it is using an authentication scheme supported by the driver.
    at org.postgresql.core.v3.ConnectionFactoryImpl.doAuthentication(ConnectionFactoryImpl.java:614) ~[postgresql-42.1.4.jar:42.1.4]
    at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:222) ~[postgresql-42.1.4.jar:42.1.4]
    at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:49) ~[postgresql-42.1.4.jar:42.1.4]
    at org.postgresql.jdbc.PgConnection.<init>(PgConnection.java:194) ~[postgresql-42.1.4.jar:42.1.4]
    at org.postgresql.Driver.makeConnection(Driver.java:450) ~[postgresql-42.1.4.jar:42.1.4]
    at org.postgresql.Driver.connect(Driver.java:252) ~[postgresql-42.1.4.jar:42.1.4]
    at java.sql.DriverManager.getConnection(Unknown Source) [na:1.8.0_261]
    at java.sql.DriverManager.getConnection(Unknown Source) [na:1.8.0_261]
    at org.postgresql.ds.common.BaseDataSource.getConnection(BaseDataSource.java:94) [postgresql-42.1.4.jar:42.1.4]
    at org.postgresql.ds.common.BaseDataSource.getConnection(BaseDataSource.java:79) [postgresql-42.1.4.jar:42.1.4]
I tried using JdbcTemplate as well - doesn't work.
Modified the pg_hba.conf file referring to this post - doesn't work.
Used the deprecated lib - doesn't work either.
Please suggest a solution for this problem.
My code and config:
@Configuration
public class DataSourceConfig {

    @Bean
    public DriverManagerDataSource getDataSource() {
        DriverManagerDataSource dataSourceBuilder = new DriverManagerDataSource();
        dataSourceBuilder.setDriverClassName("org.postgresql.Driver");
        dataSourceBuilder.setUrl("jdbc:postgresql://localhost:5432/postgres");
        dataSourceBuilder.setUsername("postgres");
        dataSourceBuilder.setPassword("root");
        return dataSourceBuilder;
    }
}

@Component
public class CustomerOrderJDBCTemplate implements CustomerOrderDao {

    private DataSource dataSource;
    private JdbcTemplate jdbcTemplateObject;

    @Autowired
    ApplicationContext context;

    public void setDataSource() {
        // Getting bean by class
        DriverManagerDataSource dataSource = context.getBean(DriverManagerDataSource.class);
        this.dataSource = dataSource;
        this.jdbcTemplateObject = new JdbcTemplate(this.dataSource);
    }

    @Override
    public Customer create(Customer customer) {
        setDataSource();
        String sql = "insert into CustomerOrder (customerType, customerPayment) values (?, ?)";
        //jdbcTemplateObject.update(sql, customerOrder.getCustomerOrderType(), customerOrder.getCustomerOrderPayment());
        KeyHolder holder = new GeneratedKeyHolder();
        jdbcTemplateObject.update(new PreparedStatementCreator() {
            @Override
            public PreparedStatement createPreparedStatement(Connection connection) throws SQLException {
                PreparedStatement ps = connection.prepareStatement(sql, Statement.RETURN_GENERATED_KEYS);
                ps.setString(1, customer.getType());
                ps.setString(2, customer.getPayment());
                return ps;
            }
        }, holder);
        long customerId = holder.getKey().longValue();
        customer.setCustomerID(customerId);
        return customer;
    }
}
dependencies {
    implementation('org.springframework.boot:spring-boot-starter-web')
    compile("org.springframework.boot:spring-boot-devtools")
    compile(group: 'org.postgresql', name: 'postgresql', version: '42.1.4')
    compile("org.springdoc:springdoc-openapi-ui:1.4.1")
    compile("org.springframework:spring-jdbc:5.2.5.RELEASE")
}
password_encryption is set like this:
postgres=# show password_encryption;
password_encryption
---------------------
scram-sha-256
(1 row)
I solved a similar issue by applying the steps below in PostgreSQL version 13:
Change password_encryption to md5 in postgresql.conf
Windows: C:\Program Files\PostgreSQL\13\data\postgresql.conf
GNU/Linux: /etc/postgresql/13/main/postgresql.conf
Change scram-sha-256 to md5 in pg_hba.conf
Windows: C:\Program Files\PostgreSQL\13\data\pg_hba.conf
GNU/Linux: /etc/postgresql/13/main/pg_hba.conf
host all all 0.0.0.0/0 md5
Change the password (this re-stores the password in md5 format).
Example: ALTER ROLE postgres WITH PASSWORD 'root';
Make sure you set listen_addresses = '*' in postgresql.conf if you are working in a non-production environment.
According to the wiki, the supported JDBC driver for SCRAM-SHA-256 encryption is 42.2.0 or above.
In my case, the driver was 41.1.1. Change it to 42.2.0 or above. That fixed it for me.
(Maven, pom.xml):
<dependency>
    <groupId>org.postgresql</groupId>
    <artifactId>postgresql</artifactId>
    <version>42.2.0</version>
</dependency>
Get your pg_hba.conf file in the directory
C:\Program Files\PostgreSQL\13\data\pg_hba.conf
and simply change scram-sha-256 under the METHOD column to trust.
It worked for me!
By setting password_encryption to scram-sha-256 (which is the default value in v13) you also get scram-sha-256 authentication, even if you have md5 in pg_hba.conf.
Now you are using an old JDBC driver version on the client side that does not support that authentication method, even though PostgreSQL introduced it in v10, three years ago.
You should upgrade your JDBC driver. An alternative would be to set password_encryption back to md5, but then you'll have to reset all passwords and live with lower security.
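Since the question uses Gradle, upgrading means bumping the driver coordinate there; any version from 42.2.0 up supports SCRAM, for example (42.2.18 is just one such release):

dependencies {
    implementation 'org.postgresql:postgresql:42.2.18'
}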
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">

    <modelVersion>4.0.0</modelVersion>

    <groupId>org.example</groupId>
    <artifactId>postgresJDBC</artifactId>
    <version>1.0-SNAPSHOT</version>

    <properties>
        <java.version>11</java.version>
        <maven.compiler.target>${java.version}</maven.compiler.target>
        <maven.compiler.source>${java.version}</maven.compiler.source>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.postgresql</groupId>
            <artifactId>postgresql</artifactId>
            <version>42.2.18</version>
        </dependency>
    </dependencies>
</project>
You have to check your Maven dependency. If you are using PostgreSQL 9.1+, your dependency should look like the above.
To learn more about the Maven dependency, refer to this link: How do you add PostgreSQL Driver as a dependency in Maven?
Change METHOD to "trust" in pg_hba.conf
In case you are struggling to get this working in Docker:
Firstly: run the container with -e POSTGRES_HOST_AUTH_METHOD=md5 (doc)
docker run -e POSTGRES_HOST_AUTH_METHOD=md5 -e POSTGRES_PASSWORD=doesntmatter -p 5432:5432 --name CONTAINERNAME -d postgres
Secondly: allow md5 encryption as discussed in other answers:
docker exec -ti -u postgres CONTAINERNAME bash -c "echo 'password_encryption=md5' >> /var/lib/postgresql/data/postgresql.conf"
Thirdly: restart the container
docker restart CONTAINERNAME
Fourthly: you need to recreate the postgres password in md5 format
docker exec -ti -u postgres CONTAINERNAME psql
alter role postgres with password 'THE-NEW-PASSWORD';
* please be aware scram-sha-256 is much better than md5 (doc)
Use these:
wget https://jdbc.postgresql.org/download/postgresql-42.2.24.jar
Copy it to your Hive library:
sudo mv postgresql-42.2.24.jar /opt/hive/lib/postgresql-42.2.24.jar
For me, updating the Postgres library fixed this.
Working fine with version 12.6... just downgrade PostgreSQL.
You might need to check the version of Postgres you are running. You may also need to update the Spring version if it is being set through the Spring parent.
In my case, the current Postgres was at v13. I modified the Spring parent version (it was on 1.4; changed it to 2.14), then updated the Maven dependency and re-ran the application. This fixed the issue.
Suggestions:
A current JDBC driver will help (e.g. postgresql-42.3.6.jar).
Copy it to the /jars folder under your Spark install directory (I'm assuming a single machine in this example).
Python: install "findspark" to make pyspark importable as a regular library.
Here is an example I hope will help someone:
import findspark
findspark.init()
from pyspark.sql import SparkSession
sparkClassPath = "C:/spark/spark-3.0.3-bin-hadoop2.7/jars"
spark = SparkSession \
.builder \
.config("spark.driver.extraClassPath", sparkClassPath) \
.getOrCreate()
df = spark.read \
.format("jdbc") \
.option("url", "jdbc:postgresql://{YourHostName}:5432/{YourDBName}") \
.option("driver", "org.postgresql.Driver") \
.option("dbtable", "{YourTableName}") \
.option("user", "{YourUserName") \
.option("password", "{YourSketchyPassword") \
.load()
Install pgadmin if you have not already done so.
Try it via Docker
You need to download the PostgreSQL JDBC .jar and then move it into the .../jre/lib/ext/ folder. It worked for me.
Even after changing pg_hba.conf to md5 for everything, it didn't work.
What worked was doing this:
show password_encryption;
If it shows up as scram-sha-256, do this:
set password_encryption = 'md5';
Restart the server; this solved my issue.
Use the latest Maven dependency for Postgres in pom.xml.
Changing the method for IPv4 local connections to trust worked for me.
Solution:
Get your pg_hba.conf file in the directory C:\Program Files\PostgreSQL\13\data\pg_hba.conf
and simply change scram-sha-256 under the METHOD column to trust.
I guess the solution to this problem is using version 9.6.
It works just fine after changing the version.
Open pg_hba.conf
Set IPv4 local connections to trust
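For example, the IPv4 local-connection line in pg_hba.conf would then look something like this (using the default local address range):

host    all    all    127.0.0.1/32    trust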

Task execution is not working after launching the task in Spring Cloud Data Flow

I have created a Spring Boot application with the @EnableTask annotation and try to print the arguments to the log.
package com.custom.samplejob;
import org.springframework.boot.CommandLineRunner;
import org.springframework.cloud.task.configuration.EnableTask;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
@Configuration
@EnableTask
public class TaskConfiguration {

    @Bean
    public CommandLineRunner commandLineRunner() {
        return args -> {
            System.out.println(args);
        };
    }
}
Then I ran mvn clean install to put the jar in the local Maven repo:
com.custom:samplejob:0.0.1-SNAPSHOT
I use a custom docker-compose to run Spring Cloud Data Flow locally on Windows with the parameters below:
set HOST_MOUNT_PATH=C:\Users\user\.m2 (Local maven repository mounting)
set DOCKER_MOUNT_PATH=/root/.m2/
set DATAFLOW_VERSION=2.7.1
set SKIPPER_VERSION=2.6.1
docker-compose up
I use the command below to register the app:
app register --type task --name custom-task-trail-1 --uri maven://com.custom:samplejob:0.0.1-SNAPSHOT
I created a task using the UI (URL below) and launched it. The task was launched successfully.
http://localhost:9393/dashboard/#/tasks-jobs/tasks
These are the logs I can see in the docker-compose up terminal,
dataflow-server | 2021-02-15 13:20:41.673 INFO 1 --- [nio-9393-exec-9] o.s.c.d.spi.local.LocalTaskLauncher : Preparing to run an application from com.custom:samplejob:jar:0.0.1-SNAPSHOT. This may take some time if the artifact must be downloaded from a remote host.
dataflow-server | 2021-02-15 13:20:41.693 INFO 1 --- [nio-9393-exec-9] o.s.c.d.spi.local.LocalTaskLauncher : Command to be executed: /usr/lib/jvm/jre-11.0.8/bin/java -jar /root/.m2/repository/com/custom/samplejob/0.0.1-SNAPSHOT/samplejob-0.0.1-SNAPSHOT.jar --name=dsdsds --spring.cloud.task.executionid=38
dataflow-server | 2021-02-15 13:20:41.702 INFO 1 --- [nio-9393-exec-9] o.s.c.d.spi.local.LocalTaskLauncher : launching task custom-task-trail-1-48794885-9a0a-4c46-a2a1-299bf91763ad
dataflow-server | Logs will be in /tmp/4921907601400/custom-task-trail-1-48794885-9a0a-4c46-a2a1-299bf91763ad
But the task execution list doesn't show the status, start date, and end date of those task executions.
Can someone help me resolve this, or am I missing anything in the local installation or in the task's Spring Boot implementation?
I enabled Kubernetes on Docker Desktop and installed the Spring Cloud Data Flow server on top of that.
Then I registered the app with a docker URI, generating the docker image with the jib-maven-plugin.
Now the sample task application works in my case.
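A minimal sketch of that setup, assuming the image is pushed as myrepo/samplejob (the repository name, tag, and plugin version here are placeholders, not from the original post):

<plugin>
    <groupId>com.google.cloud.tools</groupId>
    <artifactId>jib-maven-plugin</artifactId>
    <version>3.3.1</version>
    <configuration>
        <to>
            <image>myrepo/samplejob:0.0.1-SNAPSHOT</image>
        </to>
    </configuration>
</plugin>

and then registering the docker image instead of the Maven artifact:

app register --type task --name custom-task-trail-1 --uri docker:myrepo/samplejob:0.0.1-SNAPSHOT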

Kafka Connect can't find ZoneRulesProvider after adding db.timezone property

I'm running Kafka Connect (Confluent) in distributed mode, and have recently added the db.timezone property to my JDBC Source Connector. After doing so I'm seeing this error when I load the connector:
java.lang.NoClassDefFoundError: Could not initialize class java.time.zone.ZoneRulesProvider
    at java.time.ZoneRegion.ofId(ZoneRegion.java:120)
    at java.time.ZoneId.of(ZoneId.java:411)
This is happening from this code in JdbcSourceConnectorConfig:
https://github.com/confluentinc/kafka-connect-jdbc/blob/master/src/main/java/io/confluent/connect/jdbc/source/JdbcSourceConnectorConfig.java#L807
If I log into my Kafka Connect box and run java -version I get:
openjdk version "1.8.0_262"
OpenJDK Runtime Environment (build 1.8.0_262-b10)
OpenJDK 64-Bit Server VM (build 25.262-b10, mixed mode)
If I create a small Java program like the following and run it on that box, it works fine:
import java.time.ZoneId;
import java.time.zone.ZoneRulesProvider;
import java.util.TimeZone;
public class TestTime {
    public static void main(String[] args) {
        String dbTimeZone = "America/New_York";
        System.out.println(TimeZone.getTimeZone(ZoneId.of(dbTimeZone)));
        System.out.println(ZoneRulesProvider.getAvailableZoneIds());
    }
}
So why is confluent/kafka connect breaking on it? Why would it not be able to find ZoneRulesProvider?
After restarting Kafka Connect on my servers, this issue appears to have gone away.

Proxy problem using testcontainers and spring-boot

I am trying to write a Spring Boot integration test using the docker-compose module of Testcontainers. I get the following exception on
startup:
java.lang.ExceptionInInitializerError
Caused by: com.github.dockerjava.api.exception.InternalServerErrorException:
{"message":"Get https://quay.io/v2/: dial tcp: lookup quay.io on 192.168.65.1:53: no such host"}
I already tried to add our company HTTP proxy using withEnv, but it doesn't work.
@RunWith(SpringRunner.class)
@SpringBootTest
public class FtpExportIntegrationTest {

    @ClassRule
    public static DockerComposeContainer environment =
        new DockerComposeContainer(new File("src/test/resources/docker-compose.yml"))
            .withExposedService("search-kafka", 9092)
            .withEnv("HTTP_PROXY", "http://proxy.mycompany.com:8080")
            .withEnv("HTTPS_PROXY", "http://proxy.mycompany.com:8080")
            .withEnv("http_proxy", "http://proxy.mycompany.com:8080")
            .withEnv("https_proxy", "http://proxy.mycompany.com:8080");
}
You need to set the proxy in your Docker daemon settings, not the container's.
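The error comes from the daemon itself failing to pull from quay.io, so the proxy has to be visible to dockerd. With Docker Desktop that is the Proxies section of the settings UI; on a plain Linux host one common way is a systemd drop-in, sketched below using the company proxy address from the question:

# /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://proxy.mycompany.com:8080"
Environment="HTTPS_PROXY=http://proxy.mycompany.com:8080"

# then reload the unit files and restart the daemon:
# systemctl daemon-reload && systemctl restart docker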
