CustomHostnameVerifier instead of the default HostnameVerifier for SSL in Spring Boot Reactor Netty

I am using the HTTP/2 protocol and trying to disable hostname verification, but this didn't work for me:
return HttpClient.create()
    .secure(sslContextSpec ->
        sslContextSpec.sslContext(createSslContext(pcfSmpcClientProperties))
            .handlerConfigurator(handler -> {
                SSLEngine engine = handler.engine();
                //engine.setNeedClientAuth(true);
                SSLParameters params = new SSLParameters();
                List<SNIMatcher> matchers = new LinkedList<>();
                SNIMatcher matcher = new SNIMatcher(0) {
                    @Override
                    public boolean matches(SNIServerName serverName) {
                        return true;
                    }
                };
                matchers.add(matcher);
                params.setSNIMatchers(matchers);
                engine.setSSLParameters(params);
            }))
    .wiretap(true)
    .protocol(HttpProtocol.H2)
    .compress(true)
    .followRedirect(true);

You are suppressing the Server Name Indication (SNI) matchers, but what you are looking for is to skip hostname verification. To achieve that, set the endpointIdentificationAlgorithm to either null or "", depending on your JDK version.
SSLEngine engine = handler.engine();
SSLParameters params = new SSLParameters();
// an empty (or null) algorithm disables the HTTPS hostname check
params.setEndpointIdentificationAlgorithm("");
engine.setSSLParameters(params);
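Putting it together with the client from the question, the handler configurator would look something like the sketch below (createSslContext(...) is the poster's own helper; starting from engine.getSSLParameters() rather than a fresh SSLParameters is an assumption, made to avoid resetting the other TLS parameters):
return HttpClient.create()
    .secure(sslContextSpec ->
        sslContextSpec.sslContext(createSslContext(pcfSmpcClientProperties))
            .handlerConfigurator(handler -> {
                SSLEngine engine = handler.engine();
                // copy the engine's current parameters so only the identification algorithm changes
                SSLParameters params = engine.getSSLParameters();
                params.setEndpointIdentificationAlgorithm("");
                engine.setSSLParameters(params);
            }))
    .protocol(HttpProtocol.H2);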

Related

No server chosen by com.mongodb.async.client.ClientSessionHelpe from cluster description ClusterDescription

I am trying to connect to AWS DocumentDB with the async MongoClient.
I created a DocumentDB cluster in AWS and can connect successfully from the command line.
I followed the guide linked here, created a MongoClient, and successfully connected and inserted events.
But when I tried to create a com.mongodb.async.client.MongoClient, the connection failed with the following error:
No server chosen by WritableServerSelector from cluster description
ClusterDescription{type=REPLICA_SET, connectionMode=MULTIPLE,
serverDescriptions=[ServerDescription{address=aws-cluster:27017,
type=UNKNOWN, state=CONNECTING,
exception={com.mongodb.MongoSocketReadTimeoutException: Timeout while
receiving message}, caused by
{io.netty.handler.timeout.ReadTimeoutException}}]}. Waiting for 30000
ms before timing out.
ClusterSettings clusterSettings = ClusterSettings.builder()
.applyConnectionString(new ConnectionString(connectionString)).build();
List<MongoCredential> credentials = new ArrayList<>();
credentials.add(
MongoCredential.createCredential(
mongoUserName,
mongoDBName,
mongoPassword));
MongoClientSettings settings = MongoClientSettings.builder()
.credentialList(credentials)
.clusterSettings(clusterSettings)
.streamFactoryFactory(new NettyStreamFactoryFactory())
.writeConcern(WriteConcern.ACKNOWLEDGED)
.build();
com.mongodb.async.client.MongoClient mongoClient = MongoClients.create(settings);
MongoDatabase testDB = mongoClient.getDatabase("myDB");
MongoCollection<Document> collection = testDB.getCollection("test");
Document doc = new Document("name", "MongoDB").append("type", "database");
//**trying to insert a document => here I got the error**
collection.insertOne(doc, new SingleResultCallback<Void>() {
@Override
public void onResult(final Void result, final Throwable t) {
System.out.println("Inserted!");
}
});
Do you have any idea why this happens?
I solved it by using a URI:
String uri = "mongodb://<username>:<password>@<hostname>:27017/?ssl=true&ssl_ca_certs=cert";
MongoClientSettings settings = MongoClientSettings.builder()
.streamFactoryFactory(new NettyStreamFactoryFactory())
.applyConnectionString(new ConnectionString(uri))
.build();
com.mongodb.async.client.MongoClient mongoClient = MongoClients.create(settings);
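Note that ssl_ca_certs in that URI is about trusting the DocumentDB certificate authority; with the Java driver this is typically handled by importing the RDS/DocumentDB CA bundle into a trust store and pointing the JVM at it, roughly as sketched below (the store path and password are assumptions):
// Assumption: the DocumentDB CA bundle (rds-combined-ca-bundle.pem) has already been
// imported into a JKS trust store, e.g. with keytool.
System.setProperty("javax.net.ssl.trustStore", "/path/to/rds-truststore.jks");
System.setProperty("javax.net.ssl.trustStorePassword", "changeit");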
I encountered a similar error; for me it was related to the TLS configuration.
I disabled TLS in DocumentDB: https://docs.aws.amazon.com/documentdb/latest/developerguide/security.encryption.ssl.html
In my case I had to restart the cluster after disabling TLS (TLS was not needed for the use case). After the restart the connection was established successfully.

Create java RestHighLevelClient in elastic cluster mode

If Elasticsearch runs in single-node mode, I can easily establish the RestHighLevelClient connection with this code:
RestHighLevelClient client = new RestHighLevelClient(
RestClient.builder(
new HttpHost("localhost", 9200, "http"),
new HttpHost("localhost", 9201, "http")));
But if my Elasticsearch cluster has 3 machines, e.g. "host1", "host2", "host3", how do I create the RestHighLevelClient in cluster mode?
Thanks
RestHighLevelClient client = new RestHighLevelClient(
RestClient.builder(
new HttpHost("host1", 9200, "http"),
new HttpHost("host2", 9200, "http"),
new HttpHost("host2", 9200, "http")
)
);
As the doc you appear to be referencing states, RestClient.builder accepts an array of HttpHosts to connect to. The client (which under the hood is the ES low-level REST client) will round-robin requests across these hosts. See also the Javadoc.
As per the Elasticsearch docs, you can pass multiple Elasticsearch hosts in RestClient.builder().
A better solution is to load the Elasticsearch hosts from configuration (application.conf in the case of a Scala-based application) instead of hardcoding them in the codebase.
Here is a Scala-based solution using Java varargs (:_*).
application.conf
es_hosts = ["x.x.x.x","x.x.x.x","x.x.x.x"] // You can even use service-name/service-discovery
es_port = 9200
es_scheme = "http"
Code snippet
import collection.JavaConverters._
import com.typesafe.config.ConfigFactory
import org.apache.http.HttpHost
import org.elasticsearch.client.{RestClient, RestHighLevelClient}
val config = ConfigFactory.load()
val port = config.getInt(ES_PORT)
val scheme = config.getString(ES_SCHEME)
val es_hosts = config.getStringList(ES_HOSTS).asScala
val httpHosts = es_hosts.map(host => new HttpHost(host, port, scheme))
val low_level_client = RestClient.builder(httpHosts:_*)
val high_level_client: RestHighLevelClient = new RestHighLevelClient(low_level_client)
To create a high-level REST client using multiple hosts, you can do something like the following:
String[] esHosts = new String[]{"node1-example.com:9200", "node2-example.com:9200",
"node3-example.com:9200"};
final ClientConfiguration clientConfiguration = ClientConfiguration.builder()
.connectedTo(esHosts)
.build();
RestHighLevelClient restClient = RestClients.create(clientConfiguration).rest();
// Hostnames used for building client can be verified as following
List<Node> nodes = restClient.getLowLevelClient().getNodes();
nodes.forEach(node -> System.out.println(node.toString()));
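For completeness, a quick way to verify whichever client you build is a simple ping (a sketch; ping(RequestOptions) exists on recent RestHighLevelClient versions and throws IOException):
import org.elasticsearch.client.RequestOptions;

// ping the cluster to check that at least one of the configured nodes is reachable
boolean reachable = restClient.ping(RequestOptions.DEFAULT);
System.out.println("Cluster reachable: " + reachable);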
References:
Docs for High Level REST Client
Source code for ClientConfigurationBuilder
I am creating my Elasticsearch REST client with the steps below (hosts and credentialsProvider come from my own configuration):
List<HttpHost> hostList = new ArrayList<>();
for (String host : hosts) {
    String[] hostDetails = host.split(":");
    hostList.add(new HttpHost(hostDetails[0], Integer.parseInt(hostDetails[1]), "https"));
}
RestHighLevelClient client = new RestHighLevelClient(
        RestClient.builder(hostList.toArray(new HttpHost[0]))
                .setHttpClientConfigCallback(httpClientBuilder ->
                        // only needed if security/authentication is enabled
                        httpClientBuilder.setDefaultCredentialsProvider(credentialsProvider)));

Exception when Implementing RPC Security Management in corda using spring webserver

The Spring web server does not start when we specify the RPC security management configuration in the node.conf file.
We get the error Unresolved reference: proxy while running PartyAServer.
Below is my server configuration for the PartyA node:
task runPartyAServer(type: JavaExec) {
classpath = sourceSets.main.runtimeClasspath
main = 'com.example.server.Server'
environment "server.port", "10022"
environment "config.rpc.host", "localhost"
environment "config.rpc.port", "10006"
}
I am able to start the node with the following node A configuration, but I am facing the error while running the PartyA server.
node {
name "O=PartyA,L=London,C=GB"
advertisedServices = ["com.example"]
p2pPort 10005
rpcPort 10006
cordapps = ["$corda_release_group:corda-finance:$corda_release_version",
"com.example:java-source:$version",
"com.example:base:$version"]
}
Below is my node.conf for PartyA node :-
extraAdvertisedServiceIds=[
"com.example"
]
myLegalName="O=PartyA,L=London,C=GB"
networkMapService {
address="localhost:10002"
legalName="O=Controller,L=London,C=GB"
}
p2pAddress="localhost:10005"
rpcAddress="localhost:10006"
rpcUsers=[]
security = {
authService = {
dataSource = {
type = "DB",
passwordEncryption = "SHIRO_1_CRYPT",
connection = {
jdbcUrl = "jdbc:oracle:thin:#172.16.105.21:1521:SFMS"
username = "abinay"
password = "abinay"
driverClassName = "oracle.jdbc.OracleDriver"
}
}
options = {
cache = {
expireAfterSecs = 120
maxEntries = 10000
}
}
}
}
Without a username and password, how will the nodeRPCConnection (proxy) be established with the following code?
@PostConstruct
public void initialiseNodeRPCConnection() {
    NetworkHostAndPort rpcAddress = new NetworkHostAndPort(host, rpcPort);
    CordaRPCClient rpcClient = new CordaRPCClient(rpcAddress);
    rpcConnection = rpcClient.start(username, password);
    proxy = rpcConnection.getProxy();
    staticMap.put("proxy", proxy);
}
This strikes me as more likely to be an Oracle connection issue.
I'd start by writing some Java code just to ensure that you can connect to the Oracle DB, and then focus on getting that to work in the CorDapp environment.
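For example, a minimal standalone check against the data source from the node.conf above might look like the sketch below (the Oracle JDBC driver must be on the classpath; the URL and credentials are the ones from the question):
import java.sql.Connection;
import java.sql.DriverManager;

public class OracleConnectionCheck {
    public static void main(String[] args) throws Exception {
        // same values as security.authService.dataSource.connection in node.conf
        String jdbcUrl = "jdbc:oracle:thin:@172.16.105.21:1521:SFMS";
        try (Connection conn = DriverManager.getConnection(jdbcUrl, "abinay", "abinay")) {
            System.out.println("Connected to: " + conn.getMetaData().getDatabaseProductVersion());
        }
    }
}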
There's a good developer example of this (in both Java and Kotlin): https://github.com/corda/samples-java/tree/master/Basic/flow-database-access ; https://github.com/corda/samples-kotlin/tree/master/Basic/flow-database-access
Best of luck on this!

Ldap template authentication with user group

I am stuck with the Spring LdapTemplate authenticate method returning zero results while using the group string; the string is shown below.
CN=Jirra-Acdolite-DG,OU=Jira Security Group,OU=Apps Security Group,OU=Security Groups,OU=Global,OU=BT,DC=barcadero,DC=com
We are trying it with the LDAP user group using the following code:
try {
LdapContextSource ctxSrc = new LdapContextSource();
ctxSrc.setUrl(url);
// ctxSrc.setBase(base);
ctxSrc.setUserDn(ManagerDn);
ctxSrc.setPassword(ManagerPassword);
ctxSrc.setReferral("follow");
ctxSrc.afterPropertiesSet();
LdapTemplate ldapTemplate = new LdapTemplate(ctxSrc);
System.out.println("50");
ldapTemplate.afterPropertiesSet();
AndFilter andfilter = new AndFilter().and(new EqualsFilter("objectCategory", "person"))
.and(new EqualsFilter("objectClass", "user")).and(new EqualsFilter(SearchAttributes, userDn))
.and(new EqualsFilter("memberOf:1.2.840.113556.1.4.1941:",
"CN=Jirra-Acdolite-DG,OU=Jira Security Group,OU=Apps Security Group,OU=Security Groups,OU=Global,OU=BT,DC=barcadero,DC=com"));
System.out.println(andfilter);
if (!ldapTemplate.authenticate(base, andfilter.encode(), password, new AuthenticationErrorCallback() {
public void execute(Exception e) {
System.out.println("exception");
}
})) {
System.out.println("False\n");
} else {
System.out.println("Success");
}
} catch (Exception e) {
e.printStackTrace();
}
But we always get the False result when the group string is included. Any help on this would be appreciated.
The error message is shown below.
Feb 15, 2018 12:32:52 AM org.springframework.ldap.core.LdapTemplate authenticate
INFO: No results found for search, base:
CN=Jirra-Acdolite-DG,OU=Jira Security Group,OU=Apps Security Group,OU=Security Groups,OU=Global,OU=BT,DC=barcadero,DC=com
If you're trying to authenticate a user under a specific group, try getting all users under that group and searching within it (I did it like that):
AndFilter filter = new AndFilter();
filter.and(new EqualsFilter("memberOf:1.2.840.113556.1.4.1941:", groupDN));
filter.and(new EqualsFilter("objectClass", "user"));
return ldapTemplate.search(DistinguishedName.EMPTY_PATH, filter.encode(), new ContractAttributeMapperJSON());
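Building on that, the same nested-group filter can also be passed straight to authenticate, for example (a sketch; the sAMAccountName attribute is an assumption, the question used a SearchAttributes variable for it):
AndFilter filter = new AndFilter()
        .and(new EqualsFilter("objectClass", "user"))
        .and(new EqualsFilter("sAMAccountName", userName))
        // LDAP_MATCHING_RULE_IN_CHAIN (1.2.840.113556.1.4.1941) also matches nested group membership
        .and(new EqualsFilter("memberOf:1.2.840.113556.1.4.1941:", groupDN));

// authenticate relative to the context source's base (pass your own base if you set one)
boolean authenticated = ldapTemplate.authenticate("", filter.encode(), password);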

How to override endpoint in AWS-SDK-CPP to connect to minio server at localhost:9000

I tried something like:
Aws::Client::ClientConfiguration config;
config.endpointOverride = Aws::String("localhost:9000");
It does not work.
It seems that AWS-SDK-CPP by default uses virtual-hosted-style addressing:
https://bucket-name.s3.amazonaws.com
However, to access Minio we need path-style access:
https://localhost:9000/bucket-name
In AWS-SDK-JAVA, there is:
AmazonS3ClientBuilder.withPathStyleAccessEnabled(true)
Is there something similar in AWS-SDK-CPP?
The switch between path style and virtual hosting is in the S3Client constructor:
S3Client(const Aws::Client::ClientConfiguration& clientConfiguration = Aws::Client::ClientConfiguration(), bool signPayloads = false, bool useVirtualAdressing = true);
turn it off, as in:
Aws::Client::ClientConfiguration config;
config.endpointOverride = Aws::String("172.31.30.127:9000");
config.scheme = Aws::Http::Scheme::HTTP;
auto client = Aws::MakeShared<S3Client>("sample_s3_client", config, false, false);
