Using Elasticsearch with Jetty and Jersey

I am using Elasticsearch, and it works well, but not when I try to use it from a web service built with Jetty and Jersey.
Here is an example of a function that I want to use:
public boolean insertUser(RestHighLevelClient client, User user) throws IOException
{
    java.util.Map<String, Object> jsonMap = new HashMap<String, Object>();
    jsonMap.put("username", user.username);
    jsonMap.put("password", user.password);
    jsonMap.put("mail", user.mail);
    jsonMap.put("friends", user.friends);
    jsonMap.put("maps", user.maps);
    System.out.println("insertUser");
    IndexRequest indexRequest = new IndexRequest("users", "doc", user.username)
            .source(jsonMap);
    try {
        IndexResponse indexResponse = client.index(indexRequest);
        System.out.println("insertUser 222");
        if (indexResponse.getResult() == DocWriteResponse.Result.CREATED) {
            System.out.println("user " + user.username + " created");
        }
        else if (indexResponse.getResult() == DocWriteResponse.Result.UPDATED) {
            System.out.println("user " + user.username + " updated in insertUser (not expected)");
        }
    } catch (IOException e) {
        e.printStackTrace();
        return false;
    }
    return true;
}
This function works well when I run it from a test class. But if I start my server like this:
Server server = new Server();
// Add a connector
ServerConnector connector = new ServerConnector(server);
connector.setHost("0.0.0.0");
connector.setPort(8081);
connector.setIdleTimeout(30000);
server.addConnector(connector);
DAO.ClientConnection("0.0.0.0",8081);
// Configure Jersey
ResourceConfig rc = new ResourceConfig();
rc.packages(true, "com.example.jetty_jersey.ws");
rc.register(JacksonFeature.class);
// Add a servlet handler for web services (/ws/*)
ServletHolder servletHolder = new ServletHolder(new ServletContainer(rc));
ServletContextHandler handlerWebServices = new ServletContextHandler(ServletContextHandler.SESSIONS);
handlerWebServices.setContextPath("/ws");
handlerWebServices.addServlet(servletHolder, "/*");
// Add a handler for resources (/*)
ResourceHandler handlerPortal = new ResourceHandler();
handlerPortal.setResourceBase("src/main/webapp/temporary-work");
handlerPortal.setDirectoriesListed(false);
handlerPortal.setWelcomeFiles(new String[] { "homepage.html" });
ContextHandler handlerPortalCtx = new ContextHandler();
handlerPortalCtx.setContextPath("/");
handlerPortalCtx.setHandler(handlerPortal);
// Activate handlers
ContextHandlerCollection contexts = new ContextHandlerCollection();
contexts.setHandlers(new Handler[] { handlerWebServices, handlerPortalCtx });
server.setHandler(contexts);
// Start server
server.start();
And when I submit a form that calls this web service:
@POST
@Path("/signup")
@Produces(MediaType.APPLICATION_JSON)
// @Consumes(MediaType.APPLICATION_FORM_URLENCODED)
public SimpleResponse signup(@Context HttpServletRequest httpRequest,
        @FormParam("username") String username,
        @FormParam("email") String email,
        @FormParam("password") String password,
        @FormParam("passwordConfirm") String passwordConfirm) {
    System.out.println("k");
    //if (httpRequest.getSession().getAttribute("user") != null) { //httpRequest.getUserPrincipal() == null) {
    try {
        if (password.equals(passwordConfirm)) {
            User user = new User("jeanOknewmail@gmail.com", "abc");
            user.username = "jeanok";
            user.maps = new ArrayList<String>();
            user.friends = new ArrayList<String>();
            System.out.println(user);
            System.out.println("before insert");
            DAO.getActionUser().createIndexUser();
            //System.out.println(DAO.getActionUser().getOneUser(DAO.client, "joe"));
            System.out.println("rdctfygbhunji,k");
            DAO.getActionUser().insertUser(DAO.client, user);
            System.out.println("after insert");
            return new SimpleResponse(true);
        }
    } catch (IOException e) {
        e.printStackTrace();
    }
    //}
    return new SimpleResponse(false);
}
I get lots of errors:
javax.servlet.ServletException: ElasticsearchStatusException[Unable to parse response body]; nested: ResponseException[method [PUT], host [http://0.0.0.0:8081], URI [/users/doc/jeanok?timeout=1m], status line [HTTP/1.1 404 Not Found]];
...
Caused by:
ElasticsearchStatusException[Unable to parse response body]; nested: ResponseException[method [PUT], host [http://0.0.0.0:8081], URI [/users/doc/jeanok?timeout=1m], status line [HTTP/1.1 404 Not Found]];
at org.elasticsearch.client.RestHighLevelClient.parseResponseException(RestHighLevelClient.java:598)
at org.elasticsearch.client.RestHighLevelClient.performRequest(RestHighLevelClient.java:501)
at org.elasticsearch.client.RestHighLevelClient.performRequestAndParseEntity(RestHighLevelClient.java:474)
at org.elasticsearch.client.RestHighLevelClient.index(RestHighLevelClient.java:335)
at DAO.UserDAO.insertUser(UserDAO.java:160)
Do you have any idea why the behaviour of my function isn't the same when I launch my server, and why I get this error? Thanks for your help.

I wasn't connected to Elasticsearch: my client was pointed at the wrong port. Now it works.
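For reference, a minimal sketch of what the fix looks like, assuming Elasticsearch runs locally on its default HTTP port (9200); the Jetty connector port (8081) belongs to the web application, not to the Elasticsearch client:
import org.apache.http.HttpHost;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestHighLevelClient;

public class EsClientFactory {
    // Point the high-level client at Elasticsearch itself (default port 9200),
    // not at the Jetty connector (8081) that serves the Jersey web services.
    public static RestHighLevelClient create() {
        return new RestHighLevelClient(
                RestClient.builder(new HttpHost("localhost", 9200, "http")));
    }
}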

Related

Intermittent SocketTimeoutException with elasticsearch-rest-client-7.2.0

I am using RestHighLevelClient version 7.2 to connect to an Elasticsearch cluster, also version 7.2. My cluster has 3 master nodes and 2 data nodes. Data node config: 2 cores and 8 GB of memory. I have used the code below in my Spring Boot project to create the RestHighLevelClient instance.
@Bean(destroyMethod = "close")
@Qualifier("readClient")
public RestHighLevelClient readClient(){
    final CredentialsProvider credentialsProvider = new BasicCredentialsProvider();
    credentialsProvider.setCredentials(AuthScope.ANY,
            new UsernamePasswordCredentials(elasticUser, elasticPass));
    RestClientBuilder builder = RestClient.builder(new HttpHost(elasticHost, elasticPort))
            .setHttpClientConfigCallback(httpClientBuilder -> httpClientBuilder
                    .setDefaultCredentialsProvider(credentialsProvider)
                    .setDefaultIOReactorConfig(IOReactorConfig.custom().setIoThreadCount(5).build()));
    builder.setRequestConfigCallback(requestConfigBuilder -> requestConfigBuilder
            .setConnectTimeout(30000)
            .setSocketTimeout(60000));
    RestHighLevelClient restClient = new RestHighLevelClient(builder);
    return restClient;
}
RestHighLevelClient is a singleton bean. Intermittently I am getting SocketTimeoutException with both GET and PUT requests. The index size is around 50 MB. I have tried increasing the socket timeout value, but I still receive the same error. Am I missing some configuration? Any help would be appreciated.
I got the issue; just wanted to share so that it can help others.
I was using a load balancer to connect to the Elasticsearch cluster.
As you can see from my RestClientBuilder code, I was using only the load balancer host and port. Although I have multiple master nodes, the RestClient was not retrying my request in case of a connection timeout.
RestClientBuilder builder = RestClient.builder(new HttpHost(elasticHost, elasticPort))
        .setHttpClientConfigCallback(httpClientBuilder -> httpClientBuilder
                .setDefaultCredentialsProvider(credentialsProvider)
                .setDefaultIOReactorConfig(IOReactorConfig.custom().setIoThreadCount(5).build()));
According to the RestClient code, if we use a single host it won't retry in case of any connection issue.
So I changed my code as below and it started working.
RestClientBuilder builder = RestClient.builder(new HttpHost(elasticHost, 9200), new HttpHost(elasticHost, 9201))
        .setHttpClientConfigCallback(httpClientBuilder -> httpClientBuilder.setDefaultCredentialsProvider(credentialsProvider));
For the complete RestClient code please refer to https://github.com/elastic/elasticsearch/blob/master/client/rest/src/main/java/org/elasticsearch/client/RestClient.java
Retry code block in RestClient
private Response performRequest(final NodeTuple<Iterator<Node>> nodeTuple,
                                final InternalRequest request,
                                Exception previousException) throws IOException {
    RequestContext context = request.createContextForNextAttempt(nodeTuple.nodes.next(), nodeTuple.authCache);
    HttpResponse httpResponse;
    try {
        httpResponse = client.execute(context.requestProducer, context.asyncResponseConsumer, context.context, null).get();
    } catch(Exception e) {
        RequestLogger.logFailedRequest(logger, request.httpRequest, context.node, e);
        onFailure(context.node);
        Exception cause = extractAndWrapCause(e);
        addSuppressedException(previousException, cause);
        if (nodeTuple.nodes.hasNext()) {
            return performRequest(nodeTuple, request, cause);
        }
        if (cause instanceof IOException) {
            throw (IOException) cause;
        }
        if (cause instanceof RuntimeException) {
            throw (RuntimeException) cause;
        }
        throw new IllegalStateException("unexpected exception type: must be either RuntimeException or IOException", cause);
    }
    ResponseOrResponseException responseOrResponseException = convertResponse(request, context.node, httpResponse);
    if (responseOrResponseException.responseException == null) {
        return responseOrResponseException.response;
    }
    addSuppressedException(previousException, responseOrResponseException.responseException);
    if (nodeTuple.nodes.hasNext()) {
        return performRequest(nodeTuple, request, responseOrResponseException.responseException);
    }
    throw responseOrResponseException.responseException;
}
I'm facing the same issue, and seeing this I realized that the retry is happening on my side too, against each host (I have 3 hosts and the exception happens in 3 threads). I wanted to post this since you might face the same issue, or someone else might come to this post because of the same SocketTimeoutException.
Searching the official docs: the RestHighLevelClient uses the RestClient under the hood, and the RestClient uses a CloseableHttpAsyncClient, which has a connection pool. Elasticsearch specifies that you should close the client once you are done (the definition of "done" is ambiguous for a long-running application), but in general what I have found is that you should close it when the application is shutting down, rather than when you have finished querying.
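For instance, a minimal sketch (mine, not from the docs or the original posts) of that close-at-shutdown reading, using a placeholder host:
import java.io.IOException;
import org.apache.http.HttpHost;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestHighLevelClient;

public class EsClientHolder {
    // One client for the whole application lifetime; closed only at shutdown.
    private static final RestHighLevelClient CLIENT = new RestHighLevelClient(
            RestClient.builder(new HttpHost("es.example.com", 9200, "http"))); // placeholder host

    static {
        // Closing releases the underlying CloseableHttpAsyncClient and its connection pool.
        Runtime.getRuntime().addShutdownHook(new Thread(() -> {
            try {
                CLIENT.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }));
    }

    public static RestHighLevelClient client() {
        return CLIENT;
    }
}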
The official Apache documentation has an example of handling the connection pool, which I'm trying to follow; I'll try to replicate the scenario and will post whether that fixes my issue. The code can be found here:
https://hc.apache.org/httpcomponents-asyncclient-dev/httpasyncclient/examples/org/apache/http/examples/nio/client/AsyncClientEvictExpiredConnections.java
This is what I have so far:
@Bean(name = "RestHighLevelClientWithCredentials", destroyMethod = "close")
public RestHighLevelClient elasticsearchClient(ElasticSearchClientConfiguration elasticSearchClientConfiguration,
                                               RestClientBuilder.HttpClientConfigCallback httpClientConfigCallback) {
    return new RestHighLevelClient(
            RestClient
                    .builder(getElasticSearchHosts(elasticSearchClientConfiguration))
                    .setHttpClientConfigCallback(httpClientConfigCallback)
    );
}
@Bean
@RefreshScope
public RestClientBuilder.HttpClientConfigCallback getHttpClientConfigCallback(
        PoolingNHttpClientConnectionManager poolingNHttpClientConnectionManager,
        CredentialsProvider credentialsProvider
) {
    return httpAsyncClientBuilder -> {
        httpAsyncClientBuilder.setSSLHostnameVerifier(NoopHostnameVerifier.INSTANCE);
        httpAsyncClientBuilder.setDefaultCredentialsProvider(credentialsProvider);
        httpAsyncClientBuilder.setConnectionManager(poolingNHttpClientConnectionManager);
        return httpAsyncClientBuilder;
    };
}
public class ElasticSearchClientManager {

    private ElasticSearchClientManager.IdleConnectionEvictor idleConnectionEvictor;

    /**
     * Custom client connection manager to create a connection watcher
     *
     * @param elasticSearchClientConfiguration elasticSearchClientConfiguration
     * @return PoolingNHttpClientConnectionManager
     */
    @Bean
    @RefreshScope
    public PoolingNHttpClientConnectionManager getPoolingNHttpClientConnectionManager(
            ElasticSearchClientConfiguration elasticSearchClientConfiguration
    ) {
        try {
            SSLIOSessionStrategy sslSessionStrategy = new SSLIOSessionStrategy(getTrustAllSSLContext());
            Registry<SchemeIOSessionStrategy> sessionStrategyRegistry = RegistryBuilder.<SchemeIOSessionStrategy>create()
                    .register("http", NoopIOSessionStrategy.INSTANCE)
                    .register("https", sslSessionStrategy)
                    .build();
            ConnectingIOReactor ioReactor = new DefaultConnectingIOReactor();
            PoolingNHttpClientConnectionManager poolingNHttpClientConnectionManager =
                    new PoolingNHttpClientConnectionManager(ioReactor, sessionStrategyRegistry);
            idleConnectionEvictor = new ElasticSearchClientManager.IdleConnectionEvictor(poolingNHttpClientConnectionManager,
                    elasticSearchClientConfiguration);
            idleConnectionEvictor.start();
            return poolingNHttpClientConnectionManager;
        } catch (IOReactorException e) {
            throw new RuntimeException("Failed to create a watcher for the connection pool");
        }
    }

    private SSLContext getTrustAllSSLContext() {
        try {
            return new SSLContextBuilder()
                    .loadTrustMaterial(null, (x509Certificates, string) -> true)
                    .build();
        } catch (Exception e) {
            throw new RuntimeException("Failed to create SSL Context with open certificate", e);
        }
    }

    public IdleConnectionEvictor.State state() {
        return idleConnectionEvictor.evictorState;
    }

    @PreDestroy
    private void finishManager() {
        idleConnectionEvictor.shutdown();
    }

    public static class IdleConnectionEvictor extends Thread {

        private final NHttpClientConnectionManager nhttpClientConnectionManager;
        private final ElasticSearchClientConfiguration elasticSearchClientConfiguration;

        @Getter
        private State evictorState;
        private volatile boolean shutdown;

        public IdleConnectionEvictor(NHttpClientConnectionManager nhttpClientConnectionManager,
                                     ElasticSearchClientConfiguration elasticSearchClientConfiguration) {
            super();
            this.nhttpClientConnectionManager = nhttpClientConnectionManager;
            this.elasticSearchClientConfiguration = elasticSearchClientConfiguration;
        }

        @Override
        public void run() {
            try {
                while (!shutdown) {
                    synchronized (this) {
                        wait(elasticSearchClientConfiguration.getExpiredConnectionsCheckTime());
                        // Close expired connections
                        nhttpClientConnectionManager.closeExpiredConnections();
                        // Optionally, close connections
                        // that have been idle longer than 5 sec
                        nhttpClientConnectionManager.closeIdleConnections(elasticSearchClientConfiguration.getMaxTimeIdleConnections(),
                                TimeUnit.SECONDS);
                        this.evictorState = State.RUNNING;
                    }
                }
            } catch (InterruptedException ex) {
                this.evictorState = State.NOT_RUNNING;
            }
        }

        private void shutdown() {
            shutdown = true;
            synchronized (this) {
                notifyAll();
            }
        }

        public enum State {
            RUNNING,
            NOT_RUNNING
        }
    }
}
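The HttpClientConfigCallback above also injects a CredentialsProvider bean that isn't shown in the snippet. A possible definition (my own sketch, not from the original post; the property names are placeholders) could look like:
@Bean
public CredentialsProvider credentialsProvider(
        @Value("${elasticsearch.user}") String elasticUser,       // assumed property names
        @Value("${elasticsearch.password}") String elasticPass) {
    // Same pattern as the readClient() bean earlier in this thread
    CredentialsProvider credentialsProvider = new BasicCredentialsProvider();
    credentialsProvider.setCredentials(AuthScope.ANY,
            new UsernamePasswordCredentials(elasticUser, elasticPass));
    return credentialsProvider;
}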

Endpoint is not connected in httpclient5-beta

Hi, I'm trying to use HttpComponents 5 (beta) to make a persistent connection. I have tried the example given on their site; the code is as follows:
final IOReactorConfig ioReactorConfig = IOReactorConfig.custom().setSoTimeout(Timeout.ofSeconds(45)).setSelectInterval(10000).setSoReuseAddress(true).setSoKeepAlive(true).build();
final SSLContext sslContext = SSLContexts.custom().loadTrustMaterial(new TrustAllStrategy()).build();
final PoolingAsyncClientConnectionManager connectionManager = PoolingAsyncClientConnectionManagerBuilder.create().setConnectionTimeToLive(TimeValue.of(1, TimeUnit.DAYS)).setTlsStrategy(new H2TlsStrategy(sslContext, NoopHostnameVerifier.INSTANCE)).build();
client = HttpAsyncClients.createMinimal(protocol, H2Config.DEFAULT, null, ioReactorConfig, connectionManager);
client.start();
final org.apache.hc.core5.http.HttpHost target = new org.apache.hc.core5.http.HttpHost("localhost", 8000, "https");
Future<AsyncClientEndpoint> leaseFuture = client.lease(target, null);
AsyncClientEndpoint asyncClientEndpoint = leaseFuture.get(60, TimeUnit.SECONDS);
final CountDownLatch latch = new CountDownLatch(1);
final AsyncRequestProducer requestProducer = AsyncRequestBuilder.post(target.getSchemeName()+"://"+target.getHostName()+":"+target.getPort()+locationposturl).addParameter(new BasicNameValuePair("info", requestData)).setEntity(new StringAsyncEntityProducer("json post data will go here", ContentType.APPLICATION_JSON)).setHeader("Pragma", "no-cache").setHeader("from", "http5").setHeader("Custom", customheaderName).setHeader("Secure", secureHeader).build();
asyncClientEndpoint.execute(requestProducer, SimpleResponseConsumer.create(), new FutureCallback<SimpleHttpResponse>() {
    @Override
    public void completed(final SimpleHttpResponse response) {
        if (response != null) {
            if (response.getCode() > -1) {
                try {
                    System.out.println("http5:: COMPLETED : RESPONSE " + response.getBodyText());
                } catch (Exception e) {
                    // TODO Auto-generated catch block
                    e.printStackTrace();
                }
            }
        }
        latch.countDown();
    }
    @Override
    public void failed(final Exception ex) {
        System.out.println("http5:: FAILED : " + target + locationposturl);
        LoggerUtil.printStackTrace(ex);
        System.out.println("http5::Exception Request failed " + LoggerUtil.getStackTrace(ex));
        latch.countDown();
    }
    @Override
    public void cancelled() {
        System.out.println("http5:: CANCELLED : " + target + locationposturl);
        System.out.println("http5::Exception Request cancelled");
        latch.countDown();
    }
});
latch.await();
This code works without a problem the first time, but when I send subsequent requests it throws an exception as follows:
http5:: Exception occured java.lang.IllegalStateException: Endpoint is not connected
    at org.apache.hc.core5.util.Asserts.check(Asserts.java:38)
    at org.apache.hc.client5.http.impl.nio.PoolingAsyncClientConnectionManager$InternalConnectionEndpoint.getValidatedPoolEntry(PoolingAsyncClientConnectionManager.java:497)
    at org.apache.hc.client5.http.impl.nio.PoolingAsyncClientConnectionManager$InternalConnectionEndpoint.execute(PoolingAsyncClientConnectionManager.java:552)
    at org.apache.hc.client5.http.impl.async.MinimalHttpAsyncClient$InternalAsyncClientEndpoint.execute(MinimalHttpAsyncClient.java:405)
    at org.apache.hc.core5.http.nio.AsyncClientEndpoint.execute(AsyncClientEndpoint.java:81)
    at org.apache.hc.core5.http.nio.AsyncClientEndpoint.execute(AsyncClientEndpoint.java:114)
What may be the problem with the endpoint? I'm forcing the endpoint to stay alive for a day. Kindly shed some light on this.

How to create a JMX client which can interact with multiple JMX servers using different serviceUrls

I am using Spring JMX to create a JMX client which can interact with a Cassandra cluster to get an MBean attribute, LiveDiskSpaceUsed.
This Cassandra cluster has 3 nodes, hence different serviceUrls (each having a different IP address).
Now I realize that while creating the MBeanServerConnectionFactoryBean bean I can specify only one service URL, like below:
@Bean
MBeanServerConnectionFactoryBean getConnector() {
    MBeanServerConnectionFactoryBean mBeanfactory = new MBeanServerConnectionFactoryBean();
    try {
        mBeanfactory.setServiceUrl("serviceUrl1");
    } catch (MalformedURLException e) {
        e.printStackTrace();
    }
    mBeanfactory.setConnectOnStartup(false);
    return mBeanfactory;
}
Then in main I am accessing it as below:
objectName = new ObjectName(QueueServicesConstant.MBEAN_OBJ_NAME_LIVE_DISC_USED);
long count = (Long) mBeanFactory.getObject().getAttribute(objectName, QueueServicesConstant.MBEAN_ATTR_NAME_COUNT);
How can I get this value from all three nodes?
You need 3 distinct connectors.
Or you can use something like a Jolokia Proxy to access multiple servers (using REST instead of JSR 160).
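For example, "3 distinct connectors" could look like the sketch below (the bean name, host and JMX port are placeholders, not from the original answer):
@Bean(name = "node1Connector")
MBeanServerConnectionFactoryBean node1Connector() {
    // One factory bean per Cassandra node; declare node2Connector and node3Connector
    // the same way with their own service URLs, then query the attribute on each connection.
    MBeanServerConnectionFactoryBean factory = new MBeanServerConnectionFactoryBean();
    try {
        factory.setServiceUrl("service:jmx:rmi:///jndi/rmi://node1.example.com:7199/jmxrmi"); // placeholder URL
    } catch (MalformedURLException e) {
        e.printStackTrace();
    }
    factory.setConnectOnStartup(false);
    return factory;
}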
This is how I solved the problem. Instead of using Spring JMX, I am directly using the javax.management APIs. My code below gets any one of the connectors, which is sufficient to give me the correct attribute value; it tries to connect to another node if it fails to get a connector from one server node.
@SuppressWarnings("restriction")
private Object getMbeanAttributeValue(String MbeanObectName, String attributeName)
        throws IOException, AttributeNotFoundException, InstanceNotFoundException,
        MBeanException, ReflectionException, MalformedObjectNameException {
    Object attributeValue = null;
    JMXConnector jmxc = null;
    try {
        State state = metaTemplate.getSession().getState();
        List<String> serviceUrlList = getJmxServiceUrlList(state.getConnectedHosts());
        jmxc = getJmxConnector(serviceUrlList);
        ObjectName objectName = new ObjectName(MbeanObectName);
        MBeanServerConnection mbsConnection = jmxc.getMBeanServerConnection();
        attributeValue = mbsConnection.getAttribute(objectName, attributeName);
    } finally {
        if (jmxc != null)
            jmxc.close();
    }
    return attributeValue;
}
// This will provide any one of the JMX connectors of the Cassandra cluster
@SuppressWarnings("restriction")
private JMXConnector getJmxConnector(List<String> serviceUrlList) throws IOException {
    JMXConnector jmxc = null;
    for (String serviceUrl : serviceUrlList) {
        JMXServiceURL url;
        try {
            url = new JMXServiceURL(serviceUrl);
            jmxc = JMXConnectorFactory.connect(url, null);
            return jmxc;
        } catch (IOException e) {
            log.error("getJmxConnector: Error while connecting to JMX service {} ",
                    serviceUrl, e.getMessage());
        }
    }
    throw new IOException("Not able to connect to any of the Cassandra JMX connectors.");
}

Global exception handling in OWIN middleware

I'm trying to create unified error handling/reporting in an ASP.NET Web API 2.1 project built on top of OWIN middleware (IIS host using Owin.Host.SystemWeb).
Currently I use a custom exception logger which inherits from System.Web.Http.ExceptionHandling.ExceptionLogger and uses NLog to log all exceptions, as in the code below:
public class NLogExceptionLogger : ExceptionLogger
{
private static readonly Logger Nlog = LogManager.GetCurrentClassLogger();
public override void Log(ExceptionLoggerContext context)
{
//Log using NLog
}
}
I want to change the response body for all API exceptions to a friendly, unified response which hides all exception details, using System.Web.Http.ExceptionHandling.ExceptionHandler as in the code below:
public class ContentNegotiatedExceptionHandler : ExceptionHandler
{
public override void Handle(ExceptionHandlerContext context)
{
var errorDataModel = new ErrorDataModel
{
Message = "Internal server error occurred, error has been reported!",
Details = context.Exception.Message,
ErrorReference = context.Exception.Data["ErrorReference"] != null ? context.Exception.Data["ErrorReference"].ToString() : string.Empty,
DateTime = DateTime.UtcNow
};
var response = context.Request.CreateResponse(HttpStatusCode.InternalServerError, errorDataModel);
context.Result = new ResponseMessageResult(response);
}
}
And this will return the response below to the client when an exception happens:
{
    "Message": "Internal server error occurred, error has been reported!",
    "Details": "Ooops!",
    "ErrorReference": "56627a45d23732d2",
    "DateTime": "2015-12-27T09:42:40.2982314Z"
}
Now this is all working great if an exception occurs within an API controller request pipeline.
But in my situation I'm using the middleware Microsoft.Owin.Security.OAuth for generating bearer tokens, and this middleware doesn't know anything about Web API exception handling, so for example if an exception is thrown in the method ValidateClientAuthentication, neither my NLogExceptionLogger nor my ContentNegotiatedExceptionHandler will know anything about this exception or try to handle it. The sample code I use in the AuthorizationServerProvider is as below:
public class AuthorizationServerProvider : OAuthAuthorizationServerProvider
{
public override Task ValidateClientAuthentication(OAuthValidateClientAuthenticationContext context)
{
//Exception occurs here
int x = int.Parse("");
context.Validated();
return Task.FromResult<object>(null);
}
public override async Task GrantResourceOwnerCredentials(OAuthGrantResourceOwnerCredentialsContext context)
{
if (context.UserName != context.Password)
{
context.SetError("invalid_credentials", "The user name or password is incorrect.");
return;
}
var identity = new ClaimsIdentity(context.Options.AuthenticationType);
identity.AddClaim(new Claim(ClaimTypes.Name, context.UserName));
context.Validated(identity);
}
}
So I would appreciate any guidance on implementing the 2 items below:
1 - Create a global exception handler which handles only exceptions generated by OWIN middlewares. I followed this answer and created a middleware for exception handling purposes and registered it as the first one, and I was able to log exceptions originating from "OAuthAuthorizationServerProvider", but I'm not sure if this is the optimal way to do it.
2 - Now that I have implemented the logging as in the previous step, I really have no idea how to change the response of the exception, as I need to return to the client a standard JSON model for any exception happening in the "OAuthAuthorizationServerProvider". There is a related answer here I tried to depend on, but it didn't work.
Here is my Startup class and the custom GlobalExceptionMiddleware I created for exception catching/logging. The missing piece is returning a unified JSON response for any exception. Any ideas will be appreciated.
public class Startup
{
public void Configuration(IAppBuilder app)
{
var httpConfig = new HttpConfiguration();
httpConfig.MapHttpAttributeRoutes();
httpConfig.Services.Replace(typeof(IExceptionHandler), new ContentNegotiatedExceptionHandler());
httpConfig.Services.Add(typeof(IExceptionLogger), new NLogExceptionLogger());
OAuthAuthorizationServerOptions OAuthServerOptions = new OAuthAuthorizationServerOptions()
{
AllowInsecureHttp = true,
TokenEndpointPath = new PathString("/token"),
AccessTokenExpireTimeSpan = TimeSpan.FromDays(1),
Provider = new AuthorizationServerProvider()
};
app.Use<GlobalExceptionMiddleware>();
app.UseOAuthAuthorizationServer(OAuthServerOptions);
app.UseOAuthBearerAuthentication(new OAuthBearerAuthenticationOptions());
app.UseWebApi(httpConfig);
}
}
public class GlobalExceptionMiddleware : OwinMiddleware
{
public GlobalExceptionMiddleware(OwinMiddleware next)
: base(next)
{ }
public override async Task Invoke(IOwinContext context)
{
try
{
await Next.Invoke(context);
}
catch (Exception ex)
{
NLogLogger.LogError(ex, context);
}
}
}
OK, so this was easier than anticipated. Thanks to @Khalid for the heads up. I ended up creating an OWIN middleware named OwinExceptionHandlerMiddleware which is dedicated to handling any exception happening in any OWIN middleware (logging it and manipulating the response before returning it to the client).
You need to register this middleware as the first one in the Startup class, as below:
public class Startup
{
public void Configuration(IAppBuilder app)
{
var httpConfig = new HttpConfiguration();
httpConfig.MapHttpAttributeRoutes();
httpConfig.Services.Replace(typeof(IExceptionHandler), new ContentNegotiatedExceptionHandler());
httpConfig.Services.Add(typeof(IExceptionLogger), new NLogExceptionLogger());
OAuthAuthorizationServerOptions OAuthServerOptions = new OAuthAuthorizationServerOptions()
{
AllowInsecureHttp = true,
TokenEndpointPath = new PathString("/token"),
AccessTokenExpireTimeSpan = TimeSpan.FromDays(1),
Provider = new AuthorizationServerProvider()
};
//Should be the first handler to handle any exception happening in OWIN middlewares
app.UseOwinExceptionHandler();
// Token Generation
app.UseOAuthAuthorizationServer(OAuthServerOptions);
app.UseOAuthBearerAuthentication(new OAuthBearerAuthenticationOptions());
app.UseWebApi(httpConfig);
}
}
And the code used in the OwinExceptionHandlerMiddleware is as below:
using AppFunc = Func<IDictionary<string, object>, Task>;
public class OwinExceptionHandlerMiddleware
{
private readonly AppFunc _next;
public OwinExceptionHandlerMiddleware(AppFunc next)
{
if (next == null)
{
throw new ArgumentNullException("next");
}
_next = next;
}
public async Task Invoke(IDictionary<string, object> environment)
{
try
{
await _next(environment);
}
catch (Exception ex)
{
try
{
var owinContext = new OwinContext(environment);
NLogLogger.LogError(ex, owinContext);
HandleException(ex, owinContext);
return;
}
catch (Exception)
{
// If there's an Exception while generating the error page, re-throw the original exception.
}
throw;
}
}
private void HandleException(Exception ex, IOwinContext context)
{
var request = context.Request;
//Build a model to represent the error for the client
var errorDataModel = NLogLogger.BuildErrorDataModel(ex);
context.Response.StatusCode = (int)HttpStatusCode.InternalServerError;
context.Response.ReasonPhrase = "Internal Server Error";
context.Response.ContentType = "application/json";
context.Response.Write(JsonConvert.SerializeObject(errorDataModel));
}
}
public static class OwinExceptionHandlerMiddlewareAppBuilderExtensions
{
public static void UseOwinExceptionHandler(this IAppBuilder app)
{
app.Use<OwinExceptionHandlerMiddleware>();
}
}
There are a few ways to do what you want:
Create middleware that is registered first; then all exceptions will bubble up to that middleware. At this point just write out your JSON via the Response object on the OWIN context.
You can also create a wrapping middleware which wraps the OAuth middleware. In this case it will only capture errors originating from this specific code path.
Ultimately writing your JSON message is about creating it, serializing it, and writing it to the Response via the OWIN context.
It seems like you are on the right path with #1. Hope this helps, and good luck :)
The accepted answer is unnecessarily complex and doesn't inherit from the OwinMiddleware class.
All you need to do is this:
public class HttpLogger : OwinMiddleware
{
    public HttpLogger(OwinMiddleware next) : base(next) { }

    public override async Task Invoke(IOwinContext context)
    {
        await Next.Invoke(context);
        Log(context);
    }
}
Also, there is no need to create an extension method; it is simple enough to reference without one:
appBuilder.Use(typeof(HttpLogger));
And if you want to log only specific requests, you can filter on context properties, e.g.:
if (context.Response.StatusCode != 200) { Log(context) }

Spring SAML extension for multiple IDPs

We are planning to use the Spring SAML extension as an SP in our application.
But the requirement for our application is that we need to communicate with more than one IDP.
Could anyone please provide me with, or direct me to, an example that uses multiple IDPs?
I would also like to know what kinds of IDPs the Spring SAML extension supports, e.g. OpenAM / Ping Federate / ADFS 2.0 etc.
Thanks,
--Vikas
You need a class that maintains a list of metadata providers, one per IDP; say you put those metadata providers in some list which is shared across the application via a static method. I have something like below.
NOTE - I am not copying my classes exactly as I have them, so you might come across minor issues which you should be able to resolve on your own.
public class SSOMetadataProvider {
public static List<MetadataProvider> metadataList() throws MetadataProviderException, XMLParserException, IOException, Exception {
logger.info("Starting : Loading Metadata Data for all SSO enabled companies...");
List<MetadataProvider> metadataList = new ArrayList<MetadataProvider>();
org.opensaml.xml.parse.StaticBasicParserPool parserPool = new org.opensaml.xml.parse.StaticBasicParserPool();
parserPool.initialize();
//Get XML from DB -> convertIntoInputStream -> pass below as const argument
InputStreamMetadataProvider inputStreamMetadata = null;
try {
//Getting list from DB
List companyList = someServiceClass.getAllSSOEnabledCompanyDTO();
if(companyList!=null){
for (Object obj : companyList) {
CompanyDTO companyDTO = (CompanyDTO) obj;
if (companyDTO != null && companyDTO.getCompanyid() > 0 && companyDTO.getSsoSettingsDTO()!=null && !StringUtil.isNullOrEmpty(companyDTO.getSsoSettingsDTO().getSsoMetadataXml())) {
logger.info("Loading Metadata for Company : "+companyDTO.getCompanyname()+" , companyId : "+companyDTO.getCompanyid());
inputStreamMetadata = new InputStreamMetadataProvider(companyDTO.getSsoSettingsDTO().getSsoMetadataXml());
inputStreamMetadata.setParserPool(parserPool);
inputStreamMetadata.initialize();
//ExtendedMetadataDelegateWrapper extMetadaDel = new ExtendedMetadataDelegateWrapper(inputStreamMetadata , new org.springframework.security.saml.metadata.ExtendedMetadata());
SSOMetadataDelegate extMetadaDel = new SSOMetadataDelegate(inputStreamMetadata , new org.springframework.security.saml.metadata.ExtendedMetadata()) ;
extMetadaDel.initialize();
extMetadaDel.setTrustFiltersInitialized(true);
metadataList.add(extMetadaDel);
logger.info("Loading Metadata bla bla");
}
}
}
} catch (MetadataProviderException | IOException | XMLParserException mpe){
logger.warn(mpe);
throw mpe;
}
catch (Exception e) {
logger.warn(e);
}
logger.info("Finished : Loading Metadata Data for all SSO enabled companies...");
return metadataList;
}
InputStreamMetadataProvider.java
public class InputStreamMetadataProvider extends AbstractReloadingMetadataProvider implements Serializable
{
public InputStreamMetadataProvider(String metadata) throws MetadataProviderException
{
super();
//metadataInputStream = metadata;
metadataInputStream = SSOUtil.getIdpAsStream(metadata);
}
@Override
protected byte[] fetchMetadata() throws MetadataProviderException
{
byte[] metadataBytes = metadataInputStream ;
if(metadataBytes.length>0)
return metadataBytes;
else
return null;
}
public byte[] getMetadataInputStream() {
return metadataInputStream;
}
}
SSOUtil.java
public class SSOUtil {
public static byte[] getIdpAsStream(String metadatXml) {
return metadatXml.getBytes();
}
}
After a user requests the metadata for their company, get the metadata for the entityId of each IdP -
SSOCachingMetadataManager.java
public class SSOCachingMetadataManager extends CachingMetadataManager{
@Override
public ExtendedMetadata getExtendedMetadata(String entityID) throws MetadataProviderException {
ExtendedMetadata extendedMetadata = null;
try {
//UAT Defect Fix - org.springframework.security.saml.metadata.ExtendedMetadataDelegate cannot be cast to biz.bsite.direct.spring.app.sso.ExtendedMetadataDelegate
//List<MetadataProvider> metadataList = (List<MetadataProvider>) GenericCache.getInstance().getCachedObject("ssoMetadataList", List.class.getClassLoader());
List<MetadataProvider> metadataList = SSOMetadataProvider.metadataList();
log.info("Retrieved Metadata List from Cassendra Cache size is :"+ (metadataList!=null ? metadataList.size(): 0) );
org.opensaml.xml.parse.StaticBasicParserPool parserPool = new org.opensaml.xml.parse.StaticBasicParserPool();
parserPool.initialize();
if(metadataList!=null){
//metadataList.addAll(getAvailableProviders());
//metadataList.addAll(getProviders());
//To remove duplicate entries from list, if any
Set<MetadataProvider> hs = new HashSet<MetadataProvider> ();
hs.addAll(metadataList);
metadataList.clear();
metadataList.addAll(hs);
//setAllProviders(metadataList);
//setTrustFilterInitializedToTrue();
//refreshMetadata();
}
if(metadataList!=null && metadataList.size()>0) {
for(MetadataProvider metadataProvider : metadataList){
log.info("metadataProvider instance of ExtendedMetadataDelegate: Looking for entityId"+entityID);
SSOMetadataDelegate ssoMetadataDelegate = null;
ExtendedMetadataDelegateWrapper extMetadaDel = null;
// extMetadaDel.getDelegate()
if(metadataProvider instanceof SSOMetadataDelegate)
{ssoMetadataDelegate = (SSOMetadataDelegate) metadataProvider;
((InputStreamMetadataProvider)ssoMetadataDelegate.getDelegate()).setParserPool(parserPool);
((InputStreamMetadataProvider)ssoMetadataDelegate.getDelegate()).initialize();
ssoMetadataDelegate.initialize();
ssoMetadataDelegate.setTrustFiltersInitialized(true);
if(!isMetadataAlreadyExist(ssoMetadataDelegate))
addMetadataProvider(ssoMetadataDelegate);
extMetadaDel = new ExtendedMetadataDelegateWrapper(ssoMetadataDelegate.getDelegate() , new org.springframework.security.saml.metadata.ExtendedMetadata());
}
else
extMetadaDel = new ExtendedMetadataDelegateWrapper(metadataProvider, new org.springframework.security.saml.metadata.ExtendedMetadata());
extMetadaDel.initialize();
extMetadaDel.setTrustFiltersInitialized(true);
extMetadaDel.initialize();
refreshMetadata();
extendedMetadata = extMetadaDel.getExtendedMetadata(entityID);
}
}
} catch (Exception e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
if(extendedMetadata!=null)
return extendedMetadata;
else{
return super.getExtendedMetadata(entityID);
}
}
private boolean isMetadataAlreadyExist(SSOMetadataDelegate ssoMetadataDelegate) {
boolean isExist = false;
for(ExtendedMetadataDelegate item : getAvailableProviders()){
if (item.getDelegate() != null && item.getDelegate() instanceof SSOMetadataDelegate) {
SSOMetadataDelegate that = (SSOMetadataDelegate) item.getDelegate();
try {
log.info("This Entity ID: "+ssoMetadataDelegate.getMetadata()!=null ? ((EntityDescriptorImpl)ssoMetadataDelegate.getMetadata()).getEntityID() : "nullEntity"+
"That Entity ID: "+that.getMetadata()!=null ? ((EntityDescriptorImpl)that.getMetadata()).getEntityID() : "nullEntity");
EntityDescriptorImpl e = (EntityDescriptorImpl) that.getMetadata();
isExist = this.getMetadata()!=null ? ((EntityDescriptorImpl)ssoMetadataDelegate.getMetadata()).getEntityID().equals(e.getEntityID()) : false;
if(isExist)
return isExist;
} catch (MetadataProviderException e1) {
// TODO Auto-generated catch block
e1.printStackTrace();
}
}
}
return isExist;
}
Add an entry in your Spring bean XML:
<bean id="metadata" class="pkg.path.SSOCachingMetadataManager">
<constructor-arg name="providers" value="#{ssoMetadataProvider.metadataList()}">
</constructor-arg>
<property name="RefreshCheckInterval" value="-1"/>
<property name="RefreshRequired" value="false"/>
</bean>
Let me know in case of any concerns.
I have recently configured two IDPs for the Spring SAML extension. Here we should follow one basic rule: for each IDP we want to add, we have to configure one IDP provider as well as one SP provider. We should configure the providers in a MetadataManager bean, CachingMetadataManager for example. Here are some code snippets to give an idea of what I am trying to say:
public void addProvider(String providerMetadataUrl, String idpEntityId, String spEntityId, String alias) {
addIDPMetadata(providerMetadataUrl, idpEntityId, alias);
addSPMetadata(spEntityId, alias);
}
public void addIDPMetadata(String providerMetadataUrl, String idpEntityId, String alias) {
try {
if (metadata.getIDPEntityNames().contains(idpEntityId)) {
return;
}
metadata.addMetadataProvider(extendedMetadataProvider(providerMetadataUrl, alias));
} catch (MetadataProviderException e1) {
log.error("Error initializing metadata", e1);
}
}
public void addSPMetadata(String spEntityId, String alias) {
try {
if (metadata.getSPEntityNames().contains(spEntityId)) {
return;
}
MetadataGenerator generator = new MetadataGenerator();
generator.setEntityId(spEntityId);
generator.setEntityBaseURL(baseURL);
generator.setExtendedMetadata(extendedMetadata(alias));
generator.setIncludeDiscoveryExtension(true);
generator.setKeyManager(keyManager);
EntityDescriptor descriptor = generator.generateMetadata();
ExtendedMetadata extendedMetadata = generator.generateExtendedMetadata();
MetadataMemoryProvider memoryProvider = new MetadataMemoryProvider(descriptor);
memoryProvider.initialize();
MetadataProvider metadataProvider = new ExtendedMetadataDelegate(memoryProvider, extendedMetadata);
metadata.addMetadataProvider(metadataProvider);
metadata.setHostedSPName(descriptor.getEntityID());
metadata.refreshMetadata();
} catch (MetadataProviderException e1) {
log.error("Error initializing metadata", e1);
}
}
public ExtendedMetadataDelegate extendedMetadataProvider(String providerMetadataUrl, String alias)
throws MetadataProviderException {
HTTPMetadataProvider provider = new HTTPMetadataProvider(this.bgTaskTimer, httpClient, providerMetadataUrl);
provider.setParserPool(parserPool);
ExtendedMetadataDelegate delegate = new ExtendedMetadataDelegate(provider, extendedMetadata(alias));
delegate.setMetadataTrustCheck(true);
delegate.setMetadataRequireSignature(false);
return delegate;
}
private ExtendedMetadata extendedMetadata(String alias) {
ExtendedMetadata exmeta = new ExtendedMetadata();
exmeta.setIdpDiscoveryEnabled(true);
exmeta.setSignMetadata(false);
exmeta.setEcpEnabled(true);
if (alias != null && alias.length() > 0) {
exmeta.setAlias(alias);
}
return exmeta;
}
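As a hypothetical usage of the snippets above (the metadata URL, entity IDs and alias are placeholders), registering a second IDP at runtime would look like:
// Registers the second IDP's metadata and a matching hosted-SP metadata entry
addProvider("https://idp2.example.com/metadata.xml",                 // IDP metadata URL (placeholder)
            "https://idp2.example.com/idp/entity-id",                // IDP entityID (placeholder)
            "https://sp.example.com/saml/metadata/alias/second",     // SP entityID (placeholder)
            "second");                                               // alias used for this SP entry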
You can find all answers to your question in the Spring SAML manual.
The sample application which is included as part of the product already includes metadata for two IDPs; use it as an example.
A statement on IDPs is included in chapter 1.2:
All products supporting SAML 2.0 in Identity Provider mode (e.g. ADFS 2.0, Shibboleth, OpenAM/OpenSSO, Efecte Identity or Ping Federate) can be used with the extension.
