Multiple ClientBootstrap issue - websocket

I'm coding a tool for load testing a WebSocket server. I need to create a lot (tens of thousands) of client connections to the server.
So I have a Client class. Inside this class I create new instances of:
ChannelPipelineFactory (with my handlers and the WebSocket client handshaker)
ClientBootstrap
In the run() method I have the following code:
public void run() {
    clientBootstrap.setPipelineFactory(clientChannelPipelineFactory);
    ChannelFuture future = clientBootstrap.connect(
            new InetSocketAddress(
                    clientConfiguration.getHost(),
                    clientConfiguration.getPort()
            )
    );
    try {
        future.awaitUninterruptibly().rethrowIfFailed();
        WebSocketClientHandshaker handshaker = clientChannelPipelineFactory.getHandshaker();
        channel = future.getChannel();
        handshaker.handshake(channel).awaitUninterruptibly().rethrowIfFailed();
    } catch (Exception e) {
        log.error("Error in the client channel", e);
        stop();
    }
}
The channel returned by the ChannelFuture is saved as a field in the Client.
Then I do my work and try to close all opened channels. The stop() method:
public void stop() {
    log.debug(String.format("Close channel for client(%s)", id));
    if (channel != null) {
        if (channel.isWritable()) {
            log.debug(String.format("Channel for client(%s) is writable", id));
            ChannelFuture writeFuture = channel.write(new CloseWebSocketFrame());
            writeFuture.addListener(ChannelFutureListener.CLOSE);
        }
    }
    clientBootstrap.releaseExternalResources();
}
But when stop() is called on any client, it closes all channels!?
p.s.
Code that closes all channels (single-threaded):
for (FSBBridgeServerClient client : clients) {
    for (FSBBridgeServerClient subClient : clients) {
        log.debug("c:" + subClient.getChannel());
        log.debug("c:" + subClient.getChannel().isOpen());
    }
    client.stop();
}
Some debug log:
2012-04-04 17:19:29,441 DEBUG [main] ClientApp - c:[id: 0x2344b18f, /127.0.0.1:38366 => localhost/127.0.0.1:5544]
2012-04-04 17:19:29,441 DEBUG [main] ClientApp - c:true
2012-04-04 17:19:29,442 DEBUG [main] ClientApp - c:[id: 0x01c20eb7, /127.0.0.1:38367 => localhost/127.0.0.1:5544]
2012-04-04 17:19:29,442 DEBUG [main] ClientApp - c:true
2012-04-04 17:19:34,414 DEBUG [main] ClientApp - c:[id: 0x2344b18f, /127.0.0.1:38366 :> localhost/127.0.0.1:5544]
2012-04-04 17:19:34,414 DEBUG [main] ClientApp - c:false
2012-04-04 17:19:34,414 DEBUG [main] ClientApp - c:[id: 0x01c20eb7, /127.0.0.1:38367 :> localhost/127.0.0.1:5544]
2012-04-04 17:19:34,414 DEBUG [main] ClientApp - c:false

I think your problem is the call to clientBootstrap.releaseExternalResources().
According to the documentation, this method simply delegates the call to ChannelFactory.releaseExternalResources(), which shuts down the thread pools backing the channel factory. Those threads service every channel the factory has created, so if your clients share a ChannelFactory, stopping a single client tears down the connections of all of them. Release the external resources once, after all clients are finished, instead of in each client's stop().
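A minimal sketch of that arrangement (Netty 3.x; the shared-factory setup is an assumption based on your description, since creating a factory per client would spawn far too many threads):

// One factory, and thus one set of boss/worker thread pools, per test run:
ChannelFactory factory = new NioClientSocketChannelFactory(
        Executors.newCachedThreadPool(),   // boss threads
        Executors.newCachedThreadPool());  // worker threads

// Each Client builds its own bootstrap on top of the shared factory:
ClientBootstrap clientBootstrap = new ClientBootstrap(factory);

// Per-client stop(): close only this client's channel...
if (channel != null && channel.isWritable()) {
    channel.write(new CloseWebSocketFrame())
           .addListener(ChannelFutureListener.CLOSE);
}

// ...and release the shared resources exactly once, after ALL clients
// have been stopped:
factory.releaseExternalResources();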

How to use RemoteFileTemplate<SmbFile> in Spring integration?

I've got a Spring @Component where an SmbSessionFactory is injected to create a RemoteFileTemplate<SmbFile>. When my application runs, this piece of code is called multiple times:
public void process(Message myMessage, String filename) {
    StopWatch stopWatch = StopWatch.createStarted();
    byte[] bytes = marshallMessage(myMessage);
    String destination = smbConfig.getDir() + filename + ".xml";
    if (log.isDebugEnabled()) {
        log.debug("Result: {}", new String(bytes));
    }
    Optional<IOException> optionalEx =
            remoteFileTemplate.execute(
                    session -> {
                        try (InputStream inputStream = new ByteArrayInputStream(bytes)) {
                            session.write(inputStream, destination);
                        } catch (IOException e1) {
                            return Optional.of(e1);
                        }
                        return Optional.empty();
                    });
    log.info("processed Message in {}", stopWatch.formatTime());
    optionalEx.ifPresent(
            ioe -> {
                throw new UncheckedIOException(ioe);
            });
}
This works (i.e. the file is written) and all is fine, except that I see warnings appearing in my log:
DEBUG my.package.MyClass Result: <?xml version="1.0" encoding="UTF-8" standalone="yes"?>....
INFO org.springframework.integration.smb.session.SmbSessionFactory SMB share init: XXX
WARN jcifs.smb.SmbResourceLocatorImpl Path consumed out of range 15
WARN jcifs.smb.SmbTreeImpl Disconnected tree while still in use SmbTree[share=XXX,service=null,tid=1,inDfs=true,inDomainDfs=true,connectionState=3,usage=2]
INFO org.springframework.integration.smb.session.SmbSession Successfully wrote remote file [path\to\myfile.xml].
WARN jcifs.smb.SmbSessionImpl Logging off session while still in use SmbSession[credentials=XXX,targetHost=XXX,targetDomain=XXX,uid=0,connectionState=3,usage=1]:[SmbTree[share=XXX,service=null,tid=1,inDfs=false,inDomainDfs=false,connectionState=0,usage=1], SmbTree[share=XXX,service=null,tid=5,inDfs=false,inDomainDfs=false,connectionState=2,usage=0]]
jcifs.smb.SmbTransportImpl Disconnecting transport while still in use Transport746[XXX/999.999.999.999:445,state=5,signingEnforced=false,usage=1]: [SmbSession[credentials=XXX,targetHost=XXX,targetDomain=XXX,uid=0,connectionState=2,usage=1], SmbSession[credentials=XXX,targetHost=XXX,targetDomain=null,uid=0,connectionState=2,usage=0]]
INFO my.package.MyClass processed Message in 00:00:00.268
The process method is called from a REST method, which does little else.
What am I doing wrong here?
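For what it's worth, the repeated "SMB share init" lines suggest a fresh SMB session is created and logged off on every call. A sketch of one thing worth trying (an assumption, not a confirmed fix for these exact jcifs warnings): wrap the SmbSessionFactory in Spring Integration's CachingSessionFactory so execute() borrows a pooled session instead of building and tearing one down each time. Host and credential values below are placeholders.

@Bean
public SessionFactory<SmbFile> smbSessionFactory() {
    SmbSessionFactory smb = new SmbSessionFactory();
    smb.setHost("myHost");           // placeholder
    smb.setUsername("myUser");       // placeholder
    smb.setPassword("myPassword");   // placeholder
    smb.setShareAndDir("myShare/");  // placeholder
    // Pool sessions instead of creating and logging off one per call
    return new CachingSessionFactory<>(smb);
}

@Bean
public RemoteFileTemplate<SmbFile> remoteFileTemplate(SessionFactory<SmbFile> smbSessionFactory) {
    return new RemoteFileTemplate<>(smbSessionFactory);
}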

Flux.subscribe finishes before the last element is processed

Strange behavior of Spring + Flux. I have Python server code (using Flask, but that's not important, treat it as pseudo-code) which streams its response:
def generate():
    for row in range(0, 10):
        time.sleep(1)
        yield json.dumps({"count": row}) + '\n'

return Response(generate(), mimetype='application/json')
With that, I simulate processing some tasks from a list and sending the results as soon as they are ready, instead of waiting for everything to be done, mostly to avoid keeping all of it in memory, first on the server and then on the client. Now I want to consume that with Spring WebClient:
Flux<Count> alerts = webClient
        .post()
        .uri("/testStream")
        .accept(MediaType.APPLICATION_JSON)
        .retrieve()
        .bodyToFlux(Count.class)
        .log();
alerts.subscribe(a -> log.debug("Received count: " + a.count));
Mono<Void> mono = Mono.when(alerts);
mono.block();
log.debug("All done in method");
Here is what I'm getting in the log:
2019-07-03 18:45:08.330 DEBUG 16256 --- [ctor-http-nio-4] c.k.c.restapi.rest.Controller : Received count: 8
2019-07-03 18:45:09.323 INFO 16256 --- [ctor-http-nio-2] reactor.Flux.MonoFlatMapMany.4 : onNext(com.ksftech.chainfacts.restapi.rest.Controller$Count#55d09f83)
2019-07-03 18:45:09.324 INFO 16256 --- [ctor-http-nio-2] reactor.Flux.MonoFlatMapMany.4 : onComplete()
2019-07-03 18:45:09.325 DEBUG 16256 --- [io-28088-exec-4] c.k.c.restapi.rest.Controller : All done in method
2019-07-03 18:45:09.331 INFO 16256 --- [ctor-http-nio-4] reactor.Flux.MonoFlatMapMany.4 : onNext(com.ksftech.chainfacts.restapi.rest.Controller$Count#da447dd)
2019-07-03 18:45:09.332 DEBUG 16256 --- [ctor-http-nio-4] c.k.c.restapi.rest.Controller : Received count: 9
2019-07-03 18:45:09.333 INFO 16256 --- [ctor-http-nio-4] reactor.Flux.MonoFlatMapMany.4 : onComplete()
Notice how the last object is processed by subscribe after mono.block() returns. I understand that Reactor is asynchronous, and once it sees no more objects, it releases the Mono and calls my code in subscribe in parallel. Then it is at the mercy of the scheduler which runs first.
I came up with a quite ugly kludge: a subscribe with a completeConsumer, using good old wait/notify. That works fine. But is there a more elegant way of making sure my method waits until all elements of the Flux are processed?
OK, I have studied this area and realized that Reactor is meant for asynchronous execution. If I need it to behave synchronously, I have to use synchronization. And to have code which executes after everything has been fed to subscribe, I need to use doOnComplete:
public class FluxResult {
    public boolean success = true;
    public Exception ex = null;
    public void error() {success = false;}
    public void error(Exception e) {success = false; ex = e;}
    public synchronized void waitForFluxCompletion() throws InterruptedException {
        wait();
    }
    public synchronized void notifyAboutFluxCompletion() {
        notify();
    }
}
.... // do something which returns Flux
myflux
    .doFirst(() -> {
        // initialization
    })
    .doOnError(e -> {
        log.error("Exception", e);
    })
    .doOnComplete(() -> {
        try {
            // finalization. If we were accumulating objects, now flush them
        }
        catch (Exception e) {
            log.error("Exception", e);
            flux_res.error(e);
        }
        finally {
            flux_res.notifyAboutFluxCompletion();
        }
    })
    .subscribe(str -> {
        // something which must be executed for each item
    });
And then wait for the object to be signaled:
flux_res.waitForFluxCompletion();
if (!flux_res.success) {
    if (flux_res.ex != null) {
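As a side note, there is a second subscription hiding in the original snippet: alerts.subscribe(...) and Mono.when(alerts) each subscribe to the Flux independently, and with WebClient each subscription typically triggers its own request, so mono.block() waits for its own copy of the stream rather than for the logging subscriber. A more compact alternative (a sketch, assuming the goal is simply to block the calling thread until the stream is drained) is to do the per-item work in doOnNext and block on the Flux itself:

Flux<Count> alerts = webClient
        .post()
        .uri("/testStream")
        .accept(MediaType.APPLICATION_JSON)
        .retrieve()
        .bodyToFlux(Count.class);

// blockLast() subscribes exactly once and returns only after onComplete,
// so every doOnNext callback has already run by this point.
alerts.doOnNext(a -> log.debug("Received count: " + a.count))
      .blockLast();
log.debug("All done in method");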

keeping connection alive to websocket when using ServerWebSocketContainer

I am trying to create a websocket-based application where the server needs to keep the connections to its clients alive with a heartbeat.
I checked the ServerWebSocketContainer.SockJsServiceOptions class for this, but could not figure out how to use it. I am using the code from the spring-integration sample:
@Bean
ServerWebSocketContainer serverWebSocketContainer() {
    return new ServerWebSocketContainer("/messages").withSockJs();
}

@Bean
MessageHandler webSocketOutboundAdapter() {
    return new WebSocketOutboundMessageHandler(serverWebSocketContainer());
}

@Bean(name = "webSocketFlow.input")
MessageChannel requestChannel() {
    return new DirectChannel();
}

@Bean
IntegrationFlow webSocketFlow() {
    return f -> {
        Function<Message, Object> splitter = m -> serverWebSocketContainer()
                .getSessions()
                .keySet()
                .stream()
                .map(s -> MessageBuilder.fromMessage(m)
                        .setHeader(SimpMessageHeaderAccessor.SESSION_ID_HEADER, s)
                        .build())
                .collect(Collectors.toList());
        f.split(Message.class, splitter)
                .channel(c -> c.executor(Executors.newCachedThreadPool()))
                .handle(webSocketOutboundAdapter());
    };
}

@RequestMapping("/hi/{name}")
public void send(@PathVariable String name) {
    requestChannel().send(MessageBuilder.withPayload(name).build());
}
Please let me know how I can set the heartbeat options to ensure the connection is kept alive unless the client de-registers itself.
Thanks
Actually you got it right, but missed a bit of convenience :-).
You can configure it like this:
@Bean
ServerWebSocketContainer serverWebSocketContainer() {
    return new ServerWebSocketContainer("/messages")
            .withSockJs(new ServerWebSocketContainer.SockJsServiceOptions()
                    .setHeartbeatTime(60_000));
}
Although it isn't clear to me why you need to configure it at all, given this:
/**
 * The amount of time in milliseconds when the server has not sent any
 * messages and after which the server should send a heartbeat frame to the
 * client in order to keep the connection from breaking.
 * <p>The default value is 25,000 (25 seconds).
 */
public SockJsServiceRegistration setHeartbeatTime(long heartbeatTime) {
    this.heartbeatTime = heartbeatTime;
    return this;
}
UPDATE
In the Spring Integration Samples we have the stomp-chat application.
I have added something like this to the stomp-server.xml:
<int-websocket:server-container id="serverWebSocketContainer" path="/chat">
    <int-websocket:sockjs heartbeat-time="10000"/>
</int-websocket:server-container>
Added this to the application.properties:
logging.level.org.springframework.web.socket.sockjs.transport.session=trace
And this to the index.html:
sock.onheartbeat = function() {
    console.log('heartbeat');
};
After connecting the client I see this in the server log:
2015-10-13 19:03:06.574 TRACE 7960 --- [ SockJS-3] s.w.s.s.t.s.WebSocketServerSockJsSession : Writing SockJsFrame content='h'
2015-10-13 19:03:06.574 TRACE 7960 --- [ SockJS-3] s.w.s.s.t.s.WebSocketServerSockJsSession : Cancelling heartbeat in session sogfe2dn
2015-10-13 19:03:06.574 TRACE 7960 --- [ SockJS-3] s.w.s.s.t.s.WebSocketServerSockJsSession : Scheduled heartbeat in session sogfe2dn
2015-10-13 19:03:16.576 TRACE 7960 --- [ SockJS-8] s.w.s.s.t.s.WebSocketServerSockJsSession : Preparing to write SockJsFrame content='h'
2015-10-13 19:03:16.576 TRACE 7960 --- [ SockJS-8] s.w.s.s.t.s.WebSocketServerSockJsSession : Writing SockJsFrame content='h'
2015-10-13 19:03:16.576 TRACE 7960 --- [ SockJS-8] s.w.s.s.t.s.WebSocketServerSockJsSession : Cancelling heartbeat in session sogfe2dn
2015-10-13 19:03:16.576 TRACE 7960 --- [ SockJS-8] s.w.s.s.t.s.WebSocketServerSockJsSession : Scheduled heartbeat in session sogfe2dn
In the browser's console I see the 'heartbeat' messages logged as well.
So it looks like the heart-beat feature works well...

HbaseTestingUtility: could not start my mini-cluster

I was trying to test my HBase code using HBaseTestingUtility. Every time I started my mini-cluster using the code snippet below, I got an exception.
public void startCluster()
{
    File workingDirectory = new File("./");
    Configuration conf = new Configuration();
    System.setProperty("test.build.data", workingDirectory.getAbsolutePath());
    conf.set("test.build.data", new File(workingDirectory, "zookeeper").getAbsolutePath());
    conf.set("fs.default.name", "file:///");
    conf.set("zookeeper.session.timeout", "180000");
    conf.set("hbase.zookeeper.peerport", "2888");
    conf.set("hbase.zookeeper.property.clientPort", "2181");
    conf.addResource(new Path("conf/hbase-site1.xml"));
    try
    {
        masterDir = new File(workingDirectory, "hbase");
        conf.set(HConstants.HBASE_DIR, masterDir.toURI().toURL().toString());
    }
    catch (MalformedURLException e1)
    {
        logger.error(e1.getMessage());
    }
    Configuration hbaseConf = HBaseConfiguration.create(conf);
    utility = new HBaseTestingUtility(hbaseConf);
    // Change permission for dfs.data.dir, please refer to
    // https://issues.apache.org/jira/browse/HBASE-5711 for more details.
    try
    {
        Process process = Runtime.getRuntime().exec("/bin/sh -c umask");
        BufferedReader br = new BufferedReader(new InputStreamReader(process.getInputStream()));
        int rc = process.waitFor();
        if (rc == 0)
        {
            String umask = br.readLine();
            int umaskBits = Integer.parseInt(umask, 8);
            int permBits = 0777 & ~umaskBits;
            String perms = Integer.toString(permBits, 8);
            logger.info("Setting dfs.datanode.data.dir.perm to " + perms);
            utility.getConfiguration().set("dfs.datanode.data.dir.perm", perms);
        }
        else
        {
            logger.warn("Failed running umask command in a shell, nonzero return value");
        }
    }
    catch (Exception e)
    {
        // ignore errors, we might not be running on POSIX, or "sh" might
        // not be on the path
        logger.warn("Couldn't get umask", e);
    }
    if (!checkIfServerRunning())
    {
        hTablePool = new HTablePool(conf, 1);
        try
        {
            zkCluster = new MiniZooKeeperCluster(conf);
            zkCluster.setDefaultClientPort(2181);
            zkCluster.setTickTime(18000);
            zkDir = new File(utility.getClusterTestDir().toString());
            zkCluster.startup(zkDir);
            utility.setZkCluster(zkCluster);
            utility.startMiniCluster();
            utility.getHBaseCluster().startMaster();
        }
        catch (Exception e)
        {
            e.printStackTrace();
            logger.error(e.getMessage());
            throw new RuntimeException(e);
        }
    }
}
The exception I got is as follows.
2013-09-10 15:26:26 INFO ClientCnxn:849 - Socket connection established to localhost/127.0.0.1:2181, initiating session
2013-09-10 15:26:26 INFO ZooKeeperServer:839 - Client attempting to establish new session at /127.0.0.1:45934
2013-09-10 15:26:26 INFO ZooKeeperServer:595 - Established session 0x141074cd6150002 with negotiated timeout 180000 for client /127.0.0.1:45934
2013-09-10 15:26:26 INFO ClientCnxn:1207 - Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x141074cd6150002, negotiated timeout = 180000
2013-09-10 15:26:26 INFO HBaseRPC:289 - Server at localhost/127.0.0.1:42926 could not be reached after 1 tries, giving up.
2013-09-10 15:26:26 WARN AssignmentManager:1714 - Failed assignment of -ROOT-,,0.70236052 to localhost,42926,1378806982623, trying to assign elsewhere instead; retry=0
org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed setting up proxy interface org.apache.hadoop.hbase.ipc.HRegionInterface to localhost/127.0.0.1:42926 after attempts=1
at org.apache.hadoop.hbase.ipc.HBaseRPC.handleConnectionException(HBaseRPC.java:291)
at org.apache.hadoop.hbase.ipc.HBaseRPC.waitForProxy(HBaseRPC.java:259)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getHRegionConnection(HConnectionManager.java:1305)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getHRegionConnection(HConnectionManager.java:1261)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getHRegionConnection(HConnectionManager.java:1248)
at org.apache.hadoop.hbase.master.ServerManager.getServerConnection(ServerManager.java:550)
at org.apache.hadoop.hbase.master.ServerManager.sendRegionOpen(ServerManager.java:483)
at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1664)
at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1387)
at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1362)
at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1357)
at org.apache.hadoop.hbase.master.AssignmentManager.assignRoot(AssignmentManager.java:2236)
at org.apache.hadoop.hbase.master.HMaster.assignRootAndMeta(HMaster.java:654)
at org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:551)
at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:362)
at java.lang.Thread.run(Thread.java:722)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:692)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:207)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:525)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:489)
at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.setupConnection(HBaseClient.java:416)
at org.apache.hadoop.hbase.ipc.HBaseClient$Connection.setupIOstreams(HBaseClient.java:462)
at org.apache.hadoop.hbase.ipc.HBaseClient.getConnection(HBaseClient.java:1150)
at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:1000)
at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:150)
at com.sun.proxy.$Proxy20.getProtocolVersion(Unknown Source)
at org.apache.hadoop.hbase.ipc.WritableRpcEngine.getProxy(WritableRpcEngine.java:183)
at org.apache.hadoop.hbase.ipc.HBaseRPC.getProxy(HBaseRPC.java:335)
at org.apache.hadoop.hbase.ipc.HBaseRPC.getProxy(HBaseRPC.java:312)
at org.apache.hadoop.hbase.ipc.HBaseRPC.getProxy(HBaseRPC.java:364)
at org.apache.hadoop.hbase.ipc.HBaseRPC.waitForProxy(HBaseRPC.java:236)
... 14 more
2013-09-10 15:26:26 WARN AssignmentManager:1736 - Unable to find a viable location to assign region -ROOT-,,0.70236052
2013-09-10 15:27:24 INFO audit:5677 - allowed=true ugi=aniket (auth:SIMPLE) ip=/127.0.0.1 cmd=listStatus src=/user/aniket/hbase/.oldlogs dst=null perm=null
2013-09-10 15:27:24 INFO audit:5677 - allowed=true ugi=aniket (auth:SIMPLE) ip=/127.0.0.1 cmd=listStatus src=/user/aniket/hbase/.archive dst=null perm=null
2013-09-10 15:28:24 INFO audit:5677 - allowed=true ugi=aniket (auth:SIMPLE) ip=/127.0.0.1 cmd=listStatus src=/user/aniket/hbase/.archive dst=null perm=null
2013-09-10 15:28:24 INFO audit:5677 - allowed=true ugi=aniket (auth:SIMPLE) ip=/127.0.0.1 cmd=listStatus src=/user/aniket/hbase/.oldlogs dst=null perm=null
2013-09-10 15:29:24 INFO audit:5677 - allowed=true ugi=aniket (auth:SIMPLE) ip=/127.0.0.1 cmd=listStatus src=/user/aniket/hbase/.oldlogs dst=null perm=null
2013-09-10 15:29:24 INFO audit:5677 - allowed=true ugi=aniket (auth:SIMPLE) ip=/127.0.0.1 cmd=listStatus src=/user/aniket/hbase/.archive dst=null perm=null
2013-09-10 15:29:42 ERROR MiniHBaseCluster:201 - Error starting cluster
java.lang.RuntimeException: Master not initialized after 200 seconds
at org.apache.hadoop.hbase.util.JVMClusterUtil.startup(JVMClusterUtil.java:206)
at org.apache.hadoop.hbase.LocalHBaseCluster.startup(LocalHBaseCluster.java:420)
at org.apache.hadoop.hbase.MiniHBaseCluster.init(MiniHBaseCluster.java:196)
at org.apache.hadoop.hbase.MiniHBaseCluster.<init>(MiniHBaseCluster.java:76)
at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniHBaseCluster(HBaseTestingUtility.java:635)
at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:609)
at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:557)
at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:526)
at HBaseTesting.startCluster(HBaseTesting.java:131)
at HBaseTesting.main(HBaseTesting.java:62)
2013-09-10 15:29:42 INFO HMaster:1635 - Cluster shutdown requested
2013-09-10 15:29:42 INFO HRegionServer:1666 - STOPPED: Shutdown requested
2013-09-10 15:29:42 INFO HBaseServer:1651 - Stopping server on 42926
Could anyone help me with a solution?
Here's my solution: I had to update my /etc/hosts file.
The two entries of interest were:
127.0.0.1 localhost
127.0.1.1 myhostname
I had to change the second entry so that myhostname also points to 127.0.0.1. HBase resolves the local hostname when the master and region servers find each other, and a mismatch between the two loopback addresses can leave one side unreachable, which matches the "Connection refused" errors above. Once my /etc/hosts was updated to look like this:
127.0.0.1 localhost
127.0.0.1 myhostname
the code started working.
(This assumes a Linux server; replace /etc/hosts with the equivalent file for your operating system.)
http://en.wikipedia.org/wiki/Hosts_(file)
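To check what address the JVM actually resolves your hostname to (a quick diagnostic sketch, independent of HBase):

import java.net.InetAddress;

public class HostCheck {
    public static void main(String[] args) throws Exception {
        InetAddress local = InetAddress.getLocalHost();
        // If this prints 127.0.1.1 (or another address the mini-cluster
        // is not actually bound to), that matches the "Connection refused"
        // in the stack trace above.
        System.out.println(local.getHostName() + " -> " + local.getHostAddress());
    }
}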
Adding the Guava dependency to my Gradle file worked for me:
compile group: 'com.google.guava', name: 'guava', version: '14.0'

Play framework - java.nio.channels.ClosedChannelException

Here is the scenario.
I am using the Play framework. Inside a given handler, Play calls my API webservice and returns the API response to the client. The client calls the handler through an Ajax call. Sometimes the response comes back fine, but often I see an error response on the client side. Checking the Play framework logs, I see a java.nio.channels.ClosedChannelException.
I am using Play 2.1.1.
My API webservice is running on localhost:8888. Play is running on port 9000.
The API service response is correct, and Play executes the callback correctly, as I can see from the logs. The error happens after the ok() call has been made from Play.
Here are the error logs for a failed request -
[debug] application - find...
[debug] application - id = 647110558
[trace] c.jolbox.bonecp - Check out connection [9 leased]
[trace] c.jolbox.bonecp - Check in connection [9 leased]
[debug] application - socialUser = SocialUser(UserId(647110558,facebook),Arvind,Batra,Arvind Batra,Some(arvindbatra@gmail.com),null,AuthenticationMethod(oauth2),null,Some(OAuth2Info(CAAHNVOUuNZAEBAMa3CPLUEsZA2Tp5xWGXylO9HggBY0TCfwsIn4iGUdlRMpuNPLxYcObKO5ZBZCU0ghS9ymHZC3s9YXpsfPix9AM1EhNyETvDR85HHYg8j7JO0h2WzGZBsKJdbFPhPmkD6ZBZAq6KTT8RLSQrmpfnHQZD,null,null,null)),null)
[info] application - Calling interest for fff
[info] application - user is not null
[trace] c.jolbox.bonecp - Check out connection [9 leased]
[info] application - interest=fff, userInfo:models.EBUser#14a420e1
[info] application - http://localhost:8888/api/add_interest/1/fff
[debug] c.n.h.c.p.n.NettyAsyncHttpProvider - Using cached Channel [id: 0x9d1dee2d, /127.0.0.1:50316 => localhost/127.0.0.1:8888]
for uri http://localhost:8888/api/add_interest/1/fff
[debug] c.n.h.c.p.n.NettyAsyncHttpProvider -
Using cached Channel [id: 0x9d1dee2d, /127.0.0.1:50316 => localhost/127.0.0.1:8888]
for request
DefaultHttpRequest(chunked: false)
GET /api/add_interest/1/fff HTTP/1.1
Host: localhost:8888
Connection: keep-alive
Accept: */*
User-Agent: NING/1.0
[debug] c.n.h.c.p.n.NettyAsyncHttpProvider -
Request DefaultHttpRequest(chunked: false)
GET /api/add_interest/1/fff HTTP/1.1
Host: localhost:8888
Connection: keep-alive
Accept: */*
User-Agent: NING/1.0
Response DefaultHttpResponse(chunked: true)
HTTP/1.1 200 OK
Content-Type: text/plain
Date: Thu, 04 Jul 2013 12:14:40 GMT
Transfer-Encoding: chunked
[debug] c.n.h.c.p.n.NettyConnectionsPool - Adding uri: http://localhost:8888 for channel [id: 0x9d1dee2d, /127.0.0.1:50316 => localhost/127.0.0.1:8888]
[info] application - {"status":"success"}
[info] application - {"status":"ok","exists":false}
[trace] play - Sending simple result: SimpleResult(200, Map(Content-Type -> application/json; charset=utf-8, Set-Cookie -> ))
[debug] play - java.nio.channels.ClosedChannelException
[trace] application - Exception caught in Netty
java.nio.channels.ClosedChannelException: null
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.cleanUpWriteBuffer(AbstractNioWorker.java:409) ~[netty.jar:na]
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.writeFromUserCode(AbstractNioWorker.java:127) ~[netty.jar:na]
at org.jboss.netty.channel.socket.nio.NioServerSocketPipelineSink.handleAcceptedSocket(NioServerSocketPipelineSink.java:99) ~[netty.jar:na]
at org.jboss.netty.channel.socket.nio.NioServerSocketPipelineSink.eventSunk(NioServerSocketPipelineSink.java:36) ~[netty.jar:na]
at org.jboss.netty.channel.Channels.write(Channels.java:725) ~[netty.jar:na]
at org.jboss.netty.handler.codec.oneone.OneToOneEncoder.doEncode(OneToOneEncoder.java:71) ~[netty.jar:na]
[debug] c.n.h.c.p.n.NettyConnectionsPool - Entry count for : http://localhost:8888 : 2
Here is my sample code -
public static Result addInterestCallback(WS.Response response) {
    if (response == null) {
        return badRequest();
    }
    ObjectNode result = (ObjectNode) response.asJson();
    try {
        Logger.info(result.toString());
        if (result.has("status")) {
            String status = result.get("status").getTextValue();
            if (status.equals("error")) {
                result.put("error", "Oops, cannot process your request. Sorry.");
                Logger.info("error");
                return badRequest(result);
            }
            else if (status.equals("exists")) {
                result.put("exists", true);
            }
            else {
                result.put("exists", false);
            }
            result.put("status", "ok");
        } else {
            //do something
            Logger.info("result has no status");
        }
        Logger.info(result.toString());
    } catch (Exception e) {
        e.printStackTrace();
    }
    return ok(result);
}
@BodyParser.Of(BodyParser.Json.class)
@SecureSocial.UserAwareAction
public static Result addInterest() {
    JsonNode json = request().body().asJson();
    String interestName = json.findPath("interestName").getTextValue();
    Logger.info("Calling interest for " + interestName);
    Identity user = (Identity) ctx().args.get(SecureSocial.USER_KEY);
    if (user == null) {
        ObjectNode result = Json.newObject();
        result.put("error", "requires-login");
        Http.Context ctx = Http.Context.current();
        ctx.flash().put("error", play.i18n.Messages.get("securesocial.loginRequired"));
        result.put("redirect", RoutesHelper.login().absoluteURL(ctx.request(), IdentityProvider.sslEnabled()));
        return ok(result);
    }
    Logger.info("user is not null");
    if (interestName == null) {
        ObjectNode result = Json.newObject();
        result.put("error", "Empty input");
        return badRequest(result);
    }
    //get user
    EBUser ebUser = Application.getEBUser();
    Logger.info("interest=" + interestName + ", userInfo:" + ebUser.toString());
    //Call addInterest API.
    String apiEndpoint = Play.application().configuration().getString(AppConstants.EB_API_ENDPOINT);
    String url = "";
    try {
        url = apiEndpoint + "add_interest/" + ebUser.getId() + "/" + URLEncoder.encode(interestName, "UTF-8");
    } catch (UnsupportedEncodingException e) {
        e.printStackTrace();
        ObjectNode result = Json.newObject();
        result.put("error", "Cant parse interest properly");
        Logger.info("error " + result.toString());
        return badRequest(result);
    }
    Logger.info(url);
    Promise<WS.Response> promiseOfAPI = WS.url(url).get();
    Promise<Result> promiseOfResult = promiseOfAPI.map(
            new Function<WS.Response, Result>() {
                @Override
                public Result apply(WS.Response response) throws Throwable {
                    return addInterestCallback(response);
                }
            });
    return async(promiseOfResult);
}
The name of the handler is addInterest.
Any pointers on what could be happening here?
java.nio.channels.ClosedChannelException
This means a channel was closed and then used again. Since you are not closing it yourself, the most likely cause is that the browser aborted or timed out the Ajax request before Play wrote the response, so the write triggered by ok() hit a channel that was already closed on the client's side.
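A minimal illustration of the semantics with plain NIO (nothing Play-specific; it assumes something is listening on localhost:8888):

import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;

public class ClosedChannelDemo {
    public static void main(String[] args) throws Exception {
        SocketChannel ch = SocketChannel.open(new InetSocketAddress("localhost", 8888));
        ch.close();
        // Writing after close() throws java.nio.channels.ClosedChannelException
        ch.write(ByteBuffer.wrap("hello".getBytes()));
    }
}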
