How to run only one TimerTask? - spring

I use STOMP to send data A every 2 seconds and data B every 10 minutes once the WebSocket connection is established.
So I split this into separate TimerTasks and implemented them, and they work fine.
The problem is that every time another WebSocket connects, the same TimerTask is scheduled again, so it ends up running once per connected WebSocket.
I need a way to avoid scheduling a TimerTask that is already running.
// method called by the controller
public void publish(DateParam param, CustomLuluSession session) throws BizException {
    sendTrend(param, session);        // every 2 seconds
    sendIssueKeyword(param, session); // every 10 minutes
}

/*
 * sendTrend and sendIssueKeyword are implemented the same way;
 * only the interval differs, so only one is shown as an example.
 */
public void sendTrend(DateParam param, CustomLuluSession session) throws BizException {
    TimerTask task = trendTimerTask(param, session);
    final Timer timer = new Timer();
    timer.schedule(task, 0, TWO_SECOND);
}

public TimerTask trendTimerTask(DateParam param, CustomLuluSession session) throws BizException {
    TimerTask tempTask = new TimerTask() {
        @Override
        public void run() {
            try {
                if (getSession() == 0) this.cancel();
                getTrendInfo(param, session);
            } catch (BizException e) {
                throw new RuntimeException(e);
            }
        }
    };
    return tempTask;
}

// Access the DB to get the data and send it to the client
public void getTrendInfo(DateParam param, CustomLuluSession session) throws BizException {
    FindInquiryTrendResult inquiryTrend = inquiryFacade.findInquiryTrend(param, session);
    messagingTemplate.convertAndSend(ROOM_ID, inquiryTrend);
}
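One possible guard (a sketch only, not from the original post; it reuses the question's CustomLuluSession, trendTimerTask and TWO_SECOND) is to keep one Timer per session and schedule a task only when no entry exists yet:

// Sketch: track one Timer per session so repeated publish()/sendTrend() calls
// do not schedule the same TimerTask twice.
private final Map<CustomLuluSession, Timer> trendTimers = new ConcurrentHashMap<>();

public synchronized void sendTrend(DateParam param, CustomLuluSession session) throws BizException {
    if (trendTimers.containsKey(session)) {
        return; // a trend task is already running for this session
    }
    Timer timer = new Timer();
    timer.schedule(trendTimerTask(param, session), 0, TWO_SECOND);
    trendTimers.put(session, timer);
}

// Call this when the websocket closes so the timer stops and can be re-created later.
public void stopTrend(CustomLuluSession session) {
    Timer timer = trendTimers.remove(session);
    if (timer != null) {
        timer.cancel();
    }
}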

Related

Storm SQS messages not getting acked

I have a topology with 1 spout reading from 2 SQS queues and 5 bolts. After processing, when I try to ack from the second bolt, the message is not getting acked.
I'm running in reliable mode and trying to ack in the last bolt. I get this log message as if the messages were getting acked, but they are not deleted from the queue and the overridden ack() methods are never called. It looks like the default ack method in backtype.storm.task.OutputCollector is called instead of the overridden method in my spout.
8240 [Thread-24-conversionBolt] INFO backtype.storm.daemon.task - Emitting: conversionBolt__ack_ack [-7578372739434961741 -8189877254603774958]
I have anchored the message ID to the tuple in my SQS queue spout and emit it to the first bolt:
collector.emit(getStreamId(message), new Values(jsonObj.toString()), message.getReceiptHandle());
I have the ack() and fail() methods overridden in my queue spout. The default visibility timeout has been set to 30 seconds.
Code snippet from my topology:
TopologyBuilder builder = new TopologyBuilder();

builder.setSpout("firstQueueSpout",
        new SqsQueueSpout(StormConfigurations.getQueueURL()
                + StormConfigurations.getFirstQueueName(), true),
        StormConfigurations.getAwsQueueSpoutThreads());

builder.setSpout("secondQueueSpout",
        new SqsQueueSpout(StormConfigurations.getQueueURL()
                + StormConfigurations.getSecondQueueName(), true),
        StormConfigurations.getAwsQueueSpoutThreads());

builder.setBolt("transformerBolt", new TransformerBolt(),
        StormConfigurations.getTranformerBoltThreads())
        .shuffleGrouping("firstQueueSpout")
        .shuffleGrouping("secondQueueSpout");

builder.setBolt("conversionBolt", new ConversionBolt(),
        StormConfigurations.getTranformerBoltThreads())
        .shuffleGrouping("transformerBolt");

// To dispatch it to the corresponding bolts based on packet type
builder.setBolt("dispatchBolt", new DispatcherBolt(),
        StormConfigurations.getDispatcherBoltThreads())
        .shuffleGrouping("conversionBolt");
Code snippet from SqsQueueSpout (extends BaseRichSpout):
@Override
public void nextTuple()
{
    if (queue.isEmpty()) {
        ReceiveMessageResult receiveMessageResult = sqs.receiveMessage(
                new ReceiveMessageRequest(queueUrl).withMaxNumberOfMessages(10));
        queue.addAll(receiveMessageResult.getMessages());
    }
    Message message = queue.poll();
    if (message != null)
    {
        try
        {
            JSONParser parser = new JSONParser();
            JSONObject jsonObj = (JSONObject) parser.parse(message.getBody());
            // ack(message.getReceiptHandle());
            if (reliable) {
                collector.emit(getStreamId(message), new Values(jsonObj.toString()), message.getReceiptHandle());
            } else {
                // Delete it right away
                sqs.deleteMessageAsync(new DeleteMessageRequest(queueUrl, message.getReceiptHandle()));
                collector.emit(getStreamId(message), new Values(jsonObj.toString()));
            }
        }
        catch (ParseException e)
        {
            LOG.error("SqsQueueSpout ParseException in SqsQueueSpout.nextTuple(): ", e);
        }
    } else {
        // Still empty, go to sleep.
        Utils.sleep(sleepTime);
    }
}
public String getStreamId(Message message) {
return Utils.DEFAULT_STREAM_ID;
}
public int getSleepTime() {
return sleepTime;
}
public void setSleepTime(int sleepTime)
{
this.sleepTime = sleepTime;
}
@Override
public void ack(Object msgId) {
System.out.println("......Inside ack in sqsQueueSpout..............."+msgId);
// Only called in reliable mode.
try {
sqs.deleteMessageAsync(new DeleteMessageRequest(queueUrl, (String) msgId));
} catch (AmazonClientException ace) { }
}
@Override
public void fail(Object msgId) {
// Only called in reliable mode.
try {
sqs.changeMessageVisibilityAsync(
new ChangeMessageVisibilityRequest(queueUrl, (String) msgId, 0));
} catch (AmazonClientException ace) { }
}
@Override
public void close() {
sqs.shutdown();
((AmazonSQSAsyncClient) sqs).getExecutorService().shutdownNow();
}
public void declareOutputFields(OutputFieldsDeclarer declarer) {
declarer.declare(new Fields("message"));
}
Code snippet from my first bolt (extends BaseRichBolt):
public class TransformerBolt extends BaseRichBolt
{
    private static final long serialVersionUID = 1L;
    public static final Logger LOG = LoggerFactory.getLogger(TransformerBolt.class);
    private OutputCollector collector;

    @Override
    public void prepare(Map stormConf, TopologyContext context,
            OutputCollector collector) {
        this.collector = collector;
    }

    @Override
    public void execute(Tuple input) {
        try {
            String eventStr = input.getString(0);
            // some code here to convert the JSON string to a map;
            // Map dataMap and long packetId are sent to the next bolt
            this.collector.emit(input, new Values(dataMap, packetId));
        } catch (Exception e) {
            LOG.warn("Exception while converting AWS SQS message to HashMap: {}", e);
        }
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("dataMap", "packetId"));
    }
}
Code snippet from second Bolt:
public class ConversionBolt extends BaseRichBolt
{
    private static final long serialVersionUID = 1L;
    private OutputCollector collector;

    @Override
    public void prepare(Map stormConf, TopologyContext context,
            OutputCollector collector) {
        this.collector = collector;
    }

    @Override
    public void execute(Tuple input)
    {
        try {
            Map dataMap = (Map) input.getValue(0);
            Long packetId = (Long) input.getValue(1);
            // this ack is not working
            this.collector.ack(input);
        } catch (Exception e) {
            this.collector.fail(input);
        }
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
    }
}
Kindly let me know if you need more information. Could somebody shed some light on why the overridden ack in my spout is not getting called (from my second bolt)?
You must ack all incoming tuples in all bolts, i.e., add collector.ack(input) to TransformerBolt.execute(Tuple input).
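A minimal sketch of that fix, assuming the TransformerBolt from the question (only the ack/fail calls are new):

@Override
public void execute(Tuple input) {
    try {
        String eventStr = input.getString(0);
        // ... convert the JSON string to dataMap / packetId as before ...
        this.collector.emit(input, new Values(dataMap, packetId));
        this.collector.ack(input);   // ack the incoming tuple so the ackers can complete the tuple tree
    } catch (Exception e) {
        LOG.warn("Exception while converting AWS SQS message to HashMap: {}", e);
        this.collector.fail(input);  // optionally fail it so the spout's fail() makes the message visible again
    }
}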
The log message you see is correct: your code calls collector.ack(...) and this call gets logged. However, a call to ack in a bolt is not a call to Spout.ack(...). Each time a spout emits a tuple with a message ID, that ID gets registered by the running ackers of your topology. The ackers receive a message on each ack from a bolt, collect them, and notify the spout once all acks for a tuple have been received. Only when the spout receives this message from an acker does it call its own ack(Object messageId) method.
See here for more details: https://storm.apache.org/documentation/Guaranteeing-message-processing.html

RxJava cache last item for future subscribers

I have implemented a simple RxEventBus which starts emitting events even if there are no subscribers. I want to cache the last emitted event, so that when the first/next subscriber subscribes, it receives only one (the last) item.
I created a test class which describes my problem:
public class RxBus {
ApplicationsRxEventBus applicationsRxEventBus;
public RxBus() {
applicationsRxEventBus = new ApplicationsRxEventBus();
}
public static void main(String[] args) {
RxBus rxBus = new RxBus();
rxBus.start();
}
private void start() {
ExecutorService executorService = Executors.newScheduledThreadPool(2);
Runnable runnable0 = () -> {
while (true) {
long currentTime = System.currentTimeMillis();
System.out.println("emiting: " + currentTime);
applicationsRxEventBus.emit(new ApplicationsEvent(currentTime));
try {
Thread.sleep(500);
} catch (InterruptedException e) {
e.printStackTrace();
}
}
};
Runnable runnable1 = () -> applicationsRxEventBus
.getBus()
.subscribe(new Subscriber<ApplicationsEvent>() {
@Override
public void onCompleted() {
}
@Override
public void onError(Throwable throwable) {
}
@Override
public void onNext(ApplicationsEvent applicationsEvent) {
System.out.println("runnable 1: " + applicationsEvent.number);
}
});
Runnable runnable2 = () -> applicationsRxEventBus
.getBus()
.subscribe(new Subscriber<ApplicationsEvent>() {
@Override
public void onCompleted() {
}
@Override
public void onError(Throwable throwable) {
}
@Override
public void onNext(ApplicationsEvent applicationsEvent) {
System.out.println("runnable 2: " + applicationsEvent.number);
}
});
executorService.execute(runnable0);
try {
Thread.sleep(3000);
} catch (InterruptedException e) {
e.printStackTrace();
}
executorService.execute(runnable1);
try {
Thread.sleep(3000);
} catch (InterruptedException e) {
e.printStackTrace();
}
executorService.execute(runnable2);
}
private class ApplicationsRxEventBus {
private final Subject<ApplicationsEvent, ApplicationsEvent> mRxBus;
private final Observable<ApplicationsEvent> mBusObservable;
public ApplicationsRxEventBus() {
mRxBus = new SerializedSubject<>(BehaviorSubject.<ApplicationsEvent>create());
mBusObservable = mRxBus.cache();
}
public void emit(ApplicationsEvent event) {
mRxBus.onNext(event);
}
public Observable<ApplicationsEvent> getBus() {
return mBusObservable;
}
}
private class ApplicationsEvent {
long number;
public ApplicationsEvent(long number) {
this.number = number;
}
}
}
runnable0 is emitting events even if there are no subscribers. runnable1 subscribes after 3 seconds and receives the last item (which is OK). But runnable2 subscribes 3 seconds after runnable1 and receives all the items that runnable1 received. I only need the last item to be delivered to runnable2. I have tried caching events in RxBus:
private class ApplicationsRxEventBus {
private final Subject<ApplicationsEvent, ApplicationsEvent> mRxBus;
private final Observable<ApplicationsEvent> mBusObservable;
private ApplicationsEvent event;
public ApplicationsRxEventBus() {
mRxBus = new SerializedSubject<>(BehaviorSubject.<ApplicationsEvent>create());
mBusObservable = mRxBus;
}
public void emit(ApplicationsEvent event) {
this.event = event;
mRxBus.onNext(event);
}
public Observable<ApplicationsEvent> getBus() {
return mBusObservable.doOnSubscribe(() -> emit(event));
}
}
But the problem is that when runnable2 subscribes, runnable1 receives the event twice:
emiting: 1447183225122
runnable 1: 1447183225122
runnable 1: 1447183225122
runnable 2: 1447183225122
emiting: 1447183225627
runnable 1: 1447183225627
runnable 2: 1447183225627
I am sure that there is an RxJava operator for this. How can I achieve it?
Your ApplicationsRxEventBus does extra work by re-emitting a stored event whenever someone subscribes, in addition to all the cached events.
You only need a single BehaviorSubject + toSerialized, as it holds onto the very last event and re-emits it to new Subscribers by itself.
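A minimal sketch of that change, keeping the question's class but dropping cache() and returning the serialized BehaviorSubject itself as the bus:

private class ApplicationsRxEventBus {
    // BehaviorSubject replays only the latest event to each new subscriber;
    // toSerialized() makes onNext safe to call from multiple threads.
    private final Subject<ApplicationsEvent, ApplicationsEvent> mRxBus =
            BehaviorSubject.<ApplicationsEvent>create().toSerialized();

    public void emit(ApplicationsEvent event) {
        mRxBus.onNext(event);
    }

    public Observable<ApplicationsEvent> getBus() {
        return mRxBus; // no cache(), no doOnSubscribe re-emit
    }
}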
You are using the wrong interface. When you subscribe to a cold Observable you get all of its events. You need to turn it into a hot Observable first. This is done by creating a ConnectableObservable from your Observable using its publish method. Your Observers then call connect to start receiving events.
You can also read more about this in the Hot and Cold observables section of the tutorial.
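For reference, a small sketch of the publish()/connect() pattern described above (RxJava 1.x); note that, unlike a BehaviorSubject, a plain ConnectableObservable does not replay the last item to late subscribers:

// Sketch: a cold source made hot with publish()/connect()
Observable<Long> cold = Observable.interval(500, TimeUnit.MILLISECONDS);
ConnectableObservable<Long> hot = cold.publish();
hot.connect();                                   // the source starts emitting here
// subscribers added later only see items emitted after they subscribe
hot.subscribe(n -> System.out.println("subscriber 1: " + n));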

How to get active websocket sessions in tomcat programmatically?

I am running Tomcat 8 to terminate WebSocket connections. I want to get all active WebSocket sessions, say for an endpoint. I know that if you have one session object you can call getOpenSessions() to get all sessions, but the point is that I do not have access to a session object at the place in code where I need to get all sessions.
Wow, no answers after nearly a year! I stumbled across this while searching for clues to an issue I'm having on Tomcat 8 with getOpenSessions(). For my problem, I ended up doing the below, which would also solve this issue. In general, just keep a static map that you populate in open and remove from in close:
@ServerEndpoint(value="/msg/{owner}", encoders=MessageEncoder.class, decoders=MessageEncoder.class)
public class WebSocketListener {
private static final Logger logger = LoggerFactory.getLogger(WebSocketListener.class);
// use a thread-safe map: sessions are added and removed from multiple container threads
private static Map<String, Session> sessions = new ConcurrentHashMap<String, Session>();
public WebSocketListener() {
System.out.println("created");
}
@OnOpen
public void open(Session session, @PathParam("owner") String owner) {
System.out.println("open "+owner);
sessions.put(session.getId(), session);
session.getUserProperties().put("owner", owner);
System.out.println("open");
if (session.getUserPrincipal() != null) {
session.getUserProperties().put("owner", owner);
}
else {
try {
session.close(new CloseReason(CloseReason.CloseCodes.CANNOT_ACCEPT, "Not authorized"));
} catch (IOException e) {
}
}
}
@OnClose
public void close(Session session) {
System.out.println("close");
sessions.remove(session.getId());
}
@OnError
public void onError(Throwable error) {
logger.error("",error);
}
@OnMessage
public void onMessage(final Session session, final Message message) {
System.out.println("onMessage");
String owner = (String)session.getUserProperties().get("owner");
Long appId = message.getAppId();
for (Session s:sessions.values()) {
System.out.println(s);
if (s.isOpen() && (message.isEcho() || s != session) && owner.equals(s.getUserProperties().get("owner")) && (appId == null || appId.equals(s.getUserProperties().get("appId")))) {
s.getAsyncRemote().sendObject(message);
}
}
}
}

Change Android UI according to a background thread's results

I'm developing an Android app that needs to make UI changes according to the results of processing on a background thread. I tried the following code at first:
Thread run_time = new Thread (){
public void run(){
ConnectToServer connect = new ConnectToServer(null);
while(true){
String server_response = connect.getServerResponse();
if(!server_response.equals(null)){
setResponse(server_response);
response_received();
}
}
}
};
run_time.start();
but my app crashes because I tried to make UI changes from that background thread. Then I tried this way:
runOnUiThread(new Runnable() {
public void run(){
ConnectToServer connect = new ConnectToServer(null);
while(true){
String server_response = connect.getServerResponse();
if(!server_response.equals(null)){
setResponse(server_response);
response_received();
}
}
}
});
but I got this exception:
01-29 16:42:17.045: ERROR/AndroidRuntime(605): android.os.NetworkOnMainThreadException
01-29 16:42:17.045: ERROR/AndroidRuntime(605): at android.os.StrictMode$AndroidBlockGuardPolicy.onNetwork(StrictMode.java:1084)
01-29 16:42:17.045: ERROR/AndroidRuntime(605): at libcore.io.BlockGuardOs.recvfrom(BlockGuardOs.java:151)
01-29 16:42:17.045: ERROR/AndroidRuntime(605): at libcore.io.IoBridge.recvfrom(IoBridge.java:503)
01-29 16:42:17.045: ERROR/AndroidRuntime(605): at java.net.PlainSocketImpl.read(PlainSocketImpl.java:488)
01-29 16:42:17.045: ERROR/AndroidRuntime(605): at java.net.PlainSocketImpl.access$000(PlainSocketImpl.java:46)
After searching I found that I should run the code as an AsyncTask to avoid these problems, but when attempting to use it I found that it is meant for short tasks only, not for a thread that runs in the background for the whole runtime.
So, what's the best way to run a thread or a task in the background for the whole runtime and also reflect its changes in the UI?
EDIT:
For long-running network work you have a few options.
First and foremost, check the Android docs on this topic:
http://developer.android.com/training/basics/network-ops/index.html
Next, I generally use Services for this type of thing:
https://developer.android.com/guide/components/services.html
I will point you at the vogella tutorial for this as well:
http://www.vogella.com/tutorials/AndroidServices/article.html
For communication from threads/AsyncTasks/Services to the UI, use Handlers:
static public class MyThread extends Thread {
    @Override
    public void run() {
        try {
            // Simulate a slow network
            try {
                Thread.sleep(5000); // sleep() is static; calling it on a new Thread instance is misleading
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            downloadBitmap = downloadBitmap("http://www.devoxx.com/download/attachments/4751369/DV11");
            // Updates the user interface
            handler.sendEmptyMessage(0);
        } catch (IOException e) {
            e.printStackTrace();
        } finally {
        }
    }
}
handler = new Handler() {
    @Override
    public void handleMessage(Message msg) {
        // call UI methods here...
        imageView.setImageBitmap(downloadBitmap);
        // dialog.dismiss();
    }
};
Taken from this tutorial:
http://www.vogella.com/tutorials/AndroidBackgroundProcessing/article.html
You can make this more interesting by defining constant codes that correspond to the desired action:
// these must be compile-time constants to be usable as switch case labels
private static final int DO_THIS = 0x0;
private static final int DO_THAT = 0x1;

// in your UI:
public void handleMessage(Message msg) {
    // call UI methods here...
    switch (msg.what) {   // 'what' is a field on Message, not a method
        case DO_THIS:
            // do stuff
            break;
        case DO_THAT:
            // do other stuff
            break;
    }
}

// in your thread:
Message m = handler.obtainMessage(DO_THIS);
handler.sendMessage(m);
If the thread code (AsyncTask, Service, etc.) is separate from the UI, you can use broadcasts to pass the data between the two and then use Handlers from there to act on the UI thread; a sketch of the broadcast approach follows below.
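A minimal sketch of that broadcast approach, assuming the support library's LocalBroadcastManager; the action string, the extra key, and the setResponse()/response_received() calls from the question are used purely for illustration:

// In the background thread / Service / AsyncTask: publish the result as a local broadcast.
Intent intent = new Intent("com.example.SERVER_RESPONSE");   // hypothetical action name
intent.putExtra("response", server_response);
LocalBroadcastManager.getInstance(context).sendBroadcast(intent);

// In the Activity: receive it on the main thread and update the UI safely.
private final BroadcastReceiver receiver = new BroadcastReceiver() {
    @Override
    public void onReceive(Context context, Intent intent) {
        setResponse(intent.getStringExtra("response"));
        response_received();
    }
};

@Override
protected void onResume() {
    super.onResume();
    LocalBroadcastManager.getInstance(this)
            .registerReceiver(receiver, new IntentFilter("com.example.SERVER_RESPONSE"));
}

@Override
protected void onPause() {
    LocalBroadcastManager.getInstance(this).unregisterReceiver(receiver);
    super.onPause();
}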
You need to use Handlers.
Here is the documentation: https://developer.android.com/training/multiple-threads/communicate-ui.html
Use this code - it may contain compile-time errors that you will have to correct:
public class MainActivity extends Activity {
@Override
public void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
Connect connect = new Connect();
connect.execute();
}
class Connect extends AsyncTask<Void, String, Void>
{
@Override
protected Void doInBackground(Void... params)
{
    ConnectToServer connect = new ConnectToServer(null);
    // loop until the task is cancelled; an unconditional while(true) would make
    // the return statement below unreachable and fail to compile
    while (!isCancelled())
    {
        String server_response = connect.getServerResponse();
        if (server_response != null)
        {
            publishProgress(server_response);
            //setResponse(server_response);
            response_received();
        }
    }
    return null;
}
@Override
protected void onProgressUpdate(String... values) {
super.onProgressUpdate(values);
setResponse(values[0]);
}
@Override
protected void onPostExecute(Void result) {
super.onPostExecute(result);
}
}
}
You need "handlers" along with "loopers" for optimization
Example:
public void myMethod(){
Thread background = new Thread(new Runnable(){
@Override
public void run(){
Looper.prepare();
//Do your server process here
Runnable r=new Runnable() {
@Override
public void run() {
//update your UI from here
}
};
handler.post(r);
Looper.loop();
}
});
background.start();
}
And of course this is without using AsyncTask

storm processing data extremely slow

We have 1 spout and 1 bolt on a single node. The spout reads data from RabbitMQ and emits it to the only bolt, which writes the data to Cassandra.
Our data source generates 10000 messages per second, and Storm takes around 10 seconds to process them, which is too slow for us.
We tried increasing the parallelism of the topology, but that doesn't make any difference.
What is the ideal number of messages that can be processed on a single-node machine with 1 spout and 1 bolt? And what are the possible ways to increase the processing speed of a Storm topology?
Update:
This is sample code; it doesn't include the RabbitMQ and Cassandra code, but it shows the same performance issue.
// Topology Class
public class SimpleTopology {
public static void main(String[] args) throws InterruptedException {
System.out.println("hiiiiiiiiiii");
TopologyBuilder topologyBuilder = new TopologyBuilder();
topologyBuilder.setSpout("SimpleSpout", new SimpleSpout());
topologyBuilder.setBolt("SimpleBolt", new SimpleBolt(), 2).setNumTasks(4).shuffleGrouping("SimpleSpout");
Config config = new Config();
config.setDebug(true);
config.setNumWorkers(2);
LocalCluster localCluster = new LocalCluster();
localCluster.submitTopology("SimpleTopology", config, topologyBuilder.createTopology());
Thread.sleep(2000);
}
}
// Simple Bolt
public class SimpleBolt implements IRichBolt{
private OutputCollector outputCollector;
public void prepare(Map map, TopologyContext tc, OutputCollector oc) {
this.outputCollector = oc;
}
public void execute(Tuple tuple) {
this.outputCollector.ack(tuple);
}
public void cleanup() {
// TODO
}
public void declareOutputFields(OutputFieldsDeclarer ofd) {
// TODO
}
public Map<String, Object> getComponentConfiguration() {
return null;
}
}
// Simple Spout
public class SimpleSpout implements IRichSpout{
private SpoutOutputCollector spoutOutputCollector;
private boolean completed = false;
private static int i = 0;
public void open(Map map, TopologyContext tc, SpoutOutputCollector soc) {
this.spoutOutputCollector = soc;
}
public void close() {
// Todo
}
public void activate() {
// Todo
}
public void deactivate() {
// Todo
}
public void nextTuple() {
if(!completed)
{
if(i < 100000)
{
String item = "Tag" + Integer.toString(i++);
System.out.println(item);
this.spoutOutputCollector.emit(new Values(item), item);
}
else
{
completed = true;
}
}
else
{
try {
Thread.sleep(2000);
} catch (InterruptedException ex) {
Logger.getLogger(SimpleSpout.class.getName()).log(Level.SEVERE, null, ex);
}
}
}
public void ack(Object o) {
System.out.println("\n\n OK : " + o);
}
public void fail(Object o) {
System.out.println("\n\n Fail : " + o);
}
public void declareOutputFields(OutputFieldsDeclarer ofd) {
ofd.declare(new Fields("word"));
}
public Map<String, Object> getComponentConfiguration() {
return null;
}
}
Update:
Is it possible that with shuffle grouping the same tuple will be processed more than once? With the configuration used (spouts = 4, bolts = 4), the problem now is that the performance decreases as the number of bolts increases.
You should find out what the bottleneck is here: RabbitMQ or Cassandra. Open the Storm UI and take a look at the latency times for each component.
If increasing the parallelism didn't help (it normally should), there's definitely a problem with RabbitMQ or Cassandra, so you should focus on them.
In your code you only emit one tuple per call to nextTuple(). Try emitting more tuples per call, something like:
public void nextTuple() {
    int max = 1000;
    int count = 0;
    GetResponse response = channel.basicGet(queueName, autoAck);
    while ((response != null) && (count < max)) {
        // process the message into 'item' here
        spoutOutputCollector.emit(new Values(item), item);
        count++;
        response = channel.basicGet(queueName, autoAck);
    }
    try { Thread.sleep(2000); } catch (InterruptedException ex) {
    }
}
We are successfully using RabbitMQ and Storm. The result gets stored in a different DB, but anyway. We first used basic_get in the spout and had terrible performance, but then we switched to basic_consume, and performance is actually very good. So take a look at how you are consuming messages from Rabbit; a sketch follows below.
Some important factors:
basic_consume instead of basic_get
prefetch_count (make it high enough)
If you want to increase performance and you don't care about losing messages: do not ack messages and set delivery_mode to 1.
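A rough sketch of the basic_consume approach, assuming the RabbitMQ Java client and the channel/queueName/collector fields of a spout like the one above; deliveries are acked to RabbitMQ immediately here, so Storm-level replays would not redeliver them:

// Sketch: consume with basicConsume + a prefetch count instead of basicGet,
// buffering deliveries so nextTuple() can emit them.
private final LinkedBlockingQueue<String> buffer = new LinkedBlockingQueue<String>();

public void open(Map conf, TopologyContext context, SpoutOutputCollector collector) {
    this.collector = collector;
    try {
        channel.basicQos(500);                       // prefetch_count: keeps the consumer pipeline full
        channel.basicConsume(queueName, false, new DefaultConsumer(channel) {
            @Override
            public void handleDelivery(String consumerTag, Envelope envelope,
                                       AMQP.BasicProperties properties, byte[] body) throws IOException {
                buffer.offer(new String(body, StandardCharsets.UTF_8));
                channel.basicAck(envelope.getDeliveryTag(), false); // ack to RabbitMQ right away
            }
        });
    } catch (IOException e) {
        throw new RuntimeException(e);
    }
}

public void nextTuple() {
    String item = buffer.poll();                     // non-blocking; Storm calls nextTuple() in a loop
    if (item != null) {
        collector.emit(new Values(item));
    }
}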
