I'm developing a simple Spring MVC application to download tweets from the streaming API and show them on a webpage. Users of the application can submit a Task with the keywords of the tweets they want to download. These tasks are shared, so anyone can start, stop, modify, or cancel a task.
TwitterFetcher is the class responsible for downloading tweets. This class receives a Task and persists all downloaded tweets to a database.
@Service
public class TwitterFetcher {

    @Autowired
    private OAuthService oAuthService;

    @Autowired
    private TweetService tweetService;

    private Task task;
    private TwitterStream twitterStream;
    public void start(Task task) {
        /* Stop the previous stream */
        stop();

        /* Get OAuth credentials */
        OAuth oAuth = oAuthService.findOneEnabled();

        if (oAuth == null) {
            /* No enabled OAuth credentials found; nothing to start */
        } else {
            this.task = task;
            Configuration oAuthConfiguration = getOAuthConfiguration(oAuth);
            twitterStream = new TwitterStreamFactory(oAuthConfiguration).getInstance();
            twitterStream.addListener(new TwitterListener());
            String keywords = task.getBaseKeywords() + ", " + task.getExpandedKeywords();
            FilterQuery filterQuery = new FilterQuery();
            filterQuery.track(keywords.split(", "));
            twitterStream.filter(filterQuery);
        }
    }
    public void stop() {
        if (twitterStream != null) {
            twitterStream.shutdown();
        }
    }
    private Configuration getOAuthConfiguration(OAuth oAuth) {
        ConfigurationBuilder cb = new ConfigurationBuilder();
        cb.setDebugEnabled(false);
        cb.setJSONStoreEnabled(true);
        cb.setOAuthAccessToken(oAuth.getAccessToken());
        cb.setOAuthAccessTokenSecret(oAuth.getAccessTokenSecret());
        cb.setOAuthConsumerKey(oAuth.getConsumerKey());
        cb.setOAuthConsumerSecret(oAuth.getConsumerSecret());
        return cb.build();
    }
    private class TwitterListener implements StatusListener {

        @Override
        public void onStatus(Status status) {
            /* Persist the new tweet */
            Tweet tweet = new Tweet();
            tweet.setJson(DataObjectFactory.getRawJSON(status));
            tweetService.save(tweet);
        }

        [Omitted code]
    }
}
The basic functionality would be as follows:
A user starts the fetcher from the website.
The fetcher receives a new tweet and saves it in the DB.
The fetcher keeps receiving tweets until a user stops it.
The application has a dashboard to control the fetchers and the tasks, and users must be able to interact with it while the fetcher is downloading.
My question is: would the fetcher block the app, or would it be executed in a different thread? In the worst case, what do I have to change to solve this? I'm still far from a usable app, so I can't test it yet. Even so, I want to fix it now if possible.
You can use an ExecutorService to run the fetcher in a separate thread. I'd recommend using a thread pool so you don't degrade performance if too many users run the fetcher:
ExecutorService executor = Executors.newFixedThreadPool(maxThreads);
When a task is submitted through the executor, it returns a Future object which you can use to check for job completion:
Future f = executor.submit(myTask);
boolean isDone = f.isDone();
Please read more about Java concurrency if you're not familiar: http://docs.oracle.com/javase/tutorial/essential/concurrency/index.html
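As a minimal sketch of how the fetcher from the question could be handed off to the pool (the wrapper class and the pool size of 4 are illustrative assumptions, not part of your code):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class FetcherRunner {

    // Small fixed pool so concurrent users can't exhaust server threads
    private final ExecutorService executor = Executors.newFixedThreadPool(4);

    public Future<?> startFetcher(final TwitterFetcher fetcher, final Task task) {
        // Runs on a pool thread, so the web request can return immediately
        return executor.submit(() -> fetcher.start(task));
    }
}

The returned Future can then be kept (for example, keyed by task ID) so a later request can check isDone() on it.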
Annotate your start() method with @Async.
@Async
public void start(Task task)
This will make the start method asynchronous, so it will not block the application.
You can check out a simple example here.
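One caveat: @Async only takes effect when Spring's async support is switched on and the method is invoked through the Spring proxy (i.e., from another bean, not via this). A minimal configuration sketch, assuming Java config:

import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.annotation.EnableAsync;

@Configuration
@EnableAsync
public class AsyncConfig {
    // Enables detection of @Async methods; without this they run synchronously
}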
Related
I have researched a lot about ways to send scheduled emails from a .NET Core Web API using background tasks. I know it would be better to implement the background tasks in a Windows service that runs separately from the app domain.
But my requirement is this: in the web client I will have a table where each row is a promotion event for a customer. I can choose to activate, pause, or stop each of them, which then makes a call to the API.
From there I have to implement a separate background task for each of them. I have to do this in the Web API because the end users don't have anywhere to host a separate service.
Actual solution:
After one day I came up with a solution, which is to use IHostedService with a BlockingCollection to control the background tasks at runtime, as below:
Code for background task using IHostedService:
namespace SimCard.API.Worker
{
    internal class TimedHostedService : IHostedService, IDisposable
    {
        private CancellationTokenSource _tokenSource;
        private readonly ILogger _logger;
        private Timer _timer;
        private readonly TasksToRun tasks;
        private readonly IEmailService emailService;

        public TimedHostedService(ILogger<TimedHostedService> logger, TasksToRun tasks, IEmailService emailService)
        {
            this.emailService = emailService;
            this.tasks = tasks;
            _logger = logger;
        }
        public Task StartAsync(CancellationToken cancellationToken)
        {
            // NOTE: Dequeue() calls BlockingCollection.Take(), which blocks
            // until an item is available, so StartAsync will not complete
            // until something has been enqueued
            tasks.Dequeue();
            _logger.LogInformation("Timed Background Service is starting.");
            _timer = new Timer(DoWork, null, TimeSpan.Zero,
                TimeSpan.FromSeconds(5));
            return Task.CompletedTask;
        }
        private void DoWork(object state)
        {
            emailService.SendEmail("ptkhuong96@gmail.com", "Test", "OK, Done now");
            _logger.LogInformation("Mail sent!");
        }
        public Task StopAsync(CancellationToken cancellationToken)
        {
            _logger.LogInformation("Timed Background Service is stopping.");
            _timer?.Change(Timeout.Infinite, 0);
            return Task.CompletedTask;
        }

        public void Dispose()
        {
            _timer?.Dispose();
        }
    }
}
Here is the code for the BlockingCollection wrapper:
namespace SimCard.API.Worker
{
    public class TasksToRun : ITasksToRun
    {
        private readonly BlockingCollection<int> _tasks;

        public TasksToRun() => _tasks = new BlockingCollection<int>();

        public void Enqueue(int settings) => _tasks.Add(settings);

        public void Dequeue() => _tasks.Take();
    }
}
And the code in the controller that gets called from the web client:
[HttpPost("/api/worker/start")]
public IActionResult Run()
{
    tasks.Enqueue(15);
    return Ok();
}
Code for Startup.cs:
services.AddHostedService<TimedHostedService>();
services.AddSingleton<TasksToRun, TasksToRun>();
Issue:
After clicking the active button for the first event, the controller gets called and one instance of this background task runs. How do I pause that task and resume it?
If the first issue is solved, how can I create a separate background task for each event in the table? Considering that I may create more and more events in the future, how can one event be activated, stopped, paused, or resumed without affecting the others?
I'm really stuck on this requirement and don't know how to proceed. If you have a different approach that fits my case, please recommend it.
Thank you very much for your support.
The code below is a Web API that prints on behalf of a SPA. For brevity I've omitted using statements and the actual printing logic; that stuff all works fine. The point of interest is the refactoring of the printing logic onto a background thread, with the Web API method enqueuing a job. I did this because print jobs sent in quick succession were interfering with each other, with only the last job printing.
It solves the problem of serialising print jobs but raises the question of how to detect shutdown and signal the loop to terminate.
namespace WebPrint.Controllers
{
    public class LabelController : ApiController
    {
        static readonly ConcurrentQueue<PrintJob> queue = new ConcurrentQueue<PrintJob>();
        static bool running = true;
        static LabelController()
        {
            ThreadPool.QueueUserWorkItem((state) => {
                while (running)
                {
                    Thread.Sleep(30);
                    if (queue.TryDequeue(out PrintJob job))
                    {
                        // 'this' is unavailable in a static context; Print is
                        // assumed to be the omitted static printing routine
                        Print(job);
                    }
                }
            });
        }
        public void Post([FromBody]PrintJob job)
        {
            queue.Enqueue(job);
        }
    }

    public class PrintJob
    {
        public string url { get; set; }
        public string html { get; set; }
        public string printer { get; set; }
    }
}
Given the way I acquire a thread to service the print queue, it is almost certainly marked as a background thread and should terminate when the app pool tries to exit, but I am not certain of this, and so I ask you, dear readers, for your collective notion of best practice in such a scenario.
Well, I did ask for best practice.
Nevertheless, I don't have long-running background tasks, I have short-running tasks. They arrive asynchronously on different threads, but must be executed serially and on a single thread because the WinForms printing methods are designed for STA threading.
Matt Lethargic's point about possible job loss is certainly a consideration, but for this case it doesn't matter. Jobs are never queued for more than a few seconds and loss would merely prompt operator retry.
For that matter, using a message queue doesn't solve the problem of "what if someone shuts it down while it's being used" it merely moves it to another piece of software. A lot of message queues aren't persistent, and you wouldn't believe the number of times I've seen someone use MSMQ to solve this problem and then fail to configure it for persistence.
This has been very interesting.
http://thecodelesscode.com/case/156
I would look at your architecture at a higher level; doing 'long running tasks' such as printing should probably live outside of your Web API process entirely.
If this were me, I would:
Create a Windows service (or what have you) that has all the printing logic in it; the job of the controller is then just to talk to the service, either by HTTP or by some kind of queue (MSMQ, RabbitMQ, ServiceBus, etc.).
If via HTTP, the service should internally queue up the print jobs and return 200/201 to the controller as soon as possible (before printing happens) so that the controller can return to the client efficiently and release its resources.
If via a queuing technology, the controller should place a message on the queue and again return 200/201 as quickly as possible; the service can then read the messages at its own rate and print one at a time.
Doing it this way removes overhead from your API and also the possibility of losing print jobs in the case of a failure in the Web API (if the API crashes, any background threads may/will be affected). Also, if you do a deployment at the moment someone is printing, there's a high chance the print job will fail.
My 2 cents worth
I believe that the desired behavior is not something that should be done within a Controller.
public interface IPrintAgent {
    void Enqueue(PrintJob job);
    void Cancel();
}
The above abstraction can be implemented and injected into the controller using the framework's IDependencyResolver.
public class LabelController : ApiController {
    private IPrintAgent agent;

    public LabelController(IPrintAgent agent) {
        this.agent = agent;
    }

    [HttpPost]
    public IHttpActionResult Post([FromBody]PrintJob job) {
        if (ModelState.IsValid) {
            agent.Enqueue(job);
            return Ok();
        }
        return BadRequest(ModelState);
    }
}
The sole job of the controller in the above scenario is to queue the job.
Now with that aspect out of the way I will focus on the main part of the question.
As already mentioned by others, there are many ways to achieve the desired behavior.
A simple in-memory implementation can look like this:
public class DefaultPrintAgent : IPrintAgent {
    static readonly ConcurrentQueue<PrintJob> queue = new ConcurrentQueue<PrintJob>();
    static object syncLock = new Object();
    static bool idle = true;
    static CancellationTokenSource cts = new CancellationTokenSource();

    static DefaultPrintAgent() {
        checkQueue += OnCheckQueue;
    }

    private static event EventHandler checkQueue = delegate { };

    private static async void OnCheckQueue(object sender, EventArgs args) {
        cts = new CancellationTokenSource();
        PrintJob job = null;
        while (!queue.IsEmpty && queue.TryDequeue(out job)) {
            await Print(job);
            if (cts.IsCancellationRequested) {
                break;
            }
        }
        idle = true;
    }

    public void Enqueue(PrintJob job) {
        queue.Enqueue(job);
        if (idle) {
            lock (syncLock) {
                if (idle) {
                    idle = false;
                    checkQueue(this, EventArgs.Empty);
                }
            }
        }
    }

    public void Cancel() {
        if (!cts.IsCancellationRequested)
            cts.Cancel();
    }

    static Task Print(PrintJob job) {
        //...print job
    }
}
which takes advantage of async event handlers to process the queue in sequence as jobs are added.
The Cancel method is provided so that the process can be short-circuited as needed, for example in the Application_End event, as suggested by another user:
var agent = new DefaultPrintAgent();
agent.Cancel();
or manually by exposing an endpoint if so desired.
As always, AEM has brought new challenges to my life. This time, I'm experiencing an issue where an EventListener that listens for ReplicationEvents works only sometimes, normally just the first few times after the service is restarted. After that, it stops running entirely.
The first line of the listener is a log line. If it were running, that would be clear. Here's a simplified example of the listener:
@Component(immediate = true, metatype = false)
@Service(value = EventHandler.class)
@Property(
        name = "event.topics", value = ReplicationEvent.EVENT_TOPIC
)
public class MyActivityReplicationListener implements EventHandler {

    @Reference
    private SlingRepository repository;

    @Reference
    private OnboardingInterface onboardingService;

    @Reference
    private QueryInterface queryInterface;

    private Logger log = LoggerFactory.getLogger(this.getClass());
    private Session session;

    @Override
    public void handleEvent(Event ev) {
        log.info(String.format("Starting %s", this.getClass()));
        // Business logic
        log.info(String.format("Finished %s", this.getClass()));
    }
}
Now before you panic that I haven't included the business logic, see my answer below. The main point of interest is that the business logic could take a few seconds.
While crawling through the second page of Google search results for an answer, I came across this article: a German article explaining that EventListeners that take more than 5 seconds to finish are silently quarantined by AEM, with no output.
It just so happens that this task might take longer than 5 seconds, as it's working off data that was originally quite small, but has grown (and this is in line with other symptoms).
I put in a change that makes the listener much more like the one in that article - that is, it uses a JobConsumer to asynchronously process the ReplicationEvent using a pub/sub model. Here's a simplified version of the new model (for AEM 6.3):
@Component(immediate = true, property = {
        EventConstants.EVENT_TOPIC + "=" + ReplicationEvent.EVENT_TOPIC,
        JobConsumer.PROPERTY_TOPICS + "=" + AsyncReplicationListener.JOB_TOPIC
})
public class AsyncReplicationListener implements EventHandler, JobConsumer {

    private static final String PROPERTY_EVENT = "event";
    static final String JOB_TOPIC = ReplicationEvent.EVENT_TOPIC;

    @Reference
    private JobManager jobManager;

    @Override
    public JobConsumer.JobResult process(Job job) {
        try {
            ReplicationEvent event = (ReplicationEvent) job.getProperty(PROPERTY_EVENT);
            // Slow business logic (>5 seconds)
        } catch (Exception e) {
            return JobResult.FAILED;
        }
        return JobResult.OK;
    }

    @Override
    public void handleEvent(Event event) {
        final Map<String, Object> payload = new HashMap<>();
        payload.put(PROPERTY_EVENT, ReplicationEvent.fromEvent(event));
        final Job addJobResult = jobManager.addJob(JOB_TOPIC, payload);
    }
}
You can see here that the EventListener passes off the ReplicationEvent wrapped up in a Job, which is then handled by the JobConsumer, which according to this magic article, is not subject to the 5 second rule.
Here is some official documentation on this time limit. Once I had the "5 seconds" key, I was able to find a bit more information, here and here, that talks about the 5-second limit as well. The first article uses a similar method to the above, and the second article shows a way to turn off these time limits.
The time limits can be disabled entirely (or increased) in the configMgr by setting the Timeout property to zero in the Apache Felix Event Admin Implementation configuration.
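For reference, a sketch of that configuration (hedged: the PID and property name are as I understand them from the Felix Event Admin documentation, so verify them against your AEM version):

# Configuration PID: org.apache.felix.eventadmin.impl.EventAdmin
# 0 disables the timeout-based blacklisting entirely (the default is 5000 ms)
org.apache.felix.eventadmin.Timeout=0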
I have a CXF client configured in my Spring Boot app like so:
@Bean
public ConsumerSupportService consumerSupportService() {
    JaxWsProxyFactoryBean jaxWsProxyFactoryBean = new JaxWsProxyFactoryBean();
    jaxWsProxyFactoryBean.setServiceClass(ConsumerSupportService.class);
    jaxWsProxyFactoryBean.setAddress("https://www.someservice.com/service?wsdl");
    jaxWsProxyFactoryBean.setBindingId(SOAPBinding.SOAP12HTTP_BINDING);

    WSAddressingFeature wsAddressingFeature = new WSAddressingFeature();
    wsAddressingFeature.setAddressingRequired(true);
    jaxWsProxyFactoryBean.getFeatures().add(wsAddressingFeature);

    ConsumerSupportService service = (ConsumerSupportService) jaxWsProxyFactoryBean.create();
    Client client = ClientProxy.getClient(service);

    AddressingProperties addressingProperties = new AddressingProperties();
    AttributedURIType to = new AttributedURIType();
    to.setValue(applicationProperties.getWex().getServices().getConsumersupport().getTo());
    addressingProperties.setTo(to);

    AttributedURIType action = new AttributedURIType();
    action.setValue("http://serviceaction/SearchConsumer");
    addressingProperties.setAction(action);
    client.getRequestContext().put("javax.xml.ws.addressing.context", addressingProperties);

    setClientTimeout(client);
    return service;
}
private void setClientTimeout(Client client) {
    HTTPConduit conduit = (HTTPConduit) client.getConduit();
    HTTPClientPolicy policy = new HTTPClientPolicy();
    policy.setConnectionTimeout(applicationProperties.getWex().getServices().getClient().getConnectionTimeout());
    policy.setReceiveTimeout(applicationProperties.getWex().getServices().getClient().getReceiveTimeout());
    conduit.setClient(policy);
}
This same service bean is accessed by two different threads in the same application sequence. If I execute this particular sequence 10 times in a row, I will get a connection timeout from the service call at least 3 times. What I'm seeing is:
Caused by: java.io.IOException: Timed out waiting for response to operation {http://theservice.com}SearchConsumer.
at org.apache.cxf.endpoint.ClientImpl.waitResponse(ClientImpl.java:685) ~[cxf-core-3.2.0.jar:3.2.0]
at org.apache.cxf.endpoint.ClientImpl.processResult(ClientImpl.java:608) ~[cxf-core-3.2.0.jar:3.2.0]
If I change the sequence such that one of the threads does not call this service, then the error goes away. So, it seems like there's some sort of a race condition happening here. If I look at the logs in our proxy manager for this service, I can see that both of the service calls do return a response very quickly, but the second service call seems to get stuck somewhere in the code and never actually lets go of the connection until the timeout value is reached. I've been trying to track down the cause of this for quite a while, but have been unsuccessful.
I've read some mixed opinions as to whether or not CXF client proxies are thread-safe, but I was under the impression that they were. If this is actually not the case, should I be creating a new client proxy for each invocation, or using a pool of proxies?
Turns out that it is an issue with the proxy not being thread-safe. What I wound up doing was leveraging a solution kind of like the one posted at the bottom of this post: Is this JAX-WS client call thread safe? - I created a pool for the proxies and I use it to access proxies from multiple threads in a thread-safe manner. This seems to work out pretty well.
public class JaxWSServiceProxyPool<T> extends GenericObjectPool<T> {

    JaxWSServiceProxyPool(Supplier<T> factory, GenericObjectPoolConfig poolConfig) {
        super(new BasePooledObjectFactory<T>() {

            @Override
            public T create() throws Exception {
                return factory.get();
            }

            @Override
            public PooledObject<T> wrap(T t) {
                return new DefaultPooledObject<>(t);
            }
        }, poolConfig != null ? poolConfig : new GenericObjectPoolConfig());
    }
}
I then created a simple "registry" class to keep references to various pools.
@Component
public class JaxWSServiceProxyPoolRegistry {

    private static final Map<Class, JaxWSServiceProxyPool> registry = new HashMap<>();

    public synchronized <T> void register(Class<T> serviceTypeClass, Supplier<T> factory, GenericObjectPoolConfig poolConfig) {
        Assert.notNull(serviceTypeClass);
        Assert.notNull(factory);
        if (!registry.containsKey(serviceTypeClass)) {
            registry.put(serviceTypeClass, new JaxWSServiceProxyPool<>(factory, poolConfig));
        }
    }

    public <T> void register(Class<T> serviceTypeClass, Supplier<T> factory) {
        register(serviceTypeClass, factory, null);
    }

    @SuppressWarnings("unchecked")
    public <T> JaxWSServiceProxyPool<T> getServiceProxyPool(Class<T> serviceTypeClass) {
        Assert.notNull(serviceTypeClass);
        return registry.get(serviceTypeClass);
    }
}
To use it, I did:
JaxWSServiceProxyPoolRegistry jaxWSServiceProxyPoolRegistry = new JaxWSServiceProxyPoolRegistry();
jaxWSServiceProxyPoolRegistry.register(ConsumerSupportService.class,
        this::buildConsumerSupportServiceClient,
        getConsumerSupportServicePoolConfig());
Where buildConsumerSupportServiceClient uses a JaxWsProxyFactoryBean to build up the client.
To retrieve an instance from the pool I inject my registry class and then do:
JaxWSServiceProxyPool<ConsumerSupportService> consumerSupportServiceJaxWSServiceProxyPool = jaxWSServiceProxyPoolRegistry.getServiceProxyPool(ConsumerSupportService.class);
And then borrow/return the object from/to the pool as necessary.
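As a minimal sketch of that borrow/return step (hedged: searchConsumer and request are hypothetical stand-ins for the actual service operation, and borrowObject() throws a checked Exception that real code must handle):

ConsumerSupportService proxy = consumerSupportServiceJaxWSServiceProxyPool.borrowObject();
try {
    // The borrowing thread has exclusive use of this proxy instance
    proxy.searchConsumer(request);
} finally {
    // Always return the proxy so the pool doesn't leak instances
    consumerSupportServiceJaxWSServiceProxyPool.returnObject(proxy);
}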
This seems to work well so far. I've executed some fairly heavy load tests against it and it's held up.
I'm implementing an IBackingMap for my Trident topology to store tuples to ElasticSearch (I know there are several implementations for Trident/ElasticSearch integration already on GitHub; however, I've decided to implement a custom one which suits my task better).
So my implementation is a classic one with a factory:
public class ElasticSearchBackingMap implements IBackingMap<OpaqueValue<BatchAggregationResult>> {

    // omitting here some other cool stuff...
    private final Client client;

    public static StateFactory getFactoryFor(final String host, final int port, final String clusterName) {
        return new StateFactory() {

            @Override
            public State makeState(Map conf, IMetricsContext metrics, int partitionIndex, int numPartitions) {
                ElasticSearchBackingMap esbm = new ElasticSearchBackingMap(host, port, clusterName);
                CachedMap cm = new CachedMap(esbm, LOCAL_CACHE_SIZE);
                MapState ms = OpaqueMap.build(cm);
                return new SnapshottableMap(ms, new Values(GLOBAL_KEY));
            }
        };
    }

    public ElasticSearchBackingMap(String host, int port, String clusterName) {
        Settings settings = ImmutableSettings.settingsBuilder()
                .put("cluster.name", clusterName).build();
        // TODO add a possibility to close the client
        client = new TransportClient(settings)
                .addTransportAddress(new InetSocketTransportAddress(host, port));
    }

    // the actual implementation is left out
}
You see it gets host/port/cluster name as input params and creates an ElasticSearch client as a member of the class BUT IT NEVER CLOSES THE CLIENT.
It is then used from within a topology in a pretty familiar way:
tridentTopology.newStream("spout", spout)
        // ...some processing steps here...
        .groupBy(aggregationFields)
        .persistentAggregate(
                ElasticSearchBackingMap.getFactoryFor(
                        ElasticSearchConfig.ES_HOST,
                        ElasticSearchConfig.ES_PORT,
                        ElasticSearchConfig.ES_CLUSTER_NAME
                ),
                new Fields(FieldNames.OUTCOME),
                new BatchAggregator(),
                new Fields(FieldNames.AGGREGATED));
This topology is wrapped into some public static void main, packed in a jar and sent to Storm for execution.
The question is: should I worry about closing the ElasticSearch connection, or is it Storm's own business? If it is not done by Storm, how and when in the topology's lifecycle should I do that?
Thanks in advance!
Okay, answering my own question.
First of all, thanks again @dedek for the suggestions and for reviving the ticket in Storm's Jira.
Finally, since there's no official way to do that, I've decided to go for the cleanup() method of Trident's Filter. So far I've verified the following (for Storm v. 0.9.4):
With LocalCluster
cleanup() gets called on cluster's shutdown
cleanup() DOESN'T get called when killing the topology; this shouldn't be a tragedy, though, as one very likely won't use LocalCluster for real deployments anyway
With a real cluster
it gets called when the topology is killed, as well as when the worker is stopped using pkill -TERM -u storm -f 'backtype.storm.daemon.worker'
it doesn't get called if the worker is killed with kill -9, or when it crashes, or - sadly - when the worker dies due to an exception
Overall, that gives a more or less decent guarantee of cleanup() getting called, provided you're careful with exception handling (I tend to add 'thundercatches' to every one of my Trident primitives anyway).
My code:
public class CloseFilter implements Filter {

    private static final Logger LOG = LoggerFactory.getLogger(CloseFilter.class);

    private final Closeable[] closeables;

    public CloseFilter(Closeable... closeables) {
        this.closeables = closeables;
    }

    @Override
    public boolean isKeep(TridentTuple tuple) {
        return true;
    }

    @Override
    public void prepare(Map conf, TridentOperationContext context) {
    }

    @Override
    public void cleanup() {
        for (Closeable c : closeables) {
            try {
                c.close();
            } catch (Exception e) {
                LOG.warn("Failed to close an instance of {}", c.getClass(), e);
            }
        }
    }
}
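For completeness, a minimal sketch of how such a filter might be attached to the stream (hedged: esClient is a stand-in for however the Closeable resource is obtained; since Trident serializes topology components, in practice the filter's fields must be serializable or the resource must be acquired in prepare()):

// Pass-through filter appended to the stream purely for its cleanup() hook
tridentTopology.newStream("spout", spout)
        // ...the actual processing steps...
        .each(new Fields(), new CloseFilter(esClient));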
However, it would be nice if some day hooks for closing connections became a part of the API.