What does Actor[akka:\\play\deadLetters].tell() mean in a New Relic trace of a Play Framework 2.0 web transaction? - heroku

I have a Play Framework 2.0 Java application hosted on Heroku, and I am monitoring it using the free-tier New Relic addon. For most of the transactions, a majority of the time is spent in what New Relic labels as Actor[akka:\\play\deadLetters].tell(). What is the application actually doing during this time?

As a simple description, Akka (http://en.wikipedia.org/wiki/Akka_(toolkit); http://akka.io/) is one of the frameworks integrated into Play. Because the Play application is instrumented for monitoring, the HTTP requests handled through Akka are traced as web transactions; in short, we measure it. As for what it is specifically doing, I recommend checking the Play documentation or the Akka links in the first sentence.
If you have a Java agent version older than 3.2.0, upgrading the Java agent will give you the following change:
akka.actor.ActorKilledException is now ignored by default
The ActorKilledException is commonly thrown in Play applications as a control mechanism in normally functioning applications. In previous versions, this exception inflated the reported error rate. These exceptions are now ignored by default. You can override the default ignore_errors list to provide your own exceptions or to omit the ActorKilledException.
Let us know if this information is helpful or if you need additional assistance.
Jeanie Swan
New Relic Support

I'm not very familiar with how New Relic collects data; however, deadLetters is a special Actor that receives "all messages that were sent to a dead (or non-existent) Actor". You can read more about dead letters in the official docs.
For example, you can subscribe to these dead letters and print them, which should give you enough information to track down their source and fix it. A typical case where many dead letters are encountered is when an Actor has stopped but someone is still sending messages to it; you should be able to detect this once you print the dead letters.
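For illustration, a minimal listener could look like the sketch below, assuming the Akka 2.2+ Java API (Play 2.0 bundles an older Akka where Props construction and tell() differ slightly); the class and actor names are just placeholders:

    import akka.actor.ActorRef;
    import akka.actor.ActorSystem;
    import akka.actor.DeadLetter;
    import akka.actor.Props;
    import akka.actor.UntypedActor;

    // Logs every DeadLetter event published on the actor system's event stream.
    public class DeadLetterLogger extends UntypedActor {
        @Override
        public void onReceive(Object message) {
            if (message instanceof DeadLetter) {
                DeadLetter d = (DeadLetter) message;
                System.out.println("DeadLetter: " + d.message()
                        + " from " + d.sender() + " to " + d.recipient());
            } else {
                unhandled(message);
            }
        }

        public static void main(String[] args) {
            // In a Play application you would use the framework's actor system instead.
            ActorSystem system = ActorSystem.create("example");
            ActorRef listener = system.actorOf(Props.create(DeadLetterLogger.class), "deadLetterLogger");
            // Every message routed to deadLetters is also published as a DeadLetter event.
            system.eventStream().subscribe(listener, DeadLetter.class);
        }
    }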

Related

Notifying golongpoll.SubscriptionManager of an event from kafka-go

I was writing a POC on long-polling using Go.
I see the general package to be used is https://github.com/jcuga/golongpoll .
But suppose I want to publish an event to the golongpoll.SubscriptionManager from a general context, especially when there is a possibility that the long-poll API request is being served by one machine while the Kafka event for that particular consumer group is consumed by another instance in the cluster.
The examples given in the documentation do not cover such a scenario at all, even though it seems like a common one. One way I can think of is to have a distributed cache like Redis in between and have all the services poll it for changes, but that sounds a bit dumb to me.

SCDF. WSDL Source : Spring Cloud Task or Spring Cloud Stream or any other solution?

We have a requirement for getting data from a SOAP web service, where the same records are going to be exposed. Each record is then transformed and written to the DB.
We are the active side, and at certain intervals we are going to check whether a new record has appeared.
Our main goals are:
to have a scheduler for setting intervals
to have a mechanism to retry if something goes wrong (e.g. lost connection)
to have visual control of the process - a way to check where something got stuck (like the dashboard in SCDF)
Since there is no sample WSDL source app, I guess the Task (or Stream?) will have to be written by ourselves. But what should we use for repeating and scheduling...
I need your advice in choosing the right approach.
I'm not tied to the SCDF solution if any others are more suitable.
If you intend to consume SOAP messages directly from external services, you could either build a custom Spring Cloud Stream source or a simple Spring Batch/Spring Cloud Task application. Both options provide the resiliency patterns, including retries.
However, if the upstream data is not real-time, you would choose the Task path, because streams are long-running and never terminate. Tasks, on the other hand, run for a finite period of time, terminate, and free up resources. There is also the option of using the platform-specific scheduler implementation to launch the Task periodically on a recurring schedule.
From the SCDF dashboard, you can design/build Composed Tasks, including the state transitions and the desired downstream operation.
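For a rough idea, such a Task could be as small as the following sketch, assuming Spring Cloud Task; the class name and the fetchNewRecords/transform/save placeholders are hypothetical, not a reference implementation:

    import org.springframework.boot.CommandLineRunner;
    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;
    import org.springframework.cloud.task.configuration.EnableTask;
    import org.springframework.context.annotation.Bean;

    // A short-lived task: each scheduled launch checks the SOAP service once and exits.
    @EnableTask
    @SpringBootApplication
    public class WsdlPollTaskApplication {

        public static void main(String[] args) {
            SpringApplication.run(WsdlPollTaskApplication.class, args);
        }

        @Bean
        public CommandLineRunner pollOnce() {
            return args -> {
                // fetchNewRecords(), transform() and save() stand in for the SOAP client
                // call, the transformation step, and the DB write:
                // for (Record r : fetchNewRecords()) { save(transform(r)); }
            };
        }
    }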

MassTransit's PublisherConfirmation option no longer exists

Upgrading to MassTransit 4.x, on top of RabbitMQ. My application configuration was using PublisherConfirmation set to true, to ensure message delivery without the overhead of transactions. (At least, that was what the docs used to say.)
In MT 4.x, it appears that PublisherConfirmation no longer exists.
I haven't found any info (yet) on why this went away, or what replaces it moving forward. Essentially, I don't want fire-and-forget; if the message doesn't reach the queue I want an exception.
Any guidance would be appreciated.
To configure PublisherConfirmation with MT 4.x or later, that option is now set on the host instead of the bus:
https://github.com/MassTransit/MassTransit/blob/develop/src/MassTransit.RabbitMqTransport/Configuration/IRabbitMqHostConfigurator.cs#L24

EasyNetQ / RabbitMQ consuming events in Web API

I have created a Web API which allows messages to be sent to the queue. My Web API is designed with CQRS and DDD in mind. I want my message consumer to always be waiting to receive any messages on the queue. Currently, the way it's done, messages are only read when a request to the API hits the method.
Is there a way, using a console application or something else that is always running, to consume messages at any given time without having to make a request to the Web API - so more of an automated task?
If so, how do I go about it? For example, if it's a console app, how would I keep it always running (IIS?), and is there a way to use Dependency Injection, as I need to consume the message and then send it to my repository which lives in a separate solution?
Or is there a way to make EasyNetQ run at startup?
The best way to handle this situation in your case is to subscribe to bus events over AMQP through the EasyNetQ library. The recommended way of hosting it is to write a Windows service using the Topshelf library and subscribe to bus events inside that service on start.
IIS processes and threads are not reliable for such tasks, as they are designed to be recycled on a regular basis, which may cause instabilities and inconsistencies in your application.
and is there way to use Dependency Injection as I need to consume the message then send to my repository which lives on separate solution.
It is better to create a separate question for this, as it is obviously off-topic. Also, it requires further elaboration, as it is not clear what specifically you are struggling with.

Async logger in Spring

I have this question, I am just throwing it out there. I am implementing a small logging feature for my Spring-based REST API server for logging all incoming requests.
I am expecting thousands of users to use this API, so with a blocking I/O logger it's going to slow everything down. I have two approaches to solve the problem:
1. Have an async logger using an in-memory ArrayList, then use the Spring scheduler to flush it out to a log file periodically (roughly as in the sketch below).
2. Use JMS and send the logs to the queue. Let the queue handle the logging asynchronously.
Has anyone done this before with Spring? Though I am for option 2, are there better ways of doing this? Need some expert advice. Thanks everyone!
More info - I think synchronous logging will be a bottleneck because this REST API is consumed by a front-end RoR app. So one user session will definitely result in hundreds of API calls occurring very frequently. I am logging the actual request along with the JSON sent in the POSTs.
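For reference, option 1 could look roughly like the following sketch, assuming Spring's scheduling support (@EnableScheduling on a configuration class); BufferedRequestLogger and the 5-second flush interval are only illustrative:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Queue;
    import java.util.concurrent.ConcurrentLinkedQueue;
    import org.springframework.scheduling.annotation.Scheduled;
    import org.springframework.stereotype.Component;

    // Buffers log lines in memory and flushes them periodically so request
    // threads never block on file I/O.
    @Component
    public class BufferedRequestLogger {

        private final Queue<String> buffer = new ConcurrentLinkedQueue<>();

        public void log(String line) {
            buffer.offer(line); // cheap, non-blocking call from the request thread
        }

        @Scheduled(fixedDelay = 5000) // flush every 5 seconds
        public void flush() {
            List<String> batch = new ArrayList<>();
            for (String line; (line = buffer.poll()) != null; ) {
                batch.add(line);
            }
            // Write 'batch' to the log file here. Note the trade-off: lines still
            // sitting in the buffer are lost if the JVM dies before the next flush.
        }
    }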
Has anyone done this before with Spring?
Not so strangely, yes - Asynchronous Logging Using Spring
The article mentions that if you don't want any log events to be lost, JMS would be the way to go; otherwise, sticking to async makes sense for high-volume logging.
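In the simplest async variant, the logging call is just handed off to a background thread pool. A minimal sketch using Spring's @Async (class and method names are only illustrative):

    import org.springframework.context.annotation.Configuration;
    import org.springframework.scheduling.annotation.Async;
    import org.springframework.scheduling.annotation.EnableAsync;
    import org.springframework.stereotype.Service;

    @Configuration
    @EnableAsync
    class AsyncLoggingConfig {
        // Optionally define a TaskExecutor bean here to control the pool size.
    }

    @Service
    class RequestLogger {

        // Runs on a background thread, so the REST request is not blocked by I/O.
        @Async
        public void log(String method, String path, String jsonBody) {
            // Append to a file or DB, or forward to a JMS queue, as needed.
            System.out.printf("%s %s %s%n", method, path, jsonBody);
        }
    }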
If you really want to build your own logger, I suggest you take a look at Akka; it is much easier to set up than JMS.
You can use it locally (making use of all the CPU cores on your machine), or even with remote agents.
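A very small sketch of that idea, assuming the Akka 2.2+ Java API (LogActor and the usage lines are illustrative, not a complete logger):

    import akka.actor.ActorRef;
    import akka.actor.ActorSystem;
    import akka.actor.Props;
    import akka.actor.UntypedActor;

    // Callers fire-and-forget log lines with tell(); the actor serializes the
    // actual I/O on its own mailbox thread, off the request path.
    public class LogActor extends UntypedActor {
        @Override
        public void onReceive(Object message) {
            if (message instanceof String) {
                System.out.println((String) message); // replace with a file/DB append
            } else {
                unhandled(message);
            }
        }
    }

    // Usage:
    //   ActorSystem system = ActorSystem.create("logging");
    //   ActorRef logger = system.actorOf(Props.create(LogActor.class), "logger");
    //   logger.tell("POST /api/orders {\"id\":42}", ActorRef.noSender());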
