I have a channel in Slack to which a CI tool sends notifications. The CI tool sends a failure notification for every operation and there is no way to filter them out. But I know that the important notifications arrive between 12 AM and 2 AM. Is there a way to apply a filter to that channel, daily, between two time intervals?
Yes, you can call the API method conversations.history, which will return messages from a channel. By setting the parameters oldest and latest accordingly, you will only get messages from the specified timeframe.
Note that those parameters are provided as absolute timestamps (e.g. 1234567890.123456), so you need to calculate them for the current day.
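For illustration, here is a minimal sketch that computes today's 12 AM and 2 AM as Unix timestamps and calls conversations.history with Java's built-in HttpClient. The channel ID and token are placeholders:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.time.LocalDate;
    import java.time.LocalTime;
    import java.time.ZoneId;

    public class SlackWindowFetch {
        public static void main(String[] args) throws Exception {
            ZoneId zone = ZoneId.systemDefault();
            LocalDate today = LocalDate.now(zone);
            // 12 AM and 2 AM of the current day, as Unix epoch seconds
            long oldest = today.atTime(LocalTime.MIDNIGHT).atZone(zone).toEpochSecond();
            long latest = today.atTime(LocalTime.of(2, 0)).atZone(zone).toEpochSecond();

            String url = "https://slack.com/api/conversations.history"
                    + "?channel=C0123456789"   // placeholder channel ID
                    + "&oldest=" + oldest
                    + "&latest=" + latest;

            HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                    .header("Authorization", "Bearer xoxb-your-token") // placeholder token
                    .build();

            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.body());
        }
    }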
I have a system where we process text messages. Each message gets split up into sentences, and each sentence gets processed individually and the results of each sentence get published to a topic. This all happens asynchronously.
I want to be able to aggregate the results for the sentences.
The problem is that I want the window to end either when a total number of sentences has been reached or when a total amount of time has passed. Basically tumbling time windows, but ones that can also end once a total number of results has been received.
Secondly, I want to know when that window ends so that I can process the aggregation as an atomic event.
It's possible, but you have to implement a custom processor - your requirements are simply too specific for the high-level API to cater for.
Your processor would store messages in a state store and use punctuate to periodically check whether the window has expired. It would also keep a running counter and check whether the maximum number of results has been received. If either condition is met, it does the aggregation, removes the messages from the state store and sends the result downstream.
You'd have to think about what to do on restart (failover/re-balancing). When starting up, the processor should inspect its state store and calculate the current running count and the window expiry time.
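For illustration, here is a minimal sketch of such a processor using the Kafka Streams Processor API. The store name, the limits and the key scheme are assumptions, the store must be registered with the topology and connected to this processor, and persisting the window start time across restarts is left out:

    import java.time.Duration;
    import java.util.ArrayList;
    import java.util.List;
    import org.apache.kafka.streams.KeyValue;
    import org.apache.kafka.streams.processor.PunctuationType;
    import org.apache.kafka.streams.processor.api.Processor;
    import org.apache.kafka.streams.processor.api.ProcessorContext;
    import org.apache.kafka.streams.processor.api.Record;
    import org.apache.kafka.streams.state.KeyValueIterator;
    import org.apache.kafka.streams.state.KeyValueStore;

    // Closes the "window" when either MAX_COUNT results arrived or WINDOW_MS elapsed.
    // Assumes keys are unique per sentence (e.g. messageId + sentence index).
    public class CountOrTimeProcessor implements Processor<String, String, String, List<String>> {

        private static final int MAX_COUNT = 50;        // illustrative
        private static final long WINDOW_MS = 60_000L;  // illustrative

        private ProcessorContext<String, List<String>> context;
        private KeyValueStore<String, String> store;
        private long count;
        private long windowStart;

        @Override
        public void init(ProcessorContext<String, List<String>> context) {
            this.context = context;
            this.store = context.getStateStore("sentence-store"); // assumed store name
            this.windowStart = System.currentTimeMillis();
            // On restart, rebuild the running count from the restored store
            try (KeyValueIterator<String, String> it = store.all()) {
                while (it.hasNext()) { it.next(); count++; }
            }
            // Punctuate on wall-clock time to detect window expiry
            context.schedule(Duration.ofSeconds(1), PunctuationType.WALL_CLOCK_TIME, ts -> {
                if (ts - windowStart >= WINDOW_MS) { flush(); }
            });
        }

        @Override
        public void process(Record<String, String> record) {
            store.put(record.key(), record.value());
            if (++count >= MAX_COUNT) { flush(); }
        }

        private void flush() {
            List<String> keys = new ArrayList<>();
            List<String> batch = new ArrayList<>();
            try (KeyValueIterator<String, String> it = store.all()) {
                while (it.hasNext()) {
                    KeyValue<String, String> kv = it.next();
                    keys.add(kv.key);
                    batch.add(kv.value);
                }
            }
            keys.forEach(store::delete);
            if (!batch.isEmpty()) {
                // The downstream consumer sees the whole window as one atomic record
                context.forward(new Record<>("window", batch, System.currentTimeMillis()));
            }
            count = 0;
            windowStart = System.currentTimeMillis();
        }
    }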
Apache Kafka now offers a way to wait for the window to close before emitting results. Here is a piece of code:
suppress(Suppressed.untilWindowCloses(Suppressed.BufferConfig.unbounded()))
For more, check it out.
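For context, here is a sketch of where suppress() sits in a windowed aggregation; the topic names and window size are illustrative:

    import java.time.Duration;
    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.kstream.Produced;
    import org.apache.kafka.streams.kstream.Suppressed;
    import org.apache.kafka.streams.kstream.TimeWindows;

    public class SuppressExample {
        public static void main(String[] args) {
            StreamsBuilder builder = new StreamsBuilder();
            builder.<String, String>stream("sentences")
                    .groupByKey()
                    .windowedBy(TimeWindows.ofSizeWithNoGrace(Duration.ofMinutes(1)))
                    .count()
                    // hold each window's result back until the window has closed,
                    // then emit the final count exactly once
                    .suppress(Suppressed.untilWindowCloses(Suppressed.BufferConfig.unbounded()))
                    .toStream((windowedKey, total) -> windowedKey.key())
                    .to("sentence-counts", Produced.with(Serdes.String(), Serdes.Long()));
            // builder.build() would then be passed to new KafkaStreams(...)
        }
    }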
I have implemented a relay server on top of WebSocket. The sender sends many small binary messages to the server, and they are then relayed to all the connected clients.
What I am interested in is the time between the sender sending a message and a receiver receiving it. Right now I have already set up the Test Plan with a thread group of 25 receivers and another group of 1 sender, so they can receive and send messages respectively.
The aggregate report is considering the send message and read message as two different labels. How should I configure the Test Plan to record my desired time?
P.S. I am using this JMeter WebSocket sampler plugin:
https://bitbucket.org/pjtr/jmeter-websocket-samplers
Thanks in advance.
The aggregate report is considering the send message and read message as two different labels.
Sure it is, because there are two separate thread groups, according to you.
You need to sync & order the sampler results somehow, so I see two ways here:
1) Write the raw sampler results (the Simple Data Writer, Aggregate Report and Summary Report are all capable of doing that), then use an external tool (say, Excel or a similar spreadsheet) to process them, do the simple math and produce your desired timings.
Or stream the results to a time-series DB (e.g. InfluxDB) with a Backend Listener and proceed from there: do the math and/or visualize them (say, with Grafana).
2) The second option is to sync the Thread Groups to each other with the Inter-Thread Communication plugin.
But that seems trickier to me and, what's more, it may influence the timing readings (depending on how you do it), so the results get skewed.
Thus I would personally prefer passive metrics collection and post-calculation on top of it (which could be made pretty much "live" too, if you want, with a Backend Listener + InfluxDB + Grafana bundle, or similar).
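To make the "simple math" of option 1 concrete, here is a rough sketch that pairs each read with the latest preceding send in a CSV JTL file. It assumes the default column layout (timeStamp,elapsed,label,...), the sampler labels from the question, and rows in roughly chronological order:

    import java.io.BufferedReader;
    import java.io.FileReader;

    public class RelayLatency {
        public static void main(String[] args) throws Exception {
            long lastSendEnd = -1;
            try (BufferedReader in = new BufferedReader(new FileReader("results.jtl"))) {
                in.readLine(); // skip the CSV header row
                String line;
                while ((line = in.readLine()) != null) {
                    String[] f = line.split(",");        // naive split; fine for simple labels
                    long ts = Long.parseLong(f[0]);      // sample start, epoch ms
                    long elapsed = Long.parseLong(f[1]); // sample duration, ms
                    String label = f[2];
                    if (label.equals("send message")) {
                        lastSendEnd = ts + elapsed;      // when the send completed
                    } else if (label.equals("read message") && lastSendEnd >= 0) {
                        // relay latency: read completion minus the preceding send completion
                        System.out.println((ts + elapsed - lastSendEnd) + " ms");
                    }
                }
            }
        }
    }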
Currently I'm working on a SaaS with support for multiple tenants that can enable push notifications for their user-bases.
I'm thinking of using a message queue to store all pushes and send them with a separate service. That new service would need to read from the queue and send the push notifications.
My question now is: do I need to come up with a complex sending strategy? I know that GCM has a limit of 1000 devices per request, so this needs to be considered. I also can't wait for x pushes to come in, as this might delay a previous push from being sent. My next thought was to create a global array and fill it with pushes from the queue. A loop would then drain that array every second or so and send the pushes. This way pushes would get sent for sure and I wouldn't exceed the 1000 devices limit.
So ... although this might work, I'm not sure an infinite loop is the best way to go. I'm also wondering if GCM / FCM even has a request limit? If not, I wouldn't need to aggregate the pushes in the first place and could ditch the loop: I could simply fire a request for each push that gets pulled from the queue.
Any enlightenment on this topic or improvement of my prototypical algorithm would be great!
Do I need to come up with a complex sending strategy?
Not really. GCM/FCM is simple enough. Just send the message to the GCM/FCM server; it will queue the message on its own and then (as per its behavior) send it as soon as possible.
I know that GCM has a limit of 1000 devices per request, so this needs to be considered.
I think you're misreading the 1000 devices per request limit. It refers to the number of registration tokens you put in the list when using the registration_ids parameter:
This parameter specifies a list of devices (registration tokens, or IDs) receiving a multicast message. It must contain at least 1 and at most 1000 registration tokens.
This means you can send the same message payload to at most 1000 devices in a single request (you can then do batch requests, 1000 tokens each, if you need to reach more).
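As a minimal sketch of that batching (the endpoint shown is the legacy GCM/FCM HTTP endpoint; the payload, server key and method name are placeholders):

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.List;

    public class FcmBatchSender {
        private static final int MAX_TOKENS_PER_REQUEST = 1000; // the documented limit

        // splits the token list into chunks of 1000 and posts each chunk separately
        public static void send(List<String> tokens, String payloadJson, String serverKey)
                throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            for (int i = 0; i < tokens.size(); i += MAX_TOKENS_PER_REQUEST) {
                List<String> chunk =
                        tokens.subList(i, Math.min(i + MAX_TOKENS_PER_REQUEST, tokens.size()));
                String ids = "\"" + String.join("\",\"", chunk) + "\"";
                String body = "{\"registration_ids\":[" + ids + "],\"data\":" + payloadJson + "}";
                HttpRequest request = HttpRequest
                        .newBuilder(URI.create("https://fcm.googleapis.com/fcm/send"))
                        .header("Authorization", "key=" + serverKey)
                        .header("Content-Type", "application/json")
                        .POST(HttpRequest.BodyPublishers.ofString(body))
                        .build();
                client.send(request, HttpResponse.BodyHandlers.ofString());
            }
        }
    }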
I'm wondering if GCM / FCM even has a request limit?
AFAIK, there is no such limit. Ditch the loop. Whenever you successfully send a message to the GCM/FCM server, it will enqueue and keep the message until it is able to deliver it.
Is there a way to set up a Spring Integration channel so that, let's say, it only sends the messages to the output channel once it has accumulated 50 incoming messages? To look at it from a polling perspective, I want the polling process to be based on the number of messages instead of a fixed time interval: somehow poll the previous channel, possibly multiple times, but only accept the messages once there are enough to process.
Use an <aggregator/> with a release-strategy-expression="size == 50" and a correlation-strategy-expression="'foo'" (and expire-groups-upon-completion="true"). The expire-groups setting allows the next group ('foo') to form.
Follow the aggregator with a simple <splitter /> (no expressions, just in/out channels).
The aggregator will accumulate messages until 50 arrive and then release them as a collection, and the splitter will split the collection back to single messages.
If you want to release based on size or elapsed time (release a short group if x seconds elapse) then configure a MessageGroupStoreReaper.
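Pulled together, a sketch of the configuration (the channel names are illustrative):

    <channel id="input" />
    <channel id="batched" />
    <channel id="output" />

    <aggregator input-channel="input"
                output-channel="batched"
                correlation-strategy-expression="'foo'"
                release-strategy-expression="size == 50"
                expire-groups-upon-completion="true" />

    <splitter input-channel="batched" output-channel="output" />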
Twilio failed to send 150 SMS due to lack of funds in the middle of a campaign. Is there a way to resend those 150 messages in bulk? Thanks!
If you don't have a queue on your side, the easiest way is to use the API to find the list and resend as appropriate:
You can use the SMS Messages List Resource - http://www.twilio.com/docs/api/rest/sms#list - to get a list of messages within a certain date range from a certain number.
From there, you'll get back a list which you can iterate over. For each of those, check the "status" parameter for the "failed" value - http://www.twilio.com/docs/api/rest/sms#sms-status-values
I would recommend making a list of those, looking them over yourself to make sure the numbers are what you expect, and then reloading and sending them via your normal means.
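As a rough sketch with the Twilio Java helper library (the credentials are placeholders, and you would add the date-range and From-number filters described above before resending anything):

    import com.twilio.Twilio;
    import com.twilio.rest.api.v2010.account.Message;
    import com.twilio.type.PhoneNumber;

    public class ResendFailed {
        public static void main(String[] args) {
            Twilio.init("ACCOUNT_SID", "AUTH_TOKEN"); // placeholders

            // iterate over the message list and pick out the failed ones
            for (Message m : Message.reader().read()) {
                if (m.getStatus() == Message.Status.FAILED) {
                    // resend with the same to/from/body; review the list first!
                    Message.creator(new PhoneNumber(m.getTo()), m.getFrom(), m.getBody())
                           .create();
                }
            }
        }
    }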
On another front, we have auto-recharging specifically to prevent scenarios like this. If that's not turned on, you should enable it so this doesn't happen again.
Disclosure: Twilio Employee here