Purging mechanism / deletion of messages from Slack in bulk

Is there any way to delete a batch of messages from the Slack application in one go, without paying for any package?
I need to introduce a mechanism that deletes messages posted in Slack after a certain period of time, and this mechanism will run at a certain interval.

There is no API method for bulk operations. You have to keep track of the list of messages yourself and invoke chat.delete for each one (in the code of your Slack app).
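A minimal sketch of that approach using the official @slack/web-api client: page through a channel's history and call chat.delete for each message older than a cutoff. The channel id and retention period below are placeholders, and the token needs the appropriate history and chat:write scopes.

```typescript
import { WebClient } from "@slack/web-api";

const client = new WebClient(process.env.SLACK_TOKEN);

async function purgeOldMessages(channel: string, maxAgeDays: number): Promise<void> {
  // Messages with a timestamp older than this cutoff will be deleted.
  const cutoff = String(Math.floor(Date.now() / 1000) - maxAgeDays * 86400);
  let cursor: string | undefined;

  do {
    // Fetch messages older than the cutoff, one page at a time.
    const page = await client.conversations.history({ channel, latest: cutoff, cursor });
    for (const message of page.messages ?? []) {
      if (message.ts) {
        // chat.delete removes one message per call; there is no bulk variant.
        await client.chat.delete({ channel, ts: message.ts });
      }
    }
    cursor = page.response_metadata?.next_cursor || undefined;
  } while (cursor);
}

// Example: run this on a schedule (cron, etc.) to purge messages older than 30 days.
purgeOldMessages("C0123456789", 30).catch(console.error);
```

Keep Slack's rate limits in mind: chat.delete is called once per message, so a long backlog may need throttling between calls.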

Related

Using transactional bus inside consumer

I have a REST API gateway which calls one of the microservices with the MassTransit request client. This request is not durable and is meant to live for a short time; essentially it's just a replacement for "traditional" synchronous (HTTP/gRPC/etc.) gateway-to-microservice communication.
On the microservice side I have a consumer which under the hood uses a DbContext and a transaction (EF Core) to perform some work in the database. After the work is done it should publish a "WorkDoneEvent" (to be consumed later by other microservices) and return the result of the work to the API gateway. The event must be published atomically along with the transaction used to perform the work. It does not matter whether the API gateway receives the response or retries the request: as soon as the transaction is committed, both the work result and the publishing of "WorkDoneEvent" must be guaranteed.
Normally this is done with a transactional outbox, which first saves the published event to the database within the same transaction as the work. (Then some process constantly "polls" the outbox and tries to send the message to the broker; once sent, it removes the message from the outbox.) As far as I know.
MassTransit seems to have a transactional outbox built in: https://masstransit-project.com/advanced/middleware/transactions.html#transactional-bus.
However, the docs clearly state:
Never use the TransactionalBus or TransactionalEnlistmentBus when writing consumers. These tools are very specific and should be used only in the scenarios described.
And this is exactly what I want to do...
Why should I not do it?
I'd suggest using the InMemoryOutbox, which is part of MassTransit. It's significantly lighter weight, is designed to work in a consumer, and will not publish your events until after the consumer has completed (but prior to acknowledging the message at the broker). The only consideration is that your consumer should be idempotent (which needs to be the case in your approach as well) and if the operation was already performed on a retry, it should republish the events.
There are videos, articles, and a sample to go along with it.
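To make the idea concrete, here is a language-agnostic sketch (in TypeScript, not MassTransit's actual implementation) of what an in-memory outbox does: events published during message handling are buffered and only sent to the broker after the handler has finished, but before the incoming message is acknowledged. The Broker interface and handler signature are assumptions for illustration.

```typescript
interface Broker {
  publish(topic: string, payload: unknown): Promise<void>;
}

class InMemoryOutbox {
  private pending: Array<{ topic: string; payload: unknown }> = [];

  // Called by the consumer instead of publishing directly to the broker.
  publish(topic: string, payload: unknown): void {
    this.pending.push({ topic, payload });
  }

  // Flushed by the framework after the handler completes without throwing.
  async flush(broker: Broker): Promise<void> {
    for (const msg of this.pending) {
      await broker.publish(msg.topic, msg.payload);
    }
    this.pending = [];
  }
}

async function handleMessage(
  broker: Broker,
  handler: (outbox: InMemoryOutbox) => Promise<void>,
  ack: () => Promise<void>
): Promise<void> {
  const outbox = new InMemoryOutbox();
  await handler(outbox);      // do the database work; a throw here triggers a broker redelivery
  await outbox.flush(broker); // publish buffered events only on success
  await ack();                // acknowledge the incoming message last
}
```

If the process dies between the database commit and the flush, the message is redelivered, which is why the consumer must be idempotent and must republish the events when it detects the work was already done.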

Process a stream of sessions on AWS

Is there a way to implement something like Flink's session window on AWS with Lambda and some way of managing messages?
We have a stream of small events with a session id. We cannot guarantee the order of the arriving events and we don't always have a session-finished event. We know that session ids are unique. We also know that when a session is finished it won't be restarted. We also know that when the session is active we will receive a message every minute or so. We need to process the entire session as a whole.
We want to wait for a silent time of X minutes, and if no messages arrive we will process the entire session as a whole.
This is exactly what Flink's session window does; is there a way to do the same thing purely using AWS Lambda and its triggers?
There can be tens of millions of sessions at the same time.
It's not possible with an AWS Lambda.
Lambdas are stateless; they can process messages one by one, but they cannot offer any processing over a sequence of messages, which would be required for the kind of windowing logic you describe.
Maybe an option for you would be Kinesis Data Analytics? Under the hood it is actually Flink, although provided as a managed service by AWS, so maybe you'll get the "Lambda-like" experience you're looking for there?

Should we store Events in a database? (Event Driven Design)

We have several services that publish and subscribe to Domain Events. What we usually do is log events whenever we publish and whenever we process them. We basically use this to apply the choreography pattern.
We are not doing Event Sourcing in these systems, and there's no programmatic use for the events after publishing/processing. That's the main reason we opted not to store them in a durable container, like a database or an event store.
Question is, are we missing some fundamental thing by doing this?
Is storing Events a must?
I consider queued messages as system messages, even if they represent some domain event in an event-driven architecture (pub/sub messaging).
There is absolutely no hard-and-fast rule about their storage. If you would like to keep them around you could have your messaging mechanism forward them to some auditing endpoint for storage and then remove them after some time (if necessary).
You are not missing anything fundamental by not storing them.
You're definitely not missing out on anything (but there is a catch), especially if it's not something the business needs. An event-sourced system would definitely store all the events generated by the system in a database (or any other event store).
The main use of an event store is to be able to restore the system to its current state after a failure by replaying messages. To make this recovery process faster, there are snapshots.
In your case, since these events are only relevant until the process is completed, it would not make sense to store them, until you have a failure (this is the catch), especially in a distributed transaction scenario.
What would I suggest?
Don't store the events themselves, but log the relevant details about them, and maybe use an ELK stack or Grafana to store these logs (see the sketch below).
Use either the Saga pattern or the Routing Slip pattern in case of a distributed transaction, and log those as well.
If a failure occurs while processing an event, put that event into an exception queue and handle it. If it's part of a distributed transaction, make sure the events either all share the same TransactionId or carry a CorrelationId, so you can look up the logs and save your system.
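A small sketch of that logging suggestion: don't persist the event, but emit a structured log line with enough detail (including a CorrelationId) that a failed distributed transaction can be traced in ELK or Grafana later. The event shape and logger below are illustrative assumptions.

```typescript
interface DomainEvent {
  type: string;
  correlationId: string;
  occurredAt: string;
  payload: Record<string, unknown>;
}

function logEvent(stage: "published" | "processed" | "failed", event: DomainEvent): void {
  // One JSON line per event is easy to ship to an ELK stack or Grafana/Loki.
  console.log(
    JSON.stringify({
      stage,
      type: event.type,
      correlationId: event.correlationId,
      occurredAt: event.occurredAt,
      payload: event.payload,
    })
  );
}

// Example usage at publish time:
logEvent("published", {
  type: "OrderShipped",
  correlationId: "3f2a9c",
  occurredAt: new Date().toISOString(),
  payload: { orderId: "42" },
});
```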
For reliably performing your business transactions in a distributed architecture, you somehow need to make sure that your events are published at least once.
So a service that publishes events needs to persist each such event within the same transaction that creates it.
Considering you are publishing an event via infrastructure services (e.g. a messaging service), you cannot rely on them being available all the time.
Also, your own service instance could go down after persisting your newly created or changed aggregate but before it had the chance to publish the event via, for instance, a messaging service.
Question is, are we missing some fundamental thing by doing this? Is storing Events a must?
It doesn't matter that you are not doing event sourcing. Unless it is okay from the business perspective to sometimes lose an event forever, you need to temporarily persist your event within your local transaction until it has been published.
You can look into the Transactional Outbox pattern to achieve reliable event publishing; a small sketch follows below.
Note: logging/tracking your events for monitoring or later analysis/reporting purposes is a different concern and has a different motivation.
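A minimal sketch of the Transactional Outbox pattern, assuming a relational database and a generic message broker. The Db, Tx, and Broker interfaces and the table names are hypothetical stand-ins for whatever client libraries you actually use.

```typescript
interface Tx {
  execute(sql: string, params: unknown[]): Promise<void>;
  query<T>(sql: string, params?: unknown[]): Promise<T[]>;
}

interface Db {
  transaction<T>(work: (tx: Tx) => Promise<T>): Promise<T>;
}

interface Broker {
  publish(topic: string, payload: string): Promise<void>;
}

// 1. The business write and the outbox insert share one local transaction,
//    so the event cannot be lost once the work is committed.
async function doWork(db: Db, orderId: string): Promise<void> {
  await db.transaction(async (tx) => {
    await tx.execute("UPDATE orders SET status = 'done' WHERE id = $1", [orderId]);
    await tx.execute(
      "INSERT INTO outbox (topic, payload, published) VALUES ($1, $2, false)",
      ["WorkDoneEvent", JSON.stringify({ orderId })]
    );
  });
}

// 2. A separate poller forwards unpublished rows to the broker.
//    If it crashes after publishing but before marking the row, the event is
//    sent again on the next run, which gives at-least-once delivery.
async function pollOutbox(db: Db, broker: Broker): Promise<void> {
  await db.transaction(async (tx) => {
    const rows = await tx.query<{ id: number; topic: string; payload: string }>(
      "SELECT id, topic, payload FROM outbox WHERE published = false ORDER BY id LIMIT 100"
    );
    for (const row of rows) {
      await broker.publish(row.topic, row.payload);
      await tx.execute("UPDATE outbox SET published = true WHERE id = $1", [row.id]);
    }
  });
}
```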

Do smart contracts on NEAR have events or do I need to poll the chain to get data?

Do smart contracts have events now that I can set up listeners for or do I need to poll the chain manually to get data about them?
There are no events right now on NEAR, but you could do the following:
https://github.com/near-examples/erc-20-token/blob/master/contract/events.ts
and in Rust
https://github.com/near/docs/issues/362
Instead of native events, we have a way to poll for changes in a contract's state. For example, the events above for fungible tokens are implemented using that.
Polling for changes can be done via RPC (https://docs.near.org/docs/api/rpc-experimental#example-of-data-changes), and we are also finishing the indexing infrastructure, so later you can just run an indexer node that will provide all these events (https://github.com/nearprotocol/nearcore/pull/2651).
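A hedged sketch of that polling approach, calling the experimental data-changes RPC endpoint described in the linked docs. The parameter names and response shape are taken from those docs but should be treated as assumptions and verified; the contract account id below is a placeholder.

```typescript
const RPC_URL = "https://rpc.testnet.near.org";

async function pollStateChanges(accountId: string): Promise<void> {
  const response = await fetch(RPC_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      jsonrpc: "2.0",
      id: "dontcare",
      method: "EXPERIMENTAL_changes",
      params: {
        changes_type: "data_changes", // watch the contract's storage changes
        account_ids: [accountId],     // the contract account to watch
        key_prefix_base64: "",        // empty prefix = all storage keys
        finality: "final",
      },
    }),
  });
  const body = await response.json();
  // Each entry describes a key/value update in the contract's storage;
  // an off-chain listener would diff these to derive "events".
  console.log(JSON.stringify(body.result, null, 2));
}

pollStateChanges("example-contract.testnet").catch(console.error);
```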

parse.com – Cancel scheduled push programmatically

I need to cancel some scheduled push notifications via the API.
Not via the web console.
For example, it would be good to have the ability to cancel all scheduled push notifications for a specific deviceToken (filtering by deviceToken would be good enough, because I don't need to cancel specific pushes).
The REST API docs have nothing on this topic.
This is not possible at the moment. The only two ways are to either use the Push Dashboard in a browser, which can be unhandy if you schedule a huge number of notifications, or to implement your own queuing system for push notifications.
The latter would involve creating a new table for your notifications and a background job that sends out all notifications that are due. Once sent, remove them from that table.
Other than that, you're out of luck at the moment.
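A hedged sketch of that "own queuing system" idea as Cloud Code: scheduled pushes live in your own table until they are due, so cancelling a push becomes deleting rows. The class and field names (ScheduledPush, sendAt, deviceToken, payload) are made up for illustration; the calls follow the parse-server Cloud Code API.

```typescript
declare const Parse: any; // Parse is provided as a global inside Cloud Code

const ScheduledPush = Parse.Object.extend("ScheduledPush");

// Background job: send every push that is due, then remove it from the queue.
Parse.Cloud.job("sendDuePushes", async () => {
  const query = new Parse.Query(ScheduledPush);
  query.lessThanOrEqualTo("sendAt", new Date());
  const due = await query.find({ useMasterKey: true });

  for (const push of due) {
    const installations = new Parse.Query(Parse.Installation);
    installations.equalTo("deviceToken", push.get("deviceToken"));
    await Parse.Push.send(
      { where: installations, data: push.get("payload") },
      { useMasterKey: true }
    );
    await push.destroy({ useMasterKey: true });
  }
});

// "Cancelling" scheduled pushes for a device is now just deleting its queue entries.
Parse.Cloud.define("cancelPushesForDevice", async (request) => {
  const query = new Parse.Query(ScheduledPush);
  query.equalTo("deviceToken", request.params.deviceToken);
  const pending = await query.find({ useMasterKey: true });
  await Parse.Object.destroyAll(pending, { useMasterKey: true });
  return `Cancelled ${pending.length} scheduled pushes`;
});
```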

Resources