Persist state of user during conversation - python-telegram-bot

Every time that I restart my app (bot), conversation states are lost.
Is it possible to persist user state in a storage (like redis or a database) during conversations? How to do that?

This feature is now available in the master branch. There are two main classes: DictPersistence and PicklePersistence.
Of course, you can also write your own implementation backed by Redis or a database by subclassing BasePersistence.
See the GitHub PR.
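A minimal sketch of how this could look with PicklePersistence (assuming a python-telegram-bot version that already includes the feature; the token, file name, and handlers are placeholders):

```python
# Hedged sketch: persist ConversationHandler state across restarts with
# PicklePersistence (python-telegram-bot v12-style API; "TOKEN" is a placeholder).
from telegram.ext import (
    Updater, CommandHandler, MessageHandler, Filters,
    ConversationHandler, PicklePersistence,
)

ASK_NAME = 0

def start(update, context):
    update.message.reply_text("What's your name?")
    return ASK_NAME

def got_name(update, context):
    update.message.reply_text("Hello, {}!".format(update.message.text))
    return ConversationHandler.END

persistence = PicklePersistence(filename="bot_state.pickle")  # written to disk
updater = Updater("TOKEN", persistence=persistence, use_context=True)

conv = ConversationHandler(
    entry_points=[CommandHandler("start", start)],
    states={ASK_NAME: [MessageHandler(Filters.text, got_name)]},
    fallbacks=[],
    name="onboarding",  # persistent conversations must be named
    persistent=True,    # state is saved through the persistence object
)
updater.dispatcher.add_handler(conv)
updater.start_polling()
updater.idle()
```

After a restart, the dispatcher reloads the conversation states from the pickle file, so in-progress conversations resume where they left off.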

Related

Redis publisher also receiving the message

I am building a scalable chat application using Go and Redis with websockets.
I need to publish new-message events using the Redis pub/sub model to the other websocket servers, to inform all the users (held in memory on those servers) about a newly joined user.
The issue is that the publisher (which is also a Redis client) receives the same message. Is there a direct way to solve this?
Workaround:
Every time the publisher receives an event about a new user, check whether that user is already in the list of current local users.
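Redis pub/sub has no built-in way to exclude the publishing connection, so besides the workaround above you can tag each message with the origin server and have subscribers skip their own. A sketch (in Python rather than Go, for illustration; the channel name and payload fields are assumptions):

```python
# Hedged sketch: each websocket server tags its published events with its own
# ID and ignores events it published itself. Names here are illustrative.
import json
import uuid
import redis

SERVER_ID = str(uuid.uuid4())  # unique per server instance
r = redis.Redis()

def publish_user_joined(user_id):
    r.publish("user-events",
              json.dumps({"origin": SERVER_ID, "user_id": user_id}))

def listen():
    pubsub = r.pubsub(ignore_subscribe_messages=True)
    pubsub.subscribe("user-events")
    for message in pubsub.listen():
        event = json.loads(message["data"])
        if event["origin"] == SERVER_ID:
            continue  # skip our own events instead of scanning local users
        # ... notify locally connected users about event["user_id"] ...
```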

How to get the identity of a person invoking a transaction inside a transaction in Hyperledger composer?

I want to save the identity of the user invoking a transaction in Hyperledger Composer. Is there a way of getting the user identity inside a transaction without passing it as a transaction parameter?
It depends on how your users are managed. Typically an organization has a couple of Fabric users that invoke transactions on the blockchain, and this user can be determined from the ledger. However, if you authenticate users at the application level and then invoke with the same Fabric client, there is no way of drilling down to find which user within the organization invoked the transaction without passing the user as part of the transaction.
Answering my own question: Hyperledger Composer has a global method, getCurrentParticipant(), that can be called inside a transaction to get the participant invoking it. It also has getCurrentIdentity(), which returns the identity of the current participant. See the Composer documentation for more information.

Database architecture for micro services

I have often heard that in a microservices architecture, every microservice should have its own database.
But then I cannot maintain foreign key constraints across the different databases. For example, I have a user table in the authentication microservice and I want to reference it (the userid column from the user table) in my catalog service.
How can this be resolved?
Thanks in advance.
You can maintain a shadow copy (with only the useful information, e.g. just the userid column) of the user table in the catalog service via event sourcing (using, for example, RabbitMQ or Apache Kafka for async messaging); a sketch follows below.
The catalog service will use the user information in read-only mode. This solution is effective only when user information doesn't change frequently; otherwise the async communication can be inefficient and costly.
In that case you can have the catalog service make API calls to the user service for any validation to be done on user data.
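A hedged sketch of that shadow copy, consuming user events from Kafka into a local read-only store (the topic name, event schema, and SQLite store are assumptions, not from the answer):

```python
# Hedged sketch: the catalog service keeps a shadow copy of user IDs by
# consuming user events (topic and payload fields are illustrative).
import json
import sqlite3
from kafka import KafkaConsumer  # pip install kafka-python

db = sqlite3.connect("catalog.db")
db.execute("CREATE TABLE IF NOT EXISTS user_shadow (user_id TEXT PRIMARY KEY)")

consumer = KafkaConsumer(
    "user-events",
    bootstrap_servers="localhost:9092",
    value_deserializer=json.loads,
)

for event in consumer:
    payload = event.value
    if payload["type"] == "user-created":
        db.execute("INSERT OR IGNORE INTO user_shadow VALUES (?)",
                   (payload["user_id"],))
    elif payload["type"] == "user-deleted":
        db.execute("DELETE FROM user_shadow WHERE user_id = ?",
                   (payload["user_id"],))
    db.commit()
```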
Use the Saga pattern to maintain data consistency across services.
A saga is a sequence of local transactions. Each local transaction updates the database and publishes a message or event to trigger the next local transaction in the saga. If a local transaction fails because it violates a business rule then the saga executes a series of compensating transactions that undo the changes that were made by the preceding local transactions.
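A minimal sketch of that compensation logic (the step functions are hypothetical placeholders, not from the answer):

```python
# Hedged sketch: a saga as an ordered list of (action, compensation) pairs;
# if a step fails, completed steps are undone in reverse order.
def reserve_credit(): print("credit reserved")
def release_credit(): print("credit released")
def create_order():   print("order created")
def cancel_order():   print("order cancelled")

def run_saga(steps):
    done = []
    try:
        for action, compensate in steps:
            action()
            done.append(compensate)
    except Exception:
        for compensate in reversed(done):  # compensating transactions
            compensate()
        raise

run_saga([(reserve_credit, release_credit),
          (create_order, cancel_order)])
```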

Micro-services architecture, need advice

We are working on a system that is supposed to 'run' jobs on distributed systems.
When jobs are accepted they need to go through a pipeline before they can be executed on the end system.
We've decided to go with a microservices architecture, but there is one thing that bothers me, and I'm not sure what the best practice would be.
When a job is accepted it will first be persisted into a database, then - each micro-service in the pipeline will do some additional work to prepare the job for execution.
I want the persisted data to be updated at each such station in the pipeline to reflect the actual state of the job, or its status in the pipeline.
In addition, while a job is being executed on the end system - its status should also get updated.
What would be the best practice for updating the database (the job's status) at each station:
Each such station (microservice) in the pipeline accesses the database directly and updates the job's status
There is another microservice that exposes the data (REST) and serves as a DAL; each microservice in the pipeline updates the job's status through this service
Other?....
Help/advice would be highly appreciated.
Thanks a lot!
To add to what was said by @Anunay and @Mohamed Abdul Jawad:
I'd consider writing the state from the units of work in your pipeline to a view (a table or insert-only cache). You can use messaging, or simply insert a row into that view, and have the readers of the state pick up the correct state based on some logic (a date, a state, or a composite key). As this view is not really owned by any domain service, it can be available to any readers (read-only) to consume.
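A hedged sketch of such an insert-only status view (the table, columns, and SQLite store are assumptions):

```python
# Hedged sketch: pipeline stages append rows; readers take the newest row
# per job. Insert-only, so no service ever overwrites another's writes.
import sqlite3
from datetime import datetime, timezone

db = sqlite3.connect("jobs.db")
db.execute("""CREATE TABLE IF NOT EXISTS job_status
              (job_id TEXT, stage TEXT, status TEXT, recorded_at TEXT)""")

def record_status(job_id, stage, status):
    db.execute("INSERT INTO job_status VALUES (?, ?, ?, ?)",
               (job_id, stage, status,
                datetime.now(timezone.utc).isoformat()))
    db.commit()

def current_status(job_id):
    return db.execute(
        "SELECT stage, status FROM job_status WHERE job_id = ? "
        "ORDER BY recorded_at DESC LIMIT 1", (job_id,)).fetchone()
```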
Also consider the Saga pattern:
A Saga is a sequence of local transactions where each transaction updates data within a single service. The first transaction is initiated by an external request corresponding to the system operation, and then each subsequent step is triggered by the completion of the previous one.
http://microservices.io/patterns/data/saga.html
https://dzone.com/articles/saga-pattern-how-to-implement-business-transaction
https://medium.com/@tomasz_96685/saga-pattern-and-microservices-architecture-d4b46071afcf
If you would like to code the workflow:
Microservice A, which accepts the job and the commands to update it
Microservice B, which provides the read model for the job
Based on JobCreatedEvents, use a message queue to process and update the job through the queue pipeline, updating the JobStatus at every node in the pipeline (a sketch of one such node follows below).
I am assuming you know things about queues and consumers.
I'm new to Camunda (a workflow engine) myself; it might be usable here, but I'm not completely sure.
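A hedged sketch of one such pipeline node using RabbitMQ via pika (the queue names, job payload, and status value are assumptions):

```python
# Hedged sketch: consume a job from this node's input queue, record the new
# status, and forward the job to the next queue in the pipeline.
import json
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="jobs.validate")
channel.queue_declare(queue="jobs.prepare")

def handle(ch, method, properties, body):
    job = json.loads(body)
    job["status"] = "validated"  # this node's contribution to JobStatus
    # ... persist job["status"] via the job service here ...
    ch.basic_publish(exchange="", routing_key="jobs.prepare",
                     body=json.dumps(job))
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(queue="jobs.validate", on_message_callback=handle)
channel.start_consuming()
```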
Accessing a shared database from multiple microservices is highly discouraged, as it violates a basic rule of microservices architecture:
a microservice must be autonomous and own its logic and data.
Also, to achieve a good microservice design, you should loosely couple your microservices.
Multiple microservices accessing the database is not recommended. Here you have the case where each service needs to be triggered, then it updates the data, and then it somehow calls the next service.
You really need a mechanism to orchestrate the services. A workflow engine might fit the bill.
I would, however, suggest an event-driven system. (I might be going beyond what your data allows; my knowledge of it is limited.) Have one service that gives you basic CRUD on the data, and other services that hold the logic to change the data. (At this point I would like to ask why you want different services to change the state; if it's a business requirement, that's fine.) Once the data is written, just publish an event to which services can subscribe and react.
This will allow you to easily add more states to your pipeline in the future.
You will need a service to manage the event queue.
As far as logging the state of the job is concerned, it can be done easily by logging the events.
If you opt for the workflow route, you may use Amazon SWF or Camunda; there are quite a few options out there.
If going for the event route, you need to look into event-driven systems in microservices.

Data sharing with microservices

I am implementing an event-driven microservice architecture. Imagine the following scenario:
Chat service: Ability to see conversations and send messages. Conversations can have multiple participants.
Registration-login service: Deals with the registration of new users, and login.
User service: Getting/updating user profiles.
The registration-login service emits the following events (registration-new carries the newly created user object):
registration-new
login-success
logout-success
The chat service then listens on registration-new and stores some fields of the user object in its own Redis cache. It also listens on login-success to store the token, and on logout-success to delete it.
The user service emits one event, user-updated. When this is fired, a listener in the chat service updates the data for that user id in Redis. Like the chat service, the user service also listens on login-success and logout-success and handles them the same way.
My question is the following: is this a good way to do this? It feels a bit counterintuitive to be sharing data everywhere. I need some advice on this. Thank you!
It seems there's no other way. Microservices architecture puts a lot of stress on avoiding data sharing, so as not to create dependencies. That means each microservice will have some data duplicated, and that there must exist a way of getting data from other contexts. The preferred methods strive for eventual consistency, such as sending messages to event-sourcing or AMQP systems and subscribing to them. You can also use synchronous methods (RPC calls, distributed transactions); that creates additional technological dependencies, but if you cannot accept eventual consistency it could be the only way.
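A hedged sketch of the chat service's subscriptions described in the question (the event payload fields and Redis key layout are assumptions):

```python
# Hedged sketch: the chat service caches selected user fields in Redis on
# registration-new/user-updated and tracks tokens on login/logout events.
import json
import redis

r = redis.Redis()

def on_registration_new(raw):
    user = json.loads(raw)
    r.hset("user:{}".format(user["id"]), mapping={"name": user["name"]})

on_user_updated = on_registration_new  # the same fields are refreshed

def on_login_success(raw):
    data = json.loads(raw)
    r.set("token:{}".format(data["user_id"]), data["token"])

def on_logout_success(raw):
    data = json.loads(raw)
    r.delete("token:{}".format(data["user_id"]))
```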
