Rasa Chatbot: Handling repeated scenario - rasa-nlu

I am working on a follow-up bot. Each user has many tasks; when a user asks about their tasks, the bot fetches them via an API and displays them one by one, asking whether the user is able to finish each task today. If the user says yes, the task is marked as completed; if no, the bot asks the user for a finish date.
I tried many solutions in the Action by iterating over the tasks and dispatching a template, but after dispatching, the loop stops and never resumes.
import json
from collections import namedtuple

import requests


class ActionRequestTasks(Action):
    def name(self):
        return "action_request_tasks"

    @staticmethod
    def json2obj(data):
        return json.loads(data, object_hook=lambda d: namedtuple('X', d.keys())(*d.values()))

    def run(self, dispatcher, tracker: DialogueStateTracker, domain):
        response = requests.get('url', headers=headers)
        tasks_wrapper = self.json2obj(response.text)
        data = tasks_wrapper.Data
        first_message = "You have {} delayed tasks, I will help you to go through all of them".format(len(data))
        dispatcher.utter_message(first_message)
        for task in data:
            task_message = "Task Title {}\nComplete percentage {}\nStart Date {}\nFinish Date {}".format(
                task.Title, task.PercentComplete, task.StartDate, task.FinishDate)
            dispatcher.utter_message(task_message)
            dispatcher.utter_template("utter_able_to_finish", tracker)
        return []

This sounds like the perfect application for a Form. You can make the API call in the required_slots() method, then use validation to fill the slots dependent on the user's response. The form will run until all slots are filled, then you can decide what to do with the slots in the submit() method (for instance, updating the task status for each one via another request).
I recommend reading the docs on Form setup and also checking out the code for formbot to see a working implementation.
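Outside of Rasa itself (the exact FormAction API varies by version), the slot-driven iteration a form gives you can be sketched in plain Python. In this sketch the `state` dict stands in for the tracker's slots, and all names (`next_step`, `task_index`) are illustrative, not part of any library:

```python
# Plain-Python sketch of the slot-driven iteration a form provides.
# The `state` dict stands in for Rasa's slots; names are illustrative.

def next_step(state, tasks, user_answer=None):
    """Advance one task per user answer and return the bot's next utterance."""
    i = state.get("task_index", 0)
    if user_answer is not None:
        if user_answer == "yes":
            tasks[i]["status"] = "completed"
        else:
            tasks[i]["status"] = "needs_finish_date"  # bot would ask for a date
        i += 1
        state["task_index"] = i
    if i >= len(tasks):
        return "All tasks reviewed."
    return "Task: {} - are you able to finish it today?".format(tasks[i]["Title"])
```

Because the position is stored in a slot rather than in a local loop variable, the conversation can resume at the right task after every user reply, which is exactly what the form's required_slots/validation cycle automates for you.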

Related

How can a Discord bot send a message and attachment to a specific user after a job is done

I have a Discord bot which takes job requests from user DMs.
The job may take a few minutes and produce a file which should be sent back to the user.
I can fetch message.author.id inside async def on_message(message), and the job executes once the message is received. Because the job takes a few minutes, the bot hangs on it and Discord reports an error, so I used subprocess to run the job instead. But now I have no idea how the job can tell the bot that it has completed, so that the bot can send the message and attachment to that specific user.
And if the user is offline, will the message and attachment still get delivered?
@client.event
async def on_message(message):
    if message.author == client.user:
        return
    if message.content is None or message.content == "":
        return
    msg_ = message.content
    author_ = str(message.author)
    # the variables must actually be interpolated into the command string
    return_code = subprocess.call("start JOB.py {} {}".format(msg_, author_), shell=True)
I think one potential way of doing it is keeping track of which jobs are running or have been triggered by users, either through a file or a database. The file and DB options would also let you decouple the job-running part from the Discord part.
You could then use a task that runs periodically to check the dict/file/db for finished jobs and then send a message to the user that it's done. It can then either delete the job or set a "message_sent" flag or equivalent so it doesn't send the notification multiple times.
The job script, as part of its cleanup/finish, can then mark itself as complete within the file/db so that the "notify when job finished" task can pick that up and inform the user.
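A minimal sketch of that file-based job-status store, assuming a JSON file shared between the bot process and the job scripts (mark_finished and unnotified_jobs are illustrative names, not from any library):

```python
import json
import os

STATUS_FILE = "jobs.json"  # shared between the bot and the job scripts

def _load():
    if not os.path.exists(STATUS_FILE):
        return {}
    with open(STATUS_FILE) as f:
        return json.load(f)

def _save(jobs):
    with open(STATUS_FILE, "w") as f:
        json.dump(jobs, f)

def mark_finished(job_id, user_id, result_path):
    # Called by the job script as its last step.
    jobs = _load()
    jobs[job_id] = {"user": user_id, "file": result_path, "message_sent": False}
    _save(jobs)

def unnotified_jobs():
    # Called periodically by the bot (e.g. from a discord.ext.tasks loop);
    # returns finished jobs and flags them so each is notified only once.
    jobs = _load()
    done = [(jid, dict(j)) for jid, j in jobs.items() if not j["message_sent"]]
    for jid, _ in done:
        jobs[jid]["message_sent"] = True
    _save(jobs)
    return done
```

The bot side can then run a discord.ext.tasks loop every few seconds, call unnotified_jobs(), and await user.send(...) with the file attached. As for offline users: Discord stores DMs, so the user will see the message when they come back online (the bot does need to share a server with the user to open the DM).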

Creating an automatic closing time for registrations with a discord bot

I've been learning how to make a discord bot for a while and for the most part I've grasped the fundamentals already and I'm working my way through the more advanced concepts.
At the moment, I'm stuck on formulating the logic for timing out registrations.
So for example, I set up a tournament with a set deadline for registrations, what would be a good approach to close the registrations?
At the moment I already have the deadline saved in the database, and whenever users register via a command, it checks whether the current date is > the deadline date. But beyond this, I want the bot to be able to send a message by itself, announcing in the channel that "Registrations are closed".
I realized wait_for only waits for a single command. If I put that in a loop, I have to set how many registrations I should wait for (but with this I can use reacts).
A scheduler would have to loop every few minutes/hours and check if current datetime is > deadline which isn't very accurate.
The good thing is, there can only be one tournament at a time running.
You can fetch the current event from your database in on_ready, wait for that event, and then send a message to the channel (you can also store the channel_id in your database).
Check the example below.
@bot.event
async def on_ready():
    event_data = something  # fetch current event data from your database
    if event_data['end_time'] > datetime.utcnow().timestamp():
        await asyncio.sleep(event_data['end_time'] - datetime.utcnow().timestamp())  # wait for the event to end
    channel = bot.get_channel(event_data['channel_id'])
    if channel is not None:
        await channel.send("Event has been finished!")
        # you can delete this event from your database here
You can also use a scheduler to schedule a task instead of asyncio.sleep(), for example APScheduler.
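The sleep-until-deadline idea is easy to test in isolation. In this sketch (announce_at and fake_send are illustrative names), a deadline a fraction of a second away stands in for the tournament deadline stored in the database:

```python
import asyncio
from datetime import datetime, timezone

async def announce_at(deadline_ts, send):
    # Sleep until the stored deadline, then fire the closing announcement once.
    delay = deadline_ts - datetime.now(timezone.utc).timestamp()
    if delay > 0:
        await asyncio.sleep(delay)
    await send("Registrations are closed")

async def demo():
    sent = []

    async def fake_send(msg):  # stands in for channel.send
        sent.append(msg)

    deadline = datetime.now(timezone.utc).timestamp() + 0.05
    await announce_at(deadline, fake_send)
    return sent
```

In the real bot, send would be channel.send for the channel_id stored in the database, and on_ready would hand announce_at to bot.loop.create_task so that it doesn't block the event loop.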

Dagster failure notification systems

Is there a way in dagster to receive notifications when certain events occur, such as failures? For example, is there an integration with a tool like sentry available?
There is a datadog integration that lets users send events to datadog. From the docs:
@solid(required_resource_keys={'datadog'})
def datadog_solid(context):
    dd = context.resources.datadog

    dd.event('Man down!', 'This server needs assistance.')
    dd.gauge('users.online', 1001, tags=["protocol:http"])
    dd.increment('page.views')
    dd.decrement('page.views')
    dd.histogram('album.photo.count', 26, tags=["gender:female"])
    dd.distribution('album.photo.count', 26, tags=["color:blue"])
    dd.set('visitors.uniques', 999, tags=["browser:ie"])
    dd.service_check('svc.check_name', dd.WARNING)
    dd.timing("query.response.time", 1234)

    # Use timed decorator
    @dd.timed('run_fn')
    def run_fn():
        pass

    run_fn()

@pipeline(mode_defs=[ModeDefinition(resource_defs={'datadog': datadog_resource})])
def dd_pipeline():
    datadog_solid()

result = execute_pipeline(
    dd_pipeline,
    {'resources': {'datadog': {'config': {'api_key': 'YOUR_KEY', 'app_key': 'YOUR_KEY'}}}},
)
Adding first-class, user-configurable hooks for certain events (i.e. failure) is currently work in progress.
Not sure if this simply was not available yet when the accepted answer was written, but the current Dagster version (0.9.16) has a better mechanism to solve the question at hand.
They now have a hook system, where you can annotate a function to be triggered when either a pipeline has completed successfully or when it has failed.
Code example from the documentation:
@success_hook(required_resource_keys={'slack'})
def slack_on_success(context):
    message = 'solid {} succeeded'.format(context.solid.name)
    context.resources.slack.send_message(message)

@success_hook
def do_something_on_success(context):
    do_something()
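The mechanism is easy to picture in miniature. The following is only a generic sketch of the hook pattern, not Dagster's actual API: a decorator registers callbacks, and a runner fires them after a step succeeds or fails.

```python
# Generic sketch of the hook pattern (not Dagster's API): decorators register
# callbacks, and the runner invokes them after a step succeeds or fails.

_success_hooks, _failure_hooks = [], []

def success_hook(fn):
    _success_hooks.append(fn)
    return fn

def failure_hook(fn):
    _failure_hooks.append(fn)
    return fn

def run_with_hooks(step):
    try:
        result = step()
    except Exception as exc:
        for hook in _failure_hooks:
            hook(step.__name__, exc)
        raise
    for hook in _success_hooks:
        hook(step.__name__, None)
    return result
```

The value of the pattern is that notification logic (Slack, Sentry, Datadog) lives in one place and is attached declaratively, instead of being sprinkled into every solid body.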

Return initial data on subscribe event in django graphene subscriptions

I'm trying to respond to the user on subscribe. For example, in a chatroom, when a user connects to the subscription, the subscription responds with data (like a welcome message), but only to the same user who just connected (no broadcast).
How can I do that? :(
Update: we resolved this by using channels directly. DjangoChannelsGraphqlWs does not allow direct back messages.
Take a look at this DjangoChannelsGraphQL example. Link points to the part which is there to avoid "user self-notifications" (avoid user being notified about his own actions). You can use the same trick to send notification only to the user who made the action, e.g. who just subscribed.
Modified publish handler could look like the following:
def publish(self, info, chatroom=None):
    new_msg_chatroom = self["chatroom"]
    new_msg_text = self["text"]
    new_msg_sender = self["sender"]
    new_msg_is_greetings = self["is_greetings"]

    # Send greetings message only to the user who caused it.
    if new_msg_is_greetings:
        if (
            not info.context.user.is_authenticated
            or new_msg_sender != info.context.user.username
        ):
            return OnNewChatMessage.SKIP

    return OnNewChatMessage(
        chatroom=chatroom, text=new_msg_text, sender=new_msg_sender
    )
I did not test the code above, so there could be issues, but I think it illustrates the idea quite well.

Basic Sidekiq Questions about Idempotency and functions

I'm using Sidekiq to perform some heavy processing in the background. I looked online but couldn't find the answers to the following questions. I am using:
Class.delay.use_method(listing_id)
And then, inside the class, I have:
def self.use_method(listing_id)
  listing = Listing.find_by_id(listing_id)
  UserMailer.send_mail(listing)
  Class.call_example_function()
end
Two questions:
How do I make this function idempotent for the UserMailer.send_mail call? In other words, if the delayed method runs twice, how do I make sure that it only sends the mail once? Would wrapping it in something like this work?
mail_sent = false
if !mail_sent
  UserMailer.send_mail(listing)
  mail_sent = true
end
I'm guessing not, since the function is tried again and mail_sent is reset to false on the second run-through. So how do I make it so that UserMailer is only run once?
Are functions called within the delayed async method also asynchronous? In other words, is Class.call_example_function() executed asynchronously (not part of the response / request cycle?) If not, should I use Class.delay.call_example_function()
Overall, just getting familiar with Sidekiq so any thoughts would be appreciated.
Thanks
I'm coming into this late, but having been around the loop and had this StackOverflow entry appearing prominently via Google, it needs clarification.
The issue of idempotency and the issue of unique jobs are not the same thing. The 'unique' gems look at the parameters of a job at the point it is about to be processed. If they find that another job with the same parameters was submitted within some expiry time window, then the job is not actually processed.
The gems are literally what they say they are; they consider whether an enqueued job is unique or not within a certain time window. They do not interfere with the retry mechanism. In the case of the O.P.'s question, the e-mail would still get sent twice if Class.call_example_function() threw an error thus causing a job retry, but the previous line of code had successfully sent the e-mail.
Aside: The sidekiq-unique-jobs gem mentioned in another answer has not been updated for Sidekiq 3 at the time of writing. An alternative is sidekiq-middleware which does much the same thing, but has been updated.
https://github.com/krasnoukhov/sidekiq-middleware
https://github.com/mhenrixon/sidekiq-unique-jobs (as previously mentioned)
There are numerous possible solutions to the O.P.'s email problem and the correct one is something that only the O.P. can assess in the context of their application and execution environment. One would be: If the e-mail is only going to be sent once ("Congratulations, you've signed up!") then a simple flag on the User model wrapped in a transaction should do the trick. Assuming a class User accessible as an association through the Listing via listing.user, and adding in a boolean flag mail_sent to the User model (with migration), then:
listing = Listing.find_by_id(listing_id)

unless listing.user.mail_sent?
  User.transaction do
    listing.user.mail_sent = true
    listing.user.save!
    UserMailer.send_mail(listing)
  end
end

Class.call_example_function()
...so that if the user mailer throws an exception, the transaction is rolled back and the change to the user's flag setting is undone. If the "call_example_function" code throws an exception, then the job fails and will be retried later, but the user's "e-mail sent" flag was successfully saved on the first try so the e-mail won't be resent.
Regarding idempotency, you can use https://github.com/mhenrixon/sidekiq-unique-jobs gem:
All that is required is that you specifically set the sidekiq option for unique to true, like below:
sidekiq_options unique: true
For jobs scheduled in the future it is possible to set for how long the job should be unique. The job will be unique for the number of seconds configured or until the job has been completed.
If you want the unique job to stick around even after it has been successfully processed, then just set unique_unlock_order to anything except :before_yield or :after_yield (unique_unlock_order = :never).
I'm not sure I understand the second part of the question: when you delay a method call, the whole method call is deferred to the Sidekiq process. If by 'response / request cycle' you mean that you are running a web server and you call delay from there, then all the calls within use_method are made from the Sidekiq process, and hence outside of that cycle. They are called synchronously relative to each other, though.
