I would like to find out whether one can use a script to query a user for data through Kannel.
Use case
Consider a scenario where I need to carry out a registration process via SMS. I need to query for the name, birthdate, gender and so forth.
Suggestions highly appreciated.
Progress
I have tried doing so using an sms-service:
# SMS SERVICE
group = sms-service
keyword = limo
get-url = "http://localhost:3000/client?sender=%p&text=%k"
accept-x-kannel-headers = true
max-messages = 3
concatenation = true
In this case I am relying on get-url to extract the sender's MSISDN. Upon receipt of the keyword "limo", I would like to start prompting the user for their name, birthdate, gender, etc., in a stepwise manner.
When a user sends "limo", I will respond with a question, for example, "Reply with your name". The user may text back "Willy". I would like to retrieve and store this in the database, then prompt for the birthdate, which I will in turn also store in the database.
The challenge is to extract these responses effectively from smsbox.log, as well as to handle the session.
Yes, this is possible. Setting up Kannel for this is pretty much straightforward (https://jasonrogena.github.io/2014/01/18/kannel-and-the-huawei-e160.html). You will, however, have to manage your sessions on your own, as Kannel is stateless.
Update
You don't need to extract anything from the log files (processing log files is highly discouraged). Everything you need in this scenario can be passed to your script (specified by the get-url variable).
I think the GET request variables you have specified in your get-url are incorrect. Use "http://localhost:3000/client?phone=%p&text=%a" instead (%k expands to just the keyword, while %a expands to the full message text).
Handling sessions is pretty easy:
In your database, have a table that will store sessions. Make the sender's phone number the primary key for this table. The table should probably have another column for storing the last registration step (in your case, you can declare it as an ENUM('name','birthdate','gender')). I'll refer to this column as the last_step column.
When your script is called by Kannel, check if the sender's number is in the session table.
If the sender is not in the session table, add a new row with the primary_key = the phone number and last_step = 'name'. You should probably then send the sender a text message asking them to provide their name.
If the sender is in the session table, check the value of last_step for that particular number. If last_step = 'name', your code should assume that the sender has just sent their name. Store the received SMS as the name in your DB. Update last_step = 'birthdate', then send an SMS to the sender asking them to provide their birth date.
Follow this logic until the user finishes the registration process.
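The steps above can be sketched in Python with SQLite (table, column, and function names are illustrative; your HTTP script would call handle_incoming() with the %p and %a values Kannel passes in, and return the reply text for Kannel to send back):

```python
import sqlite3

# Order of registration steps; 'done' marks a finished session.
STEPS = ["name", "birthdate", "gender"]

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE sessions (
    phone TEXT PRIMARY KEY,
    last_step TEXT CHECK (last_step IN ('name','birthdate','gender','done')),
    name TEXT, birthdate TEXT, gender TEXT)""")

def handle_incoming(phone, text):
    """Return the reply SMS for one incoming message."""
    row = conn.execute("SELECT last_step FROM sessions WHERE phone = ?",
                       (phone,)).fetchone()
    if row is None:
        # New session: remember we asked for the name.
        conn.execute("INSERT INTO sessions (phone, last_step) VALUES (?, 'name')",
                     (phone,))
        conn.commit()
        return "Reply with your name"
    last_step = row[0]
    if last_step == "done":
        return "You are already registered"
    # Store the answer for the step we asked about. Interpolating the column
    # name is safe here because last_step comes from the fixed STEPS list.
    conn.execute(f"UPDATE sessions SET {last_step} = ? WHERE phone = ?",
                 (text, phone))
    # Advance to the next step.
    idx = STEPS.index(last_step)
    next_step = STEPS[idx + 1] if idx + 1 < len(STEPS) else "done"
    conn.execute("UPDATE sessions SET last_step = ? WHERE phone = ?",
                 (next_step, phone))
    conn.commit()
    if next_step == "done":
        return "Registration complete, thank you"
    return f"Reply with your {next_step}"
```

Because the phone number is the primary key, each sender gets exactly one row, and last_step is all the "session" you need.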
Related
I am migrating a SQL Server database into REDCap. I am new to REDCap and still investigating its features. I am building a survey form that will collect all the data into REDCap. Once the data is in REDCap, I want to send emails based on a date in the future. For example, suppose the instrument has fields like these:
Email: test@gmail.com
Expiry Date: 12/12/2021
I want to send an automated email to that email address (test@gmail.com) on that date (12/12/2021). Basically, it has to look at the data and send out reminders to the email address on the expiry date.
I looked at alerts and notifications. I can write conditional logic to send the reminder upon data entry, but in my case the data is already stored.
I looked at the scheduling module. It generates events on the calendar but does not send emails automatically.
Is there a way I can achieve this?
Which version of REDCap is your institution on? Since version 9.9.1 you can have an alert send either before or after a date field in your project. So the alert can be configured to be triggered by data import, and the time to send would be, say, 5 days before the [expiry_date].
Here is the changelog entry:
Improvement: A new send-time option has been added when setting up Automated Survey Invitations and Alerts & Notifications. When defining
when the ASI/Alert should be sent, the option “Send after a lapse of
time” has a new setting added so that, if desired, the user may set
the time lapse relative to the value of a date or datetime field in
the project. In previous versions, the time lapse setting could only
be set relative to the time in which the ASI/Alert was triggered. That
is still an option, but now users may also opt to send the ASI/Alert a
certain amount of time either before or after the date/time of a
specific field. This new setting will allow users to have greater
control with regard to setting when ASIs/Alerts will be sent without
getting too complicated in their setup, such as having to use complex
logic (with datediff, etc.).
As the changelog says, another method is to use datediff logic in the trigger, which you will need to use if you are not on v9.9.1 or later (you should also encourage your institution to upgrade since there are important security patches since then). When an alert has a datediff function in its logic, REDCap will check it every four hours (unless the frequency has been changed by your administrators). This means you can send the alert 5 days before the expiry date with this logic:
(existing logic) and datediff("today", [expiry_date], "d", true) = -5
The true parameter here returns the signed value, so that if the first date is later than the second, it will return a negative value. false returns an absolute number.
This will be true on the exact day when [expiry_date] is 5 days in the future.
I want to code a Telegram bot, so when I receive a message from a user I need to know the last message they sent me and which step they are currently at. So I should store user sessions (I understood this when I searched), but I don't know exactly what I should do.
I know I need a table in a db that stores UserId and ChatId, but I don't know:
How to model the flow of steps and store them in the db (I mean, how do I know which step the user is at now)?
What other columns do I need to store as part of a session?
How many messages should I store in the database? And do I need one row for each message?
If you just have to store sessions in your database, you don't need to store messages. You might also want to store messages, but that is a separate concern.
Let's assume you have a "preferences" menu in your bot where the user can write their input. You ask for the name, age, gender, etc.
How do you know, when the user writes some input, whether it's the name or the gender?
You save sessions in your db. When the bot receives a message, you check which session state the user is in, to run the right function.
An easy solution could be a SQL database.
The primary key column is the Telegram user ID (you can additionally add a chat id column if the bot is intended to work both in private and group chats), plus a "session" TEXT column where you log the user's step. The session column can be NULL by default. If the bot expects the gender (because the user issued the /gender command), you update the "session" column with the word "gender"; when the message arrives, you know how to handle it by checking the session column for that user id, and as soon as you have run the right function, you set the "session" column back to NULL.
You can create a db with these columns:
UserID, ChatID, State, Name, Age, Gender ...
On each incoming update, check whether the user exists in your db, then check the user's State, respond appropriately, and update the state at the end.
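The dispatch logic both answers describe can be sketched in Python (the handler names and the in-memory dict are illustrative stand-ins; in a real bot the state would live in your database and the replies would go through the Bot API):

```python
# In-memory stand-in for the sessions table: user_id -> state.
sessions = {}

def ask_name(user_id, text):
    sessions[user_id] = "name"      # the next message will be the name
    return "What is your name?"

def save_name(user_id, text):
    sessions[user_id] = "age"       # name received, ask for the age next
    return f"Hi {text}! How old are you?"

def save_age(user_id, text):
    sessions[user_id] = None        # flow finished, clear the state
    return "Thanks, registration done."

# Map the *stored* state to the function that handles the next message.
HANDLERS = {None: ask_name, "name": save_name, "age": save_age}

def on_message(user_id, text):
    state = sessions.get(user_id)   # what were we waiting for from this user?
    return HANDLERS[state](user_id, text)
```

The point of the pattern is that on_message never inspects the text to guess what it means; the stored state alone decides which function runs.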
My website allows users to communicate using conversations.
In the conversation-inbox page a user can see all the users that have contacted him, including a preview of the latest message from each user. The page is ordered by the date of the previewed message.
It looks roughly something like this:
UserA "Some message.." 2016-3-3
UserB "Other message.." 2016-3-2
UserC "..." 2016-2-15
etc..
I was wondering what is the correct combination of the Redis data structures to use to model this efficiently.
At first I thought about having a sorted set of the users (i.e. UserA, UserB, UserC), but this would mean I would have to have a loop to get the latest message from each user.
Is there a better way, avoiding the loop?
Thanks!
You'll need two data structures for each user's inbox: a Hash and a Sorted Set.
The Sorted Set's scores can be all set to 0 as we'll be using lexicographical ordering anyway (but there's no harm in setting them to the actual timestamp of the message, at least in the context of this answer). The members of the Sorted Set should be constructed in the following manner:
<date in YYYYMMDD>:<from user>:<message>
This will let you easily pull that view and page through it with ZREVRANGE.
But that's only half of the story - when userX is sent a new message from userA, you'll need some way of finding and removing userA's previous message from userX's inbox - that's why you need the Hash.
The Hash is used for looking up the latest message from a given user to userX. For each of userX's friends, keep in the Hash a field that is the sending user's ID (e.g. userA) and whose value is the inbox's Sorted Set member that represents the latest message from that user (same "syntax" as above). When a new message arrives, first fetch the previous message from the Hash, remove it from the Set, and then add the new message to the Set and update the Hash's field.
To make sure that Hash and Sorted Set are consistent, I recommend that you look into wrapping them together in a transaction. You can use a MULTI/EXEC block, but my preference is a Lua script.
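The update path can be sketched in Python with tiny in-memory stand-ins for the two keys, so the example runs without a live server; with redis-py you would issue the same HGET/ZREM/ZADD/HSET sequence against userX's real keys inside a MULTI/EXEC pipeline or a Lua script:

```python
# In-memory stand-ins for userX's two keys:
inbox_zset = set()   # members of the Sorted Set (scores all 0, so order is lexicographic)
latest_hash = {}     # Hash: sender id -> that sender's current member in the Sorted Set

def make_member(date_yyyymmdd, sender, message):
    """Build the Sorted Set member: <date in YYYYMMDD>:<from user>:<message>."""
    return f"{date_yyyymmdd}:{sender}:{message}"

def new_message(date_yyyymmdd, sender, message):
    """Replace the sender's previous preview with the new message."""
    previous = latest_hash.get(sender)   # HGET inbox:latest <sender>
    if previous is not None:
        inbox_zset.discard(previous)     # ZREM inbox <previous member>
    member = make_member(date_yyyymmdd, sender, message)
    inbox_zset.add(member)               # ZADD inbox 0 <member>
    latest_hash[sender] = member         # HSET inbox:latest <sender> <member>

def inbox_page(count):
    """Latest previews, newest first (ZREVRANGE inbox 0 count-1)."""
    return sorted(inbox_zset, reverse=True)[:count]
```

Because the date prefix is zero-padded YYYYMMDD, lexicographic order and chronological order coincide, which is what makes the score-0 trick work.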
I am implementing a RESTful API service and I have a question about saving related records.
For example, I have a users table and a related user_emails table. User emails should be unique.
On the client side I have a form with user data fields and a number of user_email fields (the user can add any number of fields). When the user saves the form, I must first make a query to create the record in the users table to get its ID, and only then can I save the user emails (because only then do I have the id of the record, which comes back in the response after saving the user data). But if the user enters a non-unique email in any field, that request will fail. So I end up creating a record in the users table but no record in the user_emails table.
What are the approaches to validating all this data before saving?
This is not really about the RESTful API but about transactional processing on the backend. If you are using Java with JPA, you can persist both entities in the same transaction; if there is a problem, you can roll back the entire transaction and return an error response.
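The same transactional idea, sketched in Python with SQLite (table names taken from the question; the UNIQUE constraint on user_emails.email makes the duplicate insert fail, and the rollback removes the half-created user):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE user_emails (
        user_id INTEGER REFERENCES users(id),
        email TEXT UNIQUE);
""")

def create_user(name, emails):
    """Create the user and all their emails atomically; roll back on any duplicate."""
    try:
        with conn:  # opens a transaction: commits on success, rolls back on error
            cur = conn.execute("INSERT INTO users (name) VALUES (?)", (name,))
            user_id = cur.lastrowid
            conn.executemany(
                "INSERT INTO user_emails (user_id, email) VALUES (?, ?)",
                [(user_id, e) for e in emails])
        return user_id
    except sqlite3.IntegrityError:
        return None  # duplicate email: nothing was persisted
```

Because both inserts happen inside one transaction, the orphaned users row from the question simply cannot occur.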
I would condense it down to a single request, if you could. Just for performance's sake, if nothing else. Use the user_email as your key, and have the request return some sort of status result: if the user_email is unique, it'll respond with a success message. Otherwise, it'd indicate failure.
It's much better to implement that check solely on the server side rather than in both places with the ID value, unless you need to. It'll offer better performance, and it'll let you change your implementation more easily later.
As for the actual code, since I'm not one hundred percent sure what you're asking: you could use a MERGE if you're using SQL Server. That would make it a bit easier to import the user's email and let the database worry about duplicates.
I'm building off of a previous discussion I had with Jon Skeet.
The gist of my scenario is as follows:
Client application has the ability to create new 'PlaylistItem' objects which need to be persisted in a database.
Use case requires the PlaylistItem to be created in such a way that the client does not have to wait on a response from the server before displaying the PlaylistItem.
Client generates a UUID for the PlaylistItem, shows the PlaylistItem in the client, and then issues a save command to the server.
At this point, I understand that it would be bad practice to use the UUID generated by the client as the object's PK in my database. The reason for this is that a malicious user could modify the generated UUID and force PK collisions on my DB.
To mitigate any damages which would be incurred from forcing a PK collision on PlaylistItem, I chose to define the PK as a composite of two IDs - the client-generated UUID and a server-generated GUID. The server-generated GUID is the PlaylistItem's Playlist's ID.
Now, I have been using this solution for a while, but I don't understand why I should believe my solution is any better than simply trusting the client ID. If the user is able to force a PK collision with another user's PlaylistItem objects, then I think I should assume they could also provide that user's PlaylistId; they could still force collisions.
So... yeah. What's the proper way of doing something like this? Allow the client to create a UUID, and have the server give a thumbs up/down on save? If a collision is found, revert the client changes and notify that a collision was detected?
You can trust a client generated UUID or similar global unique identifier on the server. Just do it sensibly.
Most of your tables/collections will also hold a userId or be able to associate themselves with a userId through a FK.
If you're doing an insert and a malicious user uses an existing key then the insert will fail because the record/document already exists.
If you're doing an update then you should validate that the logged in user owns that record or is authorized (e.g. admin user) to update it. If pure ownership is being enforced (i.e. no admin user scenario) then your where clause in locating the record/document would include both the Id and the userId. Now technically the userId is redundant in the where clause because the Id will uniquely find one record/document. However adding the userId makes sure the record belongs to the user that's doing the update and not the malicious user.
I'm assuming that there's an encrypted token or session of some sort that the server is decrypting to ascertain the userId and that this is not supplied by the client otherwise that's obviously not safe.
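The ownership check from the update case, sketched with SQLite (schema and names are illustrative; the crucial point is that session_user_id comes from the server-side session, never from the request body):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE playlist_items (
    id TEXT PRIMARY KEY,        -- client-generated UUID
    user_id TEXT NOT NULL,      -- owner, taken from the session at insert time
    title TEXT)""")
conn.execute("INSERT INTO playlist_items VALUES ('uuid-1', 'alice', 'Song A')")

def update_title(item_id, session_user_id, new_title):
    """Update only if the logged-in user owns the row."""
    cur = conn.execute(
        "UPDATE playlist_items SET title = ? WHERE id = ? AND user_id = ?",
        (new_title, item_id, session_user_id))
    conn.commit()
    return cur.rowcount == 1   # False: no such row, or not owned by this user
```

A malicious user who guesses another user's UUID matches on id but not on user_id, so the UPDATE silently touches zero rows and the API can return a 404/403.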
A nice solution would be the following, quoting Sam Newman's "Building Microservices":
The calling system would POST a BatchRequest, perhaps passing in a
location where a file can be placed with all the data. The Customer
service would return a HTTP 202 response code, indicating that the
request was accepted, but has not yet been processed. The calling
system could then poll the resource waiting until it retrieves a 201
Created indicating that the request has been fulfilled
So in your case, you could POST to server but immediately get a response like "I will save the PlaylistItem and I promise its Id will be this one". Client (and user) can then continue while the server (maybe not even the API, but some background processor that got a message from the API) takes its time to process, validate and do other, possibly heavy logic until it saves the entity. As previously stated, API can provide a GET endpoint for the status of that request, and the client can poll it and act accordingly in case of an error.
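That accept-then-poll flow can be simulated in a few lines of Python (no real HTTP here; the three functions stand in for the POST handler, the background processor, and the polled GET status endpoint, and all names are illustrative):

```python
import uuid

# request_id -> ("accepted" | "created" | "error", detail)
requests = {}

def post_playlist_item(payload):
    """POST handler: accept immediately, process later."""
    request_id = str(uuid.uuid4())
    requests[request_id] = ("accepted", None)
    return 202, request_id          # "I promise to process this"

def process(request_id, payload):
    """Background worker: validate and persist, then record the outcome."""
    if not payload.get("title"):
        requests[request_id] = ("error", "title is required")
    else:
        requests[request_id] = ("created", payload)

def get_status(request_id):
    """GET handler the client polls: 202 while pending, 201 once done."""
    state, detail = requests[request_id]
    return {"accepted": (202, None),
            "created": (201, detail),
            "error": (400, detail)}[state]
```

The client shows the PlaylistItem optimistically right after the 202, keeps polling get_status, and only has to revert its local state if the poll eventually comes back with an error.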