Suggested directory / module structure for a multi-channel Yii2 app - refactoring

I started my scheduling app with the basic app template, and it worked pretty well. Initially it was only for IVR; now I am adding multiple channels (IVR, web, SMS, etc.). Each channel receives the user's request differently and responds over that same channel, but underneath it all does the same thing: scheduling. So I started off with the following structure:
backend
common
frontend
modules
    ivr
    web
    appt
For me, appt is going to be the core module, used by all channel modules. The ivr and web modules handle requests from the different channels. If someone requests an appointment, how can I pass the request data to the appt module and send appt's response back to the user?
If it were an app without modules, I would instantiate the model, assign the form-collected values to it, validate, save, and send the response synchronously within the request. How can I do the same if my module depends on another module to perform validation and other business logic?
Can anyone shed some light on my approach? Should I go with modules, or just go back to the plain single-app structure?
Edit:
My main confusion is whether IVR should be a module or a separate app, the way the backend, frontend & console apps are in the Yii2 advanced template. The other modules I'm considering are subscribe, script, and notifier. Some modules are common to all channels and others are specific to IVR.
When I treated appointment (appt) as a module and attempted to make an appointment from the web module, I got stuck on how to instantiate its models and retrieve error messages.
I think modules should be loosely coupled, but mine appear to be tightly coupled to one another. Which of the following would you suggest?
Every purpose / feature as a module, each able to handle requests / responses for all channels (ivr, web, etc.)
Every channel as a module that interacts with the feature modules. Does this overcomplicate the module system?
IVR as a separate app that imports the necessary modules, adding more as needed?

Your question is fairly general, so it's hard to give a precise answer, but Yii2 lets you pass information between modules in several ways. You could redirect calls, but I think the best solution is to instantiate, inside your channel modules, the appt services that you need: include the correct namespace with use ----\modules\ivr ... and then call the appropriate methods of the modules you depend on.
As for the response sent to users, you obviously have to return the information in different formats, so you must take care to use the correct format for each channel type. Once you have organized the different forms you need, I think you can limit yourself to writing or specializing an appropriate render / echo.
Compared to your options, I would keep the following aspects in mind: the various channels can well be separate modules, strongly decoupled from each other.
The integration and coupling between the various components can then be managed by a single module (appt) that takes care of integrating and coordinating user requests and relaying the various tasks to the channel-specific functions.
You should also consider managing the various features through a proper object-oriented architecture, which would allow further centralization of the coordination between module functions, especially by creating an abstract channel from which the common channel functions (and the relationship with the system components) are specialized.
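The abstract-channel idea can be sketched as follows. Yii2 is PHP, so this TypeScript mock-up only mirrors the shape of the architecture; all names (ApptService, Channel, WebChannel) and the `name&slot` input format are made up for the sketch.

```typescript
// The shared scheduling core (the "appt" module's service layer).
class ApptService {
  book(name: string, slot: string): { ok: boolean; errors: string[] } {
    const errors: string[] = [];
    if (!name) errors.push("name is required");
    if (!slot) errors.push("slot is required");
    return { ok: errors.length === 0, errors };
  }
}

// Abstract channel: a concrete channel only knows how to parse its own
// input and format its own output; all scheduling logic stays in appt.
abstract class Channel {
  constructor(protected appt: ApptService) {}
  abstract parse(raw: string): { name: string; slot: string };
  abstract format(result: { ok: boolean; errors: string[] }): string;
  handle(raw: string): string {
    const req = this.parse(raw);
    return this.format(this.appt.book(req.name, req.slot));
  }
}

// One concrete channel; an IvrChannel would differ only in parse/format.
class WebChannel extends Channel {
  parse(raw: string) {
    const [name = "", slot = ""] = raw.split("&");
    return { name, slot };
  }
  format(r: { ok: boolean; errors: string[] }) {
    return r.ok ? "<p>Booked</p>" : `<p>${r.errors.join(", ")}</p>`;
  }
}
```

The point of the sketch is that validation errors flow back through the same `handle` call, so a channel module never needs to know how appt validates.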
I would look with particular caution at conceiving the IVR module as something too different from the other channels. It certainly has some intrinsic specificities, but try to work in terms of homogeneous modularity.
The discussion is still very broad, and we would need to be more specific about the interaction and telecommunication needs that define the application's functional goals.
I hope this is useful.


How to run multiple different bots at different endpoints within the same project without using skills?

Do you know how to add more bots within the same project?
I think it should be possible because of a comment in the CoreBot sample's startup class from BotBuilder-Samples.
However, I have no idea how to do it.
Have you managed to do it, and can you share the steps needed to accomplish it?
Thanks,
Jan
Defining multiple bots in the same project is not possible: you can only attach one implementation to the IBot interface. Additionally, when the project is deployed, its URL will be recognized as only one bot. You can extend the capabilities of your bot by adding more controllers, but it will still register as a single bot.
If you still want to do this somehow, I'd suggest creating a menu context as the initial response of your bot. Give the user the option of choosing which functionality (which "bot") they are going to use, and from that point separate the flows within your bot.
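Independently of the Bot Framework SDK, the menu idea can be sketched as a per-user mode that routes each message to one of several sub-flows. All names here (SubBot, route, the two sample flows) are illustrative, not Bot Framework APIs.

```typescript
// Emulates "several bots behind one endpoint" with a per-user mode.
type SubBot = (text: string) => string;

const subBots: Record<string, SubBot> = {
  weather: (t) => `weather bot echoing: ${t}`,
  orders: (t) => `orders bot echoing: ${t}`,
};

const modes = new Map<string, string>(); // userId -> chosen sub-bot

function route(userId: string, text: string): string {
  const mode = modes.get(userId);
  if (!mode) {
    if (text in subBots) {          // user picked from the menu
      modes.set(userId, text);
      return `switched to ${text}`;
    }
    return `pick one of: ${Object.keys(subBots).join(", ")}`;
  }
  return subBots[mode](text);       // delegate to the chosen flow
}
```

In a real bot the `modes` map would live in conversation state rather than in memory, and a reserved keyword could reset the menu.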

Articulate storyline 360 launch xapi course with adlnet/xAPIWrapper

One of my clients sent me an xAPI course created using Articulate Storyline 360 and published as Tin Can API for LMS. I am able to launch the course using the method mentioned in the link below:
Incorporating a Tin Can LRS into an LMS
So using the above method the launch URL looks like:
http://my.lms.com/TCActivityProvider/story.html
?endpoint=http://my.lms.com/lrs/endpoint/
&auth=Basic OjFjMGY4NTYxNzUwOGI4YWY0NjFkNzU5MWUxMzE1ZGQ1
&actor={"name": ["First Last"], "mbox": ["mailto:firstlast@mycompany.com"]}
&activity_id=61XkSYC1ht2_course_id
&registration=760e3480-ba55-4991-94b0-01820dbd23a2
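A launch URL like the one above can be assembled with proper encoding (a sketch only; the parameter names come from the launch guidelines quoted here, the values are the question's placeholders, and the actor value, being JSON, must be URL-encoded):

```typescript
// Assembles a Tin Can-style launch URL from its query parameters.
function buildLaunchUrl(base: string, params: Record<string, string>): string {
  const query = Object.entries(params)
    .map(([k, v]) => `${k}=${encodeURIComponent(v)}`)
    .join("&");
  return `${base}?${query}`;
}

const url = buildLaunchUrl("http://my.lms.com/TCActivityProvider/story.html", {
  endpoint: "http://my.lms.com/lrs/endpoint/",
  auth: "Basic OjFjMGY4NTYxNzUwOGI4YWY0NjFkNzU5MWUxMzE1ZGQ1",
  actor: JSON.stringify({ name: ["First Last"], mbox: ["mailto:firstlast@mycompany.com"] }),
  activity_id: "61XkSYC1ht2_course_id",
  registration: "760e3480-ba55-4991-94b0-01820dbd23a2",
});
```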
Using the above URL, which includes the endpoint and credential information, the course launches successfully and automatically sends xAPI statements to the LRS.
But I don't want to send parameters like auth, actor, or endpoint in the URL, for security reasons.
I googled for an alternative method and found the adlnet/xapi-launch and adlnet/xAPIWrapper libraries.
I explored the two libraries but am confused about how they can be integrated into the LMS.
Does Articulate Storyline 360 support adlnet/xAPIWrapper?
The adlnet/xAPIWrapper is just a library that makes it easier to communicate with the LRS; it requires you to determine how the endpoint and authentication credentials will be passed to it. In other words, it isn't necessarily intended to be used via LMS launch (it will work there, but has no special handling for it). The xapi-launch specification you found has, as far as I know (at this time), effectively zero adoption.
The other alternative would be to use cmi5, another specification that includes the concepts of packaging, import, and launch for content that communicates via xAPI. It uses a different credential handshake that is similar to both the launch guidelines you linked and the xapi-launch method: the endpoint is passed as a query string parameter, but the LRS credentials are obtained via a separate, single-use request. It has better adoption (though still early at this time), has been peer reviewed, is under the ADL umbrella, and is on more of a standards path. See https://xapi.com/cmi5/ for more about cmi5. I don't believe Articulate has implemented cmi5 in their products yet (at this time), as they are waiting for more indication of market demand; you should contact them about your interest if you feel it is a suitable option.
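For contrast, the cmi5 handshake can be sketched roughly like this (per my reading of the cmi5 spec, the launch URL carries a one-time fetch URL, and a single POST to it returns an auth-token used for Basic auth; `fetchImpl` is injected here only to keep the sketch self-contained):

```typescript
// Sketch of the cmi5-style credential handshake, which keeps the LRS
// credentials out of the launch URL: the content POSTs once to the
// "fetch" URL and receives an auth-token in the JSON response.
async function getLrsAuth(
  fetchUrl: string,
  fetchImpl: (url: string, init: { method: string }) => Promise<{ json(): Promise<any> }>,
): Promise<string> {
  const res = await fetchImpl(fetchUrl, { method: "POST" }); // single-use request
  const body = await res.json();
  return `Basic ${body["auth-token"]}`; // value for the Authorization header
}
```

Because the token can only be fetched once and is scoped to the registration, nothing sensitive survives in browser history or server logs.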

Creating an API for LUIS.AI or using .JSON files in order to train the bot for non-technical users

I have a bot that uses .NET, MS Bot Framework and LUIS.ai for its smarts.
All's fine, except that I need to provide a way for non-technical users to train the bot and teach it new things, i.e. new intents in LUIS.ai.
In other words, suppose that right now the bot can answer messages like "hey bot where can i get coffee" and "where can I buy some clothes" with simple phrases containing directions. Non-technical users need to be able to train it to answer "where can I get some food" too.
Here's what I have considered:
Continuing to use LUIS.ai. Doesn't work because LUIS.ai doesn't have an API. The best it has is the GUI to refine existing intents, and the upload app/phrase list feature. The process can be semi-automated if the JSON file with the app can be generated by some application that I write; however, there still needs to be backend code that handles the new intents, and that has to be implemented by a C# coder.
Could it work if I switch from C# to Node.js? Then theoretically I would be able to auto-generate code files / intent handlers.
Azure Bot Service. Seems it doesn't have a non-technical interface and is just a browser-based IDE.
Ditching Bot Framework entirely and using third-party tools such as motion.ai. Doesn't work because there's no "intellect" as the one provided by LUIS.ai.
Using Form Flow that's part of Bot Framework. If my GUI bot builder application can generate JSON files, these files can be used by Bot Framework to build a bot automatically. Doesn't work because there's no intellect as in LUIS.ai.
Keep using Bot Framework, but ditch LUIS and build a separate web service based on a node.js language processing library for determining intents. May or may not work, may be less smart than LUIS, and could be an overkill.
Overriding the method in LuisDialog that selects the intent from the LuisResponse, in order to use my own way of deciding the intent (but how?).
At this point I'm out of ideas and any pointers will be greatly appreciated.
First of all, LUIS.ai provides an API that you can use to automate the training. Moreover, here is a LUIS Trainer, written entirely in Python against that API, that does just that.
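As a sketch of what such automation looks like (the endpoint path and header below are my recollection of the v2.0 authoring API and should be checked against the official docs; the region, app ID, version, and key are placeholders):

```typescript
// Builds the request for adding one labeled example utterance via the
// LUIS authoring API. Path and header names are assumptions based on
// the v2.0 authoring API; verify against the official reference.
function buildAddExampleRequest(
  region: string,
  appId: string,
  version: string,
  key: string,
  utterance: { text: string; intentName: string },
) {
  return {
    url: `https://${region}.api.cognitive.microsoft.com/luis/api/v2.0/apps/${appId}/versions/${version}/example`,
    options: {
      method: "POST",
      headers: {
        "Ocp-Apim-Subscription-Key": key, // authoring key, not endpoint key
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        text: utterance.text,
        intentName: utterance.intentName,
        entityLabels: [],
      }),
    },
  };
}
```

The returned object can be handed to any HTTP client, e.g. `fetch(req.url, req.options)`; a similar POST to a train endpoint kicks off training afterwards.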
The easiest option is probably the one you describe in #1: you can automate the training (as explained above), but you will still have to deploy a new version of the bot if new intents are added. Letting users train an existing model with new utterances is one thing; letting them create the model is a completely different one :)
It might be hard to avoid writing the backend code (I wouldn't automate that at all).
Here is a potential idea (not sure if it will work, though). You would need two LUIS models.
One with your current model, that users will be able to train with new utterances.
The second model, is one exclusively intended to be "expanded" with new intents by users.
If you separate things that way, you might be able to look into a "plugin" architecture for the second LUIS model: your app somehow dynamically loads an assembly where the second model's handlers live.
Once you have that in place, you can focus on writing the backend code for the second LUIS model without having to worry about the bot/first model. You should be able to replace the assembly for the second model, detect in the bot that a new version of that assembly exists, and swap out the current one in the app domain.
As I said, it's just an idea; I'm brainstorming with you. It sounds a bit complex and doesn't address all your concerns, as you will still need to write code (which, in any case, you would eventually have to do anyway).
I am working through a challenge project (training) to automate the creation of Chat Bots specifically targeted against a Luis.ai model using plain old javascript and web services to Luis.
I looked at the Bot Framework and it's just too cumbersome to automate (I want X number of customers to create a Chat Bot without coding). I also want to add my own type of 'Cards' (html widgets) that do more and can be easily configured by someone with zero coding skills.
Calls to the LUIS.ai / Cognitive Services API are made in my code-behind, and the JSON response is returned to my own rules engine. On the following URL, click the LUIS API link to open the LUIS API Console, where you can test and train your model. All the endpoints you will need are here:
https://dev.projectoxford.ai/docs/services/
Based on the various endpoints on that page, you can use WebClient in ASP.NET to pull back the response. In my testing I have buttons on a page to push utterances up to the model, pull back entities, create hierarchical entities, and so on. Have a look at http://onlinebotbuilder.com to see how an intent of 'product' dynamically inserted a shopping cart.
When your tool is built and utterances start to arrive, LUIS.ai will store them and, via the Suggest tab (at luis.ai), ask you for guidance. Unfortunately, I don't think you could hand that control over to your customers unless they are experts in your domain (i.e. they understand which utterance belongs to which intent). You don't need to take your app down; just train the model periodically based on your customers' input, and soon enough it will perform well on your intents.
Hope that helps.

When does it make sense to use Actors in a web application

I have been browsing Typesafe Activator templates. Often in the Typesafe templates (Reactive Stocks / Maps / *) I see an actors folder within the app folder. Obviously this is supposed to hold actors, but how would one add actors to a Play application? I know that Play is an MVC framework, which means:
Models act as templates for structured data, and are used for interaction with the database and passed to views
Views are web pages, and typically can have data injected into them.
Controllers contain business logic, and may bridge models and views
If this is so, what are Actors for? What do they add that Models, Views, and Controllers do not provide? What would be some practical uses of Actors while developing reactive web applications?
EDIT
While reviewing Play documentation, I have found that Actors may be useful for scheduling. Are there any other uses for Actors?
The Play Framework uses controllers to handle requests. The controller fetches data from the models and transforms/filters/modifies it so that the views can handle it. The controller also contains the business logic.
The way the Play Framework is designed, controllers should either handle a request really quickly or return an asynchronous Promise. This is where actors come in handy: if you have requests that take a long time to compute, you can use an actor to handle them asynchronously. Have a look at the documentation.
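The shape of that pattern, stripped of Play and Akka specifics, can be sketched like this (a TypeScript mock-up only; in Play this would be an Akka actor plus a Promise-returning controller): the handler returns a promise immediately, and a worker with a mailbox fulfils it later, so the request thread is never blocked.

```typescript
// Minimal mailbox-style worker: messages are queued and processed
// outside the caller's stack, while the caller gets a promise back.
type Msg = { input: number; resolve: (n: number) => void };

class Worker {
  private mailbox: Msg[] = [];
  send(msg: Msg): void {
    this.mailbox.push(msg);
    setTimeout(() => {               // process asynchronously, one message at a time
      const m = this.mailbox.shift();
      if (m) m.resolve(m.input * 2); // stands in for the slow computation
    }, 0);
  }
}

const worker = new Worker();

// The "controller": returns immediately with a promise of the result.
function handleRequest(input: number): Promise<number> {
  return new Promise((resolve) => worker.send({ input, resolve }));
}
```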

Building A Social Network

So, I'm starting out building a social network web app. I'm looking into how to fit the parts of my stack together and I'm looking for some guidance about what various frameworks will allow me to do. My current stack idea is to have:
Firebase JSON API: serving user, post, comment, and all the other data
EmberFire: to plug that API into EmberJS
EmberJS: my front-end MVC (because I'm new to MVC and Ember seems the most accessible)
What I'm stumbling on at the moment is how to implement users with this stack. I've looked at basic authentication, but I haven't found anything that would let me allow certain actions and views for certain users and not others: the basics of a social network, really.
Is it sensible to be doing this stuff in front-end MVC? If so, what should I be using for authentication/personalisation? If not, should I just go with a PHP/SQL setup? I'd rather avoid that because my skills are all front-end.
If you are just getting started, Firebase is a great service to learn on due to its 'back end as a service' model: you will spend more time building/modeling your data and less time running/installing. Not that you won't want to learn more about that later, but it lets you focus on one piece at a time.
From an access perspective, JS/NoSQL vs PHP/MySQL isn't going to be the issue. Each has its own security requirements; it's more that PHP/MySQL has had more time to establish those rules. Additionally, Firebase, being a hosted service, has its own set of requirements.
Firebase security rules are a little weird when you first look at them, but they begin to make some sense after you sit with them for a bit. The Firebase docs are actually a pretty great resource. https://www.firebase.com/docs/security/
Basically, if you use a Firebase 'login provider', Firebase acts as both a database and an authentication manager, and the combination helps keep users 'fenced' to where you want them. You can use data from other paths, variables, validation rules, etc. You can even make a 'custom login provider' if you need to integrate with an existing system.
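As a concrete illustration (a minimal sketch in the Realtime Database rules format of that era; the `users/$uid` layout is an assumed data structure, not something from the question), rules like these fence each user into their own node:

```json
{
  "rules": {
    "users": {
      "$uid": {
        ".read": "auth != null && auth.uid == $uid",
        ".write": "auth != null && auth.uid == $uid"
      }
    }
  }
}
```

A social network would add public paths (e.g. posts readable by any authenticated user) alongside owner-only paths like this one.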
Finally, on the client end, your view can respond to whatever Firebase returns: even if a user does 'hack' their way to a page they shouldn't reach on the client side, no data is returned and no submitted information is accepted, because of the rules.
