What does using or not using RASA core imply?

In the context of my bachelor thesis, my task is to create a chatbot that will act as a kind of helpdesk. This means the chatbot will have to be able to interact with some external layers of code/APIs. I hope this is enough context to answer my question.
Until now I've mostly been working on the NLU component of my chatbot, which is already working pretty well. I'm ready for the next step, which would be connecting this NLU to the next layer in the system: the part that will further process the intent and entities, do some auxiliary work (which will be interacting with the API), and formulate a response based on what the original intent/entities were and what it got back from that auxiliary work.
I've read up on RASA Core and I know what it does. It'll train a model given some example conversations and use that model to guess which actions it should perform or which response it should give based on the intent/entities it receives. To me this seems like something I would like to use; however, my professor advised against it, though he's not entirely sure. His opinion is that RASA Core doesn't give us enough freedom to make the chatbot interact with those additional software layers/APIs. This is where my questions come in:
Does using RASA Core make it more difficult to interact with other software layers/APIs?
Is RASA Core essential to creating a chatbot, or can you realistically create one without using RASA Core (or another similar framework)? Especially since RASA Core seems to offer a lot of functionality, mainly the fact that it provides you with a framework that lets the chatbot know what to do and when. It seems difficult to do this by myself.
If I decide not to use it, what is the best starting point to continue my project?
Since this is my first question on this forum, I hope I didn't make my questions too long or confusing; if so, let me know!
Hopefully someone will be able to shine some light on this situation.

Does using RASA Core make it more difficult to interact with other software layers/APIs?
No. Rasa Core also follows an "API first" approach, which means everything should be accessible through the API.
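For example, calls to external systems typically live in custom actions, which are just Python code running on a separate action server. Below is a minimal sketch using the rasa_sdk package (named rasa_core_sdk in older releases); the action name, the "ticket_id" slot, and the helpdesk URL are made up for illustration, not part of any real API.

```python
# Hypothetical custom action: looks up a ticket in an external helpdesk API
# and sends the result back to the user. Runs on the Rasa action server.
from typing import Any, Dict, List, Text

import requests
from rasa_sdk import Action, Tracker
from rasa_sdk.executor import CollectingDispatcher


class ActionCheckTicketStatus(Action):
    def name(self) -> Text:
        return "action_check_ticket_status"

    def run(
        self,
        dispatcher: CollectingDispatcher,
        tracker: Tracker,
        domain: Dict[Text, Any],
    ) -> List[Dict[Text, Any]]:
        # "ticket_id" is a hypothetical slot filled from an NLU entity
        ticket_id = tracker.get_slot("ticket_id")

        # https://helpdesk.example.com is a placeholder for your own backend/API
        response = requests.get(
            f"https://helpdesk.example.com/tickets/{ticket_id}", timeout=5
        )
        status = response.json().get("status", "unknown")

        dispatcher.utter_message(text=f"Ticket {ticket_id} is currently: {status}")
        return []
```

Core only decides when to run the action; what the action does (HTTP calls, database queries, whatever your helpdesk needs) is entirely up to you.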
Is RASA Core essential to creating a chatbot
If you are building a FAQ bot (question-answer pairs) then you might not need Core. But if you actually want to build a bot which understands some context and acts differently based on the history of the conversation, then you should use Rasa Core. Also, Rasa Core includes support for several channels (Slack, Socket.io, Telegram, ...) out of the box, which should make it even easier to hook up your bot with different endpoints.
If I decide not to use it, what is the best starting point to continue my project?
Probably the HTTP API of Rasa NLU, so that you can integrate the queries into your application. The Rasa blog also contains a lot of posts about NLU, e.g. the Rasa NLU in Depth series, which might help you understand Rasa NLU better (link to part 1 of the series).
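As a rough illustration, calling an NLU server from your own application could look something like the snippet below. It assumes a Rasa server running locally on the default port 5005 with a trained model loaded and exposing /model/parse (older standalone Rasa NLU servers used a /parse endpoint instead); the example sentence and intent name are made up.

```python
# Send a user message to the Rasa NLU HTTP API and inspect the parsed result.
import requests

resp = requests.post(
    "http://localhost:5005/model/parse",
    json={"text": "my printer is not working"},
)
result = resp.json()

print(result["intent"])    # e.g. {"name": "report_issue", "confidence": 0.93, ...}
print(result["entities"])  # list of extracted entities
```

Your own dialogue layer would then branch on the intent name and entities and call whatever backend APIs it needs.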

Related

Laravel Livewire application without POST forms

I'm building a new Laravel 8 application, and given the reactivity features available with the Livewire package, which essentially turn a backend developer into a full-stack developer (no advanced JavaScript knowledge needed), I don't use any POST actions or request handling logic in my scripts. Every CRUD operation is handled with modal windows and AJAX requests. So my question is: are there drawbacks to this approach? Are there limitations that will emerge in the future from the fact that my scripts don't directly handle HTTP requests?
Thanks for your opinions.
FYI I'm not familiar with Laravel or Livewire. I'll use the term "platform" below as a general word to encapsulate technologies and libraries, etc, such as what you describe.
Platforms tend to focus on the high-value scenarios that most people need, so as long as what you need the platform to do aligns with what it can do, you're fine (e.g. simple CRUD). But if you need to do something that pushes the boundaries of what the platform can do, you'll run into issues: it may not be possible; it may be possible but really inefficient / a pig to work on; or it may distort your architecture and decision making.
Platforms like this are good in that they hide complexity, which is great until you need to access it and look under the hood. This applies to everything from debugging to developing features using approaches that the platform / platform designers haven't allowed for.
As a new developer, learning how to do things "the long way" (e.g. hand-code AJAX calls) is great as a learning experience. By doing that you can better appreciate how platforms like the ones you mention work - because you understand the underlying principles. So, a disadvantage is that new developers won't get that experience through working on this solution - they'll have to do that as a side project (which is not "evil", but it is a consideration).

is there any other way to interact with Ethereum's smart contracts via UI besides Etherscan?

I'm aware of Etherscan's capability for interactions with smart contracts on the Ethereum network, but I wonder if there is any other way to read and write from smart contracts.
I'd expect improved UI/UX, allowing input validation, adding documentation on top of the contract, etc., yet I couldn't find any other service providing this.
You could use https://remix.ethereum.org/
There is no service that I know that can provide documentation on top of the contract.
But, it's possible to develop one. Are you interested in how it can be done?
The only one I know of is Remix. It is a great tool for smart contract testing and interaction.
And if you are planning to develop your own UI with an API, this is not the exact solution, but check out Drizzle. It has some good built-in features which will get you started on the front-end parts and on showing blockchain data.
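If you do end up building your own UI or backend, the small web3.py sketch below shows the general shape of reading from a contract programmatically; the RPC URL, contract address, ABI file, and function name are all placeholders, and writing additionally requires building and signing a transaction from a funded account.

```python
# Read from a deployed contract with web3.py (pip install web3).
import json

from web3 import Web3

# Placeholder RPC endpoint; any Ethereum node or provider URL works here.
w3 = Web3(Web3.HTTPProvider("https://mainnet.infura.io/v3/<your-project-id>"))

# Load the contract's ABI, e.g. exported from the compiler or copied from Etherscan.
with open("MyContract.abi.json") as f:
    abi = json.load(f)

contract = w3.eth.contract(
    # to_checksum_address is named toChecksumAddress in older web3.py versions
    address=Web3.to_checksum_address("0x0000000000000000000000000000000000000000"),
    abi=abi,
)

# Read-only call: no transaction, no gas. someViewFunction is a placeholder name.
value = contract.functions.someViewFunction().call()
print(value)
```

With the ABI in hand, a custom front end can also add its own input validation and documentation, which is the kind of improvement you're describing.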
Both tools presented below load the ABI automatically from the contract address.
eth95.dev
This one looks like an old Windows 95 app. Pretty cool.
https://eth95.dev/
mycrypto.com
https://app.mycrypto.com/interact-with-contracts

What are the steps of using historical chat data in RASA

There's a crucial part in the process that says the best place for the chatbot to learn is from real users. What if I already have that data and would like to test the model on it?
Think of Interactive Learning, but at scale and possibly automated. Does such a feature already exist within RASA?
Think of Interactive Learning, but at scale and possibly automated.
Are you referring to something like reinforcement learning? Unfortunately, something like that doesn't currently exist. Measuring the success of conversations is a tough problem (e.g. some users might give you positive feedback when the bot solved their problem, while others would simply leave the conversation). Something like external business metrics could do the trick (e.g. whether the user ended up buying something from you within the next 24h), but it's still hard. Another problem is that you probably want to have some degree of control over how your chatbot interacts with your users. Training the bot on user conversations without any double checking could potentially lead to problems (e.g. Microsoft once had an AI trained on Twitter data, which didn't turn out well).
Rasa is offering Rasa X for learning from real conversations. The community edition is a free, closed source product which helps you monitor and annotate real user conversations quickly.
Disclaimer: I am a software engineer working at Rasa.
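To make the "steps" concrete, here is a hedged sketch of what using historical chat data usually boils down to once the conversations have been annotated (an intent per user message, an action per bot turn): the annotated conversation becomes an ordinary Rasa story that can be added to the training data or used for evaluation. The intents, actions, and file name below are invented for illustration.

```python
# Write one annotated historical conversation out as a Rasa story (Markdown
# story format used by Rasa 1.x). Intents and actions here are placeholders.
annotated_conversation = [
    ("greet", "utter_greet"),
    ("report_issue", "action_check_ticket_status"),
    ("thank", "utter_goodbye"),
]

with open("historical_stories.md", "w") as f:
    f.write("## imported conversation 001\n")
    for intent, action in annotated_conversation:
        f.write(f"* {intent}\n")
        f.write(f"  - {action}\n")
```

The resulting stories file can be included in training, or passed to Rasa's story evaluation (e.g. rasa test core --stories historical_stories.md) to check how well the current policies predict the recorded actions; the annotation step itself is exactly what Rasa X is meant to speed up.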

Tracking api/event changes between different microservice versions before deployment

I work devops for a fairly large company that is in the process of transitioning to microservices. This is a new area for most people involved, and some of the governing requests seem like bad practice to me, but I don't have the expertise to convince them otherwise.
The request is to generate a report before deploying that would list any new api/events (Kafka is our messaging service) in a microservice.
The path that's being recommended is for devs to follow a style guide and then scrape the source code during the CI/CD pipeline to generate a report that can be compared to previous reports to identify any new APIs.
This seems backwards and unsustainable, but I've been unable to find another solution that would satisfy their requests. I've recommended deploying to dev first, then using a tracing tool to identify any API changes or event subscriptions, but they insist on having the report before deploying.
I'm hoping for any advice on best practice to accomplish this.
Tracing and detecting version changes is definitely over-engineering. What's simpler, as #zenwraight has mentioned, is to version your APIs. While tracing through services to explore the different versions and schemas could be a potential solution, it requires a lot more investment up front, and if that's not the bread and butter of the company, I would rather use a vendor product that might support something like this.
If discovery is a mechanism that is needed, I would recommend something that publishes internal API docs using a tool like Swagger so that you can search if there's an API you can consume.
And finally, to support moving to different versions, I would recommend having an API onboarding process for the services, so that teams can notify the teams using specific versions that those versions are coming to the end of their lifecycle and that they will need to migrate to newer ones.
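As a rough sketch of how such a pre-deployment report could be generated from published API specs rather than by scraping source code, the snippet below assumes each service can export an OpenAPI/Swagger JSON document in the pipeline; the file names are placeholders, and Kafka event schemas would need an analogous comparison against whatever schema registry you use.

```python
# Diff two OpenAPI specs and report endpoints that only exist in the new version.
import json


def list_endpoints(spec_path: str) -> set[tuple[str, str]]:
    """Return the (HTTP method, path) pairs declared in an OpenAPI spec."""
    with open(spec_path) as f:
        spec = json.load(f)
    endpoints = set()
    for path, operations in spec.get("paths", {}).items():
        for method in operations:
            if method.lower() in {"get", "post", "put", "patch", "delete"}:
                endpoints.add((method.upper(), path))
    return endpoints


previous = list_endpoints("previous-spec.json")  # placeholder file names
current = list_endpoints("current-spec.json")

for method, path in sorted(current - previous):
    print(f"NEW ENDPOINT: {method} {path}")
```

This keeps the report a by-product of artifacts the services already publish, instead of a convention the devs have to maintain in the source code.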

How do I move from C#/ASP to Ruby?

I have recently designed a web application that I would like to write in Ruby. Coming from an ASP background, I designed it with methods and fields and linked them together (in my diagram and UML) like I would do it in C#.
However, now that I've moved from a single app to MVC I have no idea where my code goes or how the pieces are linked.
For example, my application basically collects information from various sources for users, and when they log in the information is presented to them, with "new" information (information collected since their last login) tagged specially in the interface.
In C# I would have a main loop that waits, let's say, 5 minutes and does the collection; then, when a client tries to connect, it would spawn a new thread that generates the page with the new information. Now that I'm moving to Ruby I'm not sure how to achieve the same result.
I understand that the controller connects the model to the view, and I thus assume this is where my code goes, yet I haven't seen a tutorial that talks about doing what I've mentioned. If someone could point me to one, or tell me precisely what I need to do to turn my pseudocode into production code, I'd be extremely grateful and will probably still have hair :D
EDIT: Somehow I forgot to mention that I'll be using the Rails framework. I don't really like Ruby, but RoR as a whole is so nice that I think I can put up with it.
The part of your application that is retrieving the data at certain interval shouldn't be, strictly speaking, part of the web application. In Unix world (including Rails), it would be implemented either as a daemon process, or a cron job. On Windows, I presume that Windows service is the right tool.
Regarding the C# -> Ruby transition, if that's purely for Rails, I'd listen to George's advice and give ASP.NET MVC a shot, as it resembles Rails' logic pretty closely (some would call it a ripoff, I guess ;)). However, learning a new language, especially one as different from C# as Ruby is, is always a good idea and a way to improve yourself as a developer.
I realize you want to move to Ruby; but you may want to give ASP.NET MVC a shot. It's the MVC framework on the ASP.NET platform.
Coming from ASP, you're going to have to do a lot of conversion to change your code to become more modular. Much more than any one post on Stack Overflow will do justice.
MVC is made up of 'tiers':
Model - your data
View - what the user sees
Controller - handles requests and communicates with the View and Model
Pick up a book on ASP.NET MVC 1.0, and do some research on the MVC pattern. It's worth it.
Whatever Ruby web framework you plan to use (Rails, merb, Sinatra), it sounds like the portion that collects this data would typically be handled by a background task. Your models would be representations of this data, and the rest of your web app would be pretty standard.
There are some good Railscast episodes on performing tasks in the background:
Rake in Background
Starling and Workling
Custom Daemon
Delayed Job
There are other options for performing tasks in the background (such as using a message queue and the ActiveMessaging plugin), but these screencasts will at least give you a feel for how background jobs are generally approached in Rails.
If you perform these tasks on a regular schedule, there are tools for that as well.
I hope this is of some help.
Check out Rails for .NET Developers. I've heard good things about this book and it sounds like it's just what you're looking for.
