Rasa custom actions events are lost - rasa-core

I'm trying to implement custom actions and have added action_get_answer to domain.yml.
actions:
- utter_greet
- utter_cheer_up
- utter_did_that_help
- utter_happy
- utter_goodbye
- actions.GetAnswer
Added the action in actions.py:
from typing import Any, Dict, List, Text
from rasa_sdk import Action, Tracker
from rasa_sdk.executor import CollectingDispatcher

class GetAnswer(Action):
    def name(self) -> Text:
        return "action_get_answer"

    def run(self, dispatcher: CollectingDispatcher,
            tracker: Tracker,
            domain: Dict[Text, Any]) -> List[Dict[Text, Any]]:
        dispatcher.utter_message("action_get_answer")
        return []
Ran the action server:
$ rasa run actions
Upon running the Rasa server:
$ rasa x
I get this error and GetAnswer is not triggered:
ERROR rasa.core.processor - Encountered an exception while running
action 'action_get_answer'. Bot will continue, but the actions events
are lost. Make sure to fix the exception in your custom code.
How do I make this work?
Thanks

I am also working on Rasa X. I have created custom actions and they are called successfully. But first I want to know: does your stories.md file contain that action, i.e. a story that specifies when to call it?
Here is what I have implemented:
In the stories.md file:
## story1
* play
  - action_ask_question
In the domain.yml file:
...
actions:
- action_ask_question
...
In the action.py file:
from rasa_sdk import Action

class ActionAskQuestion(Action):
    def name(self):
        return "action_ask_question"

    def run(self, dispatcher, tracker, domain):
        dispatcher.utter_message("Action called.")
        return []
If you have any questions, leave a comment.

I also had this error while using custom actions in Rasa (not Rasa X).
I solved the problem by adding the action endpoint to the endpoints.yml file:
action_endpoint:
  url: "http://localhost:5055/webhook"
Run the action server in one terminal:
rasa run actions
or (if you have not installed rasa):
python -m rasa_sdk --actions actions
and run the Rasa shell in another terminal (with the endpoint configuration):
rasa shell --endpoints endpoints.yml
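Optionally, before starting the shell you can sanity-check that the action server is actually reachable on that port (the /health route is provided by recent rasa-sdk versions; adjust host/port if yours differ):
curl http://localhost:5055/health
A 200 response means the action server itself is up, so any remaining "events are lost" error points at an exception inside the custom action code.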

Related

How to reference another yml file from the main github action yaml file?

I'm defining a GitHub Actions script that references another YAML file, hoping to organise the configuration more cleanly.
Here is my job file, named deploy.yml, in the path ./.github/workflows/, where the first . is the root folder of my project.
....
jobs:
  UnitTest:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: ./.github/workflows/unittest.yml
In the same ./.github/workflows/ folder, I created another file called unittest.yml as below:
name: "UnitTest"
description: "Perform Unit Test"
runs:
# using: "composite"
- name: Dependency
run: |
echo "Dependency setup commands go here"
- name: UnitTest
run: make test.unit
However, when I tried to test the script locally using act with command act --secret-file .secrets --container-architecture linux/amd64, I received the following error:
[Deploy/UnitTest] ✅ Success - Main actions/checkout#v3
[Deploy/UnitTest] ⭐ Run Main ./.github/workflows/unittest.yml
[Deploy/UnitTest] ❌ Failure - Main ./.github/workflows/unittest.yml
[Deploy/UnitTest] file does not exist
[Deploy/UnitTest] 🏁 Job failed
I have tried putting just the file name unittest.yml, or ./unittest.yml, or myrepo_name/.github/workflows/unittest.yml, and putting the file into a subfolder as step 2 of this document illustrates, but no luck.
Based on the examples of runs for composite actions, I would imagine this should work.
Would anyone please advise?
P.S. You might have noticed the commented line using: "composite" in unittest.yml. If I uncomment that line, I receive the error:
Error: yaml: line 3: did not find expected key
Composite actions are not referenced by a YAML file, but by a folder. In that folder, you are expected to have an action.yml describing the action.
This is why you're getting the error with using: composite: you're defining a workflow (because it's in ./.github/workflows), but you are using action syntax.
I would advise this folder structure:
.github/
|-- workflows/
|   |-- deploy.yml
unittest-action/
|-- action.yml
With this structure, you should be able to reference the action with
- uses: actions/checkout@v3
- uses: ./unittest-action
Please see the docs for more information.
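For reference, a minimal sketch of what unittest-action/action.yml could contain, reusing the steps from the question (note that each run step in a composite action needs an explicit shell):
# unittest-action/action.yml
name: "UnitTest"
description: "Perform Unit Test"
runs:
  using: "composite"
  steps:
    - name: Dependency
      run: echo "Dependency setup commands go here"
      shell: bash
    - name: UnitTest
      run: make test.unit
      shell: bash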
Depending on your use-case and setup, you might also want to consider reusable workflows.
You can define a reusable workflow in your .github/workflows directory like so:
# unittest.yml
on: workflow_call
jobs:
  deploy:
    # ...
and then call it like so:
jobs:
  UnitTest:
    uses: ./.github/workflows/unittest.yml
Note how the reusable workflow is an entire job. This means, you can't do the checkout from the outside and then just run the unit test in the reusable job. The reusable job (unittest.yml) needs to do the checkout first.
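For example, a self-contained reusable workflow that does its own checkout could look roughly like this (a sketch built from the steps in the question):
# .github/workflows/unittest.yml
name: "UnitTest"
on: workflow_call

jobs:
  unittest:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Dependency
        run: echo "Dependency setup commands go here"
      - name: UnitTest
        run: make test.unit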
Which one to pick?
Here's a blog post summarising some of the differences between composite actions and reusable workflows, like:
- reusable workflows can contain several jobs, composite actions only contain steps
- reusable workflows have better support for using secrets
- composite actions can be nested, but as of Jul '22, reusable workflows can't call other reusable workflows

How to solve error `Missing Authentication Token`

I have published my NestJS app to AWS Lambda.
When I try to open the root URL
https://xxx/
it shows "Hello World" correctly
But when I open up :
https://xxx/sales/subscription
it shows Missing Authentication Token message
Has anyone experienced this kind of issue before?
I have fixed the issue; I'm sharing the solution here in the hope that it helps anyone having the same problem.
So, apparently Missing Authentication Token means the route does not exist.
The app was deployed to AWS Lambda using the serverless framework.
I fixed the issue by opening the serverless.yaml file and registering the route in the functions section.
Before :
functions:
  main: # The name of the lambda function
    # The module 'handler' is exported in the file 'src/lambda'
    handler: src/lambda.handler
    events:
      - http:
          method: any
          path: /
After :
functions:
  main: # The name of the lambda function
    # The module 'handler' is exported in the file 'src/lambda'
    handler: src/lambda.handler
    events:
      - http:
          method: any
          path: /
      - http:
          method: any
          path: /sales/subscription
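As an alternative to registering every route by hand, a catch-all proxy path is often used when wrapping a NestJS/Express app with the serverless framework, so that API Gateway forwards any path to the Lambda handler and the app does its own routing (a sketch, not taken from the original setup):
functions:
  main:
    handler: src/lambda.handler
    events:
      - http:
          method: any
          path: /
      - http:
          method: any
          path: /{proxy+}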

Extracting faq sub-intent from a custom action

I am using the following configuration: Rasa Version: 2.2.9, Rasa SDK Version: 2.2.0, Rasa X Version: None, Python Version: 3.7.6, Operating System: Linux-5.4.0-71-generic-x86_64-with-debian-bullseye-sid.
I would like to get a faq or chitchat sub-intent (not the intent) from a custom action. When I use this command:
tracker.latest_message['intent'].get('name')
I get the intent faq. I would like to get a sub-intent like faq/ask_weather or faq/ask_name instead, or even just ask_weather or ask_name.
Can you help me?
We do this in our demo here:
full_intent = (
    tracker.latest_message.get("response_selector", {})
    .get("faq", {})
    .get("full_retrieval_intent")
)
If you're looking for the full intent of a "chitchat" retrieval intent, you'll have to replace "faq" in the above with "chitchat", and so on, for whatever prefix.
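If you only want the sub-intent name without the faq/ prefix, you can split the full retrieval intent on the slash; a minimal sketch inside a custom action's run() (the None fallback is a defensive assumption for messages without a retrieval intent):
full_intent = (
    tracker.latest_message.get("response_selector", {})
    .get("faq", {})
    .get("full_retrieval_intent")
)
# full_intent looks like "faq/ask_weather"; keep only the part after the slash
sub_intent = full_intent.split("/")[-1] if full_intent else None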

Jenkins Pipeline emailext: How to access build object in pre-send script

I'm using Jenkins ver. 2.150.1 and have some freestyle jobs and some pipeline jobs.
In both job types I am using the emailext plugin, with template and pre-send scripts.
It seems that the build variable, which is available in the freestyle projects, is null in the pipeline projects.
The pre-send script is the following (just an example, my script is more complex):
msg.setSubject(msg.getSubject() + " [" + build.getUrl() + "]")
There is no problem with the msg variable.
In the freestyle job, this script adds the build url to the mail subject.
In the pipeline job, the following is given in the job console:
java.lang.NullPointerException: Cannot invoke method getUrl() on null object
The invocation of emailext in the pipeline job is:
emailext body: '${SCRIPT, template="groovy-html.custom.pipeline.sandbox.template"}',
    presendScript: '${SCRIPT, template="presend.sandbox.groovy"}',
    subject: '$DEFAULT_SUBJECT',
    to: 'user@domain.com'
I would rather find a general solution to this problem (i.e. access the build variable in a pipeline pre-send script), but would also appreciate any workaround for my current needs:
accessing the job name, job number, and workspace folder in a pipeline pre-send script.
I have finally found the answer.
Apparently, for pre-send scripts in pipeline jobs the build object does not exist; the run object does instead. At the time I posted this question, this was still undocumented!
I found the answer in this thread,
which got the author to update the description in the wiki:
run - the build this message belongs to (may be used with FreeStyle or Pipeline jobs)
build - the build this message belongs to (only use with FreeStyle jobs)
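Applied to the pre-send snippet from the question, the fix is simply to use run where build was used before (a sketch, assuming the same emailext templates as above):
// pre-send script for a Pipeline job: 'run' replaces 'build'
msg.setSubject(msg.getSubject() + " [" + run.getUrl() + "]")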
You can access the build in a script like this:
// findUrl.groovy
def call(script) {
    println script.currentBuild.rawBuild.url
    // or if you just need the build url
    println script.env.BUILD_URL
}
and would call the script like this from the pipeline:
stage('Get build URL') {
    steps {
        findUrl this
    }
}
The currentBuild gives you a RunWrapper object and the rawBuild a Run. Hope this helps.

Grails test-app updating function being tested and test print out problems

I am running Grails 2.3.3 in GGTS.
I am successfully running a single unit test for a service function within the Spring GGTS.
I am hoping to be able to use this unit test to develop the particular function - such an approach will really speed up my development going forward.
This means I need to make changes to the service function being tested and then retest, over and over again (no doubt a sad reflection on my coding skills!). The problem is that when I make a change to the logic, or to any log.debug output, it does not come through in the test. In other words, the test continues to run against the original service function and not the updated one.
The only way I have found to force it to use the updated function is to restart GGTS!
Is there a command I can use in GGTS to force a test on the most recent version of the function I am testing?
Here are the commands I am using within GGTS:
test-app unit: UtilsService
I do run a clean after a function update without any success:
test-app -clean
I am also struggling with getting additional output from within the test function - introducing 'println' or 'log.debug' commands results in a failure of the test.
It would be useful to know of a good link to documentation about the test syntax - I have looked at section 12 of the Grails documentation, about testing in general.
Here is the test file:
package homevu1

import grails.test.mixin.TestFor
import spock.lang.Specification

/**
 * See the API for {@link grails.test.mixin.services.ServiceUnitTestMixin} for usage instructions
 */
@TestFor(UtilsService)
class UtilsServiceSpec extends Specification {
    // to test utilSumTimes for example use the command :
    // test-app utilSumTimes
    // test-app HotelStay

    def setup() {
    }

    def cleanup() {
    }

    void "test something"() {
        when:
        def currSec = service.utilSumTimeSecs( 27, 1, false)
        //println "currSec" , currSec

        then:
        //println "currSec" , currSec
        assert currSec == "26"
    }
}
If I uncomment either of the println lines, nothing is printed and the test fails.
Welcome any suggestions.
-mike
I've managed to get this working now by running Grails from a command prompt (in MS Windows).
In the command prompt I moved to the root folder/directory of the grails project - in my case:
cd C:\grails\workspace\rel_3.1.0\HomeVu
Then I typed grails to start a Grails command-line session.
The unit test command I used being:
test-app -unit UtilsService -echoOut -echoErr
That said, I am still unable to successfully put any print commands in the test file, but I can use assert to determine any problems.
Also, output from the last log.debug line in the service function's Grails code fails to appear. Perhaps there is some output buffering issue with MS Windows here.
At least I can now do some rapid function development, by making changes to the service/function code and instantly testing it against a set of known requirement conditions.
Hope this helps others.
-mike
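One note on the commented-out println lines in the test above: Groovy's println takes a single argument, so println "currSec" , currSec is parsed as a two-argument call and throws at runtime, which by itself would make the test fail when uncommented. A string-interpolation form should print fine (a sketch, assuming the same service call):
when:
def currSec = service.utilSumTimeSecs(27, 1, false)
println "currSec: ${currSec}"   // single-argument println works here

then:
currSec == "26"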
