Rasa NLU failing to classify intent - rasa-nlu

I'm running rasa-nlu in a Docker container.
I'm trying to train it on my data and then send requests to the HTTP server, which always responds as follows:
"intent": { "confidence": 1.0, "name": "None" }
I'm running a config file as follows:
{
"name": null,
"pipeline": "mitie",
"language": "en",
"num_threads": 4,
"max_training_processes": 1,
"path": "./models",
"response_log": "logs",
"config": "config.json",
"log_level": "INFO",
"port": 5000,
"data": "./data/test/demo-rasa.json",
"emulate": null,
"log_file": null,
"mitie_file": "./data/total_word_feature_extractor.dat",
"spacy_model_name": null,
"server_model_dirs": null,
"token": null,
"cors_origins": [],
"aws_endpoint_url": null,
"max_number_of_ngrams": 7,
"duckling_dimensions": null,
"entity_crf_BILOU_flag": true,
"entity_crf_features": [
["low", "title", "upper", "pos", "pos2"],
["bias", "low", "word3", "word2", "upper", "title", "digit", "pos", "pos2", "pattern"],
["low", "title", "upper", "pos", "pos2"]]
}
What's the reason for this behaviour?
Also, the models folder contains the trained model inside another nested folder; is that OK?
Thanks.

I already saw your GitHub issue, thanks for providing a bit more information here. You're still leaving a lot of details about the Docker container ambiguous.
A few others and I got a pull request merged into the rasa repo; the resulting image is available here on Docker Hub. There are several different builds now available, and the basic usage instructions can be found below or in the main repo README.
General Docker Usage Instructions
For the time being, follow the steps below:
docker run -p 5000:5000 rasa/rasa_nlu:latest-mitie
The demo data should already be loaded and can be parsed using the command below:
curl 'http://localhost:5000/parse?q=hello'
Trying to troubleshoot your specific problem
As for your specific install and why it is failing, my guess is that your trained model either isn't there or has a name that rasa doesn't expect. Run this command to see what models are available:
curl 'http://localhost:5000/status'
Your response should look something like this:
{
"trainings_queued" : 0,
"training_workers" : 1,
"available_models" : [
"test_model"
]
}
If you have a model listed under available_models, you can load/parse it with the command below, replacing test_model with your model name:
curl 'http://localhost:5000/parse?q=hello&model=test_model'
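If you want to script this check, here is a minimal sketch that extracts the model name from a saved /status response using only sed/grep. The sample response is the one shown above; in a real setup you would fetch it with curl first:

```shell
#!/bin/sh
# Sample /status response, saved to a file; in practice you would fetch it
# with: curl 'http://localhost:5000/status' > /tmp/status.json
cat > /tmp/status.json <<'EOF'
{
  "trainings_queued" : 0,
  "training_workers" : 1,
  "available_models" : [
    "test_model"
  ]
}
EOF
# Pull the model names out of the "available_models" array (crude sketch:
# assumes one quoted name per line, as in the response above).
MODEL=$(sed -n '/available_models/,/]/p' /tmp/status.json \
  | grep -o '"[^"]*"' | tr -d '"' | grep -v available_models)
echo "curl 'http://localhost:5000/parse?q=hello&model=$MODEL'"
```

This prints the parse command with the model name filled in, so you can see at a glance whether the server knows about your model.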

Actually, I found that training with MITIE always fails, so the model wasn't getting updated. Thanks for the info though.
Using the mitie_sklearn pipeline fixed the issue.
Thank you.

There are some issues with the MITIE pipeline on Windows :( . Training with MITIE takes a lot of time, while spaCy trains the model very quickly (2-3 minutes, depending on your processor and RAM).
Here's how I resolved it:
[Note: I am using Python 3.6.3 x64 (Anaconda) on Windows 8.1]
Install the following packages in this order:
Spacy Machine Learning Package: pip install -U spacy
Spacy English Language Model: python -m spacy download en
Scikit Package: pip install -U scikit-learn
Numpy package for mathematical calculations: pip install -U numpy
Scipy Package: pip install -U scipy
sklearn-crfsuite package for CRF-based entity recognition: pip install -U sklearn-crfsuite
NER Duckling for better Entity Recognition with Spacy: pip install -U duckling
RASA NLU: pip install -U rasa_nlu==0.10.4
Note that RASA v0.10.4 uses a Twisted asynchronous server, which is not WSGI compatible. (More information on this here.)
Now make the config file as follows:
{
"project": "Travel",
"pipeline": "spacy_sklearn",
"language": "en",
"num_threads": 1,
"max_training_processes": 1,
"path": "C:\\Users\\Kunal\\Desktop\\RASA\\models",
"response_log": "C:\\Users\\Kunal\\Desktop\\RASA\\log",
"config": "C:\\Users\\Kunal\\Desktop\\RASA\\config_spacy.json",
"log_level": "INFO",
"port": 5000,
"data": "C:\\Users\\Kunal\\Desktop\\RASA\\data\\FlightBotFinal.json",
"emulate": "luis",
"spacy_model_name": "en",
"token": null,
"cors_origins": ["*"],
"aws_endpoint_url": null
}
Now run the server and query it with a URL of the following form:
http://localhost:5000/parse?q=&project=
You will get a JSON response similar to the LUISResult class of the BotFramework C# SDK.

Related

Electron-builder macOS notarization problem with puppeteer library: Not all binaries are signed

I am currently struggling to notarize my app with electron-builder for macOS. The app uses puppeteer, which causes an error because the ".local-chromium" folder does not get signed. I have already tried a lot of things but was not able to fix this problem.
Here is my configuration for the package.json file:
"build": {
"asar": true,
"asarUnpack": "node_modules/puppeteer/.local-chromium/**/*",
"publish": [
{
"provider": "generic",
"url": "http://www.someProvider.com"
}
],
"appId": "SomeApp",
"afterSign": "notarize.js",
"mac": {
"icon": "build/logo.png",
"category": "public.app-category.productivity",
"target": [
"dmg", "zip"
],
"signIgnore": "/node_modules/puppeteer/.local-chromium/",
"gatekeeperAssess": false
}
}
This is just the latest configuration I tried. (I read about the signIgnore property in a GitHub post where someone mentioned a similar problem and was able to fix it with this, but it hasn't changed anything; I tried multiple paths in case this one is a wrong expression.) I also tried setting the "hardenedRuntime" property to true.
Using puppeteer-core is not an option!
These are some of the errors I receive; they all state that the content in the .local-chromium folder isn't signed:
Does anyone know how to fix this problem?
I solved this by using puppeteer-in-electron. Just replace import puppeteer from 'puppeteer' with import puppeteer from 'puppeteer-core'. That way .local-chromium won't be included with your Electron app, because it will just use the Chromium that is built in along with Electron. You will also need to remove puppeteer from package.json.

How do I specify the tag to use for an image within a VS Code container definition JSON?

VS Code fails to start if I specify a tag when using a preexisting docker image, as in:
{
"name": "VsCode Image Tag Issue.",
"context": "..",
"image": "rirlswift/vscode-remote-nvidia-issue:latest",
"runArgs": [
"--name", "docker-opengl",
"-v", "/tmp/.X11-unix:/tmp/.X11-unix:rw",
"-e", "DISPLAY",
"--rm"
],
"postCreateCommand": "glxgears"
}
Here is the issue markdown from the issue I've submitted to the VS Code Remote extensions team: 2431
Is there possibly a syntax for image that I have missed?
Update: looks like a fix is pending from VS Code:
2431
I have verified it using the Insiders edition,
Version: 1.45.0-insider

What is the structure of BotConfiguration.bot in a bot project using Bot Framework SDK 4.0, and how do I add it to the project?

I am referring to the QnA bot sample from the GitHub link QnAbot, but when I follow the steps I am not able to figure out the BotConfiguration.bot file. I want to see a sample of this .bot file; however, I did not find it in the sample code directory.
Can someone tell me how to create a simple QnA bot using SDK4.0?
I am using C# .net core bot template.
Thanks.
There is an easier way to generate the bot configuration file without typing all these commands:
a) Install the Bot Framework Emulator.
b) Launch the emulator, navigate to "File" and select "New Bot Configuration".
c) After keying in all the information needed, save the file at your desired location.
P.S.: For a sample botConfiguration.bot file, refer to the official Microsoft documentation.
In order to auto-generate a bot file you have to use botbuilder-tools. For some reason the instructions are missing from that sample's readme; I will work on getting that updated ASAP.
You can install the tools by running this command in command line:
npm install -g chatdown msbot ludown luis-apis qnamaker botdispatch luisgen
You will need to have these installed:
Node.js version 8.5 or higher
.NET Core SDK version 2.1.403 or higher
Then you will have to run the msbot init command with the options you need; a list of options can be found here. An example command would look like this:
msbot init --name TestBot --endpoint http://localhost:9499/api/messages
Then you will need to add the QnA Maker service; there is information about adding this and other services here. An example of the command you would run to add the qnamaker service:
msbot connect qna --name "<NAME>" --kbId <KNOWLEDGE BASE ID> --subscriptionKey <SUBSCRIPTION KEY> --endpointKey <ENDPOINT-KEY> --hostname "https://myqna.azurewebsites.net"
When you are done you will have a .bot file that will look like this:
{
"name": "qnamaker2",
"services": [
{
"type": "endpoint",
"name": "qnamaker2",
"endpoint": "http://localhost:3978/api/messages",
"appId": "",
"appPassword": "",
"id": "0"
},
{
"type": "qna",
"name": "{YOUR QnA APP NAME}",
"kbId": "{YOUR KNOWLEDGEBASE ID}",
"subscriptionKey": "{YOUR SUBSCRIPTION KEY}",
"endpointKey": "{your endpoint key}",
"hostname": "{YOUR HOSTNAME}",
"id": "74"
}
],
"padlock": "",
"version": "2.0"
}
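As a quick sanity check on a generated .bot file, you can list the service types it declares. A small sketch using grep/sed on a trimmed-down sample file (the path and contents here are illustrative, not from the real tool output):

```shell
#!/bin/sh
# Trimmed-down sample .bot file; substitute the path to your real file.
cat > /tmp/sample.bot <<'EOF'
{ "services": [ {"type": "endpoint"}, {"type": "qna"} ] }
EOF
# List the declared service types (here: endpoint and qna).
TYPES=$(grep -o '"type": "[^"]*"' /tmp/sample.bot | sed 's/.*: "//; s/"$//')
echo "$TYPES"
```

If "qna" is missing from the output, the msbot connect qna step didn't write into the file you expected.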

Chronos docker parameters ignored

I am trying to test Docker port mapping by specifying parameters in the Chronos job definition. The parameters option doesn't seem to have any effect on the docker run.
Job definition as follows:
{
"schedule": "R0//P",
"name": "testjob",
"container": {
"type": "DOCKER",
"image": "path_to_image",
"network": "BRIDGE",
"parameters" : [
{"key" : "-p", "value": "8888:4400"}
]
},
"cpus": "4",
"mem": "512",
"uris": ["path to dockercfg.tar.gz"],
"command" : "./command-to-execute"
}
1) The docker run on the node doesn't take the parameters into consideration. Any suggestions on the correct way to include parameters as part of docker run would be highly appreciated.
2) The Docker image I am trying to run has an ENTRYPOINT specified, so technically the ENTRYPOINT should run when Docker starts the container. With the way Chronos is set up, I am forced to provide the "command" option in the job JSON (skipping the command option during job submission throws an error). When the container is actually scheduled on the target node, instead of using the ENTRYPOINT from the Dockerfile, it tries to run the command specified in the job definition JSON.
Can someone provide a way for using Chronos to run ENTRYPOINT instead of command from Chronos job JSON definition?
Notes:
Setting command to blank doesn't help.
The ENTRYPOINT could be specified as the command in the JSON job definition, which would work around the command problem, but I don't have access to the ENTRYPOINT for all the containers.
***Edit 1: Modified question with some more context and clarity
You should have a look at the official docs regarding how to run a Docker job.
A docker job takes the same format as a scheduled job or a dependency job and runs on a Docker container. To configure it, an additional container argument is required, which contains a type (required), an image (required), a network mode (optional), mounted volumes (optional) and whether Mesos should always (force)Pull(Image) the latest image before executing or not (optional).
Concerning 1)
There's no way IMHO to set the parameters like you're trying to. Also, port mappings are specified differently (with Marathon), but as I understand the docs it's not possible at all. And probably not necessary for a batch job.
If you want to run longrunning services, use Marathon.
Concerning 2)
Not sure if I understand you correctly, but normally this would be implemented via specifying an ENTRYPOINT in the Dockerfile
Concerning 3)
Not sure if I understand you correctly, but I think you should be able to omit the command property with Docker jobs.
To use the Docker container's ENTRYPOINT you must set "shell" to false, and command has to be blank. If command is anything other than blank, Chronos will pass it as an argument to the ENTRYPOINT. Your JSON would look like the example below.
I don't know if you should use the "uris" field; it is deprecated and, if it is what I think it is, it no longer seems to be required to start Docker apps.
About the Docker parameters, I think the problem is with the key name you used: it seems you must omit the - symbol. Try as below.
{
"schedule": "R0//P",
"name": "testjob",
"shell": false,
"container": {
"type": "DOCKER",
"image": "path_to_image",
"network": "BRIDGE",
"parameters" : [
{"key" : "p", "value": "8888:4400"}
]
},
"cpus": "4",
"mem": "512",
"command" : ""
}
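For context, each {"key", "value"} pair in "parameters" is translated into a flag on the eventual docker run command. A small sed-based sketch of that translation, using the fragment from the job above (illustrative only; the real translation happens inside Mesos' Docker containerizer, and the exact flag form may differ):

```shell
#!/bin/sh
# The "parameters" fragment from the job JSON above.
cat > /tmp/params.json <<'EOF'
"parameters" : [
  {"key" : "p", "value": "8888:4400"}
]
EOF
# Turn each {"key","value"} pair into a command-line flag; with the "-"
# omitted in the JSON, "p" comes back out as -p here.
FLAGS=$(sed -n 's/.*"key" *: *"\([^"]*\)", *"value" *: *"\([^"]*\)".*/-\1 \2/p' /tmp/params.json)
echo "docker run $FLAGS path_to_image"
```

This makes it easy to see why leaving the "-" inside the key would produce a malformed flag on the final command line.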

How to show Vagrant box version used in a particular directory

I have multiple Vagrant boxes, and would like to see what version of what box is running in which directory.
vagrant box list returns a global list of boxes:
puphpet/centos65-x64 (virtualbox, 1.2.1)
puphpet/centos65-x64 (virtualbox, 2.0)
vagrant global-status shows directories with providers:
id name provider state directory
--------------------------------------------------
a427238 default virtualbox poweroff /path/to/dir1
fa21751 default virtualbox running /path/to/dir2
But how can I see which Vagrant box version is used in which directory?
This data is possible to retrieve but, as far as I know, is not exposed through the Vagrant CLI. Take a look at ~/.vagrant.d/data/machine-index/index on Linux or macOS; on Windows I would assume it's something like C:\Users\whoever\.vagrant.d\data\machine-index.
You'll get some unformatted JSON which contains details on every machine Vagrant knows about. If you run the JSON through a pretty-printer/beautifier you'll get one of these for every machine:
"d62342a255436211725abe8fd3c313ea": {
"local_data_path": "/Users/whoever/mymachine/.vagrant",
"name": "default",
"provider": "virtualbox",
"state": "poweroff",
"vagrantfile_name": null,
"vagrantfile_path": "/Users/whoever/mymachine",
"updated_at": null,
"extra_data": {
"box": {
"name": "ubuntu/xenial64",
"provider": "virtualbox",
"version": "20170706.0.0"
}
}
},
And the box information associated with your machine is right there. The ubuntu/xenial64 box on the virtualbox provider version 20170706.0.0.
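If you don't have jq handy, you can also pull the box name and version out of an index entry with plain sed. A sketch against the sample entry above (in practice you would point it at ~/.vagrant.d/data/machine-index/index; note this crudely assumes one key per line, which the real file may not have):

```shell
#!/bin/sh
# The "extra_data" fragment from the sample machine-index entry above.
cat > /tmp/entry.json <<'EOF'
"extra_data": {
  "box": {
    "name": "ubuntu/xenial64",
    "provider": "virtualbox",
    "version": "20170706.0.0"
  }
}
EOF
# Grab the first "name" and "version" values (crude: assumes one per line).
NAME=$(sed -n 's/.*"name": "\([^"]*\)".*/\1/p' /tmp/entry.json | head -n 1)
VERSION=$(sed -n 's/.*"version": "\([^"]*\)".*/\1/p' /tmp/entry.json | head -n 1)
echo "$NAME $VERSION"
```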
This is kind of an old thread, but I ran into this situation recently that matched the original request, and I discovered an answer that is not listed here:
The vagrant box outdated command lists the current box version number when it tests to see if there is a newer version of the box.
The caveat is that the vagrant box outdated command needs access to the internet to check the current version, which it also outputs.
I only discovered this after I had written this bash script that uses jq to search for the current directory in the ~/.vagrant.d/data/machine-index/index file. I make no guarantees that this will work in your environment:
$ cat ~/scripts/vagrant_box_info.sh
#!/bin/bash
CUR_DIR=`pwd`
JQ_CMD='.machines|to_entries|map(select(.value.vagrantfile_path|test("'$CUR_DIR'$")))[].value.extra_data'
cat ~/.vagrant.d/data/machine-index/index | jq "$JQ_CMD"
$ ~/scripts/vagrant_box_info.sh
{
"box": {
"name": "geerlingguy/centos7",
"provider": "virtualbox",
"version": "1.2.15"
}
}
$
Building on Kevin's answer: if you are using jq, you can get most of what you need by parsing the JSON with:
cat ~/.vagrant.d/data/machine-index/index | jq ".machines |to_entries[] | .value | .vagrantfile_path,.extra_data"
which gets me:
"/Users/myuser/kds2/chef/vagrant/test_bridged_192"
{
"box": {
"name": "opscode-ubuntu-14.04",
"provider": "virtualbox",
"version": "0"
}
}
"/Users/myuser/kds2/chef/vagrant/testzero"
{
"box": {
"name": "opscode-ubuntu-14.04",
"provider": "virtualbox",
"version": "0"
}
}
"/Users/myuser/kds2/wk/issues/fb230.bare_monit/vag"
{
"box": {
"name": "opscode-ubuntu-14.04",
"provider": "virtualbox",
"version": "0"
}
}
Warning: if you have removed the vm via something like rm -rf .vagrant manually, the index file may not reflect that.
However, if for example you vagrant box remove opscode-ubuntu-14.04 then vagrant will realize that the box is not actually being used, will allow removal of the box and will update the index file accordingly.
