Correct way to give path to lookup table in rasa - rasa-nlu

I want to add a lookup table for rasa nlu training. I have created a cities.txt file in my data folder and added it to my nlu.md file as
## lookup:location
data/cities.txt
When I try to train the model using rasa train nlu, I get an error:
ValueError: Could not load lookup table data/cities.txt. Please make sure you've provided the correct path.
What is the correct way to give the path?
Edit
I am using a Windows system:
cmd prompt => C:\Users\1000277196\Desktop\ChatBot> rasa train nlu

Are you sure that the path is correct relative to where you are running the rasa train command? If you are in the directory my-project, the file should be at my-project/data/cities.txt
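For example, a minimal sketch of the expected layout, assuming rasa train nlu is run from the project root (directory and file names here are illustrative):
my-project/
    config.yml
    domain.yml
    data/
        nlu.md
        cities.txt
and in data/nlu.md:
## lookup:location
data/cities.txt
with cities.txt containing one entry per line. If the training command is run from any other working directory, the relative path data/cities.txt will not resolve and Rasa raises exactly this ValueError.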

Related

Does LightGBM cli support "feature_name" config?

I'm using libsvm as the input data format, and want to know how I could pass column name to LightGBM cli.
I found that the LightGBM Python API supports a parameter feature_name; does the CLI version support the same field? I cannot find it in the docs.
https://lightgbm.readthedocs.io/en/latest/pythonapi/lightgbm.train.html#lightgbm.train
This question was cross-posted to the LightGBM issues board, https://github.com/microsoft/LightGBM/issues/3731 and answered there.
Yes, feature names in the CLI are supported by using the header parameter, documented at https://lightgbm.readthedocs.io/en/latest/Parameters.html#header. Add feature names as the first line in your data file and use that parameter to tell the LightGBM CLI to look in the header for feature names.
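As a rough sketch of what that looks like (assuming a CSV data file rather than libsvm format, since the header line has to carry the column names; file names and parameter values below are illustrative):
train.csv:
feature_a,feature_b,feature_c,label
0.5,12,3.7,1
0.1,8,2.9,0
train.conf:
task = train
objective = binary
data = train.csv
header = true
label_column = name:label
Then run the CLI with: lightgbm config=train.conf
With header = true, LightGBM reads the first line of train.csv as the feature names, and label_column = name:label picks the label column by name instead of by index.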

Which entity extractor should I use? Rasa

In my training data, I want to extract the following values from user input: phone number, location, reference_number. Which entity extractor do I need to use?
I think you should go with the Duckling entity extractor. It can extract numbers, locations, etc. However, it runs as a separate service, so you have to start it separately and connect it through the config.yml file.
Refer to the Rasa components documentation and forum for more information.
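For reference, a rough config.yml sketch for that setup, assuming Rasa 1.x with a Duckling server already running locally on port 8000 (the pipeline components and dimensions shown are illustrative, not the only valid choice):
language: en
pipeline:
  - name: WhitespaceTokenizer
  - name: RegexFeaturizer
  - name: CRFEntityExtractor
  - name: CountVectorsFeaturizer
  - name: EmbeddingIntentClassifier
  - name: DucklingHTTPExtractor
    url: http://localhost:8000
    dimensions: ["phone-number", "number"]
Duckling itself is started as its own service, e.g. via the rasa/duckling Docker image, which is what "run it separately" refers to.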

How to set application name and installation path while installing by AdminApp?

I have tried this for a day or two and am able to do the installation using the AdminApp command, but I still have some problems with the application name and installation path.
The IBM reference I looked at says AppName is defined by the display name; I think that means it uses the display-name field from web.xml? But the application name I actually get looks like "Test_AP.war16dfd74ab1a", not exactly the same.
The installation path parameter is also ambiguous; I don't see anything that looks like it.
https://www.ibm.com/support/knowledgecenter/SSEQTP_8.5.5/com.ibm.websphere.base.iseries.doc/ae/rxml_taskoptions.html?view=embed#rxml_taskoptions__cmd10
import time
AdminApp.install('C:/Users/Development/Desktop/Test_AP.war', '-cell WIN-9DAB2SINode01Cell')
AdminConfig.save()
result = AdminApp.isAppReady('Test_AP')
while (result == "false"):
    # Wait 5 seconds before checking again
    time.sleep(5)
    result = AdminApp.isAppReady('Test_AP')
print("Starting application...")
What I am trying to achieve is to use this script to install my service onto WAS with an exact application name (the same as my WAR name or the display name in web.xml) and a designated installation path, then start the service. But now I'm stuck at com.ibm.ws.scripting.ScriptingException: WASX7280E: An application with name "Test_AP_WAR" does not exist. Can anyone tell me how to do this?
Simple Solution
You need to include an -appname argument such as:
AdminApp.install('C:/Users/Development/Desktop/Test_AP.war', ['-cell', 'WIN-9DAB2SINode01Cell', '-appname', 'SimpleTestConnection5_war'])
Otherwise WAS will generate an application name for you.
Ideal Solution
Generate the AdminApp.install command using the Admin Console: install an enterprise application through the WAS Admin Console, and on the last page you will see a help box in the upper right corner. Click on "View administrative scripting" and you will be provided with an AdminApp.install command, which can then be edited with a custom location and appname.
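Putting it together, a sketch of the full install-wait-start flow with an explicit application name (the target server name server1 and the appname Test_AP are assumptions; adjust for your cell and topology):
import time

# Install with an explicit -appname so WAS does not generate one
AdminApp.install('C:/Users/Development/Desktop/Test_AP.war',
                 ['-cell', 'WIN-9DAB2SINode01Cell', '-appname', 'Test_AP'])
AdminConfig.save()

# Wait until the application has finished distributing to the node
while AdminApp.isAppReady('Test_AP') == 'false':
    time.sleep(5)

# Start the application through the ApplicationManager MBean
# (process name 'server1' is an assumption)
appManager = AdminControl.queryNames('type=ApplicationManager,process=server1,*')
AdminControl.invoke(appManager, 'startApplication', 'Test_AP')
print("Application started")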

CoreNLP: Load "out-of-box" and custom NER model on Windows OS

I am looking to load a custom-built NER model as well as one of the "out-of-box" Stanford CoreNLP NER models on a Windows 10 computer. I would like to apply both models to my text.
I have accomplished this for a CentOS system and authored this question "Load Custom NER Model Stanford CoreNLP".
I understand that I can use -serverproperties with a properties file to load a custom NER model. When you do this, only that model is loaded, so you have to specify which "out-of-box" NER models you would like to load in addition to your custom model. I have done this on my CentOS system but cannot accomplish it on my Windows computer.
The difficulty comes in specifying the file path to the "out-of-box" NER models. I use this type of path for my custom model, C:\path\to\custom_model.ser.gz, but I do not have a file path to the "out-of-box" NER models, as their paths are for a Linux OS.
How do I properly direct CoreNLP to the "out-of-box" NER models in my server.prop file?
The ner.model property can take a comma-separated list with multiple model paths. I honestly am not familiar with Windows, so I am not really sure what would happen if you supplied a DOS-style path in your list for ner.model.
But assuming that doesn't work, you could always make a jar, place your custom model in that jar under a Unix-style path, and then put that jar on your CLASSPATH when running your application.
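A rough sketch of that workaround (the package path com/mycompany/models is hypothetical; any forward-slash path inside the jar works): place the model under that directory locally, build the jar, and then refer to the model with the same classpath-style path:
jar cf custom-models.jar com/mycompany/models/custom_model.ser.gz
ner.model = com/mycompany/models/custom_model.ser.gz,edu/stanford/nlp/models/ner/english.all.3class.distsim.crf.ser.gz
custom-models.jar then needs to be on the classpath (the -cp argument) when the application or server is started.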
I was able to solve my own problem. This is what I used in the server.prop file:
ner.model = C:\\path\\to\\custom_model.ser.gz,edu/stanford/nlp/models/ner/english.all.3class.distsim.crf.ser.gz
The issue I was having was that I was putting a space after the comma separating the models. I would get the "Unable to load as url, path, or file" error because it was adding the space to the file path. ~face to palm~
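For completeness, with that server.prop the server can then be started on Windows along these lines (the CoreNLP installation path, memory setting, and port are illustrative; having the CoreNLP models jar on the classpath is what makes the edu/stanford/nlp/models/... path resolvable):
java -mx4g -cp "C:\stanford-corenlp\*" edu.stanford.nlp.pipeline.StanfordCoreNLPServer -port 9000 -serverProperties server.prop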

Using PDI transformation in Pentaho BI server as data source for report parameters

Any advice on how to use a PDI transformation as the data source for report parameters in the BI server's console?
I've uploaded the prpt reports to the BI server, but I get the message "Error parsing parameter information". The .prpt and .ktr files are both in the same directory.
Actually, I just realized that the issue could be solved by adding the transformation (KTR) as a resource. Use the File-Resources menu selection; in the dialog, select the transformation you wish to import and pick the text/xml format. Give the resource a name and save it. You must then save your PRPT file again (File-Save).
The caveat here is that the transformation should be in the same folder as the PRPT file. Then, in the data sources, don't select the transformation via its folder path, but use the name of the resource assigned in the previous step (there is no drop-down menu for looking through the files). You have to know the exact name of the resource in order to do so.
Check the logs carefully. I suspect it's not finding the KTR. When you select the KTR in the prpt it usually (annoyingly) saves the whole path, so it's probably the full path to the ktr as defined on your dev box.
This does work, so do persevere!
