Version
@microsoft/botframework-cli/4.15.0 win32-x64 node-v16.15.0
Describe the bug
This bug occurs when trying to create the Orchestrator from the .lu files already defined in LUIS, using the pretrained base model downloaded for the Orchestrator.
To Reproduce
Steps to reproduce the behavior:
1. bf orchestrator:add -t luis
2. bf orchestrator:basemodel:get --out model
3. bf orchestrator:create --in . --model model --out generated
Failed to parse LUIS or JSON file on intent/entity labels
Failed to parse config.json, error={}
The error seems to involve only the pretrained model, as all the files before it are processed successfully according to the command-line output. There are token recognition errors, but they do not stop the process from proceeding at step 3.
The content of config.json:
{
  "VocabFile": "vocab.txt",
  "ModelFile": "model.onnx",
  "Name": "pretrained.20200924.microsoft.dte.00.06.en.onnx",
  "Framework": "onnx",
  "Publisher": "Microsoft",
  "ModelType": "dte_bert",
  "Layers": 6,
  "EmbedderVersion": 1,
  "MinRequiredCoreVersion": "1.0.0"
}
How can I solve this so I can proceed to create the Orchestrator?
Related
I'm trying to quantize a seq2seq model (M2M100) using the optimum library provided by Hugging Face. As per this guide, I'm trying to quantize the encoder and decoder one by one, but that requires me to override the model file name. Following the documentation in the guide, I used this code:
encoder_quantizer = ORTQuantizer.from_pretrained(model_dir, file_name="encoder_model.onnx")
This code is throwing the following error:
TypeError: from_pretrained() got an unexpected keyword argument 'file_name'
I tried examining ORTQuantizer.from_pretrained and got the following:
<function optimum.onnxruntime.quantization.ORTQuantizer.from_pretrained(model_name_or_path: Union[str, os.PathLike], feature: str, opset: Optional[int] = None) -> 'ORTQuantizer'>
Clearly, from_pretrained here doesn't have a file_name parameter as has been indicated in the guide. Can someone please help me debug this error? Thanks!
Posting the solution I found while exploring the optimum GitHub repo. The problem is that installing optimum via pip downloads v1.3, which did not have the fix for quantizing seq2seq models. Instead, install the package directly from GitHub using the command below. It worked fine afterwards.
python -m pip install git+https://github.com/huggingface/optimum.git
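With the GitHub build installed, the per-file call from the guide should be accepted. A minimal sketch, assuming the file_name keyword described in the guide and a model_dir directory holding the exported ONNX files (both taken from the question):

from optimum.onnxruntime import ORTQuantizer

# Assumption: model_dir contains the exported seq2seq ONNX files,
# e.g. encoder_model.onnx and decoder_model.onnx.
model_dir = "m2m100_onnx"

# With the GitHub build, from_pretrained accepts file_name, so the
# encoder and decoder can be quantized one by one as the guide describes.
encoder_quantizer = ORTQuantizer.from_pretrained(model_dir, file_name="encoder_model.onnx")
decoder_quantizer = ORTQuantizer.from_pretrained(model_dir, file_name="decoder_model.onnx")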
I am running an EC2 instance on AWS which I need to import into Pulumi by running the following command: pulumi import -f resources.json.
I have the following JSON file as my resource file, resources.json:
{
  "resources": [{
    "type": "aws:ec2/instance:Instance",
    "name": "my-server",
    "id": "i-08f92a261a2ef6650",
    "ami": "ami-0fdb3f3ff5d7c40db"
  }]
}
But I get the following error:
aws:ec2:Instance (my-server):
error: aws:ec2/instance:Instance resource 'my-server' has a problem: Missing required argument: "instance_type": one of `instance_type,launch_template` must be specified. Examine values at 'Instance.InstanceType'.
error: aws:ec2/instance:Instance resource 'my-server' has a problem: Missing required argument: "launch_template": one of `ami,instance_type,launch_template` must be specified. Examine values at 'Instance.LaunchTemplate'.
error: aws:ec2/instance:Instance resource 'my-server' has a problem: Missing required argument: "ami": one of `ami,launch_template` must be specified. Examine values at 'Instance.Ami'.
error: Preview failed: one or more inputs failed to validate
This is a bug; please see this GitHub issue:
https://github.com/pulumi/pulumi-aws/issues/1605
and this one: https://github.com/pulumi/pulumi/issues/7160#issuecomment-912093311
The best workaround at this stage is to define the EC2 instance in your code, and then set the import resource property as defined here; a sketch of that approach follows below.
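For example, with the Pulumi Python SDK the adoption could look like this minimal sketch; the instance type is an assumption and must match the real instance exactly, or the import will fail:

import pulumi
import pulumi_aws as aws

# A minimal sketch: declare the existing instance in code and adopt it
# with the `import` resource option instead of `pulumi import -f`.
server = aws.ec2.Instance(
    "my-server",
    ami="ami-0fdb3f3ff5d7c40db",
    instance_type="t2.micro",  # assumption: set to the instance's real type
    opts=pulumi.ResourceOptions(import_="i-08f92a261a2ef6650"),
)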
I'm trying to create a new cluster in Databricks on Azure using databricks-cli.
I'm using the following command:
databricks clusters create --json '{ "cluster_name": "template2", "spark_version": "4.1.x-scala2.11" }'
And getting back this error:
Error: {"error_code":"INVALID_PARAMETER_VALUE","message":"Missing required field: size"}
I can't find documentation on this issue and would be happy to receive some help.
I found the right answer here.
The correct format to run this command on Azure is:
databricks clusters create --json '{ "cluster_name": "my-cluster", "spark_version": "4.1.x-scala2.11", "node_type_id": "Standard_DS3_v2", "autoscale" : { "min_workers": 2, "max_workers": 50 } }'
Just to add to the answer that @MorShemesh gave, you can also use a path to a JSON file instead of specifying the JSON at the command line.
databricks clusters create --json-file /path/to/my/cluster_config.json
If you are managing lots of clusters this might be an easier approach.
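If you generate those files from scripts, a minimal Python sketch could write the config shown in the answer above (the file name is an assumption):

import json

# A minimal sketch: write the cluster spec to a file and pass it to
# `databricks clusters create --json-file` instead of inlining the JSON.
cluster_config = {
    "cluster_name": "my-cluster",
    "spark_version": "4.1.x-scala2.11",
    "node_type_id": "Standard_DS3_v2",
    "autoscale": {"min_workers": 2, "max_workers": 50},
}

with open("cluster_config.json", "w") as f:
    json.dump(cluster_config, f, indent=2)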
databricks clusters create --json "{ "cluster_name": "custpm-cluster", "spark_version": "4.1.x-scala2.09", "node_type_id": "Standard_DS3_v2", "autoscale" : { "min_workers": 2, "max_workers": 50 }}"
I was trying to set up the Hyperledger blockchain on my laptop by following the Windows setup guide. I was able to bring the Docker images up and running, but when I try to deploy the provided examples, it always throws back an error about the JSON input, as shown below.
peer chaincode deploy -p github.com/hyperledger/fabric/examples/chaincode/go/chaincode_example02 -c '{"Function":"init", "Args": ["a","100", "b", "200"]}'
response:
sug@sri-ub:~/go/$ docker exec -it aa413f4c4289 bash
root@aa413f4c4289:/opt/gopath/src/github.com/hyperledger/fabric# peer chaincode deploy -p github.com/hyperledger/fabric/examples/chaincode/go/chaincode_example02 -c '{"Function":"init", "Args": ["a","100", "b", "200"]}'
04:30:55.822 [logging] LoggingInit -> DEBU 001 Setting default logging level to DEBUG for command 'chaincode' Error: Non-empty JSON chaincode parameters must contain exactly 1 key: 'Args'
I tried in Postman from the host machine:
{"jsonrpc":"2.0","method":"deploy","params":{"type":1,"chaincodeID":{"path":"github.com/hyperledger/fabric/examples/chaincode/go/chaincode_example02"},"ctorMsg":{"function":"init","args":["a", "1000", "b", "2000"]}},"id":1}
and got the response:
{"jsonrpc":"2.0","error":{"code":-32700,"message":"Parse error","data":"Error unmarshalling chaincode request payload: illegal base64 data at input byte 0"},"
This is similar to the error message reported before, and I still couldn't resolve it. I'm creating a new post as advised; please help me fix this issue.
A similar issue was reported, but that doesn't answer it either.
In the latest Fabric version, the request format was changed. The function name should be included in Args, and all parameters should be base64 encoded.
Instead of:
{"function":"init","args":["a", "1000", "b", "2000"]}}
The arguments for the deploy command will look like:
{"args":["aW5pdA==", "YQ==", "MTAwMA==", "Yg==", "MjAwMA=="]}
Update: The format was changed again. Base64 encoding is no longer necessary. The correct payload in the latest Fabric is:
{"args":["init", "a", "100", "b", "100"]}
I am following the steps mentioned in the AWS documentation to use an interactive Hive session via SSH.
I used the following resources:
https://github.com/ucbtwitter/getting-started/wiki/Using-Elastic-Map-Reduce-via-Command-Line
http://docs.amazonwebservices.com/ElasticMapReduce/latest/GettingStartedGuide/SignUp.html
I was getting the error "Error: Missing key access-id" initially, and then I fixed my JSON file. The JSON file is in the same format as mentioned in the links above.
When I run this command
./elastic-mapreduce
I get the following error:
Error: Unable to parse credentials.json: can't convert String into Integer.
I checked the values required in JSON at AWS as well.
Does anyone have an idea why I am getting this error?
The region value in the credentials.json must be of int type.
{
  ......
  "region": 1
}
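If the file already exists with a string region, a minimal Python sketch could coerce it in place (the file name follows the error message; all other fields are left untouched):

import json

# Load credentials.json, coerce "region" to an int as described above,
# and write the file back.
with open("credentials.json") as f:
    creds = json.load(f)

creds["region"] = int(creds["region"])  # e.g. "1" -> 1

with open("credentials.json", "w") as f:
    json.dump(creds, f, indent=2)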