Pocketsphinx install does not contain acoustic model definition mdef - pocketsphinx

I have tried to install pocketsphinx 5 prealpha on Windows, but it seems to get stuck on the error below.
INFO: feat.c(715): Initializing feature stream to type: '1s_c_d_dd', ceplen=13, CMN='current', VARNORM='no', AGC='none'
INFO: cmn.c(143): mean[0]= 12.00, mean[1..12]= 0.0
ERROR: "acmod.c", line 83: Folder 'model/en-us/en-us' does not contain acoustic model definition 'mdef'
My sphinxbase and pocketsphinx folders are in the same parent folder, and I have renamed them as the instructions describe.
This is how I compile it:
I have checked all the directories, and the model folder does contain the mdef file (without an extension).
What should I do?
Thank you.

You need to specify a proper path to the model folder. You are currently in the bin\Release\x64 folder. In your case the path to the model folder must be ..\..\..\model\en-us\en-us. If you are not sure what the relative path is, specify an absolute path.

This is because when you run the example code, the MODELDIR and DATADIR variables are set to the defaults, but you need to set them according to your file locations. Changing the following might sort out the issue:
MODELDIR = "/usr/local/share/pocketsphinx/model/"
DATADIR = "/my/Desktop/directory/pocketsphinx-master/test/data/"
This should work! However, I'm not sure. Do you have a better solution?
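For reference, here is a minimal sketch of how MODELDIR and DATADIR are typically wired into the pocketsphinx Python example; the paths are placeholders you would adapt to your own install location, and the file names assume the standard en-us model shipped with 5prealpha:
import os
from pocketsphinx.pocketsphinx import Decoder

# Placeholder paths -- point these at your own checkout/install
MODELDIR = "pocketsphinx/model"
DATADIR = "pocketsphinx/test/data"

config = Decoder.default_config()
config.set_string('-hmm', os.path.join(MODELDIR, 'en-us/en-us'))                # acoustic model dir (contains 'mdef')
config.set_string('-lm', os.path.join(MODELDIR, 'en-us/en-us.lm.bin'))          # language model
config.set_string('-dict', os.path.join(MODELDIR, 'en-us/cmudict-en-us.dict'))  # pronunciation dictionary
decoder = Decoder(config)

# Decode one of the bundled raw audio test files
with open(os.path.join(DATADIR, 'goforward.raw'), 'rb') as fh:
    decoder.start_utt()
    decoder.process_raw(fh.read(), False, True)
    decoder.end_utt()
print(decoder.hyp().hypstr if decoder.hyp() else "no hypothesis")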

Related

Load a pre-trained model from disk with Huggingface Transformers

From the documentation for from_pretrained, I understand I don't have to download the pretrained vectors every time; I can save them and load from disk with this syntax:
- a path to a `directory` containing vocabulary files required by the tokenizer, for instance saved using the :func:`~transformers.PreTrainedTokenizer.save_pretrained` method, e.g.: ``./my_model_directory/``.
- (not applicable to all derived classes, deprecated) a path or url to a single saved vocabulary file if and only if the tokenizer only requires a single vocabulary file (e.g. Bert, XLNet), e.g.: ``./my_model_directory/vocab.txt``.
So, I went to the model hub:
https://huggingface.co/models
I found the model I wanted:
https://huggingface.co/bert-base-cased
I downloaded it from the link they provided to this repository:
Pretrained model on English language using a masked language modeling
(MLM) objective. It was introduced in this paper and first released in
this repository. This model is case-sensitive: it makes a difference
between english and English.
Stored it in:
/my/local/models/cased_L-12_H-768_A-12/
Which contains:
./
../
bert_config.json
bert_model.ckpt.data-00000-of-00001
bert_model.ckpt.index
bert_model.ckpt.meta
vocab.txt
So, now I have the following:
PATH = '/my/local/models/cased_L-12_H-768_A-12/'
tokenizer = BertTokenizer.from_pretrained(PATH, local_files_only=True)
And I get this error:
> raise EnvironmentError(msg)
E OSError: Can't load config for '/my/local/models/cased_L-12_H-768_A-12/'. Make sure that:
E
E - '/my/local/models/cased_L-12_H-768_A-12/' is a correct model identifier listed on 'https://huggingface.co/models'
E
E - or '/my/local/models/cased_L-12_H-768_A-12/' is the correct path to a directory containing a config.json file
Similarly for when I link to the config.json directly:
PATH = '/my/local/models/cased_L-12_H-768_A-12/bert_config.json'
tokenizer = BertTokenizer.from_pretrained(PATH, local_files_only=True)
if state_dict is None and not from_tf:
    try:
        state_dict = torch.load(resolved_archive_file, map_location="cpu")
    except Exception:
        raise OSError(
>           "Unable to load weights from pytorch checkpoint file. "
            "If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True. "
        )
E OSError: Unable to load weights from pytorch checkpoint file. If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True.
What should I do differently to get huggingface to use my local pretrained model?
Update to address the comments
YOURPATH = '/somewhere/on/disk/'
name = 'transfo-xl-wt103'
tokenizer = TransfoXLTokenizerFast(name)
model = TransfoXLModel.from_pretrained(name)
tokenizer.save_pretrained(YOURPATH)
model.save_pretrained(YOURPATH)
>>> Please note you will not be able to load the save vocabulary in Rust-based TransfoXLTokenizerFast as they don't share the same structure.
('/somewhere/on/disk/vocab.bin', '/somewhere/on/disk/special_tokens_map.json', '/somewhere/on/disk/added_tokens.json')
So all is saved, but then....
YOURPATH = '/somewhere/on/disk/'
TransfoXLTokenizerFast.from_pretrained('transfo-xl-wt103', cache_dir=YOURPATH, local_files_only=True)
"Cannot find the requested files in the cached path and outgoing traffic has been"
ValueError: Cannot find the requested files in the cached path and outgoing traffic has been disabled. To enable model look-ups and downloads online, set 'local_files_only' to False.
Where is the file located relative to your model folder? I believe it has to be a relative PATH rather than an absolute one. So if the file where you are writing the code is located in 'my/local/', then your code should be like so:
PATH = 'models/cased_L-12_H-768_A-12/'
tokenizer = BertTokenizer.from_pretrained(PATH, local_files_only=True)
You just need to specify the folder where all the files are, not the files directly. I think this is definitely a problem with the PATH. Try changing the style of "slashes": "/" vs "\"; these are different in different operating systems. Also try using ".", like so: ./models/cased_L-12_H-768_A-12/ etc.
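One quick way to sanity-check the path before calling from_pretrained (a hedged sketch; the path is the one from the question):
import os

PATH = '/my/local/models/cased_L-12_H-768_A-12/'
print(os.path.abspath(PATH))  # what the path resolves to from the current working directory
print(os.path.isdir(PATH))    # should print True
print(os.listdir(PATH))       # should list the config, vocab, and weight files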
I had this same need and just got this working with Tensorflow on my Linux box so figured I'd share.
My requirements.txt file for my code environment:
tensorflow==2.2.0
Keras==2.4.3
scikit-learn==0.23.1
scipy==1.4.1
numpy==1.18.1
opencv-python==4.5.1.48
seaborn==0.11.1
tensorflow-hub==0.12.0
nltk==3.6.2
tqdm==4.60.0
transformers==4.6.0
ipywidgets==7.6.3
I'm using Python 3.6.
I went to this site here which shows the directory tree for the specific huggingface model I wanted. I happened to want the uncased model, but these steps should be similar for your cased version. Also note that my link is to a very specific commit of this model, just for the sake of reproducibility - there will very likely be a more up-to-date version by the time someone reads this.
I manually downloaded (or had to copy/paste into notepad++ because the download button took me to a raw version of the txt / json in some cases... odd...) the following files:
config.json
tf_model.h5
tokenizer_config.json
tokenizer.json
vocab.txt
NOTE: Once again, all I'm using is Tensorflow, so I didn't download the Pytorch weights. If you're using Pytorch, you'll likely want to download those weights instead of the tf_model.h5 file.
I then put those files in this directory on my Linux box:
/opt/word_embeddings/bert-base-uncased/
Probably a good idea to make sure there's at least read permissions on all of these files as well with a quick ls -la (my permissions on each file are -rw-r--r--). I also have execute permissions on the parent directory (the one listed above) so people can cd to this dir.
From there, I'm able to load the model like so:
tokenizer:
# python
from transformers import BertTokenizer
# tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
tokenizer = BertTokenizer.from_pretrained("/opt/word_embeddings/bert-base-uncased/")
layer/model weights:
# python
from transformers import TFAutoModel
# bert = TFAutoModel.from_pretrained("bert-base-uncased")
bert = TFAutoModel.from_pretrained("/opt/word_embeddings/bert-base-uncased/")
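To sanity-check that both pieces load together, here is a small hedged end-to-end snippet (same local path as above, assuming TensorFlow is installed):
from transformers import BertTokenizer, TFAutoModel

tokenizer = BertTokenizer.from_pretrained("/opt/word_embeddings/bert-base-uncased/")
bert = TFAutoModel.from_pretrained("/opt/word_embeddings/bert-base-uncased/")

# Tokenize a sentence and run it through the model
inputs = tokenizer("This is a quick smoke test.", return_tensors="tf")
outputs = bert(inputs)
print(outputs.last_hidden_state.shape)  # (1, sequence_length, 768) for bert-base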
This should be quite easy on Windows 10 using a relative path. Assuming your pre-trained (PyTorch-based) transformer model is in a 'model' folder in your current working directory, the following code can load your model.
from transformers import AutoModel
model = AutoModel.from_pretrained('.\model', local_files_only=True)
Please note the 'dot' in '.\model'. Omitting it will make the code fail.
In addition to the config file and vocab file, you need to add the TF/Torch model (which has a .h5/.bin extension) to your directory.
In your case, the Torch and TF models may be located at these URLs:
torch model: https://cdn.huggingface.co/bert-base-cased-pytorch_model.bin
tf model: https://cdn.huggingface.co/bert-base-cased-tf_model.h5
You can also find all required files in the "Files and versions" section of your model: https://huggingface.co/bert-base-cased/tree/main
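If you prefer to fetch all of those files programmatically rather than one by one, the huggingface_hub package (a separate install) provides snapshot_download; a hedged sketch, assuming that package is available:
from huggingface_hub import snapshot_download

# Downloads every file listed under "Files and versions" into the local cache
local_path = snapshot_download(repo_id="bert-base-cased")
print(local_path)  # this directory can then be passed to from_pretrained(...)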
The BERT model folder contains these files:
config.json
tf_model.h5
tokenizer_config.json
tokenizer.json
vocab.txt
Instead of these, what if we have the following files:
bert_config.json
bert_model.ckpt.data-00000-of-00001
bert_model.ckpt.index
bert_model.ckpt.meta
vocab.txt
How do we load the model then?
Here is a short answer.
from transformers import BertTokenizer, BertForMaskedLM
tokenizer = BertTokenizer.from_pretrained('path/to/vocab.txt', local_files_only=True)
model = BertForMaskedLM.from_pretrained('/path/to/pytorch_model.bin', config='../config.json', local_files_only=True)
Usually config.json need not be supplied explicitly if it resides in the same directory.
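If all you have is the original Google-style TensorFlow checkpoint from the question (bert_config.json plus bert_model.ckpt.*), another option described in the transformers documentation is to load it with from_tf=True and an explicit config, then re-save it in the Hugging Face layout; a hedged sketch (paths are examples, and TensorFlow must be installed to read the checkpoint):
from transformers import BertConfig, BertForPreTraining

CKPT_DIR = '/my/local/models/cased_L-12_H-768_A-12/'  # example path from the question
config = BertConfig.from_json_file(CKPT_DIR + 'bert_config.json')
model = BertForPreTraining.from_pretrained(
    CKPT_DIR + 'bert_model.ckpt.index',  # point at the .index file of the TF checkpoint
    from_tf=True,
    config=config,
)
model.save_pretrained('/my/local/models/bert-base-cased-hf/')  # afterwards loadable without from_tf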
You can use the simpletransformers library. Check out the link for a more detailed explanation.
model = ClassificationModel(
    "bert", "dir/your_path"
)
Here I used ClassificationModel as an example. You can use it for many other tasks as well, like question answering, etc.
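For completeness, a small hedged usage sketch (the path is a placeholder; use_cuda=False is only there so it also runs on a CPU-only machine):
from simpletransformers.classification import ClassificationModel

# Load a fine-tuned classification model from a local directory
model = ClassificationModel("bert", "dir/your_path", use_cuda=False)

# predict() takes a list of texts and returns (predicted labels, raw model outputs)
predictions, raw_outputs = model.predict(["An example sentence to classify"])
print(predictions)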

Error when installing Gurobi package Julia

I downloaded Gurobi and verified that my license is working.
I'm trying to add the Gurobi package to Julia, but it seems that the dll file can't be found, even though my GUROBI_HOME variable is okay.
Here is the output of Pkg.build("Gurobi"):
Found GUROBI_HOME = C:\gurobi902\win64
Does this point to the correct install location?
on Windows, this might be C:\Program Files\gurobi810\win64\
alternatively, on Windows, this might be C:/Program Files/gurobi810/win64/
on OSX, this might be /Library/gurobi810/mac64/
on Unix, this might be /home/my_user/gurobi810/linux64/
Note: this has to be a full path, not a path relative to your current
directory or your home directory.
We're going to look for the Gurobi library in this directory:
C:\gurobi902\win64\bin
That directory has the following files:
C:\gurobi902\win64\bin\grbcluster.exe
C:\gurobi902\win64\bin\grbgetkey.exe
C:\gurobi902\win64\bin\grbprobe.exe
C:\gurobi902\win64\bin\grbtune.exe
C:\gurobi902\win64\bin\grb_ts.exe
C:\gurobi902\win64\bin\gurobi.bat
C:\gurobi902\win64\bin\gurobi.env
C:\gurobi902\win64\bin\gurobi90.dll
C:\gurobi902\win64\bin\Gurobi90.NET.dll
C:\gurobi902\win64\bin\Gurobi90.NET.XML
C:\gurobi902\win64\bin\gurobi90_light.dll
C:\gurobi902\win64\bin\GurobiJni90.dll
C:\gurobi902\win64\bin\gurobi_cl.exe
C:\gurobi902\win64\bin\pysetup.bat
C:\gurobi902\win64\bin\vslauncher.exe
C:\gurobi902\win64\bin\vswhere.exe
We were looking for (but could not find) a file named like
libgurobiXXX.so, libgurobiXXX.dylib, or gurobiXXX.dll. You
should update your GUROBI_HOME environment variable to point to the
correct location.
Have you tried to look for the specific DLL on your hard disk and update GUROBI_HOME accordingly, as per the error message? Did you double-check that this specific DLL exists in that folder?

Front end and back end not compatible - get more linker information

While building a project in Visual Studio 2012, I get the error message
LINK : fatal error C1905: Front end and back end not compatible (must target same processor).
Checking the project manually does not help; all involved (static) libraries have been built for the same processor. I also added
/VERBOSE:lib and /VERBOSE
to the command line to get some more information, but this does not help; the only additional output line I got from this was a stupid
Starting pass 1
So: any ideas how I can find out what causes this strange error message? How can I get more output from the linker?
Thanks!
This is an old question and I'm not sure whether anyone still needs an answer. I had this problem with Visual Studio 2017.
Check the paths for generated .obj files, especially when you use some .cpp files in more than one project (within the solution) and/or use the %(RelativeDir) variable in Properties -> C/C++ -> Output Files -> Object File Name. It happened to me with the path '$(IntDir)\%(RelativeDir)' in Object File Name and '$(ProjectDir)Junk\$(Platform)\' in Intermediate Directory. The error was gone when I moved the $(Platform) part to Object File Name.
Old paths:
Intermediate Directory: $(ProjectDir)Junk\$(Platform)\.
Object File Name: $(IntDir)\%(RelativeDir).
New paths:
Intermediate Directory: $(ProjectDir)Junk\.
Object File Name: $(IntDir)$(Platform)\%(RelativeDir).
You can also specify the Object File Name option individually for each file shared between multiple projects, to keep using the old path (or if the new path configuration isn't working for you) and get rid of that error.

appledoc Exception: at least one directory

After wasting some time trying to figure out what was going wrong, I finally have to ask for help. I want to use appledoc from Gentle Bytes. I followed every step of the quick install guide, but I'm not able to compile the project.
Here is what I've done:
1. cloned it from git://github.com/tomaz/appledoc.git
2. installed the templates to ~/Library/Application Support/appledoc
3. tried to compile the project
Every time I try to compile, I get the following error:
ERROR: AppledocException: At least one directory or file name path is required, use 'appledoc --help'
What do I have to do now?
Sounds like you've compiled it just fine and are now running the program. If it's a command-line program, try Command-Option-R in Xcode to provide some arguments (i.e., the names of the files that you want to process).
The error means you didn't give it source paths: after all switches, you must give it at least one path to your source files. It can be either a file or a directory; in the latter case it will recursively scan the directory. Here's an example:
appledoc <options> ~/MyProject
The above example will use the ~/MyProject directory as a source. You can also add multiple source paths. Note that you need to give the tool a few options; see this page for the minimum command line and other usage examples.
You either have to copy the appledoc executable to one of the directories in your PATH, as suggested by Caleb, or use the full path to it when invoking it (for example: /path/to/appledoc).

Why am I getting this error in my Primer3/eprimer3 Mac OSX build?

I'm getting this error on my Mac OS X build.
Primer3/eprimer3 issue:
Error: thermodynamic approach chosen, but path to thermodynamic parameters not specified
From:
http://www.mcardle.wisc.edu/mprime/help/primer3/primer3_manual.htm#globalTags
PRIMER_THERMODYNAMIC_PARAMETERS_PATH (string; default ./primer3_config)
This tag specifies the path to the directory that contains all the parameter files used by the thermodynamic approach. In Linux, there are two default locations that are tested if this tag is not defined: ./primer3_config/ and /opt/primer3_config/. For Windows, there is only one default location: .\primer3_config\.
I put primer3_config in my PATH (in bin) and still cannot solve this issue. I even did:
export PRIMER_THERMODYNAMIC_PARAMETERS_PATH=/Users/jared/Downloads/primer3-2.3.2/src
and
export PRIMER_THERMODYNAMIC_PARAMETERS_PATH=/Users/jared/Downloads/primer3-2.3.2/src/primer3_config
to no avail.
According to the primer3 manual:
1.5. IMPORTANT: because PRIMER_THERMODYNAMIC_ALIGNMENT=1
PRIMER_THERMODYNAMIC_PARAMETERS_PATH must point to the right location.
This tag specifies the path to the directory that contains all the
parameter files used by the thermodynamic approach. In Linux, there
are two default locations that are tested if this tag is not
defined: ./primer3_config/ and /opt/primer3_config/. For Windows,
there is only one default location: .\primer3_config. If the
parameter files are not in one of these locations, be sure to set
PRIMER_THERMODYNAMIC_PARAMETERS_PATH.
So if you download and compile primer3 from source using the make command, then to get primer3 to run globally you need to copy the executable, primer3_core, to your path and place the configuration directory, primer3_config, in that same directory or at /opt/primer3_config:
cd src
sudo cp primer3_core /usr/local/bin # or /usr/bin
sudo cp -r primer3_config /opt/
I had the same issue. I had installed Primer3 using homebrew-science, which was pretty painless: https://github.com/Homebrew/homebrew-science
I did try copying the primer3_config directory into the homebrew primer3 directory, i.e.:
/usr/local/Cellar/primer3/2.3.4/bin/primer3_config, but this also did not work.
In the end I added the PRIMER_THERMODYNAMIC_PARAMETERS_PATH setting to the primer3 input file, and this worked. Note that the directory name must have a trailing slash. It is the last entry in the file below, which is copied from the example file in the primer3 sources.
SEQUENCE_ID=example
SEQUENCE_TEMPLATE=GTAGTCAGTAGACNATGACNACTGACGATGCAGACNACACACACACACACAGCACACAGGTATTAGTGGGCCATTCGATCCCGACCCAAATCGATAGCTACGATGACG
SEQUENCE_TARGET=37,21
PRIMER_TASK=pick_detection_primers
PRIMER_PICK_LEFT_PRIMER=1
PRIMER_PICK_INTERNAL_OLIGO=1
PRIMER_PICK_RIGHT_PRIMER=1
PRIMER_OPT_SIZE=18
PRIMER_MIN_SIZE=15
PRIMER_MAX_SIZE=21
PRIMER_MAX_NS_ACCEPTED=1
PRIMER_PRODUCT_SIZE_RANGE=75-100
P3_FILE_FLAG=1
SEQUENCE_INTERNAL_EXCLUDED_REGION=37,21
PRIMER_EXPLAIN_FLAG=1
PRIMER_THERMODYNAMIC_PARAMETERS_PATH=/usr/local/Cellar/primer3/2.3.4/bin/primer3_config/
=
Then run it like this:
$ primer3_core < example2
