How to show Vagrant box version used in a particular directory

I have multiple Vagrant boxes and would like to see which version of which box is running in which directory.
vagrant box list returns a global list of boxes:
puphpet/centos65-x64 (virtualbox, 1.2.1)
puphpet/centos65-x64 (virtualbox, 2.0)
vagrant global-status shows directories with providers:
id      name    provider   state    directory
---------------------------------------------------------
a427238 default virtualbox poweroff /path/to/dir1
fa21751 default virtualbox running  /path/to/dir2
But how can I see which Vagrant box version is used in which directory?

This data can be retrieved but is not exposed, as far as I know, through the Vagrant CLI. Take a look at ~/.vagrant.d/data/machine-index/index on Linux or macOS; I would assume it'd be something like C:\Users\whoever\.vagrant.d\data\machine-index\index on Windows.
You'll get some unformatted JSON containing details on every machine Vagrant knows about. If you run the JSON through a pretty-printer/beautifier, you'll get one of these for every machine:
"d62342a255436211725abe8fd3c313ea": {
"local_data_path": "/Users/whoever/mymachine/.vagrant",
"name": "default",
"provider": "virtualbox",
"state": "poweroff",
"vagrantfile_name": null,
"vagrantfile_path": "/Users/whoever/mymachine",
"updated_at": null,
"extra_data": {
"box": {
"name": "ubuntu/xenial64",
"provider": "virtualbox",
"version": "20170706.0.0"
}
}
},
And the box information associated with your machine is right there under extra_data: the ubuntu/xenial64 box on the virtualbox provider, version 20170706.0.0.
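If you just want to inspect the file by hand, any JSON pretty-printer will do; for example, with jq installed:
jq . ~/.vagrant.d/data/machine-index/index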

This is kind of an old thread, but I ran into this situation recently, matching the original request, and I discovered an answer that is not listed here:
The vagrant box outdated command lists the current box version number while it checks whether a newer version of the box exists.
The caveat is that vagrant box outdated needs internet access, since it queries the box catalog for the latest version (which it also outputs).
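For example, run it from the directory that holds the Vagrantfile:
cd /path/to/dir1
vagrant box outdated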
I only discovered this after I had written the bash script below, which uses jq to search for the current directory in the ~/.vagrant.d/data/machine-index/index file. I make no guarantees that this will work in your environment:
$ cat ~/scripts/vagrant_box_info.sh
#!/bin/bash
# Look up the current directory in Vagrant's machine index and print its box info.
CUR_DIR=$(pwd)
JQ_CMD=".machines | to_entries | map(select(.value.vagrantfile_path | test(\"${CUR_DIR}$\")))[].value.extra_data"
jq "$JQ_CMD" ~/.vagrant.d/data/machine-index/index
$ ~/scripts/vagrant_box_info.sh
{
  "box": {
    "name": "geerlingguy/centos7",
    "provider": "virtualbox",
    "version": "1.2.15"
  }
}
$

Building on Kevin's answer, if you are using jq you can get most of what you need by parsing the JSON with:
cat ~/.vagrant.d/data/machine-index/index | jq ".machines |to_entries[] | .value | .vagrantfile_path,.extra_data"
which gets me:
"/Users/myuser/kds2/chef/vagrant/test_bridged_192"
{
"box": {
"name": "opscode-ubuntu-14.04",
"provider": "virtualbox",
"version": "0"
}
}
"/Users/myuser/kds2/chef/vagrant/testzero"
{
"box": {
"name": "opscode-ubuntu-14.04",
"provider": "virtualbox",
"version": "0"
}
}
"/Users/myuser/kds2/wk/issues/fb230.bare_monit/vag"
{
"box": {
"name": "opscode-ubuntu-14.04",
"provider": "virtualbox",
"version": "0"
}
}
Warning: if you have removed the VM manually via something like rm -rf .vagrant, the index file may not reflect that.
However, if you run vagrant box remove opscode-ubuntu-14.04, vagrant will realize that the box is not actually in use, allow the removal, and update the index file accordingly.
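Building on the answers above, a compact one-liner that prints every directory with its box name and version (a sketch against the same index file; entries without extra_data will print null):
jq -r '.machines | to_entries[] | .value | "\(.vagrantfile_path): \(.extra_data.box.name) \(.extra_data.box.version)"' ~/.vagrant.d/data/machine-index/index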

Related

How to hide or make relative the paths that appear in the files inside the conda-meta folder?

When I build a conda environment like this:
conda create --prefix env python=3.6.5
some absolute paths appear in some JSON files in the conda-meta folder. How can I avoid this? I would like to use relative paths here, or hide them completely. Is there a way to achieve this? Are they mandatory? See the extracted_package_dir, source, and package_tarball_full_path attributes:
{
  "arch": "x86_64",
  "build": "py36_0",
  "build_number": 0,
  "channel": "https://repo.anaconda.com/pkgs/main/win-64",
  "constrains": [],
  "depends": [
    "python >=3.6,<3.7.0a0"
  ],
  "extracted_package_dir": "C:\\Users\\UserName\\AppData\\Local\\conda\\conda\\pkgs\\certifi-2019.3.9-py36_0",
  "features": "",
  "files": [
    "Lib/site-packages/certifi-2019.03.09-py3.6.egg-info",
    "Lib/site-packages/certifi/__init__.py",
    "Lib/site-packages/certifi/__main__.py",
    "Lib/site-packages/certifi/__pycache__/__init__.cpython-36.pyc",
    "Lib/site-packages/certifi/__pycache__/__main__.cpython-36.pyc",
    "Lib/site-packages/certifi/__pycache__/core.cpython-36.pyc",
    "Lib/site-packages/certifi/cacert.pem",
    "Lib/site-packages/certifi/core.py"
  ],
  "fn": "certifi-2019.3.9-py36_0.tar.bz2",
  "license": "ISC",
  "link": {
    "source": "C:\\Users\\UserName\\AppData\\Local\\conda\\conda\\pkgs\\certifi-2019.3.9-py36_0",
    "type": 1
  },
  "md5": "e1faa30cf88c0cd141dfe71e70a9597a",
  "name": "certifi",
  "package_tarball_full_path": "C:\\Users\\UserName\\AppData\\Local\\conda\\conda\\pkgs\\certifi-2019.3.9-py36_0.tar.bz2",
  "paths_data": {
    "paths": [
      [...]
If I remove the whole folder, the environment becomes useless and I cannot activate it anymore in order to install, update, or remove packages.
I want to do this to encapsulate the environment in one application, and I do not want my original absolute paths to end up on the final user's computer.
My Use Case
I am developing an Electron app that uses a Tornado server (which uses Python).
Currently I am using electron-builder to add the environment to the installer, and it works pretty well, but one drawback is the conda-meta folder I mentioned above. What I do now is remove it manually whenever I want to build an installer.
That will probably break conda. It's not written to treat those as relative paths. If you told us more about your use case, maybe we could help. Are you trying to redistribute an installed environment? Have you seen the "constructor" or "conda-pack" projects?
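For reference, conda-pack archives an environment so it can be relocated to another machine. A minimal sketch, assuming conda-pack is installed:
conda pack -p env -o env.tar.gz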
Finally, the best solution I found was to ignore the folder when creating the final installer with electron-builder.
So I applied the extraResources directive to add the conda environment except for the conda-meta folder, adding the filter "!conda-meta${/*}", whose meaning is explained here:
Remember that !doNotCopyMe/**/* would match the files in the doNotCopyMe directory, but not the directory itself, so the empty directory would be created. Solution — use macro ${/*}, e.g. !doNotCopyMe${/*}.
The result in the package.json file:
"extraResources": [
{
"from": "../env",
"to": "env",
"filter": [
"**/*",
"!*.pyc",
"!conda-meta${/*}"
]
}
],

Integrated terminal: update environment variable

This is my first day using VS Code with Beego.
I previously used IntelliJ, which has a setting to specify custom paths for GOPATH.
VS Code does not seem to have an option for multiple GOPATHs, so I thought I could try appending to the GOPATH variable for all integrated terminal sessions.
I've added the following to settings.json:
"terminal.integrated.env.osx": {
"GOPATH": "/Users/hk/go:/Users/hk/Documents/code/go/go-beego"
}
However, it has no effect on tasks run from tasks.json:
{
  // See https://go.microsoft.com/fwlink/?LinkId=733558
  // for the documentation about the tasks.json format
  "version": "2.0.0",
  "tasks": [
    {
      "label": "go: run beego",
      "type": "shell",
      "command": "echo \"gopath is $GOPATH\" | bee run portal"
    }
  ]
}
Output of the task:
gopath is /Users/hk/go
FATAL ▶ 0001 No application 'portal' found in your GOPATH.
The terminal process terminated with exit code: 255
EDIT: The integrated terminal does not honour the following:
"go.gopath": "/Users/hk/go:/Users/hk/Documents/code/go/go-beego",

Rasa NLU failing to classify intent

I'm running rasa-nlu in a Docker container, trying to train it on my data and then making requests to the HTTP server, which always respond as follows:
"intent": { "confidence": 1.0, "name": "None" }
My config file is as follows:
{
  "name": null,
  "pipeline": "mitie",
  "language": "en",
  "num_threads": 4,
  "max_training_processes": 1,
  "path": "./models",
  "response_log": "logs",
  "config": "config.json",
  "log_level": "INFO",
  "port": 5000,
  "data": "./data/test/demo-rasa.json",
  "emulate": null,
  "log_file": null,
  "mitie_file": "./data/total_word_feature_extractor.dat",
  "spacy_model_name": null,
  "server_model_dirs": null,
  "token": null,
  "cors_origins": [],
  "aws_endpoint_url": null,
  "max_number_of_ngrams": 7,
  "duckling_dimensions": null,
  "entity_crf_BILOU_flag": true,
  "entity_crf_features": [
    ["low", "title", "upper", "pos", "pos2"],
    ["bias", "low", "word3", "word2", "upper", "title", "digit", "pos", "pos2", "pattern"],
    ["low", "title", "upper", "pos", "pos2"]
  ]
}
What's the reason for that behaviour?
The models folder contains the trained model inside another nested folder; is that OK?
Thanks.
I already saw your GitHub issue; thanks for providing a bit more information here. You're still leaving a lot of details about the Docker container ambiguous.
A few others and I got a pull request merged into the rasa repo, available here on Docker Hub. There are several different builds now available, and the basic usage instructions can be found below or in the main repo README.
General Docker Usage Instructions
For the time being though, follow the below steps:
docker run -p 5000:5000 rasa/rasa_nlu:latest-mitie
The demo data should already be loaded and can be parsed against using the command below:
curl 'http://localhost:5000/parse?q=hello'
Trying to troubleshoot your specific problem
As for your specific install and why it is failing, my guess is that your trained data either isn't there or has a name that rasa doesn't expect. Run this command to see what models are available:
curl 'http://localhost:5000/status'
Your response should be something like:
{
  "trainings_queued": 0,
  "training_workers": 1,
  "available_models": [
    "test_model"
  ]
}
If you have a model listed under available_models you can load/parse it with the below command replacing test_model with your model name.
curl 'http://localhost:5000/parse?q=hello&model=test_model'
Actually, I found that using MITIE always fails, so the model wasn't getting updated. Thanks for the info though.
Using MITIE-sklearn fixed the issue.
Thank you.
There are some issues with the MITIE pipeline on Windows :( Training on MITIE takes a lot of time, whereas spaCy trains the model very quickly (2-3 minutes, depending on your processor and RAM).
Here's how I resolved it:
[Note: I am using Python 3.6.3 x64 Anaconda on Windows 8.1]
Install the following packages in this order:
Spacy Machine Learning Package: pip install -U spacy
Spacy English Language Model: python -m spacy download en
Scikit Package: pip install -U scikit-learn
Numpy package for mathematical calculations: pip install -U numpy
Scipy Package: pip install -U scipy
Sklearn Package for Intent Recognition: pip install -U sklearn-crfsuite
NER Duckling for better Entity Recognition with Spacy: pip install -U duckling
RASA NLU: pip install -U rasa_nlu==0.10.4
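Equivalently, the installs above collapse into two commands:
pip install -U spacy scikit-learn numpy scipy sklearn-crfsuite duckling rasa_nlu==0.10.4
python -m spacy download en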
Note that RASA NLU v0.10.4 uses the Twisted asynchronous server, which is not WSGI compatible. (More information on this here.)
Now make the config file as follows:
{
  "project": "Travel",
  "pipeline": "spacy_sklearn",
  "language": "en",
  "num_threads": 1,
  "max_training_processes": 1,
  "path": "C:\\Users\\Kunal\\Desktop\\RASA\\models",
  "response_log": "C:\\Users\\Kunal\\Desktop\\RASA\\log",
  "config": "C:\\Users\\Kunal\\Desktop\\RASA\\config_spacy.json",
  "log_level": "INFO",
  "port": 5000,
  "data": "C:\\Users\\Kunal\\Desktop\\RASA\\data\\FlightBotFinal.json",
  "emulate": "luis",
  "spacy_model_name": "en",
  "token": null,
  "cors_origins": ["*"],
  "aws_endpoint_url": null
}
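For reference, with a config file like this, the 0.x releases were trained and served via the rasa_nlu modules (as I recall from the 0.10 docs):
python -m rasa_nlu.train -c config_spacy.json
python -m rasa_nlu.server -c config_spacy.json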
Now, with the server running, query it using the following URL template:
http://localhost:5000/parse?q=&project=
You will get a JSON response shaped much like the LUISResult class of BotFramework C#.

Doesn't work on DotCloud

I'm trying to launch an app with StrongOps on DotCloud, but information about the process/app does not appear in the dashboard. Locally it works fine.
The API key and app name are passed directly in the code. I also tried setting env vars (SL_APP_NAME and SL_KEY), but to no effect.
The app name is a random string and shouldn't represent any real variable, right?
The logs show only this:
strong-agent profiling
Cluster controls unavailable.
My code:
require('strong-agent').profile(KEY, APP_NAME);
My package.json:
{
  "name": "slovohvat",
  "version": "0.0.2",
  "strongAgentKey": "607dbd9b5cd4c6dd20ae05d128b63652",
  "scripts": {
    "start": "node app.js",
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "dependencies": {
    "express": "3.4.0",
    "nunjucks": "0.1.9",
    "socket.io": "0.9.16",
    "bigint-node": "1.0.1",
    "connect": "2.9.0",
    "request": "2.27.0",
    "node-logentries": "0.0.2",
    "redis": "0.8.6",
    "socket.io-clusterhub": "0.2.0",
    "connect-redis": "1.4.x",
    "async": "0.2.9",
    "nodetime": ">=0.8.0",
    "emailjs": "0.3.6",
    "strong-agent": "0.2.18",
    "raygun": "~0.3.0"
  },
  "repository": "",
  "author": "",
  "license": "BSD"
}
And my dotcloud.yaml:
www:
  type: nodejs
  approot: app
  process: node app.js 0
  config:
    node_version: v0.8.x
    smtp_server: smtp.XXX.org
    smtp_port: 587
    smtp_username: XX#XX.XX
    smtp_password: XXX
data:
  type: redis
strongloop.json exists in the same directory as dotcloud.yaml and looks correct.
Please give me any advice on what I should try.
Thank you. And sorry for my English :)
You should create a strongloop.json by using the slc strongops command; it will write the config file after you log in. It sounds like you might already have done that.
Note that if you have a strongloop.json, you should NOT provide any args to .profile(). The API arguments are a mechanism for fine-grained control, and for environments where you cannot deploy a config file.
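So with a strongloop.json in place, the call reduces to a bare profile():
require('strong-agent').profile(); // key and app name are read from strongloop.json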
Also, you should remove strongAgentKey from your package.json (it lets anyone on Stack Overflow publish data to your account), and drop the env variables. It sounds like you are configuring strong-agent using all four mechanisms at the same time! Sorry about the confusion.
After clearing the redundant config, you should be able to run your app (node .). Log in to your StrongOps console, and you should see the app after a few minutes as data starts coming in.
If that doesn't work, we will need more details. It might be easier to work through this over irc or email, check out our support page: http://strongloop.com/developers/forums/

NPM package 'bin' script for Windows

Cucumber.js supplies a command-line "binary", which is a simple .js file containing a shebang instruction:
#!/usr/bin/env node
var Cucumber = require('../lib/cucumber');
// ...
The binary is specified in package.json with the "bin" configuration key:
{ "name" : "cucumber"
, "description" : "The official JavaScript implementation of Cucumber."
// ...
, "bin": { "cucumber.js": "./bin/cucumber.js" }
// ...
This all works well on POSIX systems. Someone reported an issue when running Cucumber.js on Windows.
Basically, the .js file seems to be executed through the JScript interpreter of Windows (not Node.js) and it throws a syntax error because of the shebang instruction.
My question is: what is the recommended way of setting up a "binary" script that works on both UNIX and Windows systems?
Thanks.
Windows ignores the shebang line #!/usr/bin/env node and will execute the file according to the .js file association. Be explicit about calling your script with node:
node hello.js
ps. Pedantry: shebangs aren't in the POSIX standard, but they are supported by most *nix systems.
If you package your project for npm, use the "bin" field in package.json. Then on Windows, npm will install a .cmd wrapper alongside your script so users can execute it from the command line:
hello
For npm to create the shim right, the script must have the shebang line #!/usr/bin/env node
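A minimal sketch of that layout, using hypothetical names. In package.json:
{
  "name": "hello",
  "version": "1.0.0",
  "bin": { "hello": "./bin/hello.js" }
}
And bin/hello.js, with the shebang on the very first line:
#!/usr/bin/env node
console.log('hello');
After installing, hello resolves to the script on POSIX shells and to a generated hello.cmd wrapper on Windows.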
your "bin" should be "cucumber" npm will create a "cucumber" or "cucumber.cmd" file pointing to "node %SCRIPTNAME%". the former being for posix environments, the latter being for windows use... If you want the "js" to be part of the executable name... you should use a hyphon instead... "cucumber-js" ... Having a .js file will come before the .js.cmd in your case causing the WScript interpreter to run it as a JScript file, not a node script.
I would suggest looking at coffee-script's package.json for a good example.
{
  "name": "coffee-script",
  "description": "Unfancy JavaScript",
  "keywords": ["javascript", "language", "coffeescript", "compiler"],
  "author": "Jeremy Ashkenas",
  "version": "1.4.0",
  "licenses": [{
    "type": "MIT",
    "url": "https://raw.github.com/jashkenas/coffee-script/master/LICENSE"
  }],
  "engines": {
    "node": ">=0.4.0"
  },
  "directories": {
    "lib": "./lib/coffee-script"
  },
  "main": "./lib/coffee-script/coffee-script",
  "bin": {
    "coffee": "./bin/coffee",
    "cake": "./bin/cake"
  },
  "scripts": {
    "test": "node ./bin/cake test"
  },
  "homepage": "http://coffeescript.org",
  "bugs": "https://github.com/jashkenas/coffee-script/issues",
  "repository": {
    "type": "git",
    "url": "git://github.com/jashkenas/coffee-script.git"
  },
  "devDependencies": {
    "uglify-js": ">=1.0.0",
    "jison": ">=0.2.0"
  }
}
I managed to figure out a solution to a similar issue.
My original plan was to have only one large .js file for both the API and the CLI (because, at the time, I didn't know how to share variables between two files). When everything was built, I tried to add the #!/usr/bin/env node shebang to my file, but that didn't stop Windows Script Host from giving an error.
What I ended up doing was devising a "variable bridge" that allowed variables to be read and set using getVar and setVar. This meant extracting the CLI code from the API code and adding some imports for the variable bridge.
In the CLI file, I added the shebang, and modified the package.json of my project to have:
{
  ...
  "main": "./bin/api.js",
  "bin": {
    "validator": "./bin/cli.js"
  }
  ...
}
Here are a few small notes that I think might help if Windows Script Host is still giving an error (I applied all of them, so I'm not sure which one helped):
Using only LF line endings seemed to help.
It seems that ./bin is the preferred directory for compiled files. I did try ./dist but it didn't work for me.
An empty line after the shebang may be needed; in cli.js, keeping the shebang on the very first line:
#!/usr/bin/env node

// code...
Using the same name for main and bin in package.json seemed to be an issue for me.
