I am trying to test Docker port mapping by specifying parameters in the Chronos job definition. The parameters option doesn't seem to have any effect on the docker run.
Job definition as follows:
{
"schedule": "R0//P",
"name": "testjob",
"container": {
"type": "DOCKER",
"image": "path_to_image",
"network": "BRIDGE",
"parameters" : [
{"key" : "-p", "value": "8888:4400"}
]
},
"cpus": "4",
"mem": "512",
"uris": ["path to dockercfg.tar.gz"],
"command" : "./command-to-execute"
}
1) Docker run on the node doesn't take the parameters into consideration. Any suggestions on the correct way to include parameters as part of docker run would be highly appreciated.
2) The Docker image I am trying to run has an ENTRYPOINT specified in it, so technically the ENTRYPOINT should run when Docker starts the container. With the way Chronos is set up, I am forced to provide the "command" option in the job JSON (omitting the command option during job submission throws an error). When the container is actually scheduled on the target node, instead of using the ENTRYPOINT from the Dockerfile, it tries to run the command specified in the job definition JSON.
Can someone suggest a way to make Chronos run the ENTRYPOINT instead of the command from the Chronos job JSON definition?
Notes:
Setting command to blank doesn't help.
The ENTRYPOINT could be specified as the command in the JSON job definition, which would work around the command problem, but I don't have access to the ENTRYPOINT for all the containers.
Edit 1: Modified the question with some more context and clarity.
You should have a look at the official docs regarding how to run a Docker job.
A Docker job takes the same format as a scheduled job or a dependency job and runs in a Docker container. To configure it, an additional container argument is required, which contains a type (required), an image (required), a network mode (optional), mounted volumes (optional), and whether Mesos should always pull the latest image before executing (forcePullImage, optional).
Concerning 1)
There's no way, IMHO, to set the parameters like you're trying to. Also, port mappings are specified differently (that's how Marathon does it), but as I understand the docs, it's not possible with Chronos at all. And it's probably not necessary for a batch job.
If you want to run long-running services, use Marathon.
Concerning 2)
Not sure if I understand you correctly, but normally this would be implemented by specifying an ENTRYPOINT in the Dockerfile.
Concerning 3)
Not sure if I understand you correctly, but I think you should be able to omit the command property with Docker jobs.
To use the Docker container's entrypoint you must set "shell" to false, and command has to be blank. If command is anything other than blank, Chronos will pass it as an argument to the entrypoint. Your JSON would look like the one below.
I don't know if you should use the "uris" field; it is deprecated, and, if it is what I think it is, it no longer seems to be required to start Docker apps.
Regarding the Docker parameters, I think the problem is the key name you used. It seems you must omit the - symbol. Try it as below.
{
"schedule": "R0//P",
"name": "testjob",
"shell": false,
"container": {
"type": "DOCKER",
"image": "path_to_image",
"network": "BRIDGE",
"parameters" : [
{"key" : "p", "value": "8888:4400"}
]
},
"cpus": "4",
"mem": "512",
"command" : ""
}
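If the parameter is picked up, the Mesos agent will invoke docker run with -p 8888:4400. As a quick sanity check on the node (assuming you have Docker CLI access there), you can list the port mappings of the running containers:
docker ps --format '{{.Names}}\t{{.Ports}}'
A working mapping shows up as something like 0.0.0.0:8888->4400/tcp.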
We are trying to access some of the experimental features of Docker with DOCKER_BUILDKIT. We see that this works fine on Mac and Linux, just not on Windows. Any idea how to get this to work on Windows?
The ability to build Windows images is a known limitation of BuildKit. You can subscribe to and add your vote to this issue on the roadmap if you are interested in the feature:
https://github.com/microsoft/Windows-Containers/issues/34
Otherwise, for building Linux images, BuildKit should work the same on Docker for Windows as it does on other environments, with either the DOCKER_BUILDKIT=1 environment variable or the feature flag set in the daemon.json file (configurable from the Docker preferences UI):
{ "features": { "buildkit": true } }
Note that the environment variable overrides the feature flag on the engine. You can also use buildx, which is another method of accessing BuildKit. It has the same limitations as accessing BuildKit directly (mainly that you cannot build Windows images).
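For example, a minimal buildx invocation looks like this (the image tag myimage is just a placeholder):
docker buildx build -t myimage .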
It works for me using Docker Desktop for Windows.
Try adding the following to your daemon.json:
"features": { "buildkit": true }
I use Docker within PowerShell, and this worked for me:
# for docker build ...
$env:DOCKER_BUILDKIT = 1
# for docker-compose build ... (additional!)
$env:COMPOSE_DOCKER_CLI_BUILD = 1
This worked for me without any changes in the settings (as described in some other answers).
I know it is pretty late for an answer, but better late than never.
First of all, in Docker Desktop, go to Settings >> Docker Engine and make sure you have everything set as shown below:
{
"registry-mirrors": [],
"insecure-registries": [],
"debug": true,
"experimental": false,
"features": {
"buildkit": true
}
}
"features": {
"buildkit": true } is set to true by default i believe.
But mark that debug is set to true, while by default it is set to false.
So you will probably have to change it.
Second of all, the most obvious thing that is really hard to find in the documentation:
Unlike on Ubuntu, you DO NOT actually add DOCKER_BUILDKIT=1 at the start of your build instruction.
Personally, this was extremely confusing for me, because I'm so used to adding this prefix on Linux systems. But on Windows, if you enable the options as shown above, the BuildKit option will always be triggered by default.
I have a simple LinkedList implementation in Node. I want to test it using Mocha: simply exercise different append/delete operations. And most importantly, I want to be able to step through and debug my LinkedList as called from the Mocha tests.
This is my launch.json:
{
"version": "0.1.0",
"configurations": [
{
"type": "node",
"request": "launch",
"protocol": "inspector",
"name": "Mocha All",
"windows":{
"runtimeExecutable": "c:\\Users\\alern\\AppData\\Roaming\\npm\\_mocha.cmd"
},
"args": [
"--colors",
"${workspaceFolder}\\test\\test.js"
],
"console": "integratedTerminal",
"internalConsoleOptions": "neverOpen"
}
]
}
When I press Run on my "Mocha All" configuration in VSCode, the global installation of mocha.cmd starts running, and I see the following in the terminal:
cd 'd:\Projects\Algorithms\LinkedList'; & 'c:\Users\alern\AppData\Roaming\npm\_mocha.cmd' '--inspect-brk=30840' '--colors' 'D:\Projects\Algorithms\LinkedList\test\test.js'
(node:21524) ExperimentalWarning: The ESM module loader is experimental.
The above makes sense.
And then I see:
All of my tests simply rip through; although I have set up some breakpoints in the test scripts, they get ignored. So that's useless.
Once all the tests are done (some successful, some not), at the end I get a popup that says:
"Cannot connect to runtime process, timeout after 10000ms - reason: Cannot connect to target: connect ECONNREFUSED 127.0.0.1:30840". This error makes it feel like my process has finished running and gone away, and VSCode or the debugger is still trying to connect to it.
In any case, what do I tweak to have my breakpoints stop execution? Thank you.
The first thing I might try is to change the type to "pwa-node". I think Mocha or VSCode recently made some changes that make that mandatory. You might also not want to set the runtime executable but rather the "program". Generally the runtimeExecutable will be node, and if it's on your PATH, VSCode should be able to find it automatically.
If all else fails, I'd also try looking at the default configuration that ships with VSCode and see if that gives you any clues. It can be accessed by adding a new configuration through the "Add Configuration..." button. https://code.visualstudio.com/docs/nodejs/nodejs-debugging#_launch-configurations-for-common-scenarios
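Putting those suggestions together, a minimal sketch of such a launch configuration (the locally installed Mocha path and the test file location are assumptions, not taken from your setup):
{
  "version": "0.2.0",
  "configurations": [
    {
      "type": "pwa-node",
      "request": "launch",
      "name": "Mocha All",
      "program": "${workspaceFolder}/node_modules/mocha/bin/_mocha",
      "args": ["--colors", "${workspaceFolder}/test/test.js"],
      "console": "integratedTerminal",
      "internalConsoleOptions": "neverOpen"
    }
  ]
}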
I'm running rasa-nlu in a Docker container.
I'm trying to train it on my data and then perform requests against the HTTP server, which always result in the following:
"intent": { "confidence": 1.0, "name": "None" }
I'm running a config file as follows:
{
"name": null,
"pipeline": "mitie",
"language": "en",
"num_threads": 4,
"max_training_processes": 1,
"path": "./models",
"response_log": "logs",
"config": "config.json",
"log_level": "INFO",
"port": 5000,
"data": "./data/test/demo-rasa.json",
"emulate": null,
"log_file": null,
"mitie_file": "./data/total_word_feature_extractor.dat",
"spacy_model_name": null,
"server_model_dirs": null,
"token": null,
"cors_origins": [],
"aws_endpoint_url": null,
"max_number_of_ngrams": 7,
"duckling_dimensions": null,
"entity_crf_BILOU_flag": true,
"entity_crf_features": [
["low", "title", "upper", "pos", "pos2"],
["bias", "low", "word3", "word2", "upper", "title", "digit", "pos", "pos2", "p
attern"],
["low", "title", "upper", "pos", "pos2"]]
}
What's the reason for that behaviour?
The models folder contains the trained model inside another nested folder; is that OK?
Thanks.
I already saw your GitHub issue; thanks for providing a bit more information here. You're still leaving a lot of details about the Docker container ambiguous, though.
A few others and I got a pull request merged into the rasa repo, available here on Docker Hub. There are several different builds now available, and the basic usage instructions can be found below or in the main repo README.
General Docker Usage Instructions
For the time being, though, follow these steps:
docker run -p 5000:5000 rasa/rasa_nlu:latest-mitie
The demo data should already be loaded and able to be parsed against using the command below:
curl 'http://localhost:5000/parse?q=hello'
Trying to troubleshoot your specific problem
As for your specific install and why it is failing, my guess is that your trained data either isn't there or has a name that rasa doesn't expect. Run this command to see what models are available:
curl 'http://localhost:5000/status'
Your response should be something like:
{
"trainings_queued" : 0,
"training_workers" : 1,
"available_models" : [
"test_model"
]
}
If you have a model listed under available_models, you can load/parse it with the command below, replacing test_model with your model name.
curl 'http://localhost:5000/parse?q=hello&model=test_model'
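A successful parse should then return a populated intent instead of None; the response is shaped roughly like this (the values are illustrative, not real output):
{
  "text": "hello",
  "intent": { "name": "greet", "confidence": 0.95 },
  "entities": []
}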
Actually, I found that training with MITIE always fails, so the model wasn't getting updated. Thanks for the info though.
Using the mitie_sklearn pipeline fixed the issue.
Thank you.
There are some issues with the MITIE pipeline on Windows :( . Training with MITIE takes a lot of time, while spaCy trains the model very quickly (2-3 minutes, depending on your processor and RAM).
Here's how I resolved it:
[Note: I am using Python 3.6.3 x64 Anaconda and Windows 8.1 OS.]
Install the following packages in this order:
Spacy Machine Learning Package: pip install -U spacy
Spacy English Language Model: python -m spacy download en
Scikit Package: pip install -U scikit-learn
Numpy package for mathematical calculations: pip install -U numpy
Scipy Package: pip install -U scipy
Sklearn Package for Intent Recognition: pip install -U sklearn-crfsuite
NER Duckling for better Entity Recognition with Spacy: pip install -U duckling
RASA NLU: pip install -U rasa_nlu==0.10.4
Note that RASA NLU v0.10.4 uses a Twisted asynchronous server, which is not WSGI compatible.
Now make the config file as follows:
{
"project": "Travel",
"pipeline": "spacy_sklearn",
"language": "en",
"num_threads": 1,
"max_training_processes": 1,
"path": "C:\\Users\\Kunal\\Desktop\\RASA\\models",
"response_log": "C:\\Users\\Kunal\\Desktop\\RASA\\log",
"config": "C:\\Users\\Kunal\\Desktop\\RASA\\config_spacy.json",
"log_level": "INFO",
"port": 5000,
"data": "C:\\Users\\Kunal\\Desktop\\RASA\\data\\FlightBotFinal.json",
"emulate": "luis",
"spacy_model_name": "en",
"token": null,
"cors_origins": ["*"],
"aws_endpoint_url": null
}
Now run the server and query it using the following URL template:
http://localhost:5000/parse?q=&project=
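For example, with the project name from the config above (the query text is just an illustration):
curl 'http://localhost:5000/parse?q=book+a+flight&project=Travel'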
Because of the "emulate": "luis" setting, you will get a JSON response shaped like the LUISResult class of Bot Framework C#.
I'd like to use an executable WAR file in a production environment. I also have several Node.js apps in the same environment, for which I use the PM2 process manager to manage the whole lifecycle (startup on boot, restart on failure, etc.).
PM2 is capable of handling Java JAR files as well (see e.g. https://stackoverflow.com/a/41702429/1266411 for details), so it would make sense to use PM2 for this purpose too, but I don't see how a JHipster executable WAR can be configured this way (to be used standalone, without a container).
Any suggestions?
This is my working example. FYI.
{
"apps": [{
"name": "War",
"cwd": ".",
"args": [
"-jar",
"/path/to/your.war"
],
"env": {
},
"script": "/usr/bin/java",
"node_args": [],
"log_date_format": "YYYY-MM-DD HH:mm Z",
"exec_interpreter": "none",
"exec_mode": "fork"
}]
}
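Assuming the snippet above is saved as, say, pm2-war.json (the filename is just an illustration), you can start it with PM2 and persist it across reboots:
pm2 start pm2-war.json
pm2 save
pm2 startup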
I'm trying to write a manifest for JPS deployment of a Jelastic application.
Creating nodes and deploying webapps works fine, but I can't create a database and load an SQL dump into it using the manifest directives.
My configs section looks like this:
"configs": [
{
"nodeType": "postgres9",
"restart": false,
"database": [{
"name": "somedbname",
"user" : "someusername",
"dump": "http://www.somehost.de/jelastic/somedump.sql"
}]
},
...
]
...
It seems that the database section is completely ignored.
Any ideas?
Likely you have extra square brackets around the database object definition, i.e. you must have "database": { ... } instead of "database": [{ ... }].
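Applied to the configs section from the question, the suggested fix would look like this:
"configs": [
  {
    "nodeType": "postgres9",
    "restart": false,
    "database": {
      "name": "somedbname",
      "user": "someusername",
      "dump": "http://www.somehost.de/jelastic/somedump.sql"
    }
  }
]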
I can also suggest reviewing the example from Cyclos. Their idea is to download an executable bash script that is started by cron and does everything required to set up the database, including adding a new user, extensions, etc.
Best regards.