Docker - Creating multiple containers/environments with different versions - Windows

I'm starting with MongoDB and taking four courses. All of them use different versions of MongoDB, Python, Node.js, ASP.NET, the MEAN stack, etc. This is the desired structure of my workspace:
courses
├─ mongodb_basic
│  ├─ hello_world-2.7.py
│  └─ data
│     └─ db
├─ python-3.6_mongodb
│  ├─ getting_started.py
│  └─ data
│     └─ db
├─ dotnet_and_mongodb
│  ├─ (project files)
│  └─ data
│     └─ db
├─ mongodb_node
│  ├─ (project files)
│  └─ data
│     └─ db
└─ mean_intro
   └─ (project files)
I want to keep my Windows 10 system clean by using Docker instead of installing all the stuff on the host. I'm stuck in the first course and don't know how to:
link containers
python/pymongo <-> mongodb
aspnet <-> mongodb
... <-> mongodb
map the data folders
start/stop linked containers with one command (desirable)
I'd like to keep the workspace on the host (an external HDD) so that I can work on different computers (three W10 PCs).
Google turns up many tutorials (containerizing, docker-compose, etc.), but I don't know where to start.

I think it might be possible to do what you are trying to do using docker-compose and correctly defined Dockerfiles. So if you are wondering where to start, I would suggest getting acquainted with Dockerfiles and docker-compose.
To answer your question:
linking containers:
that can be done with docker-compose. Specify the container services you want to use in a compose file like the one specified here.
NOTE: the volumes: declaration is where you would specify your workspace folder structure for the containers to access.
map folders/data: again, I would check out the link mentioned above. In their Dockerfile they use the ADD command to copy the current host directory into the container's /code directory, and that same directory is also declared as a volume in the compose file. What does that mean? Whatever you change in the host workspace shows up under /code inside the container.
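For instance, here is a minimal sketch of a compose file for one of the course folders (the image tags, service names, and the /code mount point are my own illustrative choices, not taken from any of the courses):

# docker-compose.yml, placed in e.g. python-3.6_mongodb/
version: "2"
services:
  mongo:
    image: mongo:3.4
    volumes:
      - ./data/db:/data/db      # the course's data folder mapped into the database container
  app:
    image: python:3.6           # a real setup would build an image with pymongo installed
    volumes:
      - .:/code                 # the host workspace shows up inside the container
    working_dir: /code
    command: python getting_started.py
    depends_on:
      - mongo                   # the app can reach MongoDB at the hostname "mongo"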
start/stop with one command: you should be able to create, start, or stop all the services (or a specific service) using one of the docker-compose up, docker-compose start, or docker-compose stop commands.
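For example, run from a course folder that contains a docker-compose.yml:

docker-compose up -d     # create and start all the services in the background
docker-compose stop      # stop the course's containers
docker-compose down      # stop and remove them (your mapped folders stay on the host)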
For your application you might even be able to get away with defining your workspace as volumes in all of the Dockerfiles and then building them with a script. You could also use an orchestration service like Kubernetes, but that might be overkill.
Hope this is helpful.

Related

Windows Docker Dockerfile COPY file inside folder

I'm trying to build a Dockerfile that copies a file into the container; I'm using Windows 10. This is my Dockerfile:
FROM openjdk:8
COPY /target/myfile.java /
And I'm getting the error:
failed to solve with frontend dockerfile.v0: failed to build LLB: failed to compute cache key: "/target/myfile.java" not found: not found
I already tried //target//myfile.java, \\target\\myfile.java, \target\myfile.java, target/myfile.java, target\myfile.java but none of them worked.
If I put myfile.java in the same directory as the Dockerfile and use COPY myfile.java /, it works without problems. So the problem is copying a file from inside a folder. Any suggestions?
I tried your Dockerfile locally and it built fine with the following directory structure:
Project
│   Dockerfile
│
└───target
        myfile.java
I built it from the 'Project' directory with the following command:
docker build . -t java-test
I could only reproduce the error when the Docker server couldn't find the 'myfile.java', i.e. using the following directory structure:
Project
│   Dockerfile
│
└───target
    └───target
            myfile.java
So your Dockerfile looks fine; just make sure you build it from the right directory, with the correct build context, and that the file is stored in the correct place locally.
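In other words, the path in COPY is resolved against the build context you pass to docker build, not against the Dockerfile's location. Two illustrative invocations (the path/to/Project path is hypothetical):

docker build -t java-test .                 # context = current dir; expects ./target/myfile.java
docker build -t java-test path/to/Project   # context = Project; expects Project/target/myfile.java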

REST API with Hyperledger Fabric and Node.js on Heroku

I am trying to connect my Hyperledger Fabric network with my backend on Heroku.
I did all the connections as the examples suggest.
When I deploy to Heroku I get the following error:
[NetworkConfig101.js]: NetworkConfig101 - problem reading the PEM file :: Error: ENOENT: no such file or directory
My .pem files are in the same folder as my configuration file.
Use a path relative to the working directory. For example, given this directory tree:
.
├── app.js
└── artifacts
    ├── crypto-config
    │   ├── ca.crt
    │   └── key.pem
    └── network-config.yaml
In network-config.yaml, you should use the path:
path: ./artifacts/crypto-config/ca.crt
Another way is to use an absolute path:
path: /data/app/artifacts/crypto-config/ca.crt
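If you're unsure what the working directory is on the dyno, a quick sanity check from app.js can help (a sketch assuming the tree above; the file names come from it):

const path = require('path');

console.log(process.cwd());  // the directory that relative paths in the config are resolved against
console.log(path.join(__dirname, 'artifacts', 'crypto-config', 'ca.crt'));  // absolute path built from app.js's own location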

How to remove addon when attached with multiple names on Heroku?

We're migrating some things around and temporarily needed to attach the same database under two different environment variables. Now that we've migrated, I would like to remove the attachment, but sadly, the Heroku command-line client says that I have an ambiguous identifier:
$ heroku addons --app $APP_SOURCE
Add-on Plan Price State
────────────────────────────────────────────────────────────── ─────────── ──────── ───────
heroku-postgresql (...) hobby-basic $9/month created
├─ as DATABASE
├─ as HEROKU_POSTGRESQL_ORANGE
├─ as DATABASE on stb-crds-rails-sf app
├─ as SHARETHEBUS_RAILS_DATABASE on stb-crds-rails-sf app
└─ as SHARETHEBUS_RAILS_DATABASE_URL on stb-crds-rails-sf app
$ heroku addons:detach --app stb-crds-rails-sf $ADDON_NAME
▸ Ambiguous identifier; multiple matching attachments found: DATABASE, SHARETHEBUS_RAILS_DATABASE, SHARETHEBUS_RAILS_DATABASE_URL.
I also tried heroku addons:detach --app stb-crds-rails-sf $ADDON_NAME --as SHARETHEBUS_RAILS_DATABASE_URL and heroku addons:detach --app stb-crds-rails-sf $ADDON_NAME SHARETHEBUS_RAILS_DATABASE_URL, but the command-line client says the last arguments are unexpected.
What are our options to remove the extra addons?
It turns out you can use the attachment name instead of the addon name:
$ heroku addons:detach SHARETHEBUS_RAILS_DATABASE_URL
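If you run the command outside a checked-out app directory, you presumably still need to name the app explicitly (this invocation is untested on my side):

$ heroku addons:detach SHARETHEBUS_RAILS_DATABASE_URL --app stb-crds-rails-sf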

Strapi: how to start it in the background?

Usually we use "strapi start" to start Strapi.
I'm hosting it on AWS Ubuntu:
I tried "strapi start &" to run it in the background. However, once the terminal is closed, the Strapi console can't be accessed anymore.
I got a script not found: server.js error when using #user1872384's solution.
So, here is the correct way to run strapi in background mode.
NODE_ENV=production pm2 start --name APP_NAME npm -- start
This just tells pm2 to use the npm start command and lets npm figure out which script to run.
Hope it helps someone.
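If you also want the process to survive server reboots, pm2's persistence commands should cover that (a sketch; run on the server after starting the app):

pm2 save      # snapshot the current process list
pm2 startup   # prints the command that registers pm2 as a boot service; run what it prints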
To run Strapi in development mode, use the following pm2 command from your project folder:
pm2 start npm --name my-project -- run develop
and
pm2 list
to view the status
You can also start it with pm2 by typing:
pm2 start "yarn develop"
You need to use pm2.
To start:
npm install pm2 -g
NODE_ENV=production pm2 start server.js --name api
To list all processes:
pm2 list
┌──────────┬────┬─────────┬──────┬───────┬────────┬─────────┬────────┬─────┬────────────┬────────┬──────────┐
│ App name │ id │ version │ mode │ pid │ status │ restart │ uptime │ cpu │ mem │ user │ watching │
├──────────┼────┼─────────┼──────┼───────┼────────┼─────────┼────────┼─────┼────────────┼────────┼──────────┤
│ api │ 0 │ 0.1.0 │ fork │ 21817 │ online │ 0 │ 2m │ 0% │ 108.0 MB │ ubuntu │ disabled │
└──────────┴────┴─────────┴──────┴───────┴────────┴─────────┴────────┴─────┴────────────┴────────┴──────────┘
To stop, use the id:
pm2 stop 0
First:
npm install pm2 -g
Then add server.js to the root of your project with the lines below:
const strapi = require('strapi'); // load the strapi package
strapi().start();                 // create a Strapi instance and start the server
Save, then:
pm2 start server.js
The best way is to use pm2 and its ecosystem.config.js file.
First, install pm2:
npm i -g pm2@latest
In ecosystem.config.js, add the following code:
module.exports = {
  apps: [
    {
      name: 'give-your-app-a-name',
      script: 'npm',
      args: 'start',
      watch: true, // automatically restart the server on file changes
      max_memory_restart: '450M',
      env: {
        NODE_ENV: 'production',
      },
    },
    {
      name: 'give-your-other-app-a-name',
      script: 'npm',
      args: 'start',
      env: {
        NODE_ENV: 'production',
      },
    },
  ],
}
Finally, on your server run:
pm2 start ecosystem.config.js
That's it.
Here's the official page about starting Strapi with PM2.
Starting with the strapi command
By default there are two important commands:
yarn develop to start your project in development mode.
yarn start to start your app for production.
You can also have your process manager run the yarn start or yarn develop command:
pm2 start npm --name my-app -- run develop

Chef in local mode cookbooks only

I'm trying to set up a workflow to develop Chef cookbooks locally. We're currently using Chef Server with the provisioned nodes using chef-client.
As part of the new workflow, we want to be able to start using Vagrant to test cookbooks locally and avoid incurring the costs of testing on a remote machine in the cloud.
I'm able to launch and provision a local Vagrant machine, but the one thing I'm not really sure how to do is to have Chef load the local version of the cookbook, but still talk to the Chef server for everything else (environments, roles, data bags, etc.), so I don't have to upload the cookbook via knife every time I make a change I want to test. Is this possible?
In other words, can I make chef-client talk to the local chef-zero server only for the cookbooks but to the remote Chef server for everything else? Or maybe a different approach that would yield the same effect? I'm open to suggestions.
UPDATE
I think an example will help express what I'm looking for. I realize this may not really be what I need, but I'm curious about how to achieve it anyway. In this scenario, a recipe reads from a data bag stored on the remote Chef server:
metadata.rb
name 'proxy-cookbook'
version '0.0.0'
.kitchen.yml
---
driver:
  name: vagrant
provisioner:
  name: chef_zero
platforms:
  - name: ubuntu-12.04
suites:
  - name: default
    run_list:
      - recipe[proxy-cookbook::default]
    attributes:
recipes/default.rb
...
key = data_bag_item("key", "main")
....
Now, I know I can create something along the lines of:
data_bags/main.json
{
"id": "main",
"key": "s3cr3tk3y"
}
And have my kitchen tests read from that data bag; but that is exactly what I'm trying to avoid. Is it possible to either:
Instruct test-kitchen to get the actual data bags from the Chef server,
Have chef-zero retrieve a temporary copy of the data bags for local tests, or
Quickly "dump" the contents of a remote Chef server locally?
I hope that makes sense. I can add some context if necessary.
Test Kitchen is the best way to drive Vagrant. It provides the integration you're looking for with chef-zero, and it enables you to completely emulate your production Chef setup locally and test your cookbook against multiple platforms.
Test Kitchen has replaced the older workflows I used for Chef development. It is well worth learning.
Example
Generate a demo cookbook that installs Java using the community cookbook. Tools like Berkshelf (to manage cookbook dependencies) and chef-zero are set up automatically.
chef generate cookbook demo
Creates the following files:
└── demo
    ├── .kitchen.yml
    ├── Berksfile
    ├── metadata.rb
    ├── recipes
    │   └── default.rb
    └── test
        └── integration
            └── default
                └── serverspec
                    └── default_spec.rb
.kitchen.yml
Update the platform versions. Kitchen is told to use vagrant and chef zero.
---
driver:
  name: vagrant
provisioner:
  name: chef_zero
platforms:
  - name: ubuntu-14.04
  - name: centos-6.6
suites:
  - name: default
    run_list:
      - recipe[demo::default]
    attributes:
Berksfile
This file controls how cookbook dependencies are managed. The special "metadata" setting tells Berkshelf to refer to the cookbook's metadata file.
source 'https://supermarket.chef.io'
metadata
metadata.rb
Add the "apt" and "java" cookbooks as a dependencies:
name 'demo'
..
..
depends "apt"
depends "java"
recipes/default.rb
include_recipe "apt"
include_recipe "java"
test/integration/default/serverspec/default_spec.rb
Test for the installation of the JDK package
require 'spec_helper'

describe package("openjdk-6-jdk") do
  it { should be_installed }
end
Running the example
$ kitchen verify default-ubuntu-1404
-----> Starting Kitchen (v1.4.0)
..
..
Package "openjdk-6-jdk"
should be installed
Finished in 0.1007 seconds (files took 0.268 seconds to load)
1 example, 0 failures
Finished verifying <default-ubuntu-1404> (0m13.73s).
-----> Kitchen is finished. (0m14.20s)
Update
The following example demonstrates using Test Kitchen with roles (the same approach works for data bags and other items you want loaded into chef-zero):
Can the java cookbook be used to install a local copy of oracle java?
I think I found what I was looking for.
You can use knife to download the Chef server objects that you need. You can bootstrap this in .kitchen.yml so you don't have to do it manually every time.
.kitchen.yml
...
driver:
  name: vagrant
  pre_create_command: 'mkdir -p chef-server; knife download /data_bags /roles /environments --chef-repo-path chef-server/'
...
provisioner:
  name: chef_zero
  data_bags_path: chef-server/data_bags
  roles_path: chef-server/roles
  environments_path: chef-server/environments
  client_rb:
    environment: development
...
And then I just added the chef-server directory to .gitignore
.gitignore
chef-server/
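One caveat: pre_create_command only runs when the instance is created, so refreshing the downloaded objects presumably means recreating the instance:

kitchen destroy && kitchen create   # re-runs pre_create_command and re-downloads the server objects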
There might be a less redundant way of doing this, but it works for me right now, and since I just wanted to document the approach, I'm leaving it as is.
