How can I add more peers to a blockchain network developed with Hyperledger Composer? - hyperledger-composer

I have developed a blockchain network with Hyperledger Composer in the development environment, as the documentation shows. I have tested it and it works well, so now I want to build a production network. At this point, my first objective is to add more peers to the development environment on the same server, in order to learn. I have looked at startFabric.sh, and I have edited that script and the Docker Compose file, but it doesn't work. I have attached the two files that I edited from the original code. The error I get is that the peer1 container isn't running; the second database (couchdb2) is running.
I have searched forums for how to add more peers, but I haven't found a good step-by-step guide.
So my questions: what have I done wrong? And do you know a good tutorial on adding more peers to the development environment?
Thank you
startFabric.sh
#!/bin/bash
# Exit on first error, print all commands.
set -ev
#Detect architecture
ARCH=`uname -m`
# Grab the current directory
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
#
cd "${DIR}"/composer
ARCH=$ARCH docker-compose -f "${DIR}"/composer/docker-compose.yml down
ARCH=$ARCH docker-compose -f "${DIR}"/composer/docker-compose.yml up -d
# wait for Hyperledger Fabric to start
# in case of errors when running later commands, issue export FABRIC_START_TIMEOUT=<larger number>
echo ${FABRIC_START_TIMEOUT}
sleep ${FABRIC_START_TIMEOUT}
# Create the channel
docker exec peer0.org1.example.com peer channel create -o orderer.example.com:7050 -c composerchannel -f /etc/hyperledger/configtx/composer-channel.tx
docker exec peer1.org1.example.com peer channel create -o orderer.example.com:7050 -c composerchannel -f /etc/hyperledger/configtx/composer-channel1.tx
# Join peer0.org1.example.com to the channel.
docker exec -e "CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/msp/users/Admin#org1.example.com/msp" peer0.org1.example.com peer channel join -b composerchannel.block
docker exec -e "CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/msp/users/Admin#org1.example.com/msp" peer1.org1.example.com peer channel join -b composerchannel.block
cd ../..
docker-compose.yml
version: '2'
services:
  ca.org1.example.com:
    image: hyperledger/fabric-ca:$ARCH-1.0.1
    environment:
      - FABRIC_CA_HOME=/etc/hyperledger/fabric-ca-server
      - FABRIC_CA_SERVER_CA_NAME=ca.org1.example.com
      # - FABRIC_CA_SERVER_CA_CERTFILE=/etc/hyperledger/fabric-ca-server-config/org1.example.com-cert.pem
      # - FABRIC_CA_SERVER_CA_KEYFILE=/etc/hyperledger/fabric-ca-server-config/a22daf356b2aab5792ea53e35f66fccef1d7f1aa2b3a2b92dbfbf96a448ea26a_sk
    ports:
      - "7054:7054"
    command: sh -c 'fabric-ca-server start --ca.certfile /etc/hyperledger/fabric-ca-server-config/ca.org1.example.com-cert.pem --ca.keyfile /etc/hyperledger/fabric-ca-server-config/19ab65a$
    volumes:
      - ./crypto-config/peerOrganizations/org1.example.com/ca/:/etc/hyperledger/fabric-ca-server-config
    container_name: ca.org1.example.com
  orderer.example.com:
    container_name: orderer.example.com
    image: hyperledger/fabric-orderer:$ARCH-1.0.1
    environment:
      - ORDERER_GENERAL_LOGLEVEL=debug
      - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
      - ORDERER_GENERAL_GENESISMETHOD=file
      - ORDERER_GENERAL_GENESISFILE=/etc/hyperledger/configtx/composer-genesis.block
      - ORDERER_GENERAL_LOCALMSPID=OrdererMSP
      - ORDERER_GENERAL_LOCALMSPDIR=/etc/hyperledger/msp/orderer/msp
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric
    command: orderer
    ports:
      - 7050:7050
    volumes:
      - ./:/etc/hyperledger/configtx
      - ./crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/msp:/etc/hyperledger/msp/orderer/msp
  peer0.org1.example.com:
    container_name: peer0.org1.example.com
    image: hyperledger/fabric-peer:$ARCH-1.0.1
    environment:
      - CORE_LOGGING_PEER=debug
      - CORE_CHAINCODE_LOGGING_LEVEL=DEBUG
      - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
      - CORE_PEER_ID=peer0.org1.example.com
      - CORE_PEER_ADDRESS=peer0.org1.example.com:7051
      - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=composer_default
      - CORE_PEER_LOCALMSPID=Org1MSP
      - CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/peer/msp
      - CORE_LEDGER_STATE_STATEDATABASE=CouchDB
      - CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS=couchdb:5984
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric
    command: peer node start --peer-defaultchain=false
    ports:
      - 7051:7051
      - 7053:7053
    volumes:
      - /var/run/:/host/var/run/
      - ./:/etc/hyperledger/configtx
      - ./crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/msp:/etc/hyperledger/peer/msp
      - ./crypto-config/peerOrganizations/org1.example.com/users:/etc/hyperledger/msp/users
    depends_on:
      - orderer.example.com
      - couchdb
  couchdb:
    container_name: couchdb
    image: hyperledger/fabric-couchdb:$ARCH-1.0.1
    ports:
      - 5984:5984
    environment:
      DB_URL: http://localhost:5984/member_db
  peer1.org1.example.com:
    container_name: peer1.org1.example.com
    image: hyperledger/fabric-peer:$ARCH-1.0.1
    environment:
      - CORE_LOGGING_PEER=debug
      - CORE_CHAINCODE_LOGGING_LEVEL=DEBUG
      - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
      - CORE_PEER_ID=peer1.org1.example.com
      - CORE_PEER_ADDRESS=peer1.org1.example.com:7051
      - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=composer_default
      - CORE_PEER_LOCALMSPID=Org1MSP
      - CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/peer/msp
      - CORE_LEDGER_STATE_STATEDATABASE=CouchDB
      - CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS=couchdb2:5985
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric
    command: peer node start --peer-defaultchain=false
    ports:
      - 7061:7061
      - 7063:7063
    volumes:
      - /var/run/:/host/var/run/
      - ./:/etc/hyperledger/configtx
      - ./crypto-config/peerOrganizations/org1.example.com/peers/peer1.org1.example.com/msp:/etc/hyperledger/peer/msp
      - ./crypto-config/peerOrganizations/org1.example.com/users:/etc/hyperledger/msp/users
    depends_on:
      - orderer.example.com
      - couchdb2
  couchdb2:
    container_name: couchdb2
    image: hyperledger/fabric-couchdb:$ARCH-1.0.1
    ports:
      - 5985:5985
    environment:
      DB_URL: http://localhost:5984/member_db

We provide a basic Hyperledger Fabric network for development purposes only; it isn't meant to be an example that demonstrates how to build one. Hyperledger Composer will work with any Hyperledger Fabric setup given the right connection profiles, and Hyperledger Fabric provides documentation and examples on how to build your own networks, which I think is what you need.
See https://hyperledger-fabric.readthedocs.io/en/latest/build_network.html
about how to build your own network and also see
https://hyperledger.github.io/composer/reference/connectionprofile.html
for information about composer connection profiles.
Also see "Does composer support endorsement policy? How?", which provides some info about multi-org networks and connection profiles.
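That said, looking at the attached compose file, the most likely reason the peer1 container exits is the port wiring: inside their containers the peer and CouchDB still listen on their default ports (7051 and 5984), so only the host side of each mapping should change, and peer1's CouchDB address must point at the port couchdb2 actually listens on. A sketch of just the entries that would change (untested; it also doesn't address that peer channel create should run only once per channel, with peer1 merely joining via the composerchannel.block that peer0 created):

peer1.org1.example.com:
  environment:
    # couchdb2 still listens on 5984 inside its container
    - CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS=couchdb2:5984
  ports:
    # host port 7061 -> container port 7051
    - 7061:7051
    - 7063:7053
couchdb2:
  ports:
    # host port 5985 -> container port 5984
    - 5985:5984

With that, peer1 stays reachable from the host on 7061 while the peers keep talking to each other on 7051 over the Docker network.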

Related

Passing a selenium docker container IP to a ruby cucumber script

I'm attempting to put a Ruby Cucumber test into Docker. I'm using a docker-compose.yml file to start a Selenium hub container along with Chrome and Firefox nodes. Then I'm building an Alpine Ruby based image with my tests.
I've gotten the process to work; however, it involves finding the IP of the hub container each time it is built, and then hardcoding that IP into my env.rb file where I connect to the Selenium grid.
I've seen that linked containers can be reached by name, but I haven't had much luck there. Is there any way I can easily pass the hub container's IP to my test's container?
Here is my yml file:
version: "3"
services:
hub:
image: selenium/hub
ports:
- "4444:4444"
environment:
GRID_MAX_SESSION: 16
GRID_BROWSER_TIMEOUT: 3000
GRID_TIMEOUT: 3000
chrome:
image: selenium/node-chrome
container_name: web-automation_chrome
depends_on:
- hub
environment:
HUB_PORT_4444_TCP_ADDR: hub
HUB_PORT_4444_TCP_PORT: 4444
NODE_MAX_SESSION: 4
NODE_MAX_INSTANCES: 4
volumes:
- /dev/shm:/dev/shm
ports:
- "9001:5900"
links:
- hub
firefox:
image: selenium/node-firefox
container_name: web-automation_firefox
depends_on:
- hub
environment:
HUB_PORT_4444_TCP_ADDR: hub
HUB_PORT_4444_TCP_PORT: 4444
NODE_MAX_SESSION: 2
NODE_MAX_INSTANCES: 2
volumes:
- /dev/shm:/dev/shm
ports:
- "9002:5900"
links:
- hub
myapp:
build: .
image: justinpshields/myapp
depends_on:
- hub
environment:
URL: hub
links:
- hub
networks:
default:
links is useless: every container in a docker-compose.yml shares the same network unless stated otherwise.
You should also wait until the Selenium hub starts and attaches its browser containers.
For instance with this:
while ! curl -sSL "http://$SELENIUMHUBHOST:4444/status" 2>&1 | grep "\"ready\": true" >/dev/null; do
    echo 'Waiting for the Grid'
    sleep 1
done

while ! curl -sSL "http://$SELENIUMHUBHOST:4444/status" 2>&1 | grep "\"browserName\": \"$BROWSER\"" >/dev/null; do
    echo "Waiting for the node $BROWSER"
    sleep 1
done
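A minimal way to wire this in, assuming the two loops above are saved as wait-for-grid.sh inside the test image (the script name and the cucumber invocation are illustrative, not from the original setup):

myapp:
  build: .
  image: justinpshields/myapp
  depends_on:
    - hub
  environment:
    # the Compose service name resolves as a hostname on the shared network
    SELENIUMHUBHOST: hub
    BROWSER: chrome
  command: sh -c "./wait-for-grid.sh && cucumber"

Compose service names resolve as hostnames on the default network, which is why plain hub works here instead of a hardcoded IP.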

Docker-compose for production running laravel with nginx on azure

I have an app that works, but I am having problems getting it to run on Azure.
I have the following docker-compose:
version: "3.6"
services:
nginx:
image: nginx:alpine
volumes:
- ./:/var/www/
- ./setup/azure/nginx/conf.d/:/etc/nginx/template
environment:
PORT: ${PORT}
command: /bin/sh -c "envsubst '$${PORT}' < /etc/nginx/template/nginx.conf.template > /etc/nginx/conf.d/default.conf && nginx -g 'daemon off;'"
networks:
- mynet
depends_on:
- app
- worker
app:
image: myimage:latest
build:
context: .
dockerfile: ./setup/azure/Dockerfile
restart: unless-stopped
tty: true
expose:
- 9000
volumes:
- uploads:/var/www/simple/public/uploads
- logos:/var/www/simple/public/logos
networks:
- mynet
worker:
image: my_image:latest
command: bash -c "/usr/local/bin/php artisan queue:work --timeout=0"
depends_on:
- app
networks:
- mynet
volumes:
uploads:
logos:
networks:
mynet:
I am unsure whether the volumes in nginx are okay. I think perhaps I should create a new Dockerfile to copy the files; however, this would increase the size of the project a lot.
When using App Services on Azure, the deployment assigns a port randomly; that's why I have the envsubst instruction in the command. I'd appreciate any other suggestions for getting this project to run on Azure.
I'm assuming you're trying to persist your app's storage to a volume. Check out this doc issue. Now, I don't think you need

volumes:
  - ./:/var/www/
  - ./setup/azure/nginx/conf.d/:/etc/nginx/template

but for

volumes:
  - uploads:/var/www/simple/public/uploads
  - logos:/var/www/simple/public/logos

you can create a storage account, mount it to your Linux app plan (it's not available for Windows app plans yet), and map the relative path /var/www/simple/public/uploads to the file path of the storage container.
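If it helps, the mount itself can be created with the Azure CLI; a sketch with placeholder resource names (resource group, app name, storage account, and key are all assumptions here):

az webapp config storage-account add \
  --resource-group myResourceGroup \
  --name my-laravel-app \
  --custom-id uploads \
  --storage-type AzureFiles \
  --account-name mystorageaccount \
  --share-name uploads \
  --access-key "<storage-key>" \
  --mount-path /var/www/simple/public/uploads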

Issue with spring cloud dataflow and Remote Repository: Apps are installed but I can't deploy streams

I'm facing an issue using Spring Cloud Data Flow connected to a remote repository.
I think I managed to connect the Data Flow server to the repository correctly, because at first I couldn't import apps and now I can.
The problem is that when I try to deploy a stream, the Data Flow server doesn't see the remote repository.
Here's an example to make myself clear.
When I try to import a jar that does not exist, the import is successful, but if I try to open its details from the UI I get:
Failed to resolve MavenResource: [JAR-NAME] Configured remote repositories: : [repo1],[springRepo]
So I guess the system sees "repo1".
But then when I deploy a stream (with all valid apps) I get:
Error Message = [Failed to resolve MavenResource: [JAR-NAME] Configured remote repository: : [springRepo]]
I followed this: https://github.com/spring-cloud/spring-cloud-dataflow/issues/982
And this: https://docs.spring.io/spring-cloud-dataflow/docs/1.1.0.BUILD-SNAPSHOT/reference/html/getting-started-deploying-spring-cloud-dataflow.html
This is my docker-compose.yml:
version: '3'
services:
  kafka:
    image: wurstmeister/kafka:2.11-0.11.0.3
    expose:
      - "9092"
    environment:
      - KAFKA_ADVERTISED_PORT=9092
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
      - KAFKA_ADVERTISED_HOST_NAME=kafka
    depends_on:
      - zookeeper
  zookeeper:
    image: wurstmeister/zookeeper
    expose:
      - "2181"
  dataflow-server:
    image: springcloud/spring-cloud-dataflow-server:2.0.2.RELEASE
    container_name: dataflow-server
    ports:
      - "9393:9393"
    environment:
      - spring.cloud.dataflow.applicationProperties.stream.spring.cloud.stream.kafka.binder.brokers=kafka:9092
      - spring.cloud.dataflow.applicationProperties.stream.spring.cloud.stream.kafka.binder.zkNodes=zookeeper:2181
      - spring.cloud.skipper.client.serverUri=http://skipper-server:7577/api
      - spring.cloud.dataflow.applicationProperties.stream.management.metrics.export.influx.enabled=true
      - spring.cloud.dataflow.applicationProperties.stream.management.metrics.export.influx.db=myinfluxdb
      - spring.cloud.dataflow.applicationProperties.stream.management.metrics.export.influx.uri=http://influxdb:8086
      - spring.cloud.dataflow.grafana-info.url=http://localhost:3000
      - maven.localRepository=null
      - maven.remote-repositories.repo1.url= [URL]
      - maven.remote-repositories.repo1.auth.username=***
      - maven.remote-repositories.repo1.auth.password=***
    depends_on:
      - kafka
    volumes:
      - ~/.m2/repository:/m2repo
  app-import:
    image: springcloud/openjdk:latest
    depends_on:
      - dataflow-server
    command: >
      /bin/sh -c "
      while ! nc -z dataflow-server 9393;
      do
        sleep 1;
      done;
      wget -qO- 'http://dataflow-server:9393/apps' --post-data='uri=https://repo.spring.io/libs-release/org/springframework/cloud/stream/app/spring-cloud-stream-app-descriptor/Einstein.RELEASE/spring-cloud-stream-app-descriptor-Einstein.RELEASE.stream-apps-kafka-maven&force=true';
      echo 'Stream apps imported'
      wget -qO- 'http://dataflow-server:9393/apps' --post-data='uri=https://repo.spring.io/libs-release-local/org/springframework/cloud/task/app/spring-cloud-task-app-descriptor/Dearborn.SR1/spring-cloud-task-app-descriptor-Dearborn.SR1.task-apps-maven&force=true';
      echo 'Task apps imported'"
  skipper-server:
    image: springcloud/spring-cloud-skipper-server:2.0.1.RELEASE
    container_name: skipper
    ports:
      - "7577:7577"
      - "9000-9010:9000-9010"
  influxdb:
    image: influxdb:1.7.4
    container_name: 'influxdb'
    ports:
      - '8086:8086'
  grafana:
    image: springcloud/spring-cloud-dataflow-grafana-influxdb:2.0.2.RELEASE
    container_name: 'grafana'
    ports:
      - '3000:3000'
volumes:
  scdf-targets:
You need to set the Maven remote repository configuration for the Skipper server as well. It is the Skipper server that handles the deployment request from the SCDF server, hence the Skipper server requires similar configuration:

- maven.remote-repositories.repo1.url= [URL]
- maven.remote-repositories.repo1.auth.username=***
- maven.remote-repositories.repo1.auth.password=***
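In the compose file above, that would mean giving the skipper-server service the same environment block as dataflow-server; a sketch (URL and credentials elided, exactly as in the question):

skipper-server:
  image: springcloud/spring-cloud-skipper-server:2.0.1.RELEASE
  container_name: skipper
  ports:
    - "7577:7577"
    - "9000-9010:9000-9010"
  environment:
    # same remote repository settings as the dataflow-server service
    - maven.remote-repositories.repo1.url= [URL]
    - maven.remote-repositories.repo1.auth.username=***
    - maven.remote-repositories.repo1.auth.password=***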

Can't add persistent folder to bitnami/mongodb on Windows

I think this might be related to a file system incompatibility (NTFS/ext*).
How can I compose my containers and persist the DB without the container exiting?
I'm using the bitnami-mongodb-image.
Error:
Error executing 'postInstallation': EACCES: permission denied, mkdir '/bitnami/mongodb'
mongodb_1 exited with code 1
Full Output:
Recreating mongodb_1 ... done
Starting node_1 ... done
Attaching to node_1, mongodb_1
mongodb_1 |
mongodb_1 | Welcome to the Bitnami mongodb container
mongodb_1 | Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-mongodb
mongodb_1 | Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-mongodb/issues
mongodb_1 |
mongodb_1 | nami INFO Initializing mongodb
mongodb_1 | mongodb INFO ==> Deploying MongoDB from scratch...
mongodb_1 | Error executing 'postInstallation': EACCES: permission denied, mkdir '/bitnami/mongodb'
mongodb_1 exited with code 1
Docker Version:
Docker version 18.06.0-ce, build 0ffa825
Windows Version:
Microsoft Windows 10 Pro
Version 10.0.17134 Build 17134
This is my docker-compose.yml so far:
version: "2"
services:
node:
image: "node:alpine"
user: "node"
working_dir: /home/node/app
environment:
- NODE_ENV=development
volumes:
- ./:/home/node/app
ports:
- "8888:8888"
command: "tail -f /dev/null"
mongodb:
image: 'bitnami/mongodb'
ports:
- "27017:27017"
volumes:
- "./data/db:/bitnami"
- "./conf/mongo:/opt/bitnami/mongodb/conf"
I do not use Windows, but you can definitely try to use a named volume and see if the permission problem goes away:
version: "2"
services:
node:
image: "node:alpine"
user: "node"
working_dir: /home/node/app
environment:
- NODE_ENV=development
volumes:
- ./:/home/node/app
ports:
- "8888:8888"
command: "tail -f /dev/null"
mongodb:
image: 'bitnami/mongodb'
ports:
- "27017:27017"
volumes:
- mongodata:/bitnami:rw
- "./conf/mongo:/opt/bitnami/mongodb/conf"
volumes:
mongodata:
I would like to stress that this is a named volume, as opposed to the host volumes you are using. It is the best option for production, and you need to be aware that Docker will manage and store the files for you, so you will not see the files in your project folder.
If you still want to use host volumes (volumes that write to a location you specify in a subfolder of your project on the host machine), you need to apply a permission fix. Here is an example for MariaDB, but it will work for Mongo too:
https://github.com/bitnami/bitnami-docker-mariadb/issues/136#issuecomment-354644226
In short, you need to know the user of the filesystem on your host (in the example, 1001 is the user id of my logged-in user on my host machine) and then chown that folder to this user, so the user is the same on the folder and on your host system.
A full example:
version: "2"
services:
fix-mongodb-permissions:
image: 'bitnami/mongodb:latest'
user: root
command: chown -R 1001:1001 /bitnami
volumes:
- "./data:/bitnami"
mongodb:
image: 'bitnami/mongodb'
ports:
- "27017:27017"
volumes:
- ./data:/bitnami:rw
depends_on:
- fix-mongodb-permissions
I hope this helps
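As a side note on the named-volume route: Docker decides where the data lives on disk. If you ever need to find it, docker volume inspect will show the mountpoint (the volume name gets prefixed with your Compose project name, assumed here to be myproject):

docker volume inspect myproject_mongodata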

Docker compose containers fail and exit with code 127 missing /bin/env bash

I'm new to Docker, so bear with me if I use any wrong terms.
I have Docker Tools installed on Windows 7 and I'm trying to run the Docker Compose file of a proprietary existing project, stored in a git repository, that has probably only ever been run on Linux.
These are the commands I ran:
docker-machine start
docker-machine env
@FOR /f "tokens=*" %i IN ('docker-machine env') DO @%i
(this command was output by the previous step)
docker-compose -f <docker-file.yml> up
Most of the Docker work has gone fine (image download, extraction, etc).
It is failing at container start: some containers run fine - I recognize a working MongoDB instance, since its log doesn't report any errors - but other containers exit pretty soon with an error code, e.g.:
frontend_1 exited with code 127
Scrolling up a bit in the console, I can see lines like:
No such file or directoryr/bin/env: bash
I have no idea where to go from here. I tried launching Compose from a Cygwin terminal, but got the same result.
Docker Compose file
version: "2"
services:
frontend:
command: "yarn start"
image: company/application/frontend:1
build:
context: frontend
dockerfile: docker/Dockerfile
environment:
<env entries>
ports:
- "3000:3000"
volumes:
- ./frontend:/opt/app
backend:
restart: "no"
# source ~/.bashrc is needed to add the ssh private key, used by git
command: bash -c "source ~/.bashrc && yarn run dev"
image: company/application/backend:1
build:
context: backend
dockerfile: docker/Dockerfile
environment:
<env entries>
ports:
- "4000:4000"
volumes:
- ./backend:/opt/app
- ./:/opt:rw
- ./.ssh/company_utils:/tmp/company_utils
depends_on:
- db
generator-backend:
restart: "no"
# source ~/.bashrc is needed to add the ssh private key, used by git
command: bash -c "source ~/.bashrc && npm run dev"
image: company/generator/backend:1
build:
context: generator-backend
dockerfile: docker/Dockerfile
environment:
<env entries>
ports:
- "5000:5000"
volumes:
- ./generator-backend:/opt/app
- ./:/opt:rw
- ./.ssh/company_utils:/tmp/company_utils
depends_on:
- db
db:
image: mongo:3.4
volumes:
- mongo:/data/db
ports:
- "27017:27017"
volumes:
mongo:
It turned out to be a matter of file line endings introduced by git clone, as pointed out by @mklement0 in his answer to the question "env: bash\r: No such file or directory".
Disabling core.autocrlf and then recloning the repo solved it.
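For anyone hitting the same thing, a minimal sketch of that fix (the repository URL is a placeholder):

# stop git from converting LF to CRLF on checkout, then reclone
git config --global core.autocrlf false
git clone https://example.com/company/project.git

Alternatively, committing a .gitattributes file with a line such as "* text=auto eol=lf" pins checkouts to LF endings regardless of each clone's core.autocrlf setting.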
