Error in using persistent data store with COMPOSER REST SERVER - hyperledger-composer

I tried to set up a persistent data store for the REST server but was unable to get it working. Below are the steps I followed.
Started an instance of MongoDB:
root@ubuntu:~# docker run -d --name mongo --network composer_default -p 27017:27017 mongo
dda3340e4daf7b36a244c5f30772f50a4ee1e8f81cc7fc5035f1090cdcf46c58
Created a new, empty directory, and in it a new file named Dockerfile with the following contents:
FROM hyperledger/composer-rest-server
RUN npm install --production loopback-connector-mongodb passport-github && \
npm cache clean && \
ln -s node_modules .node_modules
Changed into the directory created in step 2 and built the Docker image:
root@ubuntu:~# cd examples/dir/
root@ubuntu:~/examples/dir# ls
Dockerfile ennvars.txt
root@ubuntu:~/examples/dir# docker build -t myorg/my-composer-rest-server .
Sending build context to Docker daemon 4.096 kB
Step 1/2 : FROM hyperledger/composer-rest-server
---> 77cd6a591726
Step 2/2 : RUN npm install --production loopback-connector-couch passport-github && npm cache clean && ln -s node_modules .node_modules
---> Using cache
---> 2ff9537656d1
Successfully built 2ff9537656d1
root@ubuntu:~/examples/dir#
Created a file named ennvars.txt in the same directory, with the following contents:
COMPOSER_CONNECTION_PROFILE=hlfv1
COMPOSER_BUSINESS_NETWORK=blockchainv5
COMPOSER_ENROLLMENT_ID=admin
COMPOSER_ENROLLMENT_SECRET=adminpw
COMPOSER_NAMESPACES=never
COMPOSER_SECURITY=true
COMPOSER_CONFIG='{
    "type": "hlfv1",
    "orderers": [
        {
            "url": "grpc://localhost:7050"
        }
    ],
    "ca": {
        "url": "http://localhost:7054",
        "name": "ca.example.com"
    },
    "peers": [
        {
            "requestURL": "grpc://localhost:7051",
            "eventURL": "grpc://localhost:7053"
        }
    ],
    "keyValStore": "/home/ubuntu/.hfc-key-store",
    "channel": "mychannel",
    "mspID": "Org1MSP",
    "timeout": "300"
}'
COMPOSER_DATASOURCES='{
    "db": {
        "name": "db",
        "connector": "mongodb",
        "host": "mongo"
    }
}'
COMPOSER_PROVIDERS='{
    "github": {
        "provider": "github",
        "module": "passport-github",
        "clientID": "a88810855b2bf5d62f97",
        "clientSecret": "f63e3c3c65229dc51f1c8964b05e9717bf246279",
        "authPath": "/auth/github",
        "callbackURL": "/auth/github/callback",
        "successRedirect": "/",
        "failureRedirect": "/"
    }
}'
Loaded the environment variables with the following command:
root@ubuntu:~/examples/dir# source ennvars.txt
Started the Docker container with the command below:
root@ubuntu:~/examples/dir# docker run \
    -d \
    -e COMPOSER_CONNECTION_PROFILE=${COMPOSER_CONNECTION_PROFILE} \
    -e COMPOSER_BUSINESS_NETWORK=${COMPOSER_BUSINESS_NETWORK} \
    -e COMPOSER_ENROLLMENT_ID=${COMPOSER_ENROLLMENT_ID} \
    -e COMPOSER_ENROLLMENT_SECRET=${COMPOSER_ENROLLMENT_SECRET} \
    -e COMPOSER_NAMESPACES=${COMPOSER_NAMESPACES} \
    -e COMPOSER_SECURITY=${COMPOSER_SECURITY} \
    -e COMPOSER_CONFIG="${COMPOSER_CONFIG}" \
    -e COMPOSER_DATASOURCES="${COMPOSER_DATASOURCES}" \
    -e COMPOSER_PROVIDERS="${COMPOSER_PROVIDERS}" \
    --name rest \
    --network composer_default \
    -p 3000:3000 \
    myorg/my-composer-rest-server
942eb1bfdbaf5807b1fe2baa2608ab35691e9b6912fb0d3b5362531b8adbdd3a
The container started successfully, so I should now be able to access the persistent, secured REST server through the LoopBack explorer page.
But when I tried to open that page, I got the error below.
Error Image
Have I missed a step or done something wrong?

Two things:
You need to put export in front of the environment variables in your ennvars.txt file.
Check the version of Composer you are running. The FROM hyperledger/composer-rest-server instruction pulls down the latest version of the REST server, and if your local Composer version is not up to date, the two will be incompatible.
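For example, the first lines of ennvars.txt would become (a sketch of the fix, reusing the values from the question):

```shell
# ennvars.txt -- 'export' marks each variable for the environment of child
# processes, so after `source ennvars.txt` the later
# `docker run -e VAR=${VAR} ...` actually receives the values.
export COMPOSER_CONNECTION_PROFILE=hlfv1
export COMPOSER_BUSINESS_NETWORK=blockchainv5
export COMPOSER_ENROLLMENT_ID=admin
export COMPOSER_ENROLLMENT_SECRET=adminpw
```

The remaining variables (COMPOSER_CONFIG, COMPOSER_DATASOURCES, COMPOSER_PROVIDERS) get the same export prefix in front of their single-quoted JSON values.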

pass arguments of make commands

I have a sequence of make commands to upload a zip file to an S3 bucket and then update the Lambda function that reads that S3 file as its source code. Once I update the Lambda function, I want to publish it, and after publishing, attach an event to it using EventBridge.
I can do most of these commands automatically using make. For example:
clean:
    @rm unwanted_build_files.zip

build-lambda-pkg:
    mkdir pkg
    cd pkg && docker run #something something
    cd pkg && zip -9qr build.zip .
    cp pkg/build.zip .
    rm -r pkg

upload-s3:
    aws s3api put-object --bucket my_bucket \
        --key build.zip --body build.zip

update-lambda:
    aws lambda update-function-code --function-name my_lambda \
        --s3-bucket my_bucket \
        --s3-key build.zip

publish-lambda:
    aws lambda publish-version --function-name my_lambda
## publish-lambda prints a JSON object to the terminal; it has a key "FunctionArn" whose value I can get.

attach-event:
    aws events put-targets --rule rstats-post-explaination-at-10pm-ist \
        --targets "Id"="1","Arn"="arn:aws:lambda:::function/my_lambda/version_number"

## the following combines the above commands into a single command
build-n-update: clean build-lambda-pkg upload-s3 update-lambda
I am stuck at the last step, i.e. combining publish-lambda and attach-event into the build-n-update command. The problem is that I am unable to pass the output of one command to the next. I will try to explain it better:
publish-lambda prints JSON-style output on the terminal:
{
"FunctionName": "my_lambda",
"FunctionArn": "arn:aws:lambda:us-east-2:12345:function:my_lambda:5",
"Runtime": "python3.6",
"Role": "arn:aws:iam::12345:role/my_role",
"Handler": "lambda_function.lambda_handler",
"CodeSize": 62403592,
"Description": "",
"Timeout": 180,
"MemorySize": 512,
"LastModified": "2021-02-28T17:34:04.374+0000",
"CodeSha256": "ErfsYHVMFCQBg4iXx5ev9Z0U=",
"Version": "5",
"Environment": {
"Variables": {
"PATH": "/var/task/bin",
"PYTHONPATH": "/var/task/src:/var/task/lib"
}
},
"TracingConfig": {
"Mode": "PassThrough"
},
"RevisionId": "49b5-acdd-c1032aa16bfb",
"State": "Active",
"LastUpdateStatus": "Successful"
}
I wish to extract the function ARN stored under the key "FunctionArn" in the above output and use it in the next command, attach-event, since attach-event has a --targets argument that takes the "Arn" of the last published function.
Is it possible to do this in a single command?
I have tried to experiment a bit as follows:
build-n-update: clean build-lambda-pkg upload-s3 update-lambda
make publish-lambda | xargs jq .FunctionArn -r {}
But this throws an error:
jq: Unknown option --function-name
Please help
Well, running:
make publish-lambda | xargs jq .FunctionArn -r {}
will print the command to be run, then the output of the command (run it yourself from your shell prompt and see). Of course, jq cannot parse the command line that make prints.
Anyway, what would be the goal of this? You'd just print the function name to stdout and it wouldn't do you any good.
You basically have two choices: one is to combine the two commands into a single make recipe, so you can capture the information you need in a shell variable:
build-n-update: clean build-lambda-pkg upload-s3 update-lambda
    func=$$(aws lambda publish-version --function-name my_lambda \
            | jq .FunctionArn -r); \
    aws events put-targets --rule rstats-post-explaination-at-10pm-ist \
        --targets "Id"="1","Arn"="$$func"
The other alternative is to redirect the output of publish-version to a file, then parse that file in the attach-event target recipe:
publish-lambda:
    aws lambda publish-version --function-name my_lambda > publish.json

attach-event:
    aws events put-targets --rule rstats-post-explaination-at-10pm-ist \
        --targets "Id"="1","Arn"="$$(jq .FunctionArn -r publish.json)"
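As a side note, if jq is not installed on the build machine, the same field can be pulled out with sed (a rough sketch; it assumes the pretty-printed, one-key-per-line JSON layout shown in the question):

```shell
# Simulate the publish-version output (trimmed); in the real recipe this
# would come from `aws lambda publish-version ... > publish.json`.
cat > publish.json <<'EOF'
{
    "FunctionName": "my_lambda",
    "FunctionArn": "arn:aws:lambda:us-east-2:12345:function:my_lambda:5",
    "Version": "5"
}
EOF

# Grab the value of "FunctionArn": match the line, keep what is between the quotes.
arn=$(sed -n 's/.*"FunctionArn": *"\([^"]*\)".*/\1/p' publish.json)
echo "$arn"
```

Inside a Makefile recipe the `$(...)` command substitutions would again need to be written as `$$(...)`.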

React-Native + Detox + Gitlab-ci + AWS EC2 / Cannot boot Android Emulator with the name

Describe the bug
My goal is to run Detox e2e tests for a React Native mobile application from GitLab CI on an AWS EC2 instance.
AWS EC2: c5.xlarge 4 CPU / 8GB RAM
I just created an EC2 c5.xlarge instance on AWS and set up Docker and gitlab-runner with the Docker executor (image: alpine) on it.
Here is my .gitlab-ci.yml:
stages:
  - unit-test

variables:
  LC_ALL: 'en_US.UTF-8'
  LANG: 'en_US.UTF-8'
  DOCKER_DRIVER: overlay2
  DOCKER_HOST: tcp://docker:2376
  DOCKER_TLS_CERTDIR: "/certs"

before_script:
  - node -v
  - npm -v
  - yarn -v

detox-android:
  stage: unit-test
  image: reactnativecommunity/react-native-android
  before_script:
    - echo fs.inotify.max_user_watches=524288 | tee -a /etc/sysctl.conf && sysctl -p
    - yarn install:module_only
  script:
    - mkdir -p /root/.android && touch /root/.android/repositories.cfg
    #- $ANDROID_HOME/tools/bin/sdkmanager --list --verbose
    - echo yes | $ANDROID_HOME/tools/bin/sdkmanager --channel=0 --verbose "system-images;android-25;google_apis;armeabi-v7a"
    - echo no | $ANDROID_HOME/tools/bin/avdmanager --verbose create avd --force --name "Pixel_API_28_AOSP" --package "system-images;android-25;google_apis;armeabi-v7a" --sdcard 200M --device 11
    - echo "Waiting for the emulator to be ready..."
    - emulator -avd "Pixel_API_28_AOSP" -debug-init -no-window -no-audio -gpu swiftshader_indirect -show-kernel &
    - adb wait-for-device shell 'while [[ -z $(getprop sys.boot_completed) ]]; do sleep 1; done; input keyevent 82'
    - echo "Emulator is ready!"
    - yarn detox-emu:build:android
    - yarn detox-emu:test:android
  tags:
    - detox-android
  only:
    - ci/unit-test
Here are the CI scripts in my package.json:
{
  "scripts": {
    "detox-emu:test:android": "npx detox test -c android.emu.release.ci --headless -l verbose",
    "detox-emu:build:android": "npx detox build -c android.emu.release.ci"
  }
}
Here is my .detoxrc.json:
{
  "testRunner": "jest",
  "runnerConfig": "e2e/config.json",
  "configurations": {
    "android.real": {
      "binaryPath": "android/app/build/outputs/apk/debug/app-debug.apk",
      "build": "cd android && ./gradlew assembleDebug assembleAndroidTest -DtestBuildType=debug && cd ..",
      "type": "android.attached",
      "device": {
        "adbName": "60ac9404"
      }
    },
    "android.emu.debug": {
      "binaryPath": "android/app/build/outputs/apk/debug/app-debug.apk",
      "build": "cd android && ./gradlew assembleDebug assembleAndroidTest -DtestBuildType=debug && cd ..",
      "type": "android.emulator",
      "device": {
        "avdName": "Pixel_API_28_AOSP"
      }
    },
    "android.emu.release": {
      "binaryPath": "android/app/build/outputs/apk/release/app-release.apk",
      "build": "cd android && ./gradlew assembleRelease assembleAndroidTest -DtestBuildType=release && cd ..",
      "type": "android.emulator",
      "device": {
        "avdName": "Pixel_API_28_AOSP"
      }
    },
    "android.emu.release.ci": {
      "binaryPath": "android/app/build/outputs/apk/release/app-release.apk",
      "build": "cd android && ./gradlew assembleRelease assembleAndroidTest -DtestBuildType=release && cd ..",
      "type": "android.emulator",
      "device": {
        "avdName": "Pixel_API_28_AOSP"
      }
    }
  }
}
I tried many ways to set up an Android emulator on EC2, but it seems to work only with an armeabi-v7a emulator because of the CPU virtualization constraints. The latest emulator image available for armeabi-v7a appears to be system-images;android-25;google_apis;armeabi-v7a, so I can apparently only run an emulator with SDK version 25 on the EC2 instance.
In my mobile app I'm using Mapbox for some features, which together with Detox requires minSdkVersion 26; I have set that in my build.gradle as well.
You can see full logs of my CI in attachement.
Log_CI.txt
I get an error because Detox doesn't find my emulator under the name Pixel_API_28_AOSP. Could this error be related to the minSdkVersion, or did I miss something in my CI?
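One quick check worth adding before the boot-wait step (a hypothetical debugging line, not part of the original CI) is to list the AVDs that actually exist on the runner, so that a name mismatch with the avdName in .detoxrc.json shows up in the CI log:

```yaml
    # extra script: step for debugging -- prints the names of all created AVDs
    - emulator -list-avds
```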
Environment (please complete the following information):
Detox: 17.10.2
React Native: 0.63.2
Device: emulator system-images;android-25;google_apis;armeabi-v7a
OS: android
Thanks in advance for your help !

How to connect Opentracing application to a remote Jaeger collector

I am using Jaeger UI to display traces from my application. It works fine if both the application and Jaeger run on the same server, but I need to run my Jaeger collector on a different server. I tried JAEGER_ENDPOINT, JAEGER_AGENT_HOST, and JAEGER_AGENT_PORT, but it failed.
I don't know whether the values I set for these variables are wrong, or whether some configuration is required inside the application code.
Can you point me to any documentation for this problem?
On server 2, install Jaeger:
$ docker run -d --name jaeger \
-p 5775:5775/udp \
-p 6831:6831/udp \
-p 6832:6832/udp \
-p 5778:5778 \
-p 16686:16686 \
-p 14268:14268 \
-p 9411:9411 \
jaegertracing/all-in-one:latest
On server 1, set these environment variables:
JAEGER_SAMPLER_TYPE=probabilistic
JAEGER_SAMPLER_PARAM=1
JAEGER_SAMPLER_MANAGER_HOST_PORT=(EnterServer2HostName):5778
JAEGER_REPORTER_LOG_SPANS=false
JAEGER_AGENT_HOST=(EnterServer2HostName)
JAEGER_AGENT_PORT=6831
JAEGER_REPORTER_FLUSH_INTERVAL=1000
JAEGER_REPORTER_MAX_QUEUE_SIZE=100
application-server-id=server-x
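When setting these from a shell on server 1 (rather than a deployment descriptor), each variable has to be exported so the application process inherits it; a minimal sketch, with server 2's hostname as a placeholder:

```shell
# Point the Jaeger client on server 1 at the agent running on server 2.
export JAEGER_SAMPLER_TYPE=probabilistic
export JAEGER_SAMPLER_PARAM=1
export JAEGER_AGENT_HOST=server2.example.com   # placeholder for server 2's hostname
export JAEGER_AGENT_PORT=6831                  # the agent's UDP port exposed by the docker run above
```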
Change the tracer registration code on server 1 as below, so that it picks up the configuration from the environment variables:
@Produces
@Singleton
public static io.opentracing.Tracer jaegerTracer() {
    String serverInstanceId = System.getProperty("application-server-id");
    if (serverInstanceId == null) {
        serverInstanceId = System.getenv("application-server-id");
    }
    return new Configuration("ApplicationName" + (serverInstanceId != null && !serverInstanceId.isEmpty() ? "-" + serverInstanceId : ""),
            Configuration.SamplerConfiguration.fromEnv(),
            Configuration.ReporterConfiguration.fromEnv())
            .getTracer();
}
Hope this works!
Check this link for integrating Elasticsearch as the persistent storage backend, so that traces are not lost once the Jaeger instance is stopped:
How to configure Jaeger with elasticsearch?
Specify "JAEGER_AGENT_HOST" and ensure "local_agent" is not specified in the tracer config file.
Below is a working solution for Python:
import os
os.environ['JAEGER_AGENT_HOST'] = "123.XXX.YYY.ZZZ"  # specify the remote Jaeger agent here
# os.environ['JAEGER_AGENT_PORT'] = "6831"  # optional, default: "6831"

from jaeger_client import Config

config = Config(
    config={
        'sampler': {
            'type': 'const',
            'param': 1,
        },
        # ENSURE 'local_agent' is not specified
        # 'local_agent': {
        #     'reporting_host': "127.0.0.1",
        #     'reporting_port': 6831,
        # },
        'logging': True,
    },
    service_name="your-service-name-here",
)
# create the tracer object here and voila!
Guidance of Jaeger: https://www.jaegertracing.io/docs/1.33/getting-started/
Jaeger-Client features: https://www.jaegertracing.io/docs/1.33/client-features/
Flask-OpenTracing: https://github.com/opentracing-contrib/python-flask
OpenTelemetry-Python: https://opentelemetry.io/docs/instrumentation/python/getting-started/

Elasticsearch docker burn data in image

I'm trying to build an elasticsearch image with preloaded data. I'm doing a restore operation from S3.
FROM elasticsearch:5.3.1
ARG bucket
ARG access_key
ARG secret_key
ARG repository
ARG snapshot
ENV ES_JAVA_OPTS="-Des.path.conf=/etc/elasticsearch"
RUN elasticsearch-plugin install repository-s3
ADD https://raw.githubusercontent.com/vishnubob/wait-for-it/e1f115e4ca285c3c24e847c4dd4be955e0ed51c2/wait-for-it.sh wait-for-it.sh
RUN chmod +x wait-for-it.sh
RUN /docker-entrypoint.sh elasticsearch -p /tmp/epid & ./wait-for-it.sh -t 0 localhost:9200 -- echo "Elasticsearch is ready!" && \
curl -H 'Content-Type: application/json' -X PUT "localhost:9200/_snapshot/$repository" -d '{ "type": "s3", "settings": { "bucket": "'$bucket'", "access_key": "'$access_key'", "secret_key": "'$secret_key'" } }' && \
curl -H "Content-Type: application/json" -X POST "localhost:9200/_snapshot/$repository/$snapshot/_restore?wait_for_completion=true" -d '{ "indices": "myindex", "ignore_unavailable": true, "index_settings": { "index.number_of_replicas": 0 }, "ignore_index_settings": [ "index.refresh_interval" ] }' && \
curl -H "Content-Type: application/json" -X GET "localhost:9200/_cat/indices"
RUN kill $(cat /tmp/epid) && wait $(cat /tmp/epid); exit 0;
CMD ["-E", "network.host=0.0.0.0", "-E", "discovery.zen.minimum_master_nodes=1"]
The image builds successfully, but when I start the container, the index is lost. I'm not using any volumes. What am I missing?
version: '2'
services:
  elasticsearch:
    container_name: "elasticsearch"
    build:
      context: ./elasticsearch/
      args:
        access_key: access_key_here
        secret_key: secret_key_here
        bucket: bucket_here
        repository: repository_here
        snapshot: snapshot_here
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      ES_JAVA_OPTS: "-Xms1g -Xmx1g -Des.path.conf=/etc/elasticsearch"
It seems that volumes cannot be baked into images. The directory that holds the generated data is declared as a volume by the parent image, so anything written there during the build is discarded. The only way around this is to fork the parent Dockerfile and remove the VOLUME instruction.
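Besides forking the parent Dockerfile, another workaround is to keep the restored data outside the declared volume path; the sketch below is untested and the directory name is an assumption, but it illustrates the idea:

```dockerfile
FROM elasticsearch:5.3.1
# The parent image declares VOLUME /usr/share/elasticsearch/data, so anything
# written there during `docker build` is discarded after each RUN step.
# A different, non-volume directory is kept in the image layers.
RUN mkdir -p /usr/share/elasticsearch/baked-data && \
    chown -R elasticsearch:elasticsearch /usr/share/elasticsearch/baked-data
# ...run the snapshot restore with -E path.data=/usr/share/elasticsearch/baked-data,
# and point the container at the same directory at start time:
CMD ["-E", "path.data=/usr/share/elasticsearch/baked-data", "-E", "network.host=0.0.0.0"]
```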

Dockerfile: How to replace a placeholder in environment variable with build-arg's?

I have a web application which I want to run on Docker for testing purposes.
The application uses a database as storage, and the database configuration is maintained in an environment variable (JSON).
Below you can see the env variable definition in my Dockerfile (see also my approaches below):
ENV CONFIG '{ \
    "credentials":{ \
        "hostname": "172.17.0.5", \
        "password": "PWD", \
        "port": "1234", \
        "username": "${USER}" \
    }, \
    "name":"database", \
    "tags":[] \
}, \
...
If I hardcode all the database parameters, everything works, but I don't want to change my Dockerfile just because the IP address of the database has changed.
Therefore I want to use Docker build-args.
I already tried two approaches:
Directly reference the variable (see line with "${USER}")
Replace a placeholder like "PWD" with the following command RUN CONFIG=$(echo $CONFIG | sed 's/PWD/'$db_pwd'/g')
The first approach results in no replacement, so ${USER} remains the literal ${USER}. The second approach seems to work (at least in a terminal), but the assignment does not persist: each RUN instruction runs in its own shell, so the modified CONFIG is discarded when that step finishes.
Do you have any idea how I can make this work? Feel free to suggest other approaches. I just don't want to have hardcoded parameters in my Dockerfile.
Thanks!
Variable expansion only works in double-quoted strings. This works:
ENV CONFIG "{ \
    \"credentials\":{ \
        \"hostname\": \"172.17.0.5\", \
        \"password\": \"PWD\", \
        \"port\": \"1234\", \
        \"username\": \"${USER}\" \
    }, \
    \"name\":\"database\", \
    \"tags\":[] \
}"
A simple example:
FROM alpine
ENV USER foo
ENV CONFIG "{ \
    \"credentials\":{ \
        \"hostname\": \"172.17.0.5\", \
        \"password\": \"PWD\", \
        \"port\": \"1234\", \
        \"username\": \"${USER}\" \
    }, \
    \"name\":\"database\", \
    \"tags\":[] \
}"
ENTRYPOINT env | sort
$ docker build -t test .
$ docker run -it --rm test
CONFIG={ "credentials":{ "hostname": "172.17.0.5", "password": "PWD", "port": "1234", "username": "foo" }, "name":"database", "tags":[] }
HOME=/root
HOSTNAME=43d29bd12bc5
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PWD=/
SHLVL=1
TERM=xterm
USER=foo
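Building on this, the hardcoded values can be supplied at build time with --build-arg, since ARG values are in scope for ENV environment replacement (a sketch; db_user is a hypothetical argument name):

```dockerfile
FROM alpine
# Hypothetical build argument; pass it with:
#   docker build --build-arg db_user=foo .
ARG db_user
# ARG values take part in environment replacement, so the JSON picks up
# the build-time value without editing the Dockerfile.
ENV CONFIG "{ \"credentials\": { \"username\": \"${db_user}\" } }"
ENTRYPOINT env | sort
```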
