My aim is to deploy a container-labelling-webhook solution onto my AKS cluster using Flux CD v2. Once I have it operational, I want to roll it out to multiple clusters.
Command used to bootstrap the AKS cluster (I mean the Flux installation):
flux bootstrap git --url=https://github.xxxxxx.com/user1/test-repo.git --username=$GITHUB_USER --password=$GITHUB_TOKEN --token-auth=true --path=clusters/my-cluster
✔ Kustomization reconciled successfully
► confirming components are healthy
✔ helm-controller: deployment ready
✔ kustomize-controller: deployment ready
✔ notification-controller: deployment ready
✔ source-controller: deployment ready
✔ all components are healthy
Now I am trying to deploy my Helm charts. Note that the Helm chart deployment works fine by itself, just not via Flux.
flux create source helm label-webhook --url https://github.xxxxxx.com/user1/test-repo/tree/main/chart --namespace label-webhook --cert-file=./tls/label-webhook.pem --key-file=./tls/label-webhook-key.pem --ca-file=./tls/ca.pem --verbose
✚ generating HelmRepository source
► applying secret with repository credentials
✔ authentication configured
► applying HelmRepository source
✔ source created
◎ waiting for HelmRepository source reconciliation
✗ failed to fetch Helm repository index: failed to cache index to temporary file: Get "https://github.xxxxxx.com/user1/test-repo/tree/main/chart/index.yaml": x509: certificate signed by unknown authority
I am generating certs with the process below:
cat << EOF > ca-config.json
{
  "signing": {
    "default": {
      "expiry": "43830h"
    },
    "profiles": {
      "default": {
        "usages": ["signing", "key encipherment", "server auth", "client auth"],
        "expiry": "43830h"
      }
    }
  }
}
EOF
cat << EOF > ca-csr.json
{
  "hosts": [
    "cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 4096
  },
  "names": [
    {
      "C": "AU",
      "L": "Melbourne",
      "O": "xxxxxx",
      "OU": "Container Team",
      "ST": "aks1-dev"
    }
  ]
}
EOF
docker run -it --rm -v ${PWD}:/work -w /work debian bash
apt-get update && apt-get install -y curl &&
curl -L https://github.com/cloudflare/cfssl/releases/download/v1.5.0/cfssl_1.5.0_linux_amd64 -o /usr/local/bin/cfssl && \
curl -L https://github.com/cloudflare/cfssl/releases/download/v1.5.0/cfssljson_1.5.0_linux_amd64 -o /usr/local/bin/cfssljson && \
chmod +x /usr/local/bin/cfssl && \
chmod +x /usr/local/bin/cfssljson
cfssl gencert -initca ca-csr.json | cfssljson -bare /tmp/ca
cfssl gencert \
-ca=/tmp/ca.pem \
-ca-key=/tmp/ca-key.pem \
-config=ca-config.json \
-hostname="mutation-label-webhook,mutation-label-webhook.label-webhook.svc.cluster.local,mutation-label-webhook.label-webhook.svc,localhost,127.0.0.1" \
-profile=default \
ca-csr.json | cfssljson -bare /tmp/label-webhook
root@91bc7986cb94:/work# ls -alrth /tmp/
total 32K
drwxr-xr-x 1 root root 4.0K Jul 29 04:42 ..
-rw-r--r-- 1 root root 2.0K Jul 29 04:43 ca.pem
-rw-r--r-- 1 root root 1.8K Jul 29 04:43 ca.csr
-rw------- 1 root root 3.2K Jul 29 04:43 ca-key.pem
-rw-r--r-- 1 root root 2.2K Jul 29 04:43 label-webhook.pem
-rw-r--r-- 1 root root 1.9K Jul 29 04:43 label-webhook.csr
-rw------- 1 root root 3.2K Jul 29 04:43 label-webhook-key.pem
drwxrwxrwt 1 root root 4.0K Jul 29 04:43 .
root@91bc7986cb94:/work#
root@83faa77cd5f6:/work# cp -apvf /tmp/* .
'/tmp/ca-key.pem' -> './ca-key.pem'
'/tmp/ca.csr' -> './ca.csr'
'/tmp/ca.pem' -> './ca.pem'
'/tmp/label-webhook-key.pem' -> './label-webhook-key.pem'
'/tmp/label-webhook.csr' -> './label-webhook.csr'
'/tmp/label-webhook.pem' -> './label-webhook.pem'
root@83faa77cd5f6:/work# pwd
/work
chmod -R 777 tls/
helm upgrade --install mutation chart --namespace label-webhook --create-namespace --set secret.cert=$(cat tls/label-webhook.pem | base64 | tr -d '\n') --set secret.key=$(cat tls/label-webhook-key.pem | base64 | tr -d '\n') --set secret.cabundle=$(openssl base64 -A <"tls/ca.pem")
I am totally confused as to how to get Flux working here.
Flux doesn't trust the certificate presented by your Git server github.xxxxxx.com.
A quick workaround is to use the --insecure-skip-tls-verify flag as described here: https://fluxcd.io/docs/cmd/flux_bootstrap_git/
Full command:
flux create source helm label-webhook --url https://github.xxxxxx.com/user1/test-repo/tree/main/chart --namespace label-webhook --cert-file=./tls/label-webhook.pem --key-file=./tls/label-webhook-key.pem --ca-file=./tls/ca.pem --verbose --insecure-skip-tls-verify
It's interesting that there wasn't a problem with the flux bootstrap git step, but that step probably just creates the configuration for the repository rather than establishing a connection to it.
Whatever certificates you are generating have nothing to do with your Git server's TLS certificate. It seems you're doing some admission-webhook magic, but the certs you generate there have nothing in common with the github.xxxxxx.com certificate, so there is no need to specify them in the --ca-file flag.
The permanent solution is to get the CA certificate that signed github.xxxxxx.com: ask the administrators of the Git server to provide you the CA file and specify that one in the --ca-file flag, not the one you created for your webhook experiments.
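If getting the CA file from the Git server administrators takes a while, one way to capture what the server presents is with openssl. This is only a minimal sketch: it dumps every certificate in the presented chain, including the leaf, so trim the file down to the issuing CA where possible, and the hostname is just the placeholder used above.
# Capture the certificate chain presented by the internal Git server
openssl s_client -connect github.xxxxxx.com:443 -showcerts </dev/null \
  | sed -n '/BEGIN CERTIFICATE/,/END CERTIFICATE/p' > github-ca-chain.pem
# Recreate the HelmRepository source with that CA bundle instead of the webhook CA
flux create source helm label-webhook \
  --url=https://github.xxxxxx.com/user1/test-repo/tree/main/chart \
  --namespace=label-webhook \
  --ca-file=./github-ca-chain.pem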
Team,
My requirement is to mount a bash script from a ConfigMap into a directory inside the container and run it to clone a repo, but I am getting the message below:
/bin/bash: line 5: ./repo/clone.sh: No such file or directory
I cannot run the file above, but I can cat it fine. I have tried my best and am still trying to find the cause, but no luck so far.
cron job
spec:
  concurrencyPolicy: Allow
  jobTemplate:
    metadata:
    spec:
      template:
        metadata:
        spec:
          containers:
          - args:
            - -c
            - |
              set -x
              pwd && ls
              ls -ltr /
              cat /repo/clone.sh
              ./repo/clone.sh
              pwd
            command:
            - /bin/bash
            envFrom:
            - configMapRef:
                name: sonarscanner-configmap
            image: artifactory.build.team.com/product-containers/user/sonarqube-scanner:4.7.0.2747
            imagePullPolicy: IfNotPresent
            name: sonarqube-sonarscanner
            securityContext:
              runAsUser: 0
            volumeMounts:
            - mountPath: /repo
              name: repo-checkout
          dnsPolicy: ClusterFirst
          initContainers:
          - args:
            - -c
            - cd /
            command:
            - /bin/sh
            image: busybox
            imagePullPolicy: IfNotPresent
            name: clone-repo
            securityContext:
              privileged: true
            volumeMounts:
            - mountPath: /repo
              name: repo-checkout
              readOnly: true
          restartPolicy: OnFailure
          securityContext:
            fsGroup: 0
          volumes:
          - configMap:
              defaultMode: 420
              name: product-configmap
            name: repo-checkout
  schedule: '*/1 * * * *'
ConfigMap
kind: ConfigMap
metadata:
apiVersion: v1
data:
  clone.sh: |-
    #!bin/bash
    set -xe
    apk add git curl
    #Containers that fail to resolve repo url can use below step.
    repo_url=$(nslookup ${CODE_REPO_URL} | grep Non -A 2 | grep Name | cut -d: -f2)
    repo_ip=$(nslookup ${CODE_REPO_URL} | grep Non -A 2 | grep Address | cut -d: -f2)
    if grep ${repo_url} /etc/hosts; then
      echo "git dns entry exists locally"
    else
      echo "Adding dns entry for git inside container"
      echo ${repo_ip} ${repo_url} >> /etc/hosts
    fi
    cd / && cat /etc/hosts && pwd
    git clone "https://$RU:$RT@${CODE_REPO_URL}/r/a/${CODE_REPO_NAME}" && \
    (cd "${CODE_REPO_NAME}" && mkdir -p .git/hooks && \
    curl -Lo `git rev-parse --git-dir`/hooks/commit-msg \
    https://$RU:$RT@${CODE_REPO_URL}/r/tools/hooks/commit-msg; \
    chmod +x `git rev-parse --git-dir`/hooks/commit-msg)
    cd ${CODE_REPO_NAME}
    pwd
Output of pod describe
Warning FailedCreatePodSandBox 1s kubelet, node1 Failed create pod sandbox: rpc error: code = Unknown desc = failed to start sandbox container for pod "sonarqube-cronjob-1670256720-fwv27": Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "process_linux.go:303: getting the final child's pid from pipe caused \"EOF\"": unknown
pod logs
+ pwd
+ ls
/usr/src
+ ls -ltr /repo/clone.sh
lrwxrwxrwx 1 root root 15 Dec 5 16:26 /repo/clone.sh -> ..data/clone.sh
+ ls -ltr
total 60
.
drwxr-xr-x 2 root root 4096 Aug 9 08:58 sbin
drwx------ 2 root root 4096 Aug 9 08:58 root
drwxr-xr-x 2 root root 4096 Aug 9 08:58 mnt
drwxr-xr-x 5 root root 4096 Aug 9 08:58 media
drwxrwsrwx 3 root root 4096 Dec 5 16:12 repo <<<<< MY MOUNTED DIR
.
+ cat /repo/clone.sh
#!bin/bash
set -xe
apk add git curl
#Containers that fail to resolve repo url can use below step.
repo_url=$(nslookup ${CODE_REPO_URL} | grep Non -A 2 | grep Name | cut -d: -f2)
repo_ip=$(nslookup ${CODE_REPO_URL} | grep Non -A 2 | grep Address | cut -d: -f2)
if grep ${repo_url} /etc/hosts; then
echo "git dns entry exists locally"
else
echo "Adding dns entry for git inside container"
echo ${repo_ip} ${repo_url} >> /etc/hosts
fi
cd / && cat /etc/hosts && pwd
git clone "https://$RU:$RT#${CODE_REPO_URL}/r/a/${CODE_REPO_NAME}" && \
(cd "${CODE_REPO_NAME}" && mkdir -p .git/hooks && \
curl -Lo `git rev-parse --git-dir`/hooks/commit-msg \
https://$RU:$RT#${CODE_REPO_URL}/r/tools/hooks/commit-msg; \
chmod +x `git rev-parse --git-dir`/hooks/commit-msg)
cd code_dir
+ ./repo/clone.sh
/bin/bash: line 5: ./repo/clone.sh: No such file or directory
+ pwd
pwd/usr/src
Assuming the working directory is different from /:
If you want to source your script in the current bash process (the shorthand is a leading dot), you have to add a space between the dot and the path:
. /repo/clone.sh
If you want to execute it in a child process, remove the dot:
/repo/clone.sh
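To make the difference concrete, here is a minimal sketch; the /usr/src working directory is taken from the pod logs above.
cd /usr/src
./repo/clone.sh    # relative: resolves to /usr/src/repo/clone.sh -> "No such file or directory"
/repo/clone.sh     # absolute: runs the mounted script in a child process (needs the execute bit)
. /repo/clone.sh   # absolute, with "dot space": sources the script into the current shell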
I struggled a bit to make PyGreSQL work in AWS Lambda (Python 3.9) to connect to an Aurora PostgreSQL instance. Searching Google and Stack Overflow didn't return any relevant results; most of the hits were about making psycopg2 work with AWS Lambda. So I'm leaving the following out here for anybody else having the same issue and trying to figure out a solution.
Here is my Lambda code.
import boto3
import cfnresponse
import logging
import os
import sys
# import DB-API 2.0 compliant module for PygreSQL
from pgdb import connect
from botocore.exceptions import ClientError
import json

logger = logging.getLogger()
logger.setLevel(logging.INFO)

DBHOST = os.environ['DBHost']
DBPORT = os.environ['DBPort']
DBNAME = os.environ['DBName']
DBUSER = os.environ['DBUser']
SECRET_ARN = os.environ['Secret_ARN']
REGION_NAME = os.environ['Region_Name']

def handler(event, context):
    try:
        responseData = {}
        try:
            DBPASS = get_secret(SECRET_ARN, REGION_NAME)
            # Connection to SSL enabled Aurora PG database using RDS root certificate
            HOSTPORT = DBHOST + ':' + str(DBPORT)
            my_connection = connect(database=DBNAME, host=HOSTPORT, user=DBUSER, password=DBPASS, sslmode='require', sslrootcert='rds-combined-ca-bundle.pem')
            logger.info("SUCCESS: Connection to RDS PG instance succeeded")
        except Exception as e:
            logger.error('Exception: ' + str(e))
            logger.error("ERROR: Unexpected error: Couldn't connect to Aurora PostgreSQL instance.")
            responseData['Data'] = "ERROR: Unexpected error: Couldn't connect to Aurora PostgreSQL instance."
            cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, "None")
            sys.exit()
        if event['RequestType'] == 'Create':
            try:
                with my_connection.cursor() as cur:
                    # Execute bootstrap SQLs
                    cur.execute("create extension if not exists pg_stat_statements")
                    cur.execute("create extension if not exists pgaudit")
                    my_connection.commit()
                    cur.close()
                my_connection.close()
                responseData['Data'] = "SUCCESS: Executed SQL statements successfully."
                cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, "None")
            except Exception as e:
                logger.error('Exception: ' + str(e))
                responseData['Data'] = "ERROR: Exception encountered!"
                cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, "None")
        else:
            responseData['Data'] = "{} is unsupported stack operation for this lambda function.".format(event['RequestType'])
            cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, "None")
    except Exception as e:
        logger.error('Exception: ' + str(e))
        responseData['Data'] = str(e)
        cfnresponse.send(event, context, cfnresponse.SUCCESS, responseData, "None")

def get_secret(secret_arn, region_name):
    # Create a Secrets Manager client
    session = boto3.session.Session()
    client = session.client(
        service_name='secretsmanager',
        region_name=region_name
    )
    try:
        get_secret_value_response = client.get_secret_value(
            SecretId=secret_arn
        )
    except ClientError as e:
        if e.response['Error']['Code'] == 'DecryptionFailureException':
            logger.error("Secrets Manager can't decrypt the protected secret text using the provided KMS key")
        elif e.response['Error']['Code'] == 'InternalServiceErrorException':
            logger.error("An error occurred on the server side")
        elif e.response['Error']['Code'] == 'InvalidParameterException':
            logger.error("You provided an invalid value for a parameter")
        elif e.response['Error']['Code'] == 'InvalidRequestException':
            logger.error("You provided a parameter value that is not valid for the current state of the resource")
        elif e.response['Error']['Code'] == 'ResourceNotFoundException':
            logger.error("We can't find the resource that you asked for")
    else:
        # Decrypts secret using the associated KMS CMK.
        secret = json.loads(get_secret_value_response['SecretString'])['password']
        return secret
I used my Cloud9 Amazon Linux 2 instance to create the Lambda zip package. I installed Python 3.9 following https://computingforgeeks.com/how-to-install-python-on-amazon-linux/ and installed PyGreSQL using the following commands:
mkdir pygresql
pip3.9 install --target ./pygresql PyGreSQL
I included the contents of the pygresql directory in the Lambda package containing the Lambda code.
Lambda was showing the following error during my test:
Cannot import shared library for PyGreSQL probably because no libpq.so is installed libldap_r-2.4.so.2: cannot open shared object file: No such file or directory
This is because AWS Lambda is missing the required PostgreSQL libraries in the AMI image. To fix this, I had to do the following:
Install PostgreSQL 14.3 on my Cloud9 instance. It's important to run the configure command with the --with-openssl option if you want to connect to an RDS/Aurora PostgreSQL instance where rds.force_ssl is set to 1.
sudo yum -y group install "Development Tools"
sudo yum -y install readline-devel
sudo yum -y install openssl-devel
mkdir /home/ec2-user/postgresql
cd /home/ec2-user/postgresql
curl https://ftp.postgresql.org/pub/source/v14.3/postgresql-14.3.tar.gz -o postgresql-14.3.tar.gz >> /debug.log
tar -xvf postgresql-14.3.tar.gz
cd postgresql-14.3
sudo ./configure --with-openssl
sudo make -C src/bin install
sudo make -C src/include install
sudo make -C src/interfaces install
sudo make -C doc install
sudo /sbin/ldconfig /usr/local/pgsql/lib
Then I copied the following files from the /usr/local/pgsql/lib/ directory and included them in the lib directory of the Lambda package containing the Lambda code (a rough sketch of this step follows the listing):
-rw-r--r-- 1 ec2-user ec2-user 287982 Aug 2 06:15 libpq.a
-rwxr-xr-x 1 ec2-user ec2-user 332432 Aug 2 06:15 libpq.so
-rwxr-xr-x 1 ec2-user ec2-user 332432 Aug 2 06:15 libpq.so.5
-rwxr-xr-x 1 ec2-user ec2-user 332432 Aug 2 06:16 libpq.so.5.14
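For reference, a minimal sketch of that copy-and-package step, assuming the Lambda code lives in a local dbbootstrap/ directory (the directory name is only an illustration):
# Copy the freshly built client libraries next to the Lambda code
mkdir -p dbbootstrap/lib
cp /usr/local/pgsql/lib/libpq.so* dbbootstrap/lib/
# Re-zip the deployment package with the libraries included
cd dbbootstrap && zip -r ../dbbootstrap.zip .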
Here are the contents of my lambda package:
drwxr-xr-x 1 1049089 0 Aug 1 15:25 PyGreSQL-5.2.4-py3.9.egg-info/
drwxr-xr-x 1 1049089 0 Aug 1 15:25 __pycache__/
-rw-r--r-- 1 1049089 345184 Aug 2 05:16 _pg.cpython-39-x86_64-linux-gnu.so
drwxr-xr-x 1 1049089 0 Aug 1 15:20 certifi/
drwxr-xr-x 1 1049089 0 Aug 1 15:20 certifi-2019.11.28.dist-info/
-rw-r--r-- 1 1049089 1845 Mar 23 2020 cfnresponse.py
drwxr-xr-x 1 1049089 0 Aug 1 15:20 chardet/
drwxr-xr-x 1 1049089 0 Aug 1 15:22 chardet-3.0.4.dist-info/
-rw-r--r-- 1 1049089 4391 Mar 23 2020 dbbootstrap.py
-rw-r--r-- 1 1049089 2094165 Aug 1 23:20 dbbootstrap.zip
drwxr-xr-x 1 1049089 0 Aug 1 15:22 idna/
drwxr-xr-x 1 1049089 0 Aug 1 15:22 idna-2.8.dist-info/
drwxr-xr-x 1 1049089 0 Aug 1 15:23 lib/
-rwxr-xr-x 1 1049089 104780 Mar 26 17:20 pg.py*
-rwxr-xr-x 1 1049089 66051 Mar 26 17:20 pgdb.py*
-rw-r--r-- 1 1049089 65484 Mar 23 2020 rds-combined-ca-bundle.pem
drwxr-xr-x 1 1049089 0 Aug 1 15:23 requests/
drwxr-xr-x 1 1049089 0 Aug 1 15:23 requests-2.22.0.dist-info/
drwxr-xr-x 1 1049089 0 Aug 1 15:23 urllib3/
drwxr-xr-x 1 1049089 0 Aug 1 15:25 urllib3-1.25.8.dist-info/
AWS Lambda was happy after this and was able to connect to the PostgreSQL instance.
Thanks @Arabinda for this; nothing else really worked for me while trying to use psycopg2, and I accidentally found your guide. I was able to import psycopg2 this way, just using
pip3.9 install --target ./pygresql PyGreSQL psycopg2
instead of
pip3.9 install --target ./pygresql PyGreSQL
After that, I just zipped the contents under python/lib/python3.9/site-packages and uploaded it as a Layer.
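For anyone following the same route, a minimal sketch of packaging and publishing such a layer; the layer name and paths are just examples:
# Run from the directory that contains the python/ folder
# (python/lib/python3.9/site-packages holds the pip-installed packages)
zip -r pygresql-layer.zip python
# Publish the zip as a Lambda layer
aws lambda publish-layer-version \
  --layer-name pygresql \
  --zip-file fileb://pygresql-layer.zip \
  --compatible-runtimes python3.9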
We stumbled across the same problem and didn't want to provision a whole new server, so we ended up with a Dockerfile building the layer zip for us. In case somebody could use it:
# https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-image-repositories.html
FROM public.ecr.aws/sam/build-python3.9:latest
# https://ftp.postgresql.org/pub/source/
ARG postgresql_version=14.6
RUN yum upgrade -y && \
yum install -y openssl-devel openssl-static
WORKDIR /tmp
ENV PREFIX /tmp/local
RUN \
curl -fsSL https://ftp.postgresql.org/pub/source/v${postgresql_version}/postgresql-${postgresql_version}.tar.bz2 \
-o postgresql-${postgresql_version}.tar.bz2
RUN \
mkdir ${PREFIX} && \
tar jxf postgresql-${postgresql_version}.tar.bz2 && \
cd postgresql-${postgresql_version} && \
./configure \
--prefix=${PREFIX} \
--with-openssl \
--without-readline \
&& \
make install
RUN \
export PATH="$PATH:${PREFIX}/bin/" && \
mkdir /tmp/python && \
mkdir /tmp/lib && \
mkdir /tmp/lib64 && \
pip3 install --target /tmp/python PyGreSQL
RUN \
cd ${PREFIX} && \
cp lib/libpq.so* /tmp/lib && \
cp /lib64/libssl.so.10 /tmp/lib64
RUN \
cd /tmp && \
zip -r /tmp/libs.zip ./python ./lib ./lib64
ENTRYPOINT ["cat", "/tmp/libs.zip"]
Run
docker run mybuild:latest > layer.zip
to extract the zip from the Docker image.
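For completeness, a hypothetical build step to go with it, assuming the Dockerfile above sits in the current directory and mybuild is just an example tag:
# Build the layer-producing image, then run it to dump the zip to stdout
docker build -t mybuild .
docker run --rm mybuild:latest > layer.zip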
When I start a Linux server with cloud-init, I have a few scripts in /etc/cloud/cloud.cfg.d/ and they run in reverse alphabetical order:
# ll /etc/cloud/cloud.cfg.d/
total 28
-rw-r--r-- 1 root root 173 Dec 10 12:38 00-cloudinit-lifecycle-hook.cfg
-rw-r--r-- 1 root root 2120 Jun 1 2021 05_logging.cfg
-rw-r--r-- 1 root root 590 Oct 26 17:55 10_aws_yumvars.cfg
-rw-r--r-- 1 root root 29 Dec 1 18:22 20_amazonlinux_repo_https.cfg
-rw-r--r-- 1 root root 586 Dec 10 12:38 50-cloudinit-tomcat.cfg
-rw-r--r-- 1 root root 585 Dec 10 12:40 60-cloudinit-newrelic.cfg
The last to execute is 00-cloudinit-lifecycle-hook.cfg, in which I complete the lifecycle for the Auto Scaling Group with a CONTINUE. The ASG fails if it doesn't receive this signal within a given timeout.
The issue is that even if there's an error in 50-cloudinit-tomcat.cfg, it still runs 00-cloudinit-lifecycle-hook.cfg instead of stopping.
How can I ensure cloud-init stops and never reaches the last script? I would like the ASG to never receive the CONTINUE signal if there's any error.
Here are the files:
EC2 instance user-data:
#cloud-config
bootcmd:
  - [cloud-init-per, once, "app-volume", mkfs, -t, "ext4", "/dev/nvme1n1"]
mounts:
  - ["/dev/nvme1n1", "/app-volume", "ext4", "defaults,nofail", "0", "0"]
merge_how:
  - name: list
    settings: [append]
  - name: dict
    settings: [no_replace, recurse_list]
50-cloudinit-tomcat.cfg
#cloud-config
merge_how:
  - name: list
    settings: [append]
  - name: dict
    settings: [no_replace, recurse_list]
runcmd:
  - "#!/bin/bash -e"
  - set +x
  - echo ' '
  - echo '# ===================================='
  - echo '# Tomcat Cloud Init '
  - echo '# /etc/cloud/cloud.cfg.d/'
  - echo '# ===================================='
  - echo ' '
  - echo '#===================================='
  - echo '# Run Ansible'
  - echo '#===================================='
  - echo ' '
  - set -x
  - ansible-playbook /opt/init-config/tomcat/tomcat-config.yaml
When I run ansible-playbook /opt/init-config/tomcat/tomcat-config.yaml directly on the instance I get an error, and I know it returns 2:
ansible-playbook /opt/init-config/tomcat/tomcat-config.yaml #shows errors
echo $? # shows "2"
00-cloudinit-lifecycle-hook.cfg
#cloud-config
merge_how:
  - name: list
    settings: [append]
  - name: dict
    settings: [no_replace, recurse_list]
runcmd:
  - "/opt/lifecycles/lifecycle-hook-continue.sh"
An alternative I can think of is to send an ABANDON signal instead of CONTINUE as soon as there's an error in one of the cloud-init configs. But I can't find anything in the documentation on how to detect whether there's been an error.
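As a rough illustration of that ABANDON alternative (purely a sketch, not something from the cloud-init docs; the hook name, ASG name, and marker-file path are hypothetical), the playbook step could drop a marker on failure and the lifecycle script could branch on it:
#!/bin/bash
# Hypothetical replacement for /opt/lifecycles/lifecycle-hook-continue.sh.
# Assumes the Tomcat runcmd step were changed to something like:
#   ansible-playbook /opt/init-config/tomcat/tomcat-config.yaml || touch /var/run/cloud-init-failed
RESULT=CONTINUE
if [ -f /var/run/cloud-init-failed ]; then
  RESULT=ABANDON
fi
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
aws autoscaling complete-lifecycle-action \
  --lifecycle-action-result "$RESULT" \
  --lifecycle-hook-name my-launch-hook \
  --auto-scaling-group-name my-asg \
  --instance-id "$INSTANCE_ID"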
I tried to set up a persistent data store for the REST server but was unable to do it. I am posting the steps which I followed below.
Steps I followed to set up a persistent data store for the REST server:
Started an instance of MongoDB:
root@ubuntu:~# docker run -d --name mongo --network composer_default -p 27017:27017 mongo
dda3340e4daf7b36a244c5f30772f50a4ee1e8f81cc7fc5035f1090cdcf46c58
Created a new, empty directory. Created a new file named Dockerfile in the new directory, with the following contents:
FROM hyperledger/composer-rest-server
RUN npm install --production loopback-connector-mongodb passport-github && \
npm cache clean && \
ln -s node_modules .node_modules
Changed into the directory created in step 2, and built the Docker image:
root@ubuntu:~# cd examples/dir/
root@ubuntu:~/examples/dir# ls
Dockerfile ennvars.txt
root@ubuntu:~/examples/dir# docker build -t myorg/my-composer-rest-server .
Sending build context to Docker daemon 4.096 kB
Step 1/2 : FROM hyperledger/composer-rest-server
---> 77cd6a591726
Step 2/2 : RUN npm install --production loopback-connector-couch passport-github && npm cache clean && ln -s node_modules .node_modules
---> Using cache
---> 2ff9537656d1
Successfully built 2ff9537656d1
root@ubuntu:~/examples/dir#
Created a file named ennvars.txt in the same directory.
The contents are as follows:
COMPOSER_CONNECTION_PROFILE=hlfv1
COMPOSER_BUSINESS_NETWORK=blockchainv5
COMPOSER_ENROLLMENT_ID=admin
COMPOSER_ENROLLMENT_SECRET=adminpw
COMPOSER_NAMESPACES=never
COMPOSER_SECURITY=true
COMPOSER_CONFIG='{
  "type": "hlfv1",
  "orderers": [
    {
      "url": "grpc://localhost:7050"
    }
  ],
  "ca": {
    "url": "http://localhost:7054",
    "name": "ca.example.com"
  },
  "peers": [
    {
      "requestURL": "grpc://localhost:7051",
      "eventURL": "grpc://localhost:7053"
    }
  ],
  "keyValStore": "/home/ubuntu/.hfc-key-store",
  "channel": "mychannel",
  "mspID": "Org1MSP",
  "timeout": "300"
}'
COMPOSER_DATASOURCES='{
  "db": {
    "name": "db",
    "connector": "mongodb",
    "host": "mongo"
  }
}'
COMPOSER_PROVIDERS='{
  "github": {
    "provider": "github",
    "module": "passport-github",
    "clientID": "a88810855b2bf5d62f97",
    "clientSecret": "f63e3c3c65229dc51f1c8964b05e9717bf246279",
    "authPath": "/auth/github",
    "callbackURL": "/auth/github/callback",
    "successRedirect": "/",
    "failureRedirect": "/"
  }
}'
Loaded the env variables with the following command:
root@ubuntu:~/examples/dir# source ennvars.txt
Started the docker container with the command below:
root@ubuntu:~/examples/dir# docker run \
-d \
-e COMPOSER_CONNECTION_PROFILE=${COMPOSER_CONNECTION_PROFILE} \
-e COMPOSER_BUSINESS_NETWORK=${COMPOSER_BUSINESS_NETWORK} \
-e COMPOSER_ENROLLMENT_ID=${COMPOSER_ENROLLMENT_ID} \
-e COMPOSER_ENROLLMENT_SECRET=${COMPOSER_ENROLLMENT_SECRET} \
-e COMPOSER_NAMESPACES=${COMPOSER_NAMESPACES} \
-e COMPOSER_SECURITY=${COMPOSER_SECURITY} \
-e COMPOSER_CONFIG="${COMPOSER_CONFIG}" \
-e COMPOSER_DATASOURCES="${COMPOSER_DATASOURCES}" \
-e COMPOSER_PROVIDERS="${COMPOSER_PROVIDERS}" \
--name rest \
--network composer_default \
-p 3000:3000 \
myorg/my-composer-rest-server
942eb1bfdbaf5807b1fe2baa2608ab35691e9b6912fb0d3b5362531b8adbdd3a
It got executed successfully, so now I should be able to access the persistent and secured REST server by going to the LoopBack explorer page.
But when I tried to open the above URL I got the error below.
Error Image
Have I missed any step or done something wrong?
Two things:
1. You need to put export in front of the envvars in your ennvars.txt file (see the sketch after this list).
2. Check the version of Composer you are running. The FROM hyperledger/composer-rest-server command will pull the latest version of the REST server down, and if your Composer version is not updated, the two will be incompatible.
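A minimal sketch of what the top of ennvars.txt looks like with that change (values copied from the question):
# ennvars.txt -- each variable is exported so it is visible to the shell
# that later runs docker run -e VAR=${VAR} ...
export COMPOSER_CONNECTION_PROFILE=hlfv1
export COMPOSER_BUSINESS_NETWORK=blockchainv5
export COMPOSER_ENROLLMENT_ID=admin
export COMPOSER_ENROLLMENT_SECRET=adminpw
export COMPOSER_NAMESPACES=never
export COMPOSER_SECURITY=true
# ...and likewise for COMPOSER_CONFIG, COMPOSER_DATASOURCES and COMPOSER_PROVIDERS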
On a RHEL6 system, I followed the steps laid out here to create a repository and capture a snapshot prior to my upgrade. I verified the existence of the snapshot:
curl 'localhost:9200/_snapshot/_all?pretty=true'
Which gave me the following result:
{ "upgrade_backup" : {
"type" : "fs",
"settings" : {
"compress" : "true",
"location" : "/tmp/elasticsearch-backup"
} } }
After upgrading Elasticsearch via yum, I went to restore my snapshot but none are showing up:
curl 'localhost:9200/_snapshot/_all?pretty=true'
{ }
I checked on the file system and see the repository files:
ls -lrt /tmp/elasticsearch-backup
total 24
-rw-r--r--. 1 elasticsearch elasticsearch 121 Apr 7 14:42 meta-snapshot-number-one.dat
drwxr-xr-x. 3 elasticsearch elasticsearch 21 Apr 7 14:42 indices
-rw-r--r--. 1 elasticsearch elasticsearch 191 Apr 7 14:42 snap-snapshot-number-one.dat
-rw-r--r--. 1 elasticsearch elasticsearch 37 Apr 7 14:42 index
-rw-r--r--. 1 elasticsearch elasticsearch 188 Apr 7 14:51 index-0
-rw-r--r--. 1 elasticsearch elasticsearch 8 Apr 7 14:51 index.latest
-rw-r--r--. 1 elasticsearch elasticsearch 29 Apr 7 14:51 incompatible-snapshots
I made sure elasticsearch.yml still has the "data.repo" tag, so I'm not sure where to look or what to do to determine what happened, but somehow my snapshots vanished!
You need to add the following line to elasticsearch.yml:
path.repo: ["/tmp/elasticsearch-backup"]
Then restart the Elasticsearch service and create a new snapshot repository:
curl -XPUT "http://localhost:92000/_snapshot/backup" -H 'Content-Type: application/json' -d '{
"type": "fs",
"settings": {
"location": "/tmp/elasticsearch-backup",
"compress": true
}
}'
Now you should be able to list all snapshots in your repository and eventually restore them:
curl -s -XGET "localhost:9200/_snapshot/backup/_all" | jq .
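Once the repository is registered on top of the existing files, the old snapshot should show up again and can be restored. A sketch, assuming the snapshot is named snapshot-number-one as the snap-snapshot-number-one.dat file above suggests:
# Restore the pre-upgrade snapshot by name (close or delete conflicting indices first)
curl -XPOST "localhost:9200/_snapshot/backup/snapshot-number-one/_restore"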