consul-boshrelease deployment errors - amazon-ec2

I followed their readme:
https://github.com/cloudfoundry-community/consul-boshrelease/
Result:
Deploying
---------
Director task 1990
Deprecation: Ignoring cloud config. Manifest contains 'networks' section.
Started preparing deployment > Preparing deployment. Done (00:00:00)
Started preparing package compilation > Finding packages to compile. Done (00:00:00)
Started compiling packages
Started compiling packages > envconsul/90d4cc3b4e290c3833cf5e32d0b5c99f4a63c0be
Started compiling packages > consul-template/561a4a5d99c375822876d5482ed24f790a0e216b
Started compiling packages > consul/30f12d1e70d89f28b34a433d2b885a03ae41adae
Failed compiling packages > consul-template/561a4a5d99c375822876d5482ed24f790a0e216b: Unknown CPI error 'InvalidCall' with message 'Arguments are not correct, details: 'expected params[:filters][0][:values][0] to be a String, got value nil (class: NilClass) instead.'' in 'create_vm' CPI method (00:00:12)
Failed compiling packages > envconsul/90d4cc3b4e290c3833cf5e32d0b5c99f4a63c0be: Unknown CPI error 'InvalidCall' with message 'Arguments are not correct, details: 'expected params[:filters][0][:values][0] to be a String, got value nil (class: NilClass) instead.'' in 'create_vm' CPI method (00:00:12)
Failed compiling packages > consul/30f12d1e70d89f28b34a433d2b885a03ae41adae: Unknown CPI error 'InvalidCall' with message 'Arguments are not correct, details: 'expected params[:filters][0][:values][0] to be a String, got value nil (class: NilClass) instead.'' in 'create_vm' CPI method (00:00:12)
Failed compiling packages (00:00:12)
Error 100: Unknown CPI error 'InvalidCall' with message 'Arguments are not correct, details: 'expected params[:filters][0][:values][0] to be a String, got value nil (class: NilClass) instead.'' in 'create_vm' CPI method
Task 1990 error
I've tried to track down this Unknown CPI error, to no avail.

I had the same error message while deploying BOSH to AWS. The cause was a mistake in my bosh.yml manifest.
Instead of
cloud_properties: {subnet: subnet-6b54e7f1}
I had written
cloud_properties: {subnet-6b54e7f1}
Another cause was that my instance type was m3; it has to be m4.
After these corrections the error message disappeared.

In the end, the problem was errors in the generated manifest file, which their readme tells you to create by running templates/make_manifest aws-ec2. Aside from the manifest being wrong, the proper command there is actually just templates/make_manifest aws (without the "ec2" portion).
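For example, the corrected sequence looks something like this (a sketch using the BOSH v1 CLI that produced the log above; it assumes make_manifest points the CLI at the generated manifest, as these community templates typically do):
# correct template name: "aws", not "aws-ec2"
templates/make_manifest aws
# deploy non-interactively with the BOSH v1 CLI
bosh -n deploy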
Anyway, here's the manifest file that got this deployed for me. Mind you, the Consul cluster wasn't actually working (500 errors on the dashboard), but that's a story for another post. Look for the redacted $FOO items and replace them with your own values.
compilation:
  cloud_properties:
    instance_type: m3.medium
    availability_zone: us-east-1d
  network: consul1
  reuse_compilation_vms: true
  workers: 6
director_uuid: $YOUR_ID
jobs:
- instances: 3
  name: consul
  networks:
  - name: consul1
  persistent_disk: 4096
  properties:
    consul:
      join_host: 0.consul-z1.consul1.consul-aws.microbosh
      services:
        example: {}
    networks:
      apps: consul1
  resource_pool: small_z1
  templates:
  - consumes:
      consul_servers:
        from: consul_leaders
    name: consul
    provides:
      consul_servers:
        as: consul_leaders
    release: consul
  update:
    canaries: 0
    max_in_flight: 50
name: consul-aws
networks:
- cloud_properties: {}
  name: floating
  type: vip
- cloud_properties:
    subnet: $YOUR_SUBNET
    security_groups:
    - default
    availability_zone: us-east-1d
  name: consul1
  type: dynamic
properties: {}
releases:
- name: consul
  version: latest
resource_pools:
- cloud_properties:
    instance_type: m3.medium
    availability_zone: us-east-1d
  name: small_z1
  network: consul1
  stemcell:
    name: bosh-aws-xen-hvm-ubuntu-trusty-go_agent
    version: latest
update:
  canaries: 0
  canary_watch_time: 1000-60000
  max_in_flight: 50
  serial: true
  update_watch_time: 1000-60000

Related

Failed to get network: Failed to create new channel client: event service creation failed: could not get chConfig cache reference

Failed to get network: Failed to create new channel client: event service creation failed: could not get chConfig cache reference: QueryBlockConfig failed: QueryBlockConfig failed: queryChaincode failed: Transaction processing for endorser [peer-node-endpoint]: Endorser Client Status Code: (2) CONNECTION_FAILED. Description: dialing connection on target [peer-node-endpoint]: connection is in TRANSIENT_FAILURE
I get this error when trying to connect fabric-sdk-go to the network using connection-profile.yaml in Hyperledger Fabric.
NOTE: the chaincode is deployed and works just fine when I submit a transaction from the terminal, so there are no doubts on that side.
I saw the same problem posted on Stack Overflow already, but that answer is outdated, as Hyperledger Fabric v2.2 changed a lot compared to v1.
Here is my connection profile from the fabric-samples test-network (the only difference is that I gave the path of the TLS cert file instead of pasting the private key).
This is the connection-profile.yaml of the test-network, which works fine on my local machine:
---
name: test-network-org1
version: 1.0.0
client:
  organization: Org1
  connection:
    timeout:
      peer:
        endorser: '300'
organizations:
  Org1:
    mspid: Org1MSP
    peers:
    - peer0.org1.example.com
    certificateAuthorities:
    - ca.org1.example.com
peers:
  peer0.org1.example.com:
    url: grpcs://localhost:7051
    tlsCACerts:
      path: /path/to/cert/file
    grpcOptions:
      ssl-target-name-override: peer0.org1.example.com
      hostnameOverride: peer0.org1.example.com
certificateAuthorities:
  ca.org1.example.com:
    url: https://localhost:7054
    caName: ca-org1
    tlsCACerts:
      path: /path/to/cert/file
    httpOptions:
      verify: false
But if I just change the peer's (logical) name from peer0.org1.example.com to peer0.org1.com, i.e. just remove the "example" part of the name, it gives the same error I posted above.
So I am wondering why that is the case, because the Hyperledger Fabric connection-profile documentation says this name is just a logical name and nothing else; we can give it any name, and all that matters is the (peer/CA) URL endpoint.
The new connection profile looks like this:
---
name: test-network-org1
version: 1.0.0
client:
  organization: Org1
  connection:
    timeout:
      peer:
        endorser: '300'
organizations:
  Org1:
    mspid: Org1MSP
    peers:
    - peer0.org1.com
    certificateAuthorities:
    - ca.org1.example.com
peers:
  peer0.org1.com:
    url: grpcs://localhost:7051
    tlsCACerts:
      path: /path/to/cert/file
    grpcOptions:
      ssl-target-name-override: peer0.org1.com
      hostnameOverride: peer0.org1.com
certificateAuthorities:
  ca.org1.example.com:
    url: https://localhost:7054
    caName: ca-org1
    tlsCACerts:
      path: /path/to/cert/file
    httpOptions:
      verify: false
And the docker logs show this error:
peer0.org1.example.com|2022-10-03 06:47:51.442 UTC 0c95 ERRO [core.comm] ServerHandshake -> Server TLS handshake failed in 3.5334ms with error remote error: tls: bad certificate server=PeerServer remoteaddress=172.20.0.1:61090
Now, the path to the cert file is correct; I'm sure about that.
So can anyone tell me why changing just the peer's (logical) name triggers this error locally?
If someone wants to reproduce the same case, it can be done locally by running the test-network from fabric-samples.
As far as the code is concerned, it is the same as in fabric-samples (the assetTransferBasic case); I am just trying to run it using connection-profile.yaml.
If anyone needs more detail, I will be available.
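To make the comparison concrete, here is a sketch of a hybrid variant (an assumption on my part: the stock test-network issues the peer's TLS certificate for peer0.org1.example.com, so ssl-target-name-override presumably has to keep the certificate's name even when the logical key changes):
peers:
  peer0.org1.com:                  # arbitrary logical key
    url: grpcs://localhost:7051
    tlsCACerts:
      path: /path/to/cert/file
    grpcOptions:
      # must match the name in the peer's TLS certificate, not the logical key
      ssl-target-name-override: peer0.org1.example.com
      hostnameOverride: peer0.org1.example.com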

Docker mount volume error no such file or directory (Windows)

I am trying to set up an EMQX broker by deploying it with Docker. One of my constraints is to do this on Windows. To be able to use TLS/SSL authentication there must be a place to put certs in the container, therefore I'd like to mount a volume.
I have tried several ways and read a myriad of comments, but I cannot make it work consistently. I always bump into the "no such file or directory" message.
More interestingly, I once got it to work and saved the .yml file right afterwards, but the next time I ran docker-compose up with that YAML ("YAML that worked once" below), I received the same message as usual ("Resulting error message").
Path where the certs reside -> c:\Users\danha\Desktop\certs
Lines in question (please see the entire YAML below):
volumes:
  - vol-emqx-conf://C//Users//danha//Desktop//certs
volumes:
  vol-emqx-conf:
    driver_opts:
      type: none
      device: /Users/danha/Desktop/certs
      o: bind
YAML that worked once:
version: '3.4'
services:
  emqx:
    image: emqx/emqx:4.3.10-alpine-arm32v7
    container_name: "emqx"
    hostname: "emqx"
    restart: always
    environment:
      EMQX_NAME: lms_emqx
      EMQX_HOST: 127.0.0.1
      EMQX_ALLOW_ANONYMOUS: "false"
      EMQX_LOADED_PLUGINS: "emqx_auth_mnesia"
      EMQX_LOADED_MODULES: "emqx_mod_topic_metrics"
    volumes:
      - vol-emqx-conf://C//Users//danha//Desktop//certs
    labels:
      NAME: "emqx"
    ports:
      - 18083:18083
      - 1883:1883
      - 8081:8081
volumes:
  vol-emqx-conf:
    driver_opts:
      type: none
      device: //C//Users//danha//Desktop//certs
      o: bind
Resulting error message
C:\Users\danha\Desktop\dc>docker-compose up
Creating network "dc_default" with the default driver
Creating volume "dc_vol-emqx-conf" with default driver
Creating emqx ... error
ERROR: for emqx Cannot start service emqx: error while mounting volume '/var/lib/docker/volumes/dc_vol-emqx-conf/_data': failed to mount local volume: mount \\c\Users\danha\Desktop\certs:/var/lib/docker/volumes/dc_vol-emqx-conf/_data, flags: 0x1000: no such file or directory
ERROR: for emqx Cannot start service emqx: error while mounting volume '/var/lib/docker/volumes/dc_vol-emqx-conf/_data': failed to mount local volume: mount \\c\Users\danha\Desktop\certs:/var/lib/docker/volumes/dc_vol-emqx-conf/_data, flags: 0x1000: no such file or directory
ERROR: Encountered errors while bringing up the project.
I have also played around with forward and back slashes, but these did not bring any success. In the end I entered a path which produced something mostly resembling the correct path in the error message:
YAML neglecting C: at the beginning of the path:
version: '3.4'
services:
  emqx:
    image: emqx/emqx:4.3.10-alpine-arm32v7
    container_name: "emqx"
    hostname: "emqx"
    restart: always
    environment:
      EMQX_NAME: lms_emqx
      EMQX_HOST: 127.0.0.1
      EMQX_ALLOW_ANONYMOUS: "false"
      EMQX_LOADED_PLUGINS: "emqx_auth_mnesia"
      EMQX_LOADED_MODULES: "emqx_mod_topic_metrics"
    volumes:
      - vol-emqx-conf:/Users/danha/Desktop/certs
    labels:
      NAME: "emqx"
    ports:
      - 18083:18083
      - 1883:1883
      - 8081:8081
volumes:
  vol-emqx-conf:
    driver_opts:
      type: none
      device: /Users/danha/Desktop/certs
      o: bind
Resulting error message
C:\Users\danha\Desktop\dc>docker-compose up
Creating volume "dc_vol-emqx-conf" with default driver
Creating emqx ... error
ERROR: for emqx Cannot start service emqx: error while mounting volume '/var/lib/docker/volumes/dc_vol-emqx-conf/_data': failed to mount local volume: mount C:\Users\danha\Desktop\certs:/var/lib/docker/volumes/dc_vol-emqx-conf/_data, flags: 0x1000: no such file or directory
ERROR: for emqx Cannot start service emqx: error while mounting volume '/var/lib/docker/volumes/dc_vol-emqx-conf/_data': failed to mount local volume: mount C:\Users\danha\Desktop\certs:/var/lib/docker/volumes/dc_vol-emqx-conf/_data, flags: 0x1000: no such file or directory
ERROR: Encountered errors while bringing up the project.
That also got me thinking that this issue might be related to access rights and file sharing between Windows and WSL2, even though CMD was run in admin mode too; however, I couldn't find any answer further down that line that would have helped.
This is probably a pretty newbie question, but any help would be greatly appreciated.
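For completeness, here is a minimal sketch I have not verified that sidesteps the named volume entirely (it assumes Compose's short bind-mount syntax, which Docker Desktop translates for Windows paths, and /opt/emqx/etc/certs is a hypothetical container path):
version: '3.4'
services:
  emqx:
    image: emqx/emqx:4.3.10-alpine-arm32v7
    volumes:
      # short syntax: <host path>:<container path>
      - c:/Users/danha/Desktop/certs:/opt/emqx/etc/certs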

Error deploying SonarQube on an OpenShift Cluster

I have added a SonarQube operator (https://github.com/RedHatGov/sonarqube-operator) to my cluster, and when I spin up a Sonar instance from the operator, the container terminates with the failure message:
ERROR: [1] bootstrap checks failed
[1]: max virtual memory areas vm.max_map_count [253832] is too low, increase to at least [262144]
The problem lies in the fact that the operator refers to the label
tuned.openshift.io/elasticsearch
which leaves the necessary tuning to me, but there is no Elasticsearch operator or tuning on this pristine cluster.
I have created a Tuned profile for Sonar, but for whatever reason it does not get picked up. It currently looks like this:
apiVersion: tuned.openshift.io/v1
kind: Tuned
metadata:
  name: sonarqube
  namespace: openshift-cluster-node-tuning-operator
spec:
  profile:
  - data: |
      [main]
      summary=A custom OpenShift SonarQube profile
      include=openshift-control-plane
      [sysctl]
      vm.max_map_count=262144
    name: openshift-sonarqube
  recommend:
  - match:
    - label: tuned.openshift.io/sonarqube
      match:
      - label: node-role.kubernetes.io/master
      - label: node-role.kubernetes.io/infra
      type: pod
    priority: 10
    profile: openshift-sonarqube
and on the deployment I set the label
tuned.openshift.io/sonarqube
But for whatever reason it is not picked up and I still get the above error message. Does anyone have an idea, and/or are these steps even necessary? I followed the documentation (https://access.redhat.com/documentation/en-us/openshift_container_platform/4.8/html/scalability_and_performance/using-node-tuning-operator) and it didn't work with the customized example. I also tried the nested match-in-match, but that didn't work either.
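For reference, this is how I set the label on the Deployment's pod template (a sketch; my understanding is that the Node Tuning Operator matches pod labels, so the label has to sit in spec.template.metadata rather than only on the Deployment object itself):
spec:
  template:
    metadata:
      labels:
        tuned.openshift.io/sonarqube: ""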
Any suggestions?
Maybe try this:
oc create -f - <<EOF
apiVersion: tuned.openshift.io/v1
kind: Tuned
metadata:
  name: openshift-elasticsearch
  namespace: openshift-cluster-node-tuning-operator
spec:
  profile:
  - data: |
      [main]
      summary=Optimize systems running ES on OpenShift nodes
      include=openshift-node
      [sysctl]
      vm.max_map_count=262144
    name: openshift-elasticsearch
  recommend:
  - match:
    - label: tuned.openshift.io/elasticsearch
      type: pod
    priority: 20
    profile: openshift-elasticsearch
EOF
(Got it from: https://github.com/openshift/cluster-node-tuning-operator/blob/master/examples/elasticsearch.yaml)
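If the profile is picked up, the sysctl change can be verified on the node that runs the labeled pod (a generic check, not part of the linked example; substitute your node name):
# inspect the host's sysctl from a debug pod
oc debug node/<node-name> -- chroot /host sysctl vm.max_map_count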

Can't run a serverless & bref basic example locally

I want to run a basic serverless & bref example.
What I did:
npm install -g serverless
composer require bref/bref
vendor/bin/bref init
serverless invoke local -f hello --docker
I get this error:
Miroslavs-MacBook-Air:testing kosta90s$ serverless invoke local -f hello --docker
Serverless: Packaging service...
Serverless: Excluding development dependencies...
START RequestId: f815c369-8fa7-1671-cbbd-d623069bc9c2 Version: $LATEST
END RequestId: f815c369-8fa7-1671-cbbd-d623069bc9c2
REPORT RequestId: f815c369-8fa7-1671-cbbd-d623069bc9c2 Init Duration: 15.78 ms Duration: 1.35 ms Billed Duration: 100 ms Memory Size: 1024 MB Max Memory Used: 7 MB
{"errorType":"exitError","errorMessage":"RequestId: f815c369-8fa7-1671-cbbd-d623069bc9c2 Error: Couldn't find valid bootstrap(s): [/var/task/bootstrap /opt/bootstrap]"}
Error --------------------------------------------------
Error: Failed to run docker for provided image (exit code 1})
at /usr/local/lib/node_modules/serverless/lib/plugins/aws/invokeLocal/index.js:536:21
at runMicrotasks (<anonymous>)
at processTicksAndRejections (internal/process/task_queues.js:93:5)
For debugging logs, run again after setting the "SLS_DEBUG=*" environment variable.
Get Support --------------------------------------------
Docs: docs.serverless.com
Bugs: github.com/serverless/serverless/issues
Issues: forum.serverless.com
Your Environment Information ---------------------------
Operating System: darwin
Node Version: 14.14.0
Framework Version: 2.8.0
Plugin Version: 4.1.1
SDK Version: 2.3.2
Components Version: 3.2.7
serverless.yml
service: app
provider:
  name: aws
  region: us-east-1
  runtime: provided
plugins:
  - ./vendor/bref/bref
functions:
  hello:
    handler: index.php
    description: ''
    layers:
      - ${bref:layer.php-74}
# Exclude files from deployment
package:
  exclude:
    - 'tests/**'
I am working on macOS Catalina.
serverless invoke local tries to use a Docker image named lambci/lambda:${runtime}, where runtime is phpX.Y in your case.
https://github.com/serverless/serverless/blob/6a81137406fd2a2283663af93596ba79d23e38ef/lib/plugins/aws/invokeLocal/index.js#L478
There is no such image, as you can see here:
https://hub.docker.com/r/lambci/lambda/tags
As the comments said, try without --docker. If you need Docker, you can follow the documentation and use the following docker-compose.yml:
version: "3.5"
services:
web:
image: bref/fpm-dev-gateway
ports:
- '8000:80'
volumes:
- .:/var/task
depends_on:
- php
environment:
HANDLER: index.php
php:
image: bref/php-74-fpm-dev
volumes:
- .:/var/task:ro
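With those two services defined, something like the following should serve the function locally on the port mapped above (assuming index.php returns an HTTP response):
docker-compose up -d
curl http://localhost:8000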

gluster_volume module in Ansible

Request you to help me on the following issue.
I am writing a highly available LAMP app on Ubuntu 14.04 with Ansible (in my home lab). All the tasks execute fine up to the GlusterFS installation; however, creating the GlusterFS volume has been a challenge for me for a week. If I use the command module, the GlusterFS volume is created:
- name: Creating the Gluster Volume
  command: sudo gluster volume create var-www replica 2 transport tcp server01-private:/data/glusterfs/var-www/brick01/brick server02-private:/data/glusterfs/var-www/brick02/brick
But if I use the gluster_volume module I am getting an error:
- name: Creating the Gluster Volume
  gluster_volume:
    state: present
    name: var-www
    bricks: /server01-private:/data/glusterfs/var-www/brick01/brick,/server02-private:/data/glusterfs/var-www/brick02/brick
    replicas: 2
    transport: tcp
    cluster:
      - server01-private
      - server02-private
    force: yes
  run_once: true
The error is
"msg": "error running gluster (/usr/sbin/gluster --mode=script volume add-brick var-www replica 2 server01-private:/server01-private:/data/glusterfs/var-www/brick01/brick server01-private:/server02-private:/data/glusterfs/var-www/brick02/brick server02-private:/server01-private:/data/glusterfs/var-www/brick01/brick server02-private:/server02-private:/data/glusterfs/var-www/brick02/brick force) command (rc=1): internet address 'server01-private:/server01-private' does not conform to standards\ninternet address 'server01-private:/server02-private' does not conform to standards\ninternet address 'server02-private:/server01-private' does not conform to standards\ninternet address 'server02-private:/server02-private' does not conform to standards\nvolume add-brick: failed: Host server01-private:/server01-private is not in 'Peer in Cluster' state\n"
}
May I know the mistake I am committing?
The bricks: declaration of the Ansible gluster_volume module requires only the path of the brick. The nodes participating in the volume are identified by cluster:.
The <hostname>:<brickpath> format is required for the gluster command line; however, when you use the Ansible module, it is not.
So your task should be something like:
- name: Creating the Gluster Volume
  gluster_volume:
    name: 'var-www'
    bricks: '/data/glusterfs/var-www/brick01/brick,/data/glusterfs/var-www/brick02/brick'
    replicas: '2'
    cluster:
      - 'server01-private'
      - 'server02-private'
    transport: 'tcp'
    state: 'present'
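Once the play has run, the resulting volume can be double-checked on either node (a generic verification step, not part of the original answer):
sudo gluster volume info var-www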
