cannot create transaction block: cannot define a new channel in configtxgen - yaml

This question has already been asked (source), but since that question is 10 months old and there are newer versions of Fabric, I'm reposting it.
For the following YAML file:
Organizations:
  - &Smartforce
    Name: SmartforceMSP
    ID: SmartforceMSP
    MSPDir: /home/falcon/dev-iq-smartforce/crypto-config/ordererOrganizations/smartforce.com/msp
  - &BusinessPartner1
    Name: FalconMSP
    ID: FalconMSP
    MSPDir: /home/falcon/dev-iq-smartforce/crypto-config/peerOrganizations/falcon.com/msp
    AnchorPeers:
      - Host: localhost
        Port: 7051
  - &BusinessPartner2
    Name: FrostMSP
    ID: FrostMSP
    MSPDir: /home/falcon/dev-iq-smartforce/crypto-config/peerOrganizations/frost.com/msp
    AnchorPeers:
      - Host: localhost
        Port: 8051

# Configuration for the Orderer
Orderer: &OrdererDefaults #SampleInsecureSolo
  OrdererType: solo
  Addresses:
    - localhost:7050
  # Batch Timeout: The amount of time to wait before creating a batch
  BatchTimeout: 2s
  # Batch Size: Controls the number of messages batched into a block
  BatchSize:
    MaxMessageCount: 10
    AbsoluteMaxBytes: 98 MB
    PreferredMaxBytes: 512 KB

Application: &ApplicationDefaults
  Organizations:

Channel: &ChannelDefaults

Profiles:
  TwoPartnerGenesis:
    Orderer:
      <<: *OrdererDefaults
      Organizations:
        - *Smartforce
    Application:
      <<: *ApplicationDefaults
      Organizations:
        - <<: *BusinessPartner1
        - <<: *BusinessPartner2
    Consortiums:
      TwoPartnerConsortium:
        Organizations:
          - *BusinessPartner1
          - *BusinessPartner2
  TwoOrgChannel:
    Consortium: TwoPartnerConsortium
    Application:
      <<: *ApplicationDefaults
      Organizations:
        - <<: *BusinessPartner1
        - <<: *BusinessPartner2
The result for this file:
Input:
configtxgen -outputCreateChannelTx ./TwoOrgChannel.tx -profile TwoPartnerGenesis -channelID channel01
Output:
configtxgen -outputCreateChannelTx ./TwoOrgChannel.tx -profile TwoPartnerGenesis -channelID channel01
2018-12-20 12:30:29.818 IST [common/tools/configtxgen] main -> INFO 001 Loading configuration
2018-12-20 12:30:29.824 IST [common/tools/configtxgen] doOutputChannelCreateTx -> INFO 002 Generating new channel configtx
2018-12-20 12:30:29.824 IST [common/tools/configtxgen] main -> CRIT 003 Error on outputChannelCreateTx: config update generation failure: cannot define a new channel with no Consortium value
Can anyone please help me identify the error?
Thanks in advance.

I think you selected the wrong profile to create the channel transaction. Try this:
configtxgen -outputCreateChannelTx ./TwoOrgChannel.tx -profile TwoOrgChannel -channelID channel01
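For background (my reading of the error, not something verified against your exact Fabric version): -outputCreateChannelTx has to be pointed at a profile that carries a top-level Consortium key, which is exactly what the "cannot define a new channel with no Consortium value" message complains about. TwoPartnerGenesis defines a Consortiums section and is the profile you would pass to -outputBlock for the genesis block; the channel-creation profile is the one shaped like this:

Profiles:
  TwoOrgChannel:
    Consortium: TwoPartnerConsortium
    Application:
      <<: *ApplicationDefaults
      Organizations:
        - <<: *BusinessPartner1
        - <<: *BusinessPartner2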

Related

Greenplum Operator on kubernetes zapr error

I am trying to deploy Greenplum Operator on kubernetes and I get the following error:
kubectl describe pod greenplum-operator-87d989b4d-ldft6:
Name:           greenplum-operator-87d989b4d-ldft6
Namespace:      greenplum
Priority:       0
Node:           node-1/some-ip
Start Time:     Mon, 23 May 2022 14:07:26 +0200
Labels:         app=greenplum-operator
                pod-template-hash=87d989b4d
Annotations:    cni.projectcalico.org/podIP: some-ip
                cni.projectcalico.org/podIPs: some-ip
Status:         Running
IP:             some-ip
IPs:
  IP:           some-ip
Controlled By:  ReplicaSet/greenplum-operator-87d989b4d
Containers:
  greenplum-operator:
    Container ID:  docker://364997050b1f337ff61b8ce40534697bbc13aae29f7b9ae5255245375acce03f
    Image:         greenplum-operator:v2.3.0
    Image ID:      docker-pullable://greenplum-operator:v2.3.0
    Port:          <none>
    Host Port:     <none>
    Command:
      greenplum-operator
      --logLevel
      debug
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Mon, 23 May 2022 15:29:59 +0200
      Finished:     Mon, 23 May 2022 15:30:32 +0200
    Ready:          False
    Restart Count:  19
    Environment:
      GREENPLUM_IMAGE_REPO:  greenplum-operator:v2.3.0
      GREENPLUM_IMAGE_TAG:   v2.3.0
      OPERATOR_IMAGE_REPO:   greenplum-operator:v2.3.0
      OPERATOR_IMAGE_TAG:    v2.3.0
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from greenplum-system-operator-token-xcz4q (ro)
Conditions:
  Type             Status
  Initialized      True
  Ready            False
  ContainersReady  False
  PodScheduled     True
Volumes:
  greenplum-system-operator-token-xcz4q:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  greenplum-system-operator-token-xcz4q
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason   Age                  From     Message
  ----     ------   ----                 ----     -------
  Warning  BackOff  32s (x340 over 84m)  kubelet  Back-off restarting failed container
kubectl logs greenplum-operator-87d989b4d-ldft6
{"level":"INFO","ts":"2022-05-23T13:35:38.735Z","logger":"setup","msg":"Go Info","Version":"go1.14.10","GOOS":"linux","GOARCH":"amd64"}
{"level":"INFO","ts":"2022-05-23T13:35:41.242Z","logger":"controller-runtime.metrics","msg":"metrics server is starting to listen","addr":":8080"}
{"level":"INFO","ts":"2022-05-23T13:35:41.262Z","logger":"setup","msg":"starting manager"}
{"level":"INFO","ts":"2022-05-23T13:35:41.262Z","logger":"admission","msg":"starting greenplum validating admission webhook server"}
{"level":"INFO","ts":"2022-05-23T13:35:41.262Z","logger":"controller-runtime.controller","msg":"Starting EventSource","controller":"greenplumpxfservice","source":"kind source: /, Kind="}
{"level":"INFO","ts":"2022-05-23T13:35:41.264Z","logger":"controller-runtime.controller","msg":"Starting EventSource","controller":"greenplumplservice","source":"kind source: /, Kind="}
{"level":"INFO","ts":"2022-05-23T13:35:41.264Z","logger":"controller-runtime.controller","msg":"Starting EventSource","controller":"greenplumcluster","source":"kind source: /, Kind="}
{"level":"INFO","ts":"2022-05-23T13:35:41.262Z","logger":"controller-runtime.manager","msg":"starting metrics server","path":"/metrics"}
{"level":"INFO","ts":"2022-05-23T13:35:41.265Z","logger":"controller-runtime.controller","msg":"Starting EventSource","controller":"greenplumtextservice","source":"kind source: /, Kind="}
{"level":"INFO","ts":"2022-05-23T13:35:41.361Z","logger":"admission","msg":"CertificateSigningRequest: created"}
{"level":"INFO","ts":"2022-05-23T13:35:41.363Z","logger":"controller-runtime.controller","msg":"Starting EventSource","controller":"greenplumpxfservice","source":"kind source: /, Kind="}
{"level":"INFO","ts":"2022-05-23T13:35:41.364Z","logger":"controller-runtime.controller","msg":"Starting EventSource","controller":"greenplumplservice","source":"kind source: /, Kind="}
{"level":"INFO","ts":"2022-05-23T13:35:41.364Z","logger":"controller-runtime.controller","msg":"Starting EventSource","controller":"greenplumcluster","source":"kind source: /, Kind="}
{"level":"INFO","ts":"2022-05-23T13:35:41.366Z","logger":"controller-runtime.controller","msg":"Starting EventSource","controller":"greenplumtextservice","source":"kind source: /, Kind="}
{"level":"INFO","ts":"2022-05-23T13:35:41.464Z","logger":"controller-runtime.controller","msg":"Starting Controller","controller":"greenplumpxfservice"}
{"level":"INFO","ts":"2022-05-23T13:35:41.464Z","logger":"controller-runtime.controller","msg":"Starting Controller","controller":"greenplumplservice"}
{"level":"INFO","ts":"2022-05-23T13:35:41.465Z","logger":"controller-runtime.controller","msg":"Starting workers","controller":"greenplumplservice","worker count":1}
{"level":"INFO","ts":"2022-05-23T13:35:41.465Z","logger":"controller-runtime.controller","msg":"Starting Controller","controller":"greenplumcluster"}
{"level":"INFO","ts":"2022-05-23T13:35:41.465Z","logger":"controller-runtime.controller","msg":"Starting workers","controller":"greenplumpxfservice","worker count":1}
{"level":"INFO","ts":"2022-05-23T13:35:41.465Z","logger":"controller-runtime.controller","msg":"Starting workers","controller":"greenplumcluster","worker count":1}
{"level":"INFO","ts":"2022-05-23T13:35:41.466Z","logger":"controller-runtime.controller","msg":"Starting Controller","controller":"greenplumtextservice"}
{"level":"INFO","ts":"2022-05-23T13:35:41.466Z","logger":"controller-runtime.controller","msg":"Starting workers","controller":"greenplumtextservice","worker count":1}
{"level":"ERROR","ts":"2022-05-23T13:36:11.368Z","logger":"setup","msg":"error","error":"getting certificate for webhook: failure while waiting for approval: timed out waiting for the condition","errorCauses":[{"error":"getting certificate for webhook: failure while waiting for approval: timed out waiting for the condition"}],"stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/zapr#v0.1.0/zapr.go:128\nmain.main\n\t/greenplum-for-kubernetes/greenplum-operator/cmd/greenplumOperator/main.go:35\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:203"}
I tried redeploying cert-manager and checking its logs, but couldn't find anything. The documentation of greenplum-for-kubernetes doesn't mention anything about this, and I have read the whole troubleshooting document on the Pivotal website too.
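One observation that may help whoever picks this up: the operator logged "CertificateSigningRequest: created" and then timed out "waiting for approval", which reads like a CSR that was never approved. A sketch of how that could be checked (the CSR name is a placeholder; take it from the actual list):

kubectl get csr
kubectl describe csr <csr-name>
# if it is stuck in Pending, approving it manually may unblock the webhook:
kubectl certificate approve <csr-name>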

Not able to join EKS Nodegroup into the existing EKScluster in AWS CloudFormation

In this template we are creating node groups that are to be deployed in the existing EKS cluster and VPC. The stack gets deployed successfully but I don't see the node groups inside my existing EKS cluster.
AWSTemplateFormatVersion: "2010-09-09"
Description: Amazon EKS - Node Group
Metadata:
  "AWS::CloudFormation::Interface":
    ParameterGroups:
      - Label:
          default: EKS Cluster
        Parameters:
          - ClusterName
          - ClusterControlPlaneSecurityGroup
      - Label:
          default: Worker Node Configuration
        Parameters:
          - NodeGroupName
          - NodeAutoScalingGroupMinSize
          - NodeAutoScalingGroupDesiredCapacity
          - NodeAutoScalingGroupMaxSize
          - NodeInstanceType
          - NodeImageIdSSMParam
          - NodeImageId
          - NodeVolumeSize
          - KeyName
          - BootstrapArguments
      - Label:
          default: Worker Network Configuration
        Parameters:
          - VpcId
          - Subnets
Parameters:
  BootstrapArguments:
    Type: String
    Default: ""
    Description: "Arguments to pass to the bootstrap script. See files/bootstrap.sh in https://github.com/awslabs/amazon-eks-ami"
  ClusterControlPlaneSecurityGroup:
    Type: "AWS::EC2::SecurityGroup::Id"
    Description: The security group of the cluster control plane.
  ClusterName:
    Type: String
    Description: The cluster name provided when the cluster was created. If it is incorrect, nodes will not be able to join the cluster.
  KeyName:
    Type: "AWS::EC2::KeyPair::KeyName"
    Description: The EC2 Key Pair to allow SSH access to the instances
  NodeAutoScalingGroupDesiredCapacity:
    Type: Number
    Default: 3
    Description: Desired capacity of Node Group ASG.
  NodeAutoScalingGroupMaxSize:
    Type: Number
    Default: 4
    Description: Maximum size of Node Group ASG. Set to at least 1 greater than NodeAutoScalingGroupDesiredCapacity.
  NodeAutoScalingGroupMinSize:
    Type: Number
    Default: 1
    Description: Minimum size of Node Group ASG.
  NodeGroupName:
    Type: String
    Description: Unique identifier for the Node Group.
  NodeImageId:
    Type: String
    Default: ""
    Description: (Optional) Specify your own custom image ID. This value overrides any AWS Systems Manager Parameter Store value specified above.
  NodeImageIdSSMParam:
    Type: "AWS::SSM::Parameter::Value<AWS::EC2::Image::Id>"
    Default: /aws/service/eks/optimized-ami/1.14/amazon-linux-2/recommended/image_id
    Description: AWS Systems Manager Parameter Store parameter of the AMI ID for the worker node instances.
  NodeInstanceType:
    Type: String
    Default: t3.medium
    AllowedValues:
      - a1.medium
      - a1.large
      - a1.xlarge
      - a1.2xlarge
      - a1.4xlarge
      - c1.medium
      - c1.xlarge
      - c3.large
      - c3.xlarge
      - c3.2xlarge
      - c3.4xlarge
      - c3.8xlarge
      - c4.large
      - c4.xlarge
      - c4.2xlarge
      - c4.4xlarge
      - c4.8xlarge
      - c5.large
      - c5.xlarge
      - c5.2xlarge
      - c5.4xlarge
      - c5.9xlarge
      - c5.12xlarge
      - c5.18xlarge
      - c5.24xlarge
      - c5.metal
      - c5d.large
      - c5d.xlarge
      - c5d.2xlarge
      - c5d.4xlarge
      - c5d.9xlarge
      - c5d.18xlarge
      - c5n.large
      - c5n.xlarge
      - c5n.2xlarge
      - c5n.4xlarge
      - c5n.9xlarge
      - c5n.18xlarge
      - cc2.8xlarge
      - cr1.8xlarge
      - d2.xlarge
      - d2.2xlarge
      - d2.4xlarge
      - d2.8xlarge
      - f1.2xlarge
      - f1.4xlarge
      - f1.16xlarge
      - g2.2xlarge
      - g2.8xlarge
      - g3s.xlarge
      - g3.4xlarge
      - g3.8xlarge
      - g3.16xlarge
      - h1.2xlarge
      - h1.4xlarge
      - h1.8xlarge
      - h1.16xlarge
      - hs1.8xlarge
      - i2.xlarge
      - i2.2xlarge
      - i2.4xlarge
      - i2.8xlarge
      - i3.large
      - i3.xlarge
      - i3.2xlarge
      - i3.4xlarge
      - i3.8xlarge
      - i3.16xlarge
      - i3.metal
      - i3en.large
      - i3en.xlarge
      - i3en.2xlarge
      - i3en.3xlarge
      - i3en.6xlarge
      - i3en.12xlarge
      - i3en.24xlarge
      - m1.small
      - m1.medium
      - m1.large
      - m1.xlarge
      - m2.xlarge
      - m2.2xlarge
      - m2.4xlarge
      - m3.medium
      - m3.large
      - m3.xlarge
      - m3.2xlarge
      - m4.large
      - m4.xlarge
      - m4.2xlarge
      - m4.4xlarge
      - m4.10xlarge
      - m4.16xlarge
      - m5.large
      - m5.xlarge
      - m5.2xlarge
      - m5.4xlarge
      - m5.8xlarge
      - m5.12xlarge
      - m5.16xlarge
      - m5.24xlarge
      - m5.metal
      - m5a.large
      - m5a.xlarge
      - m5a.2xlarge
      - m5a.4xlarge
      - m5a.8xlarge
      - m5a.12xlarge
      - m5a.16xlarge
      - m5a.24xlarge
      - m5ad.large
      - m5ad.xlarge
      - m5ad.2xlarge
      - m5ad.4xlarge
      - m5ad.12xlarge
      - m5ad.24xlarge
      - m5d.large
      - m5d.xlarge
      - m5d.2xlarge
      - m5d.4xlarge
      - m5d.8xlarge
      - m5d.12xlarge
      - m5d.16xlarge
      - m5d.24xlarge
      - m5d.metal
      - p2.xlarge
      - p2.8xlarge
      - p2.16xlarge
      - p3.2xlarge
      - p3.8xlarge
      - p3.16xlarge
      - p3dn.24xlarge
      - g4dn.xlarge
      - g4dn.2xlarge
      - g4dn.4xlarge
      - g4dn.8xlarge
      - g4dn.12xlarge
      - g4dn.16xlarge
      - g4dn.metal
      - r3.large
      - r3.xlarge
      - r3.2xlarge
      - r3.4xlarge
      - r3.8xlarge
      - r4.large
      - r4.xlarge
      - r4.2xlarge
      - r4.4xlarge
      - r4.8xlarge
      - r4.16xlarge
      - r5.large
      - r5.xlarge
      - r5.2xlarge
      - r5.4xlarge
      - r5.8xlarge
      - r5.12xlarge
      - r5.16xlarge
      - r5.24xlarge
      - r5.metal
      - r5a.large
      - r5a.xlarge
      - r5a.2xlarge
      - r5a.4xlarge
      - r5a.8xlarge
      - r5a.12xlarge
      - r5a.16xlarge
      - r5a.24xlarge
      - r5ad.large
      - r5ad.xlarge
      - r5ad.2xlarge
      - r5ad.4xlarge
      - r5ad.12xlarge
      - r5ad.24xlarge
      - r5d.large
      - r5d.xlarge
      - r5d.2xlarge
      - r5d.4xlarge
      - r5d.8xlarge
      - r5d.12xlarge
      - r5d.16xlarge
      - r5d.24xlarge
      - r5d.metal
      - t1.micro
      - t2.nano
      - t2.micro
      - t2.small
      - t2.medium
      - t2.large
      - t2.xlarge
      - t2.2xlarge
      - t3.nano
      - t3.micro
      - t3.small
      - t3.medium
      - t3.large
      - t3.xlarge
      - t3.2xlarge
      - t3a.nano
      - t3a.micro
      - t3a.small
      - t3a.medium
      - t3a.large
      - t3a.xlarge
      - t3a.2xlarge
      - u-6tb1.metal
      - u-9tb1.metal
      - u-12tb1.metal
      - x1.16xlarge
      - x1.32xlarge
      - x1e.xlarge
      - x1e.2xlarge
      - x1e.4xlarge
      - x1e.8xlarge
      - x1e.16xlarge
      - x1e.32xlarge
      - z1d.large
      - z1d.xlarge
      - z1d.2xlarge
      - z1d.3xlarge
      - z1d.6xlarge
      - z1d.12xlarge
      - z1d.metal
    ConstraintDescription: Must be a valid EC2 instance type
    Description: EC2 instance type for the node instances
  NodeVolumeSize:
    Type: Number
    Default: 20
    Description: Node volume size
  Subnets:
    Type: "List<AWS::EC2::Subnet::Id>"
    Description: The subnets where workers can be created.
  VpcId:
    Type: "AWS::EC2::VPC::Id"
    Description: The VPC of the worker instances
Conditions:
  HasNodeImageId: !Not
    - "Fn::Equals":
        - Ref: NodeImageId
        - ""
Resources:
  NodeInstanceRole:
    Type: "AWS::IAM::Role"
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - ec2.amazonaws.com
            Action:
              - "sts:AssumeRole"
      ManagedPolicyArns:
        - "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
        - "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
        - "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
      Path: /
  NodeInstanceProfile:
    Type: "AWS::IAM::InstanceProfile"
    Properties:
      Path: /
      Roles:
        - Ref: NodeInstanceRole
  NodeSecurityGroup:
    Type: "AWS::EC2::SecurityGroup"
    Properties:
      GroupDescription: Security group for all nodes in the cluster
      Tags:
        - Key: !Sub kubernetes.io/cluster/${ClusterName}
          Value: owned
      VpcId: !Ref VpcId
  NodeSecurityGroupIngress:
    Type: "AWS::EC2::SecurityGroupIngress"
    DependsOn: NodeSecurityGroup
    Properties:
      Description: Allow node to communicate with each other
      FromPort: 0
      GroupId: !Ref NodeSecurityGroup
      IpProtocol: "-1"
      SourceSecurityGroupId: !Ref NodeSecurityGroup
      ToPort: 65535
  ClusterControlPlaneSecurityGroupIngress:
    Type: "AWS::EC2::SecurityGroupIngress"
    DependsOn: NodeSecurityGroup
    Properties:
      Description: Allow pods to communicate with the cluster API Server
      FromPort: 443
      GroupId: !Ref ClusterControlPlaneSecurityGroup
      IpProtocol: tcp
      SourceSecurityGroupId: !Ref NodeSecurityGroup
      ToPort: 443
  ControlPlaneEgressToNodeSecurityGroup:
    Type: "AWS::EC2::SecurityGroupEgress"
    DependsOn: NodeSecurityGroup
    Properties:
      Description: Allow the cluster control plane to communicate with worker Kubelet and pods
      DestinationSecurityGroupId: !Ref NodeSecurityGroup
      FromPort: 1025
      GroupId: !Ref ClusterControlPlaneSecurityGroup
      IpProtocol: tcp
      ToPort: 65535
  ControlPlaneEgressToNodeSecurityGroupOn443:
    Type: "AWS::EC2::SecurityGroupEgress"
    DependsOn: NodeSecurityGroup
    Properties:
      Description: Allow the cluster control plane to communicate with pods running extension API servers on port 443
      DestinationSecurityGroupId: !Ref NodeSecurityGroup
      FromPort: 443
      GroupId: !Ref ClusterControlPlaneSecurityGroup
      IpProtocol: tcp
      ToPort: 443
  NodeSecurityGroupFromControlPlaneIngress:
    Type: "AWS::EC2::SecurityGroupIngress"
    DependsOn: NodeSecurityGroup
    Properties:
      Description: Allow worker Kubelets and pods to receive communication from the cluster control plane
      FromPort: 1025
      GroupId: !Ref NodeSecurityGroup
      IpProtocol: tcp
      SourceSecurityGroupId: !Ref ClusterControlPlaneSecurityGroup
      ToPort: 65535
  NodeSecurityGroupFromControlPlaneOn443Ingress:
    Type: "AWS::EC2::SecurityGroupIngress"
    DependsOn: NodeSecurityGroup
    Properties:
      Description: Allow pods running extension API servers on port 443 to receive communication from cluster control plane
      FromPort: 443
      GroupId: !Ref NodeSecurityGroup
      IpProtocol: tcp
      SourceSecurityGroupId: !Ref ClusterControlPlaneSecurityGroup
      ToPort: 443
The problem seems to be here:
  NodeLaunchConfig:
    Type: "AWS::AutoScaling::LaunchConfiguration"
    Properties:
      AssociatePublicIpAddress: "true"
      BlockDeviceMappings:
        - DeviceName: /dev/xvda
          Ebs:
            DeleteOnTermination: true
            VolumeSize: !Ref NodeVolumeSize
            VolumeType: gp2
      IamInstanceProfile: !Ref NodeInstanceProfile
      ImageId: !If
        - HasNodeImageId
        - Ref: NodeImageId
        - Ref: NodeImageIdSSMParam
      InstanceType: !Ref NodeInstanceType
      KeyName: !Ref KeyName
      SecurityGroups:
        - Ref: NodeSecurityGroup
      UserData: !Base64
        "Fn::Sub": |
          #!/bin/bash
          set -o xtrace
          /etc/eks/bootstrap.sh ${ClusterName} ${BootstrapArguments}
          /opt/aws/bin/cfn-signal --exit-code $? \
            --stack ${AWS::StackName} \
            --resource NodeGroup \
            --region ${AWS::Region}
Or maybe here:
  NodeGroup:
    Type: "AWS::AutoScaling::AutoScalingGroup"
    Properties:
      DesiredCapacity: !Ref NodeAutoScalingGroupDesiredCapacity
      LaunchConfigurationName: !Ref NodeLaunchConfig
      MaxSize: !Ref NodeAutoScalingGroupMaxSize
      MinSize: !Ref NodeAutoScalingGroupMinSize
      Tags:
        - Key: Name
          PropagateAtLaunch: "true"
          Value: !Sub ${ClusterName}-${NodeGroupName}-Node
        - Key: !Sub kubernetes.io/cluster/${ClusterName}
          PropagateAtLaunch: "true"
          Value: owned
      VPCZoneIdentifier: !Ref Subnets
    UpdatePolicy:
      AutoScalingRollingUpdate:
        MaxBatchSize: "1"
        MinInstancesInService: !Ref NodeAutoScalingGroupDesiredCapacity
        PauseTime: PT5M
Outputs:
  NodeInstanceRole:
    Description: The node instance role
    Value: !GetAtt NodeInstanceRole.Arn
  NodeSecurityGroup:
    Description: The security group for the node group
    Value: !Ref NodeSecurityGroup
Although the template deploys successfully, the node groups aren't visible in my EKS cluster. Please let me know if there are any updates to be made so that the node groups get deployed into the cluster.
Okay, I was going through the same problem. The problem is with the type you chose for the node group: it should be AWS::EKS::Nodegroup. You have chosen the wrong type. Change it and your node group will be visible in the cluster.
Here is the link for the same:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-eks-nodegroup.html
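As a rough sketch of what that change might look like (untested, reusing the parameters and the instance role from the template above; a managed node group would replace the launch configuration and the auto scaling group):

Resources:
  ManagedNodeGroup:
    Type: "AWS::EKS::Nodegroup"
    Properties:
      ClusterName: !Ref ClusterName
      NodegroupName: !Ref NodeGroupName
      # the role must carry the same three managed worker policies as NodeInstanceRole above
      NodeRole: !GetAtt NodeInstanceRole.Arn
      Subnets: !Ref Subnets
      InstanceTypes:
        - !Ref NodeInstanceType
      ScalingConfig:
        MinSize: !Ref NodeAutoScalingGroupMinSize
        DesiredSize: !Ref NodeAutoScalingGroupDesiredCapacity
        MaxSize: !Ref NodeAutoScalingGroupMaxSize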

Elasticsearch enable security issues

I have an Elasticsearch 7.6 cluster installed based on
https://github.com/openstack/openstack-helm-infra/tree/master/elasticsearch
Following is what I did to enable security:
a. Generate certificate
./bin/elasticsearch-certutil ca
File location: /usr/share/elasticsearch/elastic-stack-ca.p12
./bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12
File location: /usr/share/elasticsearch/elastic-certificates.p12
kubectl create secret generic elastic-certificates --from-file=elastic-certificates.p12
b. Enable Security on statefulset for master pod
kubectl edit statefulset elasticsearch-master
----
- name: xpack.security.enabled
  value: "true"
- name: xpack.security.transport.ssl.enabled
  value: "true"
- name: xpack.security.transport.ssl.verification_mode
  value: certificate
- name: xpack.security.transport.ssl.keystore.path
  value: /usr/share/elasticsearch/config/certs/elastic-certificates.p12
- name: xpack.security.transport.ssl.truststore.path
  value: /usr/share/elasticsearch/config/certs/elastic-certificates.p12
----
- mountPath: /usr/share/elasticsearch/config/certs
  name: elastic-certificates
  readOnly: true
----
- name: elastic-certificates
  secret:
    defaultMode: 444
    secretName: elastic-certificates
c. Enable security on statefulset for data pod
kubectl edit statefulset elasticsearch-data
----
- name: xpack.security.enabled
  value: "true"
- name: xpack.security.transport.ssl.enabled
  value: "true"
- name: xpack.security.transport.ssl.verification_mode
  value: certificate
----
- mountPath: /usr/share/elasticsearch/config/certs
  name: elastic-certificates
----
- name: elastic-certificates
  secret:
    defaultMode: 444
    secretName: elastic-certificates
d. Enable security on deployment for client
kubectl edit deployment elasticsearch-client
----
- name: xpack.security.enabled
  value: "true"
- name: xpack.security.transport.ssl.enabled
  value: "true"
- name: xpack.security.transport.ssl.verification_mode
  value: certificate
- name: xpack.security.transport.ssl.keystore.path
  value: /usr/share/elasticsearch/config/certs/elastic-certificates.p12
- name: xpack.security.transport.ssl.truststore.path
  value: /usr/share/elasticsearch/config/certs/elastic-certificates.p12
----
- mountPath: /usr/share/elasticsearch/config/certs
  name: elastic-certificates
----
- name: elastic-certificates
  secret:
    defaultMode: 444
    secretName: elastic-certificates
After the pods restarted, I got the following issues:
a. Data pods are stuck in the init stage
kubectl get pod |grep data
elasticsearch-data-0 1/1 Running 0 42m
elasticsearch-data-1 0/1 Init:0/3 0 10m
kubectl logs elasticsearch-data-1 -c init |tail -1
Entrypoint WARNING: <date/time> entrypoint.go:72: Resolving dependency Service elasticsearch-logging in namespace osh-infra failed: Service elasticsearch-logging has no endpoints .
b. The client pod reports connection-refused errors
Warning Unhealthy 18m (x4 over 19m) kubelet, s1-worker-2 Readiness probe failed: Get http://192.180.71.82:9200/_cluster/health: dial tcp 192.180.71.82:9200: connect: connection refused
Warning Unhealthy 4m17s (x86 over 18m) kubelet, s1-worker-2 Readiness probe failed: HTTP probe failed with statuscode: 401
c. The endpoints of the "elasticsearch-logging" Service are empty
Any suggestions on how to fix this, or what is wrong?
Thanks.
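Reading the symptoms together (an inference, not a verified fix): once xpack.security.enabled is true, the plain HTTP readiness probe gets a 401, so the client pods never become Ready, the elasticsearch-logging Service gets no endpoints, and the init containers waiting on that Service hang. One way to confirm would be to set the built-in passwords and repeat the probe's request with credentials (pod name, IP and password here are placeholders):

kubectl exec -it elasticsearch-master-0 -- bin/elasticsearch-setup-passwords auto
curl -u elastic:<password> http://<client-pod-ip>:9200/_cluster/health
kubectl get endpoints elasticsearch-logging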

How do I install Hyperledger Fabric's binaries only?

I would like to install/download the HLF binaries, without the images and fabric-samples. How do I do that?
This is what I've tried so far:
I've followed the instruction on https://hyperledger-fabric.readthedocs.io/en/release-1.4/install.html, but that also installs the images (which is unwanted).
I've looked into the HLF repository, but the /bin/ directory is absent there, and a name search for 'configtxgen' and the other tools yielded no results other than their use inside other scripts in the repo
googled for any mention of a binary-only install of HLF, without positive results
The desired result would be a CLI command with which I can suppress the installation of images, or something similar.
I am also in the process of setting up Fabric without Docker images.
This link has helped me a lot, although it does not show how to set up the orderer and peer on the host machine.
Following are my configuration and the steps I followed to run the orderer and peer on the host machine (make sure you have installed all the prerequisites for Hyperledger Fabric):
First clone the fabric repository and run make.
git clone https://github.com/hyperledger/fabric.git
# cd into the fabric folder and run
make release
The above will generate binaries in the release folder for your host machine:
fabric
└── release
    └── linux-amd64
        └── bin
Copy this bin folder into a new folder mynetwork and create the following configuration files:
mynetwork
├── bin
├── crypto-config.yaml
├── configtx.yaml
├── orderer.yaml
└── core.yaml
Following are the configurations that I am using.
crypto-config.yaml
OrdererOrgs:
  - Name: Orderer
    Domain: example.com
    Specs:
      - Hostname: orderer
        SANS:
          - "localhost"
          - "127.0.0.1"
PeerOrgs:
  - Name: Org1
    Domain: org1.example.com
    EnableNodeOUs: true
    Template:
      Count: 1
      SANS:
        - "localhost"
        - "127.0.0.1"
    Users:
      Count: 1
Next, open a terminal (let's call it terminal-1), cd into the mynetwork folder, and run cryptogen to generate the assets and keys.
./bin/cryptogen generate --config=./crypto-config.yaml
The above will create a crypto-config folder in mynetwork containing all the network assets, in this case for the ordererOrganization and the peerOrganization:
mynetwork
└── crypto-config
    ├── ordererOrganizations
    └── peerOrganizations
Next you need to create configtx.yaml
Organizations:
  - &OrdererOrg
    Name: OrdererOrg
    ID: OrdererMSP
    MSPDir: crypto-config/ordererOrganizations/example.com/msp
    Policies:
      Readers:
        Type: Signature
        Rule: "OR('OrdererMSP.member')"
      Writers:
        Type: Signature
        Rule: "OR('OrdererMSP.member')"
      Admins:
        Type: Signature
        Rule: "OR('OrdererMSP.admin')"
  - &Org1
    Name: Org1MSP
    ID: Org1MSP
    MSPDir: crypto-config/peerOrganizations/org1.example.com/msp
    Policies:
      Readers:
        Type: Signature
        Rule: "OR('Org1MSP.admin', 'Org1MSP.peer', 'Org1MSP.client')"
      Writers:
        Type: Signature
        Rule: "OR('Org1MSP.admin', 'Org1MSP.client')"
      Admins:
        Type: Signature
        Rule: "OR('Org1MSP.admin')"
    AnchorPeers:
      - Host: 127.0.0.1
        Port: 7051
Capabilities:
  Channel: &ChannelCapabilities
    V1_3: true
  Orderer: &OrdererCapabilities
    V1_1: true
  Application: &ApplicationCapabilities
    V1_3: true
    V1_2: false
    V1_1: false
Application: &ApplicationDefaults
  Organizations:
  Policies:
    Readers:
      Type: ImplicitMeta
      Rule: "ANY Readers"
    Writers:
      Type: ImplicitMeta
      Rule: "ANY Writers"
    Admins:
      Type: ImplicitMeta
      Rule: "MAJORITY Admins"
  Capabilities:
    <<: *ApplicationCapabilities
Orderer: &OrdererDefaults
  OrdererType: solo
  Addresses:
    - orderer:7050
  BatchTimeout: 2s
  BatchSize:
    MaxMessageCount: 10
    AbsoluteMaxBytes: 99 MB
    PreferredMaxBytes: 512 KB
  Organizations:
  Policies:
    Readers:
      Type: ImplicitMeta
      Rule: "ANY Readers"
    Writers:
      Type: ImplicitMeta
      Rule: "ANY Writers"
    Admins:
      Type: ImplicitMeta
      Rule: "MAJORITY Admins"
    BlockValidation:
      Type: ImplicitMeta
      Rule: "ANY Writers"
Channel: &ChannelDefaults
  Policies:
    Readers:
      Type: ImplicitMeta
      Rule: "ANY Readers"
    Writers:
      Type: ImplicitMeta
      Rule: "ANY Writers"
    Admins:
      Type: ImplicitMeta
      Rule: "MAJORITY Admins"
  Capabilities:
    <<: *ChannelCapabilities
Profiles:
  OneOrgOrdererGenesis:
    <<: *ChannelDefaults
    Orderer:
      <<: *OrdererDefaults
      Organizations:
        - *OrdererOrg
      Capabilities:
        <<: *OrdererCapabilities
    Consortiums:
      SampleConsortium:
        Organizations:
          - *Org1
  OneOrgChannel:
    Consortium: SampleConsortium
    <<: *ChannelDefaults
    Application:
      <<: *ApplicationDefaults
      Organizations:
        - *Org1
      Capabilities:
        <<: *ApplicationCapabilities
Then, in terminal-1, run the following commands in sequence:
export FABRIC_CFG_PATH=$PWD
mkdir channel-artifacts
./bin/configtxgen -profile OneOrgOrdererGenesis -channelID myfn-sys-channel -outputBlock ./channel-artifacts/genesis.block
export CHANNEL_NAME=mychannel
./bin/configtxgen -profile OneOrgChannel -outputCreateChannelTx ./channel-artifacts/channel.tx -channelID $CHANNEL_NAME
./bin/configtxgen -profile OneOrgChannel -outputAnchorPeersUpdate ./channel-artifacts/Org1MSPanchors.tx -channelID $CHANNEL_NAME -asOrg Org1MSP
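As an optional sanity check, the generated artifacts can be decoded back to readable JSON (both inspect flags exist in the 1.4-era configtxgen):

./bin/configtxgen -inspectBlock ./channel-artifacts/genesis.block
./bin/configtxgen -inspectChannelCreateTx ./channel-artifacts/channel.tx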
Next create orderer.yaml, and change the certificate paths according to your host and folder location.
General:
  LedgerType: file
  ListenAddress: 127.0.0.1
  ListenPort: 7050
  TLS:
    Enabled: true
    PrivateKey: /home/fabric-release/mynetwork/crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/tls/server.key
    Certificate: /home/fabric-release/mynetwork/crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/tls/server.crt
    RootCAs:
      - /home/fabric-release/mynetwork/crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/tls/ca.crt
    ClientAuthRequired: false
  Keepalive:
    ServerMinInterval: 60s
    ServerInterval: 7200s
    ServerTimeout: 20s
  GenesisMethod: file
  GenesisProfile: OneOrgOrdererGenesis
  GenesisFile: channel-artifacts/genesis.block
  LocalMSPDir: /home/fabric-release/mynetwork/crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/msp
  LocalMSPID: OrdererMSP
  Authentication:
    TimeWindow: 15m
FileLedger:
  Location: /home/fabric-release/data/orderer
  Prefix: hyperledger-fabric-ordererledger
Operations:
  ListenAddress: 127.0.0.1:8443
  TLS:
    Enabled: true
    Certificate: /home/fabric-release/mynetwork/crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/tls/server.crt
    PrivateKey: /home/fabric-release/mynetwork/crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/tls/server.key
    ClientAuthRequired: false
    ClientRootCAs:
      - crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt
Start the orderer on terminal-1
./bin/orderer
Next, open another terminal (terminal-2) and go to the mynetwork folder. Create core.yaml (similarly, you'll need to change the certificate and key paths):
peer:
  id: peer1
  networkId: myfn
  listenAddress: 127.0.0.1:7051
  address: 127.0.0.1:7051
  addressAutoDetect: false
  gomaxprocs: -1
  keepalive:
    minInterval: 60s
    client:
      interval: 60s
      timeout: 20s
    deliveryClient:
      interval: 60s
      timeout: 20s
  gossip:
    bootstrap: 127.0.0.1:7051
    externalEndpoint: 127.0.0.1:7051
    useLeaderElection: true
    orgLeader: false
  tls:
    enabled: true
    clientAuthRequired: false
    cert:
      file: crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/server.crt
    key:
      file: crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/server.key
    rootcert:
      file: crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt
    clientRootCAs:
      file:
        - crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt
  authentication:
    timewindow: 15m
  fileSystemPath: /home/fabric-release/data
  mspConfigPath: /home/fabric-release/mynetwork/crypto-config/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp
  localMspId: Org1MSP
  client:
    connTimeout: 3s
  deliveryclient:
    reconnectTotalTimeThreshold: 3600s
    connTimeout: 3s
  profile:
    enabled: false
    listenAddress: 0.0.0.0:6060
  handlers:
    authFilters:
      - name: DefaultAuth
      - name: ExpirationCheck
    decorators:
      - name: DefaultDecorator
    endorsers:
      escc:
        name: DefaultEndorsement
        library:
    validators:
      vscc:
        name: DefaultValidation
        library:
  discovery:
    enabled: true
    authCacheEnabled: true
    authCacheMaxSize: 1000
    authCachePurgeRetentionRatio: 0.75
    orgMembersAllowedAccess: false
vm:
  endpoint: unix:///var/run/docker.sock
  docker:
    tls:
      enabled: false
      ca:
        file:
      cert:
        file:
      key:
        file:
    attachStdout: false
    hostConfig:
      NetworkMode: host
      Dns:
        # - 192.168.0.1
      LogConfig:
        Type: json-file
        Config:
          max-size: "50m"
          max-file: "5"
      Memory: 2147483648
chaincode:
  id:
    path:
    name:
  builder: $(DOCKER_NS)/fabric-ccenv:latest
  pull: true
  java:
    runtime: $(DOCKER_NS)/fabric-javaenv:$(ARCH)-1.4.1
    #runtime: $(DOCKER_NS)/fabric-javaenv:$(ARCH)-$(PROJECT_VERSION)
  startuptimeout: 300s
  executetimeout: 30s
  mode: net
  keepalive: 0
  system:
    cscc: enable
    lscc: enable
    escc: enable
    vscc: enable
    qscc: enable
  logging:
    level: info
    shim: warning
    format: '%{color}%{time:2006-01-02 15:04:05.000 MST} [%{module}] %{shortfunc} -> %{level:.4s} %{id:03x}%{color:reset} %{message}'
ledger:
  blockchain:
  state:
    stateDatabase: goleveldb
    totalQueryLimit: 100000
    couchDBConfig:
      couchDBAddress: 127.0.0.1:5984
      username:
      password:
      maxRetries: 3
      maxRetriesOnStartup: 12
      requestTimeout: 35s
      internalQueryLimit: 1000
      maxBatchUpdateSize: 1000
      warmIndexesAfterNBlocks: 1
      createGlobalChangesDB: false
  history:
    enableHistoryDatabase: true
Start the peer node on terminal-2
./bin/peer node start
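One assumption worth making explicit: the binaries resolve orderer.yaml/core.yaml through FABRIC_CFG_PATH, and it was only exported in terminal-1 above, so set it again in every new terminal before starting a node:

export FABRIC_CFG_PATH=$PWD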
Next, open another terminal (terminal-3) and go to the mynetwork folder. Run the following commands in sequence.
export CORE_PEER_MSPCONFIGPATH=/home/fabric-release/mynetwork/crypto-config/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp
export CORE_PEER_ADDRESS=127.0.0.1:7051
export CORE_PEER_LOCALMSPID="Org1MSP"
export CORE_PEER_TLS_ROOTCERT_FILE=/home/fabric-release/mynetwork/crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt
export CHANNEL_NAME=mychannel
Create the channel:
./bin/peer channel create -o 127.0.0.1:7050 -c $CHANNEL_NAME -f ./channel-artifacts/channel.tx --tls --cafile /home/fabric-release/mynetwork/crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem
Join the channel:
./bin/peer channel join -b mychannel.block
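The mychannel.block consumed here is written into the current directory by the peer channel create command above. If the join succeeded, listing the joined channels from the same terminal should now show mychannel:

./bin/peer channel list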
If you made it this far, your network is up and you can start installing chaincodes. I am still experimenting with chaincodes myself, but I hope this helps.
If you download this script (and set the execute permission):
https://raw.githubusercontent.com/hyperledger/fabric/master/scripts/bootstrap.sh
and then run it with -h, you will see the options to suppress the download of binaries or Docker images.
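For example (flag names recalled from the 1.4-era script, so double-check them against the -h output of the version you download), a binaries-only fetch might look like:

curl -sSL https://raw.githubusercontent.com/hyperledger/fabric/master/scripts/bootstrap.sh -o bootstrap.sh
chmod +x bootstrap.sh
./bootstrap.sh -d -s    # -d skips the Docker images, -s skips cloning fabric-samples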

how to configure the configtx.yaml file?

I'm trying to build a Hyperledger Fabric network with the following:
Smartforce [Orderer Org]
Falcon.io [ORG1]
Frost.io [ORG2]
I have generated all the cryptographic material using the cryptogen tool and am now looking to build the genesis block using the configtxgen tool.
Here is configtx.yaml:
Profiles:
  TwoOrgOrdererGenesis:
    Orderer:
      <<: *OrdererDefaults
      Organizations:
        - *Smartforce
    Consortiums:
      SampleConsortium:
        Organizations:
          - *BusinessPartner1
          - *BusinessPartner2
  TwoOrgChannel:
    Consortium: SampleConsortium
    Application:
      <<: *ApplicationDefaults
      Organizations:
        - *BusinessPartner1
        - *BusinessPartner2
Organizations:
  - &Smartforce
    Name: smartforce
    ID: SmartforceMSP
    MSPDir: /home/falcon/iq-smartforce/crypto-config/ordererOrganizations/smartforce.io/msp
  - &BusinessPartner1
    Name: BusinessPartner1
    ID: FalconMSP
    MSPDir: /home/falcon/iq-smartforce/crypto-config/peerOrganizations/falcon.io/msp
  - &BusinessPartner2
    Name: BusinessPartner2
    ID: FrostMSP
    MSPDir: /home/frost/iq-smartforce/crypto-config/peerOrganizations/frost.io/msp
Orderer: &OrdererDefaults
  OrdererType: solo
  Addresses:
    - orderer.smartforce.io:7050
  BatchTimeout: 2s
  BatchSize:
    MaxMessageCount: 10
    AbsoluteMaxBytes: 99 MB
    PreferredMaxBytes: 512 KB
  Organizations:
Application: &ApplicationDefaults
  Organizations:
When I run the command:
configtxgen -profile TwoOrgsOrdererGenesis -outputBlock ./channel-artifacts/genesis.block
I get the following error:
2018-12-12 14:55:55.834 IST [common/tools/configtxgen] main -> WARN 001 Omitting the channel ID for configtxgen is deprecated. Explicitly passing the channel ID will be required in the future, defaulting to 'testchainid'.
2018-12-12 14:55:55.834 IST [common/tools/configtxgen] main -> INFO 002 Loading configuration
2018-12-12 14:55:55.834 IST [common/tools/configtxgen/localconfig] Load -> CRIT 003 Error reading configuration: While parsing config: yaml: unknown anchor 'OrdererDefaults' referenced
2018-12-12 14:55:55.834 IST [common/tools/configtxgen] func1 -> CRIT 004 Error reading configuration: While parsing config: yaml: unknown anchor 'OrdererDefaults' referenced
panic: Error reading configuration: While parsing config: yaml: unknown anchor 'OrdererDefaults' referenced [recovered]
panic: Error reading configuration: While parsing config: yaml: unknown anchor 'OrdererDefaults' referenced
goroutine 1 [running]:
github.com/hyperledger/fabric/vendor/github.com/op/go-logging.(*Logger).Panic(0xc4201abe30, 0xc42048fd10, 0x1, 0x1)
/w/workspace/fabric-nightly-release-job-release-1.2-x86_64/gopath/src/github.com/hyperledger/fabric/vendor/github.com/op/go-logging/logger.go:188 +0xbd
main.main.func1()
/w/workspace/fabric-nightly-release-job-release-1.2-x86_64/gopath/src/github.com/hyperledger/fabric/common/tools/configtxgen/main.go:254 +0x1ae
panic(0xc6ea00, 0xc42048fd00)
/opt/go/go1.10.linux.amd64/src/runtime/panic.go:505 +0x229
github.com/hyperledger/fabric/vendor/github.com/op/go-logging.(*Logger).Panic(0xc4201abc80, 0xc420484ae0, 0x2, 0x2)
/w/workspace/fabric-nightly-release-job-release-1.2-x86_64/gopath/src/github.com/hyperledger/fabric/vendor/github.com/op/go-logging/logger.go:188 +0xbd
github.com/hyperledger/fabric/common/tools/configtxgen/localconfig.Load(0x7ffdcf041294, 0x15, 0x0, 0x0, 0x0, 0x1)
/w/workspace/fabric-nightly-release-job-release-1.2-x86_64/gopath/src/github.com/hyperledger/fabric/common/tools/configtxgen/localconfig/config.go:277 +0x469
main.main()
/w/workspace/fabric-nightly-release-job-release-1.2-x86_64/gopath/src/github.com/hyperledger/fabric/common/tools/configtxgen/main.go:265 +0xce7
In YAML, all anchors (the tokens starting with &) need to precede any references to them (via aliases, the tokens starting with *) in the file.
So in the root-level mapping you should put the key Profiles and its value after the keys Organizations, Orderer and Application (and their values):
Organizations:
  - &Smartforce
    Name: smartforce
    ID: SmartforceMSP
    MSPDir: /home/falcon/iq-smartforce/crypto-config/ordererOrganizations/smartforce.io/msp
  - &BusinessPartner1
    Name: BusinessPartner1
    ID: FalconMSP
    MSPDir: /home/falcon/iq-smartforce/crypto-config/peerOrganizations/falcon.io/msp
  - &BusinessPartner2
    Name: BusinessPartner2
    ID: FrostMSP
    MSPDir: /home/frost/iq-smartforce/crypto-config/peerOrganizations/frost.io/msp
Orderer: &OrdererDefaults
  OrdererType: solo
  Addresses:
    - orderer.smartforce.io:7050
  BatchTimeout: 2s
  BatchSize:
    MaxMessageCount: 10
    AbsoluteMaxBytes: 99 MB
    PreferredMaxBytes: 512 KB
  Organizations:
Application: &ApplicationDefaults
  Organizations:
Profiles:
  TwoOrgOrdererGenesis:
    Orderer:
      <<: *OrdererDefaults
      Organizations:
        - *Smartforce
    Consortiums:
      SampleConsortium:
        Organizations:
          - *BusinessPartner1
          - *BusinessPartner2
  TwoOrgChannel:
    Consortium: SampleConsortium
    Application:
      <<: *ApplicationDefaults
      Organizations:
        - *BusinessPartner1
        - *BusinessPartner2
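A quick, Fabric-independent way to verify the anchor ordering after a reshuffle like this is to run the file through any YAML parser, which will fail on a forward-referenced alias much like configtxgen did (assumes PyYAML is installed):

python3 -c 'import yaml; yaml.safe_load(open("configtx.yaml"))'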
