Deploy a REST API for IBM Hyperledger Composer Blockchain - hyperledger-composer

I'm developing a POC on IBM Hyperledger Blockchain. I have a business network developed and deployed in IBM Cloud. I can generate a working REST API locally, but cannot make it work in the cloud, on the deployed IP.
I'm following this guide:
https://ibm-blockchain.github.io/interacting/
You just have to execute the following command:
./create/create_composer-rest-server.sh --business-network-card MY_BIZNET_CARD_NAME
But it doesn't deploy anything, and I get the following output (more related to Kubernetes than blockchain):
Preparing yaml file for create composer-rest-server
Creating composer-rest-server pod
Running: kubectl create -f /Users/sm/jsblock/ibm-container-service/cs-offerings/scripts/../kube-configs/composer-rest-server.yaml
The connection to the server localhost:8080 was refused - did you specify the right host or port?
the server doesn't have a resource type "svc"
Creating composer-rest-server service
Running: kubectl create -f /Users/sm/jsblock/ibm-container-service/cs-offerings/scripts/../kube-configs/composer-rest-server-services-free.yaml
The connection to the server localhost:8080 was refused - did you specify the right host or port?
Composer rest server created successfully
Any ideas? Thanks very much.

You need to ensure you have a correct kube config set up. Step 10 in https://ibm-blockchain.github.io/setup/ provides the details for setting up KUBECONFIG, as the error suggests that it is either not configured or not configured correctly.
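As a rough sketch (the cluster name and config path below are placeholders, not the exact values from the guide), you point kubectl at the downloaded cluster config before running the script:

# download the cluster config with the IBM Cloud container-service plugin (cluster name is a placeholder)
bx cs cluster-config <your-cluster-name>
# export the KUBECONFIG path printed by the previous command (path shown here is illustrative)
export KUBECONFIG=/path/to/kube-config-<region>-<your-cluster-name>.yml
# sanity check: this should list worker nodes instead of refusing the connection on localhost:8080
kubectl get nodes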

The document you refer to https://ibm-blockchain.github.io/interacting/ is being updated and should be available soon.
When you run the command ./create/create_composer-rest-server.sh --business-network-card MY_BIZNET_CARD_NAME, the card should be the Network Admin card for the network you deployed, NOT the PeerAdmin card, so it will be something like ./create/create_composer-rest-server.sh --business-network-card admin@perishable-network

Looks like it's an issue of access control. You should make sure you are running with the Local Admin configuration; it will help you to run queries.

Related

Kubernetes fails to start on Docker Desktop without direct internet access

I'm running Docker Desktop 3.6.0 on Windows 10 with WSL2.
When I try to enable Kubernetes I only see "Failed to start" within the Docker Desktop UI.
Docker itself works fine. Not sure how I can get any further logs.
Here is the output from kubectl version:
kubectl version
Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.3", GitCommit:"ca643a4d1f7bfe34773c74f79527be4afd95bf39", GitTreeState:"clean", BuildDate:"2021-07-15T21:04:39Z", GoVersion:"go1.16.6", Compiler:"gc", Platform:"windows/amd64"}
Error from server (InternalError): an error on the server ("") has prevented the request from succeeding
From other posts it seems that an internet connection is required for the initial setup:
https://stackoverflow.com/a/52765732/1100559
https://stackoverflow.com/a/63318739/1100559
A direct internet connection is not possible in my work environment; I can only manually copy the required images onto my PC.
I also do not have admin access.
Is there a way to manually setup Kubernetes on Docker Desktop or somehow indicate where the required images can be found?
I have a nexus Docker repository where I can push required images to.
I have changed ~\.docker\daemon.json and added my Docker repository to insecure-registries. After the first login, Docker is able to pull images from there and run them.
I have already tried resetting Kubernetes and enabling/disabling it. Deleting ~/.kube/config did not work either.
High level answer...
Get a docker registry
If you work for an old skool cool enterprise; use JFrog Artifactory
If you just want to get it to work; use Harbor
GitHub and GitLab (depending on license) have registries available too...
Edit the Docker daemon config on the Kubernetes nodes (your workstation) to only pull from these registries.
if Red Hat: /etc/containers/registries.conf
if Debian: /etc/docker/daemon.json (see the example after this list)
you might be able to hack an /etc/hosts entry too...
Populate the new registry
Run Kubernetes and you should be good to go. Depending on the configuration you choose, you may need to add a registry credential secret.
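For the Debian-style case, a minimal sketch of /etc/docker/daemon.json might look like this (the registry host is a placeholder for your Nexus/Artifactory/Harbor instance; restart the Docker daemon after editing):

/etc/docker/daemon.json:
{
  "insecure-registries": ["nexus.internal.example:5000"],
  "registry-mirrors": ["http://nexus.internal.example:5000"]
}

Note that registry-mirrors only acts as a pull-through mirror for Docker Hub images; anything else has to be pushed to and pulled from your internal registry explicitly.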

SSH timeout error on Azure DevOps CD pipeline

I am getting a timeout error when trying to deploy to a VM instance hosted on AWS. Manually, I can log in using
ssh -i myKeyFile.pem myuser@IP
Once I access the remote machine I can execute some docker commands and everything works fine. But now that I need to automate that in the CD pipeline, I am getting the following error:
2020-06-02T21:37:12.6877276Z Trying to establish an SSH connection to ***#IP:port
2020-06-02T21:38:52.4629461Z ##[error]Failed to connect to remote machine. Verify the SSH service connection details. Error: Error: Timed out while waiting for handshake.
2020-06-02T21:38:52.4685976Z ##[section]Finishing: Run shell commands on remote machine
The steps I follow to make the SSH connection are:
I created a SSH service connection on the project settings in Azure DevOps
I created the CD pipeline
I added a SSH task with the following parameters
When I manually trigger it to test if it works, the release starts fine, but after roughly 1 minute 43 seconds I get the error:
Then, when I review the logs, it is the same error I pasted at the beginning:
[error]Failed to connect to remote machine. Verify the SSH service connection details. Error: Error: Timed out while waiting for handshake
I've increased the handshake timeout setting from the default (20000) to 90000, but no luck.
Has anyone faced this problem before?
It seems there is an ongoing issue with the default agent pools from Azure DevOps. Lots of people have reported this and the Azure DevOps team is working on it at the time this post is being written (I couldn't find the post where all of that is detailed; I will add it later).
The workaround is to create a self-hosted agent.
After it has been created, you will need to re-create your CD pipeline using the new self-hosted agent.
The rest of the SSH task configuration depends on your needs. But if you want to test the SSH connection works, just print something:
echo "I'm connected"
After this your CD pipeline should be working fine.
More details on how to create the Self-Hosted Agent on Windows; there are also links for Linux and Mac.
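For reference, the Windows agent setup boils down to roughly the following (the organization URL, PAT, pool, and agent names are placeholders; see the linked docs for the exact steps):

# run from the folder where the agent package was extracted, e.g. C:\agent
.\config.cmd --url https://dev.azure.com/<your-organization> --auth pat --token <your-PAT> --pool Default --agent MyLocalAgent
# start the agent interactively (it can also be configured to run as a service)
.\run.cmd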
I had a similar issue with a VM in Azure. It turned out I had set the security group to only allow SSH in from my local network, and the Azure DevOps agents obviously run in a Microsoft network and were coming from a different IP address range. The solution was to open up SSH to all source IP addresses. You can get the list of IP address ranges DevOps agents use, but they appear to change every week, which isn't very helpful.
See https://learn.microsoft.com/en-us/azure/devops/organizations/security/allow-list-ip-url?view=azure-devops#microsoft-hosted-agents
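If you go the "open it up" route, a hedged sketch with the Azure CLI would be something like the following (resource group and NSG names are placeholders; consider tightening the source prefix again later rather than leaving it open to the whole internet):

# allow inbound SSH from any source on the VM's network security group (placeholder names)
az network nsg rule create --resource-group <my-rg> --nsg-name <my-nsg> --name AllowSshInbound --priority 300 --direction Inbound --access Allow --protocol Tcp --destination-port-ranges 22 --source-address-prefixes '*'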

Authorization failure when creating new business network in local playground

I am trying to follow the steps of the Composer Playground tutorial for the local version of the playground, using a local Fabric peer.
I have done the prescribed sequence of steps: downloadFabric.sh, startFabric.sh, createPeerAdminCard.sh. Once I bring up the playground, I can see the network card PeerAdmin@hlfv1 showing no business network attached to it.
Then I click on Deploy a New Business Network and select the "vehicles-lifecycle-network" example. The dialog looks similar to the one in the tutorial, but it has additional fields asking for credentials for the network administrator, and I am not sure what to put there.
I tried copying in the key and certificate that were generated by the createPeerAdminCard script, and I also tried the ID and Secret option, putting in PeerAdmin or PeerAdmin@hlfv1. I found this answer which indicates that it does not matter what secret you specify, as the user is already imported into the keystore -- not sure if it is relevant. It certainly did not make any difference.
When I click deploy, the new network seems to appear in the composer dashboard. However, if I click on "Connect Now", a popup shows "Error trying to login and get user context" and [[{"code":400,"message":"Authorization Failure"}]].
It seems I am missing something very basic, but cannot really figure it out on my own.
Edit:
Simple steps to reproduce (assuming basic-sample-network.bna is available locally):
> composer runtime install -c PeerAdmin@hlfv1 -n basic-sample-network
> composer network start -a <path to basic-sample-network.bna> -A admin -c PeerAdmin@hlfv1 -C <path to PeerAdmin certificate> -f admin.card
> composer card import -f admin.card
> composer network ping -c admin@basic-sample-network
Last command produces the same error as above in the console.
Edit 2:
If I open up ~/.composer/cards/PeerAdmin@hlfv1/metadata.json and add a "businessNetwork":"basic-sample-network" parameter, I am able to run composer network ping -c PeerAdmin@hlfv1 successfully, and I can also connect to the network from the Playground -- this will do as a workaround for now. However, I must be doing something wrong with the way I create the new network and its admin card.
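For reference, the workaround from Edit 2 amounts to something like this in ~/.composer/cards/PeerAdmin@hlfv1/metadata.json (the fields other than businessNetwork are only indicative of what the file typically contains):

{
  "version": 1,
  "userName": "PeerAdmin",
  "roles": ["PeerAdmin", "ChannelAdmin"],
  "businessNetwork": "basic-sample-network"
}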
The Playground Tutorial assumes that you are connecting to an Online Hosted Playground hosted on IBM Cloud (Bluemix). For the Online Playground the underlying Fabric is 'Web' - i.e. the Fabric is stored only in the local browser. This document may help explain the different Fabric Runtimes: Typical Solution Architecture
The Local Playground gives you the additional option of deploying a Business Network to an hlfv1 Fabric, using the PeerAdmin card that you created with the createPeerAdminCard.sh script.
After creating the PeerAdmin card you should be able to start Playground locally with the composer-playground command, and you should be able to deploy a Business Network. In this development scenario the credentials for the Network Administrator should be Id and Secret, specifying admin / adminpw. There is no need to run CLI commands prior to starting the local playground. (createPeerAdminCard.sh is not a CLI command but a dev environment setup script - and it should be run.)
If you want to go down the CLI route please see the Developer Tutorial
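For the CLI route, a rough equivalent of the Id-and-Secret flow would be something along these lines (flag spellings vary between Composer releases, so check composer network start --help for your version; the adminpw secret is the development default mentioned above):

# install the runtime and start the network, naming admin/adminpw as the network administrator
composer runtime install -c PeerAdmin@hlfv1 -n basic-sample-network
composer network start -c PeerAdmin@hlfv1 -a basic-sample-network.bna -A admin -S adminpw -f admin.card
# import the generated admin card and check connectivity
composer card import -f admin.card
composer network ping -c admin@basic-sample-network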

Kubernetes Service Resolution from application.properties

I have configured a MySQL cluster service and I am using that service name instead of the hostname in the JDBC URL in my application.properties. It is not resolving. But when I use the minikube URL, it connects correctly. Shouldn't DNS resolution happen for the JDBC URL in application.properties as well for a Java project?
Just as @sfgroups mentioned, it is highly likely that the service has not been properly registered. Maybe you are using a different namespace or the service is simply not available. In order to check that:
Run kubectl get svc and kubectl get endpoints to check whether the service is registered and the mysql pods are selected. It may sound silly, but I advise you to check that the service name you are using is correct.
If it is registered, try kubectl get pods, get the ID of your JDBC pod and launch kubectl exec -ti <ID> -- nslookup <servicename>. This will give you a hint as to whether DNS resolution is working or not.
If it is not resolving, then check in minikube addons list that dns is enabled. If it is disabled, enable it (you will need to wait a little bit) and try again.
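If the service checks out, the JDBC URL in application.properties should reference the service name (or its fully qualified cluster DNS name) rather than the minikube IP. A minimal sketch, assuming Spring Boot and a service called mysql in the default namespace exposing port 3306 with a database named mydb (all placeholders):

# use the Kubernetes service DNS name instead of a node IP
spring.datasource.url=jdbc:mysql://mysql.default.svc.cluster.local:3306/mydb
spring.datasource.username=<user>
spring.datasource.password=<password>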

Informatica : Not able to connect to Integration Service

I am new to Informatica and looking for assistance from the experts here.
I am able to log in to the admin console (http://localhost:6008/administrator/#admin), where I can see that my node is available, my Repository Service is available, and my Integration Service is available.
Through the PowerCenter Designer tool, I am able to view my mappings. I am also able to connect to the PowerCenter Workflow Manager. However, when trying to execute my workflow, it says that it cannot connect to the Integration Service.
I am getting the following error in the log:
CCM_10322
The following error occurred while logging to Log Service: [[DOM_10022] The master gateway node for the domain is not available.
Electing another master gateway. Wait for the election of the master gateway node to complete.
If the problem persists, verify that the master gateway node is running.].
Thanks,
Manish.
Pre-requisites:
1) Once the repository files have been restored, make sure all services (Repository and Integration Services) are up.
2) Make sure the INFA_DOMAINS_FILE environment variable has been created and has the right value, and make sure its path is added to PATH on both the server and client machines.
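On the server side that typically means something like the following in the Informatica user's profile (a sketch only: the install path is a placeholder, INFA_DOMAINS_FILE is the usual variable name, and the client side is set up analogously through the Windows environment variables):

# placeholder install path; adjust to your environment
export INFA_HOME=/opt/Informatica/9.0.1
# domains.infa usually sits in the Informatica installation root
export INFA_DOMAINS_FILE=$INFA_HOME/domains.infa
export PATH=$PATH:$INFA_HOME/server/bin:$INFA_HOME/isp/bin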
Solution: As a last resort, update the domain gateway info on the server and client machines as shown below, then restart the Informatica services and the server. Now it works!
Update the domain gateway info:
On the server:
cd $INFA_HOME/isp/bin
sh infacmd.sh updateGatewayInfo -dn Domain_name -dg Servername.net:6005
On the client:
cd e:\Informatica\9.0.1\clients\PowercenterClient\CommandLineUtilities\Pc\server\bin\
infacmd updateGatewayInfo -dn Domain_name -dg servername.net:6005
Restart the server:
cd $INFA_HOME/tomcat/bin
sh infaservices.sh shutdown
sh startserver.sh shutdown
sh startserver.sh startup
sh infaservices.sh startup
Done. Now check!
You must define one node as the Master Gateway using infacmd or infasetup; then you can run all services from the admin console.

Resources