I am using the Hyperledger Fabric SDK for Node.js to enroll a user. The code I deploy with uses FileKeyValueStore (file-based key-value storage) to store the client's user credentials.
I want to use CouchDBKeyValueStore instead, so that the user key is stored in a CouchDB database instance. What changes do I need to make in the client connection profile configuration file for the credential store, and in the code? Any link to sample code would also help.
There is no built-in support in the connection profile for the CouchDBKeyValueStore, but you can still use the connection profile for the rest of the Fabric network configuration. You then need to use the Client APIs to configure the stores. Something like this:
const Client = require('fabric-client');
const CDBKVS = require('fabric-client/lib/impl/CouchDBKeyValueStore.js');

const client = Client.loadFromConfig('test/fixtures/network.yaml');

// Set the state store (the constructor returns a promise,
// so this must run inside an async function)
const stateStore = await new CDBKVS({ url: 'https://<USERNAME>:<PASSWORD>@<URL>', name: '<DB_NAME>' });
client.setStateStore(stateStore);

// Set the crypto store
const crypto = Client.newCryptoSuite();
const cryptoKS = Client.newCryptoKeyStore(CDBKVS, {
  url: 'https://<USERNAME>:<PASSWORD>@<URL>.cloudant.com',
  name: '<DB_NAME>'
});
crypto.setCryptoKeyStore(cryptoKS);
client.setCryptoSuite(crypto);
Official documentation reference: Store Hyperledger Fabric certificates and keys in IBM Cloudant with Fabric Node SDK
I have a .pfx file that I use for communicating with a web service. I load it from the classpath in the development environment like this:
application.yml
my-config:
certificate: classpath:/certificate/dev/mycertificate.pfx
Service.java
SSLContext sslContext = SSLContext.getInstance(SSL_CONTEXT_PROTOCOL);
KeyManagerFactory keyManagerFactory = KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm());
// A .pfx file is a PKCS12 store
KeyStore keystore = KeyStore.getInstance("PKCS12");
Resource certificateResource = myConfig.getCertificate();
try (InputStream certificateStream = certificateResource.getInputStream()) {
    keystore.load(certificateStream, myConfig.getCertPassword().toCharArray());
}
keyManagerFactory.init(keystore, myConfig.getCertPassword().toCharArray());
sslContext.init(keyManagerFactory.getKeyManagers(), null, null);
requestContext.put(SSL_SOCKET_FACTORY, sslContext.getSocketFactory());
This works fine in the development environment. The problem is, I do not want to push the certificate into the Git repo. I also cannot put the file on the server, because we use Pivotal Application Service for hosting the app. So is there any way I can securely store the certificate file in the config server or anywhere else?
Thanks.
You could put the cert into Spring Cloud Config Server. If you are using Spring Cloud Services (SCS) for VMware Tanzu, you can store the value in CredHub through SCS (the SCS documentation has instructions).
Alternatively, you could store encrypted values in a Git backend, and SCS will decrypt them for you (see the SCS instructions on encrypted properties). You could also store things in Vault, but Vault is not provided by the SCS for VMware Tanzu tile; you'd have to run your own Vault server. Both of these options, I feel, are a bit more work than using SCS's support for CredHub.
If you are trying to use only OSS Spring Cloud Config, you can do that too, but it's more work, more than I can cover here. That said, all three of these options are available there as well:
CredHub backend.
Git + encrypted properties (see the sketch after this list).
Vault backend.
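For the Git + encrypted properties route, here is a minimal sketch, assuming a config server with an encrypt key configured; the property names are illustrative, not from the original question. You base64-encode the .pfx, encrypt it through the config server's /encrypt endpoint, and commit the {cipher}-prefixed ciphertext; clients receive the decrypted value.
# application.yml in the config repo (property names are hypothetical)
my-config:
  # ciphertext from: curl -s http://config-server/encrypt -d "$(base64 -w0 mycertificate.pfx)"
  certificate-data: '{cipher}AQB4f8...'
  cert-password: '{cipher}AQA9k2...'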
Vault and CredHub both have certificate types specifically for storing certificates. I do not believe SCS exposes these options, so you would just be storing the text representation of your certificate.
All of these options assume that you want to use Spring Cloud Config server. If you wanted an option not tied to Spring, you could use the CredHub Service Broker tile. This allows you to store items in CredHub and then present them as bound services. With it, you could create a bound service that represents your certificate, bind that to the apps that require it, and then fetch your certificate from VCAP_SERVICES like any other bound service.
The downside of this approach is that VCAP_SERVICES is an environment variable, so it stores text only, and there are limits to how much information can be stored.
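For illustration, a credential stored through the CredHub Service Broker might surface in VCAP_SERVICES roughly like this; the exact shape depends on the broker version, and the service and key names here are hypothetical:
{
  "credhub": [
    {
      "name": "my-certificate-service",
      "credentials": {
        "certificate": "...base64-encoded .pfx text..."
      }
    }
  ]
}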
We have a web API (.NET 5) which accesses some secrets from the Azure Key Vault.
On my local machine for development, since I am the owner of the newly created vault, my account has access privileges to Key Vault.
Hence I selected my account through VS --> Tools --> Options --> Azure Service Authentication --> Account Selection --> "myemail@.com".
I have the code below to fetch secrets from Key Vault and access them through configuration, like we access appsettings values.
public static IHostBuilder CreateHostBuilder(string[] args) =>
    Host.CreateDefaultBuilder(args)
        .ConfigureAppConfiguration((context, config) =>
        {
            var appSettings = config.Build();
            var credentialOptions = new DefaultAzureCredentialOptions();
            var credential = new DefaultAzureCredential(credentialOptions);
            config.AddAzureKeyVault(new Uri(appSettings["Url:KeyVault"]), credential);
        })
        .ConfigureWebHostDefaults(webBuilder =>
        {
            webBuilder.UseStartup<Startup>();
        });
We access the secret value like _configuration["secret"] in the service and controller layers.
My queries are:
1. If I deploy this code to an on-premises server, how will it work (our dev env is an on-premises server)?
2. If I deploy this web API to Azure, how do I use the AD app identity to access the Key Vault without any code change? We have an AD app registered which has read access to this particular vault.
I want the code to work seamlessly both locally and in Azure.
DefaultAzureCredential is the new, unified way to connect and retrieve tokens from Azure Active Directory, and it can be used with resources that need them.
The DefaultAzureCredential gets the token based on the environment the application is running in.
The following credential types, if enabled, will be tried in order: EnvironmentCredential, ManagedIdentityCredential, SharedTokenCacheCredential, InteractiveBrowserCredential.
If I deploy this code to an on-premises server, how will it work (dev env is an on-premises server)?
When executing this on a development machine (an on-premises server), you need to first configure the environment, setting the variables AZURE_CLIENT_ID, AZURE_TENANT_ID, and AZURE_CLIENT_SECRET to the appropriate values for your service principal (the app registered in Azure AD); these are the values EnvironmentCredential picks up.
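For example, on a Linux host (the values are placeholders for your service principal):
# Service principal values from your Azure AD app registration
export AZURE_CLIENT_ID=<app-registration-client-id>
export AZURE_TENANT_ID=<azure-ad-tenant-id>
export AZURE_CLIENT_SECRET=<client-secret>
On Windows, set the same three variables as system environment variables.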
If I deploy this web app to Azure, how do I use the AD app identity to access the Key Vault without any code change? We have an AD app registered which has read access to this vault.
You can enable a system-assigned managed identity for your web app. Add an access policy for this identity in your Azure Key Vault to read the secrets. Now, without making any changes in your code, your web app will be able to read the Key Vault secrets.
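As a sketch, that setup with the Azure CLI might look like this (the resource names are placeholders):
# Enable a system-assigned managed identity on the web app
az webapp identity assign --name <app-name> --resource-group <resource-group>
# Grant that identity read access to secrets in the vault
az keyvault set-policy --name <vault-name> --object-id <principal-id-returned-above> --secret-permissions get list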
I have a Python 3.8 application deployed on a Kubernetes cluster on Azure that has to access a blob storage container in an account in a different resource group. I'm using a managed identity to authenticate and query the container:
from azure.identity import ManagedIdentityCredential
from azure.storage.blob import BlobServiceClient, ContainerClient

creds = ManagedIdentityCredential()
url_template = task_config["ACCOUNT_ADDRESS_TEMPLATE"]
account_name = task_config["BLOB_STORAGE_ACCOUNT"]
account_url = url_template.replace("*", account_name)
blob_service_client = BlobServiceClient(account_url=account_url, credential=creds)
if container not in [c.name for c in blob_service_client.list_containers()]:
    raise BlobStorageContainerDoesNotExistError(
        f"Container {container} does not exist"
    )
self.client: ContainerClient = blob_service_client.get_container_client(
    container=container
)
I have verified that the managed identity has been assigned the Storage Blob Data Contributor role on the storage account, and also at the level of the resource group. I have verified that the token generated when instantiating the ManagedIdentityCredential() object references the right managed identity, and I have whitelisted the outbound IP (and every other possible IP, just in case) of my Python application. Nevertheless, I keep getting this error when attempting to list the containers in the account:
azure.core.exceptions.HttpResponseError: Operation returned an invalid status 'This request is not authorized to perform this operation.'
Could anyone point me in the right direction?
Specs:
azure-identity = "1.5"
azure-storage-blob = "12.8.1"
python = "3.8"
platform: Linux Docker containers running on a Kubernetes cluster deployed on Azure.
I have tested this in my environment.
It seems your Storage Account is configured to allow access from selected networks only.
Please make sure to allow access from your AKS VMSS virtual network.
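For example, with the Azure CLI (the resource names are placeholders):
# Add the AKS node subnet to the storage account's network rules
az storage account network-rule add --resource-group <rg> --account-name <storage-account> --vnet-name <aks-vnet> --subnet <aks-subnet>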
Then you can use the Python script below to list the blob containers in the Storage Account:
from azure.identity import ManagedIdentityCredential
from azure.storage.blob import BlobServiceClient

creds = ManagedIdentityCredential()
blob_service_client = BlobServiceClient(account_url="https://StorageAccountName.blob.core.windows.net/", credential=creds)
for container in blob_service_client.list_containers():
    print(container.name)
Is there any option to create a channel dynamically via Composer?
This would be similar to creating channels via SDK code. I am unable to find documentation on this on the Composer tutorial site.
(Edited response:) Firstly, I fully endorse what david_k has written. The method getNativeAPI() in the Composer client enables access to the Fabric client API from composer-client after connecting to an existing business network. That is, the Composer client APIs (the admin connection and business network connection, specifically) offer access to the underlying Fabric client API, e.g. calling the client API method getChannel to read channel info:
const { BusinessNetworkConnection } = require('composer-client');

const bnc = new BusinessNetworkConnection();
await bnc.connect('admin@sample-network');
const fc = bnc.getNativeAPI();
const channel = fc.getChannel('defaultchannel');
const info = await channel.queryInfo();
console.log('block height', info.height);
Composer is not about creating and managing a Hyperledger Fabric network. It is a business network framework that utilises a pre-defined Fabric network and requires an already created channel, so it will not provide these kinds of Hyperledger Fabric admin capabilities. As you correctly state, the Fabric Node SDK provides that kind of administrative capability, so that is the API you should use to perform activities such as creating channels, joining peers to channels, or configuration updates.
As Paul states, it is possible to gain access to the underlying Fabric Node SDK client instance which Composer is currently using to interact with the Fabric network, but that requires an already existing business network, so interacting with the Node SDK via this route may not be applicable. A sketch of channel creation with the Node SDK follows.
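For reference, here is a minimal sketch of creating a channel directly with fabric-client; the file names, channel name, and orderer name are assumptions, and the channel transaction envelope would come from the configtxgen tool:

const Client = require('fabric-client');
const fs = require('fs');

async function createChannel() {
  // Assumes the connection profile defines the orderer and an admin identity
  const client = Client.loadFromConfig('network.yaml');
  // Envelope generated by: configtxgen -outputCreateChannelTx mychannel.tx
  const envelope = fs.readFileSync('mychannel.tx');
  const config = client.extractChannelConfig(envelope);
  // Signed with the client's current (admin) user context
  const signature = client.signChannelConfig(config);
  const response = await client.createChannel({
    config,
    signatures: [signature],
    name: 'mychannel',
    orderer: 'orderer.example.com',
    txId: client.newTransactionID()
  });
  console.log(response.status); // 'SUCCESS' if the orderer accepted the request
}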
I am trying to build a solution using the UCMA SDK but do not have access to the config store. Is it possible to use UCMA without it? I have a username/password I can use to log into the Lync network, and thought perhaps I could access things that way.
Yep, you can do this with a UserEndpoint. It doesn't require any replication with the config store (as long as you have a username and password, which you've said you have).
I have a comparison between application and user endpoints here: http://blog.thoughtstuff.co.uk/2014/01/ucma-endpoints-choosing-between-application-and-user-endpoints/
and a worked example of using User Endpoints to send an IM here: http://blog.thoughtstuff.co.uk/2013/03/creating-ucma-applications-with-a-userapplication-instance-example-sending-ims/
UCMA applications can run in two different modes:
1. Untrusted (Client) Application.
In this mode you can't create ApplicationEndpoints, but you can create UserEndpoints if you have the SIP address and password for the user.
2. Trusted (Server) Application.
In this mode you can create ApplicationEndpoints, and you can impersonate any user with a UserEndpoint without needing the user's password.
There are two types of setups for Trusted Applications:
2.1. Auto Provisioned Trusted Application
This one is very easy to set up in code but very hard to set up to run on the machine. I don't really recommend this setup, as the machine setup requirements are very high.
2.2. Manual Provisioned Trusted Application
This one has a lot more "setup" code but is easier to set up a machine to run on. I would recommend this setup, as I find it far easier overall.
Both types of Trusted Applications require you to set up the Trusted Application details within Lync before you can run them.
Which UCMA application setup you use is based on how you configure the CollaborationPlatform instance.
Basic Untrusted (Client) Application:
var clientPlatformSettings = new ClientPlatformSettings("lync.front.end.server.address", SipTransportType.Tls);
var collaborationPlatform = new CollaborationPlatform(clientPlatformSettings);
...
await Task.Factory.FromAsync(collaborationPlatform.BeginStartup, collaborationPlatform.EndStartup, null);
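Establishing a UserEndpoint on that client platform might look like this as a sketch; the SIP address, server address, port, and credentials are placeholders:
var userEndpointSettings = new UserEndpointSettings("sip:user@yourdomain.com", "lync.front.end.server.address", 5061);
// The user's AD credentials -- no config store replication required
userEndpointSettings.Credential = new System.Net.NetworkCredential("username", "password", "domain");
var userEndpoint = new UserEndpoint(collaborationPlatform, userEndpointSettings);
await Task.Factory.FromAsync(userEndpoint.BeginEstablish, userEndpoint.EndEstablish, null);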
Auto Provisioned Trusted Application:
var serverPlatformSettings = new ProvisionedApplicationPlatformSettings("lync.front.end.server.address", "trusted application id");
var collaborationPlatform = new CollaborationPlatform(serverPlatformSettings);
...
await Task.Factory.FromAsync(collaborationPlatform.BeginStartup, collaborationPlatform.EndStartup, null);
Manual Provisioned Trusted Application:
var certificate = CertificateHelper.GetLocalCertificate("trusted application pool fqdn");
var settings = new ServerPlatformSettings("lync.front.end.server.address", Dns.GetHostEntry("localhost").HostName, trusted_application_port, trusted_application_gruu, certificate);
var collaborationPlatform = new CollaborationPlatform(settings);
...
await Task.Factory.FromAsync(collaborationPlatform.BeginStartup, collaborationPlatform.EndStartup, null);
There are a lot of details missing here. Once you know what type of UCMA application you want to develop, you can search the internet for specific examples of that type.