I understand that it is possible to specify the endorsement policy using the -o or -O options during composer network deploy or composer network start, but given that the Composer connection profile contains the connection information for only one MSP/Org, how does Composer send the transaction proposal to the peers of the other Orgs in the endorsement policy?
The connection profile has information which represents the organisation to which you belong (mspID), but this doesn't restrict you from sending transaction proposals to other organisations' peers. To do so, all you need to do is add those peer definitions to your connection profile, but do not include the eventURL for those peers, as you are unlikely to be authorised to listen for events from them. So, for example, your connection profile may look like
{
  "type": "hlfv1",
  "orderers": [
    { "url": "grpc://consortium_orderer:7050" }
  ],
  "ca": {
    "url": "http://myorg_ca:7054",
    "name": "ca.example.com"
  },
  "peers": [
    {
      "requestURL": "grpc://myorg_peer:7051",
      "eventURL": "grpc://myorg_peer:7053"
    },
    {
      "requestURL": "grpc://otherorg_peer:7051"
    }
  ],
  "keyValStore": "/home/vagrant/.composer-credentials",
  "channel": "mychannel",
  "mspID": "Org1MSP",
  "timeout": 300
}
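The endorsement policy itself is supplied separately via the -o/-O options the question mentions (one form being something like -o endorsementPolicyFile=endorsement-policy.json), and Composer hands it to Fabric in the standard Node SDK policy format. As a rough sketch only, assuming a two-org channel whose MSP IDs are Org1MSP and Org2MSP, a policy requiring an endorsement from a member of each organisation could look like:

{
  "identities": [
    { "role": { "name": "member", "mspId": "Org1MSP" } },
    { "role": { "name": "member", "mspId": "Org2MSP" } }
  ],
  "policy": {
    "2-of": [
      { "signed-by": 0 },
      { "signed-by": 1 }
    ]
  }
}

With both peers defined in the connection profile above, Composer can send the proposal to each of them so that such a policy can be satisfied.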
I'm working on Cognos Dashboard Embedded using the reference from Cognos Dashboard Embedded, but instead of CSV I'm working with JDBC data sources.
I'm trying to connect to the JDBC data source as follows:
"module": {
"xsd": "https://ibm.com/daas/module/1.0/module.xsd",
"source": {
"id": "StringID",
"jdbc": {
"jdbcUrl": "jdbcUrl: `jdbc:db2://DATABASE-HOST:50000/YOURDB`",
"driverClassName": "com.ibm.db2.jcc.DB2Driver",
"schema": "DEFAULTSCHEMA"
},
"user": "user_name",
"password": "password"
},
"table": {
"name": "ROLE",
"description": "description of the table for visual hints ",
"column": [
{
"name": "ID",
"description": "String",
"datatype": "BIGINT",
"nullable": false,
"label": "ID",
"usage": "identifier",
"regularAggregate": "countDistinct",
},
{
"name": "NAME",
"description": "String",
"datatype": "VARCHAR(100)",
"nullable": true,
"label": "Name",
"usage": "identifier",
"regularAggregate": "countDistinct"
}
]
},
"label": "Module Name",
"identifier": "moduleId"
}
Note: my database is hosted on a private network, not on a public IP address.
So when I add the above code to add the data source, the data does not load from my DB.
Even though I specified the correct user and password for the JDBC connection in the code above, when I drag and drop any field from the data source it opens a popup which asks me for a userID and password.
Even after I fill in the userID and password details in the popup, I am still unable to load the data.
Errors:
1. When any module tries to fetch data it calls the API
'https://dde-us-south.analytics.ibm.com/daas/v1/data?moduleUrl=%2Fda......'
but in my case this API fails with the error Status Code: 403 Forbidden.
2. In SignOnDialog.js, at line 98, the call to the saveDataSourceCredential method fails and it says saveDataSourceCredential is not a function.
Expectation:
It should not open a popup asking for a userID and password; the data should load directly, just as it does for a database hosted on a public IP/domain.
This does not work in general. If you are using any type of functionality hosted outside your network that needs to access an API or data on your private network, there needs to be some communication channel.
That channel could be established by setting up a VPN, by using products like IBM Secure Gateway to create a client/server connection between the IBM Cloud and your Db2 host, or even by setting up a direct link between your company network and the (IBM) cloud.
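Once such a tunnel exists, the module's jdbc source would point at the tunnel endpoint rather than at the private host. As a sketch only, with a hypothetical Secure Gateway host and port standing in for whatever your gateway actually exposes:

"jdbc": {
  "jdbcUrl": "jdbc:db2://cap-sg-prd-1.securegateway.appdomain.cloud:12345/YOURDB",
  "driverClassName": "com.ibm.db2.jcc.DB2Driver",
  "schema": "DEFAULTSCHEMA"
}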
I followed the steps documented here to convert my existing ARM template to use the commonname setting instead of thumbprint. The deployment was successful and I was able to connect to the Service Fabric Explorer using my browser after the typical certificate selection popup. Next, I tried to deploy an application to the cluster just as I had previously. Even though I can see the cluster connection endpoint URI in the Visual Studio publish Service Fabric Application dialog, VS fails to connect to the cluster. Before, I would get a prompt to permit VS to access the local certificate. Does anyone know how to get VS to deploy an application to a Service Fabric cluster set up using the certificate common name?
Extracts from the MS link above:
"virtualMachineProfile": {
"extensionProfile": {
"extensions": [`enter code here`
{
"name": "[concat('ServiceFabricNodeVmExt','_vmNodeType0Name')]",
"properties": {
"type": "ServiceFabricNode",
"autoUpgradeMinorVersion": true,
"protectedSettings": {
"StorageAccountKey1": "[listKeys(resourceId('Microsoft.Storage/storageAccounts', variables('supportLogStorageAccountName')),'2015-05-01-preview').key1]",
"StorageAccountKey2": "[listKeys(resourceId('Microsoft.Storage/storageAccounts', variables('supportLogStorageAccountName')),'2015-05-01-preview').key2]"
},
"publisher": "Microsoft.Azure.ServiceFabric",
"settings": {
"clusterEndpoint": "[reference(parameters('clusterName')).clusterEndpoint]",
"nodeTypeRef": "[variables('vmNodeType0Name')]",
"dataPath": "D:\\SvcFab",
"durabilityLevel": "Bronze",
"enableParallelJobs": true,
"nicPrefixOverride": "[variables('subnet0Prefix')]",
"certificate": {
"commonNames": [
"[parameters('certificateCommonName')]"
],
"x509StoreName": "[parameters('certificateStoreValue')]"
}
},
"typeHandlerVersion": "1.0"
}
},
and
{
  "apiVersion": "2018-02-01",
  "type": "Microsoft.ServiceFabric/clusters",
  "name": "[parameters('clusterName')]",
  "location": "[parameters('clusterLocation')]",
  "dependsOn": [
    "[concat('Microsoft.Storage/storageAccounts/', variables('supportLogStorageAccountName'))]"
  ],
  "properties": {
    "addonFeatures": [
      "DnsService",
      "RepairManager"
    ],
    "certificateCommonNames": {
      "commonNames": [
        {
          "certificateCommonName": "[parameters('certificateCommonName')]",
          "certificateIssuerThumbprint": ""
        }
      ],
      "x509StoreName": "[parameters('certificateStoreValue')]"
    },
    ...
I found the solution for Visual Studio. I needed to update the PublishProfiles/Cloud.xml file: I replaced ServerCertThumbprint with ServerCommonName, and then used the certificate CN for the new property and for the existing FindValue property. Additionally, I changed the FindType property to FindBySubjectName. I am now able to successfully connect and publish my application to the cluster.
<ClusterConnectionParameters
ConnectionEndpoint="sf-commonnametest-scus.southcentralus.cloudapp.azure.com:19000"
X509Credential="true"
ServerCommonName="sfrpe2eetest.southcentralus.cloudapp.azure.com"
FindType="FindBySubjectName"
FindValue="sfrpe2eetest.southcentralus.cloudapp.azure.com"
StoreLocation="CurrentUser"
StoreName="My" />
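Outside Visual Studio, the same common-name based connection can be checked from PowerShell with the Service Fabric SDK's Connect-ServiceFabricCluster cmdlet; this is just a sketch reusing the endpoint and CN values from the profile above:

Connect-ServiceFabricCluster `
    -ConnectionEndpoint "sf-commonnametest-scus.southcentralus.cloudapp.azure.com:19000" `
    -X509Credential `
    -ServerCommonName "sfrpe2eetest.southcentralus.cloudapp.azure.com" `
    -FindType FindBySubjectName `
    -FindValue "sfrpe2eetest.southcentralus.cloudapp.azure.com" `
    -StoreLocation CurrentUser `
    -StoreName My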
I have a web app registered on Azure with the goal of being able to read and write the calendars of other users. To do so, I set these permissions for this app on Azure.
However, when I try to, for example, create a new event for a given user, I get an error message. Here's what I'm using:
Endpoint
https://graph.microsoft.com/v1.0/users/${requester}/calendar/events
HTTP Header
Content-Type: application/json
Request Body
{
  "subject": "${subject}",
  "body": {
    "contentType": "HTML",
    "content": "${remarks}"
  },
  "start": {
    "dateTime": "${startTime}",
    "timeZone": "${timezone}"
  },
  "end": {
    "dateTime": "${endTime}",
    "timeZone": "${timezone}"
  },
  "location": {
    "displayName": "${spaceName}",
    "locationEmailAddress": "${spaceEmail}"
  },
  "attendees": [
    {
      "emailAddress": {
        "address": "${spaceEmail}",
        "name": "${spaceName}"
      },
      "type": "resource"
    }
  ]
}
Error message
{
"error": {
"code": "ErrorItemNotFound",
"message": "The specified object was not found in the store.",
"innerError": {
"request-id": "XXXXXXXXXXXXXXXX",
"date": "2018-07-11T09:16:19"
}
}
}
Is there something I'm missing? Thanks in advance for any help!
Solution update
I managed to solve the problem by following the steps described in this link:
https://developer.microsoft.com/en-us/graph/docs/concepts/auth_v2_service
From your screenshot it's visible that you used an application permission (although it would be nice to include this information in your question).
Depending on the kind of permission you have granted, you need to use the proper flow to obtain an access token (on behalf of a user, or as a service). For application permissions you have to use the flow for a service, not the on-behalf-of-a-user flow.
You can also check your token using jwt.io and make sure its payload contains the appropriate role. If it doesn't, it's very likely you used the incorrect flow.
Regarding the token's expiration time, you may have found information about the refresh token (for example here). Keep in mind that it applies only to rights granted on behalf of a user. For access without a user you should make sure that you know when your token is going to expire and request a new one accordingly.
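For reference, the app-only (client credentials) token request against the v2.0 endpoint looks roughly like the following; the tenant-id, client-id and client-secret values are placeholders for your own app registration:

POST https://login.microsoftonline.com/{tenant-id}/oauth2/v2.0/token
Content-Type: application/x-www-form-urlencoded

client_id={client-id}
&scope=https%3A%2F%2Fgraph.microsoft.com%2F.default
&client_secret={client-secret}
&grant_type=client_credentials

The access_token returned by this request is what you then send in the Authorization header of the calendar call above.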
I am registering an external service in Consul through the Catalog API http://127.0.0.1:8500/v1/catalog/register with a payload as follows:
{
  "Datacenter": "dc1",
  "Node": "pedram",
  "Address": "www.google.com",
  "Service": {
    "ID": "google",
    "Service": "google",
    "Address": "www.google.com",
    "Port": 80
  },
  "Check": {
    "Node": "pedram",
    "CheckID": "service:google",
    "Status": "passing",
    "ServiceID": "google",
    "script": "curl www.google.com > /dev/null 2>&1",
    "interval": "10s"
  }
}
The external service registers successfully and I see it in the list of registered services, but after a while it disappears. It seems that it gets unregistered automatically.
I am running Consul in -dev mode.
What's the problem?
I found that I should register external services in a separate node. My application's local services are registered in a node named
"Node": "pedram"
When I register external services in this node, they get removed automatically.
But when I register my external services in a new node, all the new external services stay registered durably and are ready to be used like all other local services.
My new payload is as follows:
{
  "Datacenter": "dc1",
  "Node": "newNode",
  "Address": "www.google.com",
  "Service": {
    "ID": "google",
    "Service": "google",
    "Address": "www.google.com",
    "Port": 80
  },
  "Check": {
    "Node": "newNode",
    "CheckID": "service:google",
    "Status": "passing",
    "ServiceID": "google"
  }
}
This is expected behavior. From the Consul Anti-Entropy docs:
If any services or checks exist in the catalog that the agent is not aware of, they will be automatically removed to make the catalog reflect the proper set of services and health information for that agent. Consul treats the state of the agent as authoritative; if there are any differences between the agent and catalog view, the agent-local view will always be used.
In your setup, the agent on the host 'pedram' is not aware of the service registration, so the anti-entropy strategy removes the service.
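Because the agent-local view is authoritative, one way to make a registration stick on that node is to register it through the agent HTTP API of the 'pedram' agent rather than the catalog endpoint. A rough sketch of such a request (the HTTP check here is an assumption, swapped in for the original script check):

PUT http://127.0.0.1:8500/v1/agent/service/register

{
  "ID": "google",
  "Name": "google",
  "Address": "www.google.com",
  "Port": 80,
  "Check": {
    "HTTP": "https://www.google.com",
    "Interval": "10s"
  }
}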
You shouldn't be using -dev mode except for testing/playing around. For your health check, I'd recommend not using a script check such as "script": "curl www.google.com > /dev/null 2>&1".
Instead I'd recommend using an HTTP health check:
"http": "https://www.google.com",
More about health checks is available here: https://www.consul.io/docs/agent/checks.html
Also, you should probably move to HTTPS (on port 443) if you can.
It also might help to save this as a .json file and let Consul read it as part of its startup, as I'm guessing you want this to be a long-running external service. You can do that with a command like:
/usr/local/bin/consul agent -config-dir=/etc/consul/consul.d
and every .json file in /etc/consul/consul.d/ will be read as part of its config. If you change the files, consul reload will pick up the changes. A sample service definition for that directory is sketched below.
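As a sketch (the file name, say google.json, and the paths are assumptions), a service definition with an HTTP check placed in that directory could look like:

{
  "service": {
    "name": "google",
    "address": "www.google.com",
    "port": 80,
    "check": {
      "http": "https://www.google.com",
      "interval": "10s"
    }
  }
}

Note that this registers the service against the local agent rather than through the catalog endpoint, which also keeps anti-entropy happy.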
I'd make those changes (not running in -dev mode, etc.) and see if the problem still exists. I'm guessing it won't.
I use the GCE v1 REST API to launch instances and rarely use the Google Developer Console. I created a Windows VM instance through the REST API and passed the initial Windows username and password in the metadata property. The Windows VM was created successfully, and I was also able to get those credentials back in the response, the same ones I sent while creating the VM. But I couldn't connect to the VM using that username and password. I read the doc about how to reset the password from the Developer Console, and that works fine, but we would like to use the REST APIs for everything, that is, to create/manage GCE resources. So can anyone help fix this issue?
The image I used to launch a vm is "windows-server-2012-r2-dc-v20150511"
"metadata": {
"items": [
{
"key": "gce-initial-windows-user",
"value": "administrator"
},
{
"key": "gce-initial-windows-password",
"value": "twxsFL3U-/,*"
}
]
}
Note: I created many VMs through the REST API and all instances have the same issue. When resetting the password from the Developer Console, it works.
The credentials didn't work, and while I am able to reset them from the Developer Console, that will not fix my problem, because we have our own system to launch VMs and other services, and I'm building a connector for it. Here is the sample request I send from a node.js script.
Request:
***********
options : {
  "host": "www.googleapis.com",
  "path": "/compute/v1/projects/project-id/zones/us-central1-f/instances",
  "method": "POST",
  "headers": {
    "Authorization": "Bearer ya29.lQGsX8hwdWKaDDwOFnDIZB49eir-c2TUBqYpaVvir7C430Quy8kIWsL4rXv7qjSVQZJKK5e1BdxNug",
    "Content-Type": "application/json charset=utf-8"
  }
}
body : {
  "name": "rin2qvxkz-e",
  "zone": "https://www.googleapis.com/compute/v1/projects/project-id/zones/us-central1-f",
  "machineType": "https://www.googleapis.com/compute/v1/projects/project-id/zones/us-central1-f/machineTypes/n1-standard-2",
  "metadata": {
    "items": [
      {
        "key": "gce-initial-windows-user",
        "value": "administrator"
      },
      {
        "key": "gce-initial-windows-password",
        "value": "%1zuV27$.:?*"
      }
    ]
  },
  "tags": {
    "items": [
      "default"
    ]
  },
  "disks": [
    {
      "type": "PERSISTENT",
      "boot": true,
      "mode": "READ_WRITE",
      "deviceName": "rin2qvxkz-e",
      "autoDelete": true,
      "initializeParams": {
        "sourceImage": "https://www.googleapis.com/compute/v1/projects/windows-cloud/global/images/windows-server-2012-r2-dc-v20150511",
        "diskType": "https://www.googleapis.com/compute/v1/projects/project-id/zones/us-central1-f/diskTypes/pd-standard"
      }
    }
  ],
  "canIpForward": false,
  "networkInterfaces": [
    {
      "network": "https://www.googleapis.com/compute/v1/projects/project-id/global/networks/default",
      "accessConfigs": [
        {
          "name": "External NAT",
          "type": "ONE_TO_ONE_NAT"
        }
      ]
    }
  ],
  "description": "rin2qvxkz-e",
  "scheduling": {
    "preemptible": false,
    "onHostMaintenance": "MIGRATE",
    "automaticRestart": true
  }
}
Thanks.
You are using a new Windows image "windows-server-2012-r2-dc-v20150511" with an updated GCEAgent that doesn't look at the gce-initial-windows-user/gce-initial-windows-password instance metadata keys which were used by the old authentication scheme.
Here are explanations of how the new authentication works, starting from the "windows-server-2012-r2-dc-v20150511" image and onwards.
Please note that the initial Windows authentication and GCE API v1 are two separate topics and GCE API v1 has not changed as part of the authentication update.
The earlier answer didn't really explain when this changed. I did more research and found a note in the change log for Google Windows Images.
Metadata items gce-initial-windows-user and gce-initial-windows-password will no longer work for images v20150511 and later
https://cloud.google.com/compute/docs/release-notes-archive#february_2015
June 03, 2015
Updated Windows authentication process. Windows images v20150511 and later will use the new scheme by default. gcloud will now generate a random password for Windows login; it is no longer possible to manually set a Windows password through gcloud but you can set a custom password in the instance.
Here are some links that detail how to add users to Windows images now.
You can use the gcloud command line tool
https://cloud.google.com/sdk/gcloud/reference/compute/reset-windows-password
gcloud compute reset-windows-password INSTANCE_NAME [--user=USER]
[--zone=ZONE] [GCLOUD_WIDE_FLAG …]
You can call the API; they give Go and Python examples.
They also detail a step-by-step manual process, in case you want more details:
https://cloud.google.com/compute/docs/instances/windows/automate-pw-generation
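At a high level, the new scheme works by adding a windows-keys metadata entry that carries the public half of an RSA key you generate; the agent on the instance then creates a fresh password, encrypts it with that key, and writes it to the instance's serial port output for you to read back and decrypt with your private key. A sketch of the metadata item (the user name, email, key material and expiry below are placeholders, not working values):

{
  "key": "windows-keys",
  "value": "{\"userName\": \"example-user\", \"modulus\": \"<base64 RSA modulus>\", \"exponent\": \"<base64 RSA exponent>\", \"email\": \"example-user@example.com\", \"expireOn\": \"2015-06-14T00:00:00Z\"}"
}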