How to Get Visual Studio to Publish an Application to a Service Fabric Cluster Secured by Certificate Common Name Instead of Thumbprint?

I followed the steps documented here to convert my existing ARM template to use the common name setting instead of the thumbprint. The deployment was successful, and I was able to connect to Service Fabric Explorer in my browser after the typical certificate-selection popup. Next, I tried to deploy an application to the cluster just as I had been doing previously. Even though I can see the cluster connection endpoint URI in the VS Publish Service Fabric Application dialog, VS fails to connect to the cluster. Before, I would get a prompt to permit VS to access the local certificate. Does anyone know how to get VS to deploy an application to a Service Fabric cluster set up with a certificate common name?
Extracts from the MS link above:
"virtualMachineProfile": {
"extensionProfile": {
"extensions": [`enter code here`
{
"name": "[concat('ServiceFabricNodeVmExt','_vmNodeType0Name')]",
"properties": {
"type": "ServiceFabricNode",
"autoUpgradeMinorVersion": true,
"protectedSettings": {
"StorageAccountKey1": "[listKeys(resourceId('Microsoft.Storage/storageAccounts', variables('supportLogStorageAccountName')),'2015-05-01-preview').key1]",
"StorageAccountKey2": "[listKeys(resourceId('Microsoft.Storage/storageAccounts', variables('supportLogStorageAccountName')),'2015-05-01-preview').key2]"
},
"publisher": "Microsoft.Azure.ServiceFabric",
"settings": {
"clusterEndpoint": "[reference(parameters('clusterName')).clusterEndpoint]",
"nodeTypeRef": "[variables('vmNodeType0Name')]",
"dataPath": "D:\\SvcFab",
"durabilityLevel": "Bronze",
"enableParallelJobs": true,
"nicPrefixOverride": "[variables('subnet0Prefix')]",
"certificate": {
"commonNames": [
"[parameters('certificateCommonName')]"
],
"x509StoreName": "[parameters('certificateStoreValue')]"
}
},
"typeHandlerVersion": "1.0"
}
},
and
{
"apiVersion": "2018-02-01",
"type": "Microsoft.ServiceFabric/clusters",
"name": "[parameters('clusterName')]",
"location": "[parameters('clusterLocation')]",
"dependsOn": [
"[concat('Microsoft.Storage/storageAccounts/', variables('supportLogStorageAccountName'))]"
],
"properties": {
"addonFeatures": [
"DnsService",
"RepairManager"
],
"certificateCommonNames": {
"commonNames": [
{
"certificateCommonName": "[parameters('certificateCommonName')]",
"certificateIssuerThumbprint": ""
}
],
"x509StoreName": "[parameters('certificateStoreValue')]"
},
...

I found the solution for Visual Studio. I needed to update the PublishProfiles/Cloud.xml file. I replaced ServerCertThumbprint with ServerCommonName, and then used the certificate CN for the new property and for the existing FindValue property. Additionally, I changed the FindType property to FindBySubjectName. I am now able to successfully connect and publish my application to the cluster.
<ClusterConnectionParameters
ConnectionEndpoint="sf-commonnametest-scus.southcentralus.cloudapp.azure.com:19000"
X509Credential="true"
ServerCommonName="sfrpe2eetest.southcentralus.cloudapp.azure.com"
FindType="FindBySubjectName"
FindValue="sfrpe2eetest.southcentralus.cloudapp.azure.com"
StoreLocation="CurrentUser"
StoreName="My" />


Visual Studio templating - conditionally hide parameter

I am attempting to create a custom Visual Studio project template and I have a template.json. What I am trying to achieve is to hide / disable the DoStuff parameter in the Visual Studio create-project wizard if another parameter (in my case, ProjectType) is equal to something specific. It would essentially be something like the Docker OS parameter in the default Visual Studio API template.
In that template, the Docker OS dropdown (in my case, it would be a checkbox) is hidden / disabled by default, but once Enable Docker is checked, it can be selected.
Below is my current template.json file, which I can't seem to get right to support this.
{
"$schema": "http://json.schemastore.org/template",
"symbols": {
"ProjectType": {
"type": "parameter",
"datatype": "choice",
"choices": [
{
"choice": "Console"
},
{
"choice": "API"
}
],
"defaultValue": "API",
"description": "The type of the project you are building."
},
"DoStuff": {
"type": "parameter",
"datatype": "bool",
"defaultValue": "false",
// hide if ProjectType == API
}
}
}
I tried to combine it with ide.host.json to achieve this, but it's not working at all.
{
"$schema": "https://json.schemastore.org/ide.host.json",
"defaultSymbolVisibility": true,
"order": 2,
"icon": "icon.png",
"symbolInfo": [
{
"id": "DoStuff",
"isVisible": "(ProjectType == \"API\")"
}
]
}
After some more investigation and contacting Sayed Ibrahim Hashimi (sayedihashimi) on Twitter, it turns out this is currently not supported and there's likely no way to achieve it right now in .NET 6 and prior releases.
However, as per Chet Husk's (@ChetHusk) template engine upgrades comment (which points to GitHub: dotnet/templating/docs/Conditions.md), templating conditions are a feature being added in .NET 7. The dotnet new CLI already supports it, but the Visual Studio support has not been started yet.
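For reference, the Conditions.md document mentioned above describes conditionally enabled parameters. A minimal sketch of what the .NET 7 template.json syntax looks like, assuming the isEnabled condition property from that doc (treat this as an assumption and verify against the current templating docs; Visual Studio may still ignore it):
"DoStuff": {
"type": "parameter",
"datatype": "bool",
"defaultValue": "false",
"isEnabled": "(ProjectType != \"API\")"
}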

SAP Fiori Launchpad on Cloud Foundry - Role Configuration Issues

We have a range of apps deployed to our Fiori Launchpad (via an mta file) on Cloud Foundry.
I came across this blog that describes setting up role access on an app-by-app basis.
Configuring Roles – SAP Fiori Launchpad Cloudfoundry | SAP Blogs.
Firstly, I set up approuter/xs-app.json as follows. Note this has a single config_admin scope, as opposed to the two (approver and user) in the blog. The reason for this is that we only need a single configurable role at the moment, so I'm making the assumption we only need a single scope.
Does the below snippet look correct? I've used "srv_api" as the destination from the blog, but I'm not sure if it needs to be something else.
{
"authenticationMethod": "route",
"welcomeFile": "/cp.portal",
"routes": [
{
"source": "^/catalog(.*)$",
"target": "/catalog$1",
"destination": "srv_api",
"authenticationType": "xsuaa",
"scope": {
"GET": ["$XSAPPNAME.config_admin"],
"PATCH": ["$XSAPPNAME.config_admin"],
"POST": ["$XSAPPNAME.config_admin"],
"PUT": ["$XSAPPNAME.config_admin"],
"DELETE": ["$XSAPPNAME.config_admin"],
"default": ["$XSAPPNAME.config_admin"]
}
}
],
"logout": {
"logoutEndpoint": "/do/logout"
}
}
Next up, xs-security.json in the project root.
{
"xsappname": "demo",
"tenant-mode": "dedicated",
"description": "Security profile of called application",
"scopes": [
{
"name": "uaa.user",
"description": "UAA"
},
{
"name": "$XSAPPNAME.config_admin",
"description": "UAA configuration admin"
}
],
"role-templates": [
{
"name": "Token_Exchange",
"description": "UAA",
"scope-references": ["uaa.user"]
},
{
"name": "ADMIN_USER",
"description": "UAA ADMIN_USER",
"scope-references": ["uaa.config_admin"]
}
]
}
... and finally the manifest.json of the app I would like to apply the role to:
"sap.platform.cf": { "oAuthScopes": ["$XSAPPNAME.config_admin"] }
The app exists in a Group containing only that app.
When deployed to SAP Cloud Foundry, the Group and app are hidden. Fine, I thought, it just needs the role configured on the BTP side?
In BTP, I set up the role collection with my user and the two roles, ADMIN_USER and Token_Exchange, which were deployed correctly to BTP in the previous step.
However, the app and its Catalog are still hidden from view on the Fiori Launchpad. The only apps that do appear are the ones without the "sap.platform.cf" manifest entry.
Am I approaching this the correct way? Have I missed something?
Or do I need to set up two separate scopes, as in the guide, and include the relevant scope in each and every app?
*Note - I've tried setting up the user without the Token_Exchange role, with the same result.
The answer was a typo in xs-security.json.
It should be: "scope-references": ["$XSAPPNAME.config_admin"]
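For clarity, the corrected ADMIN_USER role template in the xs-security.json above looks like this (only the scope reference changes):
{
"name": "ADMIN_USER",
"description": "UAA ADMIN_USER",
"scope-references": ["$XSAPPNAME.config_admin"]
}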

How to connect internal private DB2 to Cognos Dynamic Dashboard Embedded on IBM Cloud

I'm working on Cognos Dashboard Embedded using the reference from Cognos Dashboard Embedded, but instead of CSV I'm working with JDBC data sources.
I'm trying to connect to the JDBC data source as follows:
"module": {
"xsd": "https://ibm.com/daas/module/1.0/module.xsd",
"source": {
"id": "StringID",
"jdbc": {
"jdbcUrl": "jdbcUrl: `jdbc:db2://DATABASE-HOST:50000/YOURDB`",
"driverClassName": "com.ibm.db2.jcc.DB2Driver",
"schema": "DEFAULTSCHEMA"
},
"user": "user_name",
"password": "password"
},
"table": {
"name": "ROLE",
"description": "description of the table for visual hints ",
"column": [
{
"name": "ID",
"description": "String",
"datatype": "BIGINT",
"nullable": false,
"label": "ID",
"usage": "identifier",
"regularAggregate": "countDistinct",
},
{
"name": "NAME",
"description": "String",
"datatype": "VARCHAR(100)",
"nullable": true,
"label": "Name",
"usage": "identifier",
"regularAggregate": "countDistinct"
}
]
},
"label": "Module Name",
"identifier": "moduleId"
}
Note - my database is hosted on a private network, not on a public IP address.
So when I add the above code to add the data source, the data does not load from my DB.
Even though I specified the correct user and password for the JDBC connection in the code above, when I drag and drop any field from the data source it opens a popup asking me for a user ID and password.
And even after I fill in the user ID and password in that popup, I'm still unable to load the data.
Errors:
1. When any module tries to fetch data, it calls the API
'https://dde-us-south.analytics.ibm.com/daas/v1/data?moduleUrl=%2Fda......'
but in my case this API fails with the error: Status Code: 403 Forbidden.
2. In SignOnDialog.js, at line 98, the call to the saveDataSourceCredential method fails and it says saveDataSourceCredential is not a function.
Expectation:
It should not open a popup asking for a user ID and password, and the data should load directly, just as it does for databases hosted on public IP addresses.
This does not work in general. If you are using any type of functionality hosted outside your network that needs to access an API or data on your private network, there needs to be some communication channel.
That channel could be established by setting up a VPN, by using products like IBM Secure Gateway to create a client / server connection between the IBM Cloud and your Db2 host, or even by setting up a direct link between your company network and the (IBM) cloud.

Update Azure Event Grid function subscription with dead-letter storage

I have successfully created an event trigger on storage blob creation, on a storage account called receivingtestwesteurope under resource group omni-test, which is handled by a function called ValidateMetadata. I created this via the portal GUI. However, I now want to add dead-letter/retry policies, which can only be done via the CLI.
The working trigger is like this:
{
"destination": {
"endpointBaseUrl": "https://omnireceivingprocesstest.azurewebsites.net/admin/extensions/EventGridExtensionConfig",
"endpointType": "WebHook",
"endpointUrl": null
},
"filter": {
"includedEventTypes": [
"Microsoft.Storage.BlobCreated"
],
"isSubjectCaseSensitive": null,
"subjectBeginsWith": "/blobServices/default/containers/snapshots/blobs/",
"subjectEndsWith": ".png"
},
"id": "/subscriptions/fa6409ab-1234-1234-1234-85dd2b3ceab4/resourceGroups/omni-test/providers/Microsoft.Storage/StorageAccounts/receivingtestwesteurope/providers/Microsoft.EventGrid/eventSubscriptions/png",
"labels": [
""
],
"name": "png",
"provisioningState": "Succeeded",
"resourceGroup": "omni-test",
"topic": "/subscriptions/fa6409ab-1234-1234-1234-85dd2b3ceab4/resourceGroups/omni-test/providers/microsoft.storage/storageaccounts/receivingtestwesteurope",
"type": "Microsoft.EventGrid/eventSubscriptions"
}
First I thought I could update the existing event subscription with a dead-letter queue:
az eventgrid event-subscription update --name png --deadletter-endpoint receivingtestwesteurope/blobServices/default/containers/eventgrid
Which returns:
az: error: unrecognized arguments: --deadletter-endpoint
receivingtestwesteurope/blobServices/default/containers/eventgrid
Then I tried via REST Patch:
https://learn.microsoft.com/en-us/rest/api/eventgrid/eventsubscriptions/update
scope: /subscriptions/fa6409ab-1234-1234-1234-85dd2b3ceab4/resourceGroups/omni-test/providers/microsoft.storage/storageaccounts/receivingtestwesteurope
eventSubscriptionName: png
api-version: 2018-05-01-preview
Body:
"deadletterdestination": {
"endpointType": "StorageBlob",
"properties": {
"blobContainerName": "eventgrid",
"resourceId": "/subscriptions/fa6409ab-1234-1234-1234-85dd2b3ceab4/resourceGroups/omni-test/providers/microsoft.storage/storageaccounts/receivingtestwesteurope"
}}
Which returns
"Model state is invalid."
===================
Final working solution:
{
"deadletterdestination": {
"endpointType": "StorageBlob",
"properties": {
"blobContainerName": "eventgrid",
"resourceId": "/subscriptions/fa6409ab-1234-1234-1234-85dd2b3ceab4/resourceGroups/omni-test/providers/microsoft.storage/storageaccounts/receivingtestwesteurope"
}
}
}
Have a look at Manage Event Grid delivery settings, which describes in detail how to turn on dead-lettering. Note that you have to install the eventgrid extension:
az extension add --name eventgrid
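With the extension installed, an update along these lines should attach the dead-letter container, using the resource IDs from the question (a sketch only; the flag that identifies the source resource has varied across CLI versions, so check az eventgrid event-subscription update --help):
az eventgrid event-subscription update --name png \
--source-resource-id "/subscriptions/fa6409ab-1234-1234-1234-85dd2b3ceab4/resourceGroups/omni-test/providers/Microsoft.Storage/storageAccounts/receivingtestwesteurope" \
--deadletter-endpoint "/subscriptions/fa6409ab-1234-1234-1234-85dd2b3ceab4/resourceGroups/omni-test/providers/Microsoft.Storage/storageAccounts/receivingtestwesteurope/blobServices/default/containers/eventgrid"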
You can also use the REST API to update your event subscription for dead-lettering.
Besides that, I have just released my small tool, Azure Event Grid Tester, to help with working against the Azure Event Grid model on a local machine.
Update:
The following is the deadletterdestination property:
"deadletterdestination": {
"endpointType": "StorageBlob",
"properties": {
"blobContainerName": "{containerName}",
"resourceId": "/subscriptions/{subscriptionId}/resourceGroups/{resgroup}/providers/Microsoft.Storage/storageAccounts/{storageAccount}"
}
}
You can use Event Subscriptions - Update (REST API PATCH) with the above property. Note that api-version=2018-05-01-preview must be used.

I couldn't connect to a GCE Windows instance from Remmina RDP

I use the GCE v1 REST API to launch instances; I rarely use the Google Developers Console. I created a Windows VM instance through the REST API and passed the initial Windows username and password in the metadata property. The Windows VM was created successfully, and I was also able to see those credentials in the response, as I sent them while creating the VM. But I couldn't connect to the VM using that username and password. I read the doc about how to reset the password from the Developers Console, and that works fine. But we would like to use the REST APIs for everything, I mean to create/manage GCE resources. So can anyone help me fix this issue?
The image I used to launch the VM is "windows-server-2012-r2-dc-v20150511".
"metadata": {
"items": [
{
"key": "gce-initial-windows-user",
"value": "administrator"
},
{
"key": "gce-initial-windows-password",
"value": "twxsFL3U-/,*"
}
]
}
Note: I created many VMs through the REST API, and all instances have the same issue. When resetting the password from the Developers Console, it works.
The credentials I passed didn't work. I am able to reset them from the Developers Console, but that will not fix my problem, because we have our own system to launch VMs and other services and I'm building a connector for it. Here is the sample request I send from a Node.js script.
Request :
***********
options : {
"host": "www.googleapis.com",
"path": "/compute/v1/projects/project-id/zones/us-central1-f/instances",
"method": "POST",
"headers": {
"Authorization": "Bearer ya29.lQGsX8hwdWKaDDwOFnDIZB49eir-c2TUBqYpaVvir7C430Quy8kIWsL4rXv7qjSVQZJKK5e1BdxNug",
"Content-Type": "application/json charset=utf-8"
}
}
body : {
"name": "rin2qvxkz-e",
"zone": "https://www.googleapis.com/compute/v1/projects/project-id/zones/us-central1-f",
"machineType": "https://www.googleapis.com/compute/v1/projects/project-id/zones/us-central1-f/machineTypes/n1-standard-2",
"metadata": {
"items": [
{
"key": "gce-initial-windows-user",
"value": "administrator"
},
{
"key": "gce-initial-windows-password",
"value": "%1zuV27$.:?*"
}
]
},
"tags": {
"items": [
"default"
]
},
"disks": [
{
"type": "PERSISTENT",
"boot": true,
"mode": "READ_WRITE",
"deviceName": "rin2qvxkz-e",
"autoDelete": true,
"initializeParams": {
"sourceImage": "https://www.googleapis.com/compute/v1/projects/windows-cloud/global/images/windows-server-2012-r2-dc-v20150511",
"diskType": "https://www.googleapis.com/compute/v1/projects/project-id/zones/us-central1-f/diskTypes/pd-standard"
}
}
],
"canIpForward": false,
"networkInterfaces": [
{
"network": "https://www.googleapis.com/compute/v1/projects/project-id/global/networks/default",
"accessConfigs": [
{
"name": "External NAT",
"type": "ONE_TO_ONE_NAT"
}
]
}
],
"description": "rin2qvxkz-e",
"scheduling": {
"preemptible": false,
"onHostMaintenance": "MIGRATE",
"automaticRestart": true
}
}
Thanks.
You are using a new Windows image "windows-server-2012-r2-dc-v20150511" with an updated GCEAgent that doesn't look at the gce-initial-windows-user/gce-initial-windows-password instance metadata keys which were used by the old authentication scheme.
Here is an explanation of how the new authentication works, from the "windows-server-2012-r2-dc-v20150511" image onwards.
Please note that the initial Windows authentication and GCE API v1 are two separate topics and GCE API v1 has not changed as part of the authentication update.
The earlier answer didn't really explain when this changed. I did more research and found a note in the change log for Google Windows Images.
Metadata items gce-initial-windows-user and gce-initial-windows-password will no longer work for images v20150511 and later
https://cloud.google.com/compute/docs/release-notes-archive#february_2015
June 03, 2015
Updated Windows authentication process. Windows images v20150511 and later will use the new scheme by default. gcloud will now generate a random password for Windows login; it is no longer possible to manually set a Windows password through gcloud but you can set a custom password in the instance.
Here are some links that detail how to add users to Windows images now.
You can use the gcloud command line tool
https://cloud.google.com/sdk/gcloud/reference/compute/reset-windows-password
gcloud compute reset-windows-password INSTANCE_NAME [--user=USER]
[--zone=ZONE] [GCLOUD_WIDE_FLAG …]
You can call the API; they give Go and Python examples.
They also detail a step-by-step manual process, in case you want more details:
https://cloud.google.com/compute/docs/instances/windows/automate-pw-generation
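As a rough sketch of what that automated flow expects (field names per my reading of that doc, so treat them as assumptions and verify there): you generate an RSA key pair, add a JSON entry like the one below to the instance's windows-keys metadata, and the agent returns the encrypted password via the serial port output.
{
"userName": "administrator",
"modulus": "<base64-encoded RSA public key modulus>",
"exponent": "<base64-encoded RSA public key exponent>",
"email": "user@example.com",
"expireOn": "2015-06-15T00:00:00Z"
}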
