How to use Polkadot-js API with Substrate Node Template? - substrate

In the Substrate ecosystem, it is common to begin writing a new blockchain node by forking the Substrate Node Template. There are a few options for user interfaces, such as Apps and the front-end-template, which are based on the same underlying Polkadot-JS API.
Some versions of the API work with some versions of the node template without any custom configuration, but in general the API must be supplied with information about which types the node uses. The process of supplying types is documented at https://polkadot.js.org/api/start/types.extend.html#impact-on-extrinsics, but which types do I need to supply?

There was a type-incompatible change introduced to the Substrate node template here on March 10th 2020. I'll use the terms "old" and "new" to refer to before and after this date.
Using the API Directly
If you're using a new node template with the polkadot-js API, you will need to use the following types, as documented here:
{
  "Address": "AccountId",
  "LookupSource": "AccountId"
}
Using a Front End Package
The front ends mentioned in the question have both been updated in an attempt to ease the life of users: the Apps UI here and the front-end-template here. However, if you're trying to use an old node template with a new front end or vice versa, you will need to do some custom type injection.
Old node template, Old front end
No custom types necessary
New node template, New front end
No custom types necessary
Old node template, New front end
{
  "Address": "GenericAddress",
  "LookupSource": "Address"
}
New node template, Old front end
{
  "Address": "AccountId",
  "LookupSource": "AccountId"
}
How to Inject Types
In Apps
Go to the Settings tab on the left and the Developer tab on the top. Paste the types in.
In Front End Template
Modify this file https://github.com/substrate-developer-hub/substrate-front-end-template/blob/dff9783e29123f49a19cbc43f5df7ae010c92775/src/config/common.json#L4
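If you are instantiating the API yourself rather than using one of the front ends, the same types can be passed directly to `ApiPromise.create`. A minimal sketch for a new node template, assuming `@polkadot/api` is installed and a node is listening on the default WebSocket port (both are assumptions):

```javascript
// Custom types for a *new* node template (an old template needs none).
const customTypes = {
  Address: 'AccountId',
  LookupSource: 'AccountId',
};

// Hypothetical helper: connects to a local node and injects the types.
// Assumes @polkadot/api is installed and ws://127.0.0.1:9944 is reachable.
async function connectToNode() {
  const { ApiPromise, WsProvider } = require('@polkadot/api');
  return ApiPromise.create({
    provider: new WsProvider('ws://127.0.0.1:9944'),
    types: customTypes,
  });
}
```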

Related

Questions about configuring the [#slack/bolt] package for Redistribution to make it available to multiple workspaces

Could anyone point me to documentation on whether Bolt [#slack/bolt] can be configured for use with multiple workspaces? I am currently using the Slack Bolt NPM package SDK to develop a Slack app, and I have it working for a single workspace, but now I want to make it available for multiple workspaces. I currently have Redistribution enabled in the API settings, but I don't understand what needs to be done inside the app to enable it to pick up events triggered across multiple workspaces.
I only seem to be able to make it work with 1 single workspace at a time.
I have scoured all available sources but can't find anything that can point me to the right direction. I feel like there is something that I am not understanding or if it isn't a possibility with Bolt...
Development environment (NodeJS), working with the [Slack Bolt SDK from Slack]
Do things change in how you configure the client for a redistributable app (no longer passing in the signingSecret and token), and does it have to be initialized a different way? I feel like I am missing something. Can anyone point me to documentation on how to make the app functional for multiple workspaces with the Bolt framework (if it is possible), or to an existing project that implements this as an example?
Thank you, I appreciate any help/feedback.
Does that require that new individual instances of the app be set up and running for each workspace (one app can't handle all workspaces)? Also, does that mean that a new app has to be set up for each workspace so it can be configured with where to point its event handlers? I was initially thinking that a single app could be the central app to handle multiple workspaces, and that additional workspaces would be able to add the app from the 'Add Slack App' button in the Redistribution settings of the single app.
I do currently have it set up to pull in and store the additional workspaces' IDs and tokens and initialize them through the app const values (but I have to set up each one as its own individual running app per workspace). The big issue I am having is that the redistribution settings for the API only seem to let you specify one app endpoint to direct events to, but each app is set up as its own app in this pattern (so I would need one for each).
If you build using the OAuth examples here, then it will work across multiple workspaces using a single instance of the app.
Instantiating the app would look like this in your code -
const { App } = require('@slack/bolt');

const app = new App({
  signingSecret: process.env.SLACK_SIGNING_SECRET,
  clientId: process.env.SLACK_CLIENT_ID,
  clientSecret: process.env.SLACK_CLIENT_SECRET,
  stateSecret: 'my-state-secret',
  scopes: ['channels:read', 'groups:read', 'channels:manage', 'chat:write', 'incoming-webhook'],
  installationStore: {
    storeInstallation: async (installation) => {
      // change the line below so it saves to your database
      return await database.set(installation.team.id, installation);
    },
    fetchInstallation: async (installQuery) => {
      // change the line below so it fetches from your database
      return await database.get(installQuery.teamId);
    },
    storeOrgInstallation: async (installation) => {
      // include this method if you want your app to support org-wide installations
      // change the line below so it saves to your database
      return await database.set(installation.enterprise.id, installation);
    },
    fetchOrgInstallation: async (installQuery) => {
      // include this method if you want your app to support org-wide installations
      // change the line below so it fetches from your database
      return await database.get(installQuery.enterpriseId);
    },
  },
});
One App ID === One instance, which can be installed on multiple workspaces once you activate public distribution.

Clear explanation of Dialog implementation on bot framework V4

Is there someone who can explain how to code dialog from bot framework properly?
I'm trying to code from scratch using the empty bot template, so I can understand every piece of code and why and how the pieces fit together. But even after reading many times, I don't understand how dialogs are properly implemented in Microsoft Bot Framework. I've read Microsoft's documentation many times, across many versions, but still can't comprehend how every piece of code links together. Even the blogs and websites I found did not explain why particular pieces of code are needed; they just ask you to add this and that. I understand the concept but not the mechanics. The code spans Startup.cs, yourMainBotLogic.cs, dialogClassName.cs, and BotAccessors.cs, which leaves me confused about the order in which the program runs and how.
Please explain in detail why the pieces of code/components are needed, what use they have, and why they have to be in particular files (e.g. Startup.cs). For example:
var accessors = new BotAccessors(conversationState) { ConversationDialogState = conversationState.CreateProperty<DialogState>("DialogState"), }; return accessors;
This creates an accessor for the DialogState and then returns it. This is just an example, and my description of the code might not be right.
Your question about how everything fits together is a bit broad, but I will attempt some explanation:
startup.cs: bot configuration should be loaded here, and singletons created, including IStatePropertyAccessors. Many of the samples contain a BotConfig file with bot-specific setup code and call it from Startup. Many samples also contain a bot file. Bot files can make loading some bot services easier, but they aren't necessary. Ids and passwords can still be retrieved from App Settings or web.config, and your code can create the services.
Some things usually initialized in startup are:
ICredentialProvider is used by the SDK to create the BotAdapter and provide JWT token auth. For single appid/password bots, the SDK provides a SimpleCredentialProvider. If your bot is using the integration libraries, you can create one during the IBot initialization, or just supply the botConfig with appid/pass:
webapi:
public static void Register(HttpConfiguration config)
{
    config.MapBotFramework(botConfig =>
    {
        var appId = ConfigurationManager.AppSettings[MicrosoftAppCredentials.MicrosoftAppIdKey];
        var pass = ConfigurationManager.AppSettings[MicrosoftAppCredentials.MicrosoftAppPasswordKey];
        botConfig.UseMicrosoftApplicationIdentity(appId, pass);
    });
}
netcore:
public void ConfigureServices(IServiceCollection services)
{
    services.AddBot(options =>
    {
        options.CredentialProvider = new SimpleCredentialProvider(appId, appPassword);
    });
}
IStorage is an implementation for interacting with a state store. The SDK provides MemoryStorage, CosmosDbStorage, and AzureBlobStorage. Each of these just uses the JsonSerializer to store and retrieve objects from the underlying storage.
BotState are objects that provide keys into the IStorage implementation. The SDK provides three examples:
ConversationState scoped by {channelId}/conversations/{conversationId}
UserState scoped by {channelId}/users/{userId}
PrivateConversationState scoped by {channelId}/conversations/{conversationId}/users/{userId}
IStatePropertyAccessors are an implementation layer providing typed access into the scoped BotState explained above. When a get/set is performed, the actual state store is queried and persisted (through an internal cache provided by the SDK).
BotAccessors.cs is just a container to hold the state classes and IStatePropertyAccessors. This isn't needed, but is for convenience.
yourMainBotLogic.cs: this is where the adapter's OnTurn implementation exists, and should load the dialog stack and process the user's message. The dialog stack is managed by a dialog set that contains an IStatePropertyAccessor of DialogData type. When a get is performed on this property accessor by calling create context, the state store is queried to fill the dialog stack of the DialogContext.
dialogClassName.cs is a dialog implementation. The specific dialog types are delineated here: Dialog types Examples of how to use them are in the samples on github and documentation.
As with other ASP.NET applications, startup is run when the application first loads (see aspnet-web-api-poster or lifecycle-of-an-aspnet-mvc-5-application). Note that Microsoft.Bot.Builder.Integration.AspNet.Core uses an IApplicationBuilder extension to add the message handler to the request pipeline (ApplicationBuilderExtensions), while Microsoft.Bot.Builder.Integration.AspNet.WebApi uses an HttpMessageHandler implementation. However, you can choose not to use the integration libraries and create your own controllers, like this sample: MVC-Bot
V4 additional references
Implement sequential conversation flow
Create advanced conversation flow using branches and loops
Gather user input using a dialog prompt
Managing state
Save user and conversation data
Implement custom storage for your bot

In strapi what are api templates for?

I've started playing with strapi to see what it can do.
When generating an api through strapi studio, it generates a set of base files to handle the model and api calls.
In the entity folder (e.g. article), there's a templates/default folder created with a default template. For an article entity, I get an ArticleDefault.template.json file with this:
{
  "default": {
    "attributes": {
      "title": {},
      "content": {}
    },
    "displayedAttribute": "title"
  }
}
In Strapi Studio I can also add additional templates for each entity, giving it multiple templates.
The command line api generator does not create the templates folder.
I couldn't find anything about it in the documentation I read.
What are the generated templates for?
When would I use them, and how would I choose a particular template if I have multiple?
I'm one of the authors of Strapi.
A template is like a schema of data. Let's take a simple example: you have an API called Post. Sometimes your posts have a title and a content attribute, but other times your posts have a title, a subtitle, a cover, and a content attribute. In both cases we're talking about the same API, Post, but your schema of data is different. That's why we implemented templates! Your needs could be different for the same content.
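As an illustration of that Post example (the attribute names here are hypothetical, following the same shape as the generated ArticleDefault.template.json above), a second, richer template might look like:

```json
{
  "extended": {
    "attributes": {
      "title": {},
      "subtitle": {},
      "cover": {},
      "content": {}
    },
    "displayedAttribute": "title"
  }
}
```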
Then, as you said, the CLI doesn't generate a templates folder in the project. The Studio doesn't use the same generator as the CLI, but the behavior of your API is the same.

Example for the RAML file in Anypoint studio

I am currently following this tutorial: https://docs.mulesoft.com/anypoint-platform-for-apis/creating-an-apikit-project-with-maven but I have a problem creating the RAML file. I don't know how to do this, and I have to take the information from these two APIs:
• http://www.programmableweb.com/api/wikipedia
• http://www.programmableweb.com/api/weather-channel
#%RAML 0.8
title: Title
version: 1.0
baseUri: http://server/api/
schemas:
  - Countries: |
      {
        "$schema": "which link",
        "type" : "",
        "properties" : {
        }
      }
Are schemas what I need to use?
The final goal is to create an API giving some information about cities and countries. In order to do that, I need to communicate with some other API providers (the two links above) to obtain information and craft the JSON response to return the required information.
The RAML file is the contract of the RESTful API you want to expose. So, first you need to understand how to write RAML:
http://raml.org/
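As a rough sketch of where to start (the resources and descriptions here are made up for illustration, not taken from the tutorial), a RAML contract for the cities/countries API could begin like this:

```raml
#%RAML 0.8
title: City and Country Information API
version: 1.0
baseUri: http://server/api/
/countries:
  get:
    description: List countries, aggregated from third-party APIs
    responses:
      200:
        body:
          application/json:
/cities/{cityName}:
  get:
    description: Weather and background information for a single city
    responses:
      200:
        body:
          application/json:
```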
Then you can use the APIKit component in Anypoint Studio, which generates the flows based on your RAML.
https://docs.mulesoft.com/anypoint-platform-for-apis/apikit-tutorial
After that you will want to connect to third party APIs like wikipedia or weather channel. For that you can use the HTTP Request Connector if those APIs are REST.
https://docs.mulesoft.com/mule-user-guide/v/3.7/http-request-connector
If those APIs are SOAP based, you have to use the Web Service Consumer component, which automatically infers the content of the WSDL and lets you choose which method to invoke and set the necessary parameters.
https://docs.mulesoft.com/mule-user-guide/v/3.7/web-service-consumer
To do the transformations between your received data and the third-party APIs' data, you should use DataWeave:
https://docs.mulesoft.com/mule-user-guide/v/3.7/dataweave
I also recommend the walkthrough tutorials, for designing, building and deploying a new API.
https://docs.mulesoft.com/anypoint-platform-for-apis/anypoint-platform-for-apis-walkthrough

Programmatically setting instance name with the OpenStack Nova API

I have resigned myself to the fact that many of the features that EC2 users are accustomed to (in particular, tagging) do not exist in OpenStack. There is, however, one piece of functionality whose absence is driving me crazy.
Although OpenStack doesn't have full support for instance tags (like EC2 does), it does have the notion of an instance name. This name is exposed by the Web UI, which even allows you to set it:
This name is also exposed through the nova list command line utility.
However (and this is my problem) this field is not exposed through the nova-ec2 API layer. The cleanest way for them to integrate this with existing EC2 platform tools would be to simulate an instance Tag with name "Name", but they don't do this. What's more, I can't figure out which Nova API endpoint I can use to read and write the name (it doesn't seem to be documented on the API reference); but of course it must be somehow possible since the web client and nova-client can both somehow do it.
At the moment, I'm forced to change it manually from the website every time I launch a new instance. (I can't do it during instance creation because I use the nova-ec2 API, not the nova command line client).
My question is:
Is there a way to read/write the instance name through the EC2 API layer?
Failing that, what is the REST endpoint to set it programmatically?
(BONUS): What is Nova's progress on supporting general instance tagging?
The Python novaclient.v1_1 package has a method on the server object:
def update(self, server, name=None):
    """
    Update the name or the password for a server.

    :param server: The :class:`Server` (or its ID) to update.
    :param name: Update the server's name.
    """
    if name is None:
        return

    body = {
        "server": {
            "name": name,
        },
    }
    self._update("/servers/%s" % base.getid(server), body)
This indicates that you can update the name of a server by sending a PUT request with
the following JSON to http://nova-api:port/v2.0/servers/{server-id}:
{
  "server": {
    "name": "new_name"
  }
}
Of course, all of the usual authentication headers (namely X-Auth-Token
from your Keystone server) are still required, so it is probably easier to
use a client library for whatever language you prefer to manage all that.
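If you do want to hit the REST endpoint directly rather than use a client library, the request can be sketched in Node.js like this (the endpoint URL, server ID, and token are placeholders you must supply; actually sending it uses the built-in fetch of Node 18+):

```javascript
// Build (but do not send) the PUT request that renames a server.
// novaUrl, serverId, and authToken are hypothetical values from your deployment.
function buildRenameRequest(novaUrl, serverId, newName, authToken) {
  return {
    url: `${novaUrl}/servers/${serverId}`,
    options: {
      method: 'PUT',
      headers: {
        'Content-Type': 'application/json',
        'X-Auth-Token': authToken,
      },
      body: JSON.stringify({ server: { name: newName } }),
    },
  };
}

// Sending it: const res = await fetch(req.url, req.options);
```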