Elsa workflow designer errors - elsa-workflows

I have completed the tutorial below to configure a working Elsa server:
Part 2 of Building Workflow Driven .NET Applications with Elsa 2
I made modifications to run it with docker-compose along with the dependent services.
Everything works as expected except the IntelliSense in the designer window.
I've noticed a couple of errors in the browser console.
This is my Startup class:
public class Startup
{
    public Startup(IConfiguration configuration, IWebHostEnvironment environment)
    {
        Configuration = configuration;
        Environment = environment;
    }

    private IConfiguration Configuration { get; }
    private IWebHostEnvironment Environment { get; }

    public void ConfigureServices(IServiceCollection services)
    {
        var dbConnectionString = Configuration.GetConnectionString("Sqlite");

        // Razor Pages (for UI).
        services.AddRazorPages();

        // Hangfire (for background tasks).
        AddHangfire(services, dbConnectionString);

        // Elsa (workflows engine).
        AddWorkflowServices(services, dbConnectionString);

        // Allow arbitrary client browser apps to access the API for demo purposes only.
        // In a production environment, make sure to allow only origins you trust.
        services.AddCors(cors => cors.AddDefaultPolicy(policy => policy
            .AllowAnyHeader()
            .AllowAnyMethod()
            .AllowAnyOrigin()
            .WithExposedHeaders("Content-Disposition")));
    }

    public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
    {
        if (env.IsDevelopment())
        {
            app.UseDeveloperExceptionPage();
        }
        else
        {
            app.UseExceptionHandler("/Error");
            app.UseHsts();
        }

        app
            .UseStaticFiles()
            .UseCors(cors => cors
                .AllowAnyHeader()
                .AllowAnyMethod()
                .SetIsOriginAllowed(_ => true)
                .AllowCredentials())
            .UseRouting()
            .UseHttpActivities() // Install middleware for triggering HTTP Endpoint activities.
            .UseEndpoints(endpoints =>
            {
                endpoints.MapRazorPages();
                endpoints.MapControllers(); // Elsa API Endpoints are implemented as ASP.NET API controllers.
            });
    }

    private void AddHangfire(IServiceCollection services, string dbConnectionString)
    {
        services
            .AddHangfire(config => config
                // Use same SQLite database as Elsa for storing jobs.
                .UseSQLiteStorage(dbConnectionString)
                .UseSimpleAssemblyNameTypeSerializer()
                // Elsa uses NodaTime primitives, so Hangfire needs to be able to serialize them.
                .UseRecommendedSerializerSettings(settings => settings.ConfigureForNodaTime(DateTimeZoneProviders.Tzdb)))
            .AddHangfireServer((sp, options) =>
            {
                // Bind settings from configuration.
                Configuration.GetSection("Hangfire").Bind(options);

                // Configure queues for Elsa workflow dispatchers.
                options.ConfigureForElsaDispatchers(sp);
            });
    }

    private void AddWorkflowServices(IServiceCollection services, string dbConnectionString)
    {
        services.AddWorkflowServices(dbContext => dbContext.UseSqlite(dbConnectionString));

        // Configure SMTP.
        services.Configure<SmtpOptions>(options => Configuration.GetSection("Elsa:Smtp").Bind(options));

        // Configure HTTP activities.
        services.Configure<HttpActivityOptions>(options => Configuration.GetSection("Elsa:Server").Bind(options));

        // Elsa API (to allow Elsa Dashboard to connect for checking workflow instances).
        services.AddElsaApiEndpoints();
    }
}
This is my docker-compose file:
version: '3.4'

services:
  workflow.web:
    image: ${DOCKER_REGISTRY-}workflowweb
    ports:
      - "5000:80"
      - "5001:443"
    build:
      context: .
      dockerfile: src/Workflow.Web/Dockerfile
    networks:
      - testnet

  email.service:
    image: rnwood/smtp4dev:linux-amd64-3.1.0-ci0856
    ports:
      - "3000:80"
      - "2525:25"
    networks:
      - testnet

  elsa.dashboard:
    image: elsaworkflows/elsa-dashboard:latest
    ports:
      - "14000:80"
    environment:
      ELSA__SERVER__BASEADDRESS: "http://localhost:5000"
    networks:
      - testnet

networks:
  testnet:
    driver: bridge
Any ideas?

Most likely the issue is that the docker image for the dashboard is not compatible with the workflow server hosted by your application.
The cause of this mismatch is that the blog post references Elsa 2.3 NuGet packages, while the dashboard Docker image is built from the latest source code in the master branch (something that should be fixed to avoid confusion like you're experiencing).
To make the dashboard (which is built against the latest source code) work, you need to update your workflow server app to reference the latest Elsa preview packages from MyGet, which are also built against the latest source code from the master branch.
The following documentation describes how to reference the MyGet feed: https://elsa-workflows.github.io/elsa-core/docs/next/installation/installing-feeds#myget
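For example, a NuGet.config next to your solution can register that feed. A minimal sketch, assuming the feed URL from the linked documentation (verify it there):

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <add key="nuget.org" value="https://api.nuget.org/v3/index.json" />
    <!-- Elsa preview feed; confirm the exact URL against the docs linked above. -->
    <add key="elsa-myget" value="https://www.myget.org/F/elsa/api/v3/index.json" />
  </packageSources>
</configuration>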


An error occurred applying migrations, try applying them from the command line

I am on Visual Studio 2019 for Mac, running a Blazor Server app with .NET Core 3.1 and Individual Authentication (in-app) turned on.
When I go to register and enter a new user's details, I am presented with an error when clicking the Apply Migrations button.
In appsettings.json I have the following set:
{
  "ConnectionStrings": {
    "DefaultConnection": "Server=localhost;Database=Test; user=SA; password=P#55word; Trusted_Connection=False;"
  },
  "Logging": {
    "LogLevel": {
      "Default": "Information",
      "Microsoft": "Warning",
      "Microsoft.Hosting.Lifetime": "Information"
    }
  },
  "AllowedHosts": "*"
}
Startup.cs
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Components.Authorization;
using Microsoft.AspNetCore.Identity;
using Microsoft.AspNetCore.Hosting;
using Microsoft.EntityFrameworkCore;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;
using CMUI.Areas.Identity;
using CMUI.Data;

namespace CMUI
{
    public class Startup
    {
        public Startup(IConfiguration configuration)
        {
            Configuration = configuration;
        }

        public IConfiguration Configuration { get; }

        // This method gets called by the runtime. Use this method to add services to the container.
        // For more information on how to configure your application, visit https://go.microsoft.com/fwlink/?LinkID=398940
        public void ConfigureServices(IServiceCollection services)
        {
            services.AddDbContext<ApplicationDbContext>(options =>
                options.UseSqlServer(
                    Configuration.GetConnectionString("DefaultConnection")));
            services.AddDefaultIdentity<IdentityUser>(options => options.SignIn.RequireConfirmedAccount = false)
                .AddEntityFrameworkStores<ApplicationDbContext>();
            services.AddRazorPages();
            services.AddServerSideBlazor();
            services.AddScoped<AuthenticationStateProvider, RevalidatingIdentityAuthenticationStateProvider<IdentityUser>>();
            services.AddSingleton<WeatherForecastService>();
        }

        // This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
        public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
        {
            if (env.IsDevelopment())
            {
                app.UseDeveloperExceptionPage();
                app.UseDatabaseErrorPage();
            }
            else
            {
                app.UseExceptionHandler("/Error");
                // The default HSTS value is 30 days. You may want to change this for production scenarios, see https://aka.ms/aspnetcore-hsts.
                app.UseHsts();
            }

            app.UseHttpsRedirection();
            app.UseStaticFiles();
            app.UseRouting();
            app.UseAuthentication();
            app.UseAuthorization();
            app.UseEndpoints(endpoints =>
            {
                endpoints.MapControllers();
                endpoints.MapBlazorHub();
                endpoints.MapFallbackToPage("/_Host");
            });
        }
    }
}
The SQL Server instance I am running is MSSQL 2019 in Docker, started with the following command:
docker run -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=P#55word' -p 1433:1433 -d --name=mssqlserver2019 mcr.microsoft.com/mssql/server:2019-latest
The database is working okay, as I can perform CRUD actions via a Web API in another solution using the same connection string. Not sure if this is a Mac thing or if I have missed something silly.
Thanks.
You can try using the command line: navigate to the root of the project that connects to that DB, then run dotnet ef database update, which should run that migration and build your Identity tables. Then fire the app up again, and as long as it's connecting (which it looks like you are) you should be able to register users.
Further reading on migrations is in the EF Core documentation; you may need to install the command-line tools mentioned there.
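A minimal sketch of those commands; the global tool install is only needed once per machine:

# Install the EF Core command-line tools (skip if already installed).
dotnet tool install --global dotnet-ef

# From the root of the project that owns the DbContext, apply pending migrations.
dotnet ef database update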
I'm not familiar with VS for macOS, but in the Windows version you can go to the Package Manager Console, make sure the default project in the console is set to your DB access project, and then run the command update-database. This might work for you as well.

Examples of integrating moleculer-io with moleculer-web using moleculer-runner instead of ServiceBroker?

I am having fun using moleculer-runner instead of creating a ServiceBroker instance in a moleculer-web project I am working on. The runner simplifies setting up services for moleculer-web, and all the services - including the api.service.js file - look and behave the same, using a module.exports = { blah } format.
I can cleanly define the REST endpoints in the api.service.js file and create the connected functions in the appropriate service files. For example, aliases: { 'GET sensors': 'sensors.list' } points to the list() action/function in sensors.service.js. It all works great using some dummy data in an array.
The next step is to get the service(s) to open up a socket and talk to a local program listening on an internal set address/port. The idea is to accept a REST call from the web, talk to a local program over a socket to get some data, then format and return the data back via REST to the client.
BUT: when I want to use sockets with moleculer, I'm having trouble finding useful info and examples on integrating moleculer-io with a moleculer-runner-based setup. All the examples I find use the ServiceBroker model. I thought my Google-fu was pretty good, but I'm at a loss as to where to look next. Or can I modify the ServiceBroker examples to work with moleculer-runner? Any insight or input is welcome.
If you want the following chain:
localhost:3000/sensor/list -> sensor.list() -> send message to local program:8071 -> get response -> send response as return message to the REST caller.
Then you need to add a socket.io client to your sensor service (which has the list() action). Adding a client will allow it to communicate with the "outside world" via sockets.
Check the setup below; I think it has everything that you need.
As a skeleton I've used moleculer-demo project.
What I have:
API service api.service.js, which handles the HTTP requests and passes them to sensor.service.js.
The sensor.service.js service is responsible for communicating with the remote socket.io server, so it needs to have a socket.io client. When the sensor.service.js service has started (in its started() lifecycle hook), I establish a connection with the remote server located at port 8071. After this I can use this connection in my service actions to communicate with the socket.io server. This is exactly what I'm doing in the sensor.list action, sketched below.
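A minimal sketch of what that can look like; the port (8071), the event name ("getSensors"), and the (err, data) acknowledgement convention are assumptions, since the wire protocol of the local program isn't specified:

// sensor.service.js
const io = require("socket.io-client");

module.exports = {
    name: "sensor",

    started() {
        // Open the connection to the local program when the service starts.
        this.socket = io("http://localhost:8071");
    },

    stopped() {
        // Close the connection when the service stops.
        if (this.socket) this.socket.disconnect();
    },

    actions: {
        list() {
            // Ask the local program for data and return its response to the REST caller.
            return new Promise((resolve, reject) => {
                this.socket.emit("getSensors", (err, data) => {
                    if (err) return reject(err);
                    resolve(data);
                });
            });
        }
    }
};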
I've also created remote-server.service.js to mock your socket.io server. Despite being a moleculer service itself, sensor.service.js communicates with it via the socket.io protocol.
It doesn't matter whether or not your services use socket.io: all the services are declared in the same way, i.e., module.exports = {}.
Below is a working example with socket.io.
const { ServiceBroker } = require("moleculer");
const ApiGateway = require("moleculer-web");
const SocketIOService = require("moleculer-io");
const io = require("socket.io-client");

const IOService = {
    name: "api",
    // SocketIOService should be after moleculer-web
    // Load the HTTP API Gateway to be able to reach "greeter" action via:
    // http://localhost:3000/hello/greeter
    mixins: [ApiGateway, SocketIOService]
};

const HelloService = {
    name: "hello",
    actions: {
        greeter() {
            return "Hello Via Socket";
        }
    }
};

const broker = new ServiceBroker();

broker.createService(IOService);
broker.createService(HelloService);

broker.start().then(async () => {
    const socket = io("http://localhost:3000", {
        reconnectionDelay: 300,
        reconnectionDelayMax: 300
    });

    socket.on("connect", () => {
        console.log("Connection with the Gateway established");
    });

    socket.emit("call", "hello.greeter", (error, res) => {
        console.log(res);
    });
});
To make it work with moleculer-runner, just copy the service declarations into their own my-service.service.js files. For example, your api.service.js could look like:
// api.service.js
const ApiGateway = require("moleculer-web");
const SocketIOService = require("moleculer-io");

module.exports = {
    name: "api",
    // SocketIOService should be after moleculer-web
    // Load the HTTP API Gateway to be able to reach "greeter" action via:
    // http://localhost:3000/hello/greeter
    mixins: [ApiGateway, SocketIOService]
};
and your greeter service:
// greeter.service.js
module.exports = {
    name: "hello",
    actions: {
        greeter() {
            return "Hello Via Socket";
        }
    }
};
Then run npm run dev or moleculer-runner --repl --hot services.

Auto-reload of the gateway for schema changes in a federated Apollo GraphQL service

In Apollo Federation, I am facing this problem:
The gateway needs to be restarted every time we make a change in the schema of any federated service in the service list.
I understand that every time the gateway starts, it collects all the schemas and composes the data graph. But is there a way this can be handled automatically, without restarting the gateway? A restart also brings down all the other, unaffected federated GraphQL services.
There is an experimental poll interval you can use:
const gateway = new ApolloGateway({
    serviceList: [
        { name: "products", url: "http://localhost:4002" },
        { name: "inventory", url: "http://localhost:4001" },
        { name: "accounts", url: "http://localhost:4000" }
    ],
    debug: true,
    experimental_pollInterval: 3000
});
The code above will poll every 3 seconds.
I don't know of other ways to automatically reload the gateway besides polling.
I made a reusable Docker image and I will keep updating it if new ways to reload the service emerge. For now you can use the POLL_INTERVAL env var to periodically check for changes.
Here is an example using docker-compose:
version: '3'
services:
  a:
    build: ./a # one service implementing federation
  b:
    build: ./b
  gateway:
    image: xmorse/apollo-federation-gateway
    ports:
      - "8000:80"
    environment:
      CACHE_MAX_AGE: '5' # seconds
      POLL_INTERVAL: '30' # seconds
      URL_0: "http://a"
      URL_1: "http://b"
You can use Express to refresh your gateway's schema. ApolloGateway has a load() function that goes out and fetches the schemas from the implementing services. This HTTP call could potentially be part of a deployment process if something automatic is needed. I wouldn't go with polling or anything too automatic: once the implementing services are deployed, the schema is not going to change until it's updated and deployed again.
import { ApolloGateway } from '@apollo/gateway';
import { ApolloServer } from 'apollo-server-express';
import express from 'express';

const gateway = new ApolloGateway({ ...config });
const server = new ApolloServer({ gateway, subscriptions: false });
const app = express();

app.post('/refreshGateway', (request, response) => {
    gateway.load();
    response.sendStatus(200);
});

server.applyMiddleware({ app, path: '/' });
app.listen();
Update: The load() function now checks for phase === 'initialized' before reloading the schema. A workaround might be to use gateway.loadDynamic(false), or possibly to change gateway.state.phase = 'initialized'. I'd recommend loadDynamic(), because changing state might cause issues down the road. I have not tested either of those solutions, since I'm not working with Apollo Federation at the time of this update.
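If you take the loadDynamic() route, the refresh endpoint from the example above would become something like the following sketch. It is untested, per the caveat above, and the false argument simply mirrors the loadDynamic(false) call mentioned there:

app.post('/refreshGateway', async (request, response) => {
    // loadDynamic() replaces the load() call; untested, see the caveat above.
    await gateway.loadDynamic(false);
    response.sendStatus(200);
});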

How to get Visual Studio to launch correct url when using docker-compose with https

I've created a .NET Core 2.0 project and configured it to run over HTTPS; however, I cannot get Visual Studio to launch the browser with the correct scheme/port when running in Docker debug mode.
The current behaviour is that VS always launches on port 80 (HTTP), so I have to manually change the URL each time, which is cumbersome.
Program.cs
public class Program
{
    public static void Main(string[] args)
    {
        BuildWebHost(args).Run();
    }

    public static IWebHost BuildWebHost(string[] args) =>
        WebHost.CreateDefaultBuilder(args)
            .UseKestrel(options =>
            {
                options.Listen(IPAddress.Any, GetPort(), listenOptions =>
                {
                    // todo: Change this for production
                    listenOptions.UseHttps("dev-cert.pfx", "idsrv3test");
                });
            })
            .UseStartup<Startup>()
            .Build();

    public static int GetPort() => int.Parse(Environment.GetEnvironmentVariable("Port") ?? "443");
}
Dockerfile
FROM microsoft/aspnetcore:2.0
ARG source
WORKDIR /app
EXPOSE 443
COPY ${source:-obj/Docker/publish} .
ENTRYPOINT ["dotnet", "MyApp.dll"]
docker-compose.override.yml
version: '3'
services:
  myapp:
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
      - Port=443
    ports:
      - "443"
networks:
  default:
    external:
      name: nat
Okay, I have found out how to solve this myself.
Right-click the docker-compose project and go to Properties.
There you can configure the Service URL that gets launched on run.
For anyone looking for a code solution: what causes the random port is this line in docker-compose.override.yml:
ports:
  - "80"
Just remove it and add your own port mapping, for example:
ports:
  - "8080:80"

Configuring catalog items through manifest.yml file

Using spring-cloud-cloudfoundry-service-broker, we developed a service broker.
Initially we defined the catalog items within the application.yml file, which gets bundled inside the jar, and this all works great.
Instead of bundling the catalog items within the jar file, we thought of supplying them through the manifest.yml file while pushing the service to Cloud Foundry.
But unfortunately the application is not getting the catalog items specified in the manifest.yml file. Could you please let us know how we can supply catalog items through the manifest.yml file?
I have copied my code snippet here.
CatalogConfig.java
@ConfigurationProperties(prefix = "catalog")
@Component
public class CatalogConfig {

    private List<ServiceDefinitionProxy> services;

    public CatalogConfig() {
        super();
    }

    @Bean
    Catalog catalog() {
        return new Catalog(services.stream().map(s -> s.unproxy())
                .collect(Collectors.toList()));
    }

    public CatalogConfig(List<ServiceDefinitionProxy> services) {
        super();
        this.services = services;
    }

    public List<ServiceDefinitionProxy> getServices() {
        return services;
    }

    public void setServices(List<ServiceDefinitionProxy> services) {
        this.services = services;
    }

    public ServiceDefinitionProxy findServiceDefinition(String serviceId) {
        return services.stream().filter(s -> s.getId().equals(serviceId))
                .findFirst().get();
    }
}
Manifest.yml
---
applications:
- name: my-service-broker
  memory: 512M
  instances: 1
  host: my-service-broker
  path: target/my-service-broker-1.0.0-SNAPSHOT.jar
  env:
    SPRING_PROFILES_DEFAULT: cloud
    catalog:
      services:
        - id: f1478faa-d980-11e5-b5d2-0a1d41d68578
          name: api-marketpace
          description: API Marketplace
          bindable: true
          planUpdatable: true
          head-type: api
          tags:
            - api
            - Manage API Marketplace
          metadata:
            displayName: API Marketplace
            imageUrl: https://my-service-broker.cf.com/images/logo.PNG
            longDescription: API Marketplace.
            providerDisplayName: API Team
            documentationUrl: https://wikihub.com/display/ASC/Training
            supportUrl: https://wikihub.com/display/ASC/Training
          plans:
            - id: f1478faa-d980-11e5-b5d2-0a1d41d68579
              name: unlimited
              description: free
              metadata:
                costs:
                  - amount:
                      usd: 0.00
                    unit: PER MONTH
                bullets:
                  - Basic Unlimited
          dashboardClient:
            id: api-marketpace
            secret: secret
            redirectUrl: https://api.cf.com/
That won't work.
The manifest.yml file is used exclusively by the cf CLI to provide options when pushing apps to CF. Deployed applications never see this file or any of its contents. In fact, the CF platform itself never sees the file or its contents - it is processed purely by the CLI on the client side.
The application.yml file is used by Spring Boot, and its contents are provided to the app via @ConfigurationProperties and other means.
These are two completely separate concepts and mechanisms, both of which happen to use the YAML data format.
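That said, the app does see environment variables set from the manifest's env block. If you want to keep the catalog out of the jar, one option (an assumption on my part, not part of the original answer) is Spring Boot's standard SPRING_APPLICATION_JSON variable, whose JSON payload is merged into the Environment and can bind onto the same catalog.* @ConfigurationProperties shown above. A truncated sketch:

env:
  SPRING_PROFILES_DEFAULT: cloud
  # Spring Boot parses this JSON and exposes it as ordinary properties,
  # so it binds onto the catalog.* prefix used by CatalogConfig.
  SPRING_APPLICATION_JSON: '{"catalog":{"services":[{"id":"f1478faa-d980-11e5-b5d2-0a1d41d68578","name":"api-marketpace"}]}}'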
