MassTransit error at cfg.ConfigureEndpoints(context)

See my code below; the error happens at cfg.ConfigureEndpoints(context).
The error is:
"No service for type 'MassTransit.Saga.ISagaRepository`1[SlideX.Core.StateMachines.OrderState]' has been registered."
The docker-compose file is below:
version: '3.4'

services:
  hostedservice:
    image: ${DOCKER_REGISTRY-}hostedservice
    build:
      context: .
      dockerfile: HostedService/Dockerfile
  rabbitmq:
    image: masstransit/rabbitmq:latest
    ports:
      - "5672:5672"
      - "15672:15672"
      - "15692:15692"
services.AddMassTransit(x =>
{
    x.AddDelayedMessageScheduler();
    x.SetKebabCaseEndpointNameFormatter();

    // By default, sagas are in-memory, but should be changed to a durable
    // saga repository.
    x.SetInMemorySagaRepositoryProvider();

    x.AddSagaStateMachine<OrderStateMachine, OrderState>(typeof(OrderStateMachineDefinition));

    x.UsingRabbitMq((context, cfg) =>
    {
        if (IsRunningInContainer)
            cfg.Host("rabbitmq");

        cfg.UseDelayedMessageScheduler();
        cfg.ConfigureEndpoints(context);
    });
});

services.AddLogging();
services.AddMassTransitHostedService(true);

The use of:
    x.SetInMemorySagaRepositoryProvider()
only applies to sagas registered with the non-generic AddSaga and AddSagaStateMachine methods. So you'd need to change the registration to:
    x.AddSagaStateMachine(typeof(OrderStateMachine), typeof(OrderStateMachineDefinition));
The repository providers were intended for use with bulk registration, since the extension methods for specifying the saga repository aren't available on the bulk methods.
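Alternatively, if you want to keep the generic registration, you can attach the repository to it directly. A minimal sketch, assuming the registration API of recent MassTransit versions:

    x.AddSagaStateMachine<OrderStateMachine, OrderState>(typeof(OrderStateMachineDefinition))
        .InMemoryRepository(); // explicit per-saga repository; swap for a durable one in production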

Related

How can I pass a gradle property to jib via skaffold

I have a gradle build file with the following jib definition:
def baseImage = 'ghcr.io/tobias-neubert/eclipse-temurin:17.0.2_8-jre'

jib {
  from {
    image = baseImage
    auth {
      username = githubUser
      password = githubPassword
    }
  }
  to {
    image = 'ghcr.io/tobias-neubert/motd-service:0.0.1'
    auth {
      username = githubUser
      password = githubPassword
    }
  }
}
and the following skaffold.yaml:
apiVersion: skaffold/v4beta1
kind: Config
metadata:
  name: motd-service
build:
  artifacts:
    - image: ghcr.io/tobias-neubert/motd-service
      jib:
        args:
          - "-PgithubUser=tobias-neubert"
          - "-PgithubPassword=secret"
manifests:
  rawYaml:
    - k8s/deployment.yaml
    - k8s/istio.yaml
It seems that the arguments are not passed to Gradle, because I get the error:
Could not get unknown property 'githubPassword'
Why? What am I doing wrong and/or what have I misunderstood?
If I define the property like so:
    ext {
      githubPassword = System.getProperty('githubPassword', '')
    }
then I have to pass it as a system property via -DgithubPassword, not as -P.
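If you want the build file to accept either form, a small sketch using standard Gradle APIs (property names taken from the question):

    // Fall back from a Gradle project property (-P) to a JVM system property (-D),
    // then to an empty string, so either invocation style works.
    def githubUser = findProperty('githubUser') ?: System.getProperty('githubUser', '')
    def githubPassword = findProperty('githubPassword') ?: System.getProperty('githubPassword', '')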

Runtime.ImportModuleError with Serverless & Lambda

In the process of updating Serverless from v2 to v3, I have run into an issue where I am getting the following error:
Runtime.ImportModuleError: Error: Cannot find module 'get-fixtures-today'
My serverless.yml looks like:
functions:
  get-fixtures-today:
    handler: src/get-fixtures/today/get-fixtures-today.run
    name: get-fixtures-today
    description: Get fixtures from api for today's date
    package:
      patterns:
        - '!src/**'
        - 'src/get-fixtures/today/**'
        - src/helpers/get-fixtures.js
        - src/helpers/utilities.js
        - src/helpers/upcoming-fixtures.js
        - src/sql/queries.js
        - src/sql/query_template.js
        - src/test/**
        - src/config/**
get-fixtures-today.js:
module.exports.run = async (event, context) => {
  try {
    await ...
  } catch (error) {
    console.log('Error');
  }
};
Am I potentially missing something obvious?
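One way to narrow this down (a sketch, assuming the standard Serverless CLI and the default artifact location) is to build the package locally and check whether the handler module actually made it into the zip:

    # build the deployment artifact without deploying
    serverless package
    # list the zip contents and look for the handler file
    unzip -l .serverless/*.zip | grep get-fixtures-today

If the handler file is missing from the listing, the package patterns are excluding it.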

Terraform stuck on `Refreshing state...` when running against `localstack`

I am using Terraform to publish a Lambda to AWS. It works fine when I deploy to AWS, but it gets stuck on "Refreshing state..." when running against localstack.
Below is my .tf config file; as you can see, I configured the lambda endpoint to be http://localhost:4567.
provider "aws" {
profile = "default"
region = "ap-southeast-2"
endpoints {
lambda = "http://localhost:4567"
}
}
variable "runtime" {
default = "python3.6"
}
data "archive_file" "zipit" {
type = "zip"
source_dir = "crawler/dist"
output_path = "crawler/dist/deploy.zip"
}
resource "aws_lambda_function" "test_lambda" {
filename = "crawler/dist/deploy.zip"
function_name = "quote-crawler"
role = "arn:aws:iam::773592622512:role/LambdaRole"
handler = "handler.handler"
source_code_hash = "${data.archive_file.zipit.output_base64sha256}"
runtime = "${var.runtime}"
}
Below is the docker-compose file for localstack:
version: '2.1'

services:
  localstack:
    image: localstack/localstack
    ports:
      - "4567-4583:4567-4583"
      - "8055:8080"
    environment:
      - SERVICES=${SERVICES-lambda }
      - DEBUG=${DEBUG- }
      - DATA_DIR=${DATA_DIR- }
      - PORT_WEB_UI=${PORT_WEB_UI- }
      - LAMBDA_EXECUTOR=${LAMBDA_EXECUTOR-docker-reuse }
      - KINESIS_ERROR_PROBABILITY=${KINESIS_ERROR_PROBABILITY- }
      - DOCKER_HOST=unix:///var/run/docker.sock
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
Does anyone know how to fix the issue?
This is how I fixed a similar issue:
Set export TF_LOG=TRACE, which is the most verbose logging level.
Run terraform plan ....
In the log, I found the root cause of the issue:
dag/walk: vertex "module.kubernetes_apps.provider.helmfile (close)" is waiting for "module.kubernetes_apps.helmfile_release_set.metrics_server"
From the logs, I identified the state entry causing the issue: module.kubernetes_apps.helmfile_release_set.metrics_server.
I deleted its state:
terraform state rm module.kubernetes_apps.helmfile_release_set.metrics_server
Running terraform plan again should now fix the issue.
This is not the best solution, which is why I contacted the owner of this provider to fix the issue without this workaround.
The reason it failed is that Terraform tries to validate credentials against AWS. Adding the two lines below to your .tf configuration file solves the issue:
  skip_credentials_validation = true
  skip_metadata_api_check     = true
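Putting both together with the provider block from the question (a sketch; the dummy keys are placeholders commonly used with localstack, not values from the original config):

provider "aws" {
  region                      = "ap-southeast-2"
  access_key                  = "test" # placeholder; localstack does not validate credentials
  secret_key                  = "test"
  skip_credentials_validation = true
  skip_metadata_api_check     = true

  endpoints {
    lambda = "http://localhost:4567"
  }
}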
I ran into the same issue and fixed it by logging in to the AWS dev profile from the console. So don't forget to log in.
provider "aws" {
  region  = "ap-southeast-2"
  profile = "dev"
}

Serverless YML toUpperCase

I want to reuse my serverless.yml in different environments (dev, test, prod).
In the config I have:
provider:
  name: aws
  stage: ${opt:stage, 'dev'}
  environment:
    NODE_ENV: ${self:provider.stage}
Right now the value will be dev, test, or prod (all lower-case).
Is there a way to convert it with toUpperCase() so that the input and self:provider.stage stay as they are (i.e. lower-case) but the value of NODE_ENV is UPPER-CASE?
Update (2022-10-13)
This answer was correct at the time of its writing (circa 2018). A better answer now is to use serverless-plugin-utils, as stated in @ShashankRaj's comment below:
varName: ${upper(value)}
AFAIK, there is no such function in YAML.
You can achieve what you want though by using a map between the lowercase and uppercase names.
custom:
  environments:
    dev: DEV
    test: TEST
    prod: PROD

provider:
  name: aws
  stage: ${opt:stage, 'dev'}
  environment:
    NODE_ENV: ${self:custom.environments.${self:provider.stage}}
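For example (assuming the standard Serverless CLI), the stage option then selects the mapped value:

    serverless deploy --stage test   # NODE_ENV resolves to TEST
    serverless deploy                # stage defaults to 'dev'; NODE_ENV resolves to DEV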
You can achieve something to this effect using the "reference variables in JavaScript files" functionality Serverless provides.
To take your example, this should work (assuming you're running in a Node.js environment that supports modern syntax):
serverless.yml
...
provider:
  name: aws
  stage: ${opt:stage, 'dev'}
  environment:
    NODE_ENV: ${file(./yml-helpers.js):provider.stage.uppercase}
...
yml-helpers.js (adjacent to serverless.yml)
module.exports.provider = serverless => {
  // The `serverless` argument contains all the information in the .yml file
  const provider = serverless.service.provider;
  return Object.entries(provider).reduce(
    (accumulator, [key, value]) => ({
      ...accumulator,
      [key]:
        typeof value === 'string'
          ? {
              lowercase: value.toLowerCase(),
              uppercase: value.toUpperCase()
            }
          : value
    }),
    {}
  );
};
I arrived at something that works by reading some source code and console-logging the entire serverless object. This example applies a helper function to title-case some input option values (apply str.toUpperCase() instead, as required). A parsed version of the input options is already available on the serverless object.
// serverless-helpers.js
function toTitleCase(word) {
  console.log("input word: " + word);
  let lower = word.toLowerCase();
  let title = lower.replace(lower[0], lower[0].toUpperCase());
  console.log("output word: " + title);
  return title;
}

module.exports.dynamic = function(serverless) {
  // The `serverless` argument contains all the information in
  // the serverless.yaml file.
  // serverless.cli.consoleLog('Use Serverless config and methods as well!');
  // This is useful for discovering what is available:
  // serverless.cli.consoleLog(serverless);
  const input_options = serverless.processedInput.options;
  return {
    part1Title: toTitleCase(input_options.part1),
    part2Title: toTitleCase(input_options.part2)
  };
};
# serverless.yaml snippet
custom:
  part1: ${opt:part1}
  part2: ${opt:part2}
  dynamicOpts: ${file(./serverless-helpers.js):dynamic}
  combined: prefix${self:custom.dynamicOpts.part1Title}${self:custom.dynamicOpts.part2Title}Suffix
This simple example assumes the input options are --part1={value} and --part2={value}, but the generalization, sketched below, is to traverse the properties of serverless.processedInput.options and apply any custom helpers to those values.
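A minimal sketch of that generalization (hypothetical helper name; it reuses toTitleCase from above and applies it to every string-valued option instead of hard-coding part1 and part2):

// serverless-helpers.js (generalized sketch)
module.exports.dynamicAll = function(serverless) {
  const options = serverless.processedInput.options;
  // apply the helper to every string-valued CLI option, e.g. part1 -> part1Title
  return Object.fromEntries(
    Object.entries(options)
      .filter(([, value]) => typeof value === 'string')
      .map(([key, value]) => [key + 'Title', toTitleCase(value)])
  );
};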
Using Serverless Plugin Utils:
plugins:
  - serverless-plugin-utils

provider:
  name: aws
  stage: ${opt:stage, 'dev'}
  environment:
    NODE_ENV: ${upper(${self:provider.stage})}
Thanks to @ShashankRaj.

How to get Visual Studio to launch correct url when using docker-compose with https

I've created a .NET Core 2.0 project and configured it to run over HTTPS, but I cannot get Visual Studio to launch the browser with the correct scheme/port when running in Docker debug mode.
The current behaviour is that VS always launches on port 80 (HTTP), so I have to manually change the URL each time, which is cumbersome.
Program.cs
public class Program {
    public static void Main(string[] args) {
        BuildWebHost(args).Run();
    }

    public static IWebHost BuildWebHost(string[] args) =>
        WebHost.CreateDefaultBuilder(args)
            .UseKestrel(options => {
                options.Listen(IPAddress.Any, GetPort(), listenOptions => {
                    // todo: Change this for production
                    listenOptions.UseHttps("dev-cert.pfx", "idsrv3test");
                });
            })
            .UseStartup<Startup>()
            .Build();

    public static int GetPort() => int.Parse(Environment.GetEnvironmentVariable("Port") ?? "443");
}
Dockerfile
FROM microsoft/aspnetcore:2.0
ARG source
WORKDIR /app
EXPOSE 443
COPY ${source:-obj/Docker/publish} .
ENTRYPOINT ["dotnet", "MyApp.dll"]
docker-compose.override.yml
version: '3'

services:
  myapp:
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
      - Port=443
    ports:
      - 443

networks:
  default:
    external:
      name: nat
Okay, I have found out how to solve this myself: right-click the docker-compose project and go to Properties. There you can configure the Service URL that gets launched on run.
For anyone looking for a code solution, what causes the random port is this line in docker-compose.override.yml:
ports:
  - "80"
Just remove it and add your own port mapping, for example:
ports:
  - "8080:80"
