Can't change session service name in torii - session

I'm using ember-simple-auth and emberfire to authenticate users on my app. One thing I don't like about the defaults is that there is both a "session" service and a "session" object on the service. So, I opened config/environment.js and changed:
var Env = {
  torii: {
    sessionServiceName: 'session',
    providers: {
      'firebase-simple-auth': {}
    }
  }
  ...
to
var Env = {
  torii: {
    sessionServiceName: 'auth',
    providers: {
      'firebase-simple-auth': {}
    }
  }
  ...
But, the newly named "auth" service doesn't have the "invalidate" and "authenticate" methods. Those are still on the "session" service (which I'm surprised is still around).
How do I move the entire "session" service over to an "auth" service?
Thanks!

You are configuring torii here, not Ember Simple Auth (ESA). When you use the two in combination, you're not actually using torii's session at all. ESA's session service cannot be renamed, but that isn't necessary anyway: you inject it explicitly, and you can pick a custom property name when doing so, as sketched below.
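For illustration, a minimal sketch (assuming an Octane-style component; the authenticator and provider names are placeholders for whatever you already use) of injecting ESA's "session" service under a local name of your choosing:

import Component from '@glimmer/component';
import { action } from '@ember/object';
import { inject as service } from '@ember/service';

export default class LoginFormComponent extends Component {
  // inject the "session" service, but expose it locally as "auth"
  @service('session') auth;

  @action
  async logIn() {
    // placeholder authenticator/provider names; use the ones you already configured
    await this.auth.authenticate('authenticator:torii', 'firebase-simple-auth');
  }

  @action
  logOut() {
    this.auth.invalidate();
  }
}

The service itself is still registered as "session"; only the property name on the consuming class changes.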

Related

Inject NestJS service to build context for GraphQL gateway server

In app.module.ts I have the following:
@Module({
  imports: [
    ...,
    GraphQLModule.forRoot<ApolloGatewayDriverConfig>({
      server: {
        context: getContext,
      },
      driver: ApolloGatewayDriver,
      gateway: {
        buildService: ({ name, url }) => {
          return new RemoteGraphQLDataSource({
            url,
            willSendRequest({ request, context }: any) {
              ...
            },
          });
        },
        supergraphSdl: new IntrospectAndCompose({
          subgraphs: [
            { name: 'iam', url: API_URL_IAM },
          ],
        }),
      },
    }),
  ],
  ...
})
Here getContext is just a regular function that is not part of the NestJS context (it has no injection or module capabilities), like below:
export const getContext = async ({ req }) => {
  return {}
}
Is there any way to use NestJS services, instead of the plain functional approach, to build the context for the GraphQL gateway in NestJS?
Thanks in advance for any kind of help.
I believe you're looking to create a service that is @Injectable and to make it available via a provider; the provider is what satisfies the necessary dependency injection.
In your scenario, I would import other modules as necessary. For building the context, I would create a config file populated from env variables, then create a custom provider that reads those variables and supplies that implementation of the class/service to the other classes as their injected dependency.
For example, with a GraphQL module I would import the independent module, then list the handler/service classes and their dependencies as @Injectable providers in the providers section. Once your service class is created based on your config (which your provider class would handle), you attach that service class to your GraphQL class, for instance to point the URL at your dev/prod environments.
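As a minimal sketch of that idea (ContextService and ContextModule are hypothetical names, and forRootAsync is used so the factory can receive injected dependencies; adapt to your actual setup):

// context.service.ts (hypothetical)
import { Injectable, Module } from '@nestjs/common';

@Injectable()
export class ContextService {
  // anything this service needs (config, auth clients, ...) can be injected here
  async build({ req }: any) {
    return { authorization: req?.headers?.authorization };
  }
}

@Module({
  providers: [ContextService],
  exports: [ContextService],
})
export class ContextModule {}

// app.module.ts: switch forRoot to forRootAsync so ContextService can be injected
GraphQLModule.forRootAsync<ApolloGatewayDriverConfig>({
  driver: ApolloGatewayDriver,
  imports: [ContextModule],
  inject: [ContextService],
  useFactory: (contextService: ContextService) => ({
    server: {
      context: (args) => contextService.build(args),
    },
    gateway: {
      supergraphSdl: new IntrospectAndCompose({
        subgraphs: [{ name: 'iam', url: API_URL_IAM }],
      }),
    },
  }),
}),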

Spartacus Storefront Multisite I18n with Backend

We've run into some problems for our MultiSite Spartacus setup when doing I18n.
We'd like to have different translations for each site, so we put these on an API that can give back the messages dependent on the baseSite, eg: backend.org/baseSiteX/messages?group=common
But the Spartacus setup doesn't seem to let us pass the baseSite: we can pass {{lng}} and {{ns}}, but no baseSite.
See https://sap.github.io/spartacus-docs/i18n/#lazy-loading
We could do it by overriding i18nextInit, but I'm unsure how to achieve this.
The documentation also says you can use crossOrigin: true in the config, but that does not seem to work: the type-checking says it's unsupported, and we still see CORS issues.
Does someone have ideas for these problems?
Currently only language {{lng}} and chunk name {{ns}} are supported as dynamic params in the i18n.backend.loadPath config.
To achieve your goal, you can implement a custom Spartacus CONFIG_INITIALIZER that will populate your i18n.backend.loadPath config based on the value from BaseSiteService.getActive():
@Injectable({ providedIn: 'root' })
export class I18nBackendPathConfigInitializer implements ConfigInitializer {
  readonly scopes = ['i18n.backend.loadPath']; // declare the config key that you will resolve
  readonly configFactory = () => this.resolveConfig().toPromise();

  constructor(protected baseSiteService: BaseSiteService) {}

  protected resolveConfig(): Observable<I18nConfig> {
    return this.baseSiteService.getActive().pipe(
      take(1),
      map((baseSite) => ({
        i18n: {
          backend: {
            // initialize your i18n backend path using the basesite value:
            loadPath: `https://backend.org/${baseSite}/messages?lang={{lng}}&group={{ns}}`,
          },
        },
      }))
    );
  }
}
and provide it in your module (i.e. in app.module):
@NgModule({
  providers: [
    {
      provide: CONFIG_INITIALIZER,
      useExisting: I18nBackendPathConfigInitializer,
      multi: true,
    },
  ],
  /* ... */
})
Note: the above solution assumes the active basesite is set only once, on app start (which is the case in Spartacus by default).

How to configure the REST client in the Quarkus MicroProfile case

When using Quarkus microprofile as a rest client, how can I configure underlying HttpClient?
Like number of retries, connection pool size per host and so on?
Also, is it possible to force a client restart somehow (so the connection pool is recreated)?
https://download.eclipse.org/microprofile/microprofile-rest-client-2.0-RC2/microprofile-rest-client-2.0-RC2.html#_configuration_keys outlines the full set of configuration keys that can be used.
The ones you're looking for are:
{packageName}.{interfaceName}/mp-rest/connectTimeout
{packageName}.{interfaceName}/mp-rest/readTimeout
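For instance, assuming a hypothetical client interface org.acme.MyClient, the corresponding entries in application.properties would look like this (timeout values are in milliseconds):

org.acme.MyClient/mp-rest/url=https://example.org
org.acme.MyClient/mp-rest/connectTimeout=5000
org.acme.MyClient/mp-rest/readTimeout=10000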
The RestClientBuilder also has methods for setting those properties if you're using the programmatic API instead of the CDI approach.
I'm not aware of any means of restarting the underlying HTTP client connection pool. What would be the use case for such a situation that doesn't require the whole application to be restarted?
So... after a lot of digging, here is the solution I've found so far. It is apparently not obvious:
To make it work in pure Java (no native)
Under the resources/META-INF/services directory, add a file named org.eclipse.microprofile.rest.client.spi.RestClientBuilderListener containing the class name of your implementation of the RestClientBuilderListener interface, for example my.test.MyBuilderListener. This allows the ServiceLoader to find and execute your listener.
Reference the property you want to modify from ResteasyClientBuilder; for example, to set a custom connectionTTL value the code looks like this:
public class MyBuilderListener implements RestClientBuilderListener {
    @Override
    public void onNewBuilder(RestClientBuilder builder) {
        log.info("Changing TTL for connections");
        builder.property("resteasy.connectionTTL", List.of(2L, TimeUnit.SECONDS));
    }
}
I.e., add a resteasy. prefix to the property name.
Profit
Now for native support. After the steps above:
Make both MyBuilderListener and ResteasyClientBuilder available for reflection by creating a file named reflection-config.json:
[
  {
    "name": "org.jboss.resteasy.client.jaxrs.ResteasyClientBuilder",
    "allDeclaredConstructors": true,
    "allPublicConstructors": true,
    "allDeclaredMethods": true,
    "allPublicMethods": true,
    "allDeclaredFields": true,
    "allPublicFields": true
  },
  {
    "name": "my.test.MyBuilderListener",
    "allDeclaredConstructors": true,
    "allPublicConstructors": true,
    "allDeclaredMethods": true,
    "allPublicMethods": true,
    "allDeclaredFields": true,
    "allPublicFields": true
  }
]
Add the service registration file to resources. Create a file named resources-config.json with the following content:
{
  "resources": [
    {
      "pattern": "META-INF/services/org\\.eclipse\\.microprofile\\.rest\\.client\\.spi\\.RestClientBuilderListener$"
    }
  ]
}
Register both files in application.yaml:
quarkus:
  native:
    additional-build-args: -H:ResourceConfigurationFiles=resources-config.json, -H:ReflectionConfigurationFiles=reflection-config.json
Native profit
Have fun

MassTransit endpoint name is ignored in ConsumerDefinition

The EndpointName property in a ConsumerDefinition file seems to be ignored by MassTransit. I know the ConsumerDefinition is being used because the retry logic works. How do I get different commands to go to a different queue? It seems that I can get them all to go through one central queue but I don't think this is best practice for commands.
Here is my app configuration that executes on startup when creating the MassTransit bus.
Bus.Factory.CreateUsingAzureServiceBus(cfg =>
{
    cfg.Host(_config.ServiceBusUri, host =>
    {
        host.SharedAccessSignature(s =>
        {
            s.KeyName = _config.KeyName;
            s.SharedAccessKey = _config.SharedAccessKey;
            s.TokenTimeToLive = TimeSpan.FromDays(1);
            s.TokenScope = TokenScope.Namespace;
        });
    });

    cfg.ReceiveEndpoint("publish", ec =>
    {
        // this is done to register all consumers in the assembly and to use their definition files
        ec.ConfigureConsumers(provider);
    });
});
And here is my handler definition in the consumer (an Azure worker service):
public class CreateAccessPointCommandHandlerDef : ConsumerDefinition<CreateAccessPointCommandHandler>
{
    public CreateAccessPointCommandHandlerDef()
    {
        EndpointName = "specific";
        ConcurrentMessageLimit = 4;
    }

    protected override void ConfigureConsumer(
        IReceiveEndpointConfigurator endpointConfigurator,
        IConsumerConfigurator<CreateAccessPointCommandHandler> consumerConfigurator
    )
    {
        endpointConfigurator.UseMessageRetry(r =>
        {
            r.Immediate(2);
        });
    }
}
In my app that is sending the message I have to configure it to send to the "publish" queue, not "specific".
EndpointConvention.Map<CreateAccessPointsCommand>(new Uri($"queue:specific")); // does not work
EndpointConvention.Map<CreateAccessPointsCommand>(new Uri($"queue:publish")); // this does work
Because you are configuring the receive endpoint yourself and giving it the name publish, that is the receive endpoint your consumers end up on; the EndpointName in the definition is never consulted.
To configure the endpoints using the definitions, use:
cfg.ConfigureEndpoints(provider);
This will use the definitions that were registered in the container to configure the receive endpoints, using the consumer endpoint name defined.
This is also explained in the documentation.

DTO validation not working for microservice, but working for APIs directly

I am developing APIs & microservices in NestJS. This is my controller function:
@Post()
@MessagePattern({ service: TRANSACTION_SERVICE, msg: 'create' })
create(@Body() createTransactionDto: TransactionDto_create): Promise<Transaction> {
  return this.transactionsService.create(createTransactionDto);
}
When I call the POST API, the DTO validation works fine, but when I call this via the microservice, validation does not run and the payload passes through to the service without being rejected with an error.
Here is my DTO:
import { IsEmail, IsNotEmpty, IsString } from 'class-validator';

export class TransactionDto_create {
  @IsNotEmpty()
  action: string;

  // @IsString()
  readonly rec_id: string;

  @IsNotEmpty()
  readonly data: Object;

  extras: Object;
  // readonly extras2 : Object;
}
When I call the API without the action parameter it returns an "action required" error, but when I call it from the microservice using
const pattern = { service: TRANSACTION_SERVICE, msg: 'create' };
const data = {id: '5d1de5d787db5151903c80b9', extras:{'asdf':'dsf'}};
return this.client.send<number>(pattern, data)
it does not throw an error and goes straight to the service.
I have also added a global validation pipe:
app.useGlobalPipes(new ValidationPipe({
  disableErrorMessages: false, // set true to hide detailed error messages
  whitelist: false, // set true to strip params which are not in the DTO
  transform: false // set true if you want params converted to the DTO class; by default it is false
}));
How can I make it work for both the API & the microservice? I need everything in one place with the same functionality, so that clients of either kind can call it.
ValidationPipe throws an HTTP BadRequestException, whereas the proxy client expects an RpcException. You can catch the HttpException in an exception filter and rethrow it as an RpcException:
@Catch(HttpException)
export class RpcValidationFilter implements ExceptionFilter {
  catch(exception: HttpException, host: ArgumentsHost) {
    return new RpcException(exception.getResponse());
  }
}
@UseFilters(new RpcValidationFilter())
@MessagePattern('validate')
async validate(
  @Payload(new ValidationPipe({ whitelist: true })) payload: SomeDTO,
) {
  // payload validates to SomeDto
  . . .
}
I'm going out on a limb and assuming that in your main.ts you have the line app.useGlobalPipes(new ValidationPipe());. From the documentation:
In the case of hybrid apps the useGlobalPipes() method doesn't set up pipes for gateways and micro services. For "standard" (non-hybrid) microservice apps, useGlobalPipes() does mount pipes globally.
You could instead bind the pipe globally from the AppModule, or you could use the @UsePipes() decorator on each route that needs validation via the ValidationPipe.
More info on binding pipes here
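As a minimal sketch of the first option (binding the ValidationPipe from the AppModule via the APP_PIPE token; the pipe options here just mirror the ones from the question):

// app.module.ts
import { Module, ValidationPipe } from '@nestjs/common';
import { APP_PIPE } from '@nestjs/core';

@Module({
  providers: [
    {
      provide: APP_PIPE,
      useValue: new ValidationPipe({
        disableErrorMessages: false,
        whitelist: false,
        transform: false,
      }),
    },
  ],
})
export class AppModule {}

Because the pipe is registered from inside a module rather than with useGlobalPipes(), it is also applied to the message-pattern handlers of a hybrid application.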
As I understand it, useGlobalPipes works fine for the API but not for the microservice.
The reason is that a Nest app with an attached microservice is a hybrid application, and hybrid applications have some restrictions. Please refer to the paragraph below:
By default a hybrid application will not inherit global pipes, interceptors, guards and filters configured for the main (HTTP-based) application. To inherit these configuration properties from the main application, set the inheritAppConfig property in the second argument (an optional options object) of the connectMicroservice() call.
Please refer to this official Nest document.
So you need to add the inheritAppConfig option to the connectMicroservice() call:
const microservice = app.connectMicroservice(
  {
    transport: Transport.TCP,
  },
  { inheritAppConfig: true },
);
It worked for me!
