Spring @ConfigurationProperties configuration processor not generating correct metadata for Maps with enum keys and values

I need to do some lightweight attribute-based access control in a Spring Boot application.
(The background is not related to my question, but here's a brief summary: There are different types of users, different types of resources (each with attributes), and different levels of access. I need to evaluate access to resources very dynamically - hence using enums to separate permission evaluation from other logic.)
I want to store the configuration for the permissions in application.yml so I can configure it easily, and change it without making code changes.
I wanted to set this up using @ConfigurationProperties, so that the config in application.yml had autocompletion and could be softly validated. This works very nicely for other properties, but not when I start using Maps.
I can change the keys to strings and map them in the config class (even though I lose type-completion), but the documentation indicates in a few places that Maps keyed by enums are supported and that metadata will be generated correctly - so I am confused.
I have correctly set up spring-boot-configuration-processor (it works for other properties) and it is generating the metadata JSON (see below).
MyConfig.kt
package example.myapp

import org.springframework.boot.context.properties.ConfigurationProperties
import org.springframework.boot.context.properties.ConstructorBinding

@ConstructorBinding
@ConfigurationProperties("my-app")
data class MyConfig(
    val permissions: Map<Role, Map<ResourceId, Set<Permission>>>,
    val owner: Map<ResourceId, Set<Permission>>
) {
    enum class Role {
        OWNER,
        VIEWER,
    }

    enum class Permission {
        VIEW,
        EDIT,
    }

    enum class ResourceId {
        ARTICLE_TITLE,
        ARTICLE_BODY,
    }
}
spring-configuration-metadata.json
{
  "groups": [
    {
      "name": "my-app",
      "type": "example.myapp.MyConfig",
      "sourceType": "example.myapp.MyConfig"
    }
  ],
  "properties": [
    {
      "name": "my-app.owner",
      "type": "java.util.Map<example.myapp.MyConfig$ResourceId,? extends java.util.Set<? extends example.myapp.MyConfig.Permission>>",
      "sourceType": "example.myapp.MyConfig"
    },
    {
      "name": "my-app.permissions",
      "type": "java.util.Map<example.myapp.MyConfig$Role,? extends java.util.Map<example.myapp.MyConfig.ResourceId,? extends java.util.Set<? extends example.myapp.MyConfig.Permission>>>",
      "sourceType": "example.myapp.MyConfig"
    }
  ],
  "hints": []
}
Versions
Spring Boot 2.7.1
Kotlin 1.7.10
Update
Changing Map<> to HashMap<> seems to avoid the errors on the keys, but doesn't affect the values - I still get a misleading warning: Cannot resolve configuration property
@ConstructorBinding
@ConfigurationProperties("my-app")
data class MyConfig(
    val permissions: HashMap<Role, HashMap<ResourceId, HashSet<Permission>>>,
    val owner: HashMap<ResourceId, HashSet<Permission>>
)
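For reference, the string-keyed workaround mentioned near the top of the question might look roughly like the sketch below (the class and helper names are hypothetical, not from the original post):

package example.myapp

import org.springframework.boot.context.properties.ConfigurationProperties
import org.springframework.boot.context.properties.ConstructorBinding

// Hypothetical variant of MyConfig: bind plain string keys, then convert them to
// the enums in code. Key autocompletion is lost, but binding works and the
// generated metadata type is a plain Map<String, ...>.
@ConstructorBinding
@ConfigurationProperties("my-app")
data class MyStringKeyedConfig(
    val owner: Map<String, Set<MyConfig.Permission>>
) {
    // Converts raw keys such as "article-title" into typed enum keys on demand.
    fun ownerByResource(): Map<MyConfig.ResourceId, Set<MyConfig.Permission>> =
        owner.mapKeys { (key, _) ->
            MyConfig.ResourceId.valueOf(key.uppercase().replace('-', '_'))
        }
}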

Related

Kotlin OpenapiGenerator Any type generates into Map<String, JsonObject>

I'm struggling to find the correct way to define an Any object in Kotlin so that OpenAPI Generator would generate it as an Object type.
I have a simple DTO object; the payload is basically a map of object fields and values:
data class EventDto(
    val payload: Map<String, Any>,
    // ...other fields
)
Which gets converted to OpenAPI Spec and looks like this:
omitted code
...
"payload": {
  "type": "object",
  "additionalProperties": {
    "type": "object"
  }
},
But when I execute open-api-generator, this gets converted into a Kotlin class looking like this:
@Serializable
public data class EventPayloadDto(
    @SerialName(value = "payload")
    val payload: kotlin.collections.Map<kotlin.String, kotlinx.serialization.json.JsonObject>? = null,
)
This is not so nice to convert into, because each object value needs to be converted to a JsonObject. Is it possible to retain the "Any" object when generating from the OpenAPI docs, or must I use Map<String, String>?
I tried using objectMapper.convertValue:
objectMapper.convertValue(event, object : TypeReference<Map<String, JsonObject>>() {})
but since there are no serializers for JsonObject it had no effect and ended up in an error.
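As an aside (this is not from the original post): if the generated Map<String, JsonObject>-style type is kept, one option is to build the JsonObject values with kotlinx.serialization directly instead of going through Jackson's convertValue. A rough sketch:

import kotlinx.serialization.json.JsonNull
import kotlinx.serialization.json.JsonObject
import kotlinx.serialization.json.JsonPrimitive
import kotlinx.serialization.json.buildJsonObject

// Hypothetical helper: converts a plain Kotlin map into a JsonObject so it can be
// assigned to the generated DTO's JsonObject-typed payload. Only a few value types
// are handled here; extend as needed.
@Suppress("UNCHECKED_CAST")
fun Map<String, Any?>.toJsonObject(): JsonObject = buildJsonObject {
    forEach { (key, value) ->
        when (value) {
            null -> put(key, JsonNull)
            is String -> put(key, JsonPrimitive(value))
            is Number -> put(key, JsonPrimitive(value))
            is Boolean -> put(key, JsonPrimitive(value))
            is Map<*, *> -> put(key, (value as Map<String, Any?>).toJsonObject())
            else -> put(key, JsonPrimitive(value.toString()))
        }
    }
}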

NestJS transform a property using ValidationPipe before validation execution during DTO creation

I'm using the built-in NestJS ValidationPipe along with class-validator and class-transformer to validate and sanitize inbound JSON body payloads. One scenario I'm facing is a mixture of upper- and lower-case property names in the inbound JSON objects. I'd like to rectify and map these properties to standard camel-cased models in our new TypeScript NestJS API, so that I don't couple mismatched patterns in a legacy system to our new API and new standards - essentially using @Transform in the DTOs as an isolation mechanism for the rest of the application. For example, properties on the inbound JSON object:
"propertyone",
"PROPERTYTWO",
"PropertyThree"
should map to
"propertyOne",
"propertyTwo",
"propertyThree"
I'd like to use @Transform to accomplish this, but I don't think my approach is correct. I'm wondering if I need to write a custom ValidationPipe. Here is my current approach.
Controller:
import { Body, Controller, Post, UsePipes, ValidationPipe } from '@nestjs/common';
import { TestMeRequestDto } from './testmerequest.dto';

@Controller('test')
export class TestController {
  constructor() {}

  @Post()
  @UsePipes(new ValidationPipe({ transform: true }))
  async get(@Body() testMeRequestDto: TestMeRequestDto): Promise<TestMeResponseDto> {
    const response = ... ; // do something useful here
    return response;
  }
}
TestMeModel:
import { IsNotEmpty } from 'class-validator';

export class TestMeModel {
  @IsNotEmpty()
  someTestProperty!: string;
}
TestMeRequestDto:
import { IsNotEmpty, ValidateNested } from 'class-validator';
import { Transform, Type } from 'class-transformer';
import { TestMeModel } from './testme.model';

export class TestMeRequestDto {
  @IsNotEmpty()
  @Transform((propertyone) => propertyone.valueOf())
  propertyOne!: string;

  @IsNotEmpty()
  @Transform((PROPERTYTWO) => PROPERTYTWO.valueOf())
  propertyTwo!: string;

  @IsNotEmpty()
  @Transform((PropertyThree) => PropertyThree.valueOf())
  propertyThree!: string;

  @ValidateNested({ each: true })
  @Type(() => TestMeModel)
  simpleModel!: TestMeModel
}
Sample payload used to POST to the controller:
{
  "propertyone": "test1",
  "PROPERTYTWO": "test2",
  "PropertyThree": "test3",
  "simpleModel": { "sometestproperty": "test4" }
}
The issues I'm having:
The transforms seem to have no effect. Class validator tells me that each of those properties cannot be empty. If for example I change "propertyone" to "propertyOne" then the class validator validation is fine for that property, e.g. it sees the value. The same for the other two properties. If I camelcase them, then class validator is happy. Is this a symptom of the transform not running before the validation occurs?
This one is very weird. When I debug and evaluate the TestMeRequestDto object, I can see that the simpleModel property contains an object with a property named "sometestproperty", even though the class definition for TestMeModel has a camel-cased "someTestProperty". Why doesn't the @Type(() => TestMeModel) respect the proper casing of that property name? The value of "test4" is present in this property, so it knows how to understand that value and assign it.
Stranger still, the @IsNotEmpty() validation for the "someTestProperty" property on the TestMeModel is not failing, e.g. it sees the "test4" value and is satisfied, even though the inbound property name in the sample JSON payload is "sometestproperty", which is all lower case.
Any insight and direction from the community would be greatly appreciated. Thanks!
You'll probably need to make use of the Advanced Usage section of the class-transformer docs. Essentially, your @Transform() would need to look something like this:
import { IsNotEmpty, ValidateNested } from 'class-validator';
import { Transform, Type } from 'class-transformer';
import { TestMeModel } from './testme.model';

export class TestMeRequestDto {
  @IsNotEmpty()
  @Transform((value, obj) => obj.propertyone.valueOf())
  propertyOne!: string;

  @IsNotEmpty()
  @Transform((value, obj) => obj.PROPERTYTWO.valueOf())
  propertyTwo!: string;

  @IsNotEmpty()
  @Transform((value, obj) => obj.PropertyThree.valueOf())
  propertyThree!: string;

  @ValidateNested({ each: true })
  @Type(() => TestMeModel)
  simpleModel!: TestMeModel
}
This should take an incoming payload of
{
  "propertyone": "value1",
  "PROPERTYTWO": "value2",
  "PropertyThree": "value3"
}
and turn it into the DTO you envision.
Edit 12/30/2020
So the original idea I had of using @Transform() doesn't quite work as envisioned, which is a real bummer because it looks so nice. So what you can do instead isn't quite as DRY, but it still works with class-transformer, which is a win. By making use of @Exclude() and @Expose() you're able to use property accessors as aliases for the oddly named properties, looking something like this:
class CorrectedDTO {
  @Expose()
  get propertyOne(): string {
    return this.propertyONE;
  }

  @Expose()
  get propertyTwo(): string {
    return this.PROPERTYTWO;
  }

  @Expose()
  get propertyThree(): string {
    return this.PrOpErTyThReE;
  }

  @Exclude({ toPlainOnly: true })
  propertyONE: string;

  @Exclude({ toPlainOnly: true })
  PROPERTYTWO: string;

  @Exclude({ toPlainOnly: true })
  PrOpErTyThReE: string;
}
Now you're able to access dto.propertyOne and get the expected property, and when you do classToPlain it will strip out propertyONE and the other original properties (if you're using Nest's serialization interceptor; otherwise, in a secondary pipe you could do plainToClass(NewDTO, classToPlain(value)), where NewDTO has only the corrected fields - see the sketch below).
The other thing you may want to look into is an automapper and see if it has better capabilities for something like this.
If you're interested, here's the StackBlitz I was using to test this out
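For illustration only (this is not part of the answer above, and the DTO names are hypothetical), the secondary-pipe idea could look something like this:

import { ArgumentMetadata, Injectable, PipeTransform } from '@nestjs/common';
import { classToPlain, plainToClass } from 'class-transformer';
// Hypothetical modules: CorrectedDTO is the class shown above, NewDTO is a DTO
// declaring only the corrected camel-cased fields.
import { CorrectedDTO } from './corrected.dto';
import { NewDTO } from './new.dto';

@Injectable()
export class CorrectCasingPipe implements PipeTransform {
  // Serializes the DTO (dropping the @Exclude()d original properties) and
  // rebuilds it as an instance that only has the corrected fields.
  transform(value: CorrectedDTO, metadata: ArgumentMetadata): NewDTO {
    return plainToClass(NewDTO, classToPlain(value));
  }
}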
As an alternative to Jay's excellent answer, you could also create a custom pipe where you keep the logic for mapping/transforming the request payload to your desired DTO. It can be as simple as this:
import { ArgumentMetadata, PipeTransform } from '@nestjs/common';
import { IsNotEmpty } from 'class-validator';

export class RequestConverterPipe implements PipeTransform {
  transform(body: any, metadata: ArgumentMetadata): TestMeRequestDto {
    const result = new TestMeRequestDto();
    // can of course contain more sophisticated mapping logic
    result.propertyOne = body.propertyone;
    result.propertyTwo = body.PROPERTYTWO;
    result.propertyThree = body.PropertyThree;
    return result;
  }
}

export class TestMeRequestDto {
  @IsNotEmpty()
  propertyOne: string;
  @IsNotEmpty()
  propertyTwo: string;
  @IsNotEmpty()
  propertyThree: string;
}
You can then use it like this in your controller (but you need to make sure that the order is correct, i.e. the RequestConverterPipe must run before the ValidationPipe which also means that the ValidationPipe cannot be globally set):
@UsePipes(new RequestConverterPipe(), new ValidationPipe())
async post(@Body() requestDto: TestMeRequestDto): Promise<TestMeResponseDto> {
  // ...
}

Apollo Client 3: How to implement caching on client side for graphql interfaces?

I have a case where I have an interface with different type implementations defined in GraphQL. I may not be able to share the exact code, but the case looks something like this:
interface Character {
  name: String!
}

type Human implements Character {
  name: String!
  friends: [Character]
}

type Droid implements Character {
  name: String!
  material: String
}
There is a query which returns either a Human or a Droid type in the response.
The response may contain something like:
{
  name: 'Human_01',
  friends: [],
  __typename: 'Human'
}
or
{
  name: 'Droid_01',
  material: 'Aluminium',
  __typename: 'Droid'
}
I am using Apollo Client 3 on client side for querying the data and have fragments for these like:
fragment Human on Human {
  friends
}

fragment Droid on Droid {
  material
}

fragment Character on Character {
  name
  ...Human
  ...Droid
}
I am querying for the Character data as:
character {
  ...Character
}
Since this is a case of an interface, and as described in the docs for Apollo Client 3, we need to use possibleTypes in order to match the fragments in such cases. For caching purposes, I have defined the InMemoryCache as:
new InMemoryCache({ possibleTypes: { Character: ['Human', 'Droid'] } })
The primary key field for a Character implementation is the name field, which I need to use in order to store its value in cache.
In Apollo client 3, it is mentioned to use typePolicies for defining keyFields for a type.
So I need to ask whether I should define a type policy for both type implementations, specifying keyFields as name in both cases, like:
new InMemoryCache({
  possibleTypes: { Character: ['Human', 'Droid'] },
  typePolicies: { Human: { keyFields: ['name'] }, Droid: { keyFields: ['name'] } }
});
In my example, I have provided only 2 such type implementations, but there can be any number of type implementations corresponding to the Character interface. So, in that case, I will need to define keyFields as name in typePolicies for all n type implementations.
So, is there a better way of implementing caching with respect to these kinds of interface implementations?
Any help would really be appreciated. Thanks!!!
Inheritance of type and field policies is coming in the next minor version of @apollo/client, v3.3!
You can try it out now by installing @apollo/client@3.3.0-beta.5.
To stay up to date on the progress of the v3.3 release, see this pull request.
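Once that release is out, a single type policy on the interface should be enough; here is a sketch assuming the v3.3 inheritance behavior (not tested against the beta):

import { InMemoryCache } from '@apollo/client';

// Sketch assuming @apollo/client v3.3+ type-policy inheritance: keyFields declared
// on the Character interface should be inherited by Human, Droid, and any future
// implementations listed in possibleTypes.
const cache = new InMemoryCache({
  possibleTypes: { Character: ['Human', 'Droid'] },
  typePolicies: {
    Character: { keyFields: ['name'] },
  },
});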

How could I structure my graphql schema to allow for the retrieval of possible dropdown values?

I'm trying to get the possible values for multiple dropdown menus from my graphQL api.
for example, say I have a schema like so:
type Employee {
  id: ID!
  name: String!
  jobRole: Lookup!
  address: Address!
}

type Address {
  street: String!
  line2: String
  city: String!
  state: Lookup!
  country: Lookup!
  zip: String!
}

type Lookup {
  id: ID!
  value: String!
}
jobRole, city and state are all fields that have a predetermined list of values that are needed in various dropdowns in forms around the app.
What would be the best practice in the schema design for this case? I'm considering the following option:
query {
  lookups {
    jobRoles {
      id
      value
    }
  }
}
This has the advantage of being data driven, so I can update my job roles without having to update my schema, but I can see this becoming cumbersome. I've only added a few of our business objects and already have about 25 different types of lookups in my schema, and as I add more data into the API I'll need to somehow maintain the right lookups being used for the right fields, dealing with general lookups that are used in multiple places vs. ultra-specific lookups that will only ever apply to one field, etc.
Has anyone else come across a similar issue and is there a good design pattern to handle this?
And for the record I don't want to use enums with introspection for 2 reasons.
With the number of lookups we have in our existing data there will be a need for very frequent schema updates
With an enum you only get one value, I need a code that will be used as the primary key in the DB and a descriptive value that will be displayed in the UI.
//bad
enum jobRole {
  MANAGER
  ENGINEER
  SALES
}

//needed
[
  {
    id: 1,
    value: "Manager"
  },
  {
    id: 2,
    value: "Engineer"
  },
  {
    id: 3,
    value: "Sales"
  }
]
EDIT
I wanted to give another example of why enums probably aren't going to work. We have a lot of descriptions that should show up in a drop down that contain special characters.
// Client Type
[
  {
    id: 'ENDOW',
    value: 'Foundation/Endowment'
  },
  {
    id: 'PUBLIC',
    value: 'Public (Government)'
  },
  {
    id: 'MULTI',
    value: 'Union/Multi-Employer'
  }
]
There are others that are worse; they have <, >, %, etc., and some of them are complete sentences, so the restrictive naming of enums really isn't going to work for this case. I'm leaning towards just making a bunch of lookup queries and treating each lookup as a distinct business object.
I found a way to make enums work the way I needed. I can get the value by putting it in the description
Here's my gql schema definition:
enum ClientType {
  """
  Public (Government)
  """
  PUBLIC
  """
  Union/Multi-Employer
  """
  MULTI
  """
  Foundation/Endowment
  """
  ENDOW
}
When I retrieve it with an introspection query like so
{
  __type(name: "ClientType") {
    enumValues {
      name
      description
    }
  }
}
I get my data in the exact structure I was looking for!
{
  "data": {
    "__type": {
      "enumValues": [{
        "name": "PUBLIC",
        "description": "Public (Government)"
      }, {
        "name": "MULTI",
        "description": "Union/Multi-Employer"
      }, {
        "name": "ENDOW",
        "description": "Foundation/Endowment"
      }]
    }
  }
}
Which has exactly what I need. I can use all the special characters, numbers, etc. found in our descriptions. If anyone is wondering how I keep my schema in sync with our database, I have a simple code generating script that queries the tables that store this info and generates an enums.ts file that exports all these enums. Whenever the data is updated (which doesn't happen that often) I just re-run the code generator and publish the schema changes to production.
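For what it's worth, a generator like the one described could be as small as the following sketch (the row shape and function name here are hypothetical, not from the original post):

// Hypothetical sketch of a lookup-table-to-SDL generator: turns rows like
// { id: 'ENDOW', value: 'Foundation/Endowment' } into an enum whose descriptions
// carry the display values.
interface LookupRow {
  id: string;
  value: string;
}

function toEnumSdl(name: string, rows: LookupRow[]): string {
  const values = rows
    .map((row) => `  """\n  ${row.value}\n  """\n  ${row.id}`)
    .join('\n');
  return `enum ${name} {\n${values}\n}`;
}

// toEnumSdl('ClientType', [{ id: 'PUBLIC', value: 'Public (Government)' }, ...])
// produces SDL equivalent to the ClientType enum shown above.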
You can still use enums for this if you want.
Introspection queries can be used client-side just like any other query. Depending on what implementation/framework you're using server-side, you may have to explicitly enable introspection in production. Your client can query the possible enum values when your app loads -- regardless of how many times the schema changes, the client will always have the correct enum values to display.
Enum values are not limited to all caps, although they cannot contain spaces. So you can have Engineer but not Human Resources. That said, if you substitute underscores for spaces, you can just transform the value client-side.
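As a quick sketch of that client-side transform (the function name is mine, not from the answer):

// Turns an enum token like "HUMAN_RESOURCES" into a display label like
// "Human Resources".
function enumToLabel(value: string): string {
  return value
    .toLowerCase()
    .split('_')
    .map((word) => word.charAt(0).toUpperCase() + word.slice(1))
    .join(' ');
}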
I can't speak to non-JavaScript implementations, but GraphQL.js supports assigning a value property for each enum value. This property is only used internally. For example, if you receive the enum as an argument, you'll get 2 instead of Engineer. Likewise, you would return 2 instead of Engineer inside a resolver. You can see how this is done with Apollo Server here.
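With Apollo Server, those internal values are typically supplied through the resolver map; a minimal sketch, assuming an enum named JobRole as in the question:

// Sketch: the schema exposes MANAGER/ENGINEER/SALES, while resolvers and incoming
// arguments see the internal ids 1/2/3.
const resolvers = {
  JobRole: {
    MANAGER: 1,
    ENGINEER: 2,
    SALES: 3,
  },
};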

graphql-tools difference between mergeSchemas and makeExecutableSchema

The reason I am asking this question is that I can get both of these to return a working result by just swapping one for the other. So which is the right one to use, and why?
What are their purposes in regards to schemas?
import { mergeSchemas } from 'graphql-tools'
import bookSchema from './book/schema/book.gql'
import bookResolver from './book/resolvers/book'

export const schema = mergeSchemas({
  schemas: [bookSchema],
  resolvers: [bookResolver]
})
import { makeExecutableSchema } from 'graphql-tools'
import bookSchema from './book/schema/book.gql'
import bookResolver from './book/resolvers/book'

export const schema = makeExecutableSchema({
  typeDefs: [bookSchema],
  resolvers: [bookResolver]
})
Both of these examples work and return the desired outcome. I believe the correct one to use here is makeExecutableSchema, but I'm not sure why the first one works.
EDIT
Just in case it helps, here are the types/resolvers:
typeDefs
type Query {
  book(id: String!): Book
  bookList: [Book]
}

type Book {
  id: String
  name: String
  genre: String
}
Resolvers
export default {
  Query: {
    book: () => {
      return {
        id: `1`,
        name: `name`,
        genre: `scary`
      }
    },
    bookList: () => {
      return [
        { id: `1`, name: `name`, genre: `scary` },
        { id: `2`, name: `name`, genre: `scary` }
      ]
    }
  }
}
Query Ran
query {
  bookList {
    id
    name
    genre
  }
}
Result
{
  "data": {
    "bookList": [
      {
        "id": "1",
        "name": "name",
        "genre": "scary"
      },
      {
        "id": "2",
        "name": "name",
        "genre": "scary"
      }
    ]
  }
}
mergeSchemas is primarily intended to be used for schema stitching, not combining code for a single schema you've chosen to split up for organizational purposes.
Schema stitching is most commonly done when you have multiple microservices that each expose a GraphQL endpoint. You can extract schemas from each endpoint and then use mergeSchemas to create a single GraphQL service that delegates queries to each microservice as appropriate. Technically, schema stitching could also be used to extend some existing API or to create multiple services from a base schema, although I imagine those use cases are less common.
If you are architecting a single, contained GraphQL service you should stick with makeExecutableSchema. makeExecutableSchema is what actually lets you use Schema Definition Language to generate your schema. mergeSchemas is a relatively new API and has a number of open issues, especially with regard to how directives are handled. If you don't need the functionality provided by mergeSchemas -- that is, if you're not actually merging separate schemas -- don't use it.
Yes, makeExecutableSchema creates a GraphQL.js GraphQLSchema instance from GraphQL schema language, as per the graphql-tools docs. So if you are creating a standalone, contained GraphQL service, it is the way to go.
But if you are looking to consolidate multiple GraphQL services, there are several different strategies you may consider, such as schema stitching, schema merging from graphql-tools, or federation from Apollo (there are probably more).
Since I landed here while searching for the difference between stitching and merging, I wanted to point out that they are not one and the same thing. Here is the answer I got to this question on the graphql-tools GitHub.
Schema Stitching creates a proxy schema on top of different independent subschemas, so the parts of that schema are executed using GraphQLJS internally. This is useful to create an architecture like microservices.
Schema Merging creates a new schema by merging the extracted type definitions and resolvers from them, so there will be a single execution layer.
The first one keeps the individual schemas, but the second one doesn't. A use case for the first would be combining multiple remote GraphQL APIs (microservices), while the second one would be good for combining local schemas.
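For completeness, here is a sketch of the local schema-merging style with the newer graphql-tools packages (the package split and the second module are assumptions, not from the answers above):

import { makeExecutableSchema } from '@graphql-tools/schema';
import { mergeTypeDefs, mergeResolvers } from '@graphql-tools/merge';

import bookSchema from './book/schema/book.gql';
import bookResolver from './book/resolvers/book';
// Hypothetical second module, just to show the merge.
import authorSchema from './author/schema/author.gql';
import authorResolver from './author/resolvers/author';

// Schema merging: type definitions and resolvers are combined first, then compiled
// into a single executable schema with one execution layer.
export const schema = makeExecutableSchema({
  typeDefs: mergeTypeDefs([bookSchema, authorSchema]),
  resolvers: mergeResolvers([bookResolver, authorResolver]),
});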
