What is the best way to create a dictionary class in Kotlin (Spring)?

I'm trying to figure out the most efficient way to store a map of dictionary values in a Spring bean. The bean should return the message stored in that map for a given code. For example, a dictionary with about 11 elements, like:
[
  { 1: "critical case" },
  { 2: "common processing error" },
  ...
]
and a bean
@Component
class ErrorAnalyzer {
    fun aggregateErrorDescriptions(errors: List<Int>): String {
        // need to get all code descriptions from the dictionary by error codes
    }
}
I assume that it is a bad approach to store my dictionary as a static variable in the companion object of that bean, like:
companion object {
    private val dictionary = mapOf(
        1 to "critical case",
        2 to "common processing error"
    )
}
Perhaps a lazy-loaded map sounds better, but I'm not sure that it is much more efficient:
val lazyValue: Map<Int, String> by lazy {
    mapOf(
        1 to "critical case",
        2 to "common processing error"
    )
}
Also, the lazy map could be loaded from a resource JSON file instead, but that seems like overkill for just 11 elements... And I have a feeling that no elements will be added to this map in the future...
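For reference, loading the map from a classpath JSON resource could look roughly like this (a sketch assuming Jackson with its Kotlin module on the classpath, and a hypothetical dictionary.json file):

import com.fasterxml.jackson.module.kotlin.jacksonObjectMapper
import com.fasterxml.jackson.module.kotlin.readValue
import org.springframework.core.io.ClassPathResource

// dictionary.json (hypothetical) would contain:
// {"1": "critical case", "2": "common processing error"}
val lazyValue: Map<Int, String> by lazy {
    jacksonObjectMapper().readValue<Map<Int, String>>(
        ClassPathResource("dictionary.json").inputStream
    )
}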
What is preferable / how would you go about this?

Well, maybe I didn't understand the question right, but if you need this map globally, then just create it as a bean, something like this:
@Bean
fun nameOfADictionary(): Map<Int, String> = mapOf(...)
in a @Configuration class and then inject it where you need it in the context.
Otherwise, if you only need it somewhere locally, then create the map as a private object in the file/class where it'll be used, and that's it. You don't even need the companion object feature here.
Also, if we're talking about 11 elements in a collection, in my opinion you don't need to worry about performance issues; just use the KISS principle.
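Put together, a minimal sketch of the bean approach might look like this (the bean name errorDictionary and the joinToString separator are illustrative placeholders; the entries come from the question):

import org.springframework.context.annotation.Bean
import org.springframework.context.annotation.Configuration
import org.springframework.stereotype.Component

@Configuration
class DictionaryConfiguration {
    // Exposes the dictionary as a regular bean so any component can inject it.
    @Bean
    fun errorDictionary(): Map<Int, String> = mapOf(
        1 to "critical case",
        2 to "common processing error"
    )
}

@Component
class ErrorAnalyzer(
    // Resolved by type; add @Qualifier("errorDictionary") if several map beans exist.
    private val errorDictionary: Map<Int, String>
) {
    // Looks up each code and joins the known descriptions; unknown codes are skipped.
    fun aggregateErrorDescriptions(errors: List<Int>): String =
        errors.mapNotNull { errorDictionary[it] }.joinToString("; ")
}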

Related

Question of best practice in Kotlin to save a data object in another data object

I'm looking for the best way in Kotlin (/ Spring Boot) to store ObjectB (and other values) inside ObjectA in my service:
fun create(objectARequest: ObjectARequest): ObjectA {
    val foundObjectB = objectBRepository.findById(objectARequest.objectBId)
    val objectA = ObjectA(
        id = UUID.randomUUID(),
        ...
        objectB = foundObjectB.get()
    )
    return objectARepository.save(objectA)
}
My question is related to this line objectB = foundObjectB.get():
This is the Java way (the Optional implementation comes from java.util), so is this the best approach, or should I simply go with objectB = foundObjectB? What is the best practice in Kotlin here?
Optional is a Java workaround to deal with null. Kotlin has a much better way: explicitly declared nullability. Having said that, I would suggest replacing the Optional with either null or the object itself, which you can do with orElse(null). By doing so you either have null or the object, and you are back in the Kotlin realm with all its benefits.
val foundObjectB: ObjectB? = objectBRepository.findById(objectARequest.objectBId).orElse(null)
Additionally, keep in mind that get() on an Optional throws NoSuchElementException if no object is present.
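Applied to the service above, a minimal sketch could look like this (how to react to a missing ObjectB, here an IllegalArgumentException, is an assumption on my part):

fun create(objectARequest: ObjectARequest): ObjectA {
    // Leave the Optional behind as early as possible.
    val foundObjectB: ObjectB? = objectBRepository.findById(objectARequest.objectBId).orElse(null)
    val objectB = foundObjectB
        ?: throw IllegalArgumentException("No ObjectB with id ${objectARequest.objectBId}")
    val objectA = ObjectA(
        id = UUID.randomUUID(),
        // ... other values from the request
        objectB = objectB
    )
    return objectARepository.save(objectA)
}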

How to ignore unknown enum values?

I'm wondering what would be the best way to ignore/discard unknown enum values in a GraphQL/Apollo server.
Let's say my GraphQL schema defines an array of enums, enum Service { Supermarket, TicketSales }, and it works fine now. But later on, another service I'm using adds some new values (e.g. Playground) that my client just doesn't support, and I would like to ignore them and return the supported values without an error.
What would be the best way to do this in GraphQL? My first idea was to make a directive that would read the supported values from the schema and ignore everything else, but after googling around I didn't find any good examples of how to do it. Can you point me in a direction to go about this?
If your resolver function will accept arbitrary strings, then you can use a custom scalar type, or just String.
"""
The type of a service. `Supermarket` means..., and
`TicketSales` means...; any other value is ignored.
"""
scalar Service
GraphQL generally places responsibility on the client to conform to the server's expectations, rather than making the server try to support any request. There are a couple of places you can reasonably expect an enum value like this to appear:
enum Service { Supermarket, TicketSales }

type Query {
  inAReturnValue: Service!
  asAQueryParam(service: Service!): Node
}

type Mutation {
  asAMutationInput(service: Service!): Node
}
In particular it may not make sense to tell the server "make the type of this object be a playground" if the server just doesn't understand that. Conversely, if the server knows about "playground", it could return it in cases the client may not expect. Having an enum here makes it explicit what the server knows about. The server has said what it supports and it's the client's responsibility to cooperate.
Note that it's possible for the client to find out whether the server supports playgrounds, if it's an enum value, and this might help inform its behavior.
query GetServiceTypes {
  __type(name: "Service") {
    enumValues { name }
  }
}
After playing around I found something that I can use to get around my original problem, so I will post it here in case somebody else is wondering the same thing.
In short, my original problem was that I'm receiving several different "available services" kinds of string arrays from other services, and I was thinking of mapping them to an enum for better TypeScript support etc. The problem was that if I get some unknown value from another service, my GraphQL will fail.
So my original idea was to fix it with a directive, which I did eventually get working:
# In schema
directive @mapUnknownTo(value: String) on ENUM

enum SomeAttribute @mapUnknownTo(value: "__UNKNOWN__") {
  SomeAttribute1
  AnotherAttribute
  SomethingElse
  __UNKNOWN__
}
And the directive implementation is:
import { SchemaDirectiveVisitor } from 'graphql-tools';
import { GraphQLEnumType } from 'graphql';

export class MapUnknownToDirective extends SchemaDirectiveVisitor {
  visitEnum(type: GraphQLEnumType) {
    const { value = '__UNKNOWN__' } = this.args;
    // Index every value declared in the schema...
    const valueMap = type.getValues().reduce((map, v) => map.set(v.value, v.name), new Map<string, string>());
    // ...and serialize anything not declared to the fallback value.
    type.serialize = (v: string): string => valueMap.get(v) || value;
  }
}
So this will map all values not defined in the schema to some custom value, which is not exactly what I originally wanted, but at least it doesn't give an error, so it's okay-ish.
I'm still not 100% sure if directives are the way to go in cases like this, but at least it's one possible solution.

ConfigurationProperties loading list from YML

I'm trying to load configuration from YML. I can load a single value, and I can also load a list if it's given as comma-separated values, but I can't load a typical YML list.
Configuration Class
@Component
@PropertySource("classpath:routing.yml")
@ConfigurationProperties
class RoutingProperties {
    var angular = listOf("nothing")
    var value: String = ""
}
Working routing.yml
angular: /init, /home
value: Hello World
Not Working routing.yml
angular:
  - init
  - home
value: Hello World
Why can't I load the second version of the YML / do I have a syntax error?
ENV: Kotlin, Spring 2.0.0.M3
As @flyx says, @PropertySource doesn't work with YAML files. But in Spring you may override almost everything :)
@PropertySource has an additional parameter: factory. It's possible to create your own PropertySourceFactory based on DefaultPropertySourceFactory:
open class YamlPropertyLoaderFactory : DefaultPropertySourceFactory() {
    override fun createPropertySource(name: String?, resource: EncodedResource?): org.springframework.core.env.PropertySource<*> {
        if (resource == null)
            return super.createPropertySource(name, resource)
        return YamlPropertySourceLoader().load(resource.resource.filename, resource.resource, null)
    }
}
Then use this factory in the @PropertySource annotation:
@PropertySource("classpath:/routing.yml", factory = YamlPropertyLoaderFactory::class)
The last thing you need is to initialize the angular variable with a mutable list.
Full code sample:
@Component
@PropertySource("classpath:/routing.yml", factory = YamlPropertyLoaderFactory::class)
@ConfigurationProperties
open class RoutingProperties {
    var angular = mutableListOf("nothing")
    var value: String = ""

    override fun toString(): String {
        return "RoutingProperties(angular=$angular, value='$value')"
    }
}
open class YamlPropertyLoaderFactory : DefaultPropertySourceFactory() {
    override fun createPropertySource(name: String?, resource: EncodedResource?): org.springframework.core.env.PropertySource<*> {
        if (resource == null)
            return super.createPropertySource(name, resource)
        return YamlPropertySourceLoader().load(resource.resource.filename, resource.resource, null)
    }
}
@SpringBootApplication
@EnableAutoConfiguration(exclude = arrayOf(DataSourceAutoConfiguration::class))
open class Application {
    companion object {
        @JvmStatic
        fun main(args: Array<String>) {
            val context = SpringApplication.run(Application::class.java, *args)
            val bean = context.getBean(RoutingProperties::class.java)
            println(bean)
        }
    }
}
Kinda old post, I know, but I'm on the very same topic right now.
As of now, it seems that @PropertySource does indeed work with YAML files, given the restriction that it only allows primitive types (it seems) and can't handle nested elements. I'm probably gonna dig a bit deeper and update my answer accordingly, but as of now the accepted answer seems like a functioning workaround.
Well, according to the docs, your YAML file will be rewritten into a properties file. The first YAML file becomes:
angular=/init, /home
value=Hello World
While the second one becomes:
angular[0]=init
angular[1]=home
value=Hello World
These are obviously two very different things and therefore behave differently.
Moreover, later in the docs, it is stated that YAML does not even work with @PropertySource:
24.6.4 YAML shortcomings
YAML files can't be loaded via the @PropertySource annotation. So in the case that you need to load values that way, you need to use a properties file.
That makes me kind of wonder why the first case works for you at all.
The docs say this about the generated …[index] properties:
To bind to properties like that using the Spring DataBinder utilities (which is what @ConfigurationProperties does), you need to have a property in the target bean of type java.util.List (or Set), and you either need to provide a setter or initialize it with a mutable value; e.g. this will bind to the properties above
So, let's have a look at the Kotlin docs: listOf returns a new read-only list of the given elements. So the list is not mutable as required by the docs, which I assume is why it doesn't work. Try using a mutable list (since I have never used Kotlin, I cannot give you working code). Also try to declare it as java.util.List if that's possible in Kotlin.
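In Kotlin, that suggestion would look something like this minimal sketch (untested; mutableListOf comes from the Kotlin standard library and is backed by a java.util.ArrayList at runtime, so it satisfies the java.util.List requirement):

@Component
@ConfigurationProperties
class RoutingProperties {
    // A mutable list (with a generated setter, since it's a var) lets the
    // DataBinder populate the generated angular[0], angular[1], ... properties.
    var angular: MutableList<String> = mutableListOf()
    var value: String = ""
}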

What's the best way to pass a huge collection to a Spring Batch Step?

Use case:
A one-time read of data set X (from a database) into a Collection C. [Collection size could be, say, 5000]
Use Collection C to process/enrich items in a Spring Batch Step (say enrichStep)
If C is much greater than what can be passed via ExecutionContext, how can we make it available in the ItemProcessor of the enrichStep?
In your enrichStep add a StepExecutionListener.beforeStep and load your huge collection into a HugeCollectionBeanHolder bean.
This way you will load the collection only once (when the step starts or restarts) and without persisting it into the execution context.
In your enrich processor, wire in the HugeCollectionBeanHolder to access the huge collection.
class HugeCollectionBeanHolder {
    Collection<Item> hugeCollection;

    void setHugeCollection(Collection<Item> c) { this.hugeCollection = c; }
    Collection<Item> getHugeCollection() { return this.hugeCollection; }
}

class MyProcessor implements ItemProcessor<Input,Output> {
    HugeCollectionBeanHolder hcbh;

    void setHugeCollectionBeanHolder(HugeCollectionBeanHolder bean) { this.hcbh = bean; }
    // other methods...
}
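For illustration, the beforeStep loading in Kotlin might look roughly like this (ItemRepository and its findAll are hypothetical stand-ins for however data set X is actually read; they are not part of the original answer):

import org.springframework.batch.core.ExitStatus
import org.springframework.batch.core.StepExecution
import org.springframework.batch.core.StepExecutionListener

class CollectionLoadingListener(
    private val holder: HugeCollectionBeanHolder,
    private val repository: ItemRepository // hypothetical source of data set X
) : StepExecutionListener {
    override fun beforeStep(stepExecution: StepExecution) {
        // Runs once per step (re)start: load the data and park it in the holder bean.
        holder.hugeCollection = repository.findAll()
    }

    override fun afterStep(stepExecution: StepExecution): ExitStatus? = null
}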
You can also look at Spring Batch: what is the best way to use, the data retrieved in one TaskletStep, in the processing of another step

MongoDB - override default Serializer for a C# primitive type

I'd like to change the representation of C# Doubles to rounded Int64 with a four decimal place shift in the serialization C# Driver's stack for MongoDB. In other words, store (Double)29.99 as (Int64)299900
I'd like this to be transparent to my app. I've had a look at custom serializers but I don't want to override everything and then switch on the Type with fallback to the default, as that's a bit messy.
I can see that RegisterSerializer() won't let me add one for an existing type, and that BsonDefaultSerializationProvider has a static list of primitive serializers and it's marked as internal with private members so I can't easily subclass.
I can also see that it's possible to RepresentAs Int64 for Doubles, but this is a cast not a conversion. I need essentially a cast AND a conversion in both serialization directions.
I wish I could just give the default serializer a custom serializer to override one of its own, but that would mean a dirty hack.
Am I missing a really easy way?
You can definitely do this, you just have to get the timing right. When the driver starts up, there are no serializers registered. When it needs a serializer, it looks it up in the dictionary where it keeps track of the serializers it knows about (i.e. the ones that have been registered). Only if it can't find one in the dictionary does it start figuring out where to get one (including calling the serialization providers), and if it finds one it registers it.
The limitation in RegisterSerializer is there so that you can't replace an existing serializer that has already been used. But that doesn't mean you can't register your own if you do it early enough.
However, keep in mind that registering a serializer is a global operation, so if you register a custom serializer for double it will be used for all doubles, which could lead to unexpected results!
Anyway, you could write the custom serializer something like this:
public class CustomDoubleSerializer : BsonBaseSerializer
{
    public override object Deserialize(BsonReader bsonReader, Type nominalType, Type actualType, IBsonSerializationOptions options)
    {
        // Reverse the four-decimal-place shift: (Int64)299900 -> (Double)29.99
        var rep = bsonReader.ReadInt64();
        return rep / 10000.0;
    }

    public override void Serialize(BsonWriter bsonWriter, Type nominalType, object value, IBsonSerializationOptions options)
    {
        // Apply the four-decimal-place shift with rounding: (Double)29.99 -> (Int64)299900
        var rep = (long)Math.Round((double)value * 10000);
        bsonWriter.WriteInt64(rep);
    }
}
And register it like this:
BsonSerializer.RegisterSerializer(typeof(double), new CustomDoubleSerializer());
You could test it using the following class:
public class C
{
    public int Id;
    public double X;
}
and this code:
BsonSerializer.RegisterSerializer(typeof(double), new CustomDoubleSerializer());
var c = new C { Id = 1, X = 29.99 };
var json = c.ToJson();
Console.WriteLine(json);
var r = BsonSerializer.Deserialize<C>(json);
Console.WriteLine(r.X);
You can also use your own serialization provider to tell Mongo which serializer to use for certain types, which I ended up doing to mitigate some of the timing issues mentioned above when trying to override existing serializers. Here's an example of a serialization provider that overrides how decimals are serialized:
public class CustomSerializationProvider : IBsonSerializationProvider
{
    public IBsonSerializer GetSerializer(Type type)
    {
        if (type == typeof(decimal)) return new DecimalSerializer(BsonType.Decimal128);
        return null; // falls back to Mongo defaults
    }
}
If you return null from your custom serialization provider, it will fall back to using Mongo's default serialization provider.
Once you've written your provider, you just need to register it:
BsonSerializer.RegisterSerializationProvider(new CustomSerializationProvider());
I looked through the latest iteration of the driver's code and checked if there's some sort of backdoor to set custom serializers. I am afraid there's none; you should open an issue in the project's bug tracker if you think this needs to be looked at for future iterations of the driver (https://jira.mongodb.org/).
Personally, I'd open a ticket -- and if a quick workaround is necessary or required, I'd subclass DoubleSerializer, implement the new behavior, and then use Reflection to inject it into either MongoDB.Bson.Serialization.Serializers.DoubleSerializer.__instance or MongoDB.Bson.Serialization.BsonDefaultSerializationProvider.__serializers.
