Mandatory header for all APIs in OpenAPI 3.0 - Spring Boot

I am using OpenAPI 3.0 with Spring Boot 5 and therefore have no configuration YAML. I have a header that contains the client identification ID (this is not an authentication header). I want to make it a mandatory header parameter. I added the OpenAPI configuration below:
@Configuration
public class OpenAPIConfiguration {

    @Bean
    public OpenAPI customOpenAPI() {
        return new OpenAPI()
                .components(new Components()
                        .addParameters("myCustomHeader", new Parameter()
                                .in("header")
                                .schema(new StringSchema())
                                .required(true)
                                .description("myCustomHeader")
                                .name("myCustomHeader")))
                .info(new Info()
                        .title("My Rest Application")
                        .version("1.2.26"));
    }
}
However, the Swagger UI does not show the required parameter on any API. Can someone help me understand what I am doing wrong?

Adding the parameter definition to a custom OpenAPI bean will not work because the parameter won't get propagated to the operation definitions. You can achieve your goal using an OperationCustomizer:
@Bean
public OperationCustomizer customize() {
    return (operation, handlerMethod) -> operation.addParametersItem(
            new Parameter()
                    .in("header")
                    .required(true)
                    .description("myCustomHeader")
                    .name("myCustomHeader"));
}
The OperationCustomizer interface was introduced in springdoc-openapi 1.2.22. In earlier versions you would need to use OpenApiCustomiser instead:
@Component
public class MyOpenApiCustomizer implements OpenApiCustomiser {

    private static final List<Function<PathItem, Operation>> OPERATION_GETTERS = Arrays.asList(
            PathItem::getGet, PathItem::getPost, PathItem::getDelete, PathItem::getHead,
            PathItem::getOptions, PathItem::getPatch, PathItem::getPut);

    private Stream<Operation> getOperations(PathItem pathItem) {
        return OPERATION_GETTERS.stream()
                .map(getter -> getter.apply(pathItem))
                .filter(Objects::nonNull);
    }

    @Override
    public void customise(OpenAPI openApi) {
        openApi.getPaths().values().stream()
                .flatMap(this::getOperations)
                .forEach(this::customize);
    }

    private void customize(Operation operation) {
        operation.addParametersItem(
                new Parameter()
                        .in("header")
                        .required(true)
                        .description("myCustomHeader")
                        .name("myCustomHeader"));
    }
}

Related

WebServerFactoryCustomizer is not hit in a Spring Boot WebFlux app

Following "Configure the Web Server", I added a NettyWebServerFactoryCustomizer:
@Configuration
public class NettyWebServerFactoryCustomizer implements WebServerFactoryCustomizer<NettyReactiveWebServerFactory> {

    @Override
    public void customize(NettyReactiveWebServerFactory factory) {
        factory.addServerCustomizers(httpServer -> httpServer
                .wiretap(true)
                .metrics(true, s -> s)
                .doOnConnection(conn ->
                        conn.addHandlerFirst(new ReadTimeoutHandler(50, TimeUnit.MILLISECONDS))));
    }
}
I have two questions:
1. When I run the app, the customize function is never hit. What am I missing?
2. My purpose is to enable the Netty metrics, but I can't find any documentation about configuring the metrics in the application.yml file, so I added the NettyWebServerFactoryCustomizer. The second parameter of .metrics(true, s->s) is a uriTagValue; are there any examples of how to pass in a value? I just use s->s because I referred to this, but that probably can't avoid a cardinality explosion. Is there something like ServerWebExchange.getAttribute(HandlerMapping.BEST_MATCHING_PATTERN_ATTRIBUTE) that would simply give us the templated URL?
I found a workaround for question 1: define a bean instead of implementing WebServerFactoryCustomizer.
@Bean
public ReactiveWebServerFactory reactiveWebServerFactory() {
    NettyReactiveWebServerFactory factory = new NettyReactiveWebServerFactory();
    factory.addServerCustomizers(builder -> builder
            .wiretap(true)
            .metrics(true, s -> s)
            .accessLog(true)
            .doOnConnection(conn ->
                    conn.addHandlerFirst(new ReadTimeoutHandler(50, TimeUnit.MILLISECONDS))));
    return factory;
}
About your question 2 ("The second parameter of .metrics(true, s->s) is a uriTagValue; are there any examples of how to pass in a value?"):
private static final Pattern URI_TEMPLATE_PATTERN = Pattern.compile("/test/.*");

@Bean
public ReactiveWebServerFactory reactiveWebServerFactory() {
    NettyReactiveWebServerFactory factory = new NettyReactiveWebServerFactory();
    factory.addServerCustomizers(builder -> builder
            .wiretap(true)
            .metrics(true, uriValue -> {
                // Collapse all /test/** URIs onto one tag value to limit tag cardinality
                Matcher matcher = URI_TEMPLATE_PATTERN.matcher(uriValue);
                if (matcher.matches()) {
                    return "/test/";
                }
                return "/";
            })
            .accessLog(true)
            .doOnConnection(conn ->
                    conn.addHandlerFirst(new ReadTimeoutHandler(50, TimeUnit.MILLISECONDS))));
    return factory;
}
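If several URI families need to be collapsed, the same idea extends to a small lookup from patterns to stable tag values, which is then passed as the uriTagValue function. This is only a sketch: the pattern/template pairs and the fallback bucket are illustrative, and MyConfig stands for whatever class holds these helpers.
// Illustrative mapping of URI patterns to bounded tag values (extend as needed).
private static final Map<Pattern, String> URI_TEMPLATES = Map.of(
        Pattern.compile("/test/.*"), "/test/{id}",
        Pattern.compile("/orders/.*"), "/orders/{orderId}");

private static String uriTagValue(String uri) {
    return URI_TEMPLATES.entrySet().stream()
            .filter(entry -> entry.getKey().matcher(uri).matches())
            .map(Map.Entry::getValue)
            .findFirst()
            .orElse("/other"); // everything else falls into one bucket, keeping cardinality bounded
}

// Usage inside the server customizer: .metrics(true, MyConfig::uriTagValue)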

ReadOnlyKeyValueStore from a KTable using spring-kafka

I am migrating a Kafka Streams implementation that uses the plain Kafka APIs to spring-kafka, as it is incorporated in a Spring Boot application.
Everything works fine: the stream, GlobalKTable, and branching that I have all work perfectly, but I am having a hard time incorporating a ReadOnlyKeyValueStore. Based on the spring-kafka documentation here: https://docs.spring.io/spring-kafka/docs/2.6.10/reference/html/#streams-spring
It says:
If you need to perform some KafkaStreams operations directly, you can
access that internal KafkaStreams instance by using
StreamsBuilderFactoryBean.getKafkaStreams(). You can autowire
StreamsBuilderFactoryBean bean by type, but you should be sure to use
the full type in the bean definition.
Based on that, I tried to incorporate it into my example as in the following fragments:
@Bean(name = KafkaStreamsDefaultConfiguration.DEFAULT_STREAMS_CONFIG_BEAN_NAME)
public KafkaStreamsConfiguration defaultKafkaStreamsConfig() {
    Map<String, Object> props = defaultStreamsConfigs();
    props.put(StreamsConfig.APPLICATION_ID_CONFIG, "quote-stream");
    props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());
    props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, SpecificAvroSerde.class);
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "stock-quotes-stream-group");
    return new KafkaStreamsConfiguration(props);
}

@Bean(name = KafkaStreamsDefaultConfiguration.DEFAULT_STREAMS_BUILDER_BEAN_NAME)
public StreamsBuilderFactoryBean defaultKafkaStreamsBuilder(KafkaStreamsConfiguration defaultKafkaStreamsConfig) {
    return new StreamsBuilderFactoryBean(defaultKafkaStreamsConfig);
}
...
final GlobalKTable<String, LeveragePrice> leverageBySymbolGKTable = streamsBuilder
.globalTable(KafkaConfiguration.LEVERAGE_PRICE_TOPIC,
Materialized.<String, LeveragePrice, KeyValueStore<Bytes, byte[]>>as("leverage-by-symbol-table")
.withKeySerde(Serdes.String())
.withValueSerde(leveragePriceSerde));
leveragePriceView = myKStreamsBuilder.getKafkaStreams().store("leverage-by-symbol-table", QueryableStoreTypes.keyValueStore());
But adding the StreamsBuilderFactoryBean definition (which seems to be needed to get a reference to KafkaStreams) causes an error:
The bean 'defaultKafkaStreamsBuilder', defined in class path resource [com/resona/springkafkastream/repository/KafkaConfiguration.class], could not be registered. A bean with that name has already been defined in class path resource [org/springframework/kafka/annotation/KafkaStreamsDefaultConfiguration.class] and overriding is disabled.
The issue is that I don't want to control the lifecycle of the stream, which is what I get with the plain Kafka APIs. I would like to get a reference to the default, Spring-managed one, but whenever I try to expose the bean it gives the error above. What is the correct approach to this with spring-kafka?
P.S. - I am not interested in solutions using spring-cloud-stream; I am looking for a spring-kafka implementation.
You don't need to define any new beans; something like this should work...
spring.application.name=quote-stream
spring.kafka.streams.properties.default.key.serde=org.apache.kafka.common.serialization.Serdes$StringSerde
spring.kafka.streams.properties.default.value.serde=org.apache.kafka.common.serialization.Serdes$StringSerde
@SpringBootApplication
@EnableKafkaStreams
public class So69669791Application {

    public static void main(String[] args) {
        SpringApplication.run(So69669791Application.class, args);
    }

    @Bean
    GlobalKTable<String, String> leverageBySymbolGKTable(StreamsBuilder sb) {
        return sb.globalTable("gkTopic",
                Materialized.<String, String, KeyValueStore<Bytes, byte[]>>as("leverage-by-symbol-table"));
    }

    private ReadOnlyKeyValueStore<String, String> leveragePriceView;

    @Bean
    StreamsBuilderFactoryBean.Listener afterStart(StreamsBuilderFactoryBean sbfb,
            GlobalKTable<String, String> leverageBySymbolGKTable) {

        StreamsBuilderFactoryBean.Listener listener = new StreamsBuilderFactoryBean.Listener() {

            @Override
            public void streamsAdded(String id, KafkaStreams streams) {
                leveragePriceView = streams.store("leverage-by-symbol-table", QueryableStoreTypes.keyValueStore());
            }
        };
        sbfb.addListener(listener);
        return listener;
    }

    @Bean
    KStream<String, String> stream(StreamsBuilder builder) {
        KStream<String, String> stream = builder.stream("someTopic");
        stream.to("otherTopic");
        return stream;
    }
}
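Once streamsAdded has fired, leveragePriceView can be queried like any other ReadOnlyKeyValueStore. A minimal usage sketch, assuming the field is exposed through a getter on the application class; the controller and endpoint path are illustrative, not part of the original answer:
@RestController
public class LeverageController {

    private final So69669791Application streamsApp;

    public LeverageController(So69669791Application streamsApp) {
        this.streamsApp = streamsApp;
    }

    // Returns the latest value for the given key, or null while the store is not ready yet
    @GetMapping("/leverage/{symbol}")
    public String leverageFor(@PathVariable String symbol) {
        ReadOnlyKeyValueStore<String, String> store = streamsApp.getLeveragePriceView(); // hypothetical getter
        return store == null ? null : store.get(symbol);
    }
}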

Springdoc GroupedOpenApi not following global parameters set with OperationCustomizer

When using GroupedOpenApi to define an API group, the common set of parameters that is added to every endpoint is not present in the parameter list.
Below is the relevant code.
@Bean
public GroupedOpenApi v1Apis() {
    return GroupedOpenApi.builder().group("v1 APIs")
            // hide all v2 APIs
            .pathsToExclude("/api/v2/**", "/v2/**")
            // show all v1 APIs
            .pathsToMatch("/api/v1/**", "/v1/**")
            .build();
}
And the class that adds the standard headers to all endpoints:
@Component
public class GlobalHeaderAdder implements OperationCustomizer {

    @Override
    public Operation customize(Operation operation, HandlerMethod handlerMethod) {
        operation.addParametersItem(new Parameter().$ref("#/components/parameters/ClientID"));
        operation.addSecurityItem(new SecurityRequirement().addList("Authorization"));
        List<Parameter> parameterList = operation.getParameters();
        if (parameterList != null && !parameterList.isEmpty()) {
            Collections.rotate(parameterList, 1);
        }
        return operation;
    }
}
Actual Output
Expected Output
Workaround
Adding the paths to be included/excluded in the application properties file avoids the problem, but a fix at the code level would be much appreciated.
Attach the required OperationCustomizer object while building the API group:
@Bean
public GroupedOpenApi v1Apis(GlobalHeaderAdder globalHeaderAdder) {
    return GroupedOpenApi.builder().group("v1 APIs")
            // hide all v2 APIs
            .pathsToExclude("/api/v2/**", "/v2/**")
            // show all v1 APIs
            .pathsToMatch("/api/v1/**", "/v1/**")
            .addOperationCustomizer(globalHeaderAdder)
            .build();
}
Edit: answer updated with reference to "@Value not providing values from application properties Spring Boot".
An alternative way to add and load an OperationCustomizer when you declare your OpenAPI groups via properties (springdoc.group-configs[0].group=) instead of defining them in Java code with GroupedOpenApi.builder():
@Bean
public Map<String, GroupedOpenApi> configureGroupedsOpenApi(Map<String, GroupedOpenApi> groupedsOpenApi,
        OperationCustomizer operationCustomizer) {
    groupedsOpenApi.forEach((id, groupedOpenApi) -> groupedOpenApi.getOperationCustomizers()
            .add(operationCustomizer));
    return groupedsOpenApi;
}
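For reference, the property-based declaration of the "v1 APIs" group might look like the following sketch; the group key comes from the answer above, and paths-to-match/paths-to-exclude mirror the Java builder calls:
springdoc.group-configs[0].group=v1 APIs
springdoc.group-configs[0].paths-to-exclude=/api/v2/**,/v2/**
springdoc.group-configs[0].paths-to-match=/api/v1/**,/v1/**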

How to consume a Spring Data REST service with Java?

I have the following Spring Boot + Spring Data REST repository:
@RepositoryRestResource(collectionResourceRel = "dto", path = "produtos")
public interface ProdutoRepository extends CrudRepository<Produto, Integer> {

    @Query("SELECT p FROM Produto p where descricao LIKE CONCAT(UPPER(:like),'%')")
    List<Produto> findByLike(@Param("like") String like);
}
I also have a Java client that accesses this method (this is my example of doing it):
String url = "http://localhost:8080/produtos/search/findByLike?like={like}";
RestTemplate t = new RestTemplate();
ProdutoDto resp = t.getForObject(url, ProdutoDto.class, txtLoc.getText());
ProdutoDto (this one is not totally necessary):
public class ProdutoDto extends HalDto<Produto> {}
HalDto:
public class HalDto<T extends ResourceSupport> extends ResourceSupport {

    @JsonProperty("_embedded")
    private EmbeddedDto<T> embedded;

    public EmbeddedDto<T> getEmbedded() {
        return embedded;
    }

    public void setEmbedded(EmbeddedDto<T> embedded) {
        this.embedded = embedded;
    }
}
EmbeddedDto:
public class EmbeddedDto<T> {

    @JsonProperty("dto")
    private List<T> dtoList;

    public List<T> getDtoList() {
        return dtoList;
    }

    public void setDto(List<T> dtoList) {
        this.dtoList = dtoList;
    }
}
Those classes are necessary (I think) because Spring Data REST returns data in the HAL format (https://en.wikipedia.org/wiki/Hypertext_Application_Language).
Note: Produto must extend ResourceSupport.
Caveats: every collectionResourceRel must be named "dto", and this only works for collections (may be adjusted).
Is this the proper way to do this?
I have googled around and found plenty of examples for the server side, but almost nothing on building clients.
Thanks.
This is a solution that I have found, and it seems to work well.
First, set up your RestTemplate so that it expects JSON/HAL and knows what to do with it:
@Bean
public RestTemplate restTemplate() {
    ObjectMapper objectMapper = new ObjectMapper();
    objectMapper.registerModule(new Jackson2HalModule());
    objectMapper.configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false);

    MappingJackson2HttpMessageConverter messageConverter = new MappingJackson2HttpMessageConverter();
    messageConverter.setObjectMapper(objectMapper);
    messageConverter.setSupportedMediaTypes(Arrays.asList(MediaTypes.HAL_JSON, MediaType.APPLICATION_JSON_UTF8));

    return new RestTemplate(Arrays.asList(messageConverter));
}
Then you can use the exchange method of the RestTemplate to specify that you want the result as ResponseEntity<PagedResources<Producto>>:
ResponseEntity<PagedResources<Producto>> resultResponse = restTemplate.exchange(uri, HttpMethod.GET,
        HttpEntity.EMPTY, new ParameterizedTypeReference<PagedResources<Producto>>() {});

if (resultResponse.getStatusCode() == HttpStatus.OK) {
    Collection<Producto> results = resultResponse.getBody().getContent();
    log.info("{} results obtained", results.size());
}
You can obtain the restTemplate by either calling the restTemplate() method defined above or by injecting (autowiring) it.
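Putting the two pieces together for the question's search endpoint, here is a minimal sketch. Since findByLike returns a plain List rather than a Page, Resources<Produto> (Spring HATEOAS) is assumed as the response type instead of PagedResources; the URL and the "abc" search term are illustrative:
String url = "http://localhost:8080/produtos/search/findByLike?like={like}";

// The HAL-aware RestTemplate from the bean above maps _embedded into the content collection.
ResponseEntity<Resources<Produto>> response = restTemplate.exchange(
        url, HttpMethod.GET, HttpEntity.EMPTY,
        new ParameterizedTypeReference<Resources<Produto>>() {}, "abc");

Collection<Produto> produtos = response.getBody().getContent();
produtos.forEach(System.out::println);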

How to read x-death header of a RabbitMQ dead-lettered message using Spring Boot?

I am trying to implement re-routing of dead-lettered messages as described in this answer. I am using Spring config. I have no idea how to read the headers to get the original routing key and original queue. The following is my config:
@Configuration
public class NotifEngineRabbitMQConfig {

    @Bean
    public MessageHandler handler() {
        return new MessageHandler();
    }

    @Bean
    public Jackson2JsonMessageConverter messageConverter() {
        return new Jackson2JsonMessageConverter();
    }

    @Bean
    public MessageListenerAdapter messageListenerAdapter() {
        return new MessageListenerAdapter(handler(), messageConverter());
    }

    /**
     * Listens for incoming messages.
     * Allows multiple queues to be listened to.
     */
    @Bean
    public SimpleMessageListenerContainer simpleMessageListenerContainer() {
        SimpleMessageListenerContainer container = new SimpleMessageListenerContainer();
        container.addQueueNames(QUEUE_TO_LISTEN_TO.split(","));
        container.setMessageListener(messageListenerAdapter());
        container.setConnectionFactory(rabbitConnectionFactory());
        container.setDefaultRequeueRejected(false);
        return container;
    }

    @Bean
    public ConnectionFactory rabbitConnectionFactory() {
        CachingConnectionFactory factory = new CachingConnectionFactory(HOST);
        factory.setUsername(USERNAME);
        factory.setPassword(PASSWORD);
        return factory;
    }
}
The headers are not available using "old"-style POJO messaging (with a MessageListenerAdapter). You need to implement MessageListener, which gives you access to the headers.
However, you will need to invoke the converter yourself in that case and, if you are using request/reply messaging, you lose the reply mechanism within the adapter and have to send the reply yourself.
Alternatively, you can use a custom message converter and "enhance" the converted object with the header after invoking the standard converter.
Consider instead using the newer-style POJO messaging with @RabbitListener - it gives you access to the headers and has request/reply capability.
Here's an example:
@SpringBootApplication
public class So37581560Application {

    public static void main(String[] args) {
        SpringApplication.run(So37581560Application.class, args);
    }

    @Bean
    public FooListener fooListener() {
        return new FooListener();
    }

    public static class FooListener {

        @RabbitListener(queues = "foo")
        public void pojoListener(String body,
                @Header(required = false, name = "x-death") List<String> xDeath) {
            System.out.println(body + ":" + (xDeath == null ? "" : xDeath));
        }
    }
}
Result:
Foo:[{reason=expired, count=1, exchange=, time=Thu Jun 02 08:44:19 EDT 2016, routing-keys=[bar], queue=bar}]
Gary's answer is the right one. Just a little detail: the type of xDeath is better declared as List<Map<String, Object>> instead of List<String>. Then you can access any field with something like xDeath.get(0).get("count").
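A minimal sketch of the adjusted listener signature, reusing Gary's "foo" queue and the x-death keys visible in the output above:
@RabbitListener(queues = "foo")
public void pojoListener(String body,
        @Header(required = false, name = "x-death") List<Map<String, Object>> xDeath) {
    if (xDeath != null && !xDeath.isEmpty()) {
        Map<String, Object> lastDeath = xDeath.get(0);
        Object originalQueue = lastDeath.get("queue");           // queue the message was dead-lettered from
        Object originalRoutingKeys = lastDeath.get("routing-keys");
        Object count = lastDeath.get("count");
        System.out.println(body + " died " + count + " time(s) on " + originalQueue
                + " with routing keys " + originalRoutingKeys);
    }
}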
