Spring AMQP: Remove all bindings for a specific queue

Is there a way to remove all bindings for a specific queue using spring-amqp?
There's a workaround: first delete the queue, then redeclare it:
amqpAdmin.deleteQueue("testQueue");
amqpAdmin.declareQueue(new Queue("testQueue"));
but this is a pretty ugly solution and I'd like to avoid it.

You can use the REST API to list the bindings and amqpAdmin.removeBinding() for those you want to remove.
EDIT
Here's the code using a Java 8 Stream - you can do the same thing by iterating over the list if you are not using Java 8...
RabbitManagementTemplate rmt = new RabbitManagementTemplate("http://localhost:15672/api/", "guest", "guest");
rmt.getBindings().stream()
        .filter(b -> b.getDestination().equals("q1") && b.isDestinationQueue())
        .forEach(b -> {
            System.out.println("Deleting " + b);
            amqpAdmin.removeBinding(b);
        });
Result:
Deleting Binding [destination=q1, exchange=, routingKey=q1]
Deleting Binding [destination=q1, exchange=ex1, routingKey=foo]
Deleting Binding [destination=q1, exchange=ex2, routingKey=foo]
(when q1 was bound to the default exchange and 2 others).
The RabbitAdmin amqpAdmin is used to do the deletes.
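For context, a minimal sketch of how such a RabbitAdmin could be wired up; the broker host and the use of CachingConnectionFactory are assumptions, not from the original post:
CachingConnectionFactory connectionFactory = new CachingConnectionFactory("localhost"); // assumed broker host
RabbitAdmin amqpAdmin = new RabbitAdmin(connectionFactory);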

Related

KTable & LogAndContinueExceptionHandler

I have a very simple consumer from which I create a materialized view. I have enabled validation on my value object (throwing ConstraintViolationException for invalid JSON data). When I receive a value for which validation fails, I expect the value to be logged and the consumer to read the next offset, since I have LogAndContinueExceptionHandler enabled.
However, LogAndContinueExceptionHandler is never invoked, and the consumer's state transitions from PENDING_ERROR to ERROR.
Code
@Bean
public Consumer<KTable<String, Pojo>> consume() {
    return values ->
        values
            .filter((key, value) -> Objects.nonNull(key))
            .mapValues(value -> value, Materialized.<String, Pojo>as(Stores.inMemoryKeyValueStore("POJO_STORE_NAME"))
                .withKeySerde(Serdes.String())
                .withValueSerde(SerdeUtil.pojoSerde())
                .withLoggingDisabled())
            .toStream()
            .peek((key, value) -> log.debug("Receiving Pojo from topic with key: {}, and UUID: {}", key, value == null ? 0 : value.getUuid()));
}
Why is LogAndContinueExceptionHandler not invoked in the case of a KTable?
Note: if the code is changed to use KStream, I see the logging and the invalid records being skipped, but with KTable I do not.
To handle exceptions not handled by Kafka Streams, use the KafkaStreams.setUncaughtExceptionHandler method with a StreamsUncaughtExceptionHandler implementation; it needs to return one of three available enumeration values:
REPLACE_THREAD
SHUTDOWN_CLIENT
SHUTDOWN_APPLICATION
and in your case REPLACE_THREAD is the best option, as you can see in KIP-671:
REPLACE_THREAD:
The current thread is shutdown and transits to state DEAD.
A new thread is started if the Kafka Streams client is in state RUNNING or REBALANCING.
For the Global thread this option will log an error and revert to shutting down the client until the option had been added
In Spring Kafka you can replace the default StreamsUncaughtExceptionHandler via the StreamsBuilderFactoryBean:
@Autowired
void setMyStreamsUncaughtExceptionHandler(StreamsBuilderFactoryBean streamsBuilderFactoryBean) {
    streamsBuilderFactoryBean.setStreamsUncaughtExceptionHandler(
            exception -> StreamsUncaughtExceptionHandler.StreamThreadExceptionResponse.REPLACE_THREAD);
}
I was able to solve the problem after looking at the logs carefully: the valueSerde for the Pojo was showing useNativeDecoding (the default being JsonSerde). Because of this, the DeserializationExceptionHandler wasn't invoked and the thread terminated.
The problem went away when I fixed the valueSerde in application.properties.
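For reference, a hypothetical application.properties sketch of that kind of fix, assuming the Spring Cloud Stream Kafka Streams binder with a functional binding named consume-in-0 (the binding name, property keys, and JsonSerde choice are assumptions, not from the original post):
# pin the value serde explicitly instead of relying on native decoding
spring.cloud.stream.kafka.streams.bindings.consume-in-0.consumer.valueSerde=org.springframework.kafka.support.serializer.JsonSerde
# keep the log-and-continue behaviour for deserialization errors
spring.cloud.stream.kafka.streams.binder.deserializationExceptionHandler=logAndContinue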

Jdbi transaction - multiple methods - Resources should be closed

Suppose I want to run two SQL queries in a transaction. I have code like the below:
jdbi.useHandle(handle -> handle.useTransaction(h -> {
    var id = handle.createUpdate("some query")
            .executeAndReturnGeneratedKeys()
            .mapTo(Long.class)
            .findOne()
            .orElseThrow(() -> new IllegalStateException("No id"));
    handle.createUpdate("INSERT INTO SOMETABLE (id) " +
            "VALUES (:id , xxx);")
            .bind("id", id)
            .execute();
}));
Now, as the complexity grows, I want to extract each update into its own method:
jdbi.useHandle(handle -> handle.useTransaction(h -> {
    var id = someQuery1(h);
    someQuery2(id, h);
}));
...with someQuery1 looking like:
private Long someQuery1(Handle handle) {
    return handle.createUpdate("some query")
            .executeAndReturnGeneratedKeys()
            .mapTo(Long.class)
            .findOne()
            .orElseThrow(() -> new IllegalStateException("No id"));
}
Now when I refactor to the latter, I get a SonarQube blocker bug on the someQuery1 handle.createUpdate, stating:
Resources should be closed
Connections, streams, files, and other classes that implement the Closeable interface or its super-interface, AutoCloseable, need to be closed after use...
I was under the impression that, because I'm using jdbi.useHandle (and passing the same handle to the called methods), a callback would be used and the handle immediately released upon return. As per the jdbi docs:
Both withHandle and useHandle open a temporary handle, call your
callback, and immediately release the handle when your callback
returns.
Any help / suggestions appreciated.
TIA
SonarQube doesn't know the specifics of the JDBI implementation and simply triggers on an AutoCloseable/Closeable not being closed. Just suppress the Sonar issue and/or file a feature request with the SonarQube team to improve this behavior.
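For example, a method-level suppression might look like this; the rule key java:S2095 ("Resources should be closed") is an assumption and may be squid:S2095 on older SonarQube versions:
@SuppressWarnings("java:S2095") // the handle is owned and closed by jdbi.useHandle(...)
private Long someQuery1(Handle handle) {
    return handle.createUpdate("some query")
            .executeAndReturnGeneratedKeys()
            .mapTo(Long.class)
            .findOne()
            .orElseThrow(() -> new IllegalStateException("No id"));
}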

Reactor changes in Spring Boot 2 M4

I have updated from Spring Boot 2.0.0.M3 to 2.0.0.M4, which updates Reactor from 3.1.0.M3 to 3.1.0.RC1. This causes my code to break in a number of places.
Mono.and() now returns Mono<Void>, where previously it returned Mono<Tuple>
This is also the case for Mono.when()
The following code compiles with the older versions, but not with the new version
Mono<String> m1 = Mono.just("A");
Mono<String> m2 = Mono.just("B");
Mono<String> andResult = m1.and(m2).map(t -> t.getT1() + t.getT2());
Mono<String> whenResult = Mono.when(m1, m2).map(t -> t.getT1() + t.getT2());
Have there been any changes to how this should work?
The when and and variants that produce a Tuple have been replaced with zip/zipWith, their exact equivalents in the Flux API, in order to align the two APIs. The remaining when and and methods, found only in Mono, are now purely about combining completion signals, discarding the onNext values (hence they return Mono<Void>).
I switched to Mono.zip(...):
mono1.and(mono2).map(...)
=>
Mono.zip(mono1, mono2).map(...)
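Putting the migration together, a small sketch of both forms under Reactor 3.1, reusing the question's m1/m2:
Mono<String> m1 = Mono.just("A");
Mono<String> m2 = Mono.just("B");

// tuple-producing combination now goes through zip
Mono<String> zipped = Mono.zip(m1, m2)
        .map(t -> t.getT1() + t.getT2()); // emits "AB"

// and/when now combine only the completion signals
Mono<Void> done = Mono.when(m1, m2);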

How to safely write to one file from many verticle instances in vert.x 3.2?

Instead of using a logger or database server I'd like to append information to one file from possibly many verticle instances.
There are versions of methods for writing asynchronously to a file.
Can I assume that vertx handles the synchronisation between the writes, so that they don't interfere, when using the versions of the methods marked as "async"?
There seems to be a rule that one can rely on vertx to provide all isolation between concurrent processing out of the box. But is that true in the case of file write access?
Could you please include a code snippet in the answer that shows how to open and write to one file from many verticle instances with the finest possible granularity, e.g. for logging requests.
I wouldn't recommend writing to a single file from many different "writers". For concurrent logging I would stick to the Single Writer principle.
Create a Verticle which subscribes to the Event Bus and listens for messages to be logged. Let's call this Verticle Logger; it listens on the address system.logger.
EventBus eb = vertx.eventBus();
eb.consumer("system.logger", message -> {
    // write to file
});
Verticles that want to log something send a message to the Logger Verticle:
eventBus.send("system.logger", "foobar");
Appending to an existing file works something like this (untested):
// writePos is an instance field of the Logger verticle, initialised to the size
// of the existing file (e.g. via vertx.fileSystem().props(...)), so that each
// write lands at the current end of the file
vertx.fileSystem().open("file.log", new OpenOptions(), result -> {
    if (result.succeeded()) {
        Buffer buff = Buffer.buffer(message); // the message body from the consumer above
        AsyncFile file = result.result();
        file.write(buff, writePos, ar -> {
            if (ar.succeeded()) {
                System.out.println("done");
            } else {
                System.err.println("write failed: " + ar.cause());
            }
        });
        writePos += buff.length();
    } else {
        System.err.println("open file failed " + result.cause());
    }
});
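To tie this together, a hypothetical deployment in which a single Logger verticle serves many workers; the class names and the instance count are assumptions:
// one writer, many senders: only LoggerVerticle touches the file
vertx.deployVerticle("com.example.LoggerVerticle");
vertx.deployVerticle("com.example.WorkerVerticle",
        new DeploymentOptions().setInstances(8));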

Boost event logger

http://boost-log.sourceforge.net/libs/log/doc/html/log/detailed/sink_backends.html
On this page there is sample code to initialize the Boost Windows event log backend, but when I run it, it fails with a memory error on the first line.
void init_logging()
{
    // Create an event log sink
    boost::shared_ptr< sink_t > sink(new sink_t());
    sink->set_formatter
    (
        expr::format("%1%: [%2%] - %3%")
            % expr::attr< unsigned int >("LineID")
            % expr::attr< boost::posix_time::ptime >("TimeStamp")
            % expr::smessage
    );
    // We'll have to map our custom levels to the event log event types
    sinks::event_log::custom_event_type_mapping< severity_level > mapping("Severity");
    mapping[normal] = sinks::event_log::info;
    mapping[warning] = sinks::event_log::warning;
    mapping[error] = sinks::event_log::error;
    sink->locked_backend()->set_event_type_mapper(mapping);
    // Add the sink to the core
    logging::core::get()->add_sink(sink);
}
It fails when creating the sink_t object:
boost::shared_ptr< sink_t > sink(new sink_t());
Any idea what the problem is and how I can solve it?
Also, if you know any other source where I can learn about using Boost event logging, please share it.
No answer yet... but I have found a solution in a blog post by Timo Geusch:
http://www.lonecpluspluscoder.com/2011/01/boost-log-preventing-the-unhandled-exception-in-windows-7-when-attempting-to-log-to-the-event-log/
The reason for this problem was that the registry key under HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\EventLog\ that the application needs access to must be present (if you're not an administrator, you don't have the privileges to create it), and the user running the application needs to be able to both read and write to it.
