How to test a state stored aggregate that doesn't produce events - spring-boot

I want to test a state-stored aggregate using AggregateTestFixture. However, I get an AggregateNotFoundException with the message "No 'given' events were configured for this aggregate, nor have any events been stored."
I change the state of the aggregate in command handlers and apply no events since I don't want my domain entry table to grow unnecessarily.
Here is my external command handler for the aggregate:
open class AllocationCommandHandler constructor(
    private val repository: Repository<Allocation>,
) {
    @CommandHandler
    fun on(cmd: CreateAllocation) {
        this.repository.newInstance {
            Allocation(
                cmd.allocationId
            )
        }
    }

    @CommandHandler
    fun on(cmd: CompleteAllocation) {
        this.load(cmd.allocationId).invoke { it.complete() }
    }

    private fun load(allocationId: AllocationId): Aggregate<Allocation> =
        repository.load(allocationId)
}
Here is the aggregate:
@Entity
@Aggregate
@Revision("1.0")
final class Allocation constructor() {

    @AggregateIdentifier
    @Id
    lateinit var allocationId: AllocationId
        private set

    var status: AllocationStatusEnum = AllocationStatusEnum.IN_PROGRESS
        private set

    constructor(
        allocationId: AllocationId,
    ) : this() {
        this.allocationId = allocationId
        this.status = AllocationStatusEnum.IN_PROGRESS
    }

    fun complete() {
        if (this.status != AllocationStatusEnum.IN_PROGRESS) {
            throw IllegalArgumentException("cannot complete if not in progress")
        }
        this.status = AllocationStatusEnum.COMPLETED
        apply(
            AllocationCompleted(
                this.allocationId
            )
        )
    }
}
There is no event handler for the AllocationCompleted event in this aggregate, since it is listened to by another aggregate.
So here is the test code:
class AllocationTest {

    private lateinit var fixture: AggregateTestFixture<Allocation>

    @Before
    fun setUp() {
        fixture = AggregateTestFixture(Allocation::class.java).apply {
            registerAnnotatedCommandHandler(AllocationCommandHandler(repository))
        }
    }

    @Test
    fun `create allocation`() {
        fixture.givenNoPriorActivity()
            .`when`(CreateAllocation("1"))
            .expectSuccessfulHandlerExecution()
            .expectState {
                assertTrue(it.allocationId == "1")
            }
    }

    @Test
    fun `complete allocation`() {
        fixture.givenState { Allocation("1") }
            .`when`(CompleteAllocation("1"))
            .expectSuccessfulHandlerExecution()
            .expectState {
                assertTrue(it.status == AllocationStatusEnum.COMPLETED)
            }
    }
}
The create allocation test passes; I get the error on the complete allocation test.

The givenNoPriorActivity method is actually not intended to be used with state-stored aggregates. An adjustment has recently been made to the AggregateTestFixture to support this, but it will be released with Axon 4.6.0 (the most recent version at the moment is 4.5.1).
That, however, does not change the fact that I find it odd the complete allocation test fails. Using the givenState and expectState methods is the way to go. Maybe the Kotlin/Java combination is acting up; have you tried doing the same in pure Java, just to be certain?
In any case, the exception you share comes from the RecordingEventStore inside the AggregateTestFixture. It should only occur if an event sourcing repository is used under the hood by the fixture, since that is what reads events. What might be the culprit is the usage of givenNoPriorActivity. Please try replacing it with givenState(), providing an empty aggregate instance.
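For illustration, a minimal sketch of what the create allocation test could look like with givenNoPriorActivity swapped for givenState and an empty Allocation instance (assuming the no-arg constructor shown above is reachable from the test):

@Test
fun `create allocation`() {
    // givenState with an empty aggregate instead of givenNoPriorActivity
    fixture.givenState { Allocation() }
        .`when`(CreateAllocation("1"))
        .expectSuccessfulHandlerExecution()
        .expectState {
            assertTrue(it.allocationId == "1")
        }
}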


Clean Architecture: Cannot map PagingSource<Int, Entities> to PagingSource<RepositoryModel>

My requirement is to display the notes in pages using clean architecture, along with offline support.
I am using the Paging library for pagination. (The question references a clean architecture diagram for getting notes, which is not reproduced here.)
I have four layers: UI, UseCase, Repository, and DataSource. I am planning to abstract the internal implementation of the data source. For that, I need to map NotesEntities to another model before crossing the boundary.
@Dao
abstract class TimelineDao {
    @Transaction
    @Query("SELECT * FROM NotesEntities ORDER BY timeStamp DESC")
    abstract fun getPagingSourceForNotes(): PagingSource<Int, NotesEntities>
}
Current Implementation:
internal class NotesLocalDataSourceImpl @Inject constructor(
    private val notesDao: NotesDao
) : NotesLocalDataSource {

    override suspend fun insertNotes(notes: NotesEntities) {
        notesDao.insert(notes)
    }

    override fun getNotesPagingSource(): PagingSource<Int, NotesEntities> {
        return notesDao.getPagingSourceForNotes()
    }
}
Expected Implementation:
internal class NotesLocalDataSourceImpl @Inject constructor(
    private val notesDao: NotesDao
) : NotesLocalDataSource {

    override suspend fun insertNotes(notes: NotesRepositoryModel) {
        notesDao.insert(notes.toEntity())
    }

    override fun getNotesPagingSource(): PagingSource<Int, NotesRepositoryModel> {
        return notesDao.getPagingSourceForNotes().map { it.toNotesRepositoryModel() }
    }
}
I am having an issue mapping the PagingSource<Int, NotesEntities> to PagingSource<Int, NotesRepositoryModel>. As far as I have researched, there is no way to map
PagingSource<Int, NotesEntities> to PagingSource<Int, NotesRepositoryModel>.
Kindly let me know if there is a clean way or workaround to map the paging source objects. If anyone is sure there is no way as of now, please leave a comment as well.
Please note: I am aware that Paging allows transformation of PagingData. Below is a code snippet that gets notes in pages; it maps NotesEntities to NotesDomainModel. But I want to use NotesRepositoryModel instead of NotesEntities in the NotesRepositoryImpl, abstracting NotesEntities within the NotesLocalDataSourceImpl layer.
override fun getPaginatedNotes(): Flow<PagingData<NotesDomainModel>> {
    return Pager<Int, NotesEntities>(
        config = PagingConfig(pageSize = 10),
        remoteMediator = NotesRemoteMediator(localDataSource, remoteDataSource),
        pagingSourceFactory = { localDataSource.getNotesPagingSource() }
    ).flow.map { pagingData -> pagingData.map { it.toDomainModel() } }
}
The solution I have thought of:
Instead of using the PagingSource from the Dao directly, I thought of creating a custom PagingSource that calls the Dao and maps the NotesEntities to NotesRepositoryModel.
But then any updates to the DB will not be reflected in that PagingSource; I would need to handle the invalidation internally.
Kindly let me know your thoughts on this.
What about creating an implementation of PagingSource that forwards all of the calls to the original PagingSource and performs the mapping, something like this:
class MappingPagingSource<Key : Any, Value : Any, MappedValue : Any>(
    private val originalSource: PagingSource<Key, Value>,
    private val mapper: (Value) -> MappedValue,
) : PagingSource<Key, MappedValue>() {

    override fun getRefreshKey(state: PagingState<Key, MappedValue>): Key? {
        return originalSource.getRefreshKey(
            PagingState(
                pages = emptyList(),
                leadingPlaceholderCount = 0,
                anchorPosition = state.anchorPosition,
                config = state.config,
            )
        )
    }

    override suspend fun load(params: LoadParams<Key>): LoadResult<Key, MappedValue> {
        val originalResult = originalSource.load(params)
        return when (originalResult) {
            is LoadResult.Error -> LoadResult.Error(originalResult.throwable)
            is LoadResult.Invalid -> LoadResult.Invalid()
            is LoadResult.Page -> LoadResult.Page(
                data = originalResult.data.map(mapper),
                prevKey = originalResult.prevKey,
                nextKey = originalResult.nextKey,
            )
        }
    }

    override val jumpingSupported: Boolean
        get() = originalSource.jumpingSupported
}
Usage would be like this then:
override fun getNotesPagingSource(): PagingSource<Int, NotesRepositoryModel> {
    return MappingPagingSource(
        originalSource = notesDao.getPagingSourceForNotes(),
        mapper = { it.toNotesRepositoryModel() },
    )
}
Regarding the empty pages in the PagingState passed to getRefreshKey: mapping all loaded pages back to the original value would be too expensive, and Room's paging implementation only uses anchorPosition and config.initialLoadSize anyway - see here and here.
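As a side note beyond the original answer: Room's generated PagingSource invalidates itself when the table changes, but the Pager only holds the wrapping source, so it can be worth forwarding invalidation from the original source to the mapped one. A minimal sketch under that assumption, reusing MappingPagingSource and the mapper above (the helper name is made up for illustration):

fun mappedNotesPagingSource(
    original: PagingSource<Int, NotesEntities>,
): PagingSource<Int, NotesRepositoryModel> {
    val mapped = MappingPagingSource(original) { it.toNotesRepositoryModel() }
    // If Room invalidates the underlying source, invalidate the wrapper as well,
    // so the Pager asks its pagingSourceFactory for a fresh pair on the next load.
    original.registerInvalidatedCallback { mapped.invalidate() }
    return mapped
}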

What is the purpose of getTonightFromDatabase() in the Android Kotlin Room codelabs?

I am trying to understand codelab 6.2 Coroutines and Room in Android Kotlin Fundamentals. Class SleepTrackerViewModel includes (with comments added by me):
private var tonight = MutableLiveData<SleepNight?>()

private suspend fun getTonightFromDatabase(): SleepNight? {
    var night = database.getTonight() // this gets the most recent night
    // Return null if this night has been completed (its end time has been set).
    if (night?.endTimeMilli != night?.startTimeMilli) {
        night = null
    }
    return night
}

fun onStartTracking() {
    viewModelScope.launch {
        val newNight = SleepNight()
        insert(newNight)
        tonight.value = getTonightFromDatabase()
    }
}

fun onStopTracking() {
    viewModelScope.launch {
        val oldNight = tonight.value ?: return@launch
        oldNight.endTimeMilli = System.currentTimeMillis()
        update(oldNight)
    }
}
I don't understand why the method getTonightFromDatabase(), which is called only from onStartTracking(), is needed. It seems the last statement in onStartTracking() could be replaced by:
tonight.value = newNight
I also don't understand why the conditional in getTonightFromDatabase() is needed.
One of the reasons is that the nightId in the SleepNight data class is auto-generated by the database.
If the code did tonight.value = newNight, the nightId would not be the same as the one in the database. That would cause the update call in onStopTracking to end (update) the wrong night.
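For reference, the SleepNight entity in the codelab looks roughly like this (a sketch from memory rather than the exact file); the key point is that nightId is only assigned by Room when the row is inserted, so the in-memory newNight still carries the default id:

@Entity(tableName = "daily_sleep_quality_table")
data class SleepNight(
    @PrimaryKey(autoGenerate = true)
    var nightId: Long = 0L, // Room assigns the real id on insert; the local copy keeps 0L
    var startTimeMilli: Long = System.currentTimeMillis(),
    var endTimeMilli: Long = startTimeMilli,
    var sleepQuality: Int = -1
)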
Note too that the method getTonightFromDatabase is called from a later version of SleepTrackerViewModel:
private var tonight = MutableLiveData<SleepNight?>()

init {
    initializeTonight()
}

private fun initializeTonight() {
    viewModelScope.launch {
        tonight.value = getTonightFromDatabase()
    }
}
When the application restarts, getTonightFromDatabase is called to set the instance variable tonight (which would more accurately be called latestNight). If the most recent night was complete, the completeness check ensures that null is returned, preventing that entry from being modified.

spring cloud stream file source app - History of Processed files and polling files under sub directory

I'm building a data pipeline with the Spring Cloud Stream File Source app at the start of the pipeline. I need some help working around some missing features.
My file source app (based on org.springframework.cloud.stream.app:spring-cloud-starter-stream-source-file) works perfectly well except for the missing features that I need help with. I need:
1. To delete files after they have been polled and turned into messages
2. To poll subdirectories
With respect to item 1, I read that the delete feature doesn't exist in the file source app (it is available on the SFTP source). Every time the app is restarted, the files that were processed in the past are picked up again; can the history of processed files be made permanent? Is there an easy alternative?
To support those requirements you definitely need to modify the code of the mentioned File Source project: https://docs.spring.io/spring-cloud-stream-app-starters/docs/Einstein.BUILD-SNAPSHOT/reference/htmlsingle/#_patching_pre_built_applications
I would suggest forking the project and pulling it from GitHub as is, since you are going to modify the existing code of the project. Then follow the instructions in the mentioned doc on how to build the target binder-specific artifact, which will be compatible with the SCDF environment.
Now about the questions:
To poll sub-directories for the same file pattern, you need to configure a RecursiveDirectoryScanner on the Files.inboundAdapter():
/**
 * Specify a custom scanner.
 * @param scanner the scanner.
 * @return the spec.
 * @see FileReadingMessageSource#setScanner(DirectoryScanner)
 */
public FileInboundChannelAdapterSpec scanner(DirectoryScanner scanner) {
Note that all the filters must be configured on this DirectoryScanner instead.
There is going to be an error otherwise:
// Check that the filter and locker options are _NOT_ set if an external scanner has been set.
// The external scanner is responsible for the filter and locker options in that case.
Assert.state(!(this.scannerExplicitlySet && (this.filter != null || this.locker != null)),
        () -> "When using an external scanner the 'filter' and 'locker' options should not be used. " +
                "Instead, set these options on the external DirectoryScanner: " + this.scanner);
To keep track of processed files, it is better to use a FileSystemPersistentAcceptOnceFileListFilter backed by an external persistent ConcurrentMetadataStore implementation: https://docs.spring.io/spring-integration/reference/html/#metadata-store. This must be used instead of preventDuplicates(), because FileSystemPersistentAcceptOnceFileListFilter ensures the only-once logic for us as well.
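A rough sketch of that wiring (written in Kotlin here for brevity; the metadata-store bean and the "file-source:" key prefix are illustrative assumptions, not part of the original answer):

@Bean
fun recursiveScannerWithPersistentFilter(metadataStore: ConcurrentMetadataStore): DirectoryScanner {
    val scanner = RecursiveDirectoryScanner()
    // Remembers already-processed files in the external metadata store, so a restart
    // does not re-emit them; compose with pattern filters via CompositeFileListFilter
    // if needed, since all filters belong on the scanner.
    scanner.setFilter(FileSystemPersistentAcceptOnceFileListFilter(metadataStore, "file-source:"))
    return scanner
}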
Deleting the file right after sending might not be an option, since you may send the File as is, and it still has to be available on the consuming side.
Also, you can add a ChannelInterceptor into the source.output() and implement its postSend() to perform ((File) message.getPayload()).delete(), which is going to happen when the message has been successfully sent to the binder destination.
@EnableBinding(Source.class)
@Import(TriggerConfiguration.class)
@EnableConfigurationProperties({FileSourceProperties.class, FileConsumerProperties.class,
        TriggerPropertiesMaxMessagesDefaultUnlimited.class})
public class FileSourceConfiguration {

    @Autowired
    @Qualifier("defaultPoller")
    PollerMetadata defaultPoller;

    @Autowired
    Source source;

    @Autowired
    private FileSourceProperties properties;

    @Autowired
    private FileConsumerProperties fileConsumerProperties;

    private Boolean alwaysAcceptDirectories = false;
    private Boolean deletePostSend;
    private Boolean movePostSend;
    private String movePostSendSuffix;

    @Bean
    public IntegrationFlow fileSourceFlow() {
        FileInboundChannelAdapterSpec messageSourceSpec = Files.inboundAdapter(new File(this.properties.getDirectory()));
        RecursiveDirectoryScanner recursiveDirectoryScanner = new RecursiveDirectoryScanner();
        messageSourceSpec.scanner(recursiveDirectoryScanner);
        FileVisitOption[] fileVisitOption = new FileVisitOption[1];
        recursiveDirectoryScanner.setFilter(initializeFileListFilter());
        initializePostSendAction();
        IntegrationFlowBuilder flowBuilder = IntegrationFlows
                .from(messageSourceSpec,
                        new Consumer<SourcePollingChannelAdapterSpec>() {
                            @Override
                            public void accept(SourcePollingChannelAdapterSpec sourcePollingChannelAdapterSpec) {
                                sourcePollingChannelAdapterSpec
                                        .poller(defaultPoller);
                            }
                        });
        ChannelInterceptor channelInterceptor = new ChannelInterceptor() {
            @Override
            public void postSend(Message<?> message, MessageChannel channel, boolean sent) {
                if (sent) {
                    File fileOriginalFile = (File) message.getHeaders().get("file_originalFile");
                    if (fileOriginalFile != null) {
                        if (movePostSend) {
                            fileOriginalFile.renameTo(new File(fileOriginalFile + movePostSendSuffix));
                        } else if (deletePostSend) {
                            fileOriginalFile.delete();
                        }
                    }
                }
            }
            // Override more interceptor methods to capture some logs here
        };
        MessageChannel messageChannel = source.output();
        ((DirectChannel) messageChannel).addInterceptor(channelInterceptor);
        return FileUtils.enhanceFlowForReadingMode(flowBuilder, this.fileConsumerProperties)
                .channel(messageChannel)
                .get();
    }

    private void initializePostSendAction() {
        deletePostSend = this.properties.isDeletePostSend();
        movePostSend = this.properties.isMovePostSend();
        movePostSendSuffix = this.properties.getMovePostSendSuffix();
        if (deletePostSend && movePostSend) {
            String errorMessage = "The 'delete-file-post-send' and 'move-file-post-send' attributes are mutually exclusive";
            throw new IllegalArgumentException(errorMessage);
        }
        if (movePostSend && (movePostSendSuffix == null || movePostSendSuffix.trim().length() == 0)) {
            String errorMessage = "The 'move-post-send-suffix' is required when 'move-file-post-send' is set to true.";
            throw new IllegalArgumentException(errorMessage);
        }
        // Add additional validation to ensure the user didn't configure a file move that will result in cyclic processing of the file
    }

    private FileListFilter<File> initializeFileListFilter() {
        final List<FileListFilter<File>> filtersNeeded = new ArrayList<FileListFilter<File>>();
        if (this.properties.getFilenamePattern() != null && this.properties.getFilenameRegex() != null) {
            String errorMessage = "The 'filename-pattern' and 'filename-regex' attributes are mutually exclusive.";
            throw new IllegalArgumentException(errorMessage);
        }
        if (StringUtils.hasText(this.properties.getFilenamePattern())) {
            SimplePatternFileListFilter patternFilter = new SimplePatternFileListFilter(this.properties.getFilenamePattern());
            if (this.alwaysAcceptDirectories != null) {
                patternFilter.setAlwaysAcceptDirectories(this.alwaysAcceptDirectories);
            }
            filtersNeeded.add(patternFilter);
        } else if (this.properties.getFilenameRegex() != null) {
            RegexPatternFileListFilter regexFilter = new RegexPatternFileListFilter(this.properties.getFilenameRegex());
            if (this.alwaysAcceptDirectories != null) {
                regexFilter.setAlwaysAcceptDirectories(this.alwaysAcceptDirectories);
            }
            filtersNeeded.add(regexFilter);
        }
        FileListFilter<File> createdFilter = null;
        if (!Boolean.FALSE.equals(this.properties.isIgnoreHiddenFiles())) {
            filtersNeeded.add(new IgnoreHiddenFileListFilter());
        }
        if (Boolean.TRUE.equals(this.properties.isPreventDuplicates())) {
            filtersNeeded.add(new AcceptOnceFileListFilter<File>());
        }
        if (filtersNeeded.size() == 1) {
            createdFilter = filtersNeeded.get(0);
        } else {
            createdFilter = new CompositeFileListFilter<File>(filtersNeeded);
        }
        return createdFilter;
    }
}

How to Access Mono<T> While Handling Exception with onErrorMap()?

In the data class I defined that the 'name' must be unique across the whole Mongo collection:
@Document
data class Inn(@Indexed(unique = true) val name: String,
               val description: String) {
    @Id
    var id: String = UUID.randomUUID().toString()
    var intro: String = ""
}
So in the service I have to capture the resulting exception if someone passes the same name again.
@Service
class InnService(val repository: InnRepository) {

    fun create(inn: Mono<Inn>): Mono<Inn> =
        repository
            .create(inn)
            .onErrorMap(
                DuplicateKeyException::class.java,
                { err -> InnAlreadyExistedException("The inn already existed", err) }
            )
}
This is OK, but what if I want to add more info to the exception message, like "The inn named '${it.name}' already existed" - what should I do to transform the exception with an enriched message?
Clearly, assigning the Mono<Inn> to a local variable at the beginning is not a good idea...
It is a similar situation in the handler: I'd like to give the client more info derived from the customized exception, but I can't find a proper way.
@Component
class InnHandler(val innService: InnService) {

    fun create(req: ServerRequest): Mono<ServerResponse> {
        return innService
            .create(req.bodyToMono<Inn>())
            .flatMap {
                created(URI.create("/api/inns/${it.id}"))
                    .contentType(MediaType.APPLICATION_JSON_UTF8).body(it.toMono())
            }
            .onErrorReturn(
                InnAlreadyExistedException::class.java,
                badRequest().body(mapOf("code" to "SF400", "message" to t.message).toMono()).block()
            )
    }
}
In Reactor, you aren't going to have the value you want handed to you in onErrorMap as an argument; you just get the Throwable. However, in Kotlin you can reach outside the scope of the error handler and just refer to inn directly. You don't need to change much:
fun create(inn: Mono<Inn>): Mono<Inn> =
    repository
        .create(inn)
        .onErrorMap(
            DuplicateKeyException::class.java,
            { InnAlreadyExistedException("The inn ${inn.name} already existed", it) }
        )

Parallel Stream repeating items

I am retrieving big chunks of data from a DB and using this data to write it somewhere else. In order to avoid a long processing time, I'm trying to use parallel streams to write it. When I run this as a sequential stream, it works perfectly. However, if I change it to parallel, the behavior is odd: it prints the same object multiple times (more than 10).
@PostConstruct
public void retrieveAllTypeRecords() throws SQLException {
    logger.info("Retrieve batch of Type records.");
    try {
        Stream<TypeRecord> typeQueryAsStream = jdbcStream.getTypeQueryAsStream();
        typeQueryAsStream.forEach((type) -> {
            logger.info("Printing Type with field1: {} and field2: {}.", type.getField1(), type.getField2()); // the same object gets printed here multiple times
            // write this object somewhere else
        });
        logger.info("Completed full retrieval of Type data.");
    } catch (Exception e) {
        logger.error("error: " + e);
    }
}
public Stream<TypeRecord> getTypeQueryAsStream() throws SQLException {
    String sql = typeRepository.getQueryAllTypesRecords(); // retrieves SQL query in String format
    TypeMapper typeMapper = new TypeMapper();
    JdbcStream.StreamableQuery query = jdbcStream.streamableQuery(sql);
    Stream<TypeRecord> stream = query.stream()
            .map(row -> {
                return typeMapper.mapRow(row); // maps column values to object values
            });
    return stream;
}
public class StreamableQuery implements Closeable {

    (...)

    public Stream<SqlRow> stream() throws SQLException {
        final SqlRowSet rowSet = new ResultSetWrappingSqlRowSet(preparedStatement.executeQuery());
        final SqlRow sqlRow = new SqlRowAdapter(rowSet);
        Supplier<Spliterator<SqlRow>> supplier = () -> Spliterators.spliteratorUnknownSize(new Iterator<SqlRow>() {
            @Override
            public boolean hasNext() {
                return !rowSet.isLast();
            }

            @Override
            public SqlRow next() {
                if (!rowSet.next()) {
                    throw new NoSuchElementException();
                }
                return sqlRow;
            }
        }, Spliterator.CONCURRENT);
        return StreamSupport.stream(supplier, Spliterator.CONCURRENT, true); // this boolean sets the stream as parallel
    }
}
I've also tried using typeQueryAsStream.parallel().forEach((type) -> ...), but the result is the same.
Example of output:
[ForkJoinPool.commonPool-worker-1] INFO TypeService - Saving Type with field1: L6797 and field2: P1433.
[ForkJoinPool.commonPool-worker-1] INFO TypeService - Saving Type with field1: L6797 and field2: P1433.
[main] INFO TypeService - Saving Type with field1: L6797 and field2: P1433.
[ForkJoinPool.commonPool-worker-1] INFO TypeService - Saving Type with field1: L6797 and field2: P1433.
Well, look at your code:
final SqlRow sqlRow = new SqlRowAdapter(rowSet);
Supplier<Spliterator<SqlRow>> supplier = () -> Spliterators.spliteratorUnknownSize(new Iterator<SqlRow>() {
    …
    @Override
    public SqlRow next() {
        if (!rowSet.next()) {
            throw new NoSuchElementException();
        }
        return sqlRow;
    }
}, Spliterator.CONCURRENT);
You are returning the same object every time. You achieve your desired effects by implicitly modifying the state of this object when calling rowSet.next().
This obviously can't work when multiple threads try to access that single object concurrently. Even buffering some items to hand them over to another thread will cause trouble. Therefore, such interference can cause problems with sequential streams as well, as soon as stateful intermediate operations are involved, like sorted or distinct.
Assuming that typeMapper.mapRow(row) will produce an actual data item that does not interfere with other data items, you should integrate this step into the stream source to create a valid stream.
public Stream<TypeRecord> stream(TypeMapper typeMapper) throws SQLException {
    SqlRowSet rowSet = new ResultSetWrappingSqlRowSet(preparedStatement.executeQuery());
    SqlRow sqlRow = new SqlRowAdapter(rowSet);
    Spliterator<TypeRecord> sp = new Spliterators.AbstractSpliterator<TypeRecord>(
            Long.MAX_VALUE, Spliterator.CONCURRENT | Spliterator.ORDERED) {
        @Override
        public boolean tryAdvance(Consumer<? super TypeRecord> action) {
            if (!rowSet.next()) return false;
            action.accept(typeMapper.mapRow(sqlRow));
            return true;
        }
    };
    return StreamSupport.stream(sp, true); // this boolean sets the stream as parallel
}
Note that for a lot of use cases, like this one, implementing a Spliterator is simpler than implementing an Iterator (which needs to be wrapped via spliteratorUnknownSize anyway). Also, there is no need to encapsulate this instantiation into a Supplier.
As a final note, the current implementation does not perform well for streams with an unknown size, as it treats Long.MAX_VALUE like a very large number, ignoring the “unknown” semantic assigned to it by the specification. It will be very beneficial to the parallel performance to provide an estimate size, it doesn’t need to be precise, in fact, with the current implementation, even a completely made up number, say 1000 may perform better than correctly using Long.MAX_VALUE to denote an entirely unknown size.
