Spring Cloud Stream file source app - history of processed files and polling files under subdirectories

I'm building a data pipeline with the Spring Cloud Stream File Source app at the start of the pipeline, and I need some help working around some missing features.
My file source app (based on org.springframework.cloud.stream.app:spring-cloud-starter-stream-source-file) works perfectly well except for the missing features that I need help with:
1. Delete files after they have been polled and turned into messages.
2. Poll into the subdirectories.
With respect to item 1, I read that the delete feature doesn't exist in the file source app (it is available in the sftp source). Every time the app is restarted, the files that were processed in the past are picked up again. Can the history of processed files be made permanent? Is there an easy alternative?

To support those requirements you definitely need to modify the code of the mentioned File Source project: https://docs.spring.io/spring-cloud-stream-app-starters/docs/Einstein.BUILD-SNAPSHOT/reference/htmlsingle/#_patching_pre_built_applications
I would suggest forking the project and pulling it from GitHub as-is, since you are going to modify its existing code. Then follow the instructions in the mentioned doc on how to build the target binder-specific artifact, which will be compatible with the SCDF environment.
Now about the questions:
To poll sub-directories for the same file pattern, you need to configure a RecursiveDirectoryScanner on the Files.inboundAdapter():
/**
 * Specify a custom scanner.
 * @param scanner the scanner.
 * @return the spec.
 * @see FileReadingMessageSource#setScanner(DirectoryScanner)
 */
public FileInboundChannelAdapterSpec scanner(DirectoryScanner scanner) {
Note that all the filters must be configured on this DirectoryScanner instead.
Otherwise this state assertion fails:
// Check that the filter and locker options are _NOT_ set if an external scanner has been set.
// The external scanner is responsible for the filter and locker options in that case.
Assert.state(!(this.scannerExplicitlySet && (this.filter != null || this.locker != null)),
        () -> "When using an external scanner the 'filter' and 'locker' options should not be used. " +
                "Instead, set these options on the external DirectoryScanner: " + this.scanner);
To keep track of the files, it is better to use a FileSystemPersistentAcceptOnceFileListFilter backed by an external persistence store through a ConcurrentMetadataStore implementation: https://docs.spring.io/spring-integration/reference/html/#metadata-store. This must be used instead of preventDuplicates(), because FileSystemPersistentAcceptOnceFileListFilter ensures the accept-once logic for us as well.
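For example, a minimal sketch, assuming a Redis-backed store via spring-integration-redis (any ConcurrentMetadataStore implementation from the linked doc works the same way; the bean names and key prefix are arbitrary):

@Bean
public ConcurrentMetadataStore metadataStore(RedisConnectionFactory redisConnectionFactory) {
    // Entries survive application restarts, so already-processed files stay remembered.
    return new RedisMetadataStore(redisConnectionFactory, "file-source-metadata");
}

@Bean
public FileSystemPersistentAcceptOnceFileListFilter persistentAcceptOnceFilter(ConcurrentMetadataStore metadataStore) {
    // The prefix namespaces this filter's keys within the shared store.
    return new FileSystemPersistentAcceptOnceFileListFilter(metadataStore, "file-source:");
}

Such a filter would then go into the scanner's filter chain in place of the AcceptOnceFileListFilter that backs preventDuplicates() (see initializeFileListFilter() in the configuration below).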
Deleting the file right after sending may not be appropriate, since you may send the File payload as-is and it still has to be available on the consuming side.
Alternatively, you can add a ChannelInterceptor to source.output() and implement its postSend() to perform ((File) message.getPayload()).delete(); this runs only after the message has been successfully sent to the binder destination.

@EnableBinding(Source.class)
@Import(TriggerConfiguration.class)
@EnableConfigurationProperties({FileSourceProperties.class, FileConsumerProperties.class,
        TriggerPropertiesMaxMessagesDefaultUnlimited.class})
public class FileSourceConfiguration {

    @Autowired
    @Qualifier("defaultPoller")
    PollerMetadata defaultPoller;

    @Autowired
    Source source;

    @Autowired
    private FileSourceProperties properties;

    @Autowired
    private FileConsumerProperties fileConsumerProperties;

    private Boolean alwaysAcceptDirectories = false;
    private Boolean deletePostSend;
    private Boolean movePostSend;
    private String movePostSendSuffix;

    @Bean
    public IntegrationFlow fileSourceFlow() {
        FileInboundChannelAdapterSpec messageSourceSpec = Files.inboundAdapter(new File(this.properties.getDirectory()));
        RecursiveDirectoryScanner recursiveDirectoryScanner = new RecursiveDirectoryScanner();
        messageSourceSpec.scanner(recursiveDirectoryScanner);
        recursiveDirectoryScanner.setFilter(initializeFileListFilter());
        initializePostSendAction();
        IntegrationFlowBuilder flowBuilder = IntegrationFlows
                .from(messageSourceSpec,
                        new Consumer<SourcePollingChannelAdapterSpec>() {

                            @Override
                            public void accept(SourcePollingChannelAdapterSpec sourcePollingChannelAdapterSpec) {
                                sourcePollingChannelAdapterSpec
                                        .poller(defaultPoller);
                            }

                        });
        ChannelInterceptor channelInterceptor = new ChannelInterceptor() {

            @Override
            public void postSend(Message<?> message, MessageChannel channel, boolean sent) {
                if (sent) {
                    File fileOriginalFile = (File) message.getHeaders().get("file_originalFile");
                    if (fileOriginalFile != null) {
                        if (movePostSend) {
                            fileOriginalFile.renameTo(new File(fileOriginalFile + movePostSendSuffix));
                        }
                        else if (deletePostSend) {
                            fileOriginalFile.delete();
                        }
                    }
                }
            }

            // Override more interceptor methods to capture some logs here

        };
        MessageChannel messageChannel = source.output();
        ((DirectChannel) messageChannel).addInterceptor(channelInterceptor);
        return FileUtils.enhanceFlowForReadingMode(flowBuilder, this.fileConsumerProperties)
                .channel(messageChannel)
                .get();
    }

    private void initializePostSendAction() {
        deletePostSend = this.properties.isDeletePostSend();
        movePostSend = this.properties.isMovePostSend();
        movePostSendSuffix = this.properties.getMovePostSendSuffix();
        if (deletePostSend && movePostSend) {
            String errorMessage = "The 'delete-file-post-send' and 'move-file-post-send' attributes are mutually exclusive";
            throw new IllegalArgumentException(errorMessage);
        }
        if (movePostSend && (movePostSendSuffix == null || movePostSendSuffix.trim().length() == 0)) {
            String errorMessage = "The 'move-post-send-suffix' is required when 'move-file-post-send' is set to true.";
            throw new IllegalArgumentException(errorMessage);
        }
        // Add additional validation to ensure the user didn't configure a file move that will result in cyclic processing of file
    }

    private FileListFilter<File> initializeFileListFilter() {
        final List<FileListFilter<File>> filtersNeeded = new ArrayList<FileListFilter<File>>();
        if (this.properties.getFilenamePattern() != null && this.properties.getFilenameRegex() != null) {
            String errorMessage = "The 'filename-pattern' and 'filename-regex' attributes are mutually exclusive.";
            throw new IllegalArgumentException(errorMessage);
        }
        if (StringUtils.hasText(this.properties.getFilenamePattern())) {
            SimplePatternFileListFilter patternFilter = new SimplePatternFileListFilter(this.properties.getFilenamePattern());
            if (this.alwaysAcceptDirectories != null) {
                patternFilter.setAlwaysAcceptDirectories(this.alwaysAcceptDirectories);
            }
            filtersNeeded.add(patternFilter);
        }
        else if (this.properties.getFilenameRegex() != null) {
            RegexPatternFileListFilter regexFilter = new RegexPatternFileListFilter(this.properties.getFilenameRegex());
            if (this.alwaysAcceptDirectories != null) {
                regexFilter.setAlwaysAcceptDirectories(this.alwaysAcceptDirectories);
            }
            filtersNeeded.add(regexFilter);
        }
        FileListFilter<File> createdFilter = null;
        if (!Boolean.FALSE.equals(this.properties.isIgnoreHiddenFiles())) {
            filtersNeeded.add(new IgnoreHiddenFileListFilter());
        }
        if (Boolean.TRUE.equals(this.properties.isPreventDuplicates())) {
            filtersNeeded.add(new AcceptOnceFileListFilter<File>());
        }
        if (filtersNeeded.size() == 1) {
            createdFilter = filtersNeeded.get(0);
        }
        else {
            createdFilter = new CompositeFileListFilter<File>(filtersNeeded);
        }
        return createdFilter;
    }

}

Related

How to test a state stored aggregate that doesn't produce events

I want to test a state-stored aggregate by using AggregateTestFixture. However, I get an AggregateNotFoundException: "No 'given' events were configured for this aggregate, nor have any events been stored." error.
I change the state of the aggregate in command handlers and apply no events, since I don't want my domain entry table to grow unnecessarily.
Here is my external command handler for the aggregate:
open class AllocationCommandHandler constructor(
    private val repository: Repository<Allocation>,
) {

    @CommandHandler
    fun on(cmd: CreateAllocation) {
        this.repository.newInstance {
            Allocation(
                cmd.allocationId
            )
        }
    }

    @CommandHandler
    fun on(cmd: CompleteAllocation) {
        this.load(cmd.allocationId).invoke { it.complete() }
    }

    private fun load(allocationId: AllocationId): Aggregate<Allocation> =
        repository.load(allocationId)
}
Here is the aggregate:
@Entity
@Aggregate
@Revision("1.0")
final class Allocation constructor() {

    @AggregateIdentifier
    @Id
    lateinit var allocationId: AllocationId
        private set

    var status: AllocationStatusEnum = AllocationStatusEnum.IN_PROGRESS
        private set

    constructor(
        allocationId: AllocationId,
    ) : this() {
        this.allocationId = allocationId
        this.status = AllocationStatusEnum.IN_PROGRESS
    }

    fun complete() {
        if (this.status != AllocationStatusEnum.IN_PROGRESS) {
            throw IllegalArgumentException("cannot complete if not in progress")
        }
        this.status = AllocationStatusEnum.COMPLETED
        apply(
            AllocationCompleted(
                this.allocationId
            )
        )
    }
}
There is no event handler for the AllocationCompleted event in this aggregate, since it is listened to by another aggregate.
So here is the test code:
class AllocationTest {

    private lateinit var fixture: AggregateTestFixture<Allocation>

    @Before
    fun setUp() {
        fixture = AggregateTestFixture(Allocation::class.java).apply {
            registerAnnotatedCommandHandler(AllocationCommandHandler(repository))
        }
    }

    @Test
    fun `create allocation`() {
        fixture.givenNoPriorActivity()
            .`when`(CreateAllocation("1"))
            .expectSuccessfulHandlerExecution()
            .expectState {
                assertTrue(it.allocationId == "1")
            }
    }

    @Test
    fun `complete allocation`() {
        fixture.givenState { Allocation("1") }
            .`when`(CompleteAllocation("1"))
            .expectSuccessfulHandlerExecution()
            .expectState {
                assertTrue(it.status == AllocationStatusEnum.COMPLETED)
            }
    }
}
The create allocation test passes; I get the error on the complete allocation test.
The givenNoPriorActivity is actually not intended to be used with state-stored aggregates. Quite recently an adjustment has been made to the AggregateTestFixture to support this, but that will be released with Axon 4.6.0 (the current most recent version is 4.5.1).
That, however, does not change the fact that I find it odd the complete allocation test fails. Using the givenState and expectState methods is the way to go. Maybe the Kotlin/Java combination is acting up right now; have you tried doing the same with pure Java, just for certainty?
On any note, the exception you share comes from the RecordingEventStore inside the AggregateTestFixture. It should only occur if an event sourcing repository is used under the hood by the fixture, since that is what reads events. What might be the culprit is the usage of givenNoPriorActivity. Please try replacing it with a givenState() providing an empty aggregate instance, as sketched below.
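A minimal sketch of that suggestion, written in plain Java (as also suggested above, for certainty); it assumes AllocationId is a String alias, as your Kotlin tests imply:

@Test
public void createAllocation() {
    fixture.givenState(Allocation::new) // empty instance instead of givenNoPriorActivity()
           .when(new CreateAllocation("1"))
           .expectSuccessfulHandlerExecution()
           .expectState(allocation -> assertTrue(allocation.getAllocationId().equals("1")));
}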

Spring boot SFTP, dynamic directory in SFTP

I am trying to upload files to dynamic directories on an SFTP server. When I uploaded some files, the first file was always uploaded to the last directory; after that, the rest of the files were uploaded to the correct directory. In debug mode I saw that every first file was uploaded to temporaryDirectory, which is set up by Spring, and I don't know how to set the value of this temporaryDirectory to the right value. Please help me solve the problem.
Or maybe you have another way to upload files and create a proper dynamic directory. Please let me know.
Here is the code:
private String sftpRemoteDirectory = "documents/";

@MessagingGateway
public interface UploadGateway {

    @Gateway(requestChannel = "toSftpChannel")
    void upload(File file, @Header("dirName") String dirName);
}

@Bean
@ServiceActivator(inputChannel = "toSftpChannel")
public MessageHandler handler() {
    SftpMessageHandler handler = new SftpMessageHandler(sftpSessionFactory());
    SimpleDateFormat formatter = new SimpleDateFormat("yyMMdd");
    String newDynamicDirectory = "E" + formatter.format(new Date()) + String.format("%04d", Integer.parseInt("0001") + 1);
    handler.setRemoteDirectoryExpression(new LiteralExpression(sftpRemoteDirectory + newDynamicDirectory));
    handler.setFileNameGenerator(message -> {
        String dirName = (String) message.getHeaders().get("dirName");
        handler.setRemoteDirectoryExpression(new LiteralExpression(sftpRemoteDirectory + dirName));
        handler.setAutoCreateDirectory(true);
        if (message.getPayload() instanceof File) {
            return ((File) message.getPayload()).getName();
        }
        else {
            throw new IllegalArgumentException("File expected as payload!");
        }
    });
    return handler;
}
You are using a LiteralExpression, which is evaluated just once; you need an expression that is evaluated at runtime, against each message:
handler.setRemoteDirectoryExpressionString("'" + sftpRemoteDirectory + "' + headers['dirName']");
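Putting it together, a minimal sketch of the corrected bean, reusing sftpSessionFactory() and sftpRemoteDirectory from the question; the SpEL expression is evaluated per message, so the remote directory is no longer fixed at startup:

@Bean
@ServiceActivator(inputChannel = "toSftpChannel")
public MessageHandler handler() {
    SftpMessageHandler handler = new SftpMessageHandler(sftpSessionFactory());
    // Evaluated for every message: base directory plus the 'dirName' header
    handler.setRemoteDirectoryExpressionString("'" + sftpRemoteDirectory + "' + headers['dirName']");
    handler.setAutoCreateDirectory(true);
    handler.setFileNameGenerator(message -> {
        if (message.getPayload() instanceof File) {
            return ((File) message.getPayload()).getName();
        }
        throw new IllegalArgumentException("File expected as payload!");
    });
    return handler;
}

Note that the FileNameGenerator no longer mutates the handler; it only derives the file name from the payload.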

Do I need to deal with concurrency?

I have an application where I am managing documents. I would like to ask whether I need to deal with concurrency.
Let's say I have the method below (in a class annotated with @Service and @Transactional), and multiple requests come in that need to use this method.
Will Spring and the database handle concurrency without synchronization (my DB is MySQL and I use JPA)? That is, will the first request to use this method be executed while another request waits until the previous one is done, so that nothing gets overwritten in the database?
Thanks for the help.
public void updateSharing(long userId, long documentId, int approval) {
    Optional<Document> optionalDocument = documentRepository.findById(documentId);
    User user = userService.findUserById(userId);
    if (optionalDocument.isPresent()) {
        Document document = optionalDocument.get();
        if (document.getDocumentState().getId() == 2) {
            documentRepository.updateSharing(userId, documentId, approval);
            if (approval == 0) {
                List<User> users = userService.getUsersForApprovingDocument(documentId);
                Map<String, String> map = emailService.createMessage(2, user, document);
                if (document.getUser().isActive()) {
                    users.add(document.getUser());
                }
                setDocumentType(documentId, 3);
                sendEmail(users, map.get("subject"), map.get("message"));
            } else if (isDocumentApproved(documentId)) {
                setDocumentType(documentId, 1);
                List<User> users = userService.getUsersForApprovingDocument(documentId);
                if (document.getUser().isActive()) {
                    users.add(document.getUser());
                }
                Map<String, String> map = emailService.createMessage(5, user, document);
                sendEmail(users, map.get("subject"), map.get("message"));
            }
        } else if (document.getDocumentState().getId() == 1) {
            documentRepository.updateSharing(userId, documentId, approval);
        } else {
            return;
        }
    }
}
You don't need to deal with concurrency in this situation.
For every request, the container creates a new thread, and each thread has its own stack.

Creating webhook-notifications in testing environment

I'm currently trying to create a test webhook notification as shown in the documentation:
HashMap<String, String> sampleNotification = gateway.webhookTesting().sampleNotification(
WebhookNotification.Kind.SUBSCRIPTION_WENT_PAST_DUE, "my_id"
);
WebhookNotification webhookNotification = gateway.webhookNotification().parse(
sampleNotification.get("bt_signature"),
sampleNotification.get("bt_payload")
);
webhookNotification.getSubscription().getId();
// "my_id"
First off, I don't know what my_id should actually be. Is it supposed to be a plan ID, or should it be a subscription ID?
I've tested all of it. I've set it to an existing billing plan in my vault, and I've also tried creating a Customer all the way down to an actual Subscription, like this:
public class WebhookChargedSuccessfullyLocal {

    private final static BraintreeGateway BT;

    static {
        String btConfig = "C:\\workspaces\\mz\\mz-server\\mz-web-server\\src\\main\\assembly\\dev\\braintree.properties";
        Braintree.initialize(btConfig);
        BT = Braintree.instance();
    }

    public static void main(String[] args) {
        WebhookChargedSuccessfullyLocal webhookChargedSuccessfullyLocal = new WebhookChargedSuccessfullyLocal();
        webhookChargedSuccessfullyLocal.post();
    }

    public void post() {
        CustomerRequest customerRequest = new CustomerRequest()
            .firstName("Testuser")
            .lastName("Tester");
        Result<Customer> createUserResult = BT.customer().create(customerRequest);
        if (createUserResult.isSuccess() == false) {
            System.err.println("Could not create customer");
            System.exit(1);
        }
        Customer customer = createUserResult.getTarget();
        PaymentMethodRequest paymentMethodRequest = new PaymentMethodRequest()
            .customerId(customer.getId())
            .paymentMethodNonce("fake-valid-visa-nonce");
        Result<? extends PaymentMethod> createPaymentMethodResult = BT.paymentMethod().create(paymentMethodRequest);
        if (createPaymentMethodResult.isSuccess() == false) {
            System.err.println("Could not create payment method");
            System.exit(1);
        }
        if (!(createPaymentMethodResult.getTarget() instanceof CreditCard)) {
            System.err.println("Unexpected error. Result is not a credit card.");
            System.exit(1);
        }
        CreditCard creditCard = (CreditCard) createPaymentMethodResult.getTarget();
        SubscriptionRequest subscriptionRequest = new SubscriptionRequest()
            .paymentMethodToken(creditCard.getToken())
            .planId("mmb2");
        Result<Subscription> createSubscriptionResult = BT.subscription().create(subscriptionRequest);
        if (createSubscriptionResult.isSuccess() == false) {
            System.err.println("Could not create subscription");
            System.exit(1);
        }
        Subscription subscription = createSubscriptionResult.getTarget();
        HashMap<String, String> sampleNotification = BT.webhookTesting()
            .sampleNotification(WebhookNotification.Kind.SUBSCRIPTION_CHARGED_SUCCESSFULLY, subscription.getId());
        WebhookNotification webhookNotification = BT.webhookNotification()
            .parse(
                sampleNotification.get("bt_signature"),
                sampleNotification.get("bt_payload")
            );
        System.out.println(webhookNotification.getSubscription().getId());
    }
}
but all I'm getting is a WebhookNotification instance that has almost nothing set: only its ID and timestamp appear to be set, and that's it.
What I expected:
I expected to receive a Subscription object that tells me which customer has subscribed to it as well as e.g. all add-ons which are included in the billing plan.
Is there a way to get such test-notifications in the sandbox mode?
Full disclosure: I work at Braintree. If you have any further questions, feel free to contact support.
webhookNotification.getSubscription().getId(); will return the ID of the subscription associated with sampleNotification, which can be anything for testing purposes, but will be a SubscriptionID in a production environment.
Receiving a dummy object from webhookTesting().sampleNotification() is the expected behavior; it is in place to help you ensure that all kinds of webhooks can be caught correctly. Once that logic is in place, you can specify your endpoint in the Sandbox Gateway under Settings > Webhooks to receive real webhook notifications (a sketch of such an endpoint follows below).
In the case of SUBSCRIPTION_CHARGED_SUCCESSFULLY you will indeed receive a Subscription object containing add-on information as well as an array of Transaction objects containing customer information.
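For illustration, a minimal sketch of such an endpoint as a hypothetical Spring MVC controller method (the URL and controller wiring are assumptions, and the BT gateway is reused from the question; Braintree POSTs the bt_signature and bt_payload parameters to whatever endpoint you register):

@PostMapping("/webhooks/braintree")
public ResponseEntity<Void> handleBraintreeWebhook(@RequestParam("bt_signature") String signature,
                                                   @RequestParam("bt_payload") String payload) {
    WebhookNotification notification = BT.webhookNotification().parse(signature, payload);
    if (notification.getKind() == WebhookNotification.Kind.SUBSCRIPTION_CHARGED_SUCCESSFULLY) {
        Subscription subscription = notification.getSubscription();
        // Unlike the sandbox sample notification, a real notification carries
        // the full subscription details described above.
        subscription.getAddOns();
        subscription.getTransactions();
    }
    return ResponseEntity.ok().build();
}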

JPA relation changes not updated in PreUpdate callback

In my current project I have an entity which can be published to other systems. To keep track of the publications, the entity has a relation called "publications". I am using EclipseLink.
This entity bean also has a @PreUpdate annotated method.
In order to keep the other systems' data up to date, I created an aspect that is executed around the call to the @PreUpdate method. Depending on which properties have changed, I need to remove some of the publications. Everything is working absolutely fine so far.
The problem I am having is that the portal-publishing component correctly sends delete commands and removes the publication from the entity's "publications" list. I can even see in the change set that JPA has noticed the change to the "publications" property. After the transaction is flushed, the cached entity correctly no longer has the deleted publications. Unfortunately the database still does, and when the system is restarted or the entity is loaded from the DB again, the publication metadata is there again.
I tried almost everything. I even managed to get the deleted instances from the JPA change set in the aspect and tried to use the entityManager to delete them manually, but nothing actually worked. I seem to be unable to delete these relational entities. Currently I am thinking about using JDBC to delete them, but that would only be a last resort.
@Transactional
@Around("execution(* de.cware.services.truck.model.Truck.jpaPreUpdate(..))")
public Object truckPreUpdate(final ProceedingJoinPoint pjp) throws Throwable {
    if (alreadyExecutingMarker.get() != Boolean.TRUE) {
        alreadyExecutingMarker.set(Boolean.TRUE);
        final Truck truck = (Truck) pjp.getTarget();
        final JpaEntityManager jpaEntityManager = (JpaEntityManager) entityManager.getDelegate();
        UnitOfWorkChangeSet changeSet = jpaEntityManager.getUnitOfWork().getCurrentChanges();
        ObjectChangeSet objectChangeSet = changeSet.getObjectChangeSetForClone(truck);
        if (log.isDebugEnabled()) {
            log.debug("--------------------- Truck pre update check (" + truck.getId() + ") ---------------------");
        }

        ////////////////////////////////////////////////////////////////////////////////////////////////////////////
        // If the truck date has changed, revoke all publication copies.
        ////////////////////////////////////////////////////////////////////////////////////////////////////////////
        final ChangeRecord truckFreeDate = objectChangeSet.getChangesForAttributeNamed("lkwFreiDatum");
        if (truckFreeDate != null) {
            if (log.isDebugEnabled()) {
                log.debug("The date 'truckFreeDate' of truck with id '" + truck.getId() + "' has changed. " +
                        "Revoking all publications that are not marked as main applications");
            }
            for (final String portal : truck.getPublishedPortals()) {
                if (log.isDebugEnabled()) {
                    log.debug("- Revoking publications of copies to portal: " + portal);
                }
                portalService.deleteCopies(truck, portal);
                // Get any deleted portal references and use the entityManager to finally delete them.
                changeSet = jpaEntityManager.getUnitOfWork().getCurrentChanges();
                objectChangeSet = changeSet.getObjectChangeSetForClone(truck);
                final ChangeRecord publicationChanges = objectChangeSet.getChangesForAttributeNamed("publications");
                if (publicationChanges != null) {
                    if (publicationChanges instanceof CollectionChangeRecord) {
                        final CollectionChangeRecord collectionChanges =
                                (CollectionChangeRecord) publicationChanges;
                        @SuppressWarnings("unchecked")
                        final Collection<ObjectChangeSet> removedPublications =
                                (Collection<ObjectChangeSet>)
                                        collectionChanges.getRemoveObjectList().values();
                        for (final ObjectChangeSet removedPublication : removedPublications) {
                            final TruckPublication publication = (TruckPublication) ((org.eclipse.persistence.internal.sessions.ObjectChangeSet) removedPublication).getUnitOfWorkClone();
                            entityManager.remove(publication);
                        }
                    }
                }
            }
        }
    }
    // Proceed with the intercepted jpaPreUpdate call.
    return pjp.proceed();
}
Chris
The issue is that PreUpdate is raised during the commit process, when the set of changes has already been computed, and the set of objects to delete has already been computed.
Ideally you would perform something like this in your application logic, not through a persistence event.
You could try executing a DeleteObjectQuery directly from your event (instead of using em.remove()); this may work, but in general it would be better to perform this logic in your application.
jpaEntityManager.getUnitOfWork().deleteObject(object);
Also note that getCurrentChanges() computes the changes; in a PreUpdate event the changes are already computed, so you should be able to use getUnitOfWorkChangeSet().
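Applied to the removal loop in the first version above, a minimal sketch of that suggestion (same variables as in the question; only the entityManager.remove() call changes):

final UnitOfWork unitOfWork = jpaEntityManager.getUnitOfWork();
for (final ObjectChangeSet removedPublication : removedPublications) {
    final TruckPublication publication = (TruckPublication)
            ((org.eclipse.persistence.internal.sessions.ObjectChangeSet) removedPublication).getUnitOfWorkClone();
    // Register the delete directly with the EclipseLink UnitOfWork; em.remove() is
    // ignored at this point because the commit's change set is already computed.
    unitOfWork.deleteObject(publication);
}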
The only solution I found was to create a new method for performing the delete and force JPA to create a new transaction. As I thereby lose the change set, I had to find out manually which publications were deleted. I then simply call that helper method and the publications are correctly deleted, but I find this solution extremely ugly.
@Transactional
@Around("execution(* de.cware.services.truck.model.Truck.jpaPreUpdate(..))")
public Object truckPreUpdate(final ProceedingJoinPoint pjp) throws Throwable {
    if (alreadyExecutingMarker.get() != Boolean.TRUE) {
        alreadyExecutingMarker.set(Boolean.TRUE);
        final Truck truck = (Truck) pjp.getTarget();
        final JpaEntityManager jpaEntityManager = (JpaEntityManager) entityManager.getDelegate();
        final UnitOfWorkChangeSet changeSet = jpaEntityManager.getUnitOfWork().getCurrentChanges();
        final ObjectChangeSet objectChangeSet = changeSet.getObjectChangeSetForClone(truck);
        if (log.isDebugEnabled()) {
            log.debug("--------------------- Truck pre update check (" + truck.getId() + ") ---------------------");
        }

        ////////////////////////////////////////////////////////////////////////////////////////////////////////////
        // If the truck date has changed, revoke all publication copies.
        ////////////////////////////////////////////////////////////////////////////////////////////////////////////
        final ChangeRecord truckFreeDate = objectChangeSet.getChangesForAttributeNamed("lkwFreiDatum");
        if (truckFreeDate != null) {
            if (log.isDebugEnabled()) {
                log.debug("The date 'truckFreeDate' of truck with id '" + truck.getId() + "' has changed. " +
                        "Revoking all publications that are not marked as main applications");
            }
            removeCopyPublications(truck);
        }
    }
    // Proceed with the intercepted jpaPreUpdate call.
    return pjp.proceed();
}

@Transactional(propagation = Propagation.REQUIRES_NEW)
protected void removeCopyPublications(Truck truck) {
    // Delete all not-main-publications.
    for (final String portal : truck.getPublishedPortals()) {
        if (log.isDebugEnabled()) {
            log.debug("- Revoking publications of copies to portal: " + portal);
        }
        final Map<Integer, TruckPublication> oldPublications = new HashMap<Integer, TruckPublication>();
        for (final TruckPublication publication : truck.getPublications(portal)) {
            oldPublications.put(publication.getId(), publication);
        }
        portalService.deleteCopies(truck, portal);
        for (final TruckPublication publication : truck.getPublications(portal)) {
            oldPublications.remove(publication.getId());
        }
        for (TruckPublication removedPublication : oldPublications.values()) {
            if (!entityManager.contains(removedPublication)) {
                removedPublication = entityManager.merge(removedPublication);
            }
            entityManager.remove(removedPublication);
            entityManager.flush();
        }
    }
}
Why doesn't my first version work?
I had a similar problem. I have a class and its children; when I removed the children from the parent, they were deleted from the DB. But when I then attached new children using merge on the parent class (CascadeType.ALL) with JPA/EclipseLink, the children were not created in the DB, only in the persistence context (JPA). I fixed this by doing the following:
1. I set shared-cache-mode to NONE in the persistence.xml file.
2. When I removed the children, I immediately executed this:
public void remove(T entity) {
    getEntityManager().remove(getEntityManager().merge(entity));
    getEntityManager().getEntityManagerFactory().getCache().evictAll();
}
And that's all. I hope this helps someone else.
See this reference: http://wiki.eclipse.org/EclipseLink/Examples/JPA/Caching
