How to delete a large amount of data one by one from a table with their relations using the @Transactional annotation - spring-boot

I have a large amount of data that I want to purge from the database. There are about 6 tables, of which 3 have a many-to-many relationship with cascadeType; all the others are log and history tables independent of those 3.
I want to purge this data one by one, and if any record fails while being deleted, I have to undo only the current record, report it in the console, and keep deleting the others.
I am trying to use the @Transactional annotation with Spring Boot, but all purging stops if an error occurs.
How do I handle this kind of requirement?
Here is what I did:
@Transactional
private void purgeCards(List<CardEntity> cardsTobePurge) {
    List<Long> nextCardsNumberToUpdate = getNextCardsWhichWillNotBePurge(cardsTobePurge);
    TransactionTemplate lTransTemplate = new TransactionTemplate(transactionManager);
    lTransTemplate.setPropagationBehavior(TransactionTemplate.PROPAGATION_REQUIRED);
    lTransTemplate.execute(new TransactionCallback<Object>() {
        @Override
        public Object doInTransaction(TransactionStatus status) {
            cardsTobePurge.forEach(cardTobePurge -> {
                Long nextCardNumberOfCurrent = cardTobePurge.getNextCard();
                if (nextCardsNumberToUpdate.contains(nextCardNumberOfCurrent)) {
                    CardEntity cardToUnlik = cardRepository.findByCardNumber(nextCardNumberOfCurrent);
                    unLink(cardToUnlik);
                }
                log.info(BATCH_TITLE + " Removing card Number : " + cardTobePurge.getCardNumber() + " with Id : "
                        + cardTobePurge.getId());
                List<CardHistoryEntity> historyEntitiesOfThisCard = cardHistoryRepository.findByCard(cardTobePurge);
                List<LogCreationCardEntity> logCreationEntitiesForThisCard = logCreationCardRepository
                        .findByCardNumber(cardTobePurge.getCardNumber());
                List<LogCustomerMergeEntity> logCustomerMergeEntitiesForThisCard = logCustomerMergeRepository
                        .findByCard(cardTobePurge);
                cardHistoryRepository.deleteAll(historyEntitiesOfThisCard);
                logCreationCardRepository.deleteAll(logCreationEntitiesForThisCard);
                logCustomerMergeRepository.deleteAll(logCustomerMergeEntitiesForThisCard);
                cardRepository.delete(cardTobePurge);
            });
            return Boolean.TRUE;
        }
    });
}

As a solution to my question:
I worked with TransactionTemplate to manage transactions manually, so that if an exception is raised, a rollback is applied only to the current iteration and processing continues with the other cards.
private void purgeCards(List<CardEntity> cardsTobePurge) {
    int[] counter = { 0 }; // to simulate the exception
    List<Long> nextCardsNumberToUpdate = findNextCardsWhichWillNotBePurge(cardsTobePurge);
    cardsTobePurge.forEach(cardTobePurge -> {
        Long nextCardNumberOfCurrent = cardTobePurge.getNextCard();
        CardEntity cardToUnlik = null;
        counter[0]++; // to simulate the exception
        if (nextCardsNumberToUpdate.contains(nextCardNumberOfCurrent)) {
            cardToUnlik = cardRepository.findByCardNumber(nextCardNumberOfCurrent);
        }
        purgeCard(cardTobePurge, nextCardsNumberToUpdate, cardToUnlik, counter);
    });
}
private void purgeCard(@NonNull CardEntity cardToPurge, List<Long> nextCardsNumberToUpdate, CardEntity cardToUnlik,
        int[] counter) {
    TransactionTemplate lTransTemplate = new TransactionTemplate(transactionManager);
    lTransTemplate.setPropagationBehavior(TransactionTemplate.PROPAGATION_REQUIRED);
    lTransTemplate.execute(new TransactionCallbackWithoutResult() {
        @Override
        public void doInTransactionWithoutResult(TransactionStatus status) {
            try {
                if (cardToUnlik != null)
                    unLink(cardToUnlik);
                log.info(BATCH_TITLE + " Removing card Number : " + cardToPurge.getCardNumber() + " with Id : "
                        + cardToPurge.getId());
                List<CardHistoryEntity> historyEntitiesOfThisCard = cardHistoryRepository.findByCard(cardToPurge);
                List<LogCreationCardEntity> logCreationEntitiesForThisCard = logCreationCardRepository
                        .findByCardNumber(cardToPurge.getCardNumber());
                List<LogCustomerMergeEntity> logCustomerMergeEntitiesForThisCard = logCustomerMergeRepository
                        .findByCard(cardToPurge);
                cardHistoryRepository.deleteAll(historyEntitiesOfThisCard);
                logCreationCardRepository.deleteAll(logCreationEntitiesForThisCard);
                logCustomerMergeRepository.deleteAll(logCustomerMergeEntitiesForThisCard);
                cardRepository.delete(cardToPurge);
                if (counter[0] == 2) // to simulate the exception
                    throw new Exception(); // to simulate the exception
            } catch (Exception e) {
                status.setRollbackOnly();
                if (cardToPurge != null)
                    log.error(BATCH_TITLE + " Problem with card Number : " + cardToPurge.getCardNumber()
                            + " with Id : " + cardToPurge.getId(), e);
                else
                    log.error(BATCH_TITLE + "Card entity is null", e);
            }
        }
    });
}
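One caveat worth noting (my observation, not part of the original answer): PROPAGATION_REQUIRED opens a new transaction per execute() call only when none is already active, which is why this works here. If purgeCard were ever invoked while an outer transaction is open, setRollbackOnly() would mark that whole outer transaction for rollback. A minimal sketch of the stricter setting, assuming the same transactionManager field:
TransactionTemplate perCardTx = new TransactionTemplate(transactionManager);
// REQUIRES_NEW suspends any outer transaction and opens a fresh one per card,
// so a rollback can only ever discard the current card's work
perCardTx.setPropagationBehavior(TransactionDefinition.PROPAGATION_REQUIRES_NEW);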

Related

Duplication with Chunks in Spring Batch

I have a huge file that I need to read and dump into the DB. Any invalid records (invalid length, duplicate keys, etc.) need to be written to an error report. Due to the huge size of the file we tried using a chunk size (commit-interval) of 1000/5000/10000. In the process I found that the data was being processed redundantly due to the usage of chunks, and thus my error report is incorrect: it contains not only the actual invalid records from the input file but also duplicates from the chunks.
Code snippet:
@Bean
public Step readAndWriteStudentInfo() {
    return stepBuilderFactory.get("readAndWriteStudentInfo")
            .<Student, Student>chunk(5000).reader(studentFileReader()).faultTolerant()
            .skipPolicy(skipper).listener(listener).processor(new ItemProcessor<Student, Student>() {
                @Override
                public Student process(Student student) throws Exception {
                    // drop records we have already seen in this run
                    if (processedRecords.contains(student)) {
                        return null;
                    } else {
                        processedRecords.add(student);
                        return student;
                    }
                }
            }).writer(studentDBWriter()).build();
}
@Bean
public ItemReader<Student> studentFileReader() {
    FlatFileItemReader<Student> reader = new FlatFileItemReader<>();
    reader.setResource(new FileSystemResource(studentInfoFileName));
    reader.setLineMapper(new DefaultLineMapper<Student>() {
        {
            setLineTokenizer(new FixedLengthTokenizer() {
                {
                    setNames(classProperties50);
                    setColumns(range50);
                }
            });
            setFieldSetMapper(new BeanWrapperFieldSetMapper<Student>() {
                {
                    setTargetType(Student.class);
                }
            });
        }
    });
    reader.setSaveState(false);
    reader.setLinesToSkip(1);
    reader.setRecordSeparatorPolicy(new TrailerSkipper());
    return reader;
}
@Bean
public ItemWriter<Student> studentDBWriter() {
    JdbcBatchItemWriter<Student> writer = new JdbcBatchItemWriter<>();
    writer.setSql(insertQuery);
    writer.setDataSource(dataSource);
    writer.setItemSqlParameterSourceProvider(new BeanPropertyItemSqlParameterSourceProvider<Student>());
    return writer;
}
I've tried various chunk sizes: 10, 100, 1000, 5000. The accuracy of my error report deteriorates as the chunk size increases. Writing to the error report happens in my implementation of the skip policy; kindly let me know if that code is required too.
How do I ensure that my writer picks up a unique set of records in each chunk?
Skipper Implementation:
@Override
public boolean shouldSkip(Throwable t, int skipCount) throws SkipLimitExceededException {
    String exception = t.getClass().getSimpleName();
    if (t instanceof FileNotFoundException) {
        return false;
    }
    switch (exception) {
    case "FlatFileParseException":
        FlatFileParseException ffpe = (FlatFileParseException) t;
        String errorMessage = "Line no = " + ffpe.getLineNumber() + " " + ffpe.getMessage() + " Record is ["
                + ffpe.getInput() + "].\n";
        writeToRecon(errorMessage);
        return true;
    case "SQLException":
        SQLException sE = (SQLException) t;
        String sqlErrorMessage = sE.getErrorCode() + " Record is [" + sE.getCause() + "].\n";
        writeToRecon(sqlErrorMessage);
        return true;
    case "BatchUpdateException":
        BatchUpdateException batchUpdateException = (BatchUpdateException) t;
        String btchUpdtExceptionMsg = batchUpdateException.getMessage() + " " + batchUpdateException.getCause();
        writeToRecon(btchUpdtExceptionMsg);
        return true;
    default:
        return false;
    }
}
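One possible explanation, offered as an assumption rather than something confirmed in this thread: in a fault-tolerant step, a skippable failure during the write phase makes Spring Batch scan the chunk item by item and re-run the processor for the remaining items, so side effects in the processor and skip policy can fire more than once. If the processor has no transactional side effects, marking it as such lets Spring Batch cache its output instead of reprocessing. A minimal sketch reusing the beans above (the duplicate-filtering processor is omitted for brevity):
@Bean
public Step readAndWriteStudentInfo() {
    return stepBuilderFactory.get("readAndWriteStudentInfo")
            .<Student, Student>chunk(5000)
            .reader(studentFileReader())
            .faultTolerant()
            // cache processor output so items are not re-processed when the
            // chunk is scanned after a skippable write failure
            .processorNonTransactional()
            .skipPolicy(skipper)
            .listener(listener)
            .writer(studentDBWriter())
            .build();
}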

Confluent HDFS Connector is losing messages

Community, could you please help me understand why ~3% of my messages don't end up in HDFS? I wrote a simple producer in Java to generate 10 million messages.
public static final String TEST_SCHEMA = "{"
+ "\"type\":\"record\","
+ "\"name\":\"myrecord\","
+ "\"fields\":["
+ " { \"name\":\"str1\", \"type\":\"string\" },"
+ " { \"name\":\"str2\", \"type\":\"string\" },"
+ " { \"name\":\"int1\", \"type\":\"int\" }"
+ "]}";
public KafkaProducerWrapper(String topic) throws UnknownHostException {
// store topic name
this.topic = topic;
// initialize kafka producer
Properties config = new Properties();
config.put("client.id", InetAddress.getLocalHost().getHostName());
config.put("bootstrap.servers", "myserver-1:9092");
config.put("key.serializer", "io.confluent.kafka.serializers.KafkaAvroSerializer");
config.put("value.serializer", "io.confluent.kafka.serializers.KafkaAvroSerializer");
config.put("schema.registry.url", "http://myserver-1:8089");
config.put("acks", "all");
producer = new KafkaProducer(config);
// parse schema
Schema.Parser parser = new Schema.Parser();
schema = parser.parse(TEST_SCHEMA);
}
public void send() {
// generate key
int key = (int) (Math.random() * 20);
// generate record
GenericData.Record r = new GenericData.Record(schema);
r.put("str1", "text" + key);
r.put("str2", "text2" + key);
r.put("int1", key);
final ProducerRecord<String, GenericRecord> record = new ProducerRecord<>(topic, "K" + key, (GenericRecord) r);
producer.send(record, new Callback() {
public void onCompletion(RecordMetadata metadata, Exception e) {
if (e != null) {
logger.error("Send failed for record {}", record, e);
messageErrorCounter++;
return;
}
logger.debug("Send succeeded for record {}", record);
messageCounter++;
}
});
}
public String getStats() { return "Messages sent: " + messageCounter + "/" + messageErrorCounter; }
public long getMessageCounter() {
return messageCounter + messageErrorCounter;
}
public void close() {
producer.close();
}
public static void main(String[] args) throws InterruptedException, UnknownHostException {
// initialize kafka producer
KafkaProducerWrapper kafkaProducerWrapper = new KafkaProducerWrapper("my-test-topic");
long max = 10000000L;
for (long i = 0; i < max; i++) {
kafkaProducerWrapper.send();
}
logger.info("producer-demo sent all messages");
while (kafkaProducerWrapper.getMessageCounter() < max)
{
logger.info(kafkaProducerWrapper.getStats());
Thread.sleep(2000);
}
logger.info(kafkaProducerWrapper.getStats());
kafkaProducerWrapper.close();
}
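As a side note (my observation, not raised in the thread): messageCounter and messageErrorCounter are incremented on the producer's I/O thread inside the callback but read from the main thread without synchronization, so getStats() and the termination check may see stale values. A small sketch of a safer version, assuming the same fields:
// updated from the send callback, read from main(): use atomics so both
// threads see consistent values (java.util.concurrent.atomic.AtomicLong)
private final AtomicLong messageCounter = new AtomicLong();
private final AtomicLong messageErrorCounter = new AtomicLong();

public long getMessageCounter() {
    return messageCounter.get() + messageErrorCounter.get();
}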
And I use the Confluent HDFS Connector in standalone mode to write data to HDFS. The configuration is as follows:
name=hdfs-consumer-test
connector.class=io.confluent.connect.hdfs.HdfsSinkConnector
tasks.max=1
topics=my-test-topic
hdfs.url=hdfs://my-cluster/kafka-test
hadoop.conf.dir=/etc/hadoop/conf/
flush.size=100000
rotate.interval.ms=20000
# increase timeouts to avoid CommitFailedException
consumer.session.timeout.ms=300000
consumer.request.timeout.ms=310000
heartbeat.interval.ms=60000
session.timeout.ms=100000
The connector writes the data into HDFS, but after waiting for 20000 ms (due to rotate.interval.ms) not all messages are received.
scala> spark.read.avro("/kafka-test/topics/my-test-topic/partition=*/my-test-topic*")
.count()
res0: Long = 9749015
Any idea what the reason for this behavior is? Where is my mistake? I'm using Confluent 3.0.1 / Kafka 0.10.0.1.
Are you seeing that the last few messages are not moved to HDFS? If so, it's likely you are running into the issue described here: https://github.com/confluentinc/kafka-connect-hdfs/pull/100
Try sending one more message to the topic after rotate.interval.ms has expired to validate that this is what you are running into. If you need to rotate based on time, it's probably a good idea to upgrade to pick up the fix.
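A quick way to test that suggestion, sketched against the producer wrapper above (the sleep duration is just an assumption based on rotate.interval.ms=20000 in the config):
// wait past the rotation interval, then nudge the connector with one more
// record so it closes and commits the last open file
Thread.sleep(25000L); // > rotate.interval.ms (20000 ms above)
kafkaProducerWrapper.send();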

OpenDaylight flow notification

I am writing an OpenDaylight application that will extract all the flow rules as and when they are deleted, added or updated.
To get notifications when a flow is added, removed or updated, the application should provide a listener which implements the SalFlowListener interface.
However, when I create the application directory structure, it is not clear from the OpenDaylight tutorials online where the logic should go.
Additionally, there are compilation errors when the notification service is augmented using the YANG model.
Is this the right approach to getting the notifications, and are there any clear tutorials online that I can refer to?
Thanks.
public class ChangeListener implements DataChangeListener {
    private static final Logger logger = LoggerFactory.getLogger(ChangeListener.class);
    private DataBroker dataBroker;

    public ChangeListener(DataBroker broker) { // constructor renamed to match the class; otherwise this does not compile
        this.dataBroker = broker;
        // flow
        InstanceIdentifier<Flow> flowPath = InstanceIdentifier.builder(Nodes.class)
                .child(Node.class).augmentation(FlowCapableNode.class).child(Table.class)
                .child(Flow.class).build();
        dataBroker.registerDataChangeListener(LogicalDatastoreType.OPERATIONAL, flowPath, this,
                DataChangeScope.BASE);
    }

    @Override
    public void onDataChanged(AsyncDataChangeEvent<InstanceIdentifier<?>, DataObject> change) {
        if (change == null) {
            logger.info("ChangeListener ===>>> onDataChanged: change is null");
            return;
        }
        handleCreatedFlow(change);
        handleUpdatedFlow(change);
        handleDeletedFlow(change);
    }

    private void handleCreatedFlow(AsyncDataChangeEvent<InstanceIdentifier<?>, DataObject> change) {
        Map<InstanceIdentifier<?>, DataObject> createdData = change.getCreatedData();
        if (createdData == null) {
            logger.info("handleCreatedFlow ===>>> getCreatedData()==null");
            return;
        }
        for (Map.Entry<InstanceIdentifier<?>, DataObject> entry : createdData.entrySet()) {
            final DataObject dataObject = entry.getValue();
            if (dataObject instanceof Flow) {
                Flow flow = (Flow) dataObject;
                logger.info("create flow : " + flow.getCookie() + " " + flow.getKey().toString());
            }
        }
    }

    private void handleUpdatedFlow(AsyncDataChangeEvent<InstanceIdentifier<?>, DataObject> change) {
        Map<InstanceIdentifier<?>, DataObject> updatedData = change.getUpdatedData(); // was getCreatedData(), which reported creations as updates
        if (updatedData == null) {
            logger.info("handleUpdatedFlow ===>>> getUpdatedData()==null");
            return;
        }
        for (Map.Entry<InstanceIdentifier<?>, DataObject> entry : updatedData.entrySet()) {
            final DataObject dataObject = entry.getValue();
            if (dataObject instanceof Flow) {
                Flow flow = (Flow) dataObject;
                logger.info("update flow : " + flow.getCookie() + " " + flow.getKey().toString());
            }
        }
    }

    private void handleDeletedFlow(AsyncDataChangeEvent<InstanceIdentifier<?>, DataObject> change) {
        Map<InstanceIdentifier<?>, DataObject> originalData = change.getOriginalData();
        Set<InstanceIdentifier<?>> removedData = change.getRemovedPaths();
        if (removedData == null) {
            logger.info("handleDeletedFlow ===>>> getRemovedPaths()==null");
            return;
        }
        for (InstanceIdentifier<?> instanceIdentifier : removedData) {
            final DataObject dataObject = originalData.get(instanceIdentifier);
            if (dataObject instanceof Flow) {
                Flow flow = (Flow) dataObject;
                logger.info("remove flow : " + flow.getCookie() + " " + flow.getPriority() + " "
                        + flow.getMatch().getIpMatch() + " " + flow.getKey().toString());
            }
        }
    }
}
The way I do it is to set the InstanceIdentifier. By setting a different InstanceIdentifier you can also listen to changes on nodes, tables, and so on.
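For example, a hypothetical variation of the path built in the constructor above that listens at table granularity instead of per flow:
// stop one level higher in the tree to be notified about whole Table changes
InstanceIdentifier<Table> tablePath = InstanceIdentifier.builder(Nodes.class)
        .child(Node.class)
        .augmentation(FlowCapableNode.class)
        .child(Table.class)
        .build();
dataBroker.registerDataChangeListener(LogicalDatastoreType.OPERATIONAL, tablePath, this,
        DataChangeScope.BASE);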

Spring Security's searchForSingleEntryInternal method throws an exception if a record is not found

I'm working on an application that uses Spring Security's searchForSingleEntryInternal method. Is there a way to do the same thing without throwing an exception if a record is not found? I want to be able to create a condition that handles missing records.
What I want to change
if (results.size() == 0) {
throw new IncorrectResultSizeDataAccessException(1, 0);
}
From this method
/**
* Internal method extracted to avoid code duplication in AD search.
*/
public static DirContextOperations searchForSingleEntryInternal(DirContext ctx, SearchControls searchControls,
String base, String filter, Object[] params) throws NamingException {
final DistinguishedName ctxBaseDn = new DistinguishedName(ctx.getNameInNamespace());
final DistinguishedName searchBaseDn = new DistinguishedName(base);
final NamingEnumeration<SearchResult> resultsEnum = ctx.search(searchBaseDn, filter, params, searchControls);
if (logger.isDebugEnabled()) {
logger.debug("Searching for entry under DN '" + ctxBaseDn + "', base = '" + searchBaseDn + "', filter = '" + filter + "'");
}
Set<DirContextOperations> results = new HashSet<DirContextOperations>();
try {
while (resultsEnum.hasMore()) {
SearchResult searchResult = resultsEnum.next();
// Work out the DN of the matched entry
DistinguishedName dn = new DistinguishedName(new CompositeName(searchResult.getName()));
if (base.length() > 0) {
dn.prepend(searchBaseDn);
}
if (logger.isDebugEnabled()) {
logger.debug("Found DN: " + dn);
}
results.add(new DirContextAdapter(searchResult.getAttributes(), dn, ctxBaseDn));
}
} catch (PartialResultException e) {
LdapUtils.closeEnumeration(resultsEnum);
logger.info("Ignoring PartialResultException");
}
if (results.size() == 0) {
throw new IncorrectResultSizeDataAccessException(1, 0);
}
if (results.size() > 1) {
throw new IncorrectResultSizeDataAccessException(1, results.size());
}
return results.iterator().next();
}
I'm somewhat new to Spring and maybe I'm missing something obvious. Any advice would be much appreciated.
Easy fix: I just had to copy the searchForSingleEntryInternal method from Spring Security into my own project. From there I was able to tweak the exception handling so the application didn't come to a grinding halt if a record wasn't found.
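The tweak itself isn't shown in the answer; a minimal sketch of one way to do it, assuming the copied method keeps the same signature:
// in the copied method, replace the zero-results throw with a null return
if (results.size() == 0) {
    return null; // callers now check for null instead of catching IncorrectResultSizeDataAccessException
}
if (results.size() > 1) {
    throw new IncorrectResultSizeDataAccessException(1, results.size());
}
return results.iterator().next();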

JavaFX: ConcurrentModificationException while adding TreeItem objects to a TreeView in a separate thread

I have the following code
public void start(Stage primaryStage) {
BorderPane border_pane = new BorderPane();
TreeView tree = addTreeView(); //TreeView on the LEFT
border_pane.setLeft(tree);
/* more stuff added to the border_pane here... */
Scene scene = new Scene(border_pane, 900, 700);
scene.setFill(Color.GHOSTWHITE);
primaryStage.setTitle("PlugControl v0.1e");
primaryStage.setScene(scene);
primaryStage.show();
}
public static void main(String[] args) {
launch(args);
}
With addTreeView being a function that reads data off an SQL DB and adds around ~35 TreeItems based on that data. The addition of TreeItems to treeItemRoot is done in a separate thread. NOTE: treeItemRoot is declared in the main class and is null before here.
public TreeView addTreeView() { //Our treeView is positioned on the LEFT
treeItemRoot = new PlugTreeItem<>("Active Plugs", new ImageView(new Image(getClass().getResourceAsStream("graphics/plugicon.png"))), new Plug()); //Root of the tree, contains a dummy Plug object.
selectedTreeItem = treeItemRoot;
treeItemRoot.setExpanded(true); //always expand it
selectedTreeItem.getPlugItem()
.getSIHUid().addListener(new ChangeListener<String>() {
@Override
public void changed(ObservableValue<? extends String> observable, String oldValue, String newValue
) {
System.err.println("changed " + oldValue + "->" + newValue);
}
}
);
TreeView<String> treeView = new TreeView<>(treeItemRoot); //Build the tree with our root node.
final Task task;
task = new Task<Void>() {
@Override
protected Void call() throws Exception {
//=========== SQL STUFF BEGINS HERE ============================
Statement sta = null;
ResultSet result_set = null;
Connection conn = null;
try {
try {
System.err.println("Loading JDBC driver...");
Class.forName("com.mysql.jdbc.Driver");
System.err.println("Driver loaded!");
} catch (ClassNotFoundException e) {
throw new RuntimeException("Cannot find JDBC driver in the classpath!", e);
}
System.err.println("Connecting to database...");
conn = DriverManager.getConnection("[DB link here]", "[username]", "[password]"); //Username is PlugControl, pw is woof
System.err.println("Connected to Database!");
sta = conn.createStatement();
String sql_query = "SELECT * FROM pwnodes INNER JOIN pwcomports ON pwnodes.NetworkID = pwcomports.NetworkID WHERE pwnodes.connection = 'on' ORDER BY pwnodes.Location";
result_set = sta.executeQuery(sql_query);
System.err.println("SQL query successfuly executed!");
int count = 0;
while (result_set.next()) {
Plug pl = null; //MARKER: We might need to do switch (result_set.getString("Server")) for SIHU1 and SIHU2.
count++;
pl = new Plug(result_set.getString("SIHUid"), result_set.getString("sensorID"), result_set.getString("Location"), result_set.getString("Appliance"), result_set.getString("Type"), result_set.getString("connection"), result_set.getString("Server"), result_set.getString("ServerIP"));
PlugTreeItem<String> pti = new PlugTreeItem(pl.getSIHUid().getValue() + " " + pl.getLocation() + " " + pl.getAppliance(), new ImageView(new Image(getClass().getResourceAsStream("graphics/smiley.png"))), pl); //icon does not work in children
treeItemRoot.getChildren().add(pti); //CONCURRENCY ERRORS HERE
}
System.err.println("ALERT SQL QUERY RESULTS: " + count);
} catch (SQLException e) //linked try clause # line 50
{
throw new RuntimeException("Cannot connect the database!", e);
} finally { // Time to wrap things up, by closing all open SQL procs.
try {
if (sta != null) {
sta.close();
}
if (result_set != null) {
result_set.close();
}
if (conn != null) {
System.err.println("Closing the connection.");
conn.close();
}
} catch (SQLException e) //We might as well ignore this, but just in case.
{
throw new RuntimeException("Error while closing up statement, result set and connection!", e);
}
}
//============== SQL STUFF ENDS HERE ===========================
System.err.println("Finished");
return null;
}
};
new Thread(task).start(); //Run the task!
treeView.getSelectionModel()
.selectedItemProperty().addListener(new ChangeListener() {
@Override
public void changed(ObservableValue observable, Object oldValue, Object newValue
) {
selectedTreeItem = (PlugTreeItem<String>) newValue;
System.err.println("DEBUG: Selection plug SIHUid: " + selectedTreeItem.getPlugItem().print()); //MARKER: REMOVE
updateTextFields(); //Update TextAreas.
if (!"DUMMY".equals(selectedTreeItem.getPlugItem().getSIHUid().getValue())) {
buttonOn.setDisable(false);
buttonOff.setDisable(false);
} else {
buttonOn.setDisable(true);
buttonOff.setDisable(true);
}
}
}
);
return treeView;
}
Once every 5-6 runs I receive a ConcurrentModificationException, I think because the task thread doesn't finish before addTreeView's returned TreeView is added to the border_pane, so the UI begins iterating through the tree while items are still being added to it?
Executing C:\Users\74\Documents\NetBeansProjects\PlugControl_v0.5\dist\run1559674105\PlugControl.jar using platform C:\Program Files\Java\jdk1.7.0_25\jre/bin/java
Loading JDBC driver...
Driver loaded!
Connecting to database...
Connected to Database!
SQL query successfuly executed!
java.util.ConcurrentModificationException
at java.util.ArrayList$Itr.checkForComodification(ArrayList.java:819)
at java.util.ArrayList$Itr.next(ArrayList.java:791)
at com.sun.javafx.collections.ObservableListWrapper$ObservableListIterator.next(ObservableListWrapper.java:681)
at javafx.scene.control.TreeItem.updateExpandedDescendentCount(TreeItem.java:788)
at javafx.scene.control.TreeItem.getExpandedDescendentCount(TreeItem.java:777)
at javafx.scene.control.TreeView.getExpandedDescendantCount(TreeView.java:864)
at javafx.scene.control.TreeView.updateTreeItemCount(TreeView.java:873)
at javafx.scene.control.TreeView.impl_getTreeItemCount(TreeView.java:533)
at com.sun.javafx.scene.control.skin.TreeViewSkin.getItemCount(TreeViewSkin.java:207)
at com.sun.javafx.scene.control.skin.TreeViewSkin.updateItemCount(TreeViewSkin.java:220)
at com.sun.javafx.scene.control.skin.TreeViewSkin.handleControlPropertyChanged(TreeViewSkin.java:135)
at com.sun.javafx.scene.control.skin.SkinBase$3.changed(SkinBase.java:282)
at javafx.beans.value.WeakChangeListener.changed(WeakChangeListener.java:107)
at com.sun.javafx.binding.ExpressionHelper$SingleChange.fireValueChangedEvent(ExpressionHelper.java:196)
at com.sun.javafx.binding.ExpressionHelper.fireValueChangedEvent(ExpressionHelper.java:100)
at javafx.beans.property.IntegerPropertyBase.fireValueChangedEvent(IntegerPropertyBase.java:123)
at javafx.beans.property.IntegerPropertyBase.markInvalid(IntegerPropertyBase.java:130)
at javafx.beans.property.IntegerPropertyBase.set(IntegerPropertyBase.java:163)
at javafx.scene.control.TreeView.setTreeItemCount(TreeView.java:515)
at javafx.scene.control.TreeView.updateTreeItemCount(TreeView.java:876)
at javafx.scene.control.TreeView.impl_getTreeItemCount(TreeView.java:533)
at javafx.scene.control.TreeCell.updateItem(TreeCell.java:391)
at javafx.scene.control.TreeCell.access$000(TreeCell.java:67)
at javafx.scene.control.TreeCell$1.invalidated(TreeCell.java:95)
at com.sun.javafx.binding.ExpressionHelper$SingleInvalidation.fireValueChangedEvent(ExpressionHelper.java:155)
at com.sun.javafx.binding.ExpressionHelper.fireValueChangedEvent(ExpressionHelper.java:100)
at javafx.beans.property.ReadOnlyIntegerWrapper$ReadOnlyPropertyImpl.fireValueChangedEvent(ReadOnlyIntegerWrapper.java:195)
at javafx.beans.property.ReadOnlyIntegerWrapper.fireValueChangedEvent(ReadOnlyIntegerWrapper.java:161)
at javafx.beans.property.IntegerPropertyBase.markInvalid(IntegerPropertyBase.java:130)
at javafx.beans.property.IntegerPropertyBase.set(IntegerPropertyBase.java:163)
at javafx.scene.control.IndexedCell.updateIndex(IndexedCell.java:112)
at com.sun.javafx.scene.control.skin.VirtualFlow.setCellIndex(VirtualFlow.java:1596)
at com.sun.javafx.scene.control.skin.VirtualFlow.addLeadingCells(VirtualFlow.java:1049)
at com.sun.javafx.scene.control.skin.VirtualFlow.layoutChildren(VirtualFlow.java:1005)
at com.sun.javafx.scene.control.skin.VirtualFlow.setCellCount(VirtualFlow.java:206)
at com.sun.javafx.scene.control.skin.TreeViewSkin.updateItemCount(TreeViewSkin.java:225)
at com.sun.javafx.scene.control.skin.TreeViewSkin.handleControlPropertyChanged(TreeViewSkin.java:135)
at com.sun.javafx.scene.control.skin.SkinBase$3.changed(SkinBase.java:282)
at javafx.beans.value.WeakChangeListener.changed(WeakChangeListener.java:107)
at com.sun.javafx.binding.ExpressionHelper$SingleChange.fireValueChangedEvent(ExpressionHelper.java:196)
at com.sun.javafx.binding.ExpressionHelper.fireValueChangedEvent(ExpressionHelper.java:100)
at javafx.beans.property.IntegerPropertyBase.fireValueChangedEvent(IntegerPropertyBase.java:123)
at javafx.beans.property.IntegerPropertyBase.markInvalid(IntegerPropertyBase.java:130)
at javafx.beans.property.IntegerPropertyBase.set(IntegerPropertyBase.java:163)
at javafx.scene.control.TreeView.setTreeItemCount(TreeView.java:515)
at javafx.scene.control.TreeView.updateTreeItemCount(TreeView.java:876)
at javafx.scene.control.TreeView.impl_getTreeItemCount(TreeView.java:533)
at javafx.scene.control.TreeCell.updateItem(TreeCell.java:391)
at javafx.scene.control.TreeCell.access$000(TreeCell.java:67)
at javafx.scene.control.TreeCell$1.invalidated(TreeCell.java:95)
at com.sun.javafx.binding.ExpressionHelper$SingleInvalidation.fireValueChangedEvent(ExpressionHelper.java:155)
at com.sun.javafx.binding.ExpressionHelper.fireValueChangedEvent(ExpressionHelper.java:100)
at javafx.beans.property.ReadOnlyIntegerWrapper$ReadOnlyPropertyImpl.fireValueChangedEvent(ReadOnlyIntegerWrapper.java:195)
at javafx.beans.property.ReadOnlyIntegerWrapper.fireValueChangedEvent(ReadOnlyIntegerWrapper.java:161)
at javafx.beans.property.IntegerPropertyBase.markInvalid(IntegerPropertyBase.java:130)
at javafx.beans.property.IntegerPropertyBase.set(IntegerPropertyBase.java:163)
at javafx.scene.control.IndexedCell.updateIndex(IndexedCell.java:112)
at com.sun.javafx.scene.control.skin.VirtualFlow.setCellIndex(VirtualFlow.java:1596)
at com.sun.javafx.scene.control.skin.VirtualFlow.getCell(VirtualFlow.java:1500)
at com.sun.javafx.scene.control.skin.VirtualFlow.getCellLength(VirtualFlow.java:1523)
at com.sun.javafx.scene.control.skin.VirtualFlow$3.call(VirtualFlow.java:478)
at com.sun.javafx.scene.control.skin.VirtualFlow$3.call(VirtualFlow.java:476)
at com.sun.javafx.scene.control.skin.PositionMapper.computeViewportOffset(PositionMapper.java:143)
at com.sun.javafx.scene.control.skin.VirtualFlow.layoutChildren(VirtualFlow.java:1001)
at javafx.scene.Parent.layout(Parent.java:1018)
at javafx.scene.Parent.layout(Parent.java:1028)
at javafx.scene.Parent.layout(Parent.java:1028)
at javafx.scene.Parent.layout(Parent.java:1028)
at javafx.scene.Scene.layoutDirtyRoots(Scene.java:516)
at javafx.scene.Scene.doLayoutPass(Scene.java:487)
at javafx.scene.Scene.access$3900(Scene.java:170)
at javafx.scene.Scene$ScenePulseListener.pulse(Scene.java:2203)
at com.sun.javafx.tk.Toolkit.firePulse(Toolkit.java:363)
at com.sun.javafx.tk.quantum.QuantumToolkit.pulse(QuantumToolkit.java:460)
at com.sun.javafx.tk.quantum.QuantumToolkit$9.run(QuantumToolkit.java:329)
at com.sun.glass.ui.win.WinApplication._runLoop(Native Method)
at com.sun.glass.ui.win.WinApplication.access$100(WinApplication.java:29)
at com.sun.glass.ui.win.WinApplication$3$1.run(WinApplication.java:73)
at java.lang.Thread.run(Thread.java:724)
ALERT SQL QUERY RESULTS: 35
Closing the connection.
Finished
Any help/advice on dealing with the issue, guys? The exception doesn't point me towards a place in my code, and so far I'm working on hunches.
EDIT: For reference, PlugTreeItem is just a TreeItem<> that also carries a Plug with it, Plug being a class of mine that holds a few String values. Nothing special.
public class PlugTreeItem<T> extends TreeItem { /* code */ }
I would suggest you build a List<Plug> inside your while loop rather than creating each TreeItem and adding it to the tree there, because all operations on JavaFX controls must be done on the JavaFX Application Thread, not on the task thread!
Create a list outside the Thread body:
List<Plug> listOfPlugs = new ArrayList<Plug>();
Then, in the while loop, you can write
int count = 0;
while (result_set.next()) {
    count++;
    Plug pl = new Plug(result_set.getString("SIHUid"),
            result_set.getString("sensorID"), result_set.getString("Location"),
            result_set.getString("Appliance"), result_set.getString("Type"),
            result_set.getString("connection"), result_set.getString("Server"),
            result_set.getString("ServerIP"));
    listOfPlugs.add(pl); // was listOfPlugs.add(p1), which does not compile
}
Then register an onSucceeded handler before starting the thread (registering it after start() risks missing a fast completion) and build the TreeItems there, on the JavaFX Application Thread:
task.setOnSucceeded(new EventHandler<WorkerStateEvent>() {
    @Override
    public void handle(WorkerStateEvent workerStateEvent) {
        // runs on the JavaFX Application Thread, so it is safe to touch the tree
        for (Plug pl : listOfPlugs) {
            PlugTreeItem<String> pti = new PlugTreeItem(pl.getSIHUid().getValue()
                    + " " + pl.getLocation() + " " + pl.getAppliance(),
                    new ImageView(new Image(
                            getClass().getResourceAsStream("graphics/smiley.png"))), pl);
            treeItemRoot.getChildren().add(pti);
        }
    }
});
new Thread(task).start();
I don't see you synchronizing with the JavaFX Application Thread, which is required when updating the UI from another thread.
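A minimal sketch of that synchronization, assuming the same fields as the question's code: keep the SQL work on the background thread and hand only the tree mutation to the FX Application Thread via Platform.runLater (anonymous class used to match the question's Java 7 setup).
// inside the Task's call() method, in place of the direct add:
final PlugTreeItem<String> item = pti;
Platform.runLater(new Runnable() {
    @Override
    public void run() {
        treeItemRoot.getChildren().add(item);
    }
});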
