Can anyone help me with a Spring Batch issue? (Unintended Spring Batch schedule)

The implemented feature sends an LMS text message to the user at each alarm time.
A total of four alarms are sent each day (9:00, 13:00, 19:00, 21:00).
A log entry is written on every run, regardless of success.
Nothing was recorded in the log, but when I looked at the batch metadata in the DB, I found unintended COMPLETED executions.
Issue:
The batch executed successfully at 9:00 and 13:00 on the 18th.
But at 13:37, which is not even a scheduled time, it executed anyway (and FAILED).
It then executed again at 13:38, 13:40, 13:42, and 13:44 (all COMPLETED).
Q1. Why was the batch executed when it wasn't a scheduled execution time?
Q2. I write a log entry whenever the batch executes and an SMS is sent. The log was printed normally at 9:00 and 13:00,
but nothing was logged for the unscheduled runs (13:37, 13:38, 13:40, 13:42, 13:44).
I checked the Spring Boot service and the Tomcat service, which run together on one server; CPU and memory usage are normal.
Environment:
Spring Boot (2.2.6.RELEASE)
Spring Boot - embedded Tomcat
===== Start Scheduler =====
@Slf4j // Lombok (assumed; the post uses 'log' without showing its declaration)
@RequiredArgsConstructor // Lombok (assumed; injects the two final fields below)
@Component
public class DosageAlarmScheduler {
public static final int MORNING_HOUR = 9;
public static final int LUNCH_HOUR = 13;
public static final int DINNER_HOUR = 19;
public static final int BEFORE_SLEEP_HOUR = 21;
private final JobLauncher jobLauncher; // used below but not shown in the original post
private final Job alarmJob; // used below but not shown in the original post
@Scheduled(cron = "0 0 */1 * * *") // every hour
public void executeDosageAlarmJob() {
LocalDateTime nowDateTime = LocalDateTime.now();
try {
if(isExecuteTime(nowDateTime)) {
log.info("[Send LMS], {}", nowDateTime);
EatFixCd eatFixCd = currentEatFixCd(nowDateTime);
jobLauncher.run(
alarmJob,
new JobParametersBuilder()
.addString("currentDate", nowDateTime.toString())
.addString("eatFixCodeValue", eatFixCd.getCodeValue())
.toJobParameters()
);
} else {
log.info("[Not Send LMS], {}", nowDateTime);
}
} catch (JobExecutionAlreadyRunningException e) {
log.error("[JobExecutionAlreadyRunningException]", e);
} catch (JobRestartException e) {
log.error("[JobRestartException]", e);
} catch (JobInstanceAlreadyCompleteException e) {
log.error("[JobInstanceAlreadyCompleteException]", e);
} catch (JobParametersInvalidException e) {
log.error("[JobParametersInvalidException]", e);
} catch(Exception e) {
log.error("[Exception]", e);
}
}
/* Start private method */
private boolean isExecuteTime(LocalDateTime nowDateTime) {
return nowDateTime.getHour() == MORNING_HOUR
|| nowDateTime.getHour() == LUNCH_HOUR
|| nowDateTime.getHour() == DINNER_HOUR
|| nowDateTime.getHour() == BEFORE_SLEEP_HOUR;
}
private EatFixCd currentEatFixCd(LocalDateTime nowDateTime) {
switch(nowDateTime.getHour()) {
case MORNING_HOUR:
return EatFixCd.MORNING;
case LUNCH_HOUR:
return EatFixCd.LUNCH;
case DINNER_HOUR:
return EatFixCd.DINNER;
case BEFORE_SLEEP_HOUR:
return EatFixCd.BEFORE_SLEEP;
default:
throw new RuntimeException("Not Dosage Time");
}
}
/* End private method */
}
===== End Scheduler =====
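Note: since the four alarm hours are fixed, the guard inside the hourly job could also be expressed directly in the cron expression. A minimal sketch (assuming Spring's six-field cron syntax, seconds first):

@Scheduled(cron = "0 0 9,13,19,21 * * *") // fire only at 09:00, 13:00, 19:00 and 21:00
public void executeDosageAlarmJob() {
// ... launch the alarm job exactly as above; no isExecuteTime() check needed
}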
===== Start Job =====
@Slf4j // Lombok (assumed; 'log' is used below)
@RequiredArgsConstructor // Lombok (assumed; injects the final factories below)
@Configuration
public class DosageAlarmConfiguration {
private final int chunkSize = 20;
private final JobBuilderFactory jobBuilderFactory;
private final StepBuilderFactory stepBuilderFactory;
private final EntityManagerFactory entityManagerFactory;
@Bean
public Job dosageAlarmJob() {
log.info("[dosageAlarmJob excute]");
return jobBuilderFactory.get("dosageAlarmJob")
.start(dosageAlarmStep(null, null)).build();
}
@Bean
@JobScope
public Step dosageAlarmStep(
@Value("#{jobParameters[currentDate]}") String currentDate,
@Value("#{jobParameters[eatFixCodeValue]}") String eatFixCodeValue
) {
log.info("[dosageAlarm Step excute]");
return stepBuilderFactory.get("dosageAlarmStep")
.<Object[], DosageReceiverInfoDto>chunk(chunkSize)
.reader(dosageAlarmReader(currentDate, eatFixCodeValue))
.processor(dosageAlarmProcessor(currentDate, eatFixCodeValue))
.writer(dosageAlarmWriter(currentDate, eatFixCodeValue))
.build();
}
@Bean
@StepScope
public JpaPagingItemReader<Object[]> dosageAlarmReader(
@Value("#{jobParameters[currentDate]}") String currentDate,
@Value("#{jobParameters[eatFixCodeValue]}") String eatFixCodeValue
) {
log.info("[dosageAlarm Reader excute : {}, {}]", currentDate, eatFixCodeValue);
if(currentDate == null) {
return null;
} else {
JpaPagingItemReader<Object[]> jpaPagingItemReader = new JpaPagingItemReader<>();
jpaPagingItemReader.setName("dosageAlarmReader");
jpaPagingItemReader.setEntityManagerFactory(entityManagerFactory);
jpaPagingItemReader.setPageSize(chunkSize);
jpaPagingItemReader.setQueryString("select das from DosageAlarm das where :currentDate between das.startDate and das.endDate ");
HashMap<String, Object> parameterValues = new HashMap<>();
parameterValues.put("currentDate", LocalDateTime.parse(currentDate).toLocalDate());
jpaPagingItemReader.setParameterValues(parameterValues);
return jpaPagingItemReader;
}
}
@Bean
@StepScope
public ItemProcessor<Object[], DosageReceiverInfoDto> dosageAlarmProcessor(
@Value("#{jobParameters[currentDate]}") String currentDate,
@Value("#{jobParameters[eatFixCodeValue]}") String eatFixCodeValue
) {
log.info("[dosageAlarm Processor excute : {}, {}]", currentDate, eatFixCodeValue);
// ... convert to DosageReceiverInfoDto ...
}
@Bean
@StepScope
public ItemWriter<DosageReceiverInfoDto> dosageAlarmWriter(
@Value("#{jobParameters[currentDate]}") String currentDate,
@Value("#{jobParameters[eatFixCodeValue]}") String eatFixCodeValue
) {
log.info("[dosageAlarm Writer excute : {}, {}]", currentDate, eatFixCodeValue);
// ... make the list (reqMessageDtoList) ...
if(reqMessageDtoList != null) {
sendMessages(reqMessageDtoList);
} else {
log.info("[reqMessageDtoList not Exist]");
}
}
public SmsExternalSendResDto sendMessages(List<reqMessagesDto> reqMessageDtoList) {
log.info("[receiveList] smsTypeCd : {}, contentTypeCd : {}, messages : {}", smsTypeCd.LMS, contentTypeCd.COMM, reqMessageDtoList);
// ... send the messages ...
}
}
===== End Job =====
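Note: on Spring Batch 4.x (which Boot 2.2.6 pulls in), the same reader can also be written with the builder API. A minimal sketch, reusing the query and parameters from the post (behaviour unchanged, just more compact):

@Bean
@StepScope
public JpaPagingItemReader<Object[]> dosageAlarmReader(
@Value("#{jobParameters[currentDate]}") String currentDate) {
Map<String, Object> parameterValues = Collections.singletonMap(
"currentDate", LocalDateTime.parse(currentDate).toLocalDate());
return new JpaPagingItemReaderBuilder<Object[]>()
.name("dosageAlarmReader")
.entityManagerFactory(entityManagerFactory)
.pageSize(chunkSize)
.queryString("select das from DosageAlarm das where :currentDate between das.startDate and das.endDate")
.parameterValues(parameterValues)
.build();
}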
Thank you. I want to fix my problem, and I hope this question helps other people too.

Related

How can I use @Autowired in a Runnable in Spring Boot?

I have a few MongoTemplate and repository beans, and I need to call them using @Autowired in my Runnable class, which is executed by an executor class using multithreading. The problem is that when I run the application, the @Autowired MongoTemplate and repositories throw a NullPointerException.
Executor class:
@Component
public class MessageConsumer implements ConsumerSeekAware {
@Autowired
AlarmDataRepository alarmDataRepository;
int assignableCores = ((Runtime.getRuntime().availableProcessors()));
ExecutorService executor = Executors.newFixedThreadPool(
assignableCores > 1 ? assignableCores : 1
);
int counter = 0;
List<String> uniqueRecords = new ArrayList<String>();
@KafkaListener(topics = "teltonikaTest", groupId = "xyz")
public void processMessages(@Payload List<String> payload, @Header(KafkaHeaders.RECEIVED_PARTITION_ID) List<Integer> partitions, @Header(KafkaHeaders.OFFSET) List<Long> offsets) throws UnsupportedEncodingException, DecodeException {
System.out.println("assignable resources are: " + assignableCores);
log.info("Batch Size is: {}", payload.size());
if(counter==0){
log.info("Teletonica Packets Received!");
}
for (int i = 0; i < payload.size(); i++) {
log.info("processing message='{}' with partition off-set='{}'", payload.get(i), partitions.get(i) + " _" + offsets.get(i));
}
uniqueRecords = payload.stream().distinct().collect(Collectors.toList());
Runnable worker = new TeltonikaWorkerThread(uniqueRecords);
executor.execute(worker);
counter++;
}
}
public class TeltonikaWorkerThread implements Runnable{
List<String> records;
List<CurrentDevice> currentDevices = new ArrayList<>();
@Autowired
CurrentDeviceRepository currentDeviceRepository;
@Autowired
MongoTemplate mongoTemplate;
public TeltonikaWorkerThread(List<String> records) {
this.records = records;
}
public void run() {
try {
processMessage();
} catch (UnsupportedEncodingException e) {
e.printStackTrace();
} catch (DecodeException e) {
e.printStackTrace();
}
}
public void processMessage() throws UnsupportedEncodingException,DecodeException {
for(Object record : records){
if(!"0".equals(record)){
try{
int IMEILength = record.toString().indexOf("FF");
String IMEI = record.toString().substring(0,IMEILength);
}
catch (Exception e){
e.printStackTrace();
}
}
}
}
}
If I understand correctly, your problem is about multiple beans, and Spring doesn't know which one should be injected. There are several options here.
For example, you can use the @Qualifier annotation based on the bean name, or the @Primary annotation.
If your problem is something else, please add an example to your question.
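For illustration, a minimal sketch of those two options (the bean and qualifier names here are made up; MongoDatabaseFactory is the recent Spring Data MongoDB type, older versions use MongoDbFactory):

@Configuration
public class MongoConfig {

@Bean
@Primary // used wherever no qualifier is specified
public MongoTemplate primaryMongoTemplate(MongoDatabaseFactory factory) {
return new MongoTemplate(factory);
}

@Bean
public MongoTemplate archiveMongoTemplate(MongoDatabaseFactory factory) {
return new MongoTemplate(factory);
}
}

// at the injection point, select a specific bean by name:
@Autowired
@Qualifier("archiveMongoTemplate")
private MongoTemplate mongoTemplate;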

Read New File While Doing Processing For A Field In Spring Batch

I have a fixed-length input file that I am reading using Spring Batch.
I have already implemented the Job, Step, Processor, etc.
Here is the sample code.
@Configuration
public class BatchConfig {
private JobBuilderFactory jobBuilderFactory;
private StepBuilderFactory stepBuilderFactory;
@Value("${inputFile}")
private Resource resource;
@Autowired
public BatchConfig(JobBuilderFactory jobBuilderFactory, StepBuilderFactory stepBuilderFactory) {
this.jobBuilderFactory = jobBuilderFactory;
this.stepBuilderFactory = stepBuilderFactory;
}
@Bean
public Job job() {
return this.jobBuilderFactory.get("JOB-Load")
.start(fileReadingStep())
.build();
}
@Bean
public Step fileReadingStep() {
return stepBuilderFactory.get("File-Read-Step1")
.<Employee,EmpOutput>chunk(1000)
.reader(itemReader())
.processor(new CustomFileProcesser())
.writer(new CustomFileWriter())
.faultTolerant()
.skipPolicy(skipPolicy())
.build();
}
@Bean
public FlatFileItemReader<Employee> itemReader() {
FlatFileItemReader<Employee> flatFileItemReader = new FlatFileItemReader<Employee>();
flatFileItemReader.setResource(resource);
flatFileItemReader.setName("File-Reader");
flatFileItemReader.setLineMapper(LineMapper());
return flatFileItemReader;
}
@Bean
public LineMapper<Employee> LineMapper() {
DefaultLineMapper<Employee> defaultLineMapper = new DefaultLineMapper<Employee>();
FixedLengthTokenizer fixedLengthTokenizer = new FixedLengthTokenizer();
fixedLengthTokenizer.setNames(new String[] { "employeeId", "employeeName", "employeeSalary" });
fixedLengthTokenizer.setColumns(new Range[] { new Range(1, 9), new Range(10, 20), new Range(20, 30)});
fixedLengthTokenizer.setStrict(false);
defaultLineMapper.setLineTokenizer(fixedLengthTokenizer);
defaultLineMapper.setFieldSetMapper(new CustomFieldSetMapper());
return defaultLineMapper;
}
@Bean
public JobSkipPolicy skipPolicy() {
return new JobSkipPolicy();
}
}
For processing I have added some sample code of what I need, but if I add a BufferedReader here, the job takes much more time.
@Component
public class CustomFileProcesser implements ItemProcessor<Employee, EmpOutput> {
@Override
public EmpOutput process(Employee item) throws Exception {
EmpOutput emp = new EmpOutput();
emp.setEmployeeSalary(checkSal(item.getEmployeeSalary()));
return emp;
}
public String checkSal(String sal) {
// need to read the another file
// required to do some kind of validation
// after that final result need to return
File f1 = new File("C:\\Users\\John\\New\\salary.txt");
try (BufferedReader br = new BufferedReader(new FileReader(f1))) { // try-with-resources so the reader is closed
String s = br.readLine();
while (s != null) {
String value = s.substring(5, 7);
if (value.equals(sal))
sal = value;
else
sal = "5000";
s = br.readLine();
}
} catch (Exception e) {
e.printStackTrace();
}
return sal;
}
// other fields need to be checked by reading several different files.
// These new files contain more than 30k records each.
// all are fixed-length files.
// I need to get the field by giving the index
}
While processing one or more fields, I need to check against another file by reading it (a file I will read from the file system/cloud).
While processing the data for 5 fields, I need to read 5 different files; I check the field details inside those files and then generate the result, which is processed further.
You can cache the content of the file in memory and do your check against the cache instead of re-reading the entire file from disk for each item.
You can find an example here: Spring Batch With Annotation and Caching.
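A minimal sketch of that idea against the code in the question (same file path and substring positions; whether a plain Set is the right cache shape depends on the real validation):

public class CustomFileProcesser implements ItemProcessor<Employee, EmpOutput>, StepExecutionListener {

private final Set<String> salaryCache = new HashSet<>();

@Override
public void beforeStep(StepExecution stepExecution) {
// read the reference file once per step instead of once per item
try (BufferedReader br = new BufferedReader(new FileReader("C:\\Users\\John\\New\\salary.txt"))) {
String s;
while ((s = br.readLine()) != null) {
salaryCache.add(s.substring(5, 7));
}
} catch (IOException e) {
throw new IllegalStateException("Could not load salary reference file", e);
}
}

@Override
public ExitStatus afterStep(StepExecution stepExecution) {
salaryCache.clear(); // release the cache when the step finishes
return stepExecution.getExitStatus();
}

@Override
public EmpOutput process(Employee item) {
EmpOutput emp = new EmpOutput();
// check against the in-memory cache instead of re-reading the file
emp.setEmployeeSalary(salaryCache.contains(item.getEmployeeSalary())
? item.getEmployeeSalary() : "5000");
return emp;
}
}

Spring Batch auto-registers a processor that implements a listener interface when it is passed to the step builder, so beforeStep should fire without extra wiring.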

Spring Boot WebSocket URL Not Responding and RxJS Call Repetition?

I'm trying to follow a guide to WebSockets at https://www.devglan.com/spring-boot/spring-boot-angular-websocket
I'd like it to respond to ws://localhost:8448/wsb/softlayer-cost-file, but I'm sure I misunderstood something. I'd like to get it to receive a binary file and issue periodic updates as the file is being processed.
Questions are:
How come Spring does not respond to my requests, despite all the URLs I try (see below)?
Does my RxJS call run once and then conclude, or does it keep running until some closure has happened? Sorry to ask what might be obvious to others.
On my Spring Boot Server start, I see no errors. After about 5-7 minutes of running, I saw the following log message:
INFO o.s.w.s.c.WebSocketMessageBrokerStats - WebSocketSession[0 current WS(0)-HttpStream(0)-HttpPoll(0), 0 total, 0 closed abnormally (0 connect failure, 0 send limit, 0 transport error)], stompSubProtocol[processed CONNECT(0)-CONNECTED(0)-DISCONNECT(0)], stompBrokerRelay[null], inboundChannel[pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 0], outboundChannel[pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 0], sockJsScheduler[pool size = 6, active threads = 1, queued tasks = 0, completed tasks = 5]
I've pointed my browser at these URLs and can't get the Spring Boot server to show any reaction:
ws://localhost:8448/app/message
ws://localhost:8448/greeting/app/message
ws://localhost:8448/topic
ws://localhost:8448/queue
(I got the initial request formed in Firefox, then clicked edit/resend to try again).
WebSocketConfig.java
@Configuration
@EnableWebSocketMessageBroker
public class WebSocketConfig extends AbstractWebSocketMessageBrokerConfigurer {
@Autowired
CostFileUploadWebSocketHandler costFileUploadWebSocketHandler;
public void registerWebSocketHandlers(WebSocketHandlerRegistry registry) {
registry.addHandler(new SocketTextHandler(), "/wst");
registry.addHandler(costFileUploadWebSocketHandler, "/wsb/softlayer-cost-file");
}
@Override
public void configureMessageBroker(MessageBrokerRegistry config) {
config.enableSimpleBroker("/topic/", "/queue/");
config.setApplicationDestinationPrefixes("/app");
}
@Override
public void registerStompEndpoints(StompEndpointRegistry registry) {
registry.addEndpoint("/greeting").setAllowedOrigins("*");
// .withSockJS();
}
}
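One thing worth checking (an observation, not a guaranteed fix): registerWebSocketHandlers is not a method of AbstractWebSocketMessageBrokerConfigurer, so as written Spring never calls it, and the /wst and /wsb/softlayer-cost-file handlers are never mapped. Plain binary/text handlers are registered through @EnableWebSocket and WebSocketConfigurer, separately from the STOMP broker config. A minimal sketch:

@Configuration
@EnableWebSocket
public class PlainWebSocketConfig implements WebSocketConfigurer {

@Autowired
CostFileUploadWebSocketHandler costFileUploadWebSocketHandler;

@Override
public void registerWebSocketHandlers(WebSocketHandlerRegistry registry) {
registry.addHandler(new SocketTextHandler(), "/wst");
registry.addHandler(costFileUploadWebSocketHandler, "/wsb/softlayer-cost-file");
}
}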
CostFileUploadWebSocketHandler.java
@Component
public class CostFileUploadWebSocketHandler extends BinaryWebSocketHandler {
private final Logger logger = LoggerFactory.getLogger(this.getClass());
private SoftLayerJobService softLayerJobService;
private SoftLayerService softLayerService;
private AuthenticationFacade authenticationFacade;
@Autowired
CostFileUploadWebSocketHandler(SoftLayerJobService softLayerJobService, SoftLayerService softLayerService,
AuthenticationFacade authenticationFacade) {
this.softLayerJobService = softLayerJobService;
this.softLayerService = softLayerService;
this.authenticationFacade = authenticationFacade;
}
Map<WebSocketSession, FileUploadInFlight> sessionToFileMap = new WeakHashMap<>();
@Override
public boolean supportsPartialMessages() {
return true;
}
class WebSocketProgressReporter implements ProgressReporter {
private WebSocketSession session;
public WebSocketProgressReporter(WebSocketSession session) {
this.session = session;
}
@Override
public void reportCurrentProgress(BatchStatus currentBatchStatus, long currentPercentage) {
try {
session.sendMessage(new TextMessage("BatchStatus "+currentBatchStatus));
session.sendMessage(new TextMessage("Percentage Complete "+currentPercentage));
} catch(IOException e) {
throw new RuntimeException(e);
}
}
}
@Override
protected void handleBinaryMessage(WebSocketSession session, BinaryMessage message) throws Exception {
ByteBuffer payload = message.getPayload();
FileUploadInFlight inflightUpload = sessionToFileMap.get(session);
if (inflightUpload == null) {
throw new IllegalStateException("This is not expected");
}
inflightUpload.append(payload);
if (message.isLast()) {
File fileNameSaved = save(inflightUpload.name, "websocket", inflightUpload.bos.toByteArray());
BatchStatus currentBatchStatus = BatchStatus.UNKNOWN;
long percentageComplete;
ProgressReporter progressReporter = new WebSocketProgressReporter(session);
SoftLayerCostFileJobExecutionThread softLayerCostFileJobExecutionThread =
new SoftLayerCostFileJobExecutionThread(softLayerService, softLayerJobService, fileNameSaved,progressReporter);
logger.info("In main thread about to begin separate thread");
ForkJoinPool.commonPool().submit(softLayerCostFileJobExecutionThread);
while(!softLayerCostFileJobExecutionThread.jobDone());
// softLayerCostFileJobExecutionThread.run();
// Wait for above to complete somehow
// StepExecution foundStepExecution = jobExplorer.getJobExecution(
// jobExecutionThread.getJobExecutionResult().getJobExecution().getId()
// ).getStepExecutions().stream().filter(stepExecution->stepExecution.getStepName().equals("softlayerUploadFile")).findFirst().orElseGet(null);
// if (!"COMPLETED".equals(jobExecutionResult.getExitStatus())) {
// throw new UploadFileException(file.getOriginalFilename() + " exit status: " + jobExecutionResult.getExitStatus());
// }
logger.info("In main thread after separate thread submitted");
session.sendMessage(new TextMessage("UPLOAD "+inflightUpload.name));
session.close();
sessionToFileMap.remove(session);
logger.info("Uploaded "+inflightUpload.name);
}
String response = "Upload Chunk: size "+ payload.array().length;
logger.debug(response);
}
private File save(String fileName, String prefix, byte[] data) throws IOException {
Path basePath = Paths.get(".", "uploads", prefix, UUID.randomUUID().toString());
logger.info("Saving incoming cost file "+fileName+" to "+basePath);
Files.createDirectories(basePath);
FileChannel channel = new FileOutputStream(Paths.get(basePath.toString(), fileName).toFile(), false).getChannel();
channel.write(ByteBuffer.wrap(data));
channel.close();
return new File(basePath.getFileName().toString());
}
@Override
public void afterConnectionEstablished(WebSocketSession session) throws Exception {
sessionToFileMap.put(session, new FileUploadInFlight(session));
}
static class FileUploadInFlight {
private final Logger logger = LoggerFactory.getLogger(this.getClass());
String name;
String uniqueUploadId;
ByteArrayOutputStream bos = new ByteArrayOutputStream();
/**
* Fragile constructor - beware not prod ready
* @param session
*/
FileUploadInFlight(WebSocketSession session) {
String query = session.getUri().getQuery();
String uploadSessionIdBase64 = query.split("=")[1];
String uploadSessionId = new String(Base64Utils.decodeUrlSafe(uploadSessionIdBase64.getBytes()));
List<String> sessionIdentifiers = Splitter.on("\\").splitToList(uploadSessionId);
String uniqueUploadId = session.getRemoteAddress().toString()+sessionIdentifiers.get(0);
String fileName = sessionIdentifiers.get(1);
this.name = fileName;
this.uniqueUploadId = uniqueUploadId;
logger.info("Preparing upload for "+this.name+" uploadSessionId "+uploadSessionId);
}
public void append(ByteBuffer byteBuffer) throws IOException{
bos.write(byteBuffer.array());
}
}
}
Below is a snippet of the Angular code where I make the call to the WebSocket. The service is intended to receive a file, then provide regular updates of the percentage complete until processing is done. Does this call need to be in a loop, or does the socket run until it's closed?
Angular Snippet of call to WebSocket:
this.softlayerService.uploadBlueReportFile(this.blueReportFile)
.subscribe(data => {
this.showLoaderBlueReport = false;
this.successBlueReport = true;
this.blueReportFileName = "No file selected";
this.responseBlueReport = 'File '.concat(data.fileName).concat(' ').concat('is ').concat(data.exitStatus);
this.blueReportSelected = false;
this.getCurrentUserFiles();
},
(error)=>{
if(error.status === 504){
this.showLoaderBlueReport = false;
this.stillProcessing = true;
}else{
this.showLoaderBlueReport = false;
this.displayUploadBlueReportsError(error, 'File upload failed');
}
});
}

How can I create many Kafka topics during Spring Boot application startup?

I have this configuration:
@Configuration
public class KafkaTopicConfig {
private final TopicProperties topics;
public KafkaTopicConfig(TopicProperties topics) {
this.topics = topics;
}
@Bean
public NewTopic newTopicImportCharge() {
TopicProperties.Topic topic = topics.getTopicNameByType(MessageType.IMPORT_CHARGES.name());
return new NewTopic(topic.getTopicName(), topic.getNumPartitions(), topic.getReplicationFactor());
}
@Bean
public NewTopic newTopicImportPayment() {
TopicProperties.Topic topic = topics.getTopicNameByType(MessageType.IMPORT_PAYMENTS.name());
return new NewTopic(topic.getTopicName(), topic.getNumPartitions(), topic.getReplicationFactor());
}
@Bean
public NewTopic newTopicImportCatalog() {
TopicProperties.Topic topic = topics.getTopicNameByType(MessageType.IMPORT_CATALOGS.name());
return new NewTopic(topic.getTopicName(), topic.getNumPartitions(), topic.getReplicationFactor());
}
}
I can add 10 different topics to TopicProperties, and I don't want to create each similar bean manually. Is there a way to create all the topics in spring-kafka, or in plain Spring?
Use an admin client directly; you can get a pre-built properties map from Boot's KafkaAdmin.
@SpringBootApplication
public class So55336461Application {
public static void main(String[] args) {
SpringApplication.run(So55336461Application.class, args);
}
@Bean
public ApplicationRunner runner(KafkaAdmin kafkaAdmin) {
return args -> {
AdminClient admin = AdminClient.create(kafkaAdmin.getConfigurationProperties());
List<NewTopic> topics = new ArrayList<>();
// build list
admin.createTopics(topics).all().get();
};
}
}
EDIT
To check if they already exist, or if the partitions need to be increased, the KafkaAdmin has this logic...
private void addTopicsIfNeeded(AdminClient adminClient, Collection<NewTopic> topics) {
if (topics.size() > 0) {
Map<String, NewTopic> topicNameToTopic = new HashMap<>();
topics.forEach(t -> topicNameToTopic.compute(t.name(), (k, v) -> t));
DescribeTopicsResult topicInfo = adminClient
.describeTopics(topics.stream()
.map(NewTopic::name)
.collect(Collectors.toList()));
List<NewTopic> topicsToAdd = new ArrayList<>();
Map<String, NewPartitions> topicsToModify = checkPartitions(topicNameToTopic, topicInfo, topicsToAdd);
if (topicsToAdd.size() > 0) {
addTopics(adminClient, topicsToAdd);
}
if (topicsToModify.size() > 0) {
modifyTopics(adminClient, topicsToModify);
}
}
}
private Map<String, NewPartitions> checkPartitions(Map<String, NewTopic> topicNameToTopic,
DescribeTopicsResult topicInfo, List<NewTopic> topicsToAdd) {
Map<String, NewPartitions> topicsToModify = new HashMap<>();
topicInfo.values().forEach((n, f) -> {
NewTopic topic = topicNameToTopic.get(n);
try {
TopicDescription topicDescription = f.get(this.operationTimeout, TimeUnit.SECONDS);
if (topic.numPartitions() < topicDescription.partitions().size()) {
if (LOGGER.isInfoEnabled()) {
LOGGER.info(String.format(
"Topic '%s' exists but has a different partition count: %d not %d", n,
topicDescription.partitions().size(), topic.numPartitions()));
}
}
else if (topic.numPartitions() > topicDescription.partitions().size()) {
if (LOGGER.isInfoEnabled()) {
LOGGER.info(String.format(
"Topic '%s' exists but has a different partition count: %d not %d, increasing "
+ "if the broker supports it", n,
topicDescription.partitions().size(), topic.numPartitions()));
}
topicsToModify.put(n, NewPartitions.increaseTo(topic.numPartitions()));
}
}
catch (@SuppressWarnings("unused") InterruptedException e) {
Thread.currentThread().interrupt();
}
catch (TimeoutException e) {
throw new KafkaException("Timed out waiting to get existing topics", e);
}
catch (@SuppressWarnings("unused") ExecutionException e) {
topicsToAdd.add(topic);
}
});
return topicsToModify;
}
Currently we can just use KafkaAdmin.NewTopics (see the Spring documentation).
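For example (a sketch, assuming every MessageType has an entry in TopicProperties):

@Bean
public KafkaAdmin.NewTopics allTopics(TopicProperties topics) {
// one NewTopic per configured message type, declared in a single bean (spring-kafka 2.7+)
return new KafkaAdmin.NewTopics(
Arrays.stream(MessageType.values())
.map(type -> topics.getTopicNameByType(type.name()))
.map(t -> TopicBuilder.name(t.getTopicName())
.partitions(t.getNumPartitions())
.replicas(t.getReplicationFactor())
.build())
.toArray(NewTopic[]::new));
}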

transactional unit testing with ObjectifyService - no rollback happening

We are trying to use Google Cloud Datastore in our project, with Objectify as the ORM since Google recommends it. I have carefully used and tried everything I could read about and think of, but somehow the transactions don't seem to work. Following is my code and setup.
@RunWith(SpringRunner.class)
@EnableAspectJAutoProxy(proxyTargetClass = true)
@ContextConfiguration(classes = { CoreTestConfiguration.class })
public class TestObjectifyTransactionAspect {
private final LocalServiceTestHelper helper = new LocalServiceTestHelper(
// Our tests assume strong consistency
new LocalDatastoreServiceTestConfig().setApplyAllHighRepJobPolicy(),
new LocalMemcacheServiceTestConfig(), new LocalTaskQueueTestConfig());
private Closeable closeableSession;
@Autowired
private DummyService dummyService;
@BeforeClass
public static void setUpBeforeClass() {
// Reset the Factory so that all translators work properly.
ObjectifyService.setFactory(new ObjectifyFactory());
}
/**
* @throws java.lang.Exception
*/
@Before
public void setUp() throws Exception {
System.setProperty("DATASTORE_EMULATOR_HOST", "localhost:8081");
ObjectifyService.register(UserEntity.class);
this.closeableSession = ObjectifyService.begin();
this.helper.setUp();
}
/**
* @throws java.lang.Exception
*/
@After
public void tearDown() throws Exception {
AsyncCacheFilter.complete();
this.closeableSession.close();
this.helper.tearDown();
}
@Test
public void testTransactionMutationRollback() {
// save initial list of users
List<UserEntity> users = new ArrayList<UserEntity>();
for (int i = 1; i <= 10; i++) {
UserEntity user = new UserEntity();
user.setAge(i);
user.setUsername("username_" + i);
users.add(user);
}
ObjectifyService.ofy().save().entities(users).now();
try {
dummyService.mutateDataWithException("username_1", 6L);
} catch (Exception e) {
e.printStackTrace();
}
List<UserEntity> users2 = this.dummyService.findAllUsers();
Assert.assertEquals("Size mismatch on rollback", users2.size(), 10);
boolean foundUserIdSix = false;
for (UserEntity userEntity : users2) {
if (userEntity.getUserId() == 1) {
Assert.assertEquals("Username update failed in transactional context rollback.", "username_1",
userEntity.getUsername());
}
if (userEntity.getUserId() == 6) {
foundUserIdSix = true;
}
}
if (!foundUserIdSix) {
Assert.fail("Deleted user with userId 6 but it is not rolledback.");
}
}
}
Since I am using Spring, the idea is to use an aspect with a custom annotation to weave Objectify's transact around the Spring service bean methods that call my DAOs.
But somehow the update from ObjectifyService.ofy().save().entities(users).now(); is not getting rolled back, even though the exception thrown causes Objectify to run its rollback code. I tried printing the ObjectifyImpl instance hash codes and they are all the same, but it still doesn't roll back.
Can someone help me understand what I am doing wrong? I haven't tried the actual web-based setup yet... if it can't pass transactional test cases, there is no point in actual transaction usage in a web request scenario.
Update: adding the aspect, services, and DAO as well to give the complete picture. The code uses Spring Boot.
The DAO class. Note that I am not using any transactions here, because per the code of com.googlecode.objectify.impl.TransactorNo.transactOnce(ObjectifyImpl<O>, Work<R>), a transactional ObjectifyImpl is flushed and committed in this method, which I don't want. I want the commit to happen once and everything else to join in on that transaction. Basically this is the wrong code in com.googlecode.objectify.impl.TransactorNo... I will try to explain my understanding later in the question.
@Component
public class DummyDaoImpl implements DummyDao {
@Override
public List<UserEntity> loadAll() {
Query<UserEntity> query = ObjectifyService.ofy().transactionless().load().type(UserEntity.class);
return query.list();
}
@Override
public List<UserEntity> findByUserId(Long userId) {
Query<UserEntity> query = ObjectifyService.ofy().transactionless().load().type(UserEntity.class);
//query = query.filterKey(Key.create(UserEntity.class, userId));
return query.list();
}
@Override
public List<UserEntity> findByUsername(String username) {
return ObjectifyService.ofy().transactionless().load().type(UserEntity.class).filter("username", username).list();
}
@Override
public void update(UserEntity userEntity) {
ObjectifyService.ofy().save().entity(userEntity);
}
@Override
public void update(Iterable<UserEntity> userEntities) {
ObjectifyService.ofy().save().entities(userEntities);
}
@Override
public void delete(Long userId) {
ObjectifyService.ofy().delete().key(Key.create(UserEntity.class, userId));
}
}
Below is the Service class
@Service
public class DummyServiceImpl implements DummyService {
private static final Logger LOGGER = LoggerFactory.getLogger(DummyServiceImpl.class);
@Autowired
private DummyDao dummyDao;
public void saveDummydata() {
List<UserEntity> users = new ArrayList<UserEntity>();
for (int i = 1; i <= 10; i++) {
UserEntity user = new UserEntity();
user.setAge(i);
user.setUsername("username_" + i);
users.add(user);
}
this.dummyDao.update(users);
}
/* (non-Javadoc)
* @see com.bbb.core.objectify.test.services.DummyService#mutateDataWithException(java.lang.String, java.lang.Long)
*/
@Override
@ObjectifyTransactional
public void mutateDataWithException(String usernameToMutate, Long userIdToDelete) throws Exception {
//update one
LOGGER.info("Attempting to update UserEntity with username={}", "username_1");
List<UserEntity> mutatedUsersList = new ArrayList<UserEntity>();
List<UserEntity> users = dummyDao.findByUsername(usernameToMutate);
for (UserEntity userEntity : users) {
userEntity.setUsername(userEntity.getUsername() + "_updated");
mutatedUsersList.add(userEntity);
}
dummyDao.update(mutatedUsersList);
//delete another
UserEntity user = dummyDao.findByUserId(userIdToDelete).get(0);
LOGGER.info("Attempting to delete UserEntity with userId={}", user.getUserId());
dummyDao.delete(user.getUserId());
throw new RuntimeException("Dummy Exception");
}
/* (non-Javadoc)
* @see com.bbb.core.objectify.test.services.DummyService#findAllUsers()
*/
@Override
public List<UserEntity> findAllUsers() {
return dummyDao.loadAll();
}
Aspect which wraps the method annoted with ObjectifyTransactional as a transact work.
@Aspect
@Component
public class ObjectifyTransactionAspect {
private static final Logger LOGGER = LoggerFactory.getLogger(ObjectifyTransactionAspect.class);
@Around(value = "execution(* *(..)) && @annotation(objectifyTransactional)")
public Object objectifyTransactAdvise(final ProceedingJoinPoint pjp, ObjectifyTransactional objectifyTransactional) throws Throwable {
try {
Object result = null;
Work<Object> work = new Work<Object>() {
@Override
public Object run() {
try {
return pjp.proceed();
} catch (Throwable throwable) {
throw new ObjectifyTransactionExceptionWrapper(throwable);
}
}
};
switch (objectifyTransactional.propagation()) {
case REQUIRES_NEW:
int limitTries = objectifyTransactional.limitTries();
if(limitTries <= 0) {
Exception illegalStateException = new IllegalStateException("limitTries must be more than 0.");
throw new ObjectifyTransactionExceptionWrapper(illegalStateException);
} else {
if(limitTries == Integer.MAX_VALUE) {
result = ObjectifyService.ofy().transactNew(work);
} else {
result = ObjectifyService.ofy().transactNew(limitTries, work);
}
}
break;
case NOT_SUPPORTED :
case NEVER :
case MANDATORY :
result = ObjectifyService.ofy().execute(objectifyTransactional.propagation(), work);
break;
case REQUIRED :
case SUPPORTS :
result = ObjectifyService.ofy().transact(work);
break;
default:
break;
}
return result;
} catch (ObjectifyTransactionExceptionWrapper e) {
String packageName = pjp.getSignature().getDeclaringTypeName();
String methodName = pjp.getSignature().getName();
LOGGER.error("An exception occured while executing [{}.{}] in a transactional context."
, packageName, methodName, e);
throw e.getCause();
} catch (Throwable ex) {
String packageName = pjp.getSignature().getDeclaringTypeName();
String methodName = pjp.getSignature().getName();
String fullyQualifiedmethodName = packageName + "." + methodName;
throw new RuntimeException("Unexpected exception while executing ["
+ fullyQualifiedmethodName + "] in a transactional context.", ex);
}
}
}
Now the problem code part that i see is as follows in com.googlecode.objectify.impl.TransactorNo:
@Override
public <R> R transact(ObjectifyImpl<O> parent, Work<R> work) {
return this.transactNew(parent, Integer.MAX_VALUE, work);
}
@Override
public <R> R transactNew(ObjectifyImpl<O> parent, int limitTries, Work<R> work) {
Preconditions.checkArgument(limitTries >= 1);
while (true) {
try {
return transactOnce(parent, work);
} catch (ConcurrentModificationException ex) {
if (--limitTries > 0) {
if (log.isLoggable(Level.WARNING))
log.warning("Optimistic concurrency failure for " + work + " (retrying): " + ex);
if (log.isLoggable(Level.FINEST))
log.log(Level.FINEST, "Details of optimistic concurrency failure", ex);
} else {
throw ex;
}
}
}
}
private <R> R transactOnce(ObjectifyImpl<O> parent, Work<R> work) {
ObjectifyImpl<O> txnOfy = startTransaction(parent);
ObjectifyService.push(txnOfy);
boolean committedSuccessfully = false;
try {
R result = work.run();
txnOfy.flush();
txnOfy.getTransaction().commit();
committedSuccessfully = true;
return result;
}
finally
{
if (txnOfy.getTransaction().isActive()) {
try {
txnOfy.getTransaction().rollback();
} catch (RuntimeException ex) {
log.log(Level.SEVERE, "Rollback failed, suppressing error", ex);
}
}
ObjectifyService.pop();
if (committedSuccessfully) {
txnOfy.getTransaction().runCommitListeners();
}
}
}
transactOnce is, by code/design, always using a single transaction to do things. It will either commit or roll back the transaction; there is no provision to chain transactions the way a normal enterprise app would want: a service calls multiple DAO methods in a single transaction and commits or rolls back depending on how things look.
Keeping this in mind, I removed all annotations and transact method calls from my DAO methods so that they don't start an explicit transaction; the aspect on the service wraps the service method in transact, and ultimately in transactOnce... so basically the service method runs in a transaction and no new transaction gets fired again. This is a very basic scenario; in actual production apps, services can call other service methods that might have the annotation on them, and we could still end up with a chained transaction... but anyway, that is a different problem to solve.
I know NoSQL stores don't support write consistency at the table or inter-table level, so am I asking too much of Google Cloud Datastore?
