Transform Optional String and return Date in Java 8? - java-8

I am currently parsing a nullable String into a Date. I am trying to use Optional to avoid an if statement. Here is what I have written so far:
Client client = new Client();
Optional.ofNullable(methodThatMayReturnStringOrNull())
        .ifPresent((s) -> {
            try {
                client.setBirthDate(DateUtils.parseDate(
                        StringUtils.substring(s, 0, 10),
                        new String[]{"yyyy-MM-dd"}));
            } catch (ParseException e) {
                throw new TechnicalException("error.parsing.date", e);
            }
        });
Is it possible to transform this lambda so I can extract it into a method similar to the following, but in Java 8 style?
private Date parse(String complexString) {
    Date birthDate = null;
    if (complexString != null) {
        try {
            birthDate = DateUtils.parseDate(
                    StringUtils.substring(complexString, 0, 10),
                    new String[]{"yyyy-MM-dd"});
        } catch (final ParseException e) {
            throw new TechnicalException("error.parsing.date", e);
        }
    }
    return birthDate;
}

Not sure how far you want to go, but you can start with
Optional<Date> date = Optional.ofNullable(methodThatMayReturnStringOrNull())
        .map((s) -> {
            try {
                return DateUtils.parseDate(
                        StringUtils.substring(s, 0, 10),
                        new String[]{"yyyy-MM-dd"});
            } catch (ParseException e) {
                throw new TechnicalException("error.parsing.date", e);
            }
        });
You might also consider using flatMap instead of map and returning an empty Optional instead of throwing an exception on error; it depends on how you want your flow to progress.
On a completely unrelated note, get rid of Date and use either Joda-Time or the new java.time classes ;)
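For illustration, here is a minimal sketch of that flatMap variant combined with java.time (the parseBirthDate name and the switch from Date to LocalDate are assumptions, not part of the original code):

import java.time.LocalDate;
import java.time.format.DateTimeFormatter;
import java.time.format.DateTimeParseException;
import java.util.Optional;

// Parse the first 10 characters as an ISO date; a null or unparseable input
// yields Optional.empty() instead of an exception.
private Optional<LocalDate> parseBirthDate(String complexString) {
    return Optional.ofNullable(complexString)
            .map(s -> s.length() > 10 ? s.substring(0, 10) : s)
            .flatMap(s -> {
                try {
                    return Optional.of(LocalDate.parse(s, DateTimeFormatter.ISO_LOCAL_DATE));
                } catch (DateTimeParseException e) {
                    return Optional.empty();
                }
            });
}

The caller then decides what an absent value means, e.g. parseBirthDate(methodThatMayReturnStringOrNull()).ifPresent(client::setBirthDate), assuming the setter is changed to accept a LocalDate.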


Best way to assert that a value is not null

This is my method.
public boolean authenticateAndPollCallbackResult(BankIdAuthRequest bankIdAuthRequest) {
    ResponseEntity<BankIdAuthResponse> authResponse = bankIdAuthentication(bankIdAuthRequest);
    AbstractApplicationForm applicationForm = applicationFormRepository.findByToken(bankIdAuthRequest.getRefID());
    try {
        // Add new bankId authentication to database.
        BankIdAuthenticationEntity bankIdAuthenticationEntity = new BankIdAuthenticationEntity();
        bankIdAuthenticationEntity.setAbstractApplicationForm(applicationForm);
        bankIdAuthenticationEntity.setAuthStatus(STATUS_PROGRESS);
        bankIdAuthenticationEntity.setOrderReference(authResponse.getBody().getOrderRef());
        bankIdAuthenticationEntity.setAutoStartToken(authResponse.getBody().getAutoStartToken());
        Long bankIdAuthenticationId = bankIdAuthenticationRepository.save(bankIdAuthenticationEntity).getId();
        BankIdAuthenticationEntity.AuthStatus authStatus;
        do {
            TimeUnit.MILLISECONDS.sleep(1500);
            authStatus = getAuthStatus(bankIdAuthenticationId);
            if (authStatus == BankIdAuthenticationEntity.AuthStatus.COMPLETED)
                return true;
            if (authStatus == BankIdAuthenticationEntity.AuthStatus.FAILED || authStatus == BankIdAuthenticationEntity.AuthStatus.NOT_ASSIGNED)
                return false;
        } while (authStatus == BankIdAuthenticationEntity.AuthStatus.PROGRESS);
    } catch (InterruptedException e) {
        log.error("InterruptedException: ", e);
        Thread.currentThread().interrupt();
    } catch (NullPointerException e) {
        log.error("Either BankId API not responding correctly. Check server connection", e);
    } catch (Exception e) {
        log.error("Exception: Polling collect endpoint method failed", e);
    }
    return false;
}
Now SonarQube warns that these two lines can return null (and they can):
bankIdAuthenticationEntity.setOrderReference(authResponse.getBody().getOrderRef());
bankIdAuthenticationEntity.setAutoStartToken(authResponse.getBody().getAutoStartToken());
But I don't know what the best way to check for null is.
I tried using Objects.requireNonNull, which throws a NullPointerException, and then I figured the NullPointerException catch block would handle it, but it just feels ugly and not correct.
Any suggestions or absolutely correct ways of doing this that I might have missed?
The problem is that authResponse.getBody() can be null. Right?
In this case you should check it first and either skip the two lines or throw an exception:
if (authResponse.getBody() != null) {
    bankIdAuthenticationEntity.setOrderReference(authResponse.getBody().getOrderRef());
    bankIdAuthenticationEntity.setAutoStartToken(authResponse.getBody().getAutoStartToken());
}
or
if (authResponse.getBody() == null) {
    throw new ....Exception();
}
And if the problem is that getOrderRef() or getAutoStartToken() could return null, you should check those values beforehand as well and handle the cases where they are null.
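For illustration, here is a minimal sketch of one way to make the check explicit instead of relying on the NullPointerException catch block: read the body once and fail fast with a descriptive message (the message text and the use of java.util.Objects are just one option, not the only correct way):

// Read the body once so the null check and both setters use the same reference.
BankIdAuthResponse body = Objects.requireNonNull(
        authResponse.getBody(),
        "BankId authentication returned an empty response body");
bankIdAuthenticationEntity.setOrderReference(body.getOrderRef());
bankIdAuthenticationEntity.setAutoStartToken(body.getAutoStartToken());

Objects.requireNonNull throws a NullPointerException carrying that message, so the failure cause shows up clearly in the logs, and SonarQube should stop flagging the dereference because the value is checked before use.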

GraphQL.ExecutionError: Error trying to resolve

Summary:
My GraphQL ExecuteAsync call returns a result that contains an error. According to the stack trace provided below, the system cannot resolve my custom type remitsGeneralSearch. The remitsGeneralSearch resolver can return a type called ClaimPaymentOrCheckSearchGraphType, which is a UnionGraphType.
StackTrace:
["GraphQL.ExecutionError: Error trying to resolve remitsGeneralSearch.\n ---> System.InvalidOperationException: Unexpected type: \n at GraphQL.Execution.ExecutionStrategy.BuildExecutionNode(ExecutionNode parent, IGraphType graphType, Field field, FieldType fieldDefinition, String[] path)\n at GraphQL.Execution.ExecutionStrategy.SetSubFieldNodes(ExecutionContext context, ObjectExecutionNode parent, Dictionary`2 fields)\n at GraphQL.Execution.ExecutionStrategy.SetSubFieldNodes(ExecutionContext context, ObjectExecutionNode parent)\n at GraphQL.Execution.ExecutionStrategy.ExecuteNodeAsync(ExecutionContext context, ExecutionNode node)\n --- End of inner exception stack trace ---"]4008305)
GraphQL Version: 2.4.0
Framework: .NET
OS: MacOS Catalina
Links Referenced: https://github.com/graphql-dotnet/graphql-dotnet/issues/964
CODE SNIPPETS:
RESOLVER:
FieldAsync<ClaimPaymentOrCheckSearchGraphType>(
    "remitsGeneralSearch",
    resolve: async context =>
    {
        var securityFilter = await GetUserRemitFilters(context);
        var range = context.GetRange();
        var sortFields = context.GetArgument<List<SortField>>("sort") ?? Enumerable.Empty<SortField>();
        var whereClaimPayment = context.GetArgument<ClaimPaymentSearchFilter>("whereClaimPayment");
        Connection<ClaimPaymentSearchRow> claimPaymentSearchRowResult;
        try
        {
            using (LogContext.PushProperty("where", whereClaimPayment, true))
            {
                //claimPaymentSearchRowResult = await DMAQueryService.GetRemitReadersAsync(context);
                var whereArguments = context.Arguments["whereClaimPayment"] as Dictionary<string, object>;
                claimPaymentSearchRowResult = await DMAQueryService.GetRemitReadersAsync(
                    range,
                    whereClaimPayment,
                    whereArguments,
                    sortFields,
                    securityFilter,
                    context.CancellationToken
                );
            }
        }
        catch (Exception e)
        {
            _logger.LogInformation("Exception occurred {e}", e);
            throw e;
        }
        var userRemitFilters = context.UserContext as Services.DMA.UserRemitFilters;
        if (claimPaymentSearchRowResult.EdgeCount > 0)
        {
            return claimPaymentSearchRowResult;
        }
        var _whereCheckSearch = context.GetArgument<CheckSearchFilter>("whereCheck");
        try
        {
            Connection<CheckSearchRow> checkSearchRowResult;
            using (LogContext.PushProperty("whereCheck", _whereCheckSearch, true))
            {
                checkSearchRowResult = await DMAQueryService.GetCheckReadersAsync(context);
                return checkSearchRowResult;
            }
        }
        catch (Exception e)
        {
            throw e;
        }
    },
    arguments: queryArguments
);
}
catch (Exception e)
{
    throw e;
}
Custom GraphType:
[Transient]
public class ClaimPaymentOrCheckSearchGraphType : UnionGraphType
{
    private readonly ILogger<ClaimPaymentOrCheckSearchGraphType> _logger;

    public ClaimPaymentOrCheckSearchGraphType(
        ILogger<ClaimPaymentOrCheckSearchGraphType> logger,
        ConnectionGraphType<ClaimPaymentSearchGraphType> claimPaymentSearchGraphType,
        ConnectionGraphType<CheckSearchGraphType> checkSearchGraphType
    )
    {
        _logger = logger;
        Type<ConnectionGraphType<ClaimPaymentSearchGraphType>>();
        Type<ConnectionGraphType<CheckSearchGraphType>>();
        ResolveType = obj =>
        {
            try
            {
                if (obj is Connection<ClaimPaymentSearchRow>)
                {
                    return claimPaymentSearchGraphType;
                }
                if (obj is Connection<CheckSearchRow>)
                {
                    return checkSearchGraphType;
                }
                throw new ArgumentOutOfRangeException($"Could not resolve graph type for {obj.GetType().Name}");
            }
            catch (Exception e)
            {
                _logger.LogInformation("ClaimPaymentOrCheckSearchGraphType Exception {e}: ", e);
                throw e;
            }
        };
    }
}
Link to the answer found here: https://github.com/graphql-dotnet/graphql-dotnet/issues/2674
Try replacing this:
Type<ConnectionGraphType<ClaimPaymentSearchGraphType>>();
Type<ConnectionGraphType<CheckSearchGraphType>>();
with this:
AddPossibleType(claimPaymentSearchGraphType);
AddPossibleType(checkSearchGraphType);
I'm thinking that if you're registering these types as transients, then the copy that gets registered during schema initialization is a different copy from the one that gets returned from ResolveType. Because of that, its fields' ResolvedType properties were never set, and so null is passed into BuildExecutionNode for the graphType instead of a resolved type.
If ResolveType could return a type rather than an instance, there wouldn't be an issue, but unfortunately that's not the way it works. Alternatively, you could register the type as a singleton.

What is the best way to fetch millions of rows at a time in spring boot?

I have a Spring Boot application, and for a particular feature I have to prepare a CSV every day for another service to use. The job runs every day at 6 AM and dumps the CSV on the server. The issue is that the data set is big: around 7.8 million rows. I am using Spring JPA to fetch all the records. Is there any better way to make it more efficient? Here's my code:
@Scheduled(cron = "0 1 6 * * ?")
public void saveMasterChildList() {
    log.debug("running write job");
    DateFormat dateFormatter = new SimpleDateFormat("dd_MM_yy");
    String currentDateTime = dateFormatter.format(new Date());
    String fileName = currentDateTime + "_Master_Child.csv";
    ICsvBeanWriter beanWriter = null;
    List<MasterChild> masterChildren = masterChildRepository.findByMsisdnIsNotNull();
    try {
        beanWriter = new CsvBeanWriter(new FileWriter(new File("/u01/edw_bill/", fileName)),
                CsvPreference.STANDARD_PREFERENCE);
        String[] header = {"msisdn"};
        String[] nameMapping = {"msisdn"};
        beanWriter.writeHeader(header);
        for (MasterChild masterChild : masterChildren) {
            beanWriter.write(masterChild, nameMapping);
        }
    } catch (IOException e) {
        log.debug("Error writing the CSV file {}", e.toString());
    } finally {
        if (beanWriter != null) {
            try {
                beanWriter.close();
            } catch (IOException e) {
                log.debug("Error closing the writer {}", e.toString());
            }
        }
    }
}
You could use pagination to split the data and load it chunk by chunk.
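As a rough illustration, a paged variant of the job might look like the sketch below. It assumes Spring Data 2.x (PageRequest.of), that the finder is changed to Page<MasterChild> findByMsisdnIsNotNull(Pageable pageable), and that "id" is a sortable key giving stable page boundaries; the page size of 50,000 is an arbitrary example value:

@Scheduled(cron = "0 1 6 * * ?")
public void saveMasterChildList() {
    String fileName = new SimpleDateFormat("dd_MM_yy").format(new Date()) + "_Master_Child.csv";
    String[] nameMapping = {"msisdn"};
    try (ICsvBeanWriter beanWriter = new CsvBeanWriter(
            new FileWriter(new File("/u01/edw_bill/", fileName)),
            CsvPreference.STANDARD_PREFERENCE)) {
        beanWriter.writeHeader(nameMapping);
        Pageable pageable = PageRequest.of(0, 50_000, Sort.by("id"));
        Page<MasterChild> page;
        do {
            // Only one page of entities is held in memory at a time instead of all 7.8 million rows.
            page = masterChildRepository.findByMsisdnIsNotNull(pageable);
            for (MasterChild masterChild : page.getContent()) {
                beanWriter.write(masterChild, nameMapping);
            }
            pageable = page.nextPageable();
        } while (page.hasNext());
    } catch (IOException e) {
        log.debug("Error writing the CSV file {}", e.toString());
    }
}

For an export this size you could also look at a repository method returning Stream<MasterChild> inside a read-only transaction, which avoids re-running an offset query for every page.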

Set Field Text, Java

I have two classes. MetaDataExtractor(GUI) and MetaData.
MetaData has the method which extracts the metadata from an image. MetaDataExtractor is designed to display the data in a JTextPane. (Please excuse the class names. I know it's a little confusing. I'm fairly new to Java).
MetaDataExtractor:
LongitudeField.setText("" + MetaDataTags.getLongitude());
MetaData:
public String getLongitude() {
    try {
        Metadata metadata = ImageMetadataReader.readMetadata(jpegFile);
        if (metadata.containsDirectory(GpsDirectory.class)) {
            GpsDirectory gpsDir = (GpsDirectory) metadata.getDirectory(GpsDirectory.class);
            GpsDescriptor gpsDesc = new GpsDescriptor(gpsDir);
            String Longitude = "" + gpsDesc.getGpsLongitudeDescription();
        }
    } catch (ImageProcessingException ex) {
        Logger.getLogger(MetaData.class.getName()).log(Level.SEVERE, null, ex);
        System.out.println("Error 1");
    } catch (IOException ex) {
        Logger.getLogger(MetaData.class.getName()).log(Level.SEVERE, null, ex);
        System.out.println("Error 2");
    }
    return longitude;
}
If I set the longitude to be displayed in the JTextPane, it returns "null". If, however, I set it to print to the command line, it prints the longitude fine.
Please excuse me if it's a simple solution. I'm still getting to grips with Java.
Java is case-sensitive, and you should first declare your variable outside of the try/catch statement.
Use an IDE like Eclipse to reduce syntax errors like these.
So you should have:
public String getLongitude() {
    String longitudeDesc = "";
    try {
        Metadata metadata = ImageMetadataReader.readMetadata(jpegFile);
        if (metadata.containsDirectory(GpsDirectory.class)) {
            GpsDirectory gpsDir = (GpsDirectory) metadata.getDirectory(GpsDirectory.class);
            GpsDescriptor gpsDesc = new GpsDescriptor(gpsDir);
            longitudeDesc = "" + gpsDesc.getGpsLongitudeDescription();
        }
    } catch (ImageProcessingException ex) {
        Logger.getLogger(MetaData.class.getName()).log(Level.SEVERE, null, ex);
        System.out.println("Error 1");
    } catch (IOException ex) {
        Logger.getLogger(MetaData.class.getName()).log(Level.SEVERE, null, ex);
        System.out.println("Error 2");
    }
    return longitudeDesc;
}

DD anomaly, and cleaning up database resources: is there a clean solution?

Here's a piece of code we've all written:
public CustomerTO getCustomerByCustDel(final String cust, final int del)
        throws SQLException {
    final PreparedStatement query = getFetchByCustDel();
    ResultSet records = null;
    try {
        query.setString(1, cust);
        query.setInt(2, del);
        records = query.executeQuery();
        return this.getCustomer(records);
    } finally {
        if (records != null) {
            records.close();
        }
        query.close();
    }
}
If you omit the finally block, you leave database resources dangling, which is obviously a potential problem. However, if you do what I've done here - set the ResultSet to null outside the try block and then set it to the desired value inside the block - PMD reports a 'DD anomaly'. In the documentation, a DD anomaly is described as follows:
DataflowAnomalyAnalysis: The dataflow analysis tracks local definitions, undefinitions and references to variables on different paths on the data flow. From those informations there can be found various problems. [...] DD - Anomaly: A recently defined variable is redefined. This is ominous but don't have to be a bug.
If you declare the ResultSet outside the block without setting a value, you rightly get a 'variable might not have been initialised' error when you do the if (records != null) test.
Now, in my opinion, my use here isn't a bug. But is there a way of rewriting this cleanly that would not trigger the PMD warning? I don't particularly want to disable PMD's DataflowAnomalyAnalysis rule, since identifying UR and DU anomalies would actually be useful; but these DD anomalies make me suspect I could be doing something better - and, if there's no better way of doing this, they amount to clutter (and I should perhaps look at whether I can rewrite the PMD rule).
I think this is clearer:
PreparedStatement query = getFetchByCustDel();
try {
    query.setString(1, cust);
    query.setInt(2, del);
    ResultSet records = query.executeQuery();
    try {
        return this.getCustomer(records);
    } finally {
        records.close();
    }
} finally {
    query.close();
}
Also, in your version the query doesn't get closed if records.close() throws an exception.
I think that DD anomaly rule is more of a bug than a feature.
Also, the way you free resources is a bit incomplete. For example:
PreparedStatement pstmt = null;
Statement st = null;
try {
    ...
} catch (final Exception e) {
    ...
} finally {
    try {
        if (pstmt != null) {
            pstmt.close();
        }
    } catch (final Exception e) {
        e.printStackTrace(System.err);
    } finally {
        try {
            if (st != null) {
                st.close();
            }
        } catch (final Exception e) {
            e.printStackTrace(System.err);
        }
    }
}
Moreover, even this is not quite right, because you should close resources like this:
PreparedStatement pstmt = null;
Throwable th = null;
try {
    ...
} catch (final Throwable e) {
    // <something here>
    th = e;
    throw e;
} finally {
    if (th == null) {
        pstmt.close();
    } else {
        try {
            if (pstmt != null) {
                pstmt.close();
            }
        } catch (Throwable u) {
        }
    }
}
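For what it's worth, on Java 7 and later the try-with-resources statement generates essentially this pattern for you: resources are closed in reverse order, and an exception thrown by close() while a primary exception is already in flight is attached as a suppressed exception instead of masking it. A minimal sketch of the original method in that style, assuming the same getFetchByCustDel() and getCustomer() helpers from the question; since the ResultSet is declared and assigned exactly once, the DD anomaly should not be reported either:

public CustomerTO getCustomerByCustDel(final String cust, final int del)
        throws SQLException {
    // Both resources are closed automatically in reverse order;
    // close() failures become suppressed exceptions.
    try (PreparedStatement query = getFetchByCustDel()) {
        query.setString(1, cust);
        query.setInt(2, del);
        try (ResultSet records = query.executeQuery()) {
            return this.getCustomer(records);
        }
    }
}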
