I have a Spring Boot app. I was running some performance tests in the controller, and I realized that whichever query I put first in the controller takes ages compared to the others... (the DB is a remote connection, but I can't change that.)
long t1 = System.nanoTime();
menuPriceSummaryService.findAllVegan().stream();
long t2 = System.nanoTime();
long elapsedTimeInSeconds = (t2 - t1) / 1000000000;
System.out.println("elapsedTimeInSeconds1 -> " + elapsedTimeInSeconds);
t1 = System.nanoTime();
menuPriceSummaryService.findAllVegan();
t2 = System.nanoTime();
elapsedTimeInSeconds = (t2 - t1) / 1000000000;
System.out.println("elapsedTimeInSeconds2 -> " + elapsedTimeInSeconds);
t1 = System.nanoTime();
menuPriceSummaryService.findAllVegan().parallelStream();
t2 = System.nanoTime();
elapsedTimeInSeconds = (t2 - t1) / 1000000000;
System.out.println("elapsedTimeInSeconds3 -> " + elapsedTimeInSeconds);
t1 = System.nanoTime();
menuPriceSummaryService.findAllVegan().parallelStream().filter(this::notInMyFavourites);
t2 = System.nanoTime();
elapsedTimeInSeconds = (t2 - t1) / 1000000000;
System.out.println("elapsedTimeInSeconds4 -> " + elapsedTimeInSeconds);
The times:
elapsedTimeInSeconds1 -> 76
elapsedTimeInSeconds2 -> 0
elapsedTimeInSeconds3 -> 0
elapsedTimeInSeconds4 -> 0
Is this normal?
Is there something I can do when configuring the Hikari pool to optimize this?
The pom.xml:
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>
<dependency>
    <groupId>com.h2database</groupId>
    <artifactId>h2</artifactId>
</dependency>
<dependency>
    <groupId>mysql</groupId>
    <artifactId>mysql-connector-java</artifactId>
    <scope>runtime</scope>
</dependency>
The application.properties:
spring.datasource.url=jdbc:mysql://elcordelaciutat.awob1oxhu1so.eu-central-1.rds.amazonaws.com:3306/elcor
spring.datasource.username=elcor
spring.datasource.password=elcor2#$
spring.jpa.show-sql=false
spring.jpa.properties.hibernate.format_sql=true
hibernate.dialect=org.hibernate.dialect.MySQLDialect
You should follow Hikari's MySQL Configuration:
A typical MySQL configuration for HikariCP might look something like this:
dataSource.cachePrepStmts=true
dataSource.prepStmtCacheSize=250
dataSource.prepStmtCacheSqlLimit=2048
dataSource.useServerPrepStmts=true
dataSource.useLocalSessionState=true
dataSource.useLocalTransactionState=true
dataSource.rewriteBatchedStatements=true
dataSource.cacheResultSetMetadata=true
dataSource.cacheServerConfiguration=true
dataSource.elideSetAutoCommits=true
dataSource.maintainTimeStats=false
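In a Spring Boot application these driver-level settings are usually handed to the MySQL driver through HikariCP's data source properties. A minimal sketch for application.properties, assuming the spring.datasource.hikari.* binding of Spring Boot 2.x (only the first few settings are shown; the rest follow the same pattern):
spring.datasource.hikari.data-source-properties.cachePrepStmts=true
spring.datasource.hikari.data-source-properties.prepStmtCacheSize=250
spring.datasource.hikari.data-source-properties.prepStmtCacheSqlLimit=2048
spring.datasource.hikari.data-source-properties.useServerPrepStmts=true
Note that these settings only reduce per-statement overhead on an already established connection; the very first request still pays the one-off cost of opening the pool's connections to the remote database and of JPA/Hibernate warm-up, which is a likely reason the first measured call is so much slower.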
I use the GCP client libraries to implement a pub/sub model in my Spring Boot application. For authentication I'm using the GOOGLE_APPLICATION_CREDENTIALS path environment variable. It works fine with other versions of the JDK/JRE, but it fails with a segmentation fault with the JDK/JRE mentioned below.
Environment details
Java version:
openjdk version "1.8.0_322"
OpenJDK Runtime Environment (Zulu 8.60.0.22-SA-linux-musl-x64) (build 1.8.0_322-b06)
OpenJDK 64-Bit Server VM (Zulu 8.60.0.22-SA-linux-musl-x64) (build 25.322-b06, mixed mode)
Log:
# A fatal error has been detected by the Java Runtime Environment:
# SIGSEGV (0xb) at pc=0x0000000000003fd6, pid=1, tid=0x00007f99a14fcb38
#
# JRE version: OpenJDK Runtime Environment (Zulu 8.60.0.22-SA-linux-musl-x64) (8.0_322-b06) (build 1.8.0_322-b06)
#
# Java VM: OpenJDK 64-Bit Server VM (25.322-b06 mixed mode linux-amd64 compressed oops)
# Problematic frame:
# C 0x0000000000003fd6
#
# Core dump written. Default location: //core or core.1
#
# An error report file with more information is saved as:
# /tmp/hs_err_pid1.log
#
# If you would like to submit a bug report, please visit:
# http://www.azul.com/support/
Java frames: (J=compiled Java code, j=interpreted, Vv=VM code)
j java.lang.ClassLoader$NativeLibrary.load(Ljava/lang/String;Z)V+0
j java.lang.ClassLoader.loadLibrary0(Ljava/lang/Class;Ljava/io/File;)Z+328
j java.lang.ClassLoader.loadLibrary(Ljava/lang/Class;Ljava/lang/String;Z)V+92
j java.lang.Runtime.load0(Ljava/lang/Class;Ljava/lang/String;)V+57
j java.lang.System.load(Ljava/lang/String;)V+7
j io.grpc.netty.shaded.io.netty.util.internal.NativeLibraryUtil.loadLibrary(Ljava/lang/String;Z)V+5
v ~StubRoutines::call_stub
J 2066 sun.reflect.NativeMethodAccessorImpl.invoke0(Ljava/lang/reflect/Method;Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object; (0 bytes) # 0x00007f5cad99bdf7 [0x00007f5cad99bd80+0x77]
J 2065 C1 sun.reflect.NativeMethodAccessorImpl.invoke(Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object; (104 bytes) # 0x00007f5cad9a2a8c [0x00007f5cad9a1900+0x118c]
J 1974 C1 sun.reflect.DelegatingMethodAccessorImpl.invoke(Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object; (10 bytes) # 0x00007f5cad961784 [0x00007f5cad961680+0x104]
J 2084 C1 java.lang.reflect.Method.invoke(Ljava/lang/Object;[Ljava/lang/Object;)Ljava/lang/Object; (62 bytes) # 0x00007f5cad9a3e8c [0x00007f5cad9a3aa0+0x3ec]
j io.grpc.netty.shaded.io.netty.util.internal.NativeLibraryLoader$1.run()Ljava/lang/Object;+53
v ~StubRoutines::call_stub
J 1349 java.security.AccessController.doPrivileged(Ljava/security/PrivilegedAction;)Ljava/lang/Object; (0 bytes) # 0x00007f5cad764f4f [0x00007f5cad764f00+0x4f]
j io.grpc.netty.shaded.io.netty.util.internal.NativeLibraryLoader.loadLibraryByHelper(Ljava/lang/Class;Ljava/lang/String;Z)V+10
j io.grpc.netty.shaded.io.netty.util.internal.NativeLibraryLoader.loadLibrary(Ljava/lang/ClassLoader;Ljava/lang/String;Z)V+15
j io.grpc.netty.shaded.io.netty.util.internal.NativeLibraryLoader.load(Ljava/lang/String;Ljava/lang/ClassLoader;)V+359
j io.grpc.netty.shaded.io.netty.channel.epoll.Native.loadNativeLibrary()V+60
j io.grpc.netty.shaded.io.netty.channel.epoll.Native.<clinit>()V+76
v ~StubRoutines::call_stub
j io.grpc.netty.shaded.io.netty.channel.epoll.Epoll.<clinit>()V+28
v ~StubRoutines::call_stub
J 993 java.lang.Class.forName0(Ljava/lang/String;ZLjava/lang/ClassLoader;Ljava/lang/Class;)Ljava/lang/Class; (0 bytes) # 0x00007f5cad6995fa [0x00007f5cad699580+0x7a]
J 1952 C1 java.lang.Class.forName(Ljava/lang/String;)Ljava/lang/Class; (15 bytes) # 0x00007f5cad948d4c [0x00007f5cad948ba0+0x1ac]
j io.grpc.netty.shaded.io.grpc.netty.Utils.isEpollAvailable()Z+3
j io.grpc.netty.shaded.io.grpc.netty.Utils.<clinit>()V+144
v ~StubRoutines::call_stub
j io.grpc.netty.shaded.io.grpc.netty.NettyChannelBuilder.<clinit>()V+16
v ~StubRoutines::call_stub
j io.grpc.netty.shaded.io.grpc.netty.NettyChannelProvider.builderForAddress(Ljava/lang/String;I)Lio/grpc/netty/shaded/io/grpc/netty/NettyChannelBuilder;+2
j io.grpc.netty.shaded.io.grpc.netty.NettyChannelProvider.builderForAddress(Ljava/lang/String;I)Lio/grpc/ManagedChannelBuilder;+3
j io.grpc.ManagedChannelBuilder.forAddress(Ljava/lang/String;I)Lio/grpc/ManagedChannelBuilder;+5
j com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.createSingleChannel()Lio/grpc/ManagedChannel;+285
j com.google.api.gax.grpc.InstantiatingGrpcChannelProvider$$Lambda$596.createSingleChannel()Lio/grpc/ManagedChannel;+4
j com.google.api.gax.grpc.ChannelPool.<init>(Lcom/google/api/gax/grpc/ChannelPoolSettings;Lcom/google/api/gax/grpc/ChannelFactory;Ljava/util/concurrent/ScheduledExecutorService;)V+71
j com.google.api.gax.grpc.ChannelPool.create(Lcom/google/api/gax/grpc/ChannelPoolSettings;Lcom/google/api/gax/grpc/ChannelFactory;)Lcom/google/api/gax/grpc/ChannelPool;+9
j com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.createChannel()Lcom/google/api/gax/rpc/TransportChannel;+10
j com.google.api.gax.grpc.InstantiatingGrpcChannelProvider.getTransportChannel()Lcom/google/api/gax/rpc/TransportChannel;+35
j com.google.api.gax.rpc.ClientContext.create(Lcom/google/api/gax/rpc/StubSettings;)Lcom/google/api/gax/rpc/ClientContext;+179
j com.google.cloud.pubsub.v1.stub.GrpcSubscriberStub.create(Lcom/google/cloud/pubsub/v1/stub/SubscriberStubSettings;)Lcom/google/cloud/pubsub/v1/stub/GrpcSubscriberStub;+6
j com.google.cloud.pubsub.v1.Subscriber.doStart()V+16
j com.google.api.core.AbstractApiService$InnerService.doStart()V+4
j com.google.common.util.concurrent.AbstractService.startAsync()Lcom/google/common/util/concurrent/Service;+33
j com.google.api.core.AbstractApiService.startAsync()Lcom/google/api/core/ApiService;+4
j com.google.cloud.pubsub.v1.Subscriber.startAsync()Lcom/google/api/core/ApiService;+1
Dependencies:
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>com.google.cloud</groupId>
            <artifactId>libraries-bom</artifactId>
            <version>25.1.0</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>
<dependencies>
    <dependency>
        <groupId>com.google.cloud</groupId>
        <artifactId>google-cloud-pubsub</artifactId>
    </dependency>
</dependencies>
I also wanted to know: is there any other way to authenticate besides the path environment variable? Can I use spring.cloud.gcp.credentials.location=file:{location} with the GCP client libraries instead of the env variable?
As mentioned by @Juraj Martinka, it was a problem with the underlying Google library io.grpc.netty.shaded. It seems Netty does not support Alpine out of the box: its native libraries depend on glibc, but Alpine does not have glibc; it has musl libc instead.
The issue disappears if you disable Netty's native support, or if you use an image that has glibc, e.g.:
azul/zulu-openjdk-alpine:11-jre: Alpine-based, no glibc -> does not work
azul/zulu-openjdk:11: Ubuntu-based, has glibc -> works
Using -Dio.grpc.netty.shaded.io.netty.transport.noNative=true avoids the segfault. Example:
java -Dio.grpc.netty.shaded.io.netty.transport.noNative=true -jar app.jar
The other workaround is to use grpc-netty instead of grpc-netty-shaded:
<dependency>
    <groupId>com.google.cloud</groupId>
    <artifactId>google-cloud-pubsub</artifactId>
    <exclusions>
        <exclusion>
            <groupId>io.grpc</groupId>
            <artifactId>grpc-netty-shaded</artifactId>
        </exclusion>
    </exclusions>
</dependency>
<dependency>
    <groupId>io.grpc</groupId>
    <artifactId>grpc-netty</artifactId>
</dependency>
Reference Links: Link 1, Link 2
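Regarding the sub-question about avoiding the environment variable: spring.cloud.gcp.credentials.location is read by the Spring Cloud GCP starters, not by the plain client libraries. With the plain google-cloud-pubsub library you can pass credentials explicitly instead. A minimal sketch (the key file path, project and subscription names are placeholders):
import com.google.api.gax.core.FixedCredentialsProvider;
import com.google.auth.oauth2.GoogleCredentials;
import com.google.cloud.pubsub.v1.Subscriber;
import com.google.pubsub.v1.ProjectSubscriptionName;

import java.io.FileInputStream;

public class ExplicitCredentialsSubscriber {
    public static void main(String[] args) throws Exception {
        // Load a service-account key explicitly instead of relying on
        // the GOOGLE_APPLICATION_CREDENTIALS environment variable.
        GoogleCredentials credentials;
        try (FileInputStream keyFile = new FileInputStream("/path/to/service-account.json")) {
            credentials = GoogleCredentials.fromStream(keyFile);
        }

        String subscription = ProjectSubscriptionName.format("my-project", "my-subscription");

        Subscriber subscriber = Subscriber.newBuilder(subscription, (message, consumer) -> {
                    System.out.println("Received: " + message.getData().toStringUtf8());
                    consumer.ack();
                })
                .setCredentialsProvider(FixedCredentialsProvider.create(credentials))
                .build();

        subscriber.startAsync().awaitRunning();
    }
}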
I'm new to Octave and I wrote this code, but even the fprintf statement on the first line isn't getting printed. Can someone please help?
I typed
C = strassen(zeros(1024, 1024), zeros(1024, 1024)
But nothing is printed, and the prompt octave:2> does not show up on the next line after I give the command.
function [C] = strassen(A, B)
fprintf('strassen called\n');
row = size(A, 1);%Incase zeros must be padded to make dimensions even
column = size(B, 2);
common = size(A, 2);
if row*common*column <= 1000000,%Base case when less than 10^6 multiplications needed
C = zeros( row, column );
for x = 1 : row,
for y = 1 : column,
for z = 1 : common,
C(x, y) += A(x, z)*B(z, y);
end;
end;
end;
else
%Padding zeros if needed
if rem(row, 2) == 1,
A = [A; zeros(1, common)];
end;
if rem(column, 2) == 1,
B = [B zeros(common, 1)];
end;
if rem(common, 2) == 1,
A = [A zeros(size(A, 1), 1)];
B = [B; zeros(1, size(B, 2))];
end;
m = size(A, 1);
n = size(A, 2);
o = size(B, 2);
A11 = A(1:m/2, 1:n/2 );
A12 = A(1:m/2, n/2+1: n);
A21 = A(m/2+1 :m, 1:n/2);
A22 = A(m/2+1 :m ,n/2+1: n);
B11 = B(1:n/2, 1:o/2 );
B12 = B(1:n/2, o/2+1: o);
B21 = B(n/2+1 :n, 1:o/2);
B22 = B(n/2+1 :n, o/2+1: o);
M1 = strassen(A11 + A22, B11 + B22);
M2 = strassen(A21 + A22, B11);
M3 = strassen(A11, B12 - B22);
M4 = strassen(A22, B21 - B11);
M5 = strassen(A11 + A12, B22);
M6 = strassen(A21 - A11, B11 + B12);
M7 = strassen(A12 - A22, B21 + B22);
%C11 = M1 + M4 - M5 + M7;
%C12 = M3 + M5;
%C21 = M2 + M4;
%C22 = M1 + M3 - M2 + M6;
%C = [C11 C12; C21 C22];
%C = C(1:row, 1:column);
C = [(M1 + M4 - M5 + M7) (M3 + M5);(M2 + M4) (M1 + M3 - M2 + M6)](1:row, 1:column);
end;
end;
As Dan said, there is a ")" missing in your call, but that may be a copy&paste error. If you use
C = strassen(zeros(1024, 1024), zeros(1024, 1024));
you don't see "strassen called" because the calculation takes a long time (the base case runs interpreted triple loops, which are slow in Octave). I haven't waited until it finished and aborted after 30 s. If you want to see the recursive calls you can flush stdout:
...
fprintf('strassen called\n');
fflush (stdout);
...
or disable the pager with more off. In this case you'll see
>> C = strassen(zeros(1024, 1024), zeros(1024, 1024));
strassen called
strassen called
strassen called
strassen called
strassen called
which I think is what you would expect.
I have this code that iterates over some samples and builds a simple linear interpolation between the points:
foreach sample:
base = floor(index_pointer)
frac = index_pointer - base
out = in[base] * (1 - frac) + in[base + 1] * frac
index_pointer += speed
// restart
if(index_pointer >= sample_length)
{
index_pointer = 0
}
using "speed" equal to 1, the game is done. But if the index_pointer is different than 1 (i.e. got fractional part) I need to wrap last/first element keeping the translation consistent.
How would you do this? Double indexes?
Here's an example of values I have. Let say in array of 4 values: [8, 12, 16, 20].
It will be:
1.0*in[0] + 0.0*in[1]=8
0.28*in[0] + 0.72*in[1]=10.88
0.56*in[1] + 0.44*in[2]=13.76
0.84*in[2] + 0.16*in[3]=16.64
0.12*in[2] + 0.88*in[3]=19.52
0.4*in[3] + 0.6*in[4]=8 // wrong; here I need to wrap
The last point is wrong: in[4] will be 0 because I don't have [4], but the first part needs to take the 0.4 into account together with the weight of the first sample (I think?).
Just wrap around the indices:
out = in[base] * (1 - frac) + in[(base + 1) % N] * frac
where % is the modulo operator and N is the number of input samples.
This procedure generates the following line for your sample data (the dashed lines are the interpolated sample points, the circles are the input values):
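A minimal self-contained sketch of that wrap-around (Java is an arbitrary choice here; the variable names follow the pseudocode in the question and speed = 0.72 matches the numbers above):
public class WrapInterpolation {
    public static void main(String[] args) {
        double[] in = {8, 12, 16, 20};
        int n = in.length;
        double speed = 0.72;          // fractional step per output sample
        double indexPointer = 0.0;

        for (int i = 0; i < 6; i++) {
            int base = (int) Math.floor(indexPointer);
            double frac = indexPointer - base;
            // wrap the second index so reading past the end returns to in[0]
            double out = in[base] * (1 - frac) + in[(base + 1) % n] * frac;
            System.out.println(out);

            indexPointer += speed;
            if (indexPointer >= n) {
                indexPointer = 0;     // restart, as in the original pseudocode
            }
        }
    }
}
The last output sample then becomes 0.4*in[3] + 0.6*in[0] = 12.8 instead of reading past the end of the array.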
I think I understand the problem now (this answer only applies if I really did...):
You sample values at a nominal speed sn, but your sampler actually samples at a real speed s, where s != sn. Now you want to create a function which re-samples the series, sampled at speed s, so that it yields a series as if it had been sampled at speed sn, by means of linear interpolation between two adjacent samples. Or your sampler jitters (has variance in the times when it actually samples, i.e. sn + Noise(sn)).
Here is my approach: a function named resample. It takes the sample data and a list of desired re-sample points.
For any re-sample point which would index outside the raw data, it returns the respective border value.
let resample (data : float array) times =
let N = Array.length data
let maxIndex = N-1
let weight (t : float) =
t - (floor t)
let interpolate x1 x2 w = x1 * (1.0 - w) + x2 * w
let interp t1 t2 w =
//printfn "t1 = %d t2 = %d w = %f" t1 t2 w
interpolate (data.[t1]) (data.[t2]) w
let inter t =
let t1 = int (floor t)
match t1 with
| x when x >= 0 && x < maxIndex ->
let t2 = t1 + 1
interp t1 t2 (weight t)
| x when x >= maxIndex -> data.[maxIndex]
| _ -> data.[0]
times
|> List.map (fun t -> t, inter t)
|> Array.ofList
let raw_data = [8; 12; 16; 20] |> List.map float |> Array.ofList
let resampled = resample raw_data [0.0..0.2..4.0]
And yields:
val resample : data:float array -> times:float list -> (float * float) []
val raw_data : float [] = [|8.0; 12.0; 16.0; 20.0|]
val resampled : (float * float) [] =
[|(0.0, 8.0); (0.2, 8.8); (0.4, 9.6); (0.6, 10.4); (0.8, 11.2); (1.0, 12.0);
(1.2, 12.8); (1.4, 13.6); (1.6, 14.4); (1.8, 15.2); (2.0, 16.0);
(2.2, 16.8); (2.4, 17.6); (2.6, 18.4); (2.8, 19.2); (3.0, 20.0);
(3.2, 20.0); (3.4, 20.0); (3.6, 20.0); (3.8, 20.0); (4.0, 20.0)|]
Now, I still fail to understand the "wrap around" part of your question. In the end, interpolation, in contrast to extrapolation, is only defined for values in [0..N-1]. So it is up to you to decide whether the function should produce a run-time error or simply use the edge values (or 0) for time values outside the bounds of your raw data array.
EDIT
As it turned out, the question is also about how to use a cyclic (ring) buffer for this.
Here is a version of the resample function using a cyclic buffer, along with some operations:
update adds a new sample value to the ring buffer.
read reads the content of a ring buffer element as if it were a normal array, indexed from [0..N-1].
initXXX functions create the ring buffer in various forms.
length returns the length, or capacity, of the ring buffer.
The ring buffer logic is factored into a module to keep it all clean.
module Cyclic =
let wrap n x = x % n // % is modulo operator, just like in C/C++
type Series = { A : float array; WritePosition : int }
let init (n : int) =
{ A = Array.init n (fun i -> 0.);
WritePosition = 0
}
let initFromArray a =
let n = Array.length a
{ A = Array.copy a;
WritePosition = 0
}
let initUseArray a =
let n = Array.length a
{ A = a;
WritePosition = 0
}
let update (sample : float ) (series : Series) =
let wrapper = wrap (Array.length series.A)
series.A.[series.WritePosition] <- sample
{ series with
WritePosition = wrapper (series.WritePosition + 1) }
let read i series =
let n = Array.length series.A
let wrapper = wrap (Array.length series.A)
series.A.[wrapper (series.WritePosition + i)]
let length (series : Series) = Array.length (series.A)
let resampleSeries (data : Cyclic.Series) times =
let N = Cyclic.length data
let maxIndex = N-1
let weight (t : float) =
t - (floor t)
let interpolate x1 x2 w = x1 * (1.0 - w) + x2 * w
let interp t1 t2 w =
interpolate (Cyclic.read t1 data) (Cyclic.read t2 data) w
let inter t =
let t1 = int (floor t)
match t1 with
| x when x >= 0 && x < maxIndex ->
let t2 = t1 + 1
interp t1 t2 (weight t)
| x when x >= maxIndex -> Cyclic.read maxIndex data
| _ -> Cyclic.read 0 data
times
|> List.map (fun t -> t, inter t)
|> Array.ofList
let input = raw_data
let rawSeries0 = Cyclic.initFromArray input
(resampleSeries rawSeries0 [0.0..0.2..4.0]) = resampled
I was wondering if anyone has benchmarked Apache Commons CollectionUtils.
In my simple benchmark:
List<Integer> ints = Arrays.asList(3, 4, 6, 7,8, 0,9,2, 5, 2,1, 35,11, 44, 5,1 ,2);
long start = System.nanoTime();
ArrayList<Integer> filtered = new ArrayList<Integer>(ints.size());
for (Integer anInt : ints) {
if (anInt > 10) {
filtered.add(anInt);
}
}
long end = System.nanoTime();
System.out.println(filtered + " (" + (end - start) + ")");
Predicate<Integer> predicate = new Predicate<Integer>() {
@Override
public boolean evaluate(Integer integer) {
return integer > 10;
}
};
start = System.nanoTime();
filtered.clear();
CollectionUtils.select(ints, predicate,filtered);
end = System.nanoTime();
System.out.println(filtered + " (" + (end - start) + ")");
I got the following results:
[35, 11, 44] (127643)
[35, 11, 44] (3060230)
I must say I'm a big fan of this library because it makes the code clean and testable, but I'm currently working on a performance-sensitive project and I'm afraid my affection for this library is going to hurt performance.
I know this is a really general question, but has anyone used this library in a production environment and noticed performance issues?
Apart from running it multiple times to check for JVM optimization (I don't know whether, given that Predicate can be a functional interface, the JVM can use the invokedynamic bytecode instruction introduced in Java 7), I think your error lies just after the start:
start = System.nanoTime();
filtered.clear();
CollectionUtils.select(ints, predicate,filtered);
end = System.nanoTime();
System.out.println(filtered + " (" + (end - start) + ")");
I don't think you should include the time filtered.clear() takes if you want to compare CollectionUtils against the plain old for-each loop.
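For example, the measured section could be adjusted like this (a sketch reusing the ints, predicate and filtered variables from the question; the warm-up count is an arbitrary assumption, and for serious numbers a harness such as JMH is the better tool):
// Warm up first so the JIT has a chance to compile the selected code path.
for (int i = 0; i < 10_000; i++) {
    filtered.clear();
    CollectionUtils.select(ints, predicate, filtered);
}

filtered.clear();                    // not part of the measured work
start = System.nanoTime();
CollectionUtils.select(ints, predicate, filtered);
end = System.nanoTime();
System.out.println(filtered + " (" + (end - start) + ")");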
Well, you are basically comparing method-invocation overhead against inline code, and the latter is obviously faster.
As long as you do not do something that really challenges your CPU, I would be very surprised if this were the cause of performance problems in your application.
I am trying to create a simple function that takes two dates in the format int*int*int and returns whether the first one is older than the second.
fun is_older (date1: (int*int*int), date2: (int*int*int)) =
val in_days1 = (#1 (date1) * 365) + (#2 (date1) * 30) + #3 date1;
val in_days2 = (#1 (date2) * 365) + (#2 (date2) * 30) + #3 date1;
if in_days1 < in_days2
then true
else false
I get this error:
hwk_1.sml:1.53 Error: syntax error: inserting EQUALOP
uncaught exception Compile [Compile: "syntax error"]
raised at: ../compiler/Parse/main/smlfile.sml:15.24-15.46
../compiler/TopLevel/interact/evalloop.sml:44.55
../compiler/TopLevel/interact/evalloop.sml:296.17-296.20
Can anyone help please?
In addition to what has already been mentioned, you also ought to use pattern matching to decompose the 3-tuples. Doing this, you can also throw away the type annotations, as it is now clear that these are 3-tuples (both for the reader and, more importantly, for the type system).
fun is_older ((y1, m1, d1), (y2, m2, d2)) =
let
val days1 = y1 * 365 + m1 * 30 + d1
val days2 = y2 * 365 + m2 * 30 + d2
in
days1 < days2
end
However, you could do this a bit more cleverly. If you have multiple functions working with dates, you could create a nice little helper function, toDays. In the example below I have included it inside the isOlder function, but you could put it at top level or inside a local declaration if you want to hide it away:
fun isOlder (date1, date2) =
let
fun toDays (y, m, d) = y * 365 + m * 30 + d
in
toDays date1 < toDays date2
end
val in_days1 = (#1 (date1) * 365) + (#2 (date1) * 30) + #3 date1;
val in_days2 = (#1 (date2) * 365) + (#2 (date2) * 30) + #3 date1;
Local val definitions need to be between let and in.
FWIW, I got the same error on one of the other exercises in this same homework:
Error: syntax error: inserting EQUALOP
but in my case, it was happening on the first line. This was confusing to me, because I'm coming from Python, where the error usually happens after the mistake.
Bottom line, and the thing I wanted to know is this: this error means it can't compile the code as written.
P.S. If you use let and in, you also have to use end. (You didn't need to use val or let to solve the is_older problem; there is a way to do it with set logic alone.)
This worked for me:
fun is_older(diaUno : (int * int * int), diaDos : (int * int * int)) =
let
fun setDayNum(diaUno : (int * int * int)) =
let
val diaUnoInt = (#1 (diaUno) * 365) + (#2 (diaUno) * 30) + #3 diaUno
in
diaUnoInt
end
val dia1 = setDayNum(diaUno)
val dia2 = setDayNum(diaDos)
in
if dia1 < dia2 then diaUno else diaDos
end