CouchDB error when running a query in composer-rest-server? - hyperledger-composer

Problem:
I have built a business network, deployed it, and connected to it through composer-rest-server. Adding assets and running transaction functions work as expected, but when I try to run a query there, I get an error like this:
{
"error": {
"statusCode": 500,
"name": "Error",
"message": "transaction returned with failure: Error: GET_QUERY_RESULT failed: transaction ID: e21f7d122609b883f415d82786a197a192b90b344e1e30c304da6c21c838c8bb: Couch DB Error:no_usable_index, Status Code:400, Reason:No index exists for this sort, try indexing by the sort fields.",
"stack": "Error: transaction returned with failure: Error: GET_QUERY_RESULT failed: transaction ID: e21f7d122609b883f415d82786a197a192b90b344e1e30c304da6c21c838c8bb: Couch DB Error:no_usable_index, Status Code:400, Reason:No index exists for this sort, try indexing by the sort fields.\n at C:\\Users\\tharindusa\\AppData\\Roaming\\npm\\node_modules\\composer-rest-server\\node_modules\\fabric-client\\lib\\Peer.js:114:16\n at C:\\Users\\tharindusa\\AppData\\Roaming\\npm\\node_modules\\composer-rest-server\\node_modules\\grpc\\src\\client.js:586:7"
}
}
This is how my queries.qry file looks:
query ListLandTitlesForSale{
description: "List all land titles that are for sale"
statement:
SELECT org.landreg.LandTitle
WHERE (forSale == true)
ORDER BY [id ASC]
}
query ListLandTitlesBySize {
description: "List all land titles in a certain size range"
statement:
SELECT org.landreg.LandTitle
WHERE ((area > _$minimumArea) AND (area < _$maximumArea))
ORDER BY [area ASC]
}
query ListOwnedLandTitles {
description: "List land titles owned by specified 'owner'"
statement:
SELECT org.landreg.LandTitle
WHERE (owner == _$owner)
ORDER BY [id ASC]
}
query ListPreviouslyOwnedLandTitles {
description: "List land titles owned by specified 'previousOwner'"
statement:
SELECT org.landreg.LandTitle
WHERE (previousOwners CONTAINS _$previousOwner)
ORDER BY [id ASC]
}
And this is how my .cto file looks.
namespace org.landreg
abstract concept Address {
o String addressLine
o String locality
}
concept DutchAddress extends Address {
o String postalCode regex=/\d{4}[ ]?[A-Z]{2}/
}
enum Gender {
o FEMALE
o MALE
}
participant Individual identified by passportNumber {
o String passportNumber
o DutchAddress address
o Gender gender
}
asset LandTitle identified by id {
o String id
o DutchAddress address
o Integer area range=[1000,]
o Boolean forSale default=false
o Double price optional
--> Individual owner
--> Individual[] previousOwners optional
}
transaction UnlockLandTitle {
--> LandTitle landTitle
}
transaction TransferLandTitle {
--> LandTitle landTitle
--> Individual newOwner
}
Output of docker ps command
TharinduSA#LP-HQ-15957 MINGW64 ~
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
373a6fbe104d dev-peer0.org1.example.com-landreg-0.0.2-d2a95608b3ed19873963889cfc0dc8c12a2ff883775b0c1263b1f676cab351d5 "/bin/sh -c 'cd /usr…" 5 hours ago Up 5 hours dev-peer0.org1.example.com-landreg-0.0.2
00a0859aaee6 hyperledger/fabric-peer:1.2.0 "peer node start" 5 hours ago Up 5 hours 0.0.0.0:7051->7051/tcp, 0.0.0.0:7053->7053/tcp peer0.org1.example.com
83247feca61f hyperledger/fabric-orderer:1.2.0 "orderer" 5 hours ago Up 5 hours 0.0.0.0:7050->7050/tcp orderer.example.com
a388dc58ac56 hyperledger/fabric-ca:1.2.0 "sh -c 'fabric-ca-se…" 5 hours ago Up 5 hours 0.0.0.0:7054->7054/tcp ca.org1.example.com
4ba58264053d hyperledger/fabric-couchdb:0.4.10 "tini -- /docker-ent…" 5 hours ago Up 5 hours 4369/tcp, 9100/tcp, 0.0.0.0:5984->5984/tcp couchdb
5777ea4f350c dev-peer0.org1.example.com-landreg-0.0.1-9515860cfe680a544a37838162e871043219ac02a5fd224ade345b1aa36d0051 "/bin/sh -c 'cd /usr…" 28 hours ago Exited (0) 25 hours ago dev-peer0.org1.example.com-landreg-0.0.1
276df87ca139 dev-peer0.org1.example.com-landreg-0.0.3-2b5a4891dc9504a6280a81420f24ce56b450739cbdfd1d325214910967444f82 "/bin/sh -c 'cd /usr…" 30 hours ago Exited (0) 29 hours ago dev-peer0.org1.example.com-landreg-0.0.3
52a0fd951af6 dev-peer0.org1.example.com-sample-0.0.1-3c933e68ad6460ca423125a9eb2529a68675271a61b9f8b3c6a8cd2e722e893c "/bin/sh -c 'cd /usr…" 31 hours ago Exited (0) 29 hours ago dev-peer0.org1.example.com-sample-0.0.1
Can someone help me solve this problem? I am using Composer version 0.20.2.
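For reference, CouchDB's no_usable_index error means that the fields used in the query's selector/sort have no matching index, exactly as the "try indexing by the sort fields" hint says. In plain Fabric chaincode an index is declared as a small JSON file under META-INF/statedb/couchdb/indexes/; a minimal sketch for the first query (the file name, ddoc/index names, and exact field list are assumptions, and how Composer 0.20 packages index files for its generated chaincode may differ) could look like this:

{
  "index": { "fields": ["forSale", "id"] },
  "ddoc": "indexForSaleSortDoc",
  "name": "indexForSaleSort",
  "type": "json"
}

An analogous index would be needed for each combination of selector and ORDER BY fields used by the other queries.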

Related

Why Elasticsearch index performance drops after frequent writes

When I start the Node.js script, it deletes the old index (if it exists), creates a new one according to the config file, then creates a WebSocket server and starts listening for incoming connections.
initES() {
    // Connect to the Elasticsearch cluster configured in Config.
    this.elasticsearchClient = new elasticsearch.Client({
        host: `${Config.elasticSearchHost}:${Config.elasticSearchPort}`,
        log: 'trace'
    });
    // Drop the old index (if any), then recreate it with the desired settings.
    let deletePromise = this.elasticsearchClient.indices.delete({index: `${Config.elasticSearchIndex}`});
    deletePromise.then(() => {
        console.log(`Index ${Config.elasticSearchIndex} deleted`);
    }, function (e) {
        // Deletion fails if the index does not exist yet; log and continue.
        console.log(e.toJSON());
    }).then(() => {
        let createPromise = this.elasticsearchClient.indices.create({
            index: `${Config.elasticSearchIndex}`,
            body: {
                settings: {
                    index: {
                        number_of_shards: 1,
                        number_of_replicas: 0
                    },
                    analysis: {
                        analyzer: {
                            whitespace_analyzer: {
                                tokenizer: 'whitespace',
                                filter: ['lowercase']
                            }
                        }
                    }
                }
            }
        });
        createPromise.then(() => {
            console.log(`Index ${Config.elasticSearchIndex} created`);
        }, (e) => {
            console.log(e.toJSON());
        });
    });
}
The script is intended to start just once, at boot time (through cron); it was written by me and uses the standard ES JavaScript client (
https://www.elastic.co/guide/en/elasticsearch/client/javascript-api/current/api-reference.html
).
On the front end, the user chooses to calculate orders (~700 items; they are calculated by the system automatically, with Gearman and PhantomJS).
At first (the first 8 hours, or the first test run) everything works fine: ES responds well, WebSocket clients frequently update data, and the data is updated in the ES index.
If the user cancels the process, or the process finishes and the user decides to recalculate (all data is deleted before anything new is written), ES I/O becomes slower.
And so on; after a while the index only fills up to ~340-350 items instead of 700, and in some cases ES stops responding.
Tailing the ES log files shows me tons of lines like these:
Entering safepoint region: GenCollectForAllocation
[2019-05-21T13:46:45.611+0000][9630][gc,start ] GC(271) Pause Young (Allocation Failure)
[2019-05-21T13:46:45.611+0000][9630][gc,task ] GC(271) Using 8 workers of 8 for evacuation
[2019-05-21T13:46:45.616+0000][9630][gc,age ] GC(271) Desired survivor size 17891328 bytes, new threshold 6 (max threshold 6)
[2019-05-21T13:46:45.617+0000][9630][gc,age ] GC(271) Age table with threshold 6 (max threshold 6)
[2019-05-21T13:46:45.617+0000][9630][gc,age ] GC(271) - age 1: 987344 bytes, 987344 total
[2019-05-21T13:46:45.617+0000][9630][gc,age ] GC(271) - age 2: 5440 bytes, 992784 total
[2019-05-21T13:46:45.617+0000][9630][gc,age ] GC(271) - age 3: 172640 bytes, 1165424 total
[2019-05-21T13:46:45.617+0000][9630][gc,age ] GC(271) - age 4: 535104 bytes, 1700528 total
[2019-05-21T13:46:45.617+0000][9630][gc,age ] GC(271) - age 5: 333224 bytes, 2033752 total
[2019-05-21T13:46:45.617+0000][9630][gc,age ] GC(271) - age 6: 128 bytes, 2033880 total
[2019-05-21T13:46:45.617+0000][9630][gc,heap ] GC(271) ParNew: 282158K->2653K(314560K)
[2019-05-21T13:46:45.617+0000][9630][gc,heap ] GC(271) CMS: 88354K->88355K(699072K)
[2019-05-21T13:46:45.617+0000][9630][gc,metaspace ] GC(271) Metaspace: 85648K->85648K(1128448K)
[2019-05-21T13:46:45.617+0000][9630][gc ] GC(271) Pause Young (Allocation Failure) 361M->88M(989M) 5.387ms
[2019-05-21T13:46:45.617+0000][9630][gc,cpu ] GC(271) User=0.01s Sys=0.00s Real=0.00s
[2019-05-21T13:46:45.617+0000][9630][safepoint ] Leaving safepoint region
[2019-05-21T13:46:45.617+0000][9630][safepoint ] Total time for which application threads were stopped: 0.0057277 seconds, Stopping threads took: 0.0000429 seconds
[2019-05-21T13:46:46.617+0000][9630][safepoint ] Application time: 1.0004453 seconds
[2019-05-21T13:46:46.617+0000][9630][safepoint ] Entering safepoint region: Cleanup
[2019-05-21T13:46:46.617+0000][9630][safepoint ] Leaving safepoint region
But to be precise, I don't see anything critical there (apart from the allocation failures).
These lines also appear in the log even when everything goes well.
If I restart my script (which deletes the old index and creates a new one), ES updates the items fast again, as it does the first time.
So my question is:
why does ES lose performance if I
insert/update/read/delete data ... insert/update/read/delete data ...
but works fine if I
insert/update/read, restart the script, insert/update/read, ...
?
It turned out to have nothing to do with Elasticsearch.
It was my fault for not closing WebSocket connections, which slowed the server down as it leaked resources.
Sorry for taking your time.

How to count the tickets that were created on a date (X) before a certain date (M) and solved after (M)

I need to retrieve the count of solved tickets that were created before a given created date and solved after it, grouped by the Created_Date column, using Oracle SQL.
Ticket# Created_Date Solved_Date
3315279 12-MAR-19 15-MAR-19
3355379 10-MAR-19 14-MAR-19
3378633 11-MAR-19 15-MAR-19
3470592 13-MAR-19 16-MAR-19
3472784 13-MAR-19 16-MAR-19
3472930 13-MAR-19 16-MAR-19
3473119 13-MAR-19 16-MAR-19
3474194 11-MAR-19 14-MAR-19
3721765 12-MAR-19 16-MAR-19
3723124 12-FEB-19 16-MAR-19
3723286 07-MAR-19 14-MAR-19
3724733 05-MAR-19 16-MAR-19
3724894 03-MAR-19 14-MAR-19
3750270 09-MAR-19 14-MAR-19
3751118 06-MAR-19 14-MAR-19
From comments:
12-MAR-19: 8 as there are 8 Tickets created before that date and solved after it.
10-MAR-19: 5 as there are 5 Tickets created before that date and solved after it.
Here is the query:
select t1.creation_date, count(t1.ticketno) as count
from ticket_demo t1, ticket_demo t2
where t2.creation_date < t1.creation_date and t2.solved_date > t1.creation_date
group by t1.creation_date;
TICKETNO CREATION_DATE SOLVED_DATE
1 01-01-18 12-04-18
2 01-12-17 04-01-18
3 01-11-17 01-01-18
4 01-02-18 28-02-18
5 03-04-18 04-05-18
6 01-04-18 04-05-18
7 01-01-18 05-06-18
Output:
CREATION_DATE COUNT
03-04-18 3
01-04-18 2
01-02-18 2
01-01-18 2
01-12-17 1

Algorithm to calculate a date for complex occupation management

Hello fellow Stack Overflowers,
I have a situation where I need some help choosing the best way to design an algorithm. The objective is to manage the occupation of a resource (let's call it resource A) that has multiple tasks, where each task takes a specified amount of time to complete. At this first stage I don't want to involve multiple variables, so let's keep it simple and assume the resource only has a schedule of working days.
For example:
1 - We have 1 resource, resource A.
2 - Resource A works from 8 am to 4 pm, Monday to Friday. To keep it simple for now, he doesn't take lunch, so: 8 hours of work a day.
3 - Resource A has 5 tasks to complete; to avoid complexity at this level, let's suppose each one takes exactly 10 hours to complete.
4 - Resource A will start working on these tasks on 2018-05-16 at exactly 2 pm.
Problem:
Now, all I need to know is the correct finish date for all 5 tasks, considering all the previous limitations.
In this case, the work spans 7 working days: 2 hours on the first day and 6 full days after it.
The expected result would be: 2018-05-24 (at 4 pm).
Implementation:
I thought about 2 options and would like feedback on them, or on other options that I might not be considering.
Algorithm 1
1 - Create a list of "slots", where each "slot" represents 1 hour, for x days.
2 - Cross this list of slots with the resource's hourly schedule, removing all the slots where the resource isn't available. This returns a list of the slots he can actually work.
3 - Occupy the remaining slots with the tasks I have for him.
4 - Finally, check the date/hour of the last occupied slot.
Disadvantage: I think this might be an overkill solution, considering that I don't want to track his future occupation; all I want to know is when the tasks will be completed.
Algorithm 2
1 - Add the task hours (50 hours) to the starting date, getting expectedFinishDate. (This would give expectedFinishDate = 2018-05-18 at 4 pm.)
2 - Cross the hours between the starting date and expectedFinishDate with the schedule to get the number of hours he won't work (the unavailable hours: 16 hours a day, which here results in remainingHoursForCalc = 32 hours).
3 - Calculate a new expectedFinishDate by adding these 32 unavailable hours to the previous 2018-05-18 (at 4 pm).
4 - Repeat steps 2 and 3 with the new expectedFinishDate until remainingHoursForCalc = 0.
Disadvantage: this results in a recursive method or a rather awkward while loop; again, I think this might be overkill for calculating a simple date.
What would you suggest? Is there any other option that I might not be considering that would make this simpler? Or do you think there is a way to improve either of these 2 algorithms to make them work?
Improved version:
import java.util.Calendar;
import java.util.Date;
public class Main {
public static void main(String args[]) throws Exception
{
Date d=new Date();
System.out.println(d);
d.setMinutes(0);
d.setSeconds(0);
d.setHours(13);
Calendar c=Calendar.getInstance();
c.setTime(d);
c.set(Calendar.YEAR, 2018);
c.set(Calendar.MONTH, Calendar.MAY);
c.set(Calendar.DAY_OF_MONTH, 17);
//c.add(Calendar.HOUR, -24-5);
d=c.getTime();
//int workHours=11;
int hoursArray[] = {1,2,3,4,5, 10,11,12, 19,20, 40};
for(int workHours : hoursArray)
{
try
{
Date end=getEndOfTask(d, workHours);
System.out.println("a task starting at "+d+" and lasting "+workHours
+ " hours will end at " +end);
}
catch(Exception e)
{
System.out.println(e.getMessage());
}
}
}
public static Date getEndOfTask(Date startOfTask, int workingHours) throws Exception
{
int totalHours=0;//including non-working hours
//startOfTask +totalHours =endOfTask
int startHour=startOfTask.getHours();
if(startHour<8 || startHour>16)
throw new Exception("a task cannot start outside the working hours interval");
System.out.println("startHour="+startHour);
int startDayOfWeek=startOfTask.getDay();//start date's day of week; Wednesday=3
System.out.println("startDayOfWeek="+startDayOfWeek);
if(startDayOfWeek==6 || startDayOfWeek==0)
throw new Exception("a task cannot start on Saturdays on Sundays");
int remainingHoursUntilDayEnd=16-startHour;
System.out.println("remainingHoursUntilDayEnd="+remainingHoursUntilDayEnd);
/*some discussion here: if task starts at 12:30, we have 3h30min
* until the end of the program; however, getHours() will return 12, which
* substracted from 16 will give 4h. It will work fine if task starts at 12:00,
* or, generally, at the begining of the hour; let's assume a task will start at HH:00*/
int remainingDaysUntilWeekEnd=5-startDayOfWeek;
System.out.println("remainingDaysUntilWeekEnd="+remainingDaysUntilWeekEnd);
int completeWorkDays = (workingHours-remainingHoursUntilDayEnd)/8;
System.out.println("completeWorkDays="+completeWorkDays);
//excluding both the start day, and the end day, if they are not fully occupied by the task
int workingHoursLastDay=(workingHours-remainingHoursUntilDayEnd)%8;
System.out.println("workingHoursLastDay="+workingHoursLastDay);
/* workingHours=remainingHoursUntilDayEnd+(8*completeWorkDays)+workingHoursLastDay */
int numberOfWeekends=(int)Math.ceil( (completeWorkDays-remainingDaysUntilWeekEnd)/5.0 );
if((completeWorkDays-remainingDaysUntilWeekEnd)%5==0)
{
if(workingHoursLastDay>0)
{
numberOfWeekends++;
}
}
System.out.println("numberOfWeekends="+numberOfWeekends);
totalHours+=(int)Math.min(remainingHoursUntilDayEnd, workingHours);//covers the case
//when task lasts 1 or 2 hours, and we have maybe 4h until end of day; that's why i use Math.min
if(completeWorkDays>0 || workingHoursLastDay>0)
{
totalHours+=8;//the hours of the current day between 16:00 and 24:00
//it might be the case that completeWorkDays is 0, yet the task spans up to tommorrow
//so we still have to add these 8h
}
if(completeWorkDays>0)//redundant if, because 24*0=0
{
totalHours+=24*completeWorkDays;//for every 8 working h, we have a total of 24 h that have
//to be added to the date
}
if(workingHoursLastDay>0)
{
totalHours+=8;//the hours between 00.00 AM and 8 AM
totalHours+=workingHoursLastDay;
}
if(numberOfWeekends>0)
{
totalHours+=48*numberOfWeekends;//every weekend between start and end dates means two days
}
System.out.println("totalHours="+totalHours);
Calendar calendar=Calendar.getInstance();
calendar.setTime(startOfTask);
calendar.add(Calendar.HOUR, totalHours);
return calendar.getTime();
}
}
You may adjust hoursArray[], or d.setHours() together with c.set(Calendar.DAY_OF_MONTH, ...), to test various start dates along with various task lengths.
There is still a bug, due to the addition of the 8 hours between 16:00 and 24:00:
a task starting at Thu May 17 13:00:00 EEST 2018 and lasting 11 hours will end at Sat May 19 00:00:00 EEST 2018.
I've kept a lot of print statements; they are useful for debugging purposes.
I agree that algorithm 1 is overkill.
I would first make sure I had the conditions right: hours per day (8) and working days (Mon, Tue, Wed, Thu, Fri). Then divide the hours required (5 * 10 = 50) by the hours per day to get a minimum number of working days needed (50 / 8 = 6, with a remainder). Slightly more advanced: divide by hours per week first (50 / 40 = 1 week). Count working days from the start date to get a first shot at the end date. There is probably a remainder from the division, so use it to determine whether the tasks can end on that day or run into the next working day.
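A brute-force reference implementation that simply walks the calendar forward one hour at a time is easy to get right and handy for sanity-checking whichever counting variant you implement. A minimal sketch (class and method names are made up; it assumes the task starts on a working day, at a whole hour inside the 08:00-16:00 shift):

import java.time.DayOfWeek;
import java.time.LocalDateTime;

public class FinishDateSketch {

    // Walk forward hour by hour, consuming one unit of work for every
    // hour that falls on Mon-Fri between 08:00 and 16:00.
    static LocalDateTime finishDate(LocalDateTime start, int workHours) {
        LocalDateTime t = start;
        int remaining = workHours;
        while (remaining > 0) {
            boolean weekend = t.getDayOfWeek() == DayOfWeek.SATURDAY
                    || t.getDayOfWeek() == DayOfWeek.SUNDAY;
            boolean inShift = t.getHour() >= 8 && t.getHour() < 16;
            if (!weekend && inShift) {
                remaining--;              // one working hour consumed
            }
            t = t.plusHours(1);           // advance wall-clock time
        }
        return t;
    }

    public static void main(String[] args) {
        // 5 tasks x 10 h = 50 h of work, starting 2018-05-16 at 14:00;
        // the question expects the end to be 2018-05-24 at 16:00.
        System.out.println(finishDate(LocalDateTime.of(2018, 5, 16, 14, 0), 50));
    }
}

For 50 hours the loop only runs a few hundred times, so this is cheap enough to use directly unless you need to schedule over very long horizons.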

Information Retrieval: URL hits in a time frame

Algorithm Challenge:
Problem statement:
How would you design a logging system for something like Google, such that you can query the number of times a URL was opened between two timestamps?
i/p: start_time, end_time, URL1
o/p: number of times URL1 was opened between the start and end time.
Some specs:
A database is not an optimal solution.
A URL might have been opened multiple times at a given timestamp.
A URL might have been opened a large number of times between two timestamps.
start_time and end_time can be a month apart.
Time can be granular to a second.
One solution:
A hash of hashes: the key is the URL, and the value is a hash mapping each timestamp to the cumulative frequency of opens up to and including that timestamp.
Eg:
Amazon --> T         CumFreq
           11:00 am  3    (opened 3 times at 11:00 am)
           11:15 am  7    (opened 4 times at 11:15 am, cumfreq is 3+4=7)
           11:30 am  11   (opened 4 times at 11:30 am, cumfreq is 7+4=11)
i/p: 11:10 am, 11:37 am, Amazon
The output can be obtained by subtraction: take the cumulative frequency at the last timestamp before 11:10, which is 11:00 am (3), and at the last timestamp not after 11:37, which is 11:30 am (11). Hence the result is
11 - 3 = 8.
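To make the lookup concrete, here is a minimal sketch of that per-URL cumulative-frequency structure using a sorted map keyed by timestamp (the class/method names and epoch-second timestamps are illustrative, not part of the original write-up):

import java.util.HashMap;
import java.util.Map;
import java.util.TreeMap;

public class UrlHitCounter {

    // url -> (timestamp -> cumulative number of opens up to and including that timestamp)
    private final Map<String, TreeMap<Long, Long>> cumFreq = new HashMap<>();

    // Record `count` opens of `url` at `timestamp`; timestamps are assumed to arrive in increasing order.
    public void record(String url, long timestamp, long count) {
        TreeMap<Long, Long> perUrl = cumFreq.computeIfAbsent(url, u -> new TreeMap<>());
        long previous = perUrl.isEmpty() ? 0 : perUrl.lastEntry().getValue();
        perUrl.put(timestamp, previous + count);
    }

    // Opens of `url` in [start, end]: cumulative frequency at the last timestamp <= end,
    // minus cumulative frequency at the last timestamp strictly before start.
    public long countBetween(String url, long start, long end) {
        TreeMap<Long, Long> perUrl = cumFreq.get(url);
        if (perUrl == null) return 0;
        Map.Entry<Long, Long> upper = perUrl.floorEntry(end);
        Map.Entry<Long, Long> lower = perUrl.lowerEntry(start);
        return (upper == null ? 0 : upper.getValue()) - (lower == null ? 0 : lower.getValue());
    }
}

With the example above (3 opens at 11:00, 4 at 11:15, 4 at 11:30), countBetween for the 11:10-11:37 window returns 11 - 3 = 8, and both lookups are O(log n) thanks to the TreeMap.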
Can we do better?

EF Linq query comparing data from multiple rows

I would like to create a LINQ query that compares data from multiple rows in a single table.
The table holds data from polling a web service for account balance data. Unfortunately the polling interval is not 100% deterministic, which means there can be zero, one, or more entries for each account per day.
For the application I need this data to be reshaped into a certain format (see below under expected output).
I have included sample data and a description of the table.
Can anybody help me with an EF LINQ query that will produce the required output?
table:
id The account id
balance The available credits in the account at the time of the measurement
create_date The datetime when the data was retrieved
Table name:Balances
Field: id (int)
Field: balance (bigint)
Field: create_date (datetime)
sample data:
id balance create_date
3 40 2012-04-02 07:01:00.627
1 55 2012-04-02 13:41:50.427
2 9 2012-04-02 03:41:50.727
1 40 2012-04-02 16:21:50.027
1 49 2012-04-02 16:55:50.127
1 74 2012-04-02 23:41:50.627
1 90 2012-04-02 23:44:50.427
3 3 2012-04-02 23:51:50.827
3 -10 2012-04-03 07:01:00.627
1 0 2012-04-03 13:41:50.427
2 999 2012-04-03 03:41:50.727
1 50 2012-04-03 15:21:50.027
1 49 2012-04-03 16:55:50.127
1 74 2012-04-03 23:41:50.627
2 -10 2012-04-03 07:41:50.727
1 100 2012-04-03 23:44:50.427
3 0 2012-04-03 23:51:50.827
expected output:
id The account id
date The day that the measurements in the row were grouped by
balance_last_measurement The balance at the last measurement of that day
difference The difference in balance between the first and last measurement of that day
On 2012-04-02, id 2 only has one measurement, which makes the difference value equal to that last (and only) measurement.
id date balance_last_measurement difference
1 2012-04-02 90 35
1 2012-04-03 100 10
2 2012-04-02 9 9
2 2012-04-03 -10 -19
3 2012-04-02 3 -37
3 2012-04-03 0 37
Update 2012-04-10 20:06
The answer from Raphaël Althaus is really good, but I made a small mistake in the original request. The difference field in the 'expected output' should be either:
the difference between the last measurement of the previous day and the last measurement of the current day, or,
if there is no previous day, the difference between the first and the last measurement of the day.
Is this possible at all? It seems to be quite complex.
I would try something like this.
var query = db.Balances
.OrderBy(m => m.Id)
.ThenBy(m => m.CreationDate)
.GroupBy(m => new
{
id = m.Id,
year = SqlFunctions.DatePart("yyyy", m.CreationDate),
month = SqlFunctions.DatePart("mm", m.CreationDate),
day = SqlFunctions.DatePart("dd", m.CreationDate)
}).ToList()//enumerate there, this is what we need from db
.Select(g => new
{
id = g.Key.id,
date = new DateTime(g.Key.year, g.Key.month, g.Key.day),
last_balance = g.Select(m => m.BalanceValue).LastOrDefault(),
difference = (g.Count() == 1 ? g.First().BalanceValue : g.Last().BalanceValue - g.First().BalanceValue)
});
Well, a probably non-optimized solution, but see if it seems to work.
First, we create a result class
public class BalanceResult
{
public int Id { get; set; }
public DateTime CreationDate { get; set; }
public IList<int> BalanceResults { get; set; }
public int Difference { get; set; }
public int LastBalanceResultOfDay { get { return BalanceResults.Last(); } }
public bool HasManyResults {get { return BalanceResults != null && BalanceResults.Count > 1; }}
public int DailyDifference { get { return HasManyResults ? BalanceResults.Last() - BalanceResults.First() : BalanceResults.First(); } }
}
Then we change our query a little bit:
var balanceResults = db.Balances
.GroupBy(m => new
{
id = m.Id,
year = SqlFunctions.DatePart("yyyy", m.CreationDate),
month = SqlFunctions.DatePart("mm", m.CreationDate),
day = SqlFunctions.DatePart("dd", m.CreationDate)
}).ToList()//enumerate there, this is what we need from db
.Select(g => new BalanceResult
{
Id = g.Key.id,
CreationDate = new DateTime(g.Key.year, g.Key.month, g.Key.day),
BalanceResults = g.OrderBy(l => l.CreationDate).Select(l => l.BalanceValue).ToList()
}).ToList();
and finally
foreach (var balanceResult in balanceResults.ToList())
{
var previousDayBalanceResult = balanceResults.FirstOrDefault(m => m.Id == balanceResult.Id && m.CreationDate == balanceResult.CreationDate.AddDays(-1));
balanceResult.Difference = previousDayBalanceResult != null ? balanceResult.LastBalanceResultOfDay - previousDayBalanceResult.LastBalanceResultOfDay : balanceResult.DailyDifference;
}
As indicated, performance (using dictionaries, for example) and code readability should of course be improved, but... that's the idea!
