Combine and Sum the data with the same date - laravel

Please forgive my bad English; it's not my first language.
I'm stuck on how to combine and sum some of the data.
I have this data in my first collection.
First collection:
[
{
"dateTime": "10/01/2022",
"gc": 0,
"credit": 20,
"debit": 1000,
"voucher": 0,
"e_gift": 0,
"cash": 3000
},
{
"dateTime": "10/02/2022",
"gc": 0,
"credit": 10,
"debit": 10,
"voucher": 0,
"e_gift": 0,
"cash": 100
}
]
Second collection:
[
{
"dateTime": "10/01/2022",
"Gross Total": 6000,
"Discount": 300
},
{
"dateTime": "10/02/2022",
"Gross Total": 4000,
"Discount": 100
}
]
The result I want is for them to be combined by date, with all the payment fields from the first collection added to the gross total.
The result should look like this:
[
{
"dateTime": "10/01/2022",
"gc": 0,
"credit": 20,
"debit": 1000,
"voucher": 0,
"e_gift": 0,
"cash": 3000
"Gross Total": 6000,
"Discount": 300
"total amount" : 10020
},
{
"dateTime": "10/02/2022",
"gc": 0,
"credit": 10,
"debit": 10,
"voucher": 0,
"e_gift": 0,
"cash": 100
"Gross Total": 4000,
"Discount": 100,
"total amount" : 4120
}
]

Let's call them $collection1 and $collection2. Using Collection methods such as merge and push, this should work:
$merged = collect();
foreach ($collection1 as $e) {
    // Find the row in the second collection with the same date and merge its fields in
    $temp = collect($e)->merge($collection2->where('dateTime', $e['dateTime'])->first());
    $merged->push($temp);
}
return $merged;
Note that I created $collection2 from the sample data in your question:
$collection2 = collect([
[
"dateTime"=> "10/01/2022",
"Gross Total"=> 6000,
"Discount"=> 300
],
[
"dateTime"=> "10/02/2022",
"Gross Total"=> 4000,
"Discount"=> 100
],
]);
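Your expected output also includes a "total amount" key. Here is a minimal sketch of how you could compute it while merging, assuming "total amount" is simply the Gross Total plus every payment field from the first collection (which matches the numbers in your expected result):
$merged = collect();
foreach ($collection1 as $e) {
    $row = collect($e)->merge($collection2->where('dateTime', $e['dateTime'])->first());
    // Gross total plus the payment fields, e.g. 6000 + 20 + 1000 + 3000 = 10020
    $row['total amount'] = $row['Gross Total']
        + $e['gc'] + $e['credit'] + $e['debit']
        + $e['voucher'] + $e['e_gift'] + $e['cash'];
    $merged->push($row);
}
return $merged;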
Please check the Laravel Collections documentation for more details.

Related

Retrieving data from REST API

I am trying to implement my first Laravel project that consumes APIs, more specifically the Sportmonks API. What is the best way to fetch the data and display it in my view?
I have managed to display some of the data, but I do not know the correct way to display the data from the "standings", as well as from the tables inside it (overall, home, away, total).
The API returns:
{
"data": [{
"id": 77447501,
"name": "1st Phase",
"league_id": 501,
"season_id": 17141,
"round_id": 195000,
"round_name": 33,
"type": "Group Stage",
"stage_id": 77447501,
"stage_name": "1st Phase",
"resource": "stage",
"standings": {
"data": [{
"position": 1,
"team_id": 62,
"team_name": "Rangers",
"round_id": 195000,
"round_name": 33,
"group_id": null,
"group_name": null,
"overall": {
"games_played": 33,
"won": 28,
"draw": 5,
"lost": 0,
"goals_scored": 78,
"goals_against": 10,
"points": 89
},
"home": {
"games_played": 16,
"won": 16,
"draw": 0,
"lost": 0,
"goals_scored": 47,
"goals_against": 2,
"points": 48
},
"away": {
"games_played": 17,
"won": 12,
"draw": 5,
"lost": 0,
"goals_scored": 31,
"goals_against": 8,
"points": 41
},
"total": {
"goal_difference": "68",
"points": 89
},
"result": "Championship Round",
"points": 89,
"recent_form": "WWWWD",
"status": null
}],....
}
}]
}
Controller
public function index() {
    $response = Http::get('apiurl');
    // json() already decodes the response body, so json_decode() is not needed
    $result = $response->json();
    $matches = $result['data'];
    return view('/api', compact('matches'));
}
Instead of working with the decoded JSON array, you can work with an object:
$response = Http::get('apiurl');
$result = $response->object();  // decode the body into a stdClass object instead of an array
$matches = $result->data;
return view('/api', compact('matches'));
Then in your view:
@foreach($matches as $match)
    @foreach($match->standings->data as $standing)
        {{ $standing->team_name ?? null }}
    @endforeach
@endforeach
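To show the nested tables (overall, home, away, total), you can access their fields the same way; a sketch using the field names from the sample response above:
@foreach($matches as $match)
    @foreach($match->standings->data as $standing)
        {{-- Each standings row carries sub-objects for the overall/home/away/total tables --}}
        {{ $standing->position }}. {{ $standing->team_name }}:
        overall {{ $standing->overall->won }}W {{ $standing->overall->draw }}D {{ $standing->overall->lost }}L,
        home {{ $standing->home->points }} pts,
        away {{ $standing->away->points }} pts,
        goal difference {{ $standing->total->goal_difference }}
    @endforeach
@endforeach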

Convert Flash Coordinate into GeoJSON

I have a legacy .swf file that my team used to create a custom map.
The decoded .swf file looks like this, in the following format:
{
"Signature": "CWS",
"Version": 8,
"FileLength": 87736,
"FrameSize": {
"Xmin": 0,
"Xmax": 14400,
"Ymin": 0,
"Ymax": 10000
},
"FrameRate": 12,
"FrameCount": 1,
"Tags": [
{
"TagName": "FileAttributes",
"Length": 4,
"Reserved": 0,
"HasMetaData": 0,
"SWFFlagsAS3": 0,
"SWFFlagsNoCrossDomainCache": 0,
"SWFFlagsUseNetwork": 0,
"UNDEFINED": 0
},
{
"TagName": "SetBackgroundColor",
"Length": 3,
"BackgroundColor": [
51,
51,
51
]
},
{
"TagName": "Protect",
"Length": 0
},
{
"TagName": "DefineShape4",
"Length": 309,
"ShapeId": 1,
"ShapeBounds": {
"Xmin": 10629,
"Xmax": 12137,
"Ymin": 4084,
"Ymax": 4748
},
"EdgeBounds": {
"Xmin": 10630,
"Xmax": 12136,
"Ymin": 4085,
"Ymax": 4747
},
"Reserved": 0,
"UsesFillWindingRule": 0,
"UsesNonScalingStrokes": 0,
"UsesScalingStrokes": 1,
"Shapes": {
"FillStyles": [
{
"FillStyleType": 0,
"FillStyleName": "solid fill",
"Color": [
255,
255,
102,
255
]
}
],
"LineStyles": [
{
"Width": 2,
"StartCapStyle": 0,
"JoinStyle": 0,
"HasFillFlag": 0,
"NoHScaleFlag": 0,
"NoVScaleFlag": 0,
"PixelHintingFlag": 0,
"Reserved": 0,
"NoClose": 0,
"EndCapStyle": 0,
"Color": [
255,
255,
255,
255
]
},
{
"Width": 2,
"StartCapStyle": 0,
"JoinStyle": 0,
"HasFillFlag": 0,
"NoHScaleFlag": 0,
"NoVScaleFlag": 0,
"PixelHintingFlag": 0,
"Reserved": 0,
"NoClose": 0,
"EndCapStyle": 0,
"Color": [
255,
255,
102,
255
]
}
],
"FillBits": 1,
"LineBits": 2,
"ShapeRecords": [
{
"RecordType": "stylechange",
"MoveDeltaX": 10630,
"MoveDeltaY": 4306,
"LineStyle": 1
},
{
"RecordType": "straightedge",
"LineType": "General",
"DeltaX": 23,
"DeltaY": -1
}
]
}
}
]
}
What format is this, and is there a way to convert it to GeoJSON so I can use it with d3.js?
To be specific, this is the data for a US combatant command (COCOM) map. I could not find a GeoJSON version of this map anywhere on the Internet, so my only hope is to convert the legacy data into GeoJSON.
I ended up drawing my own COCOM map using geojson.io. I don't think there is a simple way to convert ShapeRecords to GeoJSON, since they use a completely different coordinate system.
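If you still want to try, the coordinates in the dump are in twips (1/20 of a pixel), and the edge records are deltas from the current pen position; the move in a stylechange record appears to be absolute here (it matches EdgeBounds.Xmin/Ymin). A rough sketch of recovering the outline points, which still leaves you with screen coordinates rather than longitude/latitude:
// Rough sketch: accumulate absolute points (in pixels) from the ShapeRecords.
// Assumes the stylechange move is absolute and straightedge deltas are relative;
// 'shape.json' is a hypothetical file holding the dump shown above.
$swf = json_decode(file_get_contents('shape.json'), true);
foreach ($swf['Tags'] as $tag) {
    if (($tag['TagName'] ?? '') !== 'DefineShape4') {
        continue;
    }
    $points = [];
    $x = $y = 0;
    foreach ($tag['Shapes']['ShapeRecords'] as $rec) {
        if ($rec['RecordType'] === 'stylechange' && isset($rec['MoveDeltaX'])) {
            $x = $rec['MoveDeltaX'];
            $y = $rec['MoveDeltaY'];
        } elseif ($rec['RecordType'] === 'straightedge') {
            $x += $rec['DeltaX'];
            $y += $rec['DeltaY'];
        } else {
            continue; // curved edges and end records are ignored in this sketch
        }
        $points[] = [$x / 20, $y / 20]; // twips -> pixels
    }
    // Wrap the outline as GeoJSON geometry; the coordinates are still pixels,
    // so you would still have to georeference them onto lon/lat yourself.
    echo json_encode([
        'type' => 'Feature',
        'geometry' => ['type' => 'LineString', 'coordinates' => $points],
        'properties' => ['shapeId' => $tag['ShapeId']],
    ]) . PHP_EOL;
}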

How can I resolve the increase in index size when using nested objects in elasticsearch?

The total number of documents is 1 billion.
When I configure the index with some fields mapped as nested objects, the document count increases and the index size increases.
There are about 20 nested objects per document.
When I index 1 billion documents, the indexed (Lucene) document count becomes 20 billion, and the index size is about 20 TB.
However, when I remove the nested objects, the document count is 1 billion, and the index size is about 5 TB.
Simply removing the nested objects is not an option; I cannot provide my services with that index structure.
I know why nested objects produce a higher document count than a simple object configuration.
But I am asking why the index is four times larger, and how to fix it.
Version of Elasticsearch: 5.1.1
The sample data is as follows.
Fields mapped as nested objects: idds, ishs, resources, versions
{
"fileType": {
"asdFormat": 1
},
"dac": {
"pe": {
"cal": {
"d1": -4634692645508395000,
"d2": -5805223225419042000,
"d3": -1705264433
},
"bytes": "6a7068e0",
"entry": 0,
"count": 7,
"css": {
"idh": 0,
"ish": 0,
"ifh": 0,
"ioh": 0,
"ish": 0,
"ied": 0,
"exp": 0,
"imp": 0,
"sec": 0
},
"ff": {
"field1": 23117,
"field2": 144,
"field3": 3,
"field4": 0,
"field5": 4,
"field6": 0,
"field7": 65535,
"field8": 0,
"field9": 184,
"field10": 0,
"field11": 0,
"field12": 0,
"field13": 64,
"field14": 0,
"field15": 40104,
"field16": 64563,
"field17": 0,
"field18": 0,
"field19": 0,
"field20": 0,
"field21": 0,
"field22": 0,
"field23": 0,
"field24": 0,
"field25": 0,
"field26": 0,
"field27": 0,
"field28": 0,
"field29": 0,
"field30": 0,
"field31": 224
},
"ifh": {
"mc": 332,
"nos": 3,
"time": 1091599505,
"ps": 0,
"ns": 0,
"soh": 224,
"chart": 271
},
"ioh": {
"magic": 267,
"mlv": 7,
"nlv": 10,
"soc": 80384,
"soid": 137216,
"soud": 0,
"aep": 70290,
"boc": 4096,
"bod": 86016,
"aib": "16777216",
"si": 4096,
"fa": 512,
"mosv": 5,
"nosv": 1,
"miv": 5,
"niv": 1,
"msv": 4,
"nsv": 0,
"wv": 0,
"si": 262144,
"sh": 1024,
"cs": 0,
"ss": 2,
"dllchart": 32768,
"ssr": "262144",
"ssc": "262144",
"ssh": "1048576",
"shc": "4096",
"lf": 0,
"nor": 16
},
"idds": [
{
"id": 1,
"address": 77504,
"size": 300
},
{
"id": 2,
"address": 106496,
"size": 134960
},
{
"id": 6,
"address": 5264,
"size": 28
},
{
"id": 11,
"address": 592,
"size": 300
},
{
"id": 12,
"address": 4096,
"size": 1156
}
],
"ishs": [
{
"id": 0,
"name": ".text",
"size": 79920,
"address": 4096,
"srd": 80384,
"ptr": 1024,
"ptrl": 0,
"ptl": 0,
"nor": 0,
"nol": 0,
"chart": 3758096480,
"ex1": 60404022,
"ex2": 61903965,
"ex": 61153993.5
},
{
"id": 1,
"name": ".data",
"size": 17884,
"address": 86016,
"srd": 2048,
"ptr": 81408,
"ptrl": 0,
"ptl": 0,
"nor": 0,
"nol": 0,
"chart": 3221225536,
"ex1": 27817394,
"ex2": -1,
"ex": 27817394
},
{
"id": 2,
"name": ".rsrc",
"size": 155648,
"address": 106496,
"srd": 135680,
"ptr": 83456,
"ptrl": 0,
"ptl": 0,
"nor": 0,
"nol": 0,
"chart": 3758096448,
"ex1": 38215005,
"ex2": 46960547,
"ex": 42587776
}
],
"resources": [
{
"id": 2,
"count": 3,
"hash": 658696779440676200
},
{
"id": 3,
"count": 14,
"hash": 4671329014159995000
},
{
"id": 5,
"count": 30,
"hash": -6413921454731808000
},
{
"id": 6,
"count": 17,
"hash": 8148183923057157000
},
{
"id": 14,
"count": 4,
"hash": 8004262029246967000
},
{
"id": 16,
"count": 1,
"hash": 7310592488525726000
},
{
"id": 2147487240,
"count": 2,
"hash": -7466967570237519000
}
],
"upx": {
"path": "xps",
"d64": 3570326159822345700
},
"versions": [
{
"language": 1042,
"codePage": 1200,
"companyName": "Microsoft Corporation",
"fileDescription": "Files and Settings Transfer Wizard",
"fileVersion": "5.1.2600.2180 (xpsp_sp2_rtm.040803-2158)",
"internalName": "Microsoft",
"legalCopyright": "Copyright (C) Microsoft Corp. 1999-2000",
"originalFileName": "calc.exe",
"productName": "Microsoft(R) Windows (R) 2000 Operating System",
"productVersion": "5.1.2600.2180"
}
],
"import": {
"dll": [
"GDI32.dll",
"KERNEL32.dll",
"USER32.dll",
"ole32.dll",
"ADVAPI32.dll",
"COMCTL32.dll",
"SHELL32.dll",
"msvcrt.dll",
"comdlg32.dll",
"SHLWAPI.dll",
"SETUPAPI.dll",
"Cabinet.dll",
"LOG.dll",
"MIGISM.dll"
],
"count": 14,
"d1": -149422985349905340,
"d2": -5344971616648705000,
"d3": 947564411044974800
},
"ddSec0": {
"d1": -3007779250746558000,
"d4": -2515772085422514700
},
"ddSec2": {
"d2": -4422408392580008000,
"d4": -8199520081862749000
},
"ddSec3": {
"d1": -8199520081862749000
},
"cdp": {
"d1": 787971,
"d2": 39,
"d3": 101980696,
"d4": 3,
"d5": 285349133
},
"cde": {
"d1": 67242500,
"d2": 33687042,
"d3": 218303490,
"d4": 1663632132,
"d5": 0
},
"cdm": {
"d1": 319293444,
"d2": 2819,
"d3": 168364553,
"d4": 50467081,
"d5": 198664
},
"cdb": {
"d1": 0,
"d2": 0,
"d3": 0,
"d4": 0,
"d5": 0
},
"mm": {
"d0": -3545367393134139000,
"d1": 1008464166428372900,
"d2": -6313842304565328000,
"d3": -5015640502060250000
},
"ser": 17744,
"ideal": 0,
"map": 130,
"ol": 0
}
},
"fileSize": 219136
}
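The nested fields are declared in the mapping roughly like this (a sketch; "my_type" is a placeholder for the actual type name):
{
  "mappings": {
    "my_type": {
      "properties": {
        "dac": {
          "properties": {
            "pe": {
              "properties": {
                "idds": { "type": "nested" },
                "ishs": { "type": "nested" },
                "resources": { "type": "nested" },
                "versions": { "type": "nested" }
              }
            }
          }
        }
      }
    }
  }
}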

Compute difference between field and aggregated field

I have to run a complex aggregation, and one of its steps is computing the sum of the sold_qty field; I then need to subtract that sum from the non-aggregated field all_qty. My data looks like:
{item_id: XXX, sold_qty: 1, all_qty: 20, price: 100 }
{item_id: XXX, sold_qty: 3, all_qty: 20, price: 100 }
{item_id: YYY, sold_qty: 1, all_qty: 20, price: 80 }
These are transactions from an offer. The all_qty and price fields are redundant: they hold single values from another structure (offers) and are just duplicated in all transactions from a single offer (identified by item_id).
In terms of SQL, what I need is:
SELECT (all_qty - SUM(sold_qty)) * price GROUP BY item_id
What I've done so far is this aggregation:
{
  "query": { "term": { "seller": 9059247 } },
  "size": 0,
  "aggs": {
    "group_by_offer": {
      "terms": { "field": "item_id", "size": 0 },
      "aggs": {
        "sold_sum": { "sum": { "field": "sold_qty" } }
      }
    }
  }
}
But I don't know what to do next to achieve my goal.
Since you are already storing redundant fields, if I were you, I would also store the result of all_price = all_qty * price and sold_price = sold_qty * price. It is not mandatory, but it will be faster at query time than running scripts to make the same computation.
{item_id: XXX, sold_qty: 1, sold_price: 100, all_qty: 20, price: 100, all_price: 2000 }
{item_id: XXX, sold_qty: 3, sold_price: 300, all_qty: 20, price: 100, all_price: 2000 }
{item_id: YYY, sold_qty: 1, sold_price: 80, all_qty: 20, price: 80, all_price: 1600 }
All you'd have to do next is to sum sold_price and average all_price and simply get the difference between both using a bucket_script pipeline aggregation:
{
"query": {
"term": {
"seller": 9059247
}
},
"size": 0,
"aggs": {
"group_by_offer": {
"terms": {
"field": "item_id",
"size": 0
},
"aggs": {
"sold_sum": {
"sum": {
"field": "sold_price"
}
},
"all_sum": {
"avg": {
"field": "all_price"
}
},
"diff": {
"bucket_script": {
"buckets_path": {
"sold": "sold_sum",
"all": "all_sum"
},
"script": "params.all - params.sold"
}
}
}
}
}
}
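If you would rather not store the extra fields, here is a sketch of the same computation using only your original fields (assuming price and all_qty are constant per item_id, so max simply picks that single value):
{
  "query": { "term": { "seller": 9059247 } },
  "size": 0,
  "aggs": {
    "group_by_offer": {
      "terms": { "field": "item_id" },
      "aggs": {
        "sold_sum": { "sum": { "field": "sold_qty" } },
        "all_qty": { "max": { "field": "all_qty" } },
        "unit_price": { "max": { "field": "price" } },
        "diff": {
          "bucket_script": {
            "buckets_path": {
              "sold": "sold_sum",
              "all": "all_qty",
              "price": "unit_price"
            },
            "script": "(params.all - params.sold) * params.price"
          }
        }
      }
    }
  }
}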

GetStats duration and interval parameters, clarifying API documentation for Jelastic API

https://docs.jelastic.com/api/?class=environment.Control&member=GetStats
At the above link in the Jelastic API documentation for the GetStats method, there are two parameters, duration and interval.
When querying the API, I can't figure out how these two parameters interact with each other.
If I query with the below, I would expect 100 records at a resolution of 1 minute:
/1.0/environment/control/rest/getstats?domain=[myDomain]&session=[MySession]&duration=6000&interval=60&nodeid=[MyNode]
What I get back is 4 records, one for each hour, so I'm unsure how the parameters work.
Should I be using GetSumStats?
My final question: what format are the cpu and mem stats in? MHz and bytes?
{
"iops_used": 0,
"duration": 3600,
"cpumhz": 3,
"start": "2016-05-03 08:00:00",
"disk": 2141,
"mem": 194840,
"cpu": 12254,
"capacity": 0,
"net": {
"in_int": 703019,
"out_int": 566947,
"in_ext": 46222,
"out_ext": 367209
}
},
{
"iops_used": 0,
"duration": 3600,
"cpumhz": 3,
"start": "2016-05-03 09:00:00",
"disk": 2141,
"mem": 171992,
"cpu": 10076,
"capacity": 0,
"net": {
"in_int": 156703,
"out_int": 314023,
"in_ext": 12627,
"out_ext": 13535
}
},
{
"iops_used": 0,
"duration": 3580,
"cpumhz": 3,
"start": "2016-05-03 10:00:00",
"disk": 2141,
"mem": 172400,
"cpu": 11198,
"capacity": 0,
"net": {
"in_int": 515521,
"out_int": 551317,
"in_ext": 10329,
"out_ext": 17161
}
},
{
"iops_used": 0,
"duration": 3601,
"cpumhz": 3,
"start": "2016-05-03 11:00:00",
"disk": 2141,
"mem": 172610,
"cpu": 10032,
"capacity": 0,
"net": {
"in_int": 153394,
"out_int": 310694,
"in_ext": 10285,
"out_ext": 11210
}
}
@dlearious, to use interval = 60 you should set the duration value to 3600. This is because Jelastic keeps detailed data hourly.
Also, the minimal interval you can use is 20.
Jelastic reports cpu in milliseconds and mem in bytes.
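So, reusing your example URL, a one-hour query at one-minute resolution would look like this (placeholders as in your question); it should return 60 records of 60 seconds each:
/1.0/environment/control/rest/getstats?domain=[myDomain]&session=[MySession]&duration=3600&interval=60&nodeid=[MyNode]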