Validate a 2-level nested array in Laravel

Is there a way to validate this array? Nothing has worked so far:
[
    {
        "transaction": {
            "user_id": 6,
            "month": 12,
            "year": 2084
        },
        "entities": [
            {
                "name": "Allan Botsford",
                "value": 3,
                "is_total": false,
                "type": "CASH"
            },
            {
                "name": "Luisa Schiller Sr.",
                "value": 6266,
                "is_total": false,
                "type": "CASH"
            },
            {
                "name": "Susie Deckow MD",
                "value": 506700,
                "is_total": false,
                "type": "CASH"
            }
        ]
    },
    {
        "transaction": {
            "user_id": 7,
            "month": 5,
            "year": 2002
        },
        "entities": [
            {
                "name": "Raquel Jast",
                "value": 7,
                "is_total": false,
                "type": "CASH"
            },
            {
                "name": "Wendell Herman I",
                "value": 4480,
                "is_total": false,
                "type": "CASH"
            },
            {
                "name": "Oceane Greenfelder DDS",
                "value": 46344,
                "is_total": false,
                "type": "CASH"
            }
        ]
    }
]
I can validate the transaction with the following rules:
[
    '*.transaction.month' => 'required|numeric',
    '*.transaction.year' => 'required|numeric',
    '*.transaction.transaction_date' => 'sometimes|date_format:Y-m-d'
]
The problem is in the nested entities array because the following rules are ignored:
return [
    '*.entities.*.is_total' => 'required|boolean',
    '*.entities.*.name' => 'required|string',
    '*.entities.*.value' => 'required|numeric',
    '*.entities.*.type' => ['required', Rule::in(CashTemporaryInvestment::TYPES)],
];
I can't find any hint in the Laravel documentation. I would appreciate any help. I am using Laravel 7.

Laravel's validator resolves wildcard rules using the following preg_match() pattern, as seen in the Validator class:
$pattern = str_replace('\*', '([^\.]+)', preg_quote($this->getPrimaryAttribute($attribute), '/'));
So a rule like '*.entities.*.is_total' => 'required|boolean' will match as long as the attribute can be reached as [0]['entities'][0]['is_total'] in the data passed to the validator.
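Putting it together, here is a minimal sketch of how the combined rule set could be applied, assuming the decoded JSON array above is exactly what reaches the validator (CashTemporaryInvestment::TYPES is your project's own constant; a placeholder list is used here):
use Illuminate\Support\Facades\Validator;
use Illuminate\Validation\Rule;

// $data is assumed to be the decoded payload shown above:
// a zero-indexed list of ['transaction' => [...], 'entities' => [...]] items.
$data = $request->all();

$validator = Validator::make($data, [
    '*.transaction.user_id' => 'required|integer',
    '*.transaction.month'   => 'required|numeric',
    '*.transaction.year'    => 'required|numeric',
    '*.entities'            => 'required|array',
    '*.entities.*.is_total' => 'required|boolean',
    '*.entities.*.name'     => 'required|string',
    '*.entities.*.value'    => 'required|numeric',
    // Placeholder values; use Rule::in(CashTemporaryInvestment::TYPES) in your project.
    '*.entities.*.type'     => ['required', Rule::in(['CASH'])],
]);

$validator->validate(); // throws a ValidationException with per-path messages on failure
If these rules live in a FormRequest, the same wildcard paths apply. One common reason nested rules appear to be ignored is that the payload reaching the validator is wrapped in an extra key (for example sent as {"data": [...]}), in which case every path needs that prefix, e.g. data.*.entities.*.name.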

Related

Logstash doesn't filter out JSON from the Twitter API

I want to remove unnecessary fields; there are many of them. I'm using the JSON filter plugin for Logstash, but it doesn't work properly: it either doesn't filter the data or simply doesn't send it to the output.
I've tried to use the mutate filter, but without success.
I want to remove, for example, the entities field, which is a top-level field, but none of my configs are working. I also want to remove some nested fields...
Here's my example JSON from the Twitter API:
{
"retweet_count": 0,
"created_at": "Mon Dec 14 18:43:09 +0000 2020",
"place": null,
"in_reply_to_user_id_str": null,
"lang": "pl",
"filter_level": "low",
"possibly_sensitive": false,
"id": 1338555139993591800,
"id_str": "1338555139993591814",
"quote_count": 0,
"is_quote_status": false,
"geo": null,
"entities": {
"symbols": [],
"user_mentions": [],
"urls": [
{
"indices": [
117,
140
],
"url": "xxx",
"expanded_url": "xxx"
}
],
"hashtags": [
{
"text": "koronawirus",
"indices": [
84,
96
]
},
{
"text": "COVID19",
"indices": [
97,
105
]
},
{
"text": "Lockdown",
"indices": [
106,
115
]
}
]
},
"timestamp_ms": "1607971389183",
"reply_count": 0,
"retweeted": false,
"text": "W Wielkiej Brytanii wykryto nowy wariant koronawirusa. Kolejne kraje z lockdownem­čĹç\n\n#koronawirus #COVID19 #Lockdown\n\nxxx",
"contributors": null,
"truncated": false,
"in_reply_to_user_id": null,
"source": "Twitter Web App",
"#timestamp": "2020-12-14T18:43:09.000Z",
"in_reply_to_screen_name": null,
"favorited": false,
"in_reply_to_status_id": null,
"user": {
"created_at": "Tue May 12 09:11:01 +0000 2009",
"profile_use_background_image": false,
"lang": null,
"contributors_enabled": false,
"profile_text_color": "000000",
"id": 39464882,
"id_str": "39464882",
"following": null,
"geo_enabled": false,
"profile_sidebar_fill_color": "000000",
"is_translator": false,
"protected": false,
"profile_image_url": "xxx",
"profile_link_color": "3B94D9",
"name": "Salon24.pl",
"profile_sidebar_border_color": "000000",
"favourites_count": 309,
"profile_background_image_url": "xxx",
"followers_count": 17473,
"description": null,
"location": "Polska",
"url": "xxx",
"profile_background_color": "000000",
"utc_offset": null,
"profile_background_image_url_https": "xxx",
"default_profile": false,
"follow_request_sent": null,
"verified": false,
"translator_type": "none",
"friends_count": 1028,
"time_zone": null,
"default_profile_image": false,
"screen_name": "Salon24pl",
"profile_image_url_https": "xxx",
"statuses_count": 48490,
"notifications": null,
"listed_count": 203,
"profile_background_tile": false
},
"in_reply_to_status_id_str": null,
"favorite_count": 0,
"#version": "1",
"coordinates": null
}
And here's my actual config:
input {
  twitter {
    id => "logstash_to_kafka_plugin"
    consumer_key => "xxx"
    consumer_secret => "xxx"
    oauth_token => "xxx"
    oauth_token_secret => "xxx"
    keywords => [ "koronawirus" ]
    full_tweet => true
    ignore_retweets => true
  }
}
filter {
  json {
    source => "message"
    remove_field => [ "[message][entities]" ]
  }
}
output {
  kafka {
    codec => json
    topic_id => "twitter_tweets"
  }
}
I've tried different ways of referring to that field, such as:
remove_field => [ "entities" ]
or
remove_field => [ "[entities]" ]
but that didn't work either.
Try adding a mutate filter with remove_field after the json filter block, so that the mutate filter executes after the fields have been created at the root by the json filter.
Your filter could look something like this:
filter {
  json {
    source => "message"
  }
  mutate {
    # also works for nested fields, which are referenced as [user][created_at]
    remove_field => ["entities", "[user][created_at]"]
  }
}

Generating data tables in elastic search

I'm trying to build a data table that consists of some calculated columns:
******************************************************
** Bidder * Request * CPM * Revenue * Response Time **
******************************************************
I've created an index which holds all the data, so my data is stored in the following format:
{
"data": {
"took": 1,
"timed_out": false,
"_shards": {
"total": 5,
"successful": 5,
"skipped": 0,
"failed": 0
},
"hits": {
"total": {
"value": 78,
"relation": "eq"
},
"max_score": 1,
"hits": [
{
"_index": "nits_media_bid_won",
"_type": "nits_media_data_collection",
"_id": "MIyt6m8BWa2IbVphmPUh",
"_score": 1,
"_source": {
"bidderCode": "appnexus",
"width": 300,
"height": 600,
"statusMessage": "Bid available",
"adId": "43d59b34fd61b5",
"requestId": "2c6d19dcc536c3",
"mediaType": "banner",
"source": "client",
"cpm": 0.5,
"creativeId": 98493581,
"currency": "USD",
"netRevenue": true,
"ttl": 300,
"adUnitCode": "/19968336/header-bid-tag-0",
"appnexus": {
"buyerMemberId": 9325
},
"meta": {
"advertiserId": 2529885
},
"originalCpm": 0.5,
"originalCurrency": "USD",
"auctionId": "a628c0c0-bd4d-4f2a-9011-82fab780910e",
"responseTimestamp": 1580190231422,
"requestTimestamp": 1580190231022,
"bidder": "appnexus",
"timeToRespond": 400,
"pbLg": "0.50",
"pbMg": "0.50",
"pbHg": "0.50",
"pbAg": "0.50",
"pbDg": "0.50",
"pbCg": null,
"size": "300x600",
"adserverTargeting": {
"hb_bidder": "appnexus",
"hb_adid": "43d59b34fd61b5",
"hb_pb": "0.50",
"hb_size": "300x600",
"hb_source": "client",
"hb_format": "banner"
},
"status": "rendered",
"params": [
{
"placementId": 13144370
}
],
"nits_account": "asjdfagsd2384vasgd19",
"nits_url": "http://nitsmedia.local/run-ad",
"session_id": "YTGpETKSk2nHwLRB6GbP",
"timestamp": "2020-01-28T05:43:51.702Z",
"geo_data": {
"continent": "North America",
"address_format": "{{recipient}}\n{{street}}\n{{city}} {{region_short}} {{postalcode}}\n{{country}}",
"alpha2": "US",
"alpha3": "USA",
"country_code": "1",
"international_prefix": "011",
"ioc": "USA",
"gec": "US",
"name": "United States of America",
"national_destination_code_lengths": [
3
],
"national_number_lengths": [
10
],
"national_prefix": "1",
"number": "840",
"region": "Americas",
"subregion": "Northern America",
"world_region": "AMER",
"un_locode": "US",
"nationality": "American",
"postal_code": true,
"unofficial_names": [
"United States",
"Vereinigte Staaten von Amerika",
"États-Unis",
"Estados Unidos",
"アメリカ合衆国",
"Verenigde Staten"
],
"languages_official": [
"en"
],
"languages_spoken": [
"en"
],
"geo": {
"latitude": 37.09024000000000143018041853792965412139892578125,
"latitude_dec": "39.44325637817383",
"longitude": -95.7128909999999990532160154543817043304443359375,
"longitude_dec": "-98.95733642578125",
"max_latitude": 71.5388001000000031126546673476696014404296875,
"max_longitude": -66.8854170000000038953658076934516429901123046875,
"min_latitude": 18.77629999999999910187398199923336505889892578125,
"min_longitude": 170.595699999999993679011822678148746490478515625,
"bounds": {
"northeast": {
"lat": 71.5388001000000031126546673476696014404296875,
"lng": -66.8854170000000038953658076934516429901123046875
},
"southwest": {
"lat": 18.77629999999999910187398199923336505889892578125,
"lng": 170.595699999999993679011822678148746490478515625
}
}
},
"currency_code": "USD",
"start_of_week": "sunday"
}
}
},
//Remaining data set....
]
},
}
}
So, per my data set, I want to fetch all unique bidderCode values (represented as Bidder in the table) and compute the corresponding values for each. For example:
Request - the total number of docs in the bucket
CPM - the sum of all cpm values divided by 1000
Revenue - the total CPM multiplied by 1000
Response time - the average of (responseTimestamp - requestTimestamp)
How can I achieve this? I'm a bit confused. I tried building the query like this:
return $this->elasticsearch->search([
    'index' => 'nits_media_bid_won',
    'body' => [
        'query' => $query,
        'aggs' => [
            'unique_bidders' => [
                'terms' => ['field' => 'bidderCode.keyword']
            ],
            'aggs' => [
                'sum' => [
                    'cpm' => [
                        'field' => 'cpm',
                        'script' => '_value / 1000'
                    ]
                ]
            ],
        ]
    ]
]);
But it shows me this error:
{
"error":{
"root_cause":[
{
"type":"x_content_parse_exception",
"reason":"[1:112] [sum] unknown field [cpm], parser not found"
}
],
"type":"x_content_parse_exception",
"reason":"[1:112] [sum] unknown field [cpm], parser not found"
},
"status":400
}
I'm new to this, so please help me out. Thanks.
ElasticSearch isn't wrong -- you've swapped the aggregation name with its type. It cannot parse the agg type cpm.
Here's the corrected query:
GET nits_media_bid_won/_search
{
  "size": 0,
  "aggs": {
    "unique_bidders": {
      "terms": {
        "field": "bidderCode.keyword",
        "size": 10
      },
      "aggs": {
        "cpm": {     <----------
          "sum": {   <----------
            "field": "cpm",
            "script": "_value / 1000"
          }
        }
      }
    }
  }
}
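To cover the remaining columns, the same terms aggregation can carry extra sub-aggregations: each bucket's doc_count already gives the Request column, a plain sum of cpm gives Revenue under the definitions above (sum divided by 1000, then multiplied by 1000), and Response Time can be an avg over a small script. A rough sketch, assuming responseTimestamp and requestTimestamp are mapped as numeric fields (if they are date fields, subtract .value.millis instead):
GET nits_media_bid_won/_search
{
  "size": 0,
  "aggs": {
    "unique_bidders": {
      "terms": { "field": "bidderCode.keyword", "size": 10 },
      "aggs": {
        "cpm": {
          "sum": { "field": "cpm", "script": "_value / 1000" }
        },
        "revenue": {
          "sum": { "field": "cpm" }
        },
        "response_time": {
          "avg": {
            "script": {
              "source": "doc['responseTimestamp'].value - doc['requestTimestamp'].value"
            }
          }
        }
      }
    }
  }
}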

Show just the names of a collection?

I have a pivot table post_tag between the posts and tags tables.
I want to show the tag names for each post in an API resource.
Resource file code snippet:
public function toArray($request)
{
    return [
        'Name' => $this->title,
        'Description' => $this->desc,
        'Image' => $this->image_path,
        'Posted By' => $this->user->name,
        'Category Name' => $this->category->name,
        'Tags' => $this->tags,
    ];
}
Relation added in Post model:
public function tags()
{
    return $this->belongsToMany(Tag::class)->withTimestamps();
}
One of the results from the API:
"Name": "first post",
"Description": "desc of the post",
"Image": "https://blog.test/uploads/post/SjvDfC1Zk5UzGO6ViRbjUQTMocwCh0lzEYW1Gufp.jpeg",
"Posted By": "test",
"Category Name": "web dev",
"Tags": [
{
"id": 1,
"name": "HTML",
"created_at": "2020-01-08 19:19:55",
"updated_at": "2020-01-08 19:19:55",
"pivot": {
"post_id": 1,
"tag_id": 1,
"created_at": "2020-01-08 19:21:40",
"updated_at": "2020-01-08 19:21:40"
}
},
{
"id": 2,
"name": "CSS",
"created_at": "2020-01-08 19:19:55",
"updated_at": "2020-01-08 19:19:55",
"pivot": {
"post_id": 1,
"tag_id": 2,
"created_at": "2020-01-08 19:21:40",
"updated_at": "2020-01-08 19:21:40"
}
},
I need to show JUST the names, like this:
"Tags" : {
'HTML',
'CSS',
'JS',
}
You can use pluck() on the relation, like this:
return [
    'Name' => $this->title,
    'Description' => $this->desc,
    'Image' => $this->image_path,
    'Posted By' => $this->user->name,
    'Category Name' => $this->category->name,
    'Tags' => $this->tags->pluck('name'),
];
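With pluck('name') the tags collection serializes to a plain JSON array of names, so the Tags key in the response should come out roughly like this:
"Tags": [
    "HTML",
    "CSS"
]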

Attempt to index document gives error: "only value lists are allowed in serialized settings"

When attempting to index the following document:
{
"branch": "master",
"classes": [
{
"content_count": 2,
"documentation": "",
"extends": [],
"generic": "",
"implements": [],
"line": 10,
"line_count": 36,
"modifiers": [
"public"
],
"name": "removeDuplicateFromString"
}
],
"commit_hash": "e53249ba2381d2f20f3d4493ad70e2da0abb3b05",
"contributors": [
{
"id": "7676016",
"name": "varunu28",
"url": "https://github.com/varunu28"
}
],
"enums": [],
"fields": [],
"filename": "removeDuplicateFromString.java",
"imports": [
{
"name": "java.io.BufferedReader",
"wildcard": false
},
{
"name": "java.io.InputStreamReader",
"wildcard": false
}
],
"interfaces": [],
"license": "",
"methods": [
{
"cyclomatic_complexity": 1,
"documentation": "",
"generic": "",
"line": 11,
"line_count": 9,
"modifiers": [
"public",
"static"
],
"name": "main",
"params": [
{
"name": "args",
"type": "String[]"
}
],
"parent": "removeDuplicateFromString",
"type_": "void"
},
{
"cyclomatic_complexity": 5,
"documentation": "",
"generic": "",
"line": 29,
"line_count": 16,
"modifiers": [
"public",
"static"
],
"name": "removeDuplicate",
"params": [
{
"name": "s",
"type": "String"
}
],
"parent": "removeDuplicateFromString",
"type_": "String"
}
],
"number_forks": 1695,
"number_stars": 4000,
"number_watchs": 394,
"package": "",
"path": "Others",
"repository": "TheAlgorithms/Java"
}
I get the following error:
{"error":{"root_cause":[{"type":"settings_exception","reason":"Failed to load settings from [{\"interfaces\":[],\"imports\":[{\"name\":\"java.io.BufferedReader\",\"wildcard\":false},{\"name\":\"java.io.InputStreamReader\",\"wildcard\":false}],\"package\":\"\",\"methods\":[{\"parent\":\"removeDuplicateFromString\",\"line_count\":9,\"line\":11,\"documentation\":\"\",\"name\":\"main\",\"cyclomatic_complexity\":1,\"modifiers\":[\"public\",\"static\"],\"params\":[{\"name\":\"args\",\"type\":\"String[]\"}],\"type_\":\"void\",\"generic\":\"\"},{\"parent\":\"removeDuplicateFromString\",\"line_count\":16,\"line\":29,\"documentation\":\"\",\"name\":\"removeDuplicate\",\"cyclomatic_complexity\":5,\"modifiers\":[\"public\",\"static\"],\"params\":[{\"name\":\"s\",\"type\":\"String\"}],\"type_\":\"String\",\"generic\":\"\"}],\"number_forks\":1695,\"classes\":[{\"implements\":[],\"line_count\":36,\"extends\":[],\"line\":10,\"documentation\":\"\",\"name\":\"removeDuplicateFromString\",\"content_count\":2,\"modifiers\":[\"public\"],\"generic\":\"\"}],\"repository\":\"TheAlgorithms/Java\",\"branch\":\"master\",\"commit_hash\":\"e53249ba2381d2f20f3d4493ad70e2da0abb3b05\",\"enums\":[],\"path\":\"Others\",\"license\":\"\",\"filename\":\"removeDuplicateFromString.java\",\"number_watchs\":394,\"contributors\":[{\"name\":\"varunu28\",\"id\":\"7676016\",\"url\":\"https://github.com/varunu28\"}],\"fields\":[],\"number_stars\":4000}]"}],"type":"settings_exception","reason":"Failed to load settings from [{\"interfaces\":[],\"imports\":[{\"name\":\"java.io.BufferedReader\",\"wildcard\":false},{\"name\":\"java.io.InputStreamReader\",\"wildcard\":false}],\"package\":\"\",\"methods\":[{\"parent\":\"removeDuplicateFromString\",\"line_count\":9,\"line\":11,\"documentation\":\"\",\"name\":\"main\",\"cyclomatic_complexity\":1,\"modifiers\":[\"public\",\"static\"],\"params\":[{\"name\":\"args\",\"type\":\"String[]\"}],\"type_\":\"void\",\"generic\":\"\"},{\"parent\":\"removeDuplicateFromString\",\"line_count\":16,\"line\":29,\"documentation\":\"\",\"name\":\"removeDuplicate\",\"cyclomatic_complexity\":5,\"modifiers\":[\"public\",\"static\"],\"params\":[{\"name\":\"s\",\"type\":\"String\"}],\"type_\":\"String\",\"generic\":\"\"}],\"number_forks\":1695,\"classes\":[{\"implements\":[],\"line_count\":36,\"extends\":[],\"line\":10,\"documentation\":\"\",\"name\":\"removeDuplicateFromString\",\"content_count\":2,\"modifiers\":[\"public\"],\"generic\":\"\"}],\"repository\":\"TheAlgorithms/Java\",\"branch\":\"master\",\"commit_hash\":\"e53249ba2381d2f20f3d4493ad70e2da0abb3b05\",\"enums\":[],\"path\":\"Others\",\"license\":\"\",\"filename\":\"removeDuplicateFromString.java\",\"number_watchs\":394,\"contributors\":[{\"name\":\"varunu28\",\"id\":\"7676016\",\"url\":\"https://github.com/varunu28\"}],\"fields\":[],\"number_stars\":4000}]","caused_by":{"type":"illegal_state_exception","reason":"only value lists are allowed in serialized settings"}},"status":500}
From this I've gathered that the main issue is described either in the part saying:
{"type":"illegal_state_exception","reason":"only value lists are allowed in serialized settings"}}
Or:
"error":{"root_cause":[{"type":"settings_exception","reason":"Failed to load settings from [{\"interfaces\":[],\"imports\": ........
But I cannot find any information about this error or what could be causing it. I've tried indexing both into a predefined index with mappings and into a non-existing index. Nothing seems to work.
Why can't I index this document?
It turns out that, as Farid mentioned in the comments section, I was using the wrong command when indexing from the command line.
The correct command to run is:
curl -X POST -H 'Content-Type: application/json' [index location] -d [data]
The key is to use POST rather than PUT, which is what I was doing.
Adding this for those using Kibana Dev Tools.
The key is to specify a document type after the index name when adding the document:
{
request body (document) goes here.
}
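For reference, a concrete shape of the indexing request (the host localhost:9200 and the index name code_documents are placeholders for illustration; on Elasticsearch 7+ the generic _doc works as the document type):
curl -X POST 'http://localhost:9200/code_documents/_doc' \
  -H 'Content-Type: application/json' \
  -d @document.json
POST to /{index}/_doc lets Elasticsearch generate the document id, while PUT requires an explicit id (PUT /{index}/_doc/{id}). A PUT against the bare index URL is treated as a create-index request, which is most likely why the document body here was parsed as index settings and rejected with the settings_exception.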

Filter on aggregated bucket keys?

Given a data model structured like this:
{
Id: 123,
"string_facet": [
{
"name": "make",
"value": "Audi"
},
{
"name": "carListType",
"value": "PERSON EU"
},
{
"name": "modelType",
"value": ""
},
{
"name": "engineBrand",
"value": "APT"
},
{
"name": "typeDescription",
"value": "8D2"
}
],
"number_facet": [
{
"name": "typeNumber",
"value": 4614
},
{
"name": "serialNumber",
"value": 2
},
{
"name": "engineSize",
"value": 18
},
{
"name": "horsePower",
"value": 125
},
{
"name": "kw",
"value": 92
},
{
"name": "engineVolume",
"value": 1781
},
{
"name": "listType",
"value": 0
}
],
"dateTime_facet": [
{
"name": "fromDate",
"value": "1999-04-01T00:00:00"
},
{
"name": "toDate",
"value": "2000-10-01T00:00:00"
}
]
}
I want to aggregate facet names and the values per name. However, I'm only interested in facets that have specific names, such as make and engineBrand. Note that the facets are mapped as a nested type.
I have tried the following .NEST expression, but it still returns all of the facet names.
.Global("global", g => g
.Aggregations(ag => ag
.Filter("global_makes", f => f
.Filter(ff => ff
.Nested(n => n
.Path("string_facet")
.Filter(pf => pf.Term("string_facet.name", "make")))
)
.Aggregations(agg => agg
.Nested("nested_string_facet", nested => nested
.Path("string_facet")
.Aggregations(stringFacet => stringFacet
.Terms("name", nameAgg => nameAgg.Field("string_facet.name").Size(0)
.Aggregations(nameAggNext => nameAggNext
.Terms("value", valueAgg => valueAgg.Field("string_facet.value").Size(0))
)
)
)
)
)
)
)
)
);
I have a filter within global (to escape the scope of the passed-in query), and then filter only on string_facet.name matching "make", but the results still include all of the other names as well. How do I filter the aggregation so that it only includes buckets where name is "make"?
This helped: https://github.com/elastic/elasticsearch/issues/4449
Essentially, I had to move the filter part deeper into the aggregation.
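In raw query DSL (shown as a sketch rather than the exact .NEST translation; the index name cars is a placeholder, and the fields may need .keyword suffixes depending on the mapping), moving the filter inside the nested aggregation looks like this:
GET cars/_search
{
  "size": 0,
  "aggs": {
    "facets": {
      "nested": { "path": "string_facet" },
      "aggs": {
        "only_make": {
          "filter": { "term": { "string_facet.name": "make" } },
          "aggs": {
            "name": {
              "terms": { "field": "string_facet.name" },
              "aggs": {
                "value": { "terms": { "field": "string_facet.value" } }
              }
            }
          }
        }
      }
    }
  }
}
The filter placed outside the nested aggregation only decides which documents enter the aggregation; once inside the nested context, every string_facet entry of a matching document is still counted, which is why the term filter has to be repeated as a sub-aggregation at the nested level.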
