I'm using the laravel-mysql-spatial package to store geo-coordinates in the database. When I use it elsewhere, everything works fine, but when I use it in an observer I get this error:
Call to undefined method Grimzy\LaravelMysqlSpatial\Eloquent\SpatialExpression::getLat()
The following code does not give me a Point object:
public function created(Beneficiary $beneficiary)
{
dd($beneficiary);
}
But when I retrieve the data again using the created model's id, as below, it works fine:
public function created(Beneficiary $beneficiary)
{
$beneficiary = Beneficiary::find($beneficiary->id);
dd($beneficiary);
}
However, the above is not considered good practice: I already have the object and I'm making another query just to fetch it again.
Expected result (this is what I get after re-querying the same data):
#attributes: array:19 [▼
"id" => 95
"name" => "Test beneficiary"
"phone" => "80572*****"
"coordinates" => Point {#547 ▼
#lat: 30.3165
#lng: 78.0322
}
This is what I'm actually getting in the observer:
#attributes: array:16 [▼
"name" => "Test beneficiary"
"phone" => "80572*****"
"coordinates" => SpatialExpression {#521 ▼
#value: Point {#510 ▼
#lat: 30.3165
#lng: 78.0322
}
}
Do the following:
$p = Point::fromWKT($elm->coordinates->getSpatialValue());
Now you can use:
$p->getLng()
$p->getLat()
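Applied inside the observer from the question, that could look like this (a minimal sketch; it only assumes you want the latitude/longitude right after creation, without re-querying):

// In the observer class (add "use Grimzy\LaravelMysqlSpatial\Types\Point;" at the top)
public function created(Beneficiary $beneficiary)
{
    // Right after the insert, "coordinates" still holds a SpatialExpression,
    // so rebuild a Point from its WKT value instead of calling find() again.
    $point = Point::fromWKT($beneficiary->coordinates->getSpatialValue());

    $lat = $point->getLat();
    $lng = $point->getLng();
}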
To begin with, the date format that is stored in the index is 2021-09-16T14:06:02.000000Z.
When I log in with the option to remember the user and then I log out, I get the following error.
The response from Elasticsearch is:
array:3 [▼
"took" => 1
"errors" => true
"items" => array:1 [▼
0 => array:1 [▼
"index" => array:5 [▼
"_index" => "users"
"_type" => "_doc"
"_id" => "313"
"status" => 400
"error" => array:3 [▼
"type" => "mapper_parsing_exception"
"reason" => "failed to parse field [created_at] of type [date] in document with id '313'. Preview of field's value: '2021-09-16 11:37:49'"
"caused_by" => array:3 [▼
"type" => "illegal_argument_exception"
"reason" => "failed to parse date field [2021-09-16 11:37:49] with format [strict_date_optional_time||epoch_millis]"
"caused_by" => array:2 [▼
"type" => "date_time_parse_exception"
"reason" => "Failed to parse with all enclosed parsers"
]
]
]
]
]
]
]
This happens because when a user logs out, the remember_token attribute is modified and since the User model is modified, the index is updated.
The problem is that when it tries to update the index, the date format it tries to store is no longer 2021-09-16T14:06:02.000000Z.
Instead, the date format is now 2021-09-16 11:37:49, so there is a conflict between the date format already in the index and the format it tries to store.
This happens only when the framework updates the User model, when a user logs out.
This does not happen if I update the attribute of any model myself.
UPDATED
I just noticed that when Laravel updates the remember_token, it disables the timestamps, and that is why the date format changes to 2021-09-16 11:37:49.
However, I still don't know how to fix this issue.
It seems that your application breaks the date format rule required by Elasticsearch. The date type's format defaults to strict_date_optional_time||epoch_millis; see the docs.
Either you fix your application so that it writes 2021-09-16T14:06:02.000000Z (or just 2021-09-16) rather than 2021-09-16 11:37:49, because the default format requires that; see the doc on built-in formats:
date_optional_time or strict_date_optional_time
A generic ISO datetime parser, where the date must include the year at a minimum, and the time (separated by T), is optional. Examples: yyyy-MM-dd'T'HH:mm:ss.SSSZ or yyyy-MM-dd.
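If you go the application route, here is a minimal sketch (it assumes the index is populated through Laravel Scout's toSearchableArray() and that the app timezone is UTC; adjust to however the document is actually built):

// In app/Models/User.php
public function toSearchableArray(): array
{
    $data = $this->toArray();

    // Always send an ISO-8601 value, even when Laravel disabled timestamps
    // for the update (e.g. when remember_token is refreshed on logout).
    $data['created_at'] = optional($this->created_at)->toJSON(); // 2021-09-16T14:06:02.000000Z
    $data['updated_at'] = optional($this->updated_at)->toJSON();

    return $data;
}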
Or you change the mapping of your Elasticsearch index to allow the 2021-09-16 11:37:49 format by using multiple date formats in the mapping. You will need to explicitly set the field's type in the index (and perform a _reindex afterwards to pull the data into your new index if necessary).
PUT my-index-000001
{
  "mappings": {
    "properties": {
      "created_at": {
        "type": "date",
        "format": "yyyy-MM-dd HH:mm:ss||strict_date_optional_time||epoch_millis"
      }
    }
  }
}
The yyyy-MM-dd HH:mm:ss format should be able to parse your data entry similar to 2021-09-16 11:37:49.
Hopefully this will help.
I just faced the same issue and this format solved my problem: 2023-01-10 20:15:25.000+0300.
Other formats I tried that did not work:
2022-01-01
2021-09-16 11:37:49
2023-01-19 05:30:00.000Z
2023-01-19 05:30:00.0000
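If you need to produce that working format from PHP/Carbon (an illustrative snippet, not part of the original answer; the timezone is only an example), something like this should do it:

use Carbon\Carbon;

// 'v' = milliseconds, 'O' = offset without a colon, e.g. +0300
echo Carbon::now('Europe/Istanbul')->format('Y-m-d H:i:s.vO');
// 2023-01-10 20:15:25.000+0300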
Versions:
Firebird 3.0 with PDO
Laravel 7
I'm using Eloquent, with the following package for the database connection: https://packagist.org/packages/harrygulliford/laravel-firebird.
Note: on Windows Server it works very well, but on Linux it doesn't (CentOS 7 and Ubuntu Server 20.04 LTS).
I'm using Laravel with Firebird and I have problems with fields of type NUMERIC that return wrong values.
Example:
A query that should return 190,65 actually returns 0.0001.
This is the Eloquent query:
Item::selectRaw("CODIGO,DESCRICAO,PRECOVAREJO,PRECOATACADO,PRECOESPECIAL")->get();
This is the return in JSON:
{ "data": [ { "CODIGO": "123456", "DESCRICAO": "DESCRIPTION EXAMPLE", "PRECOVAREJO": "0.0001", "PRECOATACADO": "0.0001", "PRECOESPECIAL": "0.0001" } ] }
Create Table:
CREATE TABLE ITENS ( PRECOVAREJO NUMERIC(15,3), PRECOATACADO NUMERIC(15,3), PRECOESPECIAL NUMERIC(15,3), CODIGO VARCHAR(8) NOT NULL, DESCRICAO VARCHAR(80) );
@Mark Rotteveel
This is the query built by Laravel's Eloquent:
array:2 [
  0 => array:3 [
    "query" => "select count(*) as "aggregate" from "ITENS" where "ITENS"."DATACANCELAMENTO" is null"
    "bindings" => []
    "time" => 33.17
  ]
  1 => array:3 [
    "query" => """select CODIGO, DESCRICAO, PRECOVAREJO, PRECOATACADO, PRECOESPECIAL from "ITENS" where "ITENS"."DATACANCELAMENTO" is null order by "CODIGO" asc fetch first 10 rows only """
    "bindings" => []
    "time" => 8.39
  ]
]
This is the expected return (shown as an image in the original post).
I solved the problem by updating the PHP version to version 7.4.16.
The problem was resolved in PHP version 7.4.0.
I also created a file in /etc/php.d/30-pdo_firebird.ini with the content: extension = pdo_firebird.
The PHP 7.4.0 changelog (https://www.php.net/ChangeLog-7.php#7.4.0) resolves the bug: https://bugs.php.net/bug.php?id=65690
Thanks to everyone who helped
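A quick sanity check after the upgrade (a rough sketch; it assumes the harrygulliford/laravel-firebird connection is named 'firebird' in config/database.php and reuses the CODIGO from the question):

if (! extension_loaded('pdo_firebird')) {
    throw new RuntimeException('pdo_firebird is not loaded');
}

$row = \DB::connection('firebird')->selectOne(
    'SELECT PRECOVAREJO, PRECOATACADO, PRECOESPECIAL FROM ITENS WHERE CODIGO = ?',
    ['123456']
);

// On PHP >= 7.4.0 these should come back as the real values (e.g. 190.65)
// instead of 0.0001.
dump($row);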
So when I retrieve an object using get(), I get a normal result.
Code:
Contact::select(\DB::raw("CONCAT(COALESCE(`name`,''),' ',COALESCE(`last_name`,'')) AS display_name"),'id','name','last_name')->where('id',2382)->get()
Result:
[
"display_name" => "OFNA • CASA "
"id" => 2382
"name" => "OFNA • CASA"
"last_name" => null
]
But if I do a ->pluck() or ->toArray(), I get this result:
[
"display_name" => b"Ofna €¢ Casa "
"id" => 2382
"name" => "OFNA • CASA"
"last_name" => null
]
For some reason the display_name is encoded incorrectly when converting to an array. Is there a way to fix this, or is it a Laravel issue?
Thanks
My Laravel version is 6.8
I made a workaround, but I'm sure there should be a proper fix for this issue.
This is my workaround: map the get() result and then use pluck() on the collection after the map:
Contact::select('id', 'name', 'last_name')
    ->where('id', 2382)
    ->get()
    ->map(function ($object) {
        return [
            'name' => $object->name.' '.$object->last_name,
            'id'   => $object->id,
        ];
    })
    ->pluck('name', 'id');
It does work, but I'm sure there should be a better way, or maybe this should be reported to Laravel.
Hope someone knows more about this.
Thanks.
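One other thing that might be worth trying (an untested sketch, assuming the b"..." binary string comes from MySQL returning the CONCAT expression without a character set) is to force a character set on the expression at the SQL level with CONVERT ... USING:

Contact::select(
        \DB::raw("CONVERT(CONCAT(COALESCE(`name`,''),' ',COALESCE(`last_name`,'')) USING utf8mb4) AS display_name"),
        'id'
    )
    ->where('id', 2382)
    ->pluck('display_name', 'id');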
Let's say I have this kind of log:
Jun 2 00:00:00 192.168.14.4 date=2016-06-01 time=23:56:05
devname=POPB-FW-01 devid=FG1K2D3I14800220 logid=1059028704 type=utm
subtype=app-ctrl eventtype=app-ctrl-all level=information vd="root"
appid=40568 user="" srcip=10.20.4.35 srcport=52438
srcintf="VRF-PUBLIC" dstip=125.209.230.238 dstport=443 dstintf="OUT"
proto=6 service="HTTPS" sessionid=424666004 applist="Monitor-all"
appcat="Web.Others" app="HTTPS.BROWSER" action=pass
hostname="lcs.naver.com" url="/" msg="Web.Others: HTTPS.BROWSER,"
apprisk=medium
So with the code below, I can extract the timestamp and the IP into future Elasticsearch fields:
filter {
  grok {
    match => { "message" => "%{SYSLOGTIMESTAMP:timestamp} %{IP:client}" }
  }
}
Now, how do I automatically get fields for the rest of the log? Is there a simple way to say:
The thing before the "=" is the field name and the thing after is the value.
So that I can obtain a JSON document for the Elasticsearch index with many fields for each log line:
{
"path" => "C:/Users/yoyo/Documents/yuyu/temp.txt",
"#timestamp" => 2017-11-29T10:50:18.947Z,
"#version" => "1",
"client" => "192.168.14.4",
"timestamp" => "Jun 2 00:00:00",
"date" => "2016-06-01",
"time" => "23:56:05",
"devname" => "POPB-FW-01 ",
"devid" => "FG1K2D3I14800220",
etc,...
}
Thanks in advance
Okay, I am really dumb
It was easy: rather than searching Google for how to match equals signs, I just had to search for key-value matching with Logstash.
So I just have to write:
filter {
  kv {
    # With no options, kv splits on whitespace and uses "=" as the key/value separator
  }
}
And it's done!
Sorry
This is kind of a follow-up to another one of my questions:
JSON parser in logstash ignoring data?
But this time I feel like the problem is clearer than last time and might be easier for someone to answer.
I'm using the JSON parser like this:
json {
  # Parse all the JSON
  source => "MFD_JSON"
  target => "PARSED"
  add_field => { "%{FAMILY_ID}" => "%{[PARSED][platform][family_id][1]}_%{[PARSED][platform][family_id][0]}" }
}
The part of the output for one of the logs in logstash.stdout looks like this:
"FACILITY_NUM" => "1",
"LEVEL_NUM" => "7",
"PROGRAM" => "mfd_status",
"TIMESTAMP" => "2016-01-12T11:00:44.570Z",
MORE FIELDS
There are a whole bunch of fields like the ones above that work when I remove the JSON code. When I add the JSON filter, the whole log just disappears from Elasticsearch/Kibana for some reason. The bit added by the JSON filter is below:
"PARSED" => {
"platform" => {
"boot_mode" => [
[0] 2,
[1] "NAND"
],
"boot_ver" => [
[0] 6,
[1] 1,
[2] 32576,
[3] 0
],
WHOLE LOT OF OTHER VARIABLES
"family_id" => [
[0] 14,
[1] "Hatchetfish"
],
A WHOLE LOT MORE VARIABLES
},
"flash" => [
[0] 131072,
[1] 7634944
],
"can_id" => 1700,
"version" => {
"kernel" => "3.0.35 #2 SMP PREEMPT Thu Aug 20 10:40:42 UTC 2015",
"platform" => "17.0.32576-r1",
"product" => "next",
"app" => "53.1.9",
"boot" => "2013.04 (Aug 20 2015 - 10:33:51)"
}
},
"%{FAMILY_ID}" => "Hatchetfish 14"
Let's pretend the JSON won't work; I'm okay with that for now, but it shouldn't mess with everything else to do with the log in Elasticsearch/Kibana. Also, at the end I've got FAMILY_ID as a field that I added separately using add_field. At the very least that should show up, right?
If someone's seen something like this before, it would be a great help.
Also sorry for spamming almost the same question twice.
SAMPLE LOG LINE:
1452470936.88 1448975468.00 1 7 mfd_status 000E91DCB5A2 load {"up":[38,1.66,0.40,0.13],"mem":[967364,584900,3596,116772],"cpu":[1299,812,1791,3157,480,144],"cpu_dvfs":[996,1589,792,871,396,1320],"cpu_op":[996,50]}
The sample line will be parsed (everything after load is JSON), and in stdout I can see that it is parsed successfully, but I don't see it in Elasticsearch.
This is my output code:
elasticsearch {
  hosts => ["localhost:9200"]
  document_id => "%{fingerprint}"
}
stdout { codec => rubydebug }
A lot of my Logstash filter is in the other question, but I think all the relevant parts are in this question now.
If you want to check it out here's the link: JSON parser in logstash ignoring data?
Answering my own question here. It's not the ideal answer, but if anyone has a similar problem to mine, you can try this out.
json {
  # Parse all the JSON
  source => "MFD_JSON"
  target => "PARSED"
  add_field => { "%{FAMILY_ID}" => "%{[PARSED][platform][family_id][1]}_%{[PARSED][platform][family_id][0]}" }
}
That's how I parsed all the JSON before; I kept at the trial and error hoping I'd get it right sometime. I was about to just use a grok filter to get the bits that I wanted, which is an option if this doesn't work for you. I came back to this later and thought "What if I removed everything after?" for some crazy reason that I've forgotten. In the end I did this:
json {
  source => "MFD_JSON"
  target => "PARSED_JSON"
  # Build FAMILY_ID from the parsed JSON, then drop the big parsed blob again
  add_field => { "FAMILY_ID" => "%{[PARSED_JSON][platform][family_id][1]}_%{[PARSED_JSON][platform][family_id][0]}" }
  remove_field => [ "PARSED_JSON" ]
}
So, extract the field or fields you're interested in, and then remove the field made by the parser at the end. That's what worked for me. I don't know why, but it might work for other people too.