Is there any way I can get only the count of the data in the response payload, without the value array?
I am using OData V4.0 with Web API 2.2.
Currently it returns all the values along with the count when I query something like:
http://odata/People?$count=true
I just need something like "@odata.count": 1, "value": [], or ideally without "value" at all.
Is writing a custom function the only way to do this?
Set the $top to zero and $count to true.
For example:
http://services.odata.org/V4/Northwind/Northwind.svc/Customers?$count=true&$top=0
returns the count but no results:
{
    "@odata.context": "http://services.odata.org/V4/Northwind/Northwind.svc/$metadata#Customers",
    "@odata.count": 91,
    "value": []
}
Count is calculated after applying the $filter, but without factoring in $top and $skip.
For example: http://services.odata.org/V4/Northwind/Northwind.svc/Customers?$count=true&$top=0&$filter=Country%20eq%20%27Germany%27
informs you that there are 11 results where the Country is 'Germany', but without returning any records in the response.
You can also append $count as a path element to just get a raw count, e.g.
https://services.odata.org/V4/Northwind/Northwind.svc/Customers/$count
This will also work with filters, etc., applied:
https://services.odata.org/V4/Northwind/Northwind.svc/Customers/$count?$filter=Country%20eq%20%27Germany%27
This returns a count of the Customers in Germany.
I have a serialized string like this:
$string = '[{"name":"FOO"},{"name":""},{"name":"BAR"}]';
I am trying to process it via Laravel Collection's filter method and eliminate items without a defined "name" property.
$collection = collect(\json_decode($string));
$collection = $collection->filter(function ($v) {
    return !empty($v->name);
});
$string = \json_encode($collection->toArray());
dd($string);
Normally I am expecting something like this:
[{"name":"FOO"},{"name":"BAR"}]
But I'm getting something like this:
{"0":{"name":"FOO"},"2":{"name":"BAR"}}
The funny thing is, if I skip the filtering process or return true every time, I get the string in the desired format. Removing the toArray() call has the same result. I don't want to keep the numeric indices as object keys.
Why this anomaly? And what should I do to get the serialized data in the desired format?
Laravel's filter() preserves the original array keys, so in your case the surviving items keep the indices 0 and 2. Because those numeric keys are no longer sequential, json_encode() serializes the array as a JSON object instead of a JSON array.
To overcome that problem, re-index the collection with values() before encoding:
$string = \json_encode($collection->values());
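For completeness, a minimal sketch of the whole pipeline from the question with values() applied (same input string as above):

$string = '[{"name":"FOO"},{"name":""},{"name":"BAR"}]';

$collection = collect(\json_decode($string))
    ->filter(function ($v) {
        return !empty($v->name); // drop items without a usable "name"
    })
    ->values();                  // re-index so the keys become 0, 1, 2, ...

$string = \json_encode($collection);
dd($string); // [{"name":"FOO"},{"name":"BAR"}]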
So I am trying to get a distinct collection like this:
$data = DB::table('project_1_data')->distinct('Farmer_BankVerificationNumber_Farmer')->get();
But when I do count($data) I get 1600, whereas when I run
$data = DB::table('project_1_data')->distinct('Farmer_BankVerificationNumber_Farmer')->count();
I get 1440. This is weird, as I only want the collection with distinct values of the field 'Farmer_BankVerificationNumber_Farmer'. How do I write the query correctly?
It's because you're asking your query to apply the distinct count in the wrong place.
You need to tell count() which field you would like to count on. The query will then fetch the data, separate out the distinct values, and count how many distinct values of that specific field there are.
Your finished query should look like this:
$data = DB::table('project_1_data')
->distinct()
->count('Farmer_BankVerificationNumber_Farmer');
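If you also want the distinct rows themselves rather than just the number, one option (a sketch, assuming you only need that single column) is to select the column before applying distinct(), so that get() and count() agree:

$data = DB::table('project_1_data')
    ->select('Farmer_BankVerificationNumber_Farmer')
    ->distinct()
    ->get(); // one row per distinct Farmer_BankVerificationNumber_Farmer

count($data); // should now match the 1440 reported by the distinct count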
I am performing an Elasticsearch query using the Java high-level REST client and expect to see records that are either active or do not have a reference id. I'm querying the records by name, and if I hit the index directly with /_search?q=, I see the results I want.
Is my logic correct (pseudo-code):
postFilters.MUST {
    Should {
        MustNotExist {referenceId}
        Must {status = Active}
    }
    Should {
        MustNotExist {referenceId}
        Must {type = Person}
    }
}
What I get are records that are active and have a reference id. But I also want to include records that do not have a referenceId, which is why I have MustNotExist {referenceId}.
For simplicity, the second Should clause can be dropped (for testing), since the first one is not working as expected on its own.
In my case, I had to use a match query instead of a term query because the value I was querying for was not a primitive or a String. For example, in the Must {type = Person} clause, Person was an enum, so an exact term lookup for "Person" was not quite right, whereas a match query allowed it to match.
I'm trying to build a threaded comments system by grouping the parent_ids together and limiting the results using take().
Comment table
$table->increments('id');
$table->text('content');
$table->integer('post_id');
$table->integer('parent_id')->index()->nullable();
$table->string('username');
$table->string('user_image')->default('http://lorempixel.com/60/60/people/');
$table->timestamps();
When I don't use take() to limit the results, the JSON outputs as expected, grouped by parent_id.
Query without take()
$post->comments->groupBy('parent_id');
JSON Output example
Query with take()
$post->comments->take(5)->groupBy('parent_id')
When I use take(), the JSON output no longer includes the parent_id values as keys.
JSON Output example
How do I limit results without having an effect on the JSON output?
edited
Post Controller
public function index(Post $post)
{
    $comments = $post->comments->groupBy('parent_id');
    return $comments;
}
Edit
Why were the other replies deleted from this thread?
Edit 2
Oddly enough, this query behaves differently depending on the limit I set. If I set the limit to a lower number like 5, I get the grouped-by keys outputting normally in the JSON, but with a different limit I don't get those keys. See the JSON outputs above:
$comments = DB::table('comments')
    ->where('post_id', 1)
    ->orderByDesc('created_at')
    ->limit(7)
    ->get();
return collect($comments)->groupBy('parent_id');
This is caused by the way json_encode() handles arrays with numeric keys.
In the case without ->take(5), the array keys are [0, 1, 6, 9]. They are not sequential and so get encoded as a JSON object.
In the case with ->take(5), the array keys are [0, 1]. These are sequential and so get encoded as a JSON array.
You can solve this by using null instead of 0 for parentless comments. Using null to represent non-existent data is also a better solution in general.
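To see the json_encode() behaviour in isolation, here is a quick plain-PHP illustration (the keys are invented, mirroring the parent_id groups described above):

// Non-sequential numeric keys are encoded as a JSON object:
echo json_encode([0 => 'a', 1 => 'b', 6 => 'c']); // {"0":"a","1":"b","6":"c"}

// Sequential numeric keys starting at 0 are encoded as a JSON array:
echo json_encode([0 => 'a', 1 => 'b']); // ["a","b"]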
I have a simple document named Order with the fields id, name, userId and timeScheduled.
What I would like to do is create a view where I can find the document id for documents whose userId is some value and whose timeScheduled is after a given date.
My view:
"by_users_after_time": {
"map": "function(doc) { if (doc.userId && doc.timeScheduled) {
emit([doc.timeScheduled, doc.userId], doc._id); }}"
}
If I do
localhost:5984/orders/_design/Order/_view/by_users_after_time?startKey="[2012-01-01T11:40:52.280Z,f98ba9a518650a6c15c566fc6f00c157]"
I get every result back. Is there a way to access key[1] so I can do something like if doc.userId == key[1], or something along those lines, and simply emit on the time?
This would be the SQL equivalent of:
select id from Order where userId = "f98ba9a518650a6c15c566fc6f00c157" and timeScheduled > "2012-01-01T11:40:52.280Z";
I did quite a few Google searches but I can't seem to find a good tutorial on working with multiple keys. It's also possible that my approach is entirely flawed, so any guidance would be appreciated.
You only need to reverse the key order, because the userId is known:
function (doc) {
    if (doc.userId && doc.timeScheduled) {
        emit([doc.userId, doc.timeScheduled], 1);
    }
}
Then query with:
?startkey=["f98ba9a518650a6c15c566fc6f00c157","2012-01-01T11:40:52.280Z"]
NOTES:
the query parameter is startkey, not startKey;
the value of startkey is an array, not a string, so the double quotes go around the userId and date values, not around the whole array;
I emit 1 as the value, instead of doc._id, to save disk space. Every row of the result already has an id field containing the doc._id, so there's no need to repeat it;
don't forget to set endkey=["f98ba9a518650a6c15c566fc6f00c157",{}], otherwise you get the data of all users > "f98ba9a518650a6c15c566fc6f00c157" (a combined example follows below).
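Putting the notes together, the full request against the view from the question would look something like this (same hypothetical local URL as in the question; in a real request the brackets, quotes and braces would need URL-encoding):
localhost:5984/orders/_design/Order/_view/by_users_after_time?startkey=["f98ba9a518650a6c15c566fc6f00c157","2012-01-01T11:40:52.280Z"]&endkey=["f98ba9a518650a6c15c566fc6f00c157",{}]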
The answer actually came from the CouchDB mailing list:
Essentially, Date.parse() doesn't like the +0000 on the timestamps. By doing a substring and removing the +0000, everything worked.
For the record,
document.write(new Date("2012-02-13T16:18:19.565+0000"));   // Outputs Invalid Date
document.write(Date.parse("2012-02-13T16:18:19.565+0000")); // Outputs NaN
But if you remove the +0000, both lines of code work perfectly.