I have a problem sorting the data in my redux-store.
My store looks like this:
state = {
  ids: [1, 2, 3, 4, /*...*/],
  entities: {
    1: {
      id: 1,
      time: 50
    }
    //...
  }
};
In my reducer function I want to sort the ids, so that my data is always sorted and ready to use without needing to sort it every time:
case LOAD_LIST_SUCCESS:
  return Object.assign({}, state,
    {ids: [...state.ids, ...action.payload.normalized.result].sort()}, // Here is my problem...
    {entities: {...state.entities, ...action.payload.normalized.entities.items}});
How can I sort the ids array by the entities time property?
To sort by the time property, you need to pass a comparator function to sort instead of relying on the default ordering.
I've made a quick example:
[...state.ids, ...action.payload.normalized.result].sort((id1, id2) => {
  const entity1 = state.entities[id1]
  const entity2 = state.entities[id2]
  if (entity1.time < entity2.time) {
    return -1
  }
  if (entity1.time > entity2.time) {
    return 1
  }
  return 0
})
However, this approach will not take into account any normalized entities attached to the current action, because the state's entities map has not been updated yet. You might need to build the new entities first, sort the ids based on those merged entities, and then return the result of the LOAD_LIST_SUCCESS case.
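A minimal sketch of that approach, assuming the same action shape as in your snippet (the payload field names are taken from your question, not from any library):
case LOAD_LIST_SUCCESS: {
  // Merge the new entities first so the comparator can see them.
  const entities = {...state.entities, ...action.payload.normalized.entities.items};
  const ids = [...state.ids, ...action.payload.normalized.result]
    .sort((id1, id2) => entities[id1].time - entities[id2].time);
  return Object.assign({}, state, {ids: ids, entities: entities});
}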
I have this query that works
async find(ctx) {
  let { _start, _limit } = ctx.request.query;
  console.log(ctx.request.query)
  _limit ? 0 : (_limit = 10);
  const entities = await strapi.services["course-series"].find({});
  return entities.map((entity) => {
    // Do I sort them here or in the url query (and how)
    entity.courses = entity.courses.slice(_start, _limit);
    return sanitizeEntity(entity, { model: strapi.models["course-series"] });
  });
}
The idea is that I can load 10 courses from each series at first and then get the next 10...
I just realized that the first 10 I am getting are not the most recent ones.
As I asked in the comment: // Do I sort them here or in the url query (and how)
What version of Strapi are you using?
What does this line do: strapi.services["course-series"].find({})? How did you build this find method in the service? What does it do? Does it accept params?
Personally, I'd do something like this (assuming you're working with Strapi version >= 4):
const entities = await strapi.entityService.findMany('api::course-series.course-series', {
  fields: [/* list the course-series fields you want to populate */],
  populate: {
    courses: {
      fields: [/* list the course fields you want to populate */],
      sort: 'createdAt:desc', // You can use id, publishedAt or updatedAt here, depending on your sorting preferences
      offset: _start,
      limit: _limit // I must admit I haven't tested `offset` and `limit` on the populated related field
    }
  }
})
// ...the rest of your code, if needed
Read more about Entity Service API here.
With your approach, you always retrieve the full list of courses for each course-series first, and then run costly operations on it: mapping (the lesser of two evils) and, above all, sorting.
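For completeness, a hedged sketch of how that call could slot into your controller, assuming Strapi v4, keeping the populate parameters from above, and leaving out sanitization (which differs between Strapi versions):
async find(ctx) {
  // Same defaults as the original controller: start at 0, return 10 courses.
  const { _start = 0, _limit = 10 } = ctx.request.query;
  const entities = await strapi.entityService.findMany('api::course-series.course-series', {
    populate: {
      courses: {
        sort: 'createdAt:desc',
        offset: Number(_start), // untested on a populated relation, as noted above
        limit: Number(_limit),
      },
    },
  });
  return entities;
}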
I have a problem.
I want to create a query to update some fields on multiple models.
It should look like this:
mutation {
  updateInternalOrder(input: {
    state: {
      connect: 1
    }
    id_internal_orders: [1, 2]  # <= here
  }) {
    id_internal_orders
    qty
    state {
      id_internal_orders_states
      name
    }
  }
}
In this query I would like to assign (update) id_internal_orders_states (in state) on the internal orders with id 1 and 2.
How can I do that?
Here is my schema (lighthouse-php), which works only if I provide a single id, not an array:
extend type Mutation {
  updateInternalOrder(input: UpdateInternalOrders! @spread): InternalOrders @update
}

input UpdateInternalOrders {
  id_internal_orders: Int!
  state: InternalOrdersStatesHasOne
  qty: Int
  id_supplier: Int
}

input InternalOrdersStatesHasOne {
  connect: Int
}
Instead of this
input UpdateInternalOrders {
  id_internal_orders: Int!
  state: InternalOrdersStatesHasOne
  qty: Int
  id_supplier: Int
}
Your schema should look like this
input UpdateInternalOrders {
  id_internal_orders: [Int]!
  state: InternalOrdersStatesHasOne
  qty: Int
  id_supplier: Int
}
This way id_internal_orders will be defined as an array.
Solution for the error: Argument 2 passed to Nuwave\Lighthouse\Execution\Arguments\ArgPartitioner::relationMethods() must be an instance of Illuminate\Database\Eloquent\Model, instance of Illuminate\Database\Eloquent\Collection given
You are getting this error because you are probably working with an ORM: the data passed to the mutation is a Collection, most likely because you are manipulating models generated by your ORM. GraphQL expects an array, not a Collection.
You can either convert the Collection to an array, but this is not recommended: if you have a collection of objects that themselves contain collections, you will have to convert the parent collection and every nested collection inside each of its objects, which gets complicated very fast.
Or you can find a way not to manipulate your models in your front end and work with data transfer objects instead. But I can't really help you there, since I don't know where the data comes from.
I have a group whose elements, after reduction, look like this pseudocode:
{
  key: "somevalue",
  value: {
    sum: the_total,
    names: {
      a: a_number,
      b: b_number,
      c: c_number
    }
  }
}
In my dc-js geoChoropleth graph the valueAccessor is (d) => d.value.sum
In my title, I would like to use the names component of my reduction. But when I use .title((d) => {...}), I can only access the key and the value returned by the valueAccessor function, instead of the original record.
Is that intentional?
This is a peculiarity of the geoChoropleth chart.
Most charts bind the group data directly to chart elements, but since the geoChoropleth chart has two sources of data, the map and the group, it binds the map data and hides the group data.
Here is the direct culprit:
_renderTitles (regionG, layerIndex, data) {
    if (this.renderTitle()) {
        regionG.selectAll('title').text(d => {
            const key = this._getKey(layerIndex, d);
            const value = data[key];
            return this.title()({key: key, value: value});
        });
    }
}
It is creating key/value objects itself, and the value, as you deduced, comes from the valueAccessor:
_generateLayeredData () {
    const data = {};
    const groupAll = this.data();
    for (let i = 0; i < groupAll.length; ++i) {
        data[this.keyAccessor()(groupAll[i])] = this.valueAccessor()(groupAll[i]);
    }
    return data;
}
Sorry this is not a complete answer, but I would suggest adding a pretransition handler that replaces the titles, or alternatively, using the key passed to the title accessor to look up the data you need.
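A minimal sketch of the second option, assuming your crossfilter group is available as a variable named group (a hypothetical name) and that its keys match the chart's keys:
chart.title(d => {
  // Look the original record up in the group, because the chart only hands us
  // {key, value-from-valueAccessor} here.
  const record = group.all().find(g => g.key === d.key);
  if (!record) {
    return `${d.key}: no data`;
  }
  const { sum, names } = record.value;
  return `${d.key}: sum ${sum}, a ${names.a}, b ${names.b}, c ${names.c}`;
});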
As I noted in the issue linked above, I think this is a pretty serious design bug.
I have a MutableLiveData<List<Book>> in my ViewModel. How can I sort it by book name, id, and so on?
You can use the Transformations.map function to transform your LiveData:
val unsortedBooks: LiveData<List<Book>> = //...
val sortedBooks: LiveData<List<Book>> = Transformations.map(unsortedBooks) { books ->
    // sort your `books` here and return the sorted list, e.g. by name:
    books.sortedBy { it.name }
}
I'm running into some strange behavior when using Parse.Query.find() and am hoping someone can show me my errors. My scenario is that I'm including an array field in my query and sometimes, at a random record, some of the included array elements are null. I've verified that the array elements are indeed NOT null. Additionally, if I use each() instead of find(), I don't see this problem. Also, if I reduce the # of records I read at a time (CHUNK_SIZE) from Parse's 1000 maximum to 500, things work, so I'm not sure what's going on.
Here's the code I'm using.
/**
 * Iterates over a query using find(), which is very fast compared to each().
 * Works up to 10,000 records maximum.
 * @param query Parse.Query to iterate.
 * @param callback Called for each batch of records.
 * @return Promise, fulfilled when iteration is done.
 */
function findAll(query, callback) {
  var queryCount = 0;
  var startTime = new Date();
  var CHUNK_SIZE = 1000;
  query.limit(CHUNK_SIZE);
  var queryFind = function() {
    query.skip(CHUNK_SIZE * queryCount);
    queryCount++;
    return query.find().then(function(rows) {
      callback(rows);
      if (rows.length == CHUNK_SIZE) {
        return queryFind();
      }
    });
  };
  return queryFind();
}
// Example of how to use findAll.
function fetchTree() {
  var records = 0;
  var query = new Parse.Query('TreeNode');
  query.include('scores');
  return findAll(query, function(nodes) {
    nodes.forEach(function(node) {
      records++;
      node.get('scores').forEach(function(score, scoreIndex) {
        if (!score) {
          throw "Null score at row " + node.id + "/" + records + " index " + scoreIndex;
        }
      });
    });
  }, true);
}
fetchTree();
Thanks in advance.
Parse limits rows returned per query to a default of 50 with a max of 1000.
This limit includes related records, so if you fetch 10 records that each have on average 50 pointers in their array and you include() them, you are already using over 500 of the 1000-object maximum for your query.
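If that is what is happening here, one hedged workaround is to size the chunk so that rows plus their included pointers stay under that ceiling; the 50-pointers-per-row figure below is an assumption you would measure for your own data:
// Assumption: each TreeNode includes roughly 50 `scores` pointers on average.
var POINTERS_PER_ROW = 50;
var MAX_OBJECTS_PER_QUERY = 1000; // Parse's per-query cap
// Each returned row "costs" itself plus its included pointers.
var CHUNK_SIZE = Math.floor(MAX_OBJECTS_PER_QUERY / (1 + POINTERS_PER_ROW)); // ~19 rows per find()
query.limit(CHUNK_SIZE);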