Why is the array empty after the observer? - android-room

I am trying to get the values from the Room database:
var cars = mutableListOf<String>()
carsDb.getAll.observe(viewLifecycleOwner) {
    cars = it.toMutableList()
    // cars.size = 5
}
// cars.size = 0
Why can't I get the values outside the observer?
I am facing this issue every time.

This looks like a synchronisation problem: carsDb.getAll.observe() is asynchronous, so the cars.size = 5 you see inside the observer is resolved later than the cars.size = 0 you read right after registering the observer (before the query has completed).
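In other words, the data has to be consumed inside the observer callback (or exposed through a ViewModel/LiveData that the UI observes). A minimal sketch, assuming getAll returns LiveData<List<String>> and updateUi is a hypothetical stand-in for whatever uses the data:
carsDb.getAll.observe(viewLifecycleOwner) { list ->
    val cars = list.toMutableList()
    // The query has completed by the time this callback runs, so cars.size == 5 here.
    updateUi(cars) // hypothetical helper standing in for adapter/UI code
}
// Anything placed here runs before the first onChanged callback,
// so it still sees the old (empty) list.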

How do Apollo's paginated "read" and "merge" work?

I was reading through the docs to learn pagination approaches for Apollo. This is the simple example where they explain the paginated read function:
https://www.apollographql.com/docs/react/pagination/core-api#paginated-read-functions
Here is the relevant code snippet:
const cache = new InMemoryCache({
  typePolicies: {
    Query: {
      fields: {
        feed: {
          read(existing, { args: { offset, limit }}) {
            // A read function should always return undefined if existing is
            // undefined. Returning undefined signals that the field is
            // missing from the cache, which instructs Apollo Client to
            // fetch its value from your GraphQL server.
            return existing && existing.slice(offset, offset + limit);
          },
          // The keyArgs list and merge function are the same as above.
          keyArgs: [],
          merge(existing, incoming, { args: { offset = 0 }}) {
            const merged = existing ? existing.slice(0) : [];
            for (let i = 0; i < incoming.length; ++i) {
              merged[offset + i] = incoming[i];
            }
            return merged;
          },
        },
      },
    },
  },
});
I have one major question around this snippet and more snippets from the docs that have the same "flaw" in my eyes, but I feel like I'm missing some piece.
Suppose I run a first query with offset=0 and limit=10. The server will return 10 results for this query, and they will be stored in the cache after passing through the merge function.
Afterwards, I run the query with offset=5 and limit=10. Based on the approach described in the docs and the above code snippet, my understanding is that I will get only items 5 through 10 instead of items 5 to 15, because Apollo will see that existing is present in read (holding the initial 10 items) and will slice the 5 available items for me.
My question is - what am I missing? How will Apollo know to fetch new data from the server? How will new data arrive into cache after initial query? Keep in mind keyArgs is set to [] so the results will always be merged into a single item in the cache.
Apollo will not slice anything automatically. You have to define a merge function that keeps the data in the correct order in the cache. One approach would be to have an array with empty slots for data not yet fetched, and place incoming data in their respective index. For instance if you fetch items 30-40 out of a total of 100 your array would have 30 empty slots then your items then 60 empty slots. If you subsequently fetch items 70-80 those will be placed in their respective indexes and so on.
Your read function is where the decision on whether a network request is necessary is made. If you find all the data in existing, you return it and no request to the server is made. If any items are missing, you need to return undefined, which triggers a network request; your merge function then runs once the data is fetched, and finally your read function runs again, only this time the data is in the cache and it can return it.
This approach is for the cache-first caching policy which is the default.
The logic for returning undefined from your read function has to be implemented by you. There is no Apollo magic under the hood.
If you use the cache-and-network fetch policy, your read does not need to return undefined when data is missing, because a network request is made regardless.
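For the cache-first case described above, here is a rough sketch (not code from the Apollo docs) of what such a read function could look like, keeping the feed field policy from the question:
feed: {
  keyArgs: [],
  read(existing, { args: { offset = 0, limit } }) {
    // Nothing cached yet for this field: return undefined so Apollo Client
    // fetches from the network.
    if (!existing) return undefined;
    // Check every slot in the requested range; merge may have left holes
    // (e.g. items 10-14 requested after an initial fetch of 0-9).
    for (let i = offset; i < offset + limit; ++i) {
      if (existing[i] === undefined) {
        // Incomplete page: trigger a network request. merge will fill the
        // missing slots and read will run again with complete data.
        return undefined;
      }
    }
    return existing.slice(offset, offset + limit);
  },
  // keep the same merge function as in the snippet above
},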

Model's scope function is showing the wrong value, but only on the production server

I'm facing a weird problem which only happens on the production server (it's 100% normal in my local environment). I'm using Laravel 8 and have a model named Student. Inside the model, I created several scope methods like this:
public function scopeWhereAlreadySubmitData($query)
{
    return $query->where('status_data_id', '!=', 1);
}

public function scopeWhereAlreadySubmitDocument($query)
{
    return $query->where('status_document_id', '!=', 1);
}
Then I use the methods above in an AJAX controller, chaining them with the count method. Something like:
public function getAggregates()
{
    $student = Student::with(['something', 'somethingElse']);
    $count_student_already_submit_data = $student->whereAlreadySubmitData()->count();
    $count_student_already_submit_document = $student->whereAlreadySubmitDocument()->count();
    return compact('count_student_already_submit_data', 'count_student_already_submit_document');
}
Here is where the weird thing happens: while the controller above produces the correct values in my local environment, in production count_student_already_submit_data has the correct value but count_student_already_submit_document is zero. Looking at the database directly, count_student_already_submit_document should be greater than zero, as there are many records with status_document_id not equal to 1.
I've also tried using the scopeWhereAlreadySubmitDocument method in tinker. Both locally and in production, it shows the correct value, not zero.
Another thing to note is that the Student model actually has 4 scope methods like the above, not just 2. Three of them work correctly and only one does not. Plus, there's another controller using all of the scope methods above, and all of them show the correct value.
Have you ever faced such a thing? What could be the problem behind this? Your input is appreciated.
Turns out this is because the query builder is affected by every call chained onto it: the first count() adds its scope's WHERE clause to the shared $student builder, so the second count() runs with both constraints. The "local vs production" weirdness I mentioned likely just goes unnoticed locally because my local database only has a few records. I should have used clone() when chaining off the shared builder. See the topic below for more information.
Laravel query builder - re-use query with amended where statement
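For reference, a minimal sketch of how the controller above could be fixed with clone() (same names as in the question):
public function getAggregates()
{
    $student = Student::with(['something', 'somethingElse']);

    // clone() gives each count() its own builder, so the WHERE clause added
    // by the first scope does not leak into the second query.
    $count_student_already_submit_data = (clone $student)->whereAlreadySubmitData()->count();
    $count_student_already_submit_document = (clone $student)->whereAlreadySubmitDocument()->count();

    return compact('count_student_already_submit_data', 'count_student_already_submit_document');
}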

How to transfer variables between 2 function type code templates

My channel is receiving HL7 messages and I have 2 transformers in my channel. I am capturing all the data from the HL7 message in one transformer like:
var vACCNo = msg['PID']['PID.17']['PID.17.1'].toString();
var vSTATE = msg['PID']['PID.11']['PID.11.4'].toString();
....
In the second transformer I am pushing all this data into an external DB with an insert statement, like insert into table x values (vACCNo, vSTATE....).
In the above design, without doing anything special, the data captured in the first transformer is available in the second, and it works.
Now I am planning to get rid of these 2 transformers and move them into code templates, where I plan to create a separate function for each of these transformers.
But how can I pass the variables captured in the first function to the second one?
Thanks
When you say 2 transformers, I assume you mean two steps in the same transformer? Different transformer steps compile into the same javascript function, so they share the same variable context/scope. To actually pass values to a different transformer (like from your source transformer to a destination transformer) normally you would use the channelMap for this.
In your (presumed) situation, you can add all of your variables to an object that you return from the first function. Pass the object to the second function.
Code Templates
function getValues(msg) {
    var fieldWithComplicatedAssignment = '';
    var result = {
        vACCNo: msg['PID']['PID.17']['PID.17.1'].toString(),
        vSTATE: msg['PID']['PID.11']['PID.11.4'].toString(),
        fieldWithComplicatedAssignment: fieldWithComplicatedAssignment
    };
    if (optionalCondition) {
        result.optionalField = '';
    }
    return result;
}

function insertIntoDB(obj) {
    // insert into table x values (obj.vACCNo, obj.vSTATE....)
    // return a result status indicating success or failure (or
    // just throw an error from this function)
}
Transformer steps
var obj = getValues(msg);
var result = insertIntoDB(obj);
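If the two functions do end up being called from different transformers (for example source vs. destination), a rough sketch of passing the object through the channel map instead (the key name is arbitrary):
// In the source transformer step:
var obj = getValues(msg);
channelMap.put('parsedFields', obj);

// In the destination transformer step:
var stored = channelMap.get('parsedFields');
var result = insertIntoDB(stored);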

Entity Framework Core: Database operation expected to affect 1 row(s) but actually affected 0 row(s) [duplicate]

This question already has answers here:
Unable to edit db entries using EFCore, EntityState.Modified: "Database operation expected to affect 1 row(s) but actually affected 0 row(s)."
(17 answers)
Closed 2 years ago.
I am using SQL Server on Linux with EF Core (2.1.0-preview1-final).
I am trying to update some data coming from a web Api service (PUT) request. The data (ticket) is passed correctly and gets deserialised into an object (ticket).
When I try to update, I use the following code:
public Ticket UpdateTicket(Ticket ticket)
{
    using (var ctx = new SupportTicketContext(_connectionString))
    {
        ctx.Entry(ticket).State = EntityState.Modified;
        ctx.SaveChanges(); // <== **BLOWS UP HERE**
        var result = ctx.Tickets
            .First(t => t.TicketId == ticket.TicketId);
        return result;
    }
}
The code throws the following error:
Database operation expected to affect 1 row(s) but actually affected 0 row(s). Data may have been modified or deleted since entities were loaded
I am able to Insert and fetch from the database, no problem.
If I restart Visual Studio, the error usually occurs the second time I try to update ANY ticket (i.e. any other ticketId - it seems to be on the second and subsequent requests).
The updates are unpredictably successful! Sometimes I can update another ticket and it goes through (even on the 3rd or subsequent request).
I have tried a number of modifications to the code, including
ctx.Attach(ticket);
but this does not help.
How do I get it to update the database successfully?
Any ideas on how to debug this? Logging seems to be difficult to set up.
Any ideas greatly appreciated.
There were two errors. First, I tried to call the Update() method without specifying a DbSet:
_applicationDbContext.Update(user);
Correct way:
_applicationDbContext.Users.Update(user);
The second error was trying to update a model without first pulling it from the database:
public bool UpdateUser(IdentityUser model)
{
    _applicationDbContext.Users.Update(model);
    _applicationDbContext.SaveChanges();
    return true;
}
Nope.
I first needed to retrieve it from the database, then update it:
public bool UpdateUser(IdentityUser model)
{
    var user = _applicationDbContext.Users.FirstOrDefault(u => u.Id == model.Id);
    user.PhoneNumberConfirmed = model.PhoneNumberConfirmed;
    _applicationDbContext.Users.Update(user);
    _applicationDbContext.SaveChanges();
    return true;
}
Had such a problem too, but the reason was a trigger I was using.
I changed the trigger type from "INSTEAD OF INSERT" to "AFTER INSERT": instead of modifying and inserting the data in the trigger, I let my command insert the data into the table and then updated it using the trigger.
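For illustration, a rough T-SQL sketch of that change (table and column names are made up):
-- Before (problematic): an INSTEAD OF trigger replaces the INSERT itself,
-- so EF Core can end up seeing 0 rows affected.
-- CREATE TRIGGER trg_Tickets_Insert ON Tickets INSTEAD OF INSERT AS ...

-- After: let the INSERT run normally, then adjust the row in the trigger.
CREATE TRIGGER trg_Tickets_Insert
ON Tickets
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    UPDATE t
    SET t.ProcessedAt = GETDATE()
    FROM Tickets AS t
    INNER JOIN inserted AS i ON i.TicketId = t.TicketId;
END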

Salesforce bulkify code - is this not already bulkified?

We have a class in Salesforce that is called from a trigger. When using Apex Data Loader this trigger throws an error oppafterupdate: System.LimitException: Too many SOQL queries: 101
I commented out the line of code that calls the following static method in a class we wrote, and there are no more errors with respect to the governor limit. So I can verify the method below is the culprit.
I'm new to this, but I know that Apex code should be bulkified, and DML (and SOQL) statements should not be used inside of loops. What you want to do is put objects in a collection and use DML statements against the collection.
So I modified the method below; I declared a list, I added Task objects to the list, and I ran a DML statement on the list. I commented out the update statement inside the loop.
//close all tasks linked to an opty or lead
public static void closeTasks(string sId) {
    List<Task> TasksToUpdate = new List<Task>{}; //added this
    List<Task> t = [SELECT Id, Status, WhatId FROM Task WHERE WhatId = :sId]; //opty
    if (t.isEmpty() == false) {
        for (Task c : t) {
            c.Status = 'Completed';
            TasksToUpdate.add(c); //added this
            //update c;
        }
    }
    update TasksToUpdate; //Added this
}
Why am I still getting the above error when I run the code in our sandbox? I thought I took care of this issue but apparently there is something else here? Please help.. I need to be pointed in the right direction.
Thanks in advance for your assistance
You have "fixed" the update part but the code still fails on the too many SELECTs.
We would need to see your trigger's code but it seems to me you're calling your function in a loop in that trigger. So if say 200 Opportunities are updated, your function is called 200 times and in the function's body you have 1 SOQL... Call it more than 100 times and boom, headshot.
Try to modify the function to pass a collection of Ids:
Set<Id> ids = Trigger.newMap.keySet();
betterCloseTasks(ids);
And the improved function could look like this:
public static void betterCloseTasks(Set<Id> ids) {
    List<Task> tasksToClose = [SELECT Id
                               FROM Task
                               WHERE WhatId IN :ids AND Status != 'Completed'];
    if (!tasksToClose.isEmpty()) {
        for (Task t : tasksToClose) {
            t.Status = 'Completed';
        }
        update tasksToClose;
    }
}
Now you have 1 SOQL and 1 update operation no matter whether you update 1 or hundreds of opportunities. It can still fail on some other limits like max 10000 updated records in one transaction but that's a battle for another day ;)
