We have a class in Salesforce that is called from a trigger. When using the Apex Data Loader, this trigger throws an error: oppafterupdate: System.LimitException: Too many SOQL queries: 101
I commented out the line of code that calls the following static method in a class we wrote, and the governor limit errors stopped. So I can verify that the method below is the culprit.
I'm new to this, but I know that Apex code should be bulkified and that DML (and SOQL) statements should not be used inside loops. What you want to do is put the objects in a collection and run DML statements against the collection.
So I modified the method below: I declared a list, added the Task objects to the list, and ran a single DML statement on the list, commenting out the update statement inside the loop.
//close all tasks linked to an opty or lead
public static void closeTasks(String sId) {
    List<Task> TasksToUpdate = new List<Task>{}; //added this
    List<Task> t = [SELECT Id, Status, WhatId FROM Task WHERE WhatId = :sId]; //opty
    if (t.isEmpty() == false) {
        for (Task c : t) {
            c.Status = 'Completed';
            TasksToUpdate.add(c); //added this
            //update c;
        }
    }
    update TasksToUpdate; //Added this
}
Why am I still getting the above error when I run the code in our sandbox? I thought I took care of this issue, but apparently something else is going on. Please help; I need to be pointed in the right direction.
Thanks in advance for your assistance
You have "fixed" the update part but the code still fails on the too many SELECTs.
We would need to see your trigger's code but it seems to me you're calling your function in a loop in that trigger. So if say 200 Opportunities are updated, your function is called 200 times and in the function's body you have 1 SOQL... Call it more than 100 times and boom, headshot.
Try to modify the function to pass a collection of Ids:
Set<Id> ids = Trigger.newMap.keySet();
betterCloseTasks(ids);
And the improved function could look like this:
public static void betterCloseTasks(Set<Id> ids){
    List<Task> tasksToClose = [SELECT Id
                               FROM Task
                               WHERE WhatId IN :ids AND Status != 'Completed'];
    if(!tasksToClose.isEmpty()){
        for(Task t : tasksToClose){
            t.Status = 'Completed';
        }
        update tasksToClose;
    }
}
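For completeness, the trigger body that feeds it can then be as small as this (a sketch: the trigger and handler class names here are placeholders for whatever yours are called):
trigger OppAfterUpdate on Opportunity (after update) {
    // One handler call per trigger invocation, no matter how many records fired it
    OppTaskHandler.betterCloseTasks(Trigger.newMap.keySet());
}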
Now you have 1 SOQL query and 1 DML operation no matter whether you update one or hundreds of opportunities. It can still fail on some other limit, like the maximum of 10,000 records updated in one transaction, but that's a battle for another day ;)
I am using SQL Server on Linux with EF Core (2.1.0-preview1-final).
I am trying to update some data coming from a Web API (PUT) request. The data (ticket) is passed correctly and gets deserialised into an object (ticket).
When I try to update, I use the following code:
public Ticket UpdateTicket(Ticket ticket)
{
    using (var ctx = new SupportTicketContext(_connectionString))
    {
        ctx.Entry(ticket).State = EntityState.Modified;
        ctx.SaveChanges(); // <== **BLOWS UP HERE**
        var result = ctx.Tickets
                        .First(t => t.TicketId == ticket.TicketId);
        return result;
    }
}
The code throws the following error:
Database operation expected to affect 1 row(s) but actually affected 0 row(s). Data may have been modified or deleted since entities were loaded
I am able to Insert and fetch from the database, no problem.
If I restart Visual Studio, the error usually occurs the second time I try to update ANY ticket (i.e. any other ticketId; it seems to be on the second and subsequent requests).
The updates are unpredictably successful! Sometimes I can update another ticket and it goes through (even on the 3rd or subsequent request).
I have tried a number of modifications to the code, including
ctx.Attach(ticket);
but this does not help.
How do I get it to update the database successfully?
Any ideas on how to debug this? Logging seems to be difficult to set up.
Any ideas greatly appreciated.
There were two errors. First, I tried to call the Update() method without specifying a DbSet:
_applicationDbContext.Update(user);
Correct way:
_applicationDbContext.Users.Update(user);
The second error was trying to update a model without first pulling it from the database:
public bool UpdateUser(IdentityUser model)
{
    _applicationDbContext.Users.Update(model);
    _applicationDbContext.SaveChanges();
    return true;
}
Nope.
I first needed to retrieve it from the database, then update it:
public bool UpdateUser(IdentityUser model)
{
    var user = _applicationDbContext.Users.FirstOrDefault(u => u.Id == model.Id);
    user.PhoneNumberConfirmed = model.PhoneNumberConfirmed;
    _applicationDbContext.Users.Update(user);
    _applicationDbContext.SaveChanges();
    return true;
}
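As a side note, once the entity has been materialized by that query it is already change-tracked, so the explicit Update() call shouldn't even be necessary; a minimal sketch of the same method relying on change tracking alone (the null guard is my addition):
public bool UpdateUser(IdentityUser model)
{
    // The query returns a tracked entity
    var user = _applicationDbContext.Users.FirstOrDefault(u => u.Id == model.Id);
    if (user == null)
        return false; // no such row; nothing to update

    user.PhoneNumberConfirmed = model.PhoneNumberConfirmed;

    // SaveChanges() detects the modified property via the change tracker
    _applicationDbContext.SaveChanges();
    return true;
}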
I had this problem too, but the cause turned out to be a trigger I was using.
I changed the trigger type from "INSTEAD OF INSERT" to "AFTER INSERT": instead of modifying and inserting the data in the trigger, I let the command insert the data into the table and then updated it from the trigger.
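For illustration, a rough sketch of the shape of that change (the table, column, and trigger names are invented): an INSTEAD OF INSERT trigger replaces the insert and can leave the rows-affected count at 0, which appears to be what trips EF's check, while an AFTER INSERT trigger lets the insert report its row count and only then adjusts the data:
CREATE TRIGGER trg_Tickets_AfterInsert
ON Tickets
AFTER INSERT
AS
BEGIN
    -- The original INSERT has already happened; just patch the new rows
    UPDATE t
    SET t.LastModified = GETDATE()
    FROM Tickets t
    JOIN inserted i ON i.TicketId = t.TicketId;
END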
I'm using EF 5 with Oracle database.
I'm doing a select count on a table with a specific parameter. When I use EF, the query returns the value 31, as expected, but the result takes about 10 seconds to come back.
using (var serv = new Aperam.SIP.PXP.Negocio.Modelos.SIP_PA())
{
    var teste = (from ens in serv.PA_ENSAIOS_UM
                 where ens.COD_IDENT_UNMET == "FBLDY3840"
                 select ens).Count();
}
If I execute the simple query below, the result is the same (31), but it comes back in 500 milliseconds.
SELECT count(*)
FROM PA_ENSAIOS_UM
WHERE COD_IDENT_UNMET = 'FBLDY3840'
Is there a way to improve the performance when I'm using EF?
Note: There are 13,000,000 rows in this table.
Here are some things you can try:
Capture the query that is being generated and see if it is the same as the one you are using. Details can be found here, but essentially, you will instantiate your DbContext (let's call it "_context") and then set the Database.Log property to be the logging method. It's fine if this method doesn't actually do anything--you can just set a breakpoint in there and see what's going on.
So, as an example: define a logging function (I have a static class called "Logging" which uses nLog to write to files)
public static void LogQuery(string queryData)
{
    if (string.IsNullOrWhiteSpace(queryData))
        return;

    var message = string.Format("{0}{1}",
        queryData.Trim().Contains(Environment.NewLine) ? Environment.NewLine : "",
        queryData);

    _sqlLogger.Info(message);
    _genLogger.Trace($"EntityFW query (len {message.Length} chars)");
}
Then, when you create your context, point Database.Log at LogQuery:
_context.Database.Log = Logging.LogQuery;
When you do your tests, remember that the first run is often the slowest because the server has to actually do the work, while subsequent runs can use cached data. Try running your tests 2-3 times back to back and see if they don't start to run in about the same time.
I don't know if it generates the same query or not, but try this other form (which should be functionally equivalent, but may perform better):
var teste = serv.PA_ENSAIOS_UM.Count(ens=>ens.COD_IDENT_UNMET == "FBLDY3840");
I'm wondering if the version you have pulls data from the DB and THEN counts it. If so, this other syntax may leave all the work to be done at the server, where it belongs. Not sure, though, esp. since I haven't ever used EF with Oracle and I don't know if it behaves the same as SQL or not.
Why isn't the exception triggered? Linq's "Any()" is not considering the new entries?
MyContext db = new MyContext();
foreach (string email in new[] { "asdf@gmail.com", "asdf@gmail.com" })
{
    Person person = new Person();
    person.Email = email;

    if (db.Persons.Any(p => p.Email.Equals(email)))
    {
        throw new Exception("Email already used!");
    }

    db.Persons.Add(person);
}
db.SaveChanges();
Shouldn't the exception be triggered on the second iteration?
The previous code is adapted for the question, but the real scenario is the following:
I receive an Excel file of persons and I iterate over it, adding every row as a person to db.Persons after checking that its email isn't already used in the db. The problem is when there are repeated emails in the worksheet itself (two rows with the same email).
Yes - queries (by design) are only computed against the data source. If you want to query in-memory items you can also query the Local store:
if (db.Persons.Any(p => p.Email.Equals(email)) ||
    db.Persons.Local.Any(p => p.Email.Equals(email)))
However, since YOU are in control of what's added to the store, wouldn't it make sense to check for duplicates in your own code instead of in EF? Or is this just a contrived example?
Also, throwing an exception for an already-existing item seems like poor design: exceptions can be expensive, and if the client does not know to catch them (and, in this case, compare the exception message) they can cause the entire program to terminate unexpectedly.
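Along those lines, a duplicate check that runs before EF is involved could look like this (a sketch reusing the question's MyContext/Person types):
var db = new MyContext();
var seen = new HashSet<string>(StringComparer.OrdinalIgnoreCase);

foreach (string email in new[] { "asdf@gmail.com", "asdf@gmail.com" })
{
    // Catches duplicates within the incoming batch itself...
    if (!seen.Add(email))
        throw new Exception("Email duplicated in the batch: " + email);

    // ...while the database check still covers previously saved rows
    if (db.Persons.Any(p => p.Email.Equals(email)))
        throw new Exception("Email already used!");

    db.Persons.Add(new Person { Email = email });
}
db.SaveChanges();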
A call to db.Persons will always trigger a database query, but those new Persons are not yet persisted to the database.
I imagine if you look at the data in debug, you'll see that the new person isn't there on the second iteration. If you were to set MyContext db = new MyContext() again, it would be, but you wouldn't do that in a real situation.
What is the actual use case you need to solve? This example doesn't seem like it would happen in a real situation.
If you're comparing against the db, your code should work. If you need to prevent dups being entered, it should happen elsewhere - on the client or checking the C# collection before you start writing it to the db.
I'm using NHibernate 3.2 and I have a repository method that looks like:
public IEnumerable<MyModel> GetActiveMyModel()
{
    return from m in Session.Query<MyModel>()
           where m.Active == true
           select m;
}
Which works as expected. However, sometimes when I use this method I want to filter it further:
var models = MyRepository.GetActiveMyModel();
var filtered = from m in models
               where m.ID < 100
               select new { m.Name };
Which produces the same SQL as the first one, so the second filter and select must be done after the fact. I thought the whole point of LINQ was that it builds an expression tree that is only unravelled when needed, so the correct SQL for the job can be created, saving database round trips.
If not, it means all of my repository methods have to return exactly what is needed and I can't make use of LINQ further down the chain without taking a penalty.
Have I got this wrong?
Updated
In response to the comment below: I omitted the line where I iterate over the results, which causes the initial SQL to be run (WHERE Active = 1) and the second filter (ID < 100) is obviously done in .NET.
Also, If I replace the second chunk of code with
var models = MyRepository.GetActiveMyModel();
var filtered = from m in models
               where m.Items.Count > 0
               select new { m.Name };
It generates the initial SQL to retrieve the active records and then runs a separate SQL statement for each record to find out how many Items it has, rather than writing something like I'd expect:
SELECT Name
FROM MyModel m
WHERE Active = 1
AND (SELECT COUNT(*) FROM Items WHERE MyModelID = m.ID) > 0
You are returning IEnumerable<MyModel> from the method, which will cause in-memory evaluation from that point on, even if the underlying sequence is IQueryable<MyModel>.
If you want to allow code after GetActiveMyModel to add to the SQL query, return IQueryable<MyModel> instead.
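A sketch of the reshaped repository method under that suggestion:
public IQueryable<MyModel> GetActiveMyModel()
{
    // Still deferred: nothing hits the database until the caller enumerates
    return Session.Query<MyModel>().Where(m => m.Active);
}
The later where m.ID < 100 and the select of m.Name then compose into the same expression tree, and NHibernate can emit a single SQL statement when the query finally executes.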
You're running IEnumerable's extension method Where instead of IQueryable's. It will still evaluate lazily and give the same output; however, it evaluates the IQueryable on entry, and you're filtering the collection in memory instead of against the database.
When you later add an extra condition on another table (the count), it has to lazily fetch each and every one of the Items collections from the database, since it had already evaluated the IQueryable before it knew about the condition.
(Yes, I would also like the extensive set of extension methods on IEnumerable to be virtual members instead, but, alas, they're not.)
I need to trigger updates of a whole bunch of records in a Salesforce database without really updating any values. This is to make a few formulas to recalculate some fields.
Here's what I tried - a schedulable class (say I want it to run every night):
global class acmePortfolioDummyUpdate implements Schedulable
{
    global void execute(SchedulableContext SC)
    {
        for (Acme_Portfolio__c p : [Select Id From Acme_Portfolio__c]) {
            update(p);
        }
    }
}
update(p) is a DML statement, and Salesforce limits the number of DML statements to 150 per transaction. In my case there are a few thousand records.
Also, I need to do this across many different portfolios, and SF limits the number of scheduled classes to 10.
Any workaround for this?
Thanks
Try Batch Apex. You can schedule your batch using a schedulable class. Correct me if I'm wrong, but aren't formulas recalculated each time you read them?
Edit: Comments don't have enough space.
I can't guarantee this will compile (I don't have access to an org right now), but try something like this:
global class batchClass implements Database.Batchable<sObject> {
    global Database.QueryLocator start(Database.BatchableContext BC) {
        return Database.getQueryLocator('Select Id From Acme_Portfolio__c');
    }
    global void execute(Database.BatchableContext BC, List<sObject> scope) {
        update scope;
    }
    global void finish(Database.BatchableContext BC) {
    }
}
And run this from the system log:
Database.executeBatch(new batchClass());
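To get the nightly run, wrap the batch kickoff in a Schedulable as well; a sketch (the scheduler class name and the cron expression are just examples; '0 0 1 * * ?' fires at 1 AM every day):
global class batchClassScheduler implements Schedulable {
    global void execute(SchedulableContext sc) {
        // Each batch chunk gets its own set of governor limits
        Database.executeBatch(new batchClass());
    }
}
Then schedule it once, e.g. from the system log:
System.schedule('Nightly portfolio touch', '0 0 1 * * ?', new batchClassScheduler());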