Query DB using Apex in Salesforce

Tags: apex-code, salesforce.com, apex-triggers
I have two objects, Customer__c and Order__c.
I am trying to write a trigger to delete the order entries belonging to a customer who has been made inactive.
Basically, I want a trigger on update of the Customer__c table:
retrieve the Customer__c entries whose Active__c (a boolean field) has been set to false by the update, take each of those customers' 'Name', look it up in the Order__c table, and delete all the orders belonging to that customer.
Below is my trigger code. When I try to save the trigger in Salesforce,
I get the following error:
Error: Compile Error: unexpected token: 'res2' at line 13 column 19
Could anyone please help me with this?
trigger NewCustomerActive on Customer__c (after update) {
    List<Customer__c> res2 =
        [SELECT Name FROM Customer__c j WHERE j.Active__c = false];
    List<Order__c> res =
        [SELECT Name FROM Order__c WHERE Customer__c = res2];
}

Change it to
trigger NewCustomerActive on Customer__c (after update) {
    List<Customer__c> res2 =
        [SELECT Name FROM Customer__c j WHERE j.Active__c = false];
    List<Order__c> res =
        [SELECT Name FROM Order__c WHERE Customer__c IN :res2];
}

Or, if you want to save SOQL statements:
trigger NewCustomerActive on Customer__c (after update) {
    List<Order__c> res =
        [SELECT Name FROM Order__c WHERE Customer__r.Active__c = false];
}
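Since the original goal was to delete the matching orders, not just select them, a fuller version of the trigger might look like the untested sketch below. It assumes Order__c.Customer__c is a lookup to Customer__c and only touches customers whose Active__c flag actually changed to false in this update:

trigger NewCustomerActive on Customer__c (after update) {
    // Collect only the customers that were just deactivated by this update
    List<Customer__c> deactivated = new List<Customer__c>();
    for (Customer__c c : Trigger.new) {
        Customer__c previous = Trigger.oldMap.get(c.Id);
        if (previous.Active__c == true && c.Active__c == false) {
            deactivated.add(c);
        }
    }
    if (!deactivated.isEmpty()) {
        // Assumes Order__c.Customer__c is a lookup to Customer__c
        delete [SELECT Id FROM Order__c WHERE Customer__c IN :deactivated];
    }
}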

Related

Oracle Update with Joins

I'm trying to convert a few MS Access queries into Oracle. The following is one of the queries from MS Access.
UPDATE [RESULT] INNER JOIN [MASTER]
    ON ([RESULT].[LAST_NAME] = [MASTER].[LAST_NAME])
    AND ([RESULT].[FIRST_NAME] = [MASTER].[FIRST_NAME])
    AND ([RESULT].[DOCUMENT_NUMBER] = [MASTER].[DOCUMENT_NUMBER])
    AND ([RESULT].[BATCH_ID] = [MASTER].[LEAD_ID])
SET [MASTER].[CLOSURE_REASON] = "Closed For Name and Document Number Match",
    [MASTER].[RESULT_ID] = [RESULT].[ID],
    [MASTER].[RESULT_PID] = [RESULT].[PID]
WHERE (([MASTER].[CLOSURE_REASON] Is Null)
    AND ([MASTER].[REC_CODE] = "A1")
    AND ([RESULT].[EVENT_DATE] = [MASTER].[EVENT_DATE])
    AND ([RESULT].[EVENT_TYPE] = "Open")
    AND ([MASTER].[DOCUMENT_NUMBER] Is Not Null)
    AND ([MASTER].[DOCUMENT_NUMBER] <> "null"));
First I received an "ORA-01779: cannot modify a column which maps to a non key-preserved table" error. I followed different examples (including MERGE) from your site and modified my original query. Now I receive an "ORA-30926: unable to get a stable set of rows in the source tables" error.
Most of the examples showed only one join between the tables, but I have to make more joins based on my requirements.
Any help translating this query into Oracle would be great. Thanks!
I believe this should be equivalent.
UPDATE master m
   SET closure_reason = 'Closed For Name and Document Number Match',
       (result_id, result_pid) = (SELECT r.id, r.pid
                                    FROM result r
                                   WHERE m.last_name = r.last_name
                                     AND m.first_name = r.first_name
                                     AND m.lead_id = r.batch_id
                                     AND m.document_number = r.document_number
                                     AND m.event_date = r.event_date
                                     AND r.event_type = 'Open')
 WHERE m.closure_reason IS NULL
   AND m.rec_code = 'A1'
   AND m.document_number IS NOT NULL
   AND m.document_number != 'null'
   AND EXISTS (SELECT 1
                 FROM result r
                WHERE m.last_name = r.last_name
                  AND m.first_name = r.first_name
                  AND m.lead_id = r.batch_id
                  AND m.document_number = r.document_number
                  AND m.event_date = r.event_date
                  AND r.event_type = 'Open')
Obviously, however, this isn't tested. If you could post the DDL to create your tables, the DML to insert a few rows, and show the expected result, we could test our code and would likely be able to give you more accurate answers.
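Since you mentioned MERGE, here is an equally untested sketch of that route. ORA-30926 usually means several RESULT rows match the same MASTER row, so the source query below collapses to one row per join key; MAX() is only an arbitrary tie-break (and may pick id and pid from different rows), so adjust it to your actual business rule:

-- Untested sketch: MERGE equivalent of the UPDATE above, with the source
-- deduplicated per join key to avoid ORA-30926.
MERGE INTO master m
USING (SELECT last_name, first_name, batch_id, document_number, event_date,
              MAX(id)  AS id,
              MAX(pid) AS pid
         FROM result
        WHERE event_type = 'Open'
        GROUP BY last_name, first_name, batch_id, document_number, event_date) r
   ON (    m.last_name       = r.last_name
       AND m.first_name      = r.first_name
       AND m.lead_id         = r.batch_id
       AND m.document_number = r.document_number
       AND m.event_date      = r.event_date)
 WHEN MATCHED THEN UPDATE
  SET m.closure_reason = 'Closed For Name and Document Number Match',
      m.result_id      = r.id,
      m.result_pid     = r.pid
 WHERE m.closure_reason IS NULL
   AND m.rec_code = 'A1'
   AND m.document_number IS NOT NULL
   AND m.document_number != 'null'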

EntityFramework - SaveChanges not saving but SQL seen in Profiler

I have read through almost every post on EF SaveChanges and do not believe my answer lies in any of those posts.
I am using C#, .NET 4, EF 4.3.1, SQL Server 2008R2, VS 2k11 Beta, AutoMapper.
Here is my code:
using (Model.AnimalRescueEntities context = new Model.AnimalRescueEntities())
{
    using (TransactionScope transaction = new TransactionScope())
    {
        context.Connection.Open();

        //Retrieve the event
        eventDB = context.Events.Single(e => e.ID == eventRegVM.EventID);
        eventOrgBaseDB = context.Entity_Base.Single(b => b.ID == eventDB.Entity_Organisation.ID);
        eventRegVM.Event = Mapper.Map<Model.Event, EventsViewModel>(eventDB);
        eventRegVM.Event.Entity_Organisation.Entity_Base = Mapper.Map<Model.Entity_Base, Entity_BaseViewModel>(eventOrgBaseDB);

        //saves Event_Registration
        eventRegDB = Mapper.Map<Event_RegistrationViewModel, Model.Event_Registration>(eventRegVM);
        eventRegDB.Event = eventDB;
        eventRegDB.EventID = eventDB.ID;
        eventRegDB.Event.Entity_Organisation = context.Entity_Organisation.Single(o => o.ID == eventOrgBaseDB.ID);
        eventRegDB.Event.Entity_Organisation.Entity_Base = eventOrgBaseDB;

        //Add the link between EVENT and REGISTRATION
        context.Event_Registration.AddObject(eventRegDB);

        int numChanges = context.SaveChanges();
        var regs = context.Event_Registration.Where(r => r.ID != null).ToList();
    }
}
I have SQL Profiler running in the background and when SaveChanges is called I see this SQL code (numChanges is 1):
exec sp_executesql N'declare @generated_keys table([ID] uniqueidentifier)
insert [dbo].[Event_Registration]([EventID], [DateSubmitted], [HasPaid], [PaymentMethod], [Comments], [AmountPaid])
output inserted.[ID] into @generated_keys
values (@0, @1, null, @2, null, @3)
select t.[ID]
from @generated_keys as g join [dbo].[Event_Registration] as t on g.[ID] = t.[ID]
where @@ROWCOUNT > 0',N'@0 uniqueidentifier,@1 datetime2(7),@2 int,@3 decimal(19,4)',@0='1D841F75-AEA1-4ED1-B3F0-4E3994D7FC0D',@1='2012-07-04 14:59:45.5239309',@2=0,@3=0
regs will contain the three existing rows and my new row. However, I cannot run a SELECT statement in SQL Server and see my new row. Running this many times gets me the same result - three existing rows in the database and a new fourth row that never makes it to the database.
eventRegDB also contains the GUID created for the primary key, ID; I assume SQL Server does this, but I'm not 100% sure of that.
I have taken the above T-SQL and run it in a query window against my database - I get new rows in my Event_Registration table after that - that is how the three existing rows were created.
I see no exceptions or other errors generated and cannot find any reason this would not save to the database. Any ideas? If you want to see the schema for the SQL, how to recreate the database, or the code (any or all), then ask - this is all hosted on http://animalrescue.codeplex.com/ but I haven't stored this code, yet.
You are missing transaction.Complete(), so your transaction is never committed. When the using block for the TransactionScope completes, your transaction is rolled back.
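In code, the only change needed is the Complete() call before the scope is disposed; a trimmed sketch of the pattern (the query and mapping details are omitted here):

using (Model.AnimalRescueEntities context = new Model.AnimalRescueEntities())
{
    using (TransactionScope transaction = new TransactionScope())
    {
        // ... query, map, and AddObject exactly as in the code above ...
        int numChanges = context.SaveChanges();

        // Without this call, disposing the TransactionScope rolls the work back.
        transaction.Complete();
    }
}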

Best Practice Checking for duplicate rows before inserting list of items

I have an array of objects that I want to enter into the database.
My method call looks like this.
public void Add(CardElement[] cardElements)
{
    foreach (var cardElement in cardElements)
    {
        Data.Entry(cardElement).State = System.Data.EntityState.Added;
    }
    Data.SaveChanges();
}
The database table resembles this:
MS SQL: table mytable, columns a, b, c, d, e, f
Unique constraint on a, b, c
The data I want to insert resembles this.
var obj = new[] {
    new MyObject() { a = 1, b = 1, c = 1 },
    new MyObject() { a = 1, b = 1, c = 2 },
    new MyObject() { a = 1, b = 1, c = 3 }
};
So, I want to check the database for these three rows before I add them to the database.
I could do something like the following, but I assume this would cause some extra trips to the database.
private bool checkExists(CardElement[] cardElements)
{
    foreach (var cardElement in cardElements)
    {
        var exists = (from ce in Data.CardElements
                      where ce.CardId == cardElement.CardId
                      where ce.Area == cardElement.Area
                      where ce.ElementName == cardElement.ElementName
                      select ce).Any();
        if (exists)
            return true;
    }
    return false;
}
So, how could I handle this more gracefully?
Is it even worth trying to accomplish this using linq?
Should I write some stored procedures for performance?
I agree that you should let the db make the decision.
Please have a look at using UPSERT as stated in this post
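As a rough illustration of that idea on SQL Server 2008 (the table and column names are just the placeholders from the question, and @a-@f stand for the values to insert), a MERGE can skip rows that would violate the (a, b, c) unique constraint:

-- Sketch only: insert the row unless the unique key (a, b, c) already exists.
MERGE mytable AS t
USING (VALUES (@a, @b, @c, @d, @e, @f)) AS s (a, b, c, d, e, f)
    ON t.a = s.a AND t.b = s.b AND t.c = s.c
WHEN NOT MATCHED THEN
    INSERT (a, b, c, d, e, f) VALUES (s.a, s.b, s.c, s.d, s.e, s.f);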
Why not just attempt the insert and let the database tell you if any unique constraint violations have occurred (using try/catch)?
The problem is that even if you query the data first, somebody else can insert the record between your query and saving your changes. You will still have to handle the exception for violating the unique constraint despite your additional queries - and yes, every check will make an additional trip to the database.
If your main concern is performance, use a stored procedure, where you can additionally use a table hint to lock the table against inserts during the initial check for existence.
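If you go the try/catch route with the DbContext API used in the question, a minimal sketch could look like this (2627 and 2601 are SQL Server's unique constraint/index violation numbers; note that a single SaveChanges call fails as a whole, so to skip individual duplicates you would have to save per element or filter first):

public void Add(CardElement[] cardElements)
{
    foreach (var cardElement in cardElements)
    {
        Data.Entry(cardElement).State = System.Data.EntityState.Added;
    }

    try
    {
        Data.SaveChanges();
    }
    catch (System.Data.Entity.Infrastructure.DbUpdateException ex)
    {
        // Walk down to the root SqlException and check for a unique violation.
        var sqlEx = ex.GetBaseException() as System.Data.SqlClient.SqlException;
        if (sqlEx != null && (sqlEx.Number == 2627 || sqlEx.Number == 2601))
        {
            // Duplicate key: decide whether to ignore, log, or rethrow.
            return;
        }
        throw;
    }
}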

Failed to batch insert in Subsonic3 with error "Must declare the scalar variable..."

I have run into a problem inserting multiple rows in a batch with SubSonic 3. My development environment includes:
1. Visual Studio 2010, but using .NET 3.5
2. Active Record mode in SubSonic 3.0.0.4
3. SQL Server 2005 Express
4. Northwind sample database
I am using Active Record mode to insert multiple "Product" rows into the "Products" table. If I insert the rows one by one, either by calling "aProduct.Add()" or by calling "Insert.Execute()" multiple times (just like the code below), it works fine.
private static Product[] CreateProducts(int count)
{
    Product[] products = new Product[count];
    for (int index = 0; index < products.Length; ++index)
    {
        products[index] = new Product
        {
            ProductName = string.Format("cheka-test-{0}", index.ToString()),
            Discontinued = (index % 2 == 0),
        };
    }
    return products;
}

private static void SucceedByMultiExecuteInsert()
{
    Product[] products = CreateProducts(2);

    // -------------------------------- prepare batch
    NorthwindDB db = new NorthwindDB();
    var inserts = from prod in products
                  select db.Insert.Into<Product>(x => x.ProductName, x => x.Discontinued)
                                  .Values(prod.ProductName, prod.Discontinued);

    // -------------------------------- batch insert
    var selectAll = Product.All();
    Console.WriteLine("--- before total rows = {0}", selectAll.Count().ToString());

    foreach (Insert insert in inserts)
        insert.Execute();

    Console.WriteLine("+++ after inserting {0} rows, now total rows = {1}",
        products.Length.ToString(), selectAll.Count().ToString());
}
but if I use "BatchQuery" like the code below,
private static void FailByBatchInsert()
{
    Product[] products = CreateProducts(2);

    // -------------------------------- prepare batch
    NorthwindDB db = new NorthwindDB();
    BatchQuery batchquery = new BatchQuery(db.Provider, db.QueryProvider);
    var inserts = from prod in products
                  select db.Insert.Into<Product>(x => x.ProductName, x => x.Discontinued)
                                  .Values(prod.ProductName, prod.Discontinued);
    foreach (Insert insert in inserts)
        batchquery.Queue(insert);

    // -------------------------------- batch insert
    var selectAll = Product.All();
    Console.WriteLine("--- before total rows = {0}", selectAll.Count().ToString());

    batchquery.Execute();

    Console.WriteLine("+++ after inserting {0} rows, now total rows = {1}",
        products.Length.ToString(), selectAll.Count().ToString());
}
then it fails with the exception:
Unhandled Exception: System.Data.SqlClient.SqlException: Must declare the scalar variable "@ins_ProductName".
Must declare the scalar variable "@ins_ProductName".
Please give me some help to solve this problem. Many thanks.
I ran into this problem as well. If you look at the query it's attempting to run, you'll see it doing something like this (this isn't actual code but you'll get the point):
exec_sql N'insert into MyTable (SomeField) Values (@ins_SomeField)',N'@0 varchar(32)','@0=SomeValue'
For some reason it defines the parameters in the query with "@ins_" + FieldName but then passes the parameters as ordinals. I have yet to determine the pattern for why/when it does this, but I've lost enough time during this dev cycle futzing with SubSonic to try to diagnose the problem properly.
The work-around I implemented involves downloading the 3.0.0.4 source from GitHub and making a change on line 179 of Insert.cs.
Where it reads
ParameterName = _provider.ParameterPrefix + "ins_" + columnName.ToAlphaNumericOnly(),
Changing it to
ParameterName = _provider.ParameterPrefix + Inserts.Count.ToString(),
seemed to do the trick for me. I make no warranties about this solution for you, expressed or implied. It did work for me but your mileage may vary.
I should also note that there's similar logic around the "update" statements as well in Update.cs on lines 181 and 194 but I haven't had these give me problems... yet.
Honestly, I don't think SubSonic is ready for primetime and that's a shame because I really like how Rob set it up. That said, it's in my product for better or worse now so you make the best with what you got.

Help required to optimize LINQ query

I am looking to optimize my LINQ query because although it works right, the SQL it generates is convoluted and inefficient...
Basically, I am looking to select customers (as CustomerDisplay objects) who ordered the required product (reqdProdId), and are registered with a credit card number (stored as a row in RegisteredCustomer table with a foreign key CustId)
var q = from cust in db.Customers
        join regCust in db.RegisteredCustomers on cust.ID equals regCust.CustId
        where cust.CustomerProducts.Any(co => co.ProductID == reqdProdId)
        where regCust.CreditCardNumber != null && regCust.Authorized == true
        select new CustomerDisplay
        {
            Id = cust.Id,
            Name = cust.Person.DisplayName,
            RegNumber = cust.RegNumber
        };
As an overview, a Customer has a corresponding Person which has the Name; PersonID is a foreign key in Customer table.
If I look at the SQL generated, I see all columns being selected from the Person table. FYI, DisplayName is an extension method that uses Customer.FirstName and LastName. Any ideas how I can limit the columns selected from Person?
Secondly, I want to get rid of the Any clause (and use a sub-query) to select all other CustomerIds who have the required ProductID, because it (understandably) generates an Exists clause.
As you may know, LINQ has a known issue with junction tables, so I cannot just do a cust.CustomerProducts.Products.
How can I select all Customers in the junction table with the required ProductID?
Any help/advice is appreciated.
The first step is to start your query from CustomerProducts (as Alex said):
IQueryable<CustomerDisplay> myCustDisplay =
    from custProd in db.CustomerProducts
    join regCust in db.RegisteredCustomers
        on custProd.Customer.ID equals regCust.CustId
    where custProd.ProductID == reqProdId
        && regCust.CreditCardNumber != null
        && regCust.Authorized == true
    select new CustomerDisplay
    {
        Id = cust.Id,
        Name = cust.Person.Name,
        RegNumber = cust.RegNumber
    };
This will simplify your syntax and hopefully result in a better execution plan.
Next, you should consider creating a foreign key relationship between Customers and RegisteredCustomers. This would result in a query that looked like this:
IQueryable<CustomerDisplay> myCustDisplay =
    from custProd in db.CustomerProducts
    where custProd.ProductID == reqProdId
        && custProd.Customer.RegisteredCustomer.CreditCardNumber != null
        && custProd.Customer.RegisteredCustomer.Authorized == true
    select new CustomerDisplay
    {
        Id = cust.Id,
        Name = cust.Person.Name,
        RegNumber = cust.RegNumber
    };
Finally, for optimum speed, have LINQ compile your query ahead of time rather than at run time by using a compiled query:
Func<MyDataContext, SearchParameters, IQueryable<CustomerDisplay>>
    GetCustWithProd =
        System.Data.Linq.CompiledQuery.Compile(
            (MyDataContext db, SearchParameters myParams) =>
                from custProd in db.CustomerProducts
                where custProd.ProductID == myParams.reqProdId
                    && custProd.Customer.RegisteredCustomer.CreditCardNumber != null
                    && custProd.Customer.RegisteredCustomer.Authorized == true
                select new CustomerDisplay
                {
                    Id = cust.Id,
                    Name = cust.Person.Name,
                    RegNumber = cust.RegNumber
                });
You can call the compiled query like this:
IQueryable<CustomerDisplay> myCustDisplay = GetCustWithProd(db, myParams);
I'd suggest starting your query from the product in question, e.g. something like:
from cp in db.CustomerProducts
join .....
where cp.ProductID == reqdProdID
As you have found, using a property defined as an extension function or in a partial class will require that the entire object is hydrated first and then the select projection is done on the client side because the server has no knowledge of these additional properties. Be glad that your code ran at all. If you were to use the non-mapped value elsewhere in your query (other than in the projection), you would likely see a run-time exception. You can see this if you try to use the Customer.Person.DisplayName property in a Where clause. As you have found, the fix is to do the string concatenation in the projection clause directly.
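For example, a rewrite of the original projection along those lines might look like the sketch below; it assumes FirstName and LastName are mapped columns on Person (per the DisplayName description above), which keeps the whole query translatable and limits the Person columns that get selected:

var q = from cust in db.Customers
        join regCust in db.RegisteredCustomers on cust.ID equals regCust.CustId
        where cust.CustomerProducts.Any(co => co.ProductID == reqdProdId)
        where regCust.CreditCardNumber != null && regCust.Authorized == true
        select new CustomerDisplay
        {
            Id = cust.Id,
            // Concatenate in the projection so the provider can translate it to SQL
            Name = cust.Person.FirstName + " " + cust.Person.LastName,
            RegNumber = cust.RegNumber
        };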
Lame Duck, I think there is a bug in your code as the cust variable used in your select clause isn't declared elsewhere as a source local variable (in the from clauses).
