IBM DB2 + Doctrine - Auto increment in composite primary keys - laravel

My problem is easy to understand and has been mentioned many times here on Stack Overflow, with references to the Doctrine docs.
Every entity with a composite key cannot use an id generator other
than "ASSIGNED". That means the ID fields have to have their values
assigned before you call EntityManager#persist($entity).
I tried getting the last generated ID, adding 1 to its value, and persisting the entity. The problem is that a third-party application that uses the same IBM DB2 database can no longer add a row, because the auto-increment index is not updated when I insert a row that way.
Is there a way to make this work or a way to update the table auto increment index?
Thanks in advance.
EDIT
To help you better understand what I have to achieve, here is my example.
EntityClass
class Entity
{
    /**
     * @ORM\Id
     * @ORM\Column(type="string", name="serie")
     */
    protected $serie;

    /**
     * @ORM\Id
     * @ORM\Column(type="integer", name="reference")
     * @ORM\GeneratedValue
     */
    protected $reference;

    // More code...
}
Composite primary keys are allowed by Doctrine, but for some reason, when I fill the entity this way
$entity = new Entity();
$entity->set("serie", date('Y')); // Custom setter that finds the property and sets the value; here, the current year as a string
// More assignements, except for the autoincrement value
$em->persist($entity);
$em->flush();
It throws an exception saying that one of the IDs is not filled and MUST be filled in a composite-key entity. But it is an auto-increment column, and I need to make it work that way: either find a way to get the next auto-increment value for the table, or update the auto-increment value for the table in IBM DB2. Otherwise, the third-party software will crash if I take the max value of the auto-increment column, increase it by one, and assign it to the entity manually.
The query:
SELECT presence FROM DB2ADMIN.PRESENCES WHERE serie LIKE 2017 ORDER BY presence DESC FETCH FIRST 1 ROWS ONLY;
If you need any further information, let me know.

There are two ways to do this, but since you don't have access to transactions (and apparently don't care about gaps), I recommend against one of them.
The first way, which I'm recommending you not use, is to create a table to hold the generated value, increment it, and return it. I previously answered a question about this for SQL Server, but the concept should translate. Note that some of the utility is lost since you can't generate the value in a trigger, but it should still work. The primary remaining issue is that the table represents a bottleneck, which you're not getting much benefit out of.
The second way is just to use a separate SEQUENCE for each year. This is somewhat problematic in that you'd need to create the object each year, but it would be much faster to get a number. You'd also be essentially guaranteed to have gaps.
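A minimal sketch of the sequence-per-year approach in DB2 SQL; the sequence name and options here are my assumptions, not from the question:

```sql
-- One sequence per year (would have to be created again each year).
CREATE SEQUENCE presences_seq_2017
    AS INTEGER START WITH 1 INCREMENT BY 1 NO CYCLE;

-- Fetch the next number when inserting; gaps appear if a
-- transaction rolls back after taking a value.
SELECT NEXT VALUE FOR presences_seq_2017
FROM SYSIBM.SYSDUMMY1;
```

The fetched value can then be assigned to the entity before calling persist(), satisfying Doctrine's "ASSIGNED" requirement for composite keys without racing the third-party application.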
Note that I'm always a little suspicious of primary keys where the value is incremented and has value, especially if gaps are allowed.

Related

How to stop a user from entering duplicate values

In one of my Java EE applications, I use a registration page to register new users; as soon as someone registers, the submitted values are inserted into an Oracle database. But there was no way of detecting duplicate values, so I thought about adding a unique constraint to some columns. Later I learned that I can't declare more than one column as unique (in my case I have already declared userid as the primary key), but I need to make more than one column unique (the emailid field, for example). Also, simply adding the constraint doesn't help: if a user submits a form with a duplicate value, an exception is thrown and the user won't understand what happened, as he will just be redirected to a blank page. So I have 2 questions.
1) How can I inform the user about inserting duplicate values?
and
2) How can I make more than one column unique in Oracle?
N.B. I don't know JavaScript!
First, you certainly can declare multiple unique constraints on a table. You can declare that userid is a primary key and then declare emailid as unique. You can declare as many unique constraints as you'd like.
Second, your application needs to catch the duplicate key exception and do something useful with it. Redirecting the user to a blank page is not useful: your application ought to catch the constraint exception and present a meaningful message to the user. For example, if you get an exception stating that the constraint UK_EMAILID was violated, you'd probably want to present an error message saying something along the lines of "This email address already exists."
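As a sketch of that idea, here is one way to map a constraint-violation exception to a user-facing message. The constraint names and messages are assumptions for illustration; a real application would use the constraint names it actually declared:

```java
import java.sql.SQLIntegrityConstraintViolationException;
import java.util.Map;

public class ConstraintMessages {

    // Hypothetical mapping from declared constraint names to messages.
    private static final Map<String, String> MESSAGES = Map.of(
        "UK_EMAILID", "This email address is already registered.",
        "PK_USERID",  "This user id is already taken.");

    // Scan the exception text for a known constraint name and return
    // the matching message, falling back to a generic one.
    static String userMessageFor(SQLIntegrityConstraintViolationException e) {
        String text = e.getMessage() == null ? "" : e.getMessage();
        for (Map.Entry<String, String> entry : MESSAGES.entrySet()) {
            if (text.contains(entry.getKey())) {
                return entry.getValue();
            }
        }
        return "A duplicate value was submitted.";
    }

    public static void main(String[] args) {
        // Simulate the ORA-00001 text Oracle produces on a violation.
        SQLIntegrityConstraintViolationException e =
            new SQLIntegrityConstraintViolationException(
                "ORA-00001: unique constraint (SCHEMA.UK_EMAILID) violated");
        System.out.println(userMessageFor(e));
    }
}
```

The servlet or JSF layer would catch the exception around the insert, call a helper like this, and re-render the registration form with the message instead of redirecting to a blank page.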
If you are using JPA, you can declare unique constraints:
@Entity
@Table(name = "entity_table_name", uniqueConstraints = {
    @UniqueConstraint(columnNames = {"uniqueField1"}),                // Unique value on one field.
    @UniqueConstraint(columnNames = {"uniqueField2", "uniqueField3"}) // Unique combination.
})
public class YourEntity {
    private Long id;
    private String uniqueField1;
    private String uniqueField2;
    private String uniqueField3;
    private String uniqueField4;
    // ...
}
The JPA implementation (Hibernate, EclipseLink) will take care of the Oracle part.

Doctrine many-to-many relationship wants to create a table twice when I create a migration

Before I describe my problem, it might actually make it clearer if I start with the error I'm getting:
$ ./app/console doc:mig:diff
[Doctrine\DBAL\Schema\SchemaException]
The table with name 'user_media_area' already exists.
That's absolutely true - user_media_area does exist. I created it in a previous migration and I don't understand why Symfony is trying to create the table again.
My problem has something to do with a many-to-many relationship. I have a table called user, a table called media_area and a table called user_media_area.
Here's the code where I tell user about media_area (Entity/User.php):
/**
 * @ORM\ManyToMany(targetEntity="MediaArea", inversedBy="mediaAreas")
 * @JoinTable(name="user_media_area",
 *     joinColumns={@JoinColumn(name="user_id", referencedColumnName="id")},
 *     inverseJoinColumns={@JoinColumn(name="media_area_id", referencedColumnName="id")}
 * )
 */
private $mediaAreas;
And here's where I tell media_area about user (Entity/MediaArea.php):
/**
 * @ORM\ManyToMany(targetEntity="User", mappedBy="users")
 */
private $users;
What's interesting is that if I remove that JoinTable stuff from Entity/User.php, ./app/console doctrine:migrations:diff will work again:
/**
 * @ORM\ManyToMany(targetEntity="MediaArea", inversedBy="mediaAreas")
 */
private $mediaAreas;
However, it's a little off: it now wants to create a new table called mediaarea, which I don't want. My table already exists and it's called media_area.
So it looks like either way, Symfony is trying to create a table based on this ManyToMany thing in my User class, and the only reason the problem goes away when I remove the JoinTable is that the name of the table it wants to create (mediaarea) no longer matches the actual name of my table (media_area).
So my question is: Why does it want to create a new table at all? What am I doing wrong?
(I know it's possible that my naming conventions are off. Symfony and Doctrine's database examples are frustratingly devoid of multi-term column names, so I don't always know if I'm supposed to do media_area or mediaArea.)
According to the Association Mapping explanation in the official docs, the @JoinColumn and @JoinTable definitions are usually optional and have sensible default values, being:
name: "<fieldname>_id"
referencedColumnName: "id"
From that we can conclude that there is really no concrete difference between the two implementations you presented.
However, when it comes to migrations, the creation of the table is pretty common and expected behaviour. The thing is, the table should always get deleted and created again, which is not happening.
About the table name issue, the default behaviour of Doctrine 2 for this:
/**
 * @ORM\ManyToMany(targetEntity="MediaArea", inversedBy="mediaAreas")
 */
private $mediaAreas;
Is to try and create a table called mediaarea. Again, perfectly normal.
If you want to declare a specific name for the table of an entity, you should do this:
/**
 * @ORM\Table(name="my_table")
 */
class Something
I'm not sure if that helps you at all, but I guess it puts you, at least, on the right track.

LINQ to Entities - How best to obtain the IDENTITY value after calling SaveChanges()

There have been numerous questions posed on this site relating to retrieving the IDENTITY value after an insert is performed. The way we have been getting the identity is to make the call below immediately after calling SaveChanges():
context.MyClass.OrderByDescending(c => c.Id).FirstOrDefault();
This seems to work consistently and may be completely adequate; however, it has the appearance of opening up a potential for error, should another record be added between the calls. So the first question is: given that EF performs within a transactional context, is this method sound?
Secondly, the answer provided to the following question suggests there may be a better way.
Linq to SQL - How to find the the value of the IDENTITY column after InsertOnSubmit()
In that answer, after calling SubmitChanges(), the following call (where "tst" represents the user's class) retrieves the value.
Response.Write("id:" + tst.id.ToString)
This appears to work exactly the same way in LINQ to Entities, where after the call to save changes the instance of the class now includes the id.
context.MyClass.Add(myClass);
context.SaveChanges();
int myNewIdentity = myClass.Id;
Since we are asking for the actual ID of the class instance (the actual record), it would appear to be failsafe. And it seems logical that the designers of EF would make such basic functionality available. Can anyone confirm that this is the proper way to get the identity, or at least a best practice?
Yes, LINQ-to-Entities (and LINQ-to-SQL for that matter) will set the generated identity column back in the entity for you after SaveChanges is called. It will also do so for any foreign keys that couldn't be set ahead of time (for instance, a new parent row + a new child row are saved together, and after SaveChanges you'll have the right value in the child row's FK value).
Your particular concern is documented in the 'Working with Entity Keys' page:
http://msdn.microsoft.com/en-us/library/dd283139.aspx
The particular section is 'Entity Keys and Added Objects' and the particular steps are:
4 - If the INSERT operation succeeds, server-generated values are written back to the ObjectStateEntry.
5 - The ObjectStateEntry updates the object with the server-generated value.

Random ID generation on Sign Up - Database Performance

I am making a site where each account will have an ID.
But I don't want to make it incremental, meaning:
id=1
id=2
...
id=1000
What I want is to have random IDs:
id=2355
id=5647734
id=23532
...
(The reason is to prevent robots from checking all account profiles by just incrementing an ID in the URL; there may be other reasons, but that is not the question.)
But, I am worried about performance on registration.
It will be something like this:
while (RANDOM_ID is not taken): generate new RANDOM_ID
On generating a new ID for a new account, I will query the database (MySQL) to check whether the ID exists, for each generation.
Is there any better solution for this?
Is there any disadvantage of using random IDs?
Thanks in advance.
There are many, many reasons not to do this:
Your solution, as written, is not transactionally-safe; two transactions at the same time could both generate the same "random" ID.
If you serialize the transaction in order to make it safe, you will slaughter performance because the query will keep every single collision row locked until it finds a spare ID.
Using a random ID as the primary key will fragment the hell out of your clustered index. This is bad enough with uuids - the whole point of an auto-generated identity column is so you can generate a safe sequence out of it.
Why not use a regular primary key, but just don't use that in any of your URLs? Generate a secondary non-sequential ID along with it - such as a uuid - index it, and use this column in any public-facing segments of your application instead of the primary key if you are really worried about security.
You can use UUIDs. A UUID is a unique identifier generated partly from a timestamp. It is almost certainly guaranteed to be unique, so you don't have to run a query to check.
I do not know what language you're using, but there should be a library or sample code for this in most languages.
Yes, you can use a UUID, but keep your auto_increment field. Just add a new field and set it to something like md5(microtime(true).rand()), or whatever other method you like, and use that unique key throughout the site to make the links, instead of exposing the primary key in URLs.
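The pattern the answers above describe, a sequential internal primary key plus a random public identifier, can be sketched like this (the Account class and field names are illustrative assumptions, not from the question):

```java
import java.util.UUID;

public class PublicId {

    // Keeps the auto-increment primary key internal and exposes
    // a random UUID in URLs instead.
    public static final class Account {
        final long id;         // sequential primary key (never shown in URLs)
        final String publicId; // random identifier used in public links

        public Account(long id) {
            this.id = id;
            this.publicId = UUID.randomUUID().toString();
        }
    }

    public static void main(String[] args) {
        Account a = new Account(1);
        // A profile URL would be built from the UUID, not the PK.
        System.out.println("/profile/" + a.publicId);
    }
}
```

The publicId column would be indexed and declared unique in the database; collisions are so improbable that the check-and-retry loop from the question becomes unnecessary, while the clustered index stays sequential.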

Using Linq SubmitChanges without TimeStamp and StoredProcedures the same time

I am using SQL tables without rowversion or timestamp columns. However, I need to use LINQ to update certain values in a table. Since LINQ cannot know which values to update, I am using a second DataContext to retrieve the current object from the database, and I use both the database object and the actual object as input for the Attach method, like so:
Public Sub SaveCustomer(ByVal cust As Customer)
    Using dc As New AppDataContext()
        If (cust.Id > 0) Then
            Dim tempCust As Customer = Nothing
            Using dc2 As New AppDataContext()
                tempCust = dc2.Customers.Single(Function(c) c.Id = cust.Id)
            End Using
            dc.Customers.Attach(cust, tempCust)
        Else
            dc.Customers.InsertOnSubmit(cust)
        End If
        dc.SubmitChanges()
    End Using
End Sub
While this does work, I have a problem though: I am also using StoredProcedures to update some fields of Customer at certain times. Now imagine the following workflow:
Get customer from database
Set a customer field to a new value
Use a stored procedure to update another customer field
Call SaveCustomer
What happens now is that the SaveCustomer method retrieves the current object from the database, which does not contain the value set in code but DOES contain the value set by the stored procedure. When attaching this to the actual object and then submitting, it will update the value set in code in the database and ... tadaaaa ... set the other one to NULL, since the actual object does not contain the change made by the stored procedure.
Was that understandable?
Is there any best practice to solve this problem?
If you make changes behind the back of the ORM and don't use concurrency checking, then you are going to have problems. You don't show what you did in step 3, but IMO you should update the object model to reflect those changes, perhaps using OUTPUT TSQL parameters. Or stick to the object-oriented approach.
Of course, doing anything without concurrency checking is a good way to lose data, so my preferred option is simply to add a rowversion. Otherwise, you could perhaps read the updated object out and merge things... somehow guessing what the right data is...
If you're going to disconnect your object from one context and use another one for the update, you need to either retain the original object, use a row version, or implement some sort of hashing routine in your database and retain the hash as part of your object. Of these, I highly recommend the rowversion option as well. Using the current value as the original value, as you are trying to do, is only asking for concurrency problems.
