Using Oracle 10g CLOB with Grails 2.0.1

I'm working on a project using Oracle 10g and Grails v2.0.1.
I'm trying to use a CLOB data type for a text input field in my Domain class, and it doesn't seem to be working. My first attempt was based on what I read here about GORM, where it says to use type: 'text', like this example:
class Address {
    String number
    String postCode

    static mapping = {
        postCode type: 'text'
    }
}
Grails mapped that to a LONG data type in my DB, which is not desirable.
My second attempt was to try type: 'clob'. That WAS effective in getting my DB data type to be CLOB, but it resulted in a class cast error because the property itself was defined as a String, i.e. String postCode. (Note that I've never seen type: 'clob' in the documentation, but I could deduce from the dialect class that clob might be a valid input there.)
I subsequently tried defining the property as a java.sql.Clob, i.e. Clob postCode, and that didn't work at all. No error messages, but nothing was getting persisted to the DB either.
I took a final step of keeping the Clob approach, but adding a transient String property whose getters/setters attempt to map the transient String value onto the persistent Clob field. But I cannot figure out how to get my String value into the Clob. Grails doesn't throw an error, but the println() after my attempted assignment never prints. I've tried using myClob.setString(1, theString) to make the assignment, but with no success.
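Roughly the transient-property approach I was attempting (a sketch from memory; the Clob field name and the getter/setter bodies here are simplified, not my exact code):

import java.sql.Clob

class Address {
    String number
    Clob postCodeClob

    static transients = ['postCode']

    String getPostCode() {
        postCodeClob?.characterStream?.text
    }

    void setPostCode(String value) {
        // This is the part that fails silently: setString() never seems
        // to take effect, and the println below never prints.
        postCodeClob?.setString(1L, value)
        println "assigned postCode: ${value}"
    }
}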
So to make a long story short, I can't seem to use a Clob in my scenario, and I'm wondering if anyone else has seen this and been able to overcome it. If so, can you tell me what I might be doing wrong?
OR... is there a way to override the data type Grails chooses, such that I could force it to map postCode type: 'text' to a CLOB? I'm not proficient with Hibernate, so I'm not sure how to go about that if there's a way.
Side note: prior to our upgrade from Grails 1.3.7 to 2.0.1, I'm pretty sure type: 'text' did, in fact, map to a CLOB in Oracle. So this might be new to 2.0.1.

I think I found an answer tucked into the documentation on Custom Hibernate Types:
"In that case, override the column's SQL type using the sqlType attribute."
This appears to be working.
Looks like I'm able to use that to force my DB type to be CLOB while keeping the Java type a String. In other words, maybe type chooses both a DB type and a Java type for handling the field, whereas sqlType gives a little more granularity to specify the DB type to use.
So the sample Domain class above should look like this in my case:
class Address {
    String number
    String postCode

    static mapping = {
        postCode sqlType: 'clob'
    }
}
I gleaned this from another StackOverflow question on the topic (the question itself clued me in, whereas the accepted answer misled me!):
How do I get Grails to map my String field to a Clob?
I spent a day trying to figure this all out, and it was incredibly frustrating. So maybe my notes on the topic here will help someone else avoid that experience!
And while I'm keeping notes here... this post proved somewhat useful in terms of troubleshooting how to get more specific in my mappings:
http://omaha-seattle.blogspot.com/2010/02/grails-hibernate-custom-data-type.html
Interesting code from that is reproduced here:
// Config.groovy (maps a custom SixDecimal type)
grails.gorm.default.mapping = {
    'user-type'(type: SixDecimalUserType, class: SixDecimal)
}

Probably too late, but I have found a really simple solution to this problem:
class MyDomain {
    String myField
    ...
    static mapping = {
        myField type: 'materialized_clob'
    }
}
Nothing else is required; writes and reads work like a charm! :D
Hope it helps.
Ivan.

Ivan's answer works for me. MaterializedClob is the Hibernate JDBC type for a Java String.

Related

GraphQL Nexus - adding custom scalar type

GraphQL Nexus is fairly new, and the documentation and examples both appear to be lacking. From the example and from the docs I am trying to add my own non-GraphQL scalar type. I created my own scalar following the example in the documentation, but when I try to use it in an objectType I get the red underline. What am I doing wrong?
To resolve this issue what I did was:
1. Once you create your own scalar type, such as:

// json.ts - FILE NAME MATTERS
export const JSONScalar = scalarType({
  name: "JSON",
  asNexusMethod: "json",
  description: "JSON scalar type",
  ...
})
2. Once I called the new type in a separate object, I had to add this above my field for it to compile; you may not have to:

// @ts-ignore
t.topjson("data");
3. In my makeSchema I added the scalar code first:

const schema = makeSchema({
  types: [JSONScalar, MyObject, BlahObject],
  // ...
});

The name of the file is very important; I think that is how the schema is generated and how it looks up the new type. I also think you have to be sure to list that scalar first in makeSchema, though I did not try switching the order around, as I spent a lot of time trying to figure out how to make my own scalar type work.
This may have been self-explanatory for more seasoned Nexus developers, but I am a novice, so these steps escaped me.
Happy coding!
You can also do this:
t.field("data", {type: "JSON"});
That works perfectly fine with TypeScript.
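Putting the pieces together, here is a minimal sketch of how the scalar, an object type, and makeSchema fit (MyObject is a placeholder, and the serialize/parseValue bodies are just illustrative pass-throughs):

import { scalarType, objectType, makeSchema } from "nexus";

// Custom JSON scalar; asNexusMethod exposes it as t.json(...) on builders.
const JSONScalar = scalarType({
  name: "JSON",
  asNexusMethod: "json",
  description: "JSON scalar type",
  serialize: (value) => value,
  parseValue: (value) => value,
});

// Hypothetical object type using the scalar both ways.
const MyObject = objectType({
  name: "MyObject",
  definition(t) {
    t.json("data");                     // via asNexusMethod
    t.field("other", { type: "JSON" }); // via the explicit type name
  },
});

const schema = makeSchema({
  types: [JSONScalar, MyObject],
});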
And I wish they had better examples and documentation; their examples only cover the most trivial use cases. I'm getting very heavily into GraphQL, Prisma, Nexus and related tools for a huge data set, so I'm literally hitting every limitation and documentation gap that exists.
But alas we can pool our knowledge here.

How to ensure certain format on property

I'm wondering if it's possible to describe a format that an interface property should have. For example:
interface User {
  age?: number;
  name: string;
  birthdate: string; // should have format 'YYYY-MM-DD'
}
I read about decorators, but they seem to apply only to classes, not interfaces.
I'm building an API with node/express and want to have input validation. So I'm considering Celebrate, which can take a joi-style schema to validate input. But I would like to use TypeScript instead to define my schema / view model... As you can see, I'm trying to use an interface to define what the input of a given endpoint should look like:
age: number, optional
name: string
birthdate: string in format "YYYY-MM-DD"
Any hints and help much appreciated :)
First and foremost: you will have to write code for the validation. It will not happen magically.
Two approaches:
Out of band validation
You use validate(obj) => errors?. You create a validate function that takes an object and tells you the errors, if any. You can write such a function yourself quite easily.
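For instance, a minimal hand-rolled validator for the User interface above (my own sketch; the regex and the error-map shape are just one reasonable choice):

// Returns a map of field -> error message, or undefined if the object is valid.
function validateUser(obj: Partial<User>): Record<string, string> | undefined {
  const errors: Record<string, string> = {};
  if (obj.age !== undefined && typeof obj.age !== "number") {
    errors.age = "age must be a number";
  }
  if (typeof obj.name !== "string" || obj.name.length === 0) {
    errors.name = "name is required";
  }
  // Check the shape first, then that the value parses as a real date.
  if (typeof obj.birthdate !== "string" || !/^\d{4}-\d{2}-\d{2}$/.test(obj.birthdate)) {
    errors.birthdate = "birthdate must have format YYYY-MM-DD";
  } else if (isNaN(Date.parse(obj.birthdate))) {
    errors.birthdate = "birthdate is not a valid date";
  }
  return Object.keys(errors).length > 0 ? errors : undefined;
}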
In band validation
Instead of {birthdate: string} you have something like {birthdate: FieldState<string>}, where FieldState maintains validation and errors for a particular field. This is the approach taken by https://formstate.github.io/#/, but you can easily create something similar yourself.
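A toy version of that idea (my own sketch, not formstate's actual API):

// Hypothetical minimal FieldState: holds a value plus its current validation error.
class FieldState<T> {
  error?: string;
  constructor(
    public value: T,
    private validator: (value: T) => string | undefined
  ) {}

  set(value: T): void {
    this.value = value;
    this.error = this.validator(value); // re-validate on every write
  }

  get hasError(): boolean {
    return this.error !== undefined;
  }
}

// Usage: a birthdate field that must match YYYY-MM-DD.
const birthdate = new FieldState<string>("", (v) =>
  /^\d{4}-\d{2}-\d{2}$/.test(v) ? undefined : "expected format YYYY-MM-DD"
);
birthdate.set("1990-01-31"); // birthdate.hasError === false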
A note on validators
I like validators as simple (value) => error? (value to optional error), as they can be framework-agnostic and used/reused to death. This is the validator shape used by formstate as well. Of course, this is just my opinion, and you can experiment with what suits your needs 🌹

Query by Enum in RavenDB

I have used the convention:
store.Conventions.SaveEnumsAsIntegers = true;
Enums are now being persisted as integers correctly, but when I try to query using an enum, the query gets translated with the enums in their string representation, which gives me no results.
session.Query<Entity>().Where(x => x.EnumProp == MyEnum.Value1);
It was my impression that SaveEnumsAsIntegers converts both when persisting and when querying, as per this post:
Querying an enum property persisted as an integer is not working
Can anyone help?
I have tested this against RavenDB 2330 and it is working as expected.
See the passing unit test here.
If there's something you are doing differently, please update your question. Thanks.
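For reference, the pattern under test looks roughly like this (a sketch against the RavenDB 2.x client API; Entity and MyEnum are the question's placeholders):

using (var store = new DocumentStore { Url = "http://localhost:8080" }.Initialize())
{
    // Persist enums as integers rather than strings.
    store.Conventions.SaveEnumsAsIntegers = true;

    using (var session = store.OpenSession())
    {
        session.Store(new Entity { EnumProp = MyEnum.Value1 });
        session.SaveChanges();
    }

    using (var session = store.OpenSession())
    {
        // With the convention set, the query side is translated
        // using the integer representation as well.
        var results = session.Query<Entity>()
                             .Customize(x => x.WaitForNonStaleResults())
                             .Where(x => x.EnumProp == MyEnum.Value1)
                             .ToList();
    }
}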

How does protocol buffer handle versioning?

How do protocol buffers handle type versioning?
For example, when I need to change a type definition over time, like adding and removing fields.
Google designed protobuf to be pretty forgiving with versioning:
unexpected data is either stored as "extensions" (making it round-trip safe), or silently dropped, depending on the implementation
new fields are generally added as "optional", meaning that old data can be loaded successfully
however:
do not renumber fields - that would break existing data
you should not normally change the way any given field is stored (i.e. from a fixed-width 32-bit int to a "varint")
Generally speaking, though - it will just work, and you don't need to worry much about versioning.
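As a concrete illustration of the "add new fields, never renumber" rule (shown in protobuf-net attribute syntax, since that's what the next answer uses; the Person types are hypothetical):

using ProtoBuf;

// v1 of the message.
[ProtoContract]
public class PersonV1
{
    [ProtoMember(1)] public string Name { get; set; }
    [ProtoMember(2)] public int Id { get; set; }
}

// v2 adds Email under a NEW tag number (3). Old data still deserializes,
// with Email simply null for messages written by v1. Renumbering the
// existing fields, by contrast, would silently corrupt old data.
[ProtoContract]
public class PersonV2
{
    [ProtoMember(1)] public string Name { get; set; }
    [ProtoMember(2)] public int Id { get; set; }
    [ProtoMember(3)] public string Email { get; set; }
}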
I know this is an old question, but I ran into this recently. The way I got around it is using facades and run-time decisions about what to serialize. This way I can deprecate/upgrade a field to a new type, with old and new messages handling it gracefully.
I am using Marc Gravell's protobuf-net (v2.3.5) and C#, but the theory of facades would work for any language and for Google's original protobuf implementation.
My old class had a Timestamp of DateTime which I wanted to change to include the "Kind" (a .NET anachronism). Adding this effectively meant it serialized to 9 bytes instead of 8, which would be a breaking serialization change!
[ProtoMember(3, Name = "Timestamp")]
public DateTime Timestamp { get; set; }
A fundamental of protobuf is NEVER to change the proto ids! I wanted to read old serialized binaries, which meant "3" was here to stay.
So,
I renamed the old property and made it private (yes, it can still be deserialized through reflection magic), so my API no longer exposes it as usable:
[ProtoMember(3, Name = "Timestamp-v1")]
private DateTime __Timestamp_v1 = DateTime.MinValue;
I created a new Timestamp property, with a new proto id, and included the DateTime.Kind
[ProtoMember(30002, Name = "Timestamp", DataFormat = ProtoBuf.DataFormat.WellKnown)]
public DateTime Timestamp { get; set; }
I added a "AfterDeserialization" method to update our new time, in the case of old messages
[ProtoAfterDeserialization]
private void AfterDeserialization()
{
    // V2 Timestamp includes a "Kind" - we will stop using __Timestamp_v1 - so keep the new property up to date
    if (__Timestamp_v1 != DateTime.MinValue)
    {
        // Assume the old timestamp was in UTC - as it was...
        Timestamp = new DateTime(__Timestamp_v1.Ticks, DateTimeKind.Utc); // for old messages - update our V2 timestamp
    }
}
Now, I have the old and new messages serializing/deserializing correctly, and my Timestamp now includes DateTime.Kind! Nothing broken.
However, this does mean that BOTH fields will be in all new messages going forward. So the final touch is to use a run-time serialization decision to exclude the old Timestamp (note this won't work if it was using protobuf's required attribute!!!)
bool ShouldSerialize__Timestamp_v1()
{
    return __Timestamp_v1 != DateTime.MinValue;
}
And that's it. I have a nice unit test which does it end-to-end if anyone wants it...
I know my method relies on .NET magic, but I reckon the concept could be translated to other languages...

Debugging LINQ to SQL SubmitChanges()

I am having a really hard time attempting to debug LINQ to SQL and submitting changes.
I have been using http://weblogs.asp.net/scottgu/archive/2007/07/31/linq-to-sql-debug-visualizer.aspx, which works great for debugging simple queries.
I'm working in the DataContext class for my project, with the following snippet from my application:
JobMaster newJobToCreate = new JobMaster();
newJobToCreate.JobID = 9999;
newJobToCreate.ProjectID = "New Project";
this.UpdateJobMaster(newJobToCreate);
this.SubmitChanges();
I catch some very odd exceptions when I run this.SubmitChanges():
Index was outside the bounds of the array.
The stack trace goes places I cannot step into:
at System.Data.Linq.IdentityManager.StandardIdentityManager.MultiKeyManager`3.TryCreateKeyFromValues(Object[] values, MultiKey`2& k)
at System.Data.Linq.IdentityManager.StandardIdentityManager.IdentityCache`2.Find(Object[] keyValues)
at System.Data.Linq.IdentityManager.StandardIdentityManager.Find(MetaType type, Object[] keyValues)
at System.Data.Linq.CommonDataServices.GetCachedObject(MetaType type, Object[] keyValues)
at System.Data.Linq.ChangeProcessor.GetOtherItem(MetaAssociation assoc, Object instance)
at System.Data.Linq.ChangeProcessor.BuildEdgeMaps()
at System.Data.Linq.ChangeProcessor.SubmitChanges(ConflictMode failureMode)
at System.Data.Linq.DataContext.SubmitChanges(ConflictMode failureMode)
at System.Data.Linq.DataContext.SubmitChanges()
at JobTrakDataContext.CreateNewJob(NewJob job, String userName) in D:\JobTrakDataContext.cs:line 1119
Does anyone have any tools or techniques they use? Am I missing something simple?
EDIT:
I've set up .NET debugging using Slace's suggestion; however, the .NET 3.5 code is not yet available: http://referencesource.microsoft.com/netframework.aspx
EDIT2:
I've changed to InsertOnSubmit as per sirrocco's suggestion, still getting the same error.
EDIT3:
I've implemented Sam's suggestions, trying to log the generated SQL and to catch the ChangeConflictException. These suggestions did not shed any more light; I never actually get as far as generating SQL when my exception is thrown.
EDIT4:
I found an answer that works for me below. It's just a theory, but it has fixed my current issue.
I always found it useful to know exactly what changes are being sent to the DataContext in the SubmitChanges() method.
I use the DataContext.GetChangeSet() method; it returns a ChangeSet object instance that holds three read-only ILists of the objects which have been added, modified, or removed.
You can place a breakpoint just before the SubmitChanges method call, and add a Watch (or Quick Watch) containing:
ctx.GetChangeSet();
Where ctx is the current instance of your DataContext, and then you'll be able to track all the changes that will be effective on the SubmitChanges call.
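For instance, a quick sketch of dumping the change set before submitting (assuming ctx is your DataContext with pending changes):

ChangeSet changes = ctx.GetChangeSet();

Console.WriteLine("Inserts: {0}", changes.Inserts.Count);
Console.WriteLine("Updates: {0}", changes.Updates.Count);
Console.WriteLine("Deletes: {0}", changes.Deletes.Count);

// Each list holds the tracked entity instances themselves.
foreach (object entity in changes.Updates)
{
    Console.WriteLine("Pending update: {0}", entity);
}

ctx.SubmitChanges();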
First, thanks everyone for the help, I finally found it.
The solution was to drop the .dbml file from the project, add a blank .dbml file and repopulate it with the tables needed for my project from the 'Server Explorer'.
I noticed a couple of things while I was doing this:
There are a few tables in the system named with two words and a space between the words, e.g. 'Job Master'. When I pulled that table back into the .dbml file, it would create a table called 'Job_Master', replacing the space with an underscore.
In the original .dbml file, one of my developers had gone through and removed all of the underscores, so 'Job_Master' became 'JobMaster' in the .dbml file. In code we could then refer to the table in a more (for us) standard naming convention.
My theory is that somewhere the translation from 'JobMaster' to 'Job Master' was lost while doing the projection, and I kept coming up with the array out of bounds error.
It is only a theory. If someone can better explain it I would love to have a concrete answer here.
My first debugging action would be to look at the generated SQL:
JobMaster newJobToCreate = new JobMaster();
newJobToCreate.JobID = 9999;
newJobToCreate.ProjectID = "New Project";
this.UpdateJobMaster(newJobToCreate);
this.Log = Console.Out; // prints the generated SQL to the console
this.SubmitChanges();
The second would be to capture the ChangeConflictException and have a look at the details of failure.
try
{
    db.SubmitChanges(ConflictMode.ContinueOnConflict);
}
catch (ChangeConflictException e)
{
    Console.WriteLine("Optimistic concurrency error.");
    Console.WriteLine(e.Message);
    Console.ReadLine();
    foreach (ObjectChangeConflict occ in db.ChangeConflicts)
    {
        MetaTable metatable = db.Mapping.GetTable(occ.Object.GetType());
        Customer entityInConflict = (Customer)occ.Object;
        Console.WriteLine("Table name: {0}", metatable.TableName);
        Console.Write("Customer ID: ");
        Console.WriteLine(entityInConflict.CustomerID);
        foreach (MemberChangeConflict mcc in occ.MemberConflicts)
        {
            object currVal = mcc.CurrentValue;
            object origVal = mcc.OriginalValue;
            object databaseVal = mcc.DatabaseValue;
            MemberInfo mi = mcc.Member;
            Console.WriteLine("Member: {0}", mi.Name);
            Console.WriteLine("current value: {0}", currVal);
            Console.WriteLine("original value: {0}", origVal);
            Console.WriteLine("database value: {0}", databaseVal);
        }
    }
}
You can create a partial class for your DataContext and use the OnCreated partial method to set up logging to Console.Out, wrapped in an #if DEBUG block. This will help you see the queries executed while debugging any instance of the DataContext you are using.
I have found this useful while debugging LINQ to SQL exceptions:
partial void OnCreated()
{
#if DEBUG
    this.Log = Console.Out;
#endif
}
The error you are referring to above is usually caused by associations pointing in the wrong direction. This happens very easily when manually adding associations to the designer since the association arrows in the L2S designer point backwards when compared to data modelling tools.
It would be nice if they threw a more descriptive exception, and maybe they will in a future version. (Damien / Matt...?)
This is what I did:

...
var builder = new StringBuilder();
try
{
    context.Log = new StringWriter(builder);
    context.MY_TABLE.InsertAllOnSubmit(someData);
    context.SubmitChanges();
}
finally
{
    Log.InfoFormat("Some meaningful message here... ={0}", builder);
}
A simple solution could be to run a trace on your database and inspect the queries run against it - filtered, of course, to sort out other applications accessing the database.
That of course only helps once you get past the exceptions...
VS 2008 has the ability to debug through the .NET Framework source (http://blogs.msdn.com/sburke/archive/2008/01/16/configuring-visual-studio-to-debug-net-framework-source-code.aspx).
This is probably your best bet; you can see what's happening and what all the properties are at the exact point in time.
Why do you do UpdateJobMaster on a new instance? Shouldn't it be InsertOnSubmit?

JobMaster newJobToCreate = new JobMaster();
newJobToCreate.JobID = 9999;
newJobToCreate.ProjectID = "New Project";
this.InsertOnSubmit(newJobToCreate);
this.SubmitChanges();
This almost certainly won't be everyone's root cause, but I encountered this exact same exception in my project - and found that the root cause was an exception being thrown during construction of an entity class. Oddly, the true exception is "lost" and instead manifests as an ArgumentOutOfRange exception originating at the iterator of the LINQ statement that retrieves the object(s).
If you are receiving this error and you have introduced OnCreated or OnLoaded methods on your POCOs, try stepping through those methods.
Hrm.
Taking a WAG (Wild-Ass Guess), it looks to me like LINQ to SQL is trying to find an object with an id that doesn't exist, based somehow on the creation of the JobMaster class. Are there foreign keys related to that table, such that LINQ to SQL would attempt to fetch an instance of a class which may not exist? You seem to be setting the ProjectID of the new object to a string - do you really have an id that's a string? If you're trying to set it to a new project, you'll need to create a new project and get its id.
Lastly, what does UpdateJobMaster do? Could it be doing something such that the above would apply?
We have actually stopped using the Linq to SQL designer for our large projects and this problem is one of the main reasons. We also change a lot of the default values for names, data types and relationships and every once in a while the designer would lose those changes. I never did find an exact reason, and I can't reliably reproduce it.
That, along with the other limitations caused us to drop the designer and design the classes by hand. After we got used to the patterns, it is actually easier than using the designer.
I posted a similar question earlier today here: Strange LINQ Exception (Index out of bounds).
It's a different use case - where this bug happens during a SubmitChanges(), mine happens during a simple query, but it is also an Index out of range error.
Cross-posting in this question in case the combination of data in the two questions helps a good Samaritan answer either :)
Check that all the "primary key" columns in your dbml actually relate to the primary keys on the database tables. I just had a situation where the designer decided to put an extra PK column in the dbml, which meant LINQ to SQL couldn't find both sides of a foreign key when saving.
I recently encountered the same issue. What I did was:

Proce proces = unit.Proces.Single(u => u.ProcesTypeId == (from pt in context.ProcesTypes
                                                          where pt.Name == "Fix-O"
                                                          select pt).Single().ProcesTypeId
                                       && u.UnitId == UnitId);

Instead of:

Proce proces = context.Proces.Single(u => u.ProcesTypeId == (from pt in context.ProcesTypes
                                                             where pt.Name == "Fix-O"
                                                             select pt).Single().ProcesTypeId
                                          && u.UnitId == UnitId);
Where context was obviously the DataContext object and "unit" an instance of a Unit object, a data class from a .dbml file.
Next, I used the "proce" object to set a property on an instance of another data class. Probably the LINQ engine could not check whether the property I was setting from the "proce" object was allowed in the INSERT command that LINQ was going to have to create to add the other data class object to the database.
I had the same unhelpful error.
I had a foreign key relation to a column of a table that was not the primary key of the table, but a unique column.
When I changed the unique column to be the primary key of the table, the problem went away.
Hope this helps anyone!
Posted my experiences with this exception in an answer to SO# 237415
I ended up on this question when trying to debug my LINQ ChangeConflictException. In the end I realized the problem was that I had manually added a property to a table in my DBML file but forgot to set properties like Nullable (which should have been true in my case) and Server Data Type.
Hope this helps someone.
This was a long time ago, but I had the same problem, and the error turned out to be caused by a trigger with a select statement. Something like:

CREATE TRIGGER name ON table1 AFTER UPDATE AS
    SELECT table1.key FROM table1
    INNER JOIN inserted ON table1.key = inserted.key
When LINQ to SQL runs the update command, it also runs a select statement in the same batch to read back the auto-generated values, and it expects the first result set to contain the columns it asked for. In this case, however, the first result set held the columns from the select statement in the trigger. So LINQ to SQL was expecting two auto-generated columns but only received one column (with the wrong data), and that was causing this exception.
