When I run SqlMetal, it capitalizes the first letter of every view and function name.
Is there a way to make it generate names in whatever case the database uses?
When I use the UI to build the DBML and CS file it handles this properly, but when I script it, SqlMetal seems to upper-case them.
This isn't a huge deal, but it makes me wonder whether there are other subtle changes I don't know about, or whether I'm simply doing something stupid.
Here is what I'm doing, if it helps:
SqlMetal.exe /conn:"Data Source=server1\developer2008;Initial Catalog=Dingo;Integrated Security=true" /views /functions /sprocs /pluralize /language:csharp /namespace:"Foo_Api" /context:"DataClassesFooDataContext" /dbml:"foo.dbml"
SqlMetal.exe /code:"foo.designer.cs" "foo.dbml"
Could you perform a transform (it is XML) on foo.dbml between getting the DB schema and generating the code?
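As an untested sketch of such a transform, to run between the two SqlMetal commands above: load the DBML with LINQ to XML and copy the database-side casing from each Name attribute into the generated Member (or, for functions, Method) attribute. Note that this also undoes /pluralize for the renamed members, and that nested <Type> elements have their own Name attribute you may need to handle too.

using System.Linq;
using System.Xml.Linq;

class DbmlCaseFix
{
    static void Main()
    {
        var doc = XDocument.Load("foo.dbml");
        // Standard DBML namespace; adjust if your file differs.
        XNamespace ns = "http://schemas.microsoft.com/linqtosql/dbml/2007";

        foreach (var table in doc.Descendants(ns + "Table"))
        {
            // Name is typically "dbo.SomeView"; drop the schema prefix and
            // reuse the database casing for the generated member.
            var dbName = ((string)table.Attribute("Name")).Split('.').Last();
            table.SetAttributeValue("Member", dbName);
        }

        foreach (var fn in doc.Descendants(ns + "Function"))
        {
            var dbName = ((string)fn.Attribute("Name")).Split('.').Last();
            fn.SetAttributeValue("Method", dbName);
        }

        doc.Save("foo.dbml");
    }
}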
I have a legacy VB application that still has some life in it, and I want to translate it to another language.
I plan to write a Ruby script, possibly utilising a parser, to extract all strings from the three million lines of source, replace them with constants, and move them to a string resource file that can be used to provide translations.
Is anyone aware of a script/library that could be used to intelligently extract the strings?
I'm not aware of any existing off-the-shelf tool that you could use. We created a tool like this at my work and it worked well. The FRM file format is quite simple (although only briefly documented). We wrote a tool that (1) extracted all strings from control definitions and (2) generated the code to reload them at runtime during Form_Load.
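If you do go the Ruby route from the question, the extraction part can start out as small as this sketch (the property list, glob path, and output format are all placeholders, and real FRM files have cases, such as line continuations and embedded quotes, that this doesn't handle):

# Pull quoted literals out of .frm control definitions and emit them as
# key/value pairs for a resource file.
STRING_PROPS = %w[Caption Text ToolTipText].freeze

resources = {}
Dir.glob("src/**/*.frm") do |path|
  File.foreach(path) do |line|
    # Matches property lines such as:  Caption = "Customer Details"
    next unless line =~ /^\s*(#{STRING_PROPS.join('|')})\s*=\s*"(.*)"\s*$/
    key = "#{File.basename(path, '.frm')}_#{$1}_#{resources.size}"
    resources[key] = $2
  end
end

resources.each { |key, value| puts "#{key}=#{value}" }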
I am looking for a way to factor the repetitive HTML out of my web pages, and for this I am planning to use FreeMarker's macro functionality. The problem is that every macro has to live in a file: either each macro gets its own file, or I group several in one file and include that in the template.
What I'd like is to import a whole directory once, with just the directory name, something like:
<#import "/tags/widgetDirectory" as widgets />
Here /tags/widgetDirectory is a directory, and every file in it would be seen as a defined macro.
When I need to insert a code snippet from a file in this directory, say slider.ftl, I would just use:
<@widgets.slider />
The system would look for slider.ftl in the /tags/widgetDirectory directory. slider.ftl could have <#macro ...> as its first line and </#macro> as its last, or these could be added transparently so the system can load the file as a macro.
This would make my designers' work easier.
Or maybe there is a better way to do this kind of widget/component-based web design?
This feature (importing directories) is actually planned for FreeMarker... but it won't happen anytime soon. I guess it can be solved fairly well with a hack right now, though. Instead of #import, use your own TemplateMethodModelEx, which you would call like <#assign widgets = importDirectory('/tags/widgetDirectory')>. It returns a TemplateHashModel, also implemented by you, that is bound to the directory path. When an item of that hash is fetched, it calls Environment.getCurrentEnvironment().include. The included file is expected to define a macro with the name __main or some such. So then you get that variable with Environment.getCurrentNamespace().get("__main") and return it as the result of the hash lookup.
Of course, this hash should also maintain a cache, so that if the same item is fetched twice, it won't include the template a second time, just return the macro extracted earlier. This can be developed further, so that if the included file doesn't define __main, it's assumed to print directly to the output, and it will be included again each time the "tag" is called.
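A rough, untested sketch of that hack (assuming a Configuration named cfg is available for loading templates and that each widget file defines a macro called __main; error handling is trimmed):

import freemarker.core.Environment;
import freemarker.template.*;

import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ImportDirectoryMethod implements TemplateMethodModelEx {
    private final Configuration cfg;

    public ImportDirectoryMethod(Configuration cfg) {
        this.cfg = cfg;
    }

    public Object exec(List arguments) throws TemplateModelException {
        final String dirPath = arguments.get(0).toString();
        return new TemplateHashModel() {
            // Cache so each widget file is only included once per hash.
            private final Map<String, TemplateModel> cache
                    = new HashMap<String, TemplateModel>();

            public TemplateModel get(String key) throws TemplateModelException {
                TemplateModel macro = cache.get(key);
                if (macro == null) {
                    try {
                        Environment env = Environment.getCurrentEnvironment();
                        // Run the widget file; it's expected to define __main.
                        env.include(cfg.getTemplate(dirPath + "/" + key + ".ftl"));
                        macro = env.getCurrentNamespace().get("__main");
                        cache.put(key, macro);
                    } catch (Exception e) {
                        throw new TemplateModelException(e);
                    }
                }
                return macro;
            }

            public boolean isEmpty() {
                return false;
            }
        };
    }
}

You would register it with cfg.setSharedVariable("importDirectory", new ImportDirectoryMethod(cfg)), and then in a template:

<#assign widgets = importDirectory('/tags/widgetDirectory')>
<@widgets.slider />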
With EnvDTE.ProjectItem, is it possible to parse an in-memory C# code string to get the FileCodeModel?
I don't want to alter the project file in the process by adding a temporary file to the project, getting its ProjectItem, doing the work, and then deleting the file. That would also alert source control to the changes.
There's simply no good way to do this with the CodeModel. This is why we're building Roslyn, to make this sort of operation trivial: it operates with an immutable model where you can take a solution, fork it into a separate copy, and do analysis without ever modifying the original. There are previews out that you might be able to use, depending upon your scenario.
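For instance, with a released Roslyn package (Microsoft.CodeAnalysis.CSharp on NuGet, rather than the preview bits mentioned above), parsing an in-memory string looks roughly like this:

using System;
using System.Linq;
using Microsoft.CodeAnalysis.CSharp;
using Microsoft.CodeAnalysis.CSharp.Syntax;

class Program
{
    static void Main()
    {
        // No file, no ProjectItem: the string is parsed directly.
        var tree = CSharpSyntaxTree.ParseText("class Foo { void Bar() { } }");
        var root = tree.GetRoot();

        // Enumerate methods, roughly what FileCodeModel would have exposed.
        foreach (var method in root.DescendantNodes().OfType<MethodDeclarationSyntax>())
            Console.WriteLine(method.Identifier.Text);
    }
}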
I am working on a module that supplies methods for navigating directories and manipulating files. Basically it will be a combination of the Dir and File classes, with options specific to the needs of a project I'm working on.
Right now I have started writing tests for some of these methods and things are getting messy.
Example
One of the methods I have is a tree function that returns a hash of files and folders where you can pass options like tree(only: 'folders', limit: 3). In order to test that it only goes down 3 levels, I would have to have 4+ subfolders with dummy files in them.
The Problem
Right now I'm testing on folders outside the project since the subfolders are already there, but I want to move away from this, especially considering how unwise it is to test against system files once I start testing methods equivalent to rm -rf (to say nothing of the lack of portability).
I'm starting to think that I need to create a "lab rat" type folder that I do all my "experiments" on, but I have no clue how to approach creating it.
Do I create a function that creates the files?
Do I pull files and folders from another location?
Do I use some sort of "lorem ipsum" generator for file structures?
Do I make all these files and folders manually (ugh)?
Do I just mock and stub the hell out of everything and not actually create/delete the files and folders? (I don't see this happening.)
So...
How would someone normally approach testing excessive amounts of file and folder manipulation?
I don't think you want to use mocks/stubs. The file system of your OS should be well tested and fast, so the benefit of mocks/stubs is minimal. Creating a mock/stub system increases the complexity without much benefit.
Here are my answers:
Do I create a function that creates the files?
Yes. You can write tests for these functions to make sure they are correct. Instead of calling Dir and File directly, write helper functions that keep the code simple and readable. Maybe you can even share the helper functions between the source and test code...
Do I pull files and folders from another location?
Not sure what this is for...
Do I use some sort of "lorem ipsum" generator for file structures?
Yes, if you mean creating functions that generate file structures (see the sketch at the end of this answer).
Do I make all these files and folders manually (ugh)?
No.
Do I just mock and stub the hell out of everything and not actually create/delete the files and folders? (I don't see this happening.)
No. One benefit of creating real files/directories is that you can manually check what is going on and not be 100% dependent on the tests. This is actually a good approach, because otherwise there could be a bug where both the source code and the test code are doing something other than what you expect, but you wouldn't know it because everything seems to be working.
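A sketch of what that can look like with Minitest: Dir.mktmpdir gives every test a throwaway sandbox, a small helper generates the nested structure, and rm -rf-style methods can never touch real files. The tree method from the question would be called against the sandbox root; the helper names here are made up.

require "minitest/autorun"
require "tmpdir"
require "fileutils"

class TreeTest < Minitest::Test
  # Build level5/level4/.../level1, each level with a dummy file.
  def make_tree(dir, depth)
    return if depth.zero?
    FileUtils.touch(File.join(dir, "file.txt"))
    sub = File.join(dir, "level#{depth}")
    Dir.mkdir(sub)
    make_tree(sub, depth - 1)
  end

  def test_limit_three_levels
    Dir.mktmpdir do |root|
      make_tree(root, 5) # deeper than the limit under test

      # Here you would call the method under test, e.g.:
      #   result = tree(root, only: 'folders', limit: 3)
      # and assert that nothing below the third level shows up.
      # As a stand-in, check the fixture itself was built five levels deep:
      deepest = Dir.glob(File.join(root, "**/")).max_by(&:length)
      assert_includes deepest, "level1"
    end
  end
end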
I've been reading up on LINQ lately to start implementing it, and there's a particular thing as to how it generates UPDATE queries that bothers me.
When the entity code is created automatically with SqlMetal or the Object Relational Designer, apparently all fields of all tables get the attribute UpdateCheck.Always, which means that for every UPDATE and DELETE query I'll get a SQL statement like this:
UPDATE table SET a = 'a' WHERE a='x' AND b='x' ... AND z='x', ad infinitum
Now, call me a purist, but this seems EXTREMELY inefficient to me, and it feels like a bad idea anyway, even if it weren't inefficient. I know the fetch will be done by the clustered primary key, so that's not slow, but SQL still needs to check every field after that to make sure it matches.
Granted, in some very sensitive applications something like this can be useful, but for the typical web app (think Stack Overflow), it seems like UpdateCheck.WhenChanged would be a more appropriate default, and I'd personally prefer UpdateCheck.Never, since LINQ will only update the actual fields that changed, not all fields, and in most real cases, the second person editing something wins anyway.
It does mean that if two people manage to edit the same field of the same row in the small window between reading that row and firing the UPDATE, the conflict won't be detected. But in reality that's a very rare case. And the one scenario we might actually want to guard against, two people changing the same thing, won't be caught by this anyway, because they won't click Submit at the exact same time, so there will be no conflict by the time the second DataContext reads and updates the record (unless the DataContext is left open and stored in Session while the page is shown, or some other seriously bad idea like that).
However, as rare as the case is, I'd really like not to be getting exceptions in my code every now and then when it happens.
So my first question is, am I wrong in believing this? (again, for "typical" web apps, not for banking applications)
Am I missing some reason why having UpdateCheck.Always as default is a sane idea?
My second question is, can I change this in a civilized way? Is there a way to tell SQLMetal or the ORD which UpdateCheck attribute to set?
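For clarity, this is the shape of mapping I'd want it to emit (a hand-written illustration with made-up names; UpdateCheck is a named parameter of the [Column] attribute, and the other values are Always and WhenChanged):

using System.Data.Linq.Mapping;

[Table(Name = "dbo.Posts")]
public class Post
{
    [Column(IsPrimaryKey = true)]
    public int Id { get; set; }

    // Excluded from the optimistic-concurrency WHERE clause:
    [Column(UpdateCheck = UpdateCheck.Never)]
    public string Title { get; set; }
}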
I'm trying to avoid the situation where I have to remember to run a tool I'd have to write, one that takes some regexes and edits all the attributes in the file directly. It's evident that at some point we'll run SQLMetal after an update to the DB, we won't run the tool, and all our code will break in very subtle ways that we probably won't find while testing in dev.
Any suggestions?
War stories are more than welcome; I'd love to learn from other people's experiences with this.
Thank you very much!
Well, to answer the first question - I agree with you. I'm not a big fan of this "built in" optimistic concurrency, especially if you have timestamp columns or any fields which are not guaranteed to be the same after an update occurs.
To address the second question: I don't know of any way to override SqlMetal's default approach (UpdateCheck = Always), so we ended up writing a tool which sets UpdateCheck = Never for the appropriate columns. (We're using a batch file to call SqlMetal and run the tool afterwards.)
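One way to build such a tool is to rewrite the DBML before generating code from it; the core can be as small as this sketch (the file name and the "appropriate columns" rule are placeholders; here everything except primary keys is switched to Never):

using System.Xml.Linq;

class UpdateCheckFixer
{
    static void Main()
    {
        var doc = XDocument.Load("foo.dbml");
        // Standard DBML namespace.
        XNamespace ns = "http://schemas.microsoft.com/linqtosql/dbml/2007";

        foreach (var column in doc.Descendants(ns + "Column"))
        {
            // Leave primary keys alone; they locate the row regardless.
            if ((string)column.Attribute("IsPrimaryKey") == "true")
                continue;
            column.SetAttributeValue("UpdateCheck", "Never");
        }

        doc.Save("foo.dbml");
    }
}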
Oh, while I think of it: it was also a treat to find that SqlMetal models relationships so that a foreign key gets set to null instead of using "Delete On Null" (for join tables in particular). We had to use the same post-generation tool to set these appropriately too.