I am trying to understand what DSLs are, and just now this question popped into my mind. At least the fluent version seems to be an internal DSL. What about the query syntax of LINQ? Can that also be called an internal DSL? Or an external DSL?
Yes. Or at least the query syntax and keywords can.
LINQ covers a few related technologies, and much of it can be understood simply as domain-specific classes and methods, just as most classes and methods are specific to some particular domain. It would be hard to argue that any of that constitutes a DSL when it's much the same as any other .NET code.
But the query syntax and keywords in C# and VB come up only in the context of the domain of queries against sources of data and differ from the rest of those languages, so it's reasonable to consider them internal DSLs. (It's possible to do strange things to make them serve other purposes, but it's possible to do strange things with other DSLs to force them into serving other domains too).
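To make that concrete, here is the same query in both styles (a trivial sketch; the data and names are made up). The query-syntax version is the part that reads like a small embedded query language, while the fluent version is ordinary method calls:

```csharp
using System;
using System.Linq;

class QuerySyntaxDemo
{
    static void Main()
    {
        var words = new[] { "query", "syntax", "as", "a", "dsl" };

        // Query syntax: dedicated keywords (from, where, orderby, select)
        // that exist only for expressing queries over data sources.
        var queryStyle = from w in words
                         where w.Length > 2
                         orderby w
                         select w.ToUpper();

        // Fluent (method) syntax: the same query as plain .NET method calls.
        var fluentStyle = words.Where(w => w.Length > 2)
                               .OrderBy(w => w)
                               .Select(w => w.ToUpper());

        Console.WriteLine(string.Join(", ", queryStyle));
        Console.WriteLine(string.Join(", ", fluentStyle));
    }
}
```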
I see broad adoption of DataWeave, which I feel is more of a transformation library, just like FreeMarker or Velocity.
In the case of DW, a change in transformation logic would require a change in code. That is the very reason template engines got popular in the first place: to separate logic from code, so that we can change transformation logic without needing to rebuild/repackage our code (which is more deployment hassle).
Can anyone point out a few reasons why one would prefer DW?
TL;DR: If you're looking for a template engine for things like static websites, DataWeave definitely isn't the right choice. Use the right tool for the job. Also, while you can use DataWeave outside of Mule, I don't think I've seen anyone adopt DataWeave who hasn't adopted MuleSoft.
A few things to consider (and most of these I'm stating in the context of developing Mule applications):
These template engines are typically for outputting static text. If you're using one to output structured data rather than something like an HTML page, you're probably doing it wrong. They aren't going to return structured data; they are going to return text. If you're at the very end of your flow and you're going to output that text back out of the API or to a file, that's fine, I suppose. But if you want to actually be able to work with that output, you're going to have to convert the plain text into an actual object, introducing a lot of extra steps when you could have just used DataWeave in the first place. DataWeave is especially beneficial when you want to do things like streaming because you're processing large payloads. DataWeave can understand JSON, XML, and CSV (the three most common data types I see) in a streamed format without any additional work, making it very easy to create efficient applications. The big difference between a template engine and a data transformation language is that one is for outputting text using structured data as input, and the other is for working with structured data on the input and outputting structured data that you can continue to work with. There is a reason that almost all of the template engine docs talk about building websites and not things like integrations.
The DataWeave engine is, as Aled indicated, built into the Mule runtime. Deeply so. You can use DataWeave in any field in any connector by default, even fields that don't have the f(x) button - because it's built into the runtime. This makes DataWeave what you could consider a first-class citizen within Mule, unlike something you will only be able to utilize either via connectors or by invoking Java bridges/libraries, which you do via DataWeave or a long series of connector operations.
The benefits you listed are also not things you can't do with DataWeave. You can VERY easily templatize and externalize DataWeave - for example, I have several DataWeave libraries in my Maven repo that I can include as dependencies. I've built several transformation services that use databases with DataWeave in order to do transformation, allowing me to change those transformations without modifying the app. You can also use dynamic DataWeave, where you use a template system to load specific parts of the script before running it. I've even taken it a step further and written a generic DataWeave script that I can use to do basic mappings without writing DataWeave - this allowed me to wrap a web UI around things pretty easily.
I wouldn't use DataWeave outside of MuleSoft unless you're a MuleSoft shop. If you are a MuleSoft shop, using the CLI to run your scripts, the same way you do with most interpreted languages, works fairly nicely - especially since you likely already have in-house expertise in DataWeave. The language is still niche enough that unless you've already adopted it for use in Mule applications I don't see any advantage in using it.
Docs / basic examples:
https://github.com/mulesoft-labs/data-weave-native
https://docs.mulesoft.com/mule-runtime/4.3/parse-template-reference
https://docs.mulesoft.com/mule-runtime/4.3/dataweave-create-module
https://github.com/mikeacjones/transform-system-api
Because it is the expression and transformation language embedded in the Mule runtime. If you are using Mule, it is also integrated with the Anypoint Studio IDE.
Outside Mule applications I don't think you can use DataWeave easily. You might want to go with the alternatives.
I am currently developing an application that parses and manipulates MIME messages, where these messages are a central part of the domain model. Although I have, for the moment, already implemented the functionality required for parsing these messages, it seems unnecessary to reinvent the wheel should I need to add additional MIME features in the future. I could simply use an available library such as MimeKit, which probably does the job much more efficiently and seems like the more robust way to go. At the same time I feel hesitant about this idea for a couple of reasons:
I am fairly new to software architecture, but from what I've gathered online, the consensus seems to be that domain objects should not have any external dependencies, since they model a domain that is specific to the business. So if the business rules change, it wouldn't be a good idea to have your domain model depend on an external library. However, since MIME is a standardized protocol this shouldn't be a problem, but that leads to the second point.
Although MIME is a standardized protocol, it has come to my knowledge that the clients from which my application receives these messages do not always fully conform to the RFC specifications. I have yet to come across a problem regarding the MIME format of the messages, but with that in mind I feel as though there's no guarantee that I won't stumble across problems down the line.
I might have to add additional custom functionality regarding the parsing of the messages. This could however be solved by adding that functionality on top of the imported classes.
So my questions are:
Would it, under normal circumstances, be a valid alternative to use an external library for standardized protocols as part of the domain model? It doesn't seem right to sully my domain and application layers with external dependencies.
How should I go about this problem with regards to my circumstances? Should I create an interface for the domain model so that I can swap it out with another implementation if needed in the future? This would require isolating the external dependencies in a class and mapping all the data to fit the contracts for the application layer which almost seems like more work than implementing the protocol myself. Or should I just implement it myself and add new features successively just to make sure that I have full control of the domain model?
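To make the interface approach concrete, the isolation I have in mind would look roughly like this (just a sketch; the interface, type, and member names are placeholders I made up, and the MimeKit calls are only there to show where the dependency would live):

```csharp
using System.IO;

// The domain/application layers would only ever see this interface...
public interface IMailMessageParser
{
    ParsedMessage Parse(Stream rawMessage);
}

// ...and this plain data contract.
public class ParsedMessage
{
    public string Subject { get; set; }
    public string From { get; set; }
    public string TextBody { get; set; }
}

// The MimeKit dependency is confined to one infrastructure class,
// which maps the library's types onto the application's contract.
public class MimeKitMessageParser : IMailMessageParser
{
    public ParsedMessage Parse(Stream rawMessage)
    {
        var mime = MimeKit.MimeMessage.Load(rawMessage);
        return new ParsedMessage
        {
            Subject = mime.Subject,
            From = mime.From.ToString(),
            TextBody = mime.TextBody
        };
    }
}
```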
I would highly appreciate your input.
Your entire question boils down to the following flawed thinking:
I am fairly new to software architecture, but from what I've gathered online, the consensus seems to be...
Why let consensus make your decisions for you?
Who are these people who make up this "consensus"?
How do you know they have any idea what they are talking about?
Trusting the consensus of unknown sources seems like a terrible way to make decisions for your project.
Do you want to write software that solves real problems? Or do you want to get lost in the weeds of idealism and have your project fail before it even gets out of the design phase?
Do what makes sense for you.
I just started learning about ASP.NET Web API and I have several things that are still unclear to me:
Why should I use EntitySetController, which inherits from ODataController, instead of ApiController?
Why is EF frequently mentioned in the context of OData? I know it "represents" an entity, but I don't see why the two are connected. The first is on the service layer and EF is the model.
I have read and understood a lot of the literature written about the subject, yet I missed when each option is the best practice.
Thanks a lot,
David
Why should I use EntitySetController, which inherits from ODataController, instead of ApiController?
I agree that it is confusing and that documentation seems to be lacking (at least when I had the same question as you). The way I put my feelings at ease was by simply reading the code. I encourage you to do the same, as it really is very short (concentrate on the EntitySetController class and its helpers); shouldn't take more than 5-10 minutes tops (promise) and you won't have any questions after.
The short story is that it eliminates some boilerplate for the common cases (but continue reading if you want more context and an opinion).
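For a rough idea of what that looks like in practice, here is a sketch of a minimal controller built on the System.Web.Http.OData EntitySetController (the Product type and the in-memory data are made up; treat the whole thing as illustrative rather than production code):

```csharp
using System.Linq;
using System.Web.Http;
using System.Web.Http.OData;

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// EntitySetController<TEntity, TKey> supplies the OData plumbing (routing
// conventions, status codes, entity links); we only fill in the data access.
public class ProductsController : EntitySetController<Product, int>
{
    private static readonly Product[] Products =
    {
        new Product { Id = 1, Name = "Widget" },
        new Product { Id = 2, Name = "Gadget" }
    };

    [Queryable] // enables $filter, $orderby, $top, etc. on this action
    public override IQueryable<Product> Get()
    {
        return Products.AsQueryable();
    }

    protected override Product GetEntityByKey(int key)
    {
        return Products.FirstOrDefault(p => p.Id == key);
    }
}
```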
Why is EF frequently mentioned in the context of OData? I know it "represents" an entity, but I don't see why the two are connected. The first is on the service layer and EF is the model.
This one confused me endlessly too, until I gave up and looked at the origins of OData, the WCF Data Services (previously ADO.NET Data Services) and the OData specifications (the hint was that OData Core protocol versions are still specified with a header called "DataServicesVersion"). There you can find that OData uses EDM, the Entity Data Model, which is the same model specification used by EF, and serializes it in the same format as EF: CSDL (Conceptual Schema Definition Language). This is no coincidence, WCF Data Services has prime support for EF, and although it doesn't require it, one could say that its design was based on it.
Note that WCF Data Services was the flagship implementation of OData.
Something that is potentially of high interest (at least it was to me): When using EF with ASP.NET Web API and OData extensions, there is no way (as far as I know) to share the model between the two.
You may skip to the next bullet point for the next answer if you didn't find this interesting.
For example, when using EF in a Code-First setup, you will typically build your model based largely on code conventions and the EF System.Data.Entity.DbModelBuilder ("fluent API"). You will then use the System.Web.Http.OData.Builder.ODataConventionModelBuilder, which does pretty much exactly the same thing to construct the OData model, and arrives at pretty much exactly the same result. In the past, I managed to dig up some random notes from a random meeting from either the EF team or the Web API team which mentioned this briefly, and as far as I can remember (I can't find this document anymore), there were no plans to improve the situation. Thus, they now have two different and incompatible implementations of EDM.
I admit I didn't take the time to go through the code extensively to verify this properly, but I know that Web API + OData extensions depend on EdmLib (which provides Microsoft.Data.Edm initially developed for WCF Data Services), while EF does not, and instead uses its own System.Data.Entity.Edm implementation. I also know that their convention-based model builders are different, as explained above. It becomes ridiculous when you use EF in a DB-First setup; you get a serialized EDM model in CSDL format in the EDMX file, and the OData extensions go on and generate their own serialized CSDL code at runtime from the CLR code (using separate code conventions) itself generated by EF from the initial CSDL via T4 templates. Your head spin much?
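To illustrate the duplication, the same CLR type ends up being described twice, once for EF and once for the OData extensions (a rough sketch; the Product type and class names are made up):

```csharp
using System.Data.Entity;                 // EF (Code-First)
using System.Web.Http.OData.Builder;      // Web API OData extensions
using Microsoft.Data.Edm;                 // EdmLib

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// EF builds its own in-memory EDM from the CLR type...
public class StoreContext : DbContext
{
    public DbSet<Product> Products { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        modelBuilder.Entity<Product>().HasKey(p => p.Id);
    }
}

// ...and the OData extensions build a second, unrelated EDM from the same type.
public static class ODataModelFactory
{
    public static IEdmModel Build()
    {
        var builder = new ODataConventionModelBuilder();
        builder.EntitySet<Product>("Products");
        return builder.GetEdmModel();     // an EdmLib model, not EF's
    }
}
```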
Update: This was largely improved a little under two weeks ago (July 19th), sorry I missed that. (Thanks RaghuRam Nadiminti.) I didn't review the patch, but from the sample code it seems that the way it works is that one must serialize the model into CSDL using the EF EDMX serializer, then deserialize it using the EdmLib parser to be used by the OData extensions. It still feels a little bit like a hack in EF Code-First setups (at least the CLR code is only analyzed once, but I would prefer it if both components used the same in-memory model to begin with). A shortcut can probably be taken when using Model-First or Database-First scenarios, however, by deserializing the EDMX file generated by VS directly. In this last scenario it actually feels less like a hack, but again, a single model would be best. I don't know whether EF could switch to using EdmLib or whether EdmLib could switch to using EF's EDM model; both projects are really strong now, and the blockers are probably not just technical issues. The ASP.NET team unfortunately can't do much about it AFAICT.
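As I read it, the round-trip looks roughly like this (a sketch of my understanding, not code from the patch; it assumes EF 6's EdmxWriter and EdmLib's Microsoft.Data.Edm EdmxReader, and the EdmBridge/FromDbContext names are made up):

```csharp
using System;
using System.Collections.Generic;
using System.Data.Entity;
using System.Data.Entity.Infrastructure;   // EdmxWriter
using System.IO;
using System.Text;
using System.Xml;
using Microsoft.Data.Edm;
using Microsoft.Data.Edm.Csdl;              // EdmxReader
using Microsoft.Data.Edm.Validation;

public static class EdmBridge
{
    // Serialize the EF model to EDMX, then parse it back with EdmLib so the
    // OData extensions can consume it.
    public static IEdmModel FromDbContext(DbContext context)
    {
        var edmx = new StringBuilder();
        using (var writer = XmlWriter.Create(edmx))
        {
            EdmxWriter.WriteEdmx(context, writer);
        }

        using (var reader = XmlReader.Create(new StringReader(edmx.ToString())))
        {
            IEdmModel model;
            IEnumerable<EdmError> errors;
            if (!EdmxReader.TryParse(reader, out model, out errors))
                throw new InvalidOperationException("Could not parse the EDMX emitted by EF.");
            return model;
        }
    }
}
```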
Update: Randomly stumbled upon those meeting notes again. They were indeed from the EF team and indicate that they don't plan to work on EdmLib.
However, I now believe this is all a good thing. The reason is that if they close all the gaps, and remove all the boilerplate, and make everything right, they'll essentially end up where WCF Data Services are, which is a fully integrated solution where the programmer injects code in the pipeline via "Interceptors". To me, the only reason to go there is because of open source requirements, but even then, I think it's more reasonable to try and advocate for an open source WCF-DS instead.
The question now becomes: "But what is Web API + OData extensions good for, then?". Well, it's a good fit when you really do want two different models for your data store and your web service. It's a good fit when the "interceptor" design is not flexible enough for you to translate between the two models.
Update: As of March 27th 2014, it's official, they are going to try to close those gaps, deprecating WCF Data Services in the process. Very early talks mention a "handler" to do this, most likely an ASP.NET HTTP handler (see comments on the announcement). It looks like very little planning has gone into this, as they're still brainstorming ideas to make ASP.NET Web API fill the use-cases of WCF Data Services. I mentioned those use-cases above, in a comment to the announcement and in this thread (started a few days before the announcement).
Many other people expressed close to identical concerns (again, see linked discussions), so it's good to see that I haven't been dreaming all this up.
There is some disbelief that ASP.NET Web API can be turned into something useful for the Data Services use-cases in a reasonable time, so some people suggested that MSFT reconsider their decision. The question of whether to use ASP.NET for open source requirements is also moot: WCF Data Services will soon be open-sourced if all goes "well", though not thanks to any advocacy efforts. (It's just a source dump, it's unknown if anyone would maintain it at this point.)
From what I can gather, everything points to a budget cut, and some people talk about it being the result of a company-wide "refocusing", though all of this should be taken with a grain of salt.
These things aside, there is now a possibility that with time, a new solution emerges -- even better than WCF Data Services or Web API when it comes to OData APIs. Although it looks a bit chaotic right now, the MSFT OData team did receive quite a bit of feedback from its customers relatively early, so there's hope (especially if the future solution, should there be one, is itself open-sourced). The transition is probably going to be painful, but be sure to watch discussions around this in the future.
I'm not sure I'll take the time to update this post anymore; I just wanted to highlight that things regarding Web API and Data Services are about to change a lot, since this answer is still being upvoted from time to time.
Update: RESTier (announcement) seems to be the result.
And finally, my (personal) opinion: OData, despite being technically a RESTful HTTP-based protocol, is very, very, very data-oriented. This is absolutely fine (we can define a lot of different types of interfaces with HTTP) and I, for one, find all the ServiceStack vs OData debates irrelevant (I believe they operate at different layers in our current, common architectures). What I find worrying is people trying to make an OData-based API act like a behavior-centric (or "process-oriented", or "ServiceStack"-like) API. To me, OData URI conventions and resource representation formats (Atom and JSON) together replace SQL, WCF Data Services "Query Interceptors" and "Change Interceptors" replace DBMS triggers, and OData Actions replace DBMS stored procedures. With this perspective, you immediately see that if the domain logic you need to put behind your OData API is too complex or not very data-oriented, you're gonna end up with convoluted "Actions" that don't respect REST principles, and entities that don't feel right. If you treat your OData API as a pure data layer, you're fine. You can stack a service on top of it just like you would put a "service layer" on top of a SQL database.
And thus, I'm not sure Web API + OData extensions is that great anymore. If you need fundamentally different models, it's likely that your application isn't too data-oriented (except if you're simply merging models from various sources or something), and OData is thus not a good fit. This is a sign you should at least consider Web API alone (with either SQL or OData below) or something like ServiceStack.
For better or for worse, JavaScript clients can't talk SQL to a remote server. Maybe in the future via browser APIs, or maybe via variants of WebSockets, but right now, OData is the closest thing to a remote data layer anyone is going to get for rich JS clients that have thin or no server-side logic. OData is used by other types of clients of course, but I would say that it's especially useful on the client-side web platform, where things like Breeze.js or JayData are to OData what the Entity Framework is to SQL.
I have read and understood a lot of the literature written about the subject, yet I missed when each option is the best practice.
Don't worry, I looked around, but I don't think anybody really knows what they're doing. Just pretend like everybody else while you make sense of this mess.
Use EntitySetController if you want to create an OData endpoint. Use ApiController if you want to return generic JSON or XML, or some other format (e.g., using a custom formatter).
In Web API, EF and OData are not necessarily connected. You can write an OData endpoint that does not use EF. A lot of the Web API tutorials use EF, because EF code-first is relatively easy to show in a tutorial. :-)
If not, are there standards in existence for rules engine storage?
or
Is there a C# implementation of the Oracle Rules Engine syntax?
No.
I only have a little experience with Rules Manager and Expression Filter, and it's difficult to say if something is not based on a standard, but here's my reasoning:
Oracle seems to love talking about standards in their documentation. For example, there are many standards mentioned in the SQL Reference but there are none mentioned in the Rules Manager and Expression Filter guide.
As @Adam Hawkes mentioned, Oracle Business Rules Engine uses various standards, like JSR-94, and also the Rete algorithm. (But I'm in no position to judge how well they follow those standards.) However, Oracle Business Rules Engine and Rules Manager/Expression Filter are completely unrelated products. (Even though they're made by the same company and do almost the same thing.)
If there was a standard, someone on here would know about it and would have answered by now.
For similar reasons I'm also guessing the answer is No to the other questions.
The Oracle Business Rules Engine is supposedly an implementation of the Java "JSR 94" API. Not sure that there is a standard for the "storage" of the rules, but there is a standard for expressing/using the rules.
I am trying to coach some guys on building web applications. They understand and use MVC, but I am interested in other common patterns that you use in building web apps.
So, what patterns have you found to fit nicely into a properly built MVC app? Perhaps something for asynchronous processes, scheduled tasks, dealing with email, etc. What do you wish you knew to look for, or to avoid?
Not that it matters for this question, but we are using ASP.NET and Rails for most of our applications.
Once you get into MVC, it can be worthwhile to explore patterns beyond the "Gang of Four" book, and get into Martin Fowler's "Patterns of Enterprise Application Architecture."
The Registry pattern can be useful to make well-known objects available throughout the object hierarchy. Essentially a substitute for using global data.
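A minimal sketch of the idea (a type-keyed registry; lifetime and threading concerns are left out, and all the names are illustrative):

```csharp
using System;
using System.Collections.Generic;

// Well-known objects are registered once and looked up wherever they are
// needed, instead of living in ad-hoc globals.
public static class Registry
{
    private static readonly Dictionary<Type, object> Entries = new Dictionary<Type, object>();

    public static void Register<T>(T instance) where T : class
    {
        Entries[typeof(T)] = instance;
    }

    public static T Resolve<T>() where T : class
    {
        return (T)Entries[typeof(T)];
    }
}

// e.g. at startup:   Registry.Register<IClock>(new SystemClock());
// anywhere else:     var clock = Registry.Resolve<IClock>();
```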
Many MVC frameworks also employ the Front Controller and the Two-Step View patterns.
The "Model" in MVC is best designed as the Domain Model pattern, although some frameworks (led by Rails) conflate the Model with the ActiveRecord pattern. I often advise that the relationship between a Model and ActiveRecord should be HAS-A, instead of IS-A.
Also read about ModelViewController at the Portland Pattern Repository wiki. There is some good discussion about MVC, object-orientation, and other patterns that complement MVC, such as Observer.
This question is so open that it's hard to give a correct answer. I could tell you that the Observer pattern is important in MVC (and for web applications) and it would be a good answer. Just about every design pattern that exists is common in big web applications. You will need some Factory to build complex objects, and accessing some sections will require a Facade.
If you want more "tips" or good practices instead of design patterns, I would suggest using IoC and a good framework instead of starting from scratch. I can also suggest explaining the benefit of having a good ORM engine to drive your persistence layer faster (it can usually come from the framework too).
Don't look at it from the aspect of what patterns to use with your development approach, but look at it more as how to apply patterns on a problem-by-problem basis. The architectural decisions made for the project provide just as much indication of what patterns to use as other people's experience will dictate.
That said, I have found that I am a fan of the Provider model for having multiple choices to accomplish a single task with ease of deployment added in. Also, the Unit of Work pattern is great for setting transactional boundaries. Largely, though, the architecture and business needs dictate the approach that is taken for any given code change or new development.
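As a quick illustration of the Unit of Work boundary (a sketch with made-up names; real implementations usually sit on top of the ORM's own transaction support):

```csharp
using System;

// One object owns the transactional boundary: everything done inside it
// either commits together or is rolled back together.
public interface IUnitOfWork : IDisposable
{
    void Commit();
}

public class OrderService
{
    private readonly Func<IUnitOfWork> _beginUnitOfWork;

    public OrderService(Func<IUnitOfWork> beginUnitOfWork)
    {
        _beginUnitOfWork = beginUnitOfWork;
    }

    public void PlaceOrder(/* order details */)
    {
        using (var uow = _beginUnitOfWork())
        {
            // ... add the order, adjust inventory, write an audit entry ...
            uow.Commit();   // all of it persists, or none of it does
        }
    }
}
```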
As much as I love patterns, I always fear seeing them overused. I have personally seen people use them just for the sake of using them, and it actually made the code harder to maintain and more tightly coupled than it should have been. Also, it is good to know both sides of the patterns argument: good pattern knowledge should be rounded out with knowledge of anti-patterns (themselves often considered patterns in their own right) as well.
I would most likely recommend some kind of Dependency Injection as well (Inversion of Control). Probably the single most important supplementary "pattern" to use.
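A bare-bones example of what that looks like in practice (constructor injection; the repository interface and types are made up):

```csharp
public class Order
{
    public int Id { get; set; }
}

public interface IOrderRepository
{
    void Save(Order order);
}

// The controller declares what it depends on; an IoC container (or a test)
// decides which implementation to hand it.
public class OrdersController
{
    private readonly IOrderRepository _orders;

    public OrdersController(IOrderRepository orders)
    {
        _orders = orders;
    }

    public void Create(Order order)
    {
        _orders.Save(order);
    }
}
```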