Why the Repository Pattern and Unit of Work in MVC and Entity Framework? [closed] - asp.net-mvc-3

I am a .NET developer. I just want to know why the Repository Pattern and Unit of Work are used in MVC and Entity Framework. Please tell me the scenarios where I can use the Repository and Unit of Work patterns.

OK - first, the Repository Pattern. Why? Imagine a scenario where you have a database in your application - say, SQL Server 2000. You then want to upgrade the database to SQL Server 2008 and Entity Framework. If you don't have a Repository pattern implemented, this could turn out to be very tedious. Why? Well, imagine that the data access is implemented using ADO.NET - very different from LINQ to Entities. So, the ADO.NET code would be scattered throughout your data access calls.
Now, if your application used a Repository pattern, it would call, for example, the GetCustomer() method on the repository. It does not care how GetCustomer() gets its data, because it's decoupled from the actual data access; it only goes as far as the repository. So, when you rip out your ADO.NET code and replace it with Entity Framework, you don't have to touch the application, only the data access layer.
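To make that concrete, here is a minimal sketch of such a repository, assuming Entity Framework 6; the Customer entity, AppDbContext, and member names are illustrative rather than anything from the original question:

    using System.Data.Entity;

    // Illustrative entity and context; names are assumptions, not from the question.
    public class Customer
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    public class AppDbContext : DbContext
    {
        public DbSet<Customer> Customers { get; set; }
    }

    // The application depends only on this abstraction.
    public interface ICustomerRepository
    {
        Customer GetCustomer(int id);
        void Add(Customer customer);
    }

    // Entity Framework implementation; an ADO.NET version could implement
    // the same interface, and swapping them would not touch calling code.
    public class EfCustomerRepository : ICustomerRepository
    {
        private readonly AppDbContext _context;

        public EfCustomerRepository(AppDbContext context)
        {
            _context = context;
        }

        public Customer GetCustomer(int id)
        {
            return _context.Customers.Find(id);
        }

        public void Add(Customer customer)
        {
            _context.Customers.Add(customer);
        }
    }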
UNIT OF WORK: Imagine this scenario. A customer has just registered on your site. 1. You need to add their data to an Accounts section. 2. They have also subscribed to the newsletter. And 3. you need to send them a confirmation email to activate their account. These three things ALL need to happen to successfully register a new customer, and together they can be considered a UNIT of work. It has some parallels to a database transaction.
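A rough sketch of how a unit of work can group the database steps so they succeed or fail together; every type here (IAccountRepository, IEmailSender, and so on) is a hypothetical stand-in. Note that the email is sent only after the commit, since sending mail cannot be rolled back:

    using System;

    // Hypothetical collaborators for the registration scenario.
    public class Customer { public string Email { get; set; } }
    public interface IAccountRepository { void Add(Customer customer); }
    public interface INewsletterRepository { void Subscribe(string email); }
    public interface IEmailSender { void SendActivation(string email); }

    // Coordinates several repositories and commits their changes as one unit.
    public interface IUnitOfWork : IDisposable
    {
        IAccountRepository Accounts { get; }
        INewsletterRepository Newsletters { get; }
        void Commit(); // nothing is persisted until this is called
    }

    public class RegistrationService
    {
        private readonly IUnitOfWork _unitOfWork;
        private readonly IEmailSender _emailSender;

        public RegistrationService(IUnitOfWork unitOfWork, IEmailSender emailSender)
        {
            _unitOfWork = unitOfWork;
            _emailSender = emailSender;
        }

        public void Register(Customer customer)
        {
            _unitOfWork.Accounts.Add(customer);
            _unitOfWork.Newsletters.Subscribe(customer.Email);
            _unitOfWork.Commit(); // both changes are saved together, or not at all

            // Send the activation email only after the unit of work has committed.
            _emailSender.SendActivation(customer.Email);
        }
    }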

Related

Best approach to make Excel reports available to users on a local network [closed]

For reporting purposes in an organization, someone exports the result of a query against an Oracle database every month (after changing the date parameters) and sends the Excel file through Outlook to a receiver (analyst). There are different receivers (analysts) and different queries, with an N:N (many-to-many) relationship between them.
I'm working on making this process more "automatic". I have thought of these approaches:
1. Deploy a web application on my computer, with an authentication page; every user is then taken to a list of the reports he is allowed to view, chooses a maxdate and a mindate value, and downloads the Excel file with data exported from the Oracle database.
2. A batch script executed at the end of every month (or on a date chosen by the analysts) that runs the Oracle query and exports the result to an Excel file, and then either:
2.1 sends the file through Outlook, or
2.2 saves the file in a folder on my computer and makes that file accessible on the local network to the different analysts (a rough sketch of this export step follows this list).
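Purely as an illustration of option 2, a minimal C# sketch of the export step, assuming the Oracle.ManagedDataAccess client package; the query, table, connection string, and output path are all placeholders. A scheduled task could run this at month end:

    using System;
    using System.IO;
    using Oracle.ManagedDataAccess.Client;

    class MonthlyExport
    {
        static void Main()
        {
            // Previous calendar month as the date window.
            var firstOfThisMonth = new DateTime(DateTime.Today.Year, DateTime.Today.Month, 1);
            var minDate = firstOfThisMonth.AddMonths(-1);
            var maxDate = firstOfThisMonth;

            using (var conn = new OracleConnection("User Id=report;Password=secret;Data Source=ORCL"))
            using (var cmd = new OracleCommand(
                "SELECT id, amount, created_at FROM sales " +
                "WHERE created_at >= :minDate AND created_at < :maxDate", conn))
            {
                cmd.BindByName = true; // ODP.NET binds by position unless told otherwise
                cmd.Parameters.Add("minDate", minDate);
                cmd.Parameters.Add("maxDate", maxDate);
                conn.Open();

                using (var reader = cmd.ExecuteReader())
                using (var csv = new StreamWriter(@"\\fileserver\reports\sales.csv"))
                {
                    // CSV opens directly in Excel; one line per record.
                    csv.WriteLine("Id,Amount,CreatedAt");
                    while (reader.Read())
                        csv.WriteLine($"{reader.GetInt32(0)},{reader.GetDecimal(1)},{reader.GetDateTime(2):yyyy-MM-dd}");
                }
            }
        }
    }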
I want to get opinions on other approaches (hopefully more minimal and easier to scale), the pros and cons of the two approaches I've presented, and how I can best implement them.
Option 1 sounds like Oracle Application Express (Apex). Even if you aren't an experienced developer, in a matter of a few hours you should be able to create a working web application.
What should you do?
talk to your DBA and ask them to install Apex
when they provide login credentials to you (presuming they'll also create a workspace for you), create a new application
you'd mostly use Interactive Reports
if all the data you need is in one table, even better
if not, you'll have to write a query which joins several tables - but hey, you already have those queries, don't you?
an Interactive Report lets users filter data in various ways
users can download the result in Excel format so they can analyze data the way they are used to; or, perhaps even better, continue using Apex

Querying the Windchill database using SQL [closed]

Many vendors, such as Microsoft with SharePoint and Dynamics, have made it impossible to access database tables directly in newer versions as they move to Software as a Service (SaaS) offerings.
I am working with PTC Windchill and have developed extensive Oracle SQL-layer ETL processing. Is this a future-proof practice within the context of this product line, or will I be required in the future to work through some sort of DAL? If so, is there a recommended practice?
The information available about Windchill for the cloud appears vague and mostly suggests to me virtualization at the infrastructure layer, which implies I would be able to query at the database layer for many years to come. Any confirmation, pointers, or feedback would be appreciated.
Windchill offers extensive APIs for data access (and customization) in Java. Starting from version 11.0, there are also some SOAP and REST web services for data access, but not for everything. It is always better to use the APIs: they offer a data abstraction layer in a supported way. PTC would recommend that you refer to a consultant for this job.
But if you want to try:
There is extensive documentation about Windchill customization, and you can also create your own web services in Java to access the data you want if the standard web services do not suffice. A starting point can be the Windchill help and the Javadoc located on the Windchill server in this folder:
WINDCHILL_HOME/codebase/wt/clients/library/api/index.html
There are also some examples in:
WINDCHILL_HOME/prog_examples
More documentation and appropriate training are available at https://support.ptc.com, for registered customer users only.

WCF REST to Web API [closed]

I want to migrate from WCF REST services to Web API (around 30 endpoints to be created, including 6 complex methods). I just want to decide, based on the available budget (one month with one resource), which of the options below would be the better solution.
1. Writing entirely new code for the Web API, reusing the logic already present in the WCF REST services.
2. Creating Web API endpoints that call the WCF services internally.
There is no real way to tell for sure without knowing more details (or maybe the entire project).
If you're not sure the time will be enough, one thing you can do is start with option 2 and then replace each endpoint with the actual code from the WCF service. If one month proves not to be enough, you may end up with a mixed solution (where some methods are implemented in the Web API and some are wrappers calling the WCF service). However, you will be able to keep slowly moving the methods over to the Web API and finish eventually.
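A minimal sketch of what an option-2 wrapper might look like; the service contract, endpoint configuration name, and Customer type are hypothetical stand-ins for whatever the existing WCF services define:

    using System.ServiceModel;
    using System.Web.Http;

    // Hypothetical contract of an existing WCF service.
    [ServiceContract]
    public interface ICustomerService
    {
        [OperationContract]
        Customer GetCustomer(int id);
    }

    public class Customer
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    // Web API wrapper: the action simply delegates to the existing WCF
    // endpoint, so each method can later be rewritten with the real logic.
    public class CustomersController : ApiController
    {
        public Customer Get(int id)
        {
            var factory = new ChannelFactory<ICustomerService>("CustomerServiceEndpoint");
            var client = factory.CreateChannel();
            try
            {
                return client.GetCustomer(id);
            }
            finally
            {
                ((IClientChannel)client).Close();
                factory.Close();
            }
        }
    }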

Versioning with ASP.NET Web API [closed]

I'd like to get people's thoughts on a scenario I'm about to encounter.
I've been tasked with building a RESTful Web API Service that will be used by two client applications.
One client application will be a web application and the other client will be a mobile application.
They are two distinctly different applications targeting the same data store. I imagine that a lot of the requests made by both client applications will be of shared interest. (They may want to receive slightly different messages back in terms of the model objects they request.)
But ultimately there will be differences, and I don't want to expose parts of the service that are designed for an individual client app to all other clients.
I've been looking at versioning with ASP.NET Web API, where I can create the same controllers multiple times and add custom constraints to controller selectors that switch out the controller depending on the version used in the URI.
Is this a good idea in my scenario, or should I really be building two APIs, one for each specific client application?
First of all, if you want client A to access certain resources while client B shouldn't, you're going to need authorization, such as OAuth2.
On the other hand, I doubt that the solution should be implementing two different APIs, or overcomplicating the code of a single API so it returns the same response with some differences.
Furthermore, you can emit the same DTO for both clients and map the generic DTO to a domain object or another DTO using AutoMapper, in order to avoid the hassle of manually setting properties from one object to the other.
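As a rough sketch of that idea (the DTO and view model types here are made up for illustration):

    using AutoMapper;

    // Generic DTO returned by the API, and a client-specific view model.
    public class CustomerDto
    {
        public string Name { get; set; }
        public string Email { get; set; }
    }

    public class MobileCustomerVm
    {
        public string Name { get; set; }
    }

    class Example
    {
        static void Main()
        {
            // One-way map from the generic DTO to the mobile view model.
            var config = new MapperConfiguration(cfg => cfg.CreateMap<CustomerDto, MobileCustomerVm>());
            var mapper = config.CreateMapper();

            var dto = new CustomerDto { Name = "Ada", Email = "ada@example.com" };
            var vm = mapper.Map<MobileCustomerVm>(dto); // Email is simply dropped
        }
    }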
Finally, you can also add an OData interface to your RESTful API to let the client decide which properties it wants returned in the entities, or perform other operations during the request, and get just what it needs in each case.
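For example, with an OData-enabled endpoint each client trims the payload itself; the resource and property names below are illustrative:

    GET /odata/Customers?$select=Name,Email&$top=10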
Conclusion/summary: you shouldn't adapt the REST API to your clients; rather, the clients should adapt themselves to how the API works. At the end of the day, you're returning JSON entities, and you can map them to any class, even if the structure is different, using AutoMapper as I said above. You can even implement a custom serializer if needed. It will be less painful than duplicating the server code because of some differences.
What would happen if you added a third client that said "I also want a different structure in the returned entities", or even a fourth one? You would go crazy, wouldn't you?

External Data Source for Microsoft CRM [closed]

The question itself is very tricky. But I'll try to break it down into pieces.
Let's say I have external data sources, each of them providing part of my data model: either a web service or a database. What matters is that my entities are defined in, and exist in, systems separate from the Dynamics built-in database.
What I want to do is use the capabilities of CRM to handle business entities (provided from the external sources); aspects such as security and UI are well managed inside CRM. So I want to build my system using this tool, but I want to be able to store and keep the data in my own sources.
In other words, is there a way in CRM (through the web services, I believe) in which I can provide the entity and have it managed later inside CRM?
Thanks in advance... I really hope I can find the answer here.
The only option you have is to synchronize the data stored inside the Dynamics CRM database with your external sources.
With tools like Scribe from Scribesoft, this scenario is manageable.
About 50% of the functionality of MS CRM is implemented via rather convoluted SQL views/queries/stored functions etc. It is much more than a simple "table per entity type" data store. There is no way to keep live data "somewhere else", so you are stuck with import/export (as recommended in the previous answer).
