Our client forces us to use JdbcTemplate instead of Spring Data JPA for the development of the Spring project. However, the application is not critical in terms of speed or response times (it is an internal web application for the client's end users). We would like to use Spring Data JPA.
Question: Is there an objective reason to use JdbcTemplate because of application speed? From my point of view, the bottlenecks will be elsewhere.
The question of which is faster is impossible to answer without specifying an exact use case.
JdbcTemplate will most likely be faster when talking about pure query execution, because a JPA implementation does more work:
parse the JPQL (assuming you are using it)
create a SQL query out of it
execute it
convert the result into objects
While the template will (almost) just:
execute the query
hand you the result via a RowMapper (or similar) callback.
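As a minimal sketch of that direct path (table and column names are invented):

import java.util.List;
import javax.sql.DataSource;
import org.springframework.jdbc.core.JdbcTemplate;

public class UserDao {

    private final JdbcTemplate jdbc;

    public UserDao(DataSource dataSource) {
        this.jdbc = new JdbcTemplate(dataSource);
    }

    // The SQL is executed as written; the lambda (a RowMapper) is called once per row.
    public List<String> findNamesByStatus(String status) {
        return jdbc.query(
                "SELECT name FROM users WHERE status = ?",
                (rs, rowNum) -> rs.getString("name"),
                status);
    }
}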
Of course, JPA does all of this for a reason:
it offers a reasonable amount of database independence.
it tracks your changes and generates the right update statements for persisting them.
it allows for lazy loading so you don't have to think about what to load beforehand (of course you still have to do that if you care about performance).
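As a hedged illustration of the last two points (entity and field names are invented), change tracking and lazy loading look roughly like this in JPA:

import jakarta.persistence.Entity;
import jakarta.persistence.FetchType;
import jakarta.persistence.GeneratedValue;
import jakarta.persistence.Id;
import jakarta.persistence.ManyToOne;
import jakarta.persistence.OneToMany;
import java.util.List;

@Entity
public class PurchaseOrder {

    @Id
    @GeneratedValue
    private Long id;

    private String status;

    // Lazy loading: items are only fetched when the collection is first accessed.
    @OneToMany(mappedBy = "order", fetch = FetchType.LAZY)
    private List<OrderItem> items;

    // Change tracking: inside a transaction, calling this setter is enough; the
    // persistence context detects the change and issues the UPDATE on commit.
    public void setStatus(String status) {
        this.status = status;
    }
}

@Entity
class OrderItem {

    @Id
    @GeneratedValue
    private Long id;

    @ManyToOne
    private PurchaseOrder order;
}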
And those things have costs beyond performance.
The abstraction JPA offers is actually complex and leaky, and most developers using it don't properly understand it. While I think it is completely reasonable to use JPA in many contexts, I can also relate to people banning it from their projects. Performance alone is far too narrow a basis for a well-educated decision on this.
I'm trying to develop a REST API for the first time, and it seems to me that I have trouble understanding some basic REST API concepts. I'm not sure whether I should only create CRUD operations for each model and then analyze the responses from these operations using Vue (in my case), or whether I should let my DRF side do some of the business logic.
SPECIFIC QUESTION
Here's an example. I want to remove an object and also update some other objects in another table that are related to the object being deleted. Should I create one POST(?) endpoint that does all of that, or should I fetch the related objects with Vue, call "delete" on each of them from Vue, and only then delete the original object? In the first case it's one complex operation; in the second it's a couple of CRUD operations.
I'm asking because I found many interpretations of REST APIs on Google and I struggle to find the truth. It seems to me that DRF doesn't want me to create complex views; it looks like it just wants me to create four operations for each model.
Hope I made myself clear, thank you for trying to help.
What you really seem to be asking is what degree of coupling is appropriate for a REST API. The answer is as little as possible, but what's possible will depend on your application and your requirements.
To use your example, yes, it's preferable to have a uniform interface for deletion for each one of your resources, but what are your other requirements? Is it a problem if you cascade the deletion of children resources? Is it OK for you to automate deletion of orphan resources? Can you afford to lose transaction integrity by requiring the client to explicitly delete multiple resources through their own endpoints? If you can't find a way to make the uniform deletion interface work for you, there's nothing wrong or unRESTful in creating a single POST endpoint for doing what you need, as long as that's not coupled to the needs of a particular client implementation.
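As a hedged sketch of such a single endpoint (shown with Java/Spring annotations purely for illustration; in DRF this would be a custom action view, and all names are invented):

import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RestController;

// Assumed application service that deletes the object and updates its
// related records in one server-side transaction.
interface ArticleService {
    void purge(long id);
}

@RestController
class ArticleController {

    private final ArticleService articles;

    ArticleController(ArticleService articles) {
        this.articles = articles;
    }

    // One POST keeps the whole operation atomic on the server, so the client
    // never has to orchestrate several DELETE calls and risk leaving orphans.
    @PostMapping("/articles/{id}/purge")
    ResponseEntity<Void> purge(@PathVariable long id) {
        articles.purge(id);
        return ResponseEntity.noContent().build();
    }
}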
Don't expect to find "the truth", or a manual of best practices for REST API or final answers to your questions about that, because REST is just an architectural style, not a rigid specification for architectures. It's just a set of principles used to guide long-term design and evolution of applications.
If you don't have long-term requirements that call for a careful adoption of REST principles, more important than finding the truth about REST is to respect the Principle of Least Astonishment, since many people already have strong opinions about how a REST API should be implemented. A good example is API versioning with URLs. Adding a version number to your URLs is a REST anti-pattern, but it's a widespread practice believed by many to be a REST best practice. That happens because most so-called REST APIs are strongly coupled to their clients, and API versioning makes it much easier to make backwards-incompatible changes. Making backwards-incompatible changes is not a problem when you implement a REST API and its clients correctly, but it takes a lot more work than simply tacking a version number on somewhere.
If you really have long-term requirements or if you are genuinely interested in learning about how to design and implement a REST API correctly, try searching for "Hypermedia API" instead of "REST API". Many people gave up on the term REST and decided to start using a new term to refer to APIs that implement REST correctly.
I am trying to implement basic Redis functionality, like the commands below, in Go.
GET
SET
ZADD
ZCARD
ZCOUNT
ZRANGE
SAVE
If you want to implement a Go server offering some Redis features, it is quite easy. You need to decide about the goroutine model, then implement/reuse some data structures (map and skiplist), then implement the Redis protocol (which is simple enough).
I would suggest a goroutine model with 2 goroutines per client connection, plus one goroutine to implement the Redis engine and manage the data structures. The benefit of this model is you can easily support pipelining and the atomicity property of Redis commands without any explicit locking. This model is well adapted if you want to later extend the scope by supporting blocking commands (such as the ones useful for queues).
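As a hedged illustration of that single-engine-loop idea (sketched here in Java, with a blocking queue standing in for a Go channel; all names are invented):

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class MiniRedisEngine implements Runnable {

    // Each command carries its arguments plus a queue the engine answers on,
    // mirroring a reply channel in Go.
    record Command(String name, String key, String value, BlockingQueue<String> reply) {}

    private final BlockingQueue<Command> inbox = new LinkedBlockingQueue<>();
    private final Map<String, String> strings = new HashMap<>();

    public BlockingQueue<Command> inbox() {
        return inbox;
    }

    // The engine is the only thread touching the data structures, so every
    // command executes atomically without any explicit locking.
    @Override
    public void run() {
        try {
            while (true) {
                Command cmd = inbox.take();
                switch (cmd.name()) {
                    case "SET" -> {
                        strings.put(cmd.key(), cmd.value());
                        cmd.reply().put("+OK");
                    }
                    case "GET" -> cmd.reply().put(strings.getOrDefault(cmd.key(), "$-1"));
                    default -> cmd.reply().put("-ERR unknown command");
                }
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}

The per-connection reader goroutines would parse the Redis protocol and put commands on the inbox; the writer goroutines would drain the per-command reply queues back to the sockets.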
Now, if you also want to mimic the exact same Redis behavior, it gets more complex. In particular, saving the data in the background by leveraging the OS copy-on-write mechanism will be difficult in Go (since forking does not work). For an in-memory database, foreground saving is always easy; background saving is extremely difficult.
You may also want to have a look at the following attempts, and simplify/enrich them to match your goals:
https://github.com/siddontang/ledisdb
https://github.com/felixge/go-redis
https://github.com/docker/go-redis-server
I am new to ABAP programming. To prepare myself for my new job, I am reading ABAP books. While reading, I learned that ABAP has several legacy elements that keep it backwards compatible with older SAP releases.
Regarding GUIs, I am reading about the classic SAP UI (PARAMETERS, etc.), Dynpros, and Web Dynpros. Now I am unsure what to focus my learning efforts on.
Are there common rules of thumb, like "You should know a little about the basic SAP UI, but mainly focus on Web Dynpros"?
Background information: my new employer does SAP customizing for small and medium-sized enterprises.
I'm not a consultant, but I work for a medium-sized (~120 employees) company myself. If you were to work for us, you would mostly create custom ABAP reports and maybe sometimes program a user exit. Small companies usually don't spend the money needed for big SAP-driven portals, so they probably don't use NetWeaver AS Java at all. That means ABAP Dynpros and ABAP lists as your main UI elements. Sometimes it is good to also know your way around other ways of creating reports, for instance SAP Query.
If I were you, I would start with basic ABAP. You won't have any fun working with Dynpros if you haven't gotten your head around the basics first. Learn to work with internal tables, work areas, and field symbols. Have a look at some basic ABAP Objects material (for instance the ALV grid, very useful for displaying all sorts of tables). You should also understand the ABAP Dictionary, the place where structures, tables, data elements, data domains, and search helps are defined.
Good day. I'm doing my master's degree on "Implementing a distributed NoSQL database". Having studied the material comparing the strengths and weaknesses of NoSQL databases against RDBMS, I've reached the step of choosing a proper problem to solve. My task is to show the typical development of the same application backed by Oracle and MongoDB, and to show that during the evolution of the app Mongo begins to outperform Oracle. I'm focused on many writes and horizontal scaling. As the task I've chosen a typical Twitter-like app with a complex, evolving domain, with Java and Spring Data as my instruments.
I'm asking experienced people for constructive criticism and for alternative tasks that would show Mongo in a favorable light. I understand that it highly depends on the schema, indexes, etc.; still, I ask whether Mongo in my scenario can beat Oracle on:
Many writes
Horizontal scaling
Read operations
Schema evolving
Sharding/replication
My task is to show the typical development of the same application backed by Oracle and MongoDB and to show that during the evolution of the app Mongo begins to outperform Oracle.
I'm sorry for being very frank, but what kind of academic work starts with the final answer and then reverse-engineers the problem?! This is less than worthless, since it's intentionally misleading.
Leaving that aside, here are some tips:
Use something that requires JOINs in the relational database but can be modelled as a single document. Blog posts come to mind. A common trick is putting the author's name into the document: no JOIN is required for reading, and if the author changes their name (which happens very rarely in most systems), you only need a unique attribute like their email address to update the name everywhere:
{
    title: "...",
    content: "...",
    date: "...",
    author: { name: "...", email: "..." },
    comments: [
        { name: "...", email: "...", text: "...", date: "..." },
        ...
    ]
}
Keep your data small enough that it fits into RAM. MongoDB can make good use of that and will only occasionally flush information to disk (depending on your configuration); an RDBMS will always go to disk for durability reasons (ACID compliance).
Use an "insecure" connection setting. Do not wait for the database to actually process the request, but return immediately (fire-and-forget like UDP). This isn't possible in a transactional system. You can amplify this if you test in the cloud, for example on EBS backed EC2 instances with have very high disk latency.
Use a pretty heavy ORM like Hibernate on the relational side. On the MongoDB side, avoid an ODM (object-document mapper) like Morphia (if you're doing it in Java) and use the plain Java driver. I'm not sure how big the performance gain is, but I'm sure there is some if done properly.
Use replication in MongoDB and allow reads from the secondaries (thus sacrificing consistency but gaining performance).
Use sharding.
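To illustrate the "insecure" writes and the secondary reads with the MongoDB Java driver, here is a minimal sketch (the connection string and collection names are placeholders, and this assumes the current sync driver API):

import com.mongodb.ConnectionString;
import com.mongodb.MongoClientSettings;
import com.mongodb.ReadPreference;
import com.mongodb.WriteConcern;
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import org.bson.Document;

public class FastButLooseMongo {

    public static void main(String[] args) {
        MongoClientSettings settings = MongoClientSettings.builder()
                .applyConnectionString(new ConnectionString("mongodb://localhost:27017"))
                // Fire-and-forget: the driver does not wait for acknowledgement.
                .writeConcern(WriteConcern.UNACKNOWLEDGED)
                // Allow reads from replica-set secondaries (may return stale data).
                .readPreference(ReadPreference.secondaryPreferred())
                .build();

        try (MongoClient client = MongoClients.create(settings)) {
            MongoCollection<Document> posts = client.getDatabase("blog").getCollection("posts");
            posts.insertOne(new Document("title", "...").append("content", "..."));
        }
    }
}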
Besides the system's performance, you might want to take a look at developer productivity. MongoDB is great for getting started, and I have the feeling it is much quicker to get going with. I'm not sure this doesn't turn into the opposite in the long run - strict schemas do have their place IMHO.
I'd rather compare MySQL and MongoDB. The two are both open-source software and pretty similar. For example, indexing is exactly the same - only B-trees (if you stick to the standard on-disk storage engines).
Final note: I hope you can agree with me that it's pretty easy to win such an unbalanced comparison, which makes it pretty pointless...
Assume you have three layers (business, data, and UI). My data layer would have a LINQ to SQL file with all the tables added.
I've seen some examples where an interface is created in the business layer and then implemented in another class (with a return type of IQueryable/IEnumerable), while other classes use plain LINQ syntax to get/save/delete/update data.
Why and when would I use an interface that exposes an IQueryable/IEnumerable type?
Two of the most common situations in which you may want to do this are:
you want to protect yourself from changes to that part of your system.
you want to be able to write good unit tests.
For example, say you have a business layer that talks directly to LINQ to SQL. In the future you may have a requirement to use NHibernate or Entity Framework instead. Making this change would impact your business layer, which is probably not good.
Instead, if you have programmed to an interface (say IDataRepository), you should be able to swap concrete implementations like LINQtoSQLRepository or HibernateRepository in and out without having to change your business layer - it only cares that it can call, say, Add(), Update(), Get(), Delete(), etc., but doesn't care how these operations are actually done.
Programming to interfaces is also very useful for unit testing. You don't want to be running tests against a database server for a variety of reasons, such as speed and reliability. So you can pass in a test double, fake, or mock implementation to test your data layer. For example, an in-memory implementation of your IDataRepository lets you test Add(), Delete(), etc. from your business layer without a DB connection.
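A minimal sketch of the idea, shown in Java for illustration (the shape is the same in C#; all names are invented):

import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// The business layer depends only on this interface, not on LINQ to SQL,
// NHibernate, or any other concrete data access technology.
interface CustomerRepository {
    void add(Customer customer);
    Optional<Customer> get(int id);
    void delete(int id);
}

record Customer(int id, String name) {}

// A fake used in unit tests: no database connection, fast and deterministic.
class InMemoryCustomerRepository implements CustomerRepository {

    private final Map<Integer, Customer> store = new HashMap<>();

    @Override
    public void add(Customer customer) {
        store.put(customer.id(), customer);
    }

    @Override
    public Optional<Customer> get(int id) {
        return Optional.ofNullable(store.get(id));
    }

    @Override
    public void delete(int id) {
        store.remove(id);
    }
}

In production you would wire in the LINQ to SQL (or NHibernate) implementation instead; the business layer code is unchanged either way.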
These points are generally good practice in all aspects of your application. I suggest reading up on the Repository pattern, the SOLID principles, and maybe even test-driven development. This is a large and sometimes complex area, and it's difficult to give a detailed answer of exactly what to do and when, as it needs to suit your scenario.
I hope this helps you get started.