I am trying to write Java code that will sync table A on one database with table A on another database. Can this be done efficiently using Hibernate?
Maybe database replication is the best way to do this, but if you MUST do it in application code for any reason, then you should look at a scheduler like Quartz Scheduler.
Quartz is well integrated with the Spring Framework, so it could be a good way to program a job that performs the replication as many times per day as necessary. Be careful about the data flow and about synchronizing the process with transactions; it could cause data integrity problems (and even lead to some unwanted deadlocks...).
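As a rough sketch of what such a scheduled job could look like (the class name, cron expression, and the empty copy step are illustrative, not a complete replication implementation):

```java
import org.quartz.*;
import org.quartz.impl.StdSchedulerFactory;

// Hypothetical Quartz job that copies table A from the source database to the target one.
public class TableAReplicationJob implements Job {

    @Override
    public void execute(JobExecutionContext context) throws JobExecutionException {
        // Read the changed rows from the source database and write them to the target
        // database here, ideally in batches and inside a transaction to limit lock time.
    }

    public static void main(String[] args) throws SchedulerException {
        Scheduler scheduler = StdSchedulerFactory.getDefaultScheduler();

        JobDetail job = JobBuilder.newJob(TableAReplicationJob.class)
                .withIdentity("replicateTableA")
                .build();

        // Run the replication every six hours; adjust the cron expression as needed.
        Trigger trigger = TriggerBuilder.newTrigger()
                .withIdentity("replicateTableATrigger")
                .withSchedule(CronScheduleBuilder.cronSchedule("0 0 */6 * * ?"))
                .build();

        scheduler.scheduleJob(job, trigger);
        scheduler.start();
    }
}
```

With Spring you would normally declare the scheduler, job, and trigger as beans instead of using the main method shown here, but the job class itself stays the same.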
In any case, you can have as many hibernate.cfg.xml files as you have database connections.
So you could store your object in the two databases at the same time, but that is a heavy solution which will probably hurt application response time.
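A minimal sketch of that dual-write idea, assuming two configuration files named hibernate-primary.cfg.xml and hibernate-secondary.cfg.xml (the names are made up):

```java
import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;

public class DualDatabaseWriter {

    // One SessionFactory per database, each built from its own configuration file.
    private final SessionFactory primaryFactory =
            new Configuration().configure("hibernate-primary.cfg.xml").buildSessionFactory();
    private final SessionFactory secondaryFactory =
            new Configuration().configure("hibernate-secondary.cfg.xml").buildSessionFactory();

    public void saveToBoth(Object entity) {
        saveTo(primaryFactory, entity);
        saveTo(secondaryFactory, entity); // the second write roughly doubles the cost of every save
    }

    private void saveTo(SessionFactory factory, Object entity) {
        Session session = factory.openSession();
        try {
            session.beginTransaction();
            session.save(entity);
            session.getTransaction().commit();
        } catch (RuntimeException e) {
            session.getTransaction().rollback();
            throw e;
        } finally {
            session.close();
        }
    }
}
```

Note that these are two separate transactions, so if the second write fails the databases drift apart; avoiding that would require compensation logic or JTA/XA distributed transactions, which is part of what makes this solution heavy.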
In my system I have more than one project, and each project connects to its own database. When an insert transaction occurs in any project, the record is inserted into all of the databases, but when an update occurs in a project, the update is applied only to that project's database and does not affect the other projects' databases. That is how my system works. After this process runs for a while, the data becomes different in each database. Without changing this process, what can I do to overcome this data mismatch problem?
Suppose the following transaction activity on system-1:
Transaction --> Update --> the modification occurs only in system-1's DB, not in the system-2 and system-3 DBs.
Any suggestion is welcome; if you have any questions, please ask. Thanks in advance.
I'm currently working on almost the same project architecture. Our solution is to create an Orchestration module that manages a Single_entry_point module. The latter is responsible for unifying the information from the upstream (a cluster of different databases and service systems) and then uploading/distributing it to the downstream (Single_Data_Warehouse). By doing so you can guarantee that all your information is current at every moment. The Orchestrator communicates via service messages when dealing with all the other modules.
This design is based on the Pipes and Filters pattern.
I think that in your case you only need to add logic for the update path and reuse everything you already have: spend some time on such a Single_entry_point module so that it handles not only inserts but transaction updates too.
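As a rough illustration of that idea (the class and method names here are hypothetical, not from any framework), such a single entry point could simply apply every write statement, insert or update, against every configured database:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.List;
import javax.sql.DataSource;

// Hypothetical single entry point: every project routes its writes through this class,
// so inserts AND updates are applied to every database, not only the local one.
public class SingleEntryPointWriter {

    private final List<DataSource> allDatabases;

    public SingleEntryPointWriter(List<DataSource> allDatabases) {
        this.allDatabases = allDatabases;
    }

    // Executes the same parameterized statement (INSERT or UPDATE) against each database.
    public void write(String sql, Object... params) throws SQLException {
        for (DataSource db : allDatabases) {
            try (Connection con = db.getConnection();
                 PreparedStatement ps = con.prepareStatement(sql)) {
                for (int i = 0; i < params.length; i++) {
                    ps.setObject(i + 1, params[i]);
                }
                ps.executeUpdate();
            }
        }
    }
}
```

A call such as writer.write("UPDATE customer SET name = ? WHERE id = ?", "New Name", 42L) would then reach every database instead of only the local one.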
When it comes to "eyeballing" database validation (done with SQL scripting), you should definitely consider using Informatica, specifically for data as it is being moved into production systems. The data in your production systems has to be right in order to support your business decision making. Informatica Data Validation Option provides the ETL testing automation and management capabilities to ensure that your production systems are not compromised by the data update process.
If you find that these options don't suit your needs, here are some resources I found on this topic:
database-synchronization-an-overview-of-approaches
MSDN Synchronizing Databases
how-to-synchronize-databases-in-different-servers-in-sql-server-2008
sql-comparison-sdk-synchronizing-databases
I have Oracle as my main RDBMS for reads and writes, but I want to use Couchbase as a caching layer, since it has map-reduce and can be used like memcached. Any idea how I can implement that, and how to transfer and update data in the caching layer when Oracle is updated or inserted into?
You haven't told us anything about your current performance issues.
I have seen too many applications which do not really take advantage of RDBMS/SQL features, especially if an ORM sits in between.
The proposed cure, putting another cache on top of the database and synchronizing it across a cluster manually using IP multicast (SwarmCache, for example), message queues (JMS), or nightly import jobs, could create more problems in the end, and it increases system complexity.
So my answer to your question is: I would not do it, as long as there is room for improvement regarding your data model and/or queries.
I believe your question is about database synchronization. This can be done through a combination of database dependencies and "read-through" features, though I am not sure whether Couchbase offers them. With a database dependency, cached items are made dependent upon database items, and if the database items are updated or deleted, the corresponding dependent items in the cache are removed. At the same time you can write a read-through handler that is executed at the server level; the main purpose of this handler is to load fresh copies of the removed items into the cache. So, basically, you write the handler once and register it with the cache server, and the cache server executes it whenever it needs to sync new items from the DB into the cache. This reading on DB synchronization can be useful; it is based on a product called NCache.
So your question is not directly related to Couchbase itself; as others have stated, it is more about how you can be alerted when data changes in your Oracle instance.
One thing that is not well known is the Oracle Database Change Notification feature that is quite cool for this:
http://docs.oracle.com/cd/E11882_01/java.112/e16548/dbchgnf.htm
So you can create an application that listens for your changes and pushes the data into Couchbase.
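A rough sketch of that approach, based on the registration API described in the linked documentation (the orders table and the Couchbase upsert step are placeholders for illustration):

```java
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.Properties;
import oracle.jdbc.OracleConnection;
import oracle.jdbc.OracleStatement;
import oracle.jdbc.dcn.DatabaseChangeEvent;
import oracle.jdbc.dcn.DatabaseChangeListener;
import oracle.jdbc.dcn.DatabaseChangeRegistration;

public class OracleToCouchbaseBridge {

    public void listen(OracleConnection conn) throws Exception {
        Properties props = new Properties();
        props.setProperty(OracleConnection.DCN_NOTIFY_ROWIDS, "true");

        // Register for change notifications and attach a listener.
        DatabaseChangeRegistration dcr = conn.registerDatabaseChangeNotification(props);
        dcr.addListener(new DatabaseChangeListener() {
            @Override
            public void onDatabaseChangeNotification(DatabaseChangeEvent event) {
                // Re-read the changed rows and upsert them into Couchbase here.
            }
        });

        // Associate a query with the registration; changes to the queried table fire the listener.
        Statement stmt = conn.createStatement();
        ((OracleStatement) stmt).setDatabaseChangeRegistration(dcr);
        try (ResultSet rs = stmt.executeQuery("SELECT id FROM orders")) {
            while (rs.next()) { /* initial read only; the rows themselves are not needed here */ }
        }
        stmt.close();
    }
}
```

Note that the database user needs the CHANGE NOTIFICATION privilege for the registration to succeed.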
Given a pre-production Oracle database and a production Oracle database, if around 300K records need to be transferred from the former to the latter, would using a messaging system such as an ESB/JMS/TIBCO be a good option?
I don't know Oracle, but if I were trying to asynchronously replicate data with SQL Server, I would use its own internal tools to accomplish it. I would imagine Oracle has similar tools for running jobs that copy between two Oracle databases.
However, I do have quite a bit of experience using an ESB (Mule) with ActiveMQ to replicate data across database technologies. Specifically I've done SQL Server->Mongo and MySQL->Mongo with Mule and ActiveMQ.
So far I've found Mule to be a wonderful solution - especially coupled with ActiveMQ. I've been able to replicate about 400k Wordpress blog posts (from MySQL) to Mongo in about 20 minutes. To transfer 100k articles from a CMS system we were able to get it done in about 30 minutes.
I figured I'd weigh in because you mentioned an ESB and messaging. I would go that route if the integration points are heterogeneous. If you do go down that route, Mule is awesome.
If you are trying to move data from an old database to a new one rather than keep them synchronized asynchronously, a possibly simpler method is a plain SQL export/import. Assuming your old database allows you to export the data, the export gives you a SQL script file. You can then open that file in a program like Notepad, copy-paste the code into the SQL executor of your new database, and it will re-create all your tables and populate them with the old data.
Actually, using the database's own tools is the recommended method for replicating data between databases.
When using messaging, you do not get a guarantee that the data will arrive in the same sequence it was sent in, or that relationships between tables will be honored, which can result in replication errors unless you build some mechanism on the JMS receiver side to maintain the sequence. But that looks rather like overhead.
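If you did go the messaging route anyway, here is a hedged sketch of one such receiver-side mechanism; the seq message property and the apply step are assumptions for illustration, not part of the JMS standard:

```java
import java.util.HashMap;
import java.util.Map;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;

// Hypothetical JMS receiver that applies records strictly in the order they were produced,
// relying on a "seq" property stamped on each message by the sender.
public class OrderedReplicationListener implements MessageListener {

    private long nextExpected = 1;
    private final Map<Long, String> pending = new HashMap<>();

    @Override
    public synchronized void onMessage(Message message) {
        try {
            long seq = message.getLongProperty("seq");
            pending.put(seq, ((TextMessage) message).getText());

            // Apply every buffered record whose predecessors have already been applied.
            while (pending.containsKey(nextExpected)) {
                apply(pending.remove(nextExpected));
                nextExpected++;
            }
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    private void apply(String payload) {
        // Insert or update the record in the target database here.
    }
}
```

This is exactly the kind of extra bookkeeping the paragraph above calls overhead, which is why the database's own replication tools are usually preferable for a homogeneous Oracle-to-Oracle transfer.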
I have a website developed with ASP.NET MVC, Entity Framework Code First and SQL Server.
The website has entities that each have a history of statuses that we defined (NEW, PACKED, SHIPPED etc.)
The DB contains a table in which a completely separate system inserts parcel tracking data.
I have to read this tracking data and, following certain business rules, add to the existing status history of my entities.
The best way I can think of is to write an independent Windows service to poll the tracking data every so often and update my entity statuses from that. However, that makes me concerned about DB concurrency issues.
Please could someone advise me on the best strategy for this scenario?
Many thanks
There are different ways to do it, and it also depends on the response time you need. If you need to update your system as soon as the tracking system updates a record, then a trigger is the preferred way. An alternative is to schedule a job that runs every 15-30 minutes and syncs the two systems.
As for the concurrency issue, you can use a concurrency token field; Entity Framework has support for this.
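In Entity Framework Code First this is typically a property marked as a concurrency token (for example a rowversion column with the [Timestamp] attribute). As a language-neutral sketch of the same optimistic-locking idea, shown here with JPA's @Version annotation, which plays an analogous role:

```java
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Version;

// Illustrative entity: the version column acts as the concurrency token, so a conflicting
// UPDATE fails with an optimistic-lock exception instead of silently overwriting the
// other writer's change.
@Entity
public class ParcelStatus {

    @Id
    private Long id;

    private String status;

    @Version // incremented automatically on every update
    private long version;

    // getters and setters omitted for brevity
}
```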
I have an application that talks to several internal and external sources using SOAP, REST services, or plain database stored procedures. Obviously, performance and stability are major issues I am dealing with. Even when the endpoints are performing at their best, for large sets of data I easily see calls that take tens of seconds.
So I am trying to improve the performance of my application by prefetching the data and storing it locally, so that at least the read operations are fast.
While my application is the major consumer and producer of this data, some of the data can also change from outside my application, and I have no control over that. If I used caching, I would never know when to invalidate the cache when such data changes from outside my application.
So I think my only option is to have a job scheduler running that continually updates the local data. I could prioritize users based on how often they log in and use the application.
I am talking about 50 thousand users, and at least 10 endpoints that are terribly slow and can sometimes take a minute for a single call. Would something like Quartz give me the scale I need? And how would I get around the scheduler becoming a single point of failure?
I am just looking for something that doesn't require high maintenance and speeds up at least some of the less complicated subsystems, if not most of them. Any suggestions?
This does sound like you might need a data warehouse. You would update the data warehouse from the various sources, on whatever schedule was necessary. However, all the read-only transactions would come from the data warehouse, and would not require immediate calls to the various external sources.
This assumes you don't need realtime access to the most up to date data. Even if you needed data accurate to within the past hour from a particular source, that only means you would need to update from that source every hour.
You haven't said what platforms you're using. If you were using SQL Server 2005 or later, I would recommend SQL Server Integration Services (SSIS) for updating the data warehouse. It's made for just this sort of thing.
Of course, depending on your platform choices, there may be alternatives that are more appropriate.
Here are some resources on SSIS and data warehouses. I know you've stated you will not be using Microsoft products. I include these links as a point of reference: these are the products I was talking about above.
SSIS Overview
Typical Uses of Integration Services
SSIS Documentation Portal
Best Practices for Data Warehousing with SQL Server 2008