I'm looking for either a web-based or Windows-based way to point at a relational data source using automated schema exploration (or, even better, a reflection-based approach that would work on any IQueryable in-memory data source) and allow easy exploration of the data, traversing between records in related tables, etc. Basically a dynamic UI that doesn't have to look perfect. Any recommended approaches? I'm looking less for a rapid prototyping tool and more for a generic data explorer that works out of the box, in multiple contexts, against multiple data sources.
There is an application called LINQPad that I use for a similar purpose to the one described above.
linqpad.net
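To give a feel for it, here is a minimal sketch of a LINQPad "C# Program" query that dumps an in-memory IQueryable; the anonymous customer records are made up for the example, while Dump() is LINQPad's built-in extension that renders any object graph, including query results, as an explorable grid with links between related objects.

    void Main()
    {
        // Hypothetical in-memory data; the same Dump() call works against the
        // tables of a database connection once one is attached to the query.
        var customers = new[]
        {
            new { Id = 1, Name = "Alice", Country = "US" },
            new { Id = 2, Name = "Bob",   Country = "UK" }
        }.AsQueryable();

        customers.Where(c => c.Country == "US").Dump("US customers");
    }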
We have different sets of data in different systems such as Hadoop, Cassandra, and MongoDB, but our analytics team wants to get stitched-together data from across those systems. For example, customer information with demographics will be in one system, while their transactions will be in another. Analysts should be able to run queries like "for US users, what was the volume of transactions?". We need to develop an application that provides an easy way to interact with the different systems. What is the best way to do this?
Another requirement:
If we want to give the analysts their own workspace in a system like MongoDB, they should be able to play around with it easily. What is the best strategy for pulling data from one system to another on demand?
Any pointers to a common architecture used to solve this kind of problem would be really helpful.
I see two questions here:
How can I consolidate data from different systems into one system?
How can I create some data in Mongo for people to experiment with?
Here we go ... =)
I would pick one system and target that for consolidation. In other words, between Hadoop, Cassandra and MongoDB, which one does your team have the most experience with? Which one do you find easiest to query with? Which one do you have set up to scale well?
Each one has pros and cons for scale, storage, and queryability.
I would pick one and then pump all data to that system. At a recent job, that ended up being MongoDB. It was easy to move data to Mongo and it had by far the best query language. It also had a great community and setting up nodes was easier than Hadoop, etc.
Once you have solved (1), you can trim your data set and create a scaled down sandbox for people to run ad-hoc queries against. That would be my approach. You don't want to support the entire data set, because it would likely be too expensive and complicated.
If you were doing this in a relational database, I would say just run a
select top 1000 * from [table]
query on each table and use that data for people to play with.
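If the consolidated store ends up being MongoDB, a rough equivalent of that "top 1000" sampling could look like the sketch below, written against a recent version of the official C# driver (MongoDB.Driver); the connection string and the database names are placeholders.

    // Sketch: copy a capped sample of every collection into a sandbox database.
    using MongoDB.Bson;
    using MongoDB.Driver;

    class SandboxLoader
    {
        static void Main()
        {
            var client  = new MongoClient("mongodb://localhost:27017");
            var source  = client.GetDatabase("production");
            var sandbox = client.GetDatabase("sandbox");

            foreach (var name in source.ListCollectionNames().ToList())
            {
                // Take roughly the first 1000 documents as the playground data set.
                var docs = source.GetCollection<BsonDocument>(name)
                                 .Find(FilterDefinition<BsonDocument>.Empty)
                                 .Limit(1000)
                                 .ToList();

                if (docs.Count > 0)
                    sandbox.GetCollection<BsonDocument>(name).InsertMany(docs);
            }
        }
    }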
I store a huge number of reporting elements in a MySQL database. These elements are stored in a simple way:
KindOfEvent;FromCountry;FromGroupOfUser;FromUser;CreationDate
All these reporting elements should make it possible to display graphs from different points of view. I have tried using SQL queries for that, but it is very slow for users. As these graphs will be used by non-technical users, I need a tool to pre-process the results.
I am very new to all these data-mining, reporting, and OLAP concepts. If you know of a pragmatic, not-too-time-consuming approach, or a tool for this, it would help!
You could set up OLAP cubes on top of your MySQL data. The multidimensional model will help your users navigate through and analyse the data, either via Excel or web dashboards. One thing specific to icCube is its ability to integrate any JavaScript charting library and to embed the dashboards within your own pages.
I am not a database specialist, but I think MySQL is more than enough for your problem. Well-designed indexes (and sensible use of transactions) will speed up the query process.
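As a rough illustration of both points (indexing and pre-working the results), here is a hedged sketch using Connector/NET (MySql.Data); the table name reporting_event and the connection string are placeholders, and the columns come from the KindOfEvent;FromCountry;FromGroupOfUser;FromUser;CreationDate layout in the question.

    using System;
    using MySql.Data.MySqlClient;

    class ReportAggregator
    {
        static void Main()
        {
            using (var conn = new MySqlConnection("Server=localhost;Database=reports;Uid=app;Pwd=secret;"))
            {
                conn.Open();

                // One-off: a composite index covering the usual filter/group columns.
                new MySqlCommand(
                    "CREATE INDEX ix_event_country_date " +
                    "ON reporting_event (KindOfEvent, FromCountry, CreationDate)", conn)
                    .ExecuteNonQuery();

                // Pre-aggregate per country and day; charts read this instead of raw rows.
                var cmd = new MySqlCommand(
                    "SELECT FromCountry, DATE(CreationDate) AS Day, COUNT(*) AS Events " +
                    "FROM reporting_event GROUP BY FromCountry, DATE(CreationDate)", conn);

                using (var reader = cmd.ExecuteReader())
                    while (reader.Read())
                        Console.WriteLine("{0} {1}: {2}", reader["FromCountry"], reader["Day"], reader["Events"]);
            }
        }
    }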
I am not a DB expert, but if you want to process graphs you can use Neo4j (a Java graph processing framework), or SNAP (a C++ graph processing framework), or employ cloud computing if that is possible. I would recommend either Hadoop (MapReduce) or Giraph (cloud graph processing). For graph display you can use whatever tool suits you. Of course, "the best" technology depends on the data size. If none of the above suits you, try finding something that does on the wiki page: http://en.wikipedia.org/wiki/Graph_database
InfoGrid (http://infogrid.org/trac/) looks like it might suit you.
In my app I have to store some data. I'm thinking of XML instead of a database, but I'm a little confused about which is faster. The data contains some URLs and some strings.
Please let me know whether XML or a database is better.
It depends on what kind of app you are trying to develop.
For something like a weather forecast app, where you just need to save info for several provinces/cities, I think XML is better, because it is easier to implement and maintain.
For something like a diary app, where the data grows very fast, a DB is better, because a large XML file would hurt performance.
I think these kinds of questions are rather discussion-based and likely to be voted for closing.
Nevertheless, the performance depends on the size of the stored data.
While an XML file is small, it will generally perform better than the DB (considering the overhead you will need to go through to deploy the DB, etc.).
But when you need to store a lot of structured data, the DB will win the race after all.
And since I think that the phone is not a place for an RDBMS engine, I go with XML storage on WP7 for now.
One of the things I've experienced with WP7 and the built-in database is that there's a bit more upfront performance cost to using the database engine than there is with straight isolated storage and XML. It was enough of a performance hit during application startup that the delay in populating their data was apparent to the user.
I would say that for small amounts of data where you just need to read and display, XML is probably your best bet, but for data where you might have to do a lot of aggregating and grouping, it will probably wind up being easier to do with SQL, so you'll need to measure the trade-offs between performance and ease-of-coding/maintenance before you make your decision.
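For the small read-and-display case, the XML-in-isolated-storage approach might look something like this sketch; XmlSerializer and IsolatedStorageFile are both available on WP7, while the Bookmark class and the file name are made up for the example.

    using System.Collections.Generic;
    using System.IO;
    using System.IO.IsolatedStorage;
    using System.Xml.Serialization;

    public class Bookmark
    {
        public string Url { get; set; }
        public string Title { get; set; }
    }

    public static class BookmarkStore
    {
        static readonly XmlSerializer Serializer = new XmlSerializer(typeof(List<Bookmark>));

        public static void Save(List<Bookmark> bookmarks)
        {
            using (var store = IsolatedStorageFile.GetUserStoreForApplication())
            using (var stream = store.CreateFile("bookmarks.xml"))
                Serializer.Serialize(stream, bookmarks);
        }

        public static List<Bookmark> Load()
        {
            using (var store = IsolatedStorageFile.GetUserStoreForApplication())
            {
                if (!store.FileExists("bookmarks.xml"))
                    return new List<Bookmark>();

                using (var stream = store.OpenFile("bookmarks.xml", FileMode.Open))
                    return (List<Bookmark>)Serializer.Deserialize(stream);
            }
        }
    }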
I'm beginning to develop a social sharing website, so I'm curious about the database design schema... In data mining the star schema is considered the best one, but what about a social sharing website? By the nature of social sharing websites there will (I hope :)) be many users at the same time... Which design is better for performance under heavy use?
What do you want to do? Star schema and snowflake are reporting schemas; a social sharing site would not need them, except maybe for reporting.
You need something that represents the social relations. That is usually done with a graph database (http://en.wikipedia.org/wiki/Graph_database), or in an RDBMS there are graph techniques such as this; there are more details in the book by Celko.
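As a minimal illustration of the adjacency-list style of graph technique in an RDBMS, each follow relation is just a (follower, followed) pair, which maps directly to a two-column table; the sketch below runs the same idea in memory with LINQ, and the types and data are hypothetical.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    class Follow
    {
        public int FollowerId { get; set; }
        public int FollowedId { get; set; }
    }

    class Program
    {
        static void Main()
        {
            var follows = new List<Follow>
            {
                new Follow { FollowerId = 1, FollowedId = 2 },
                new Follow { FollowerId = 2, FollowedId = 3 },
                new Follow { FollowerId = 2, FollowedId = 4 },
            };

            // "Friends of friends" for user 1: a single self-join over the pair table.
            var secondDegree =
                from f1 in follows
                where f1.FollowerId == 1
                join f2 in follows on f1.FollowedId equals f2.FollowerId
                select f2.FollowedId;

            Console.WriteLine(string.Join(", ", secondDegree.Distinct())); // 3, 4
        }
    }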
Star and Snowflake are not actually design methods. They are common patterns that arise as part or whole of a schema. As far as I'm aware the term "Snowflake" was invented by Ralph Kimball and is only relevant if you are using his "Dimensional" design methodology (which I certainly wouldn't recommend for a social networking site!).
The best default design for your database should generally be a Normal Form one. Aim to be in at least Boyce-Codd / 5th Normal Form unless you find compelling reasons to modify that.
I have a legacy VB6 app which I am rewriting in .NET. I have not used an ORM package before (being the old-fashioned type who likes to know what SQL is being used), but I have seen good reports of NHibernate and I am tempted to use it for this project. I just want to check I won't be shooting myself in the foot.
Because my new app will initially run alongside the existing one, any ORM I use must work with the existing database schema. Also, I need to use SQL Server full-text searching. From what I gather, LINQ to SQL does not support full-text searching, so that rules it out.
The app uses its own method of allocating IDs for new objects - will NHibernate allow this, or does it expect to use its own mechanisms?
Also I have read that NHibernate does caching. I need to make sure that rows inserted outside of NHibernate are immediately accessible when accessing the database from NHibernate and vice versa.
There are 4 or 5 main tables and 10 or so subsidiary tables. Although a couple of the main tables have up to a million rows, the app itself will normally only be returning a few. The user load is low, so I don't anticipate performance being a problem.
At the moment I'm not sure whether it will be ASP.NET or WinForms, but either way I will be expecting to use data binding.
In terms of functionality, the app is not particularly complicated - the budget to re-implement it is about 20 man-days, so if I am going to use an ORM it has to be something that will start paying for itself pretty quickly. Similarly, I want the app to be simple to deploy and not require some monster enterprise framework.
Any thoughts on whether this is a suitable project for NHibernate would be much appreciated.
While ORMs are good, I personally wouldn't take on the risk of trying to use any ORM on a 20 day project if I had to absorb the ORM learning curve as I went.
If you have ADO.NET infrastructure you are comfortable with and you can live without ORM features, that is the much less risky approach to take.
You should learn ORMs and Linq (not necessarily Linq To Sql) eventually, but it's much more enjoyable when there is no immediate time pressure.
This sounds more like a risk management issue and that requires you to make a personal decision about how willing you are to see the project fail due to embracing new (to you) technologies.
You might also check out LLBLGen Pro. It is a very mature ORM that handles a lot of different scenarios.
I have successfully fitted an NHibernate domain model to a few legacy database schemas - it's not yet proved impossible, but it is sometimes not without its difficulties. The easiest schemas to map are those where all primary keys and foreign keys are single column ones, but with so few tables you should be able to do the mapping relatively quickly even if this is not true of yours.
I strongly recommend, particularly given your timescale, that you use Fluent NHibernate to do your mappings - the time to learn the XML mapping file syntax may be too big an ask. However, you will need to use an XML mapping file for your full-text indexing stuff (assuming that's what you meant), writing these as named SQL queries. (See nhibernate.info documentation for details.)
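To give a flavour of that, here is a hedged sketch of a Fluent NHibernate mapping against a made-up legacy table; it also uses the assigned-ID generator, which is how you tell NHibernate that the application allocates its own IDs rather than letting the database or NHibernate generate them.

    using FluentNHibernate.Mapping;

    // Hypothetical entity matching an existing legacy table.
    public class Customer
    {
        public virtual int Id { get; set; }
        public virtual string Name { get; set; }
        public virtual string Town { get; set; }
    }

    public class CustomerMap : ClassMap<Customer>
    {
        public CustomerMap()
        {
            Table("CUSTOMER");                               // existing legacy table name
            Id(x => x.Id, "CUST_ID").GeneratedBy.Assigned(); // the app supplies the ID itself
            Map(x => x.Name, "CUST_NAME");
            Map(x => x.Town, "TOWN");
        }
    }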
Suggest you spend a day or two trying to create a model for a couple of your tables, and writing code to interact with them. There'll always be people on SO ready to answer any questions you have.
You may also want to take a look at Linq to NHibernate - we've found it helpful in terms of abstracting even more of our database access stuff away behind a simple interface. But it's Fluent NHibernate that will give you the biggest and quickest win in terms of "cheating" on the NHibernate learning curve.
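As a small, hedged example of that LINQ surface: session.Query<T>() is the provider that ships with NHibernate 3+, while the older contrib "Linq to NHibernate" exposed session.Linq<T>() instead; Customer is the hypothetical entity from the mapping sketch above.

    using System.Collections.Generic;
    using System.Linq;
    using NHibernate;
    using NHibernate.Linq;

    public static class CustomerQueries
    {
        // The query is translated to SQL by NHibernate when it is enumerated.
        public static IList<Customer> InTown(ISession session, string town)
        {
            return session.Query<Customer>()
                          .Where(c => c.Town == town)
                          .OrderBy(c => c.Name)
                          .ToList();
        }
    }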