Merging two datasets - Oracle

I have one "big" TOracleDataSet which I can't change because it's used in many different parts of a huge project. I want to add just one record to this dataset for use in another grid. The solution I see is to create another Oracle dataset that combines the wanted record with the ones from the "big" dataset. In other words, the "small" dataset includes the "big" dataset.

Try this maybe?
TxQuery Project
This was a commercial project at one time, but the author was convinced to release it as open source.
TxQuery component is a TDataSet descendant component that can be used to query one or more TDataSet descendant components using SQL statements. It is implemented in 100% Delphi source code, no DLL required, because it implements its own SQL syntax parser and SQL engine.
That quote was taken from the page of the current maintainer, I believe, Chau Chee Yang.
Either this or maybe TClientDataSet might be your best option.
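With TxQuery the combination could then be expressed directly in SQL over the two datasets. A minimal sketch, assuming the two TDataSets have been registered with the TxQuery component under the hypothetical aliases BigData and ExtraRow, and that the result sets are union-compatible:

-- BigData: the existing "big" TOracleDataSet; ExtraRow: a one-record dataset.
-- Both aliases are hypothetical names for the registered TDataSets.
SELECT * FROM BigData
UNION ALL
SELECT * FROM ExtraRow

The combined result set can then be bound to the second grid without touching the original dataset.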

Related

Is it OK to have multiple merge steps in an Excel Power query?

I have data from multiple sources - a combination of Excel (table and non table), csv and, sometimes, even a tsv.
I create queries for each data source and then bring them together one at a time, or, actually, in two steps per source: a merge, then an expand to bring in the fields I want from each data source.
This doesn't feel very efficient and I think that maybe I should be just joining everything together in the Data Model. The problem when I did that was that I couldn't then find a way to write a single query to access all the different fields spread across the different data sources.
If it were Access, I'd have no trouble creating a single query once I'd created all my relationships between my tables.
I feel as though I'm missing something: How can I build a single query out of the data model?
Hoping my question is clear. It feels like something that should be easy to do but I can't home in on it with a Google search.
It is never a good idea to push the heavy lifting downstream in Power Query. If you can, work with database views, not full tables, use a modular approach (several smaller queries that you then connect in the data model), filter early, remove unneeded columns etc.
The more work that has to be performed on data you don't really need, the slower the query will be. Please take a look at this article and this one, the latter one having a comprehensive list for Best Practices (you can also just do a search for that term, there are plenty).
In terms of creating a query from the data model, conceptually that makes little sense, as you could conceivably create circular references galore.
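To illustrate the "work with database views" advice: when the sources do live in a database, a single view can pre-join and pre-filter them so Power Query only ever sees one narrow table. A minimal SQL sketch, with hypothetical table and column names:

-- Hypothetical tables sales and customers; only the needed columns are
-- exposed, and the row filter runs in the database, not in Power Query.
CREATE VIEW v_sales_report AS
SELECT s.order_id, s.order_date, s.amount, c.customer_name, c.region
FROM sales s
JOIN customers c ON c.customer_id = s.customer_id
WHERE s.order_date >= '2020-01-01';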

How to handle multiple customers with different SQL databases

Summary
I have a project with multiple existing MSSQL databases. I have already created an Azure Analysis Services instance where I deployed my first Tabular cube, and I have already tested access to the Analysis Service; it worked perfectly.
Eventually I have to duplicate the setup described above for ~90 databases (90 different customers).
I'm unsure how to organize this project and I'm not sure what possibilities I have.
What I did
I have already browsed the Internet for information, but I only found a single source where somebody asked a similar question; the first reply describes what I was already thinking about, as outlined below.
The last reply I don't really understand: what does he mean by one solution? Is there another hierarchy above the project?
Question
A possibility would be to import each database as a source in the same project, but I think this means I have to import each table from each source, which comes to 5 * 90 = 450 tables; I think this would quickly get out of control.
I also thought about duplicating the whole Visual Studio project folder ~90 times, once per customer, but at the moment I'm failing to find all the references where the name has to change, though I don't think that would be too hard.
Is there an easier way to achieve my goal? Especially regarding maintainability.
Solution
I will create a completely new database with all the needed tables. Into those tables I will copy the data from all the customers' databases, adding a new customerId column. I'll transfer the data with a cyclic job, with the periodicity still to be defined. Updates to already existing rows in a customer database I will handle with a trigger.
For this, the best approach would be to create a staging database and import the data from the other databases into it, so your Tabular model can read everything from one place.
Doing 90+ databases is going to be a massive admin overhead, and getting the cube to load them effectively is going to be problematic. Move the data using SSIS/Data Factory, as you'll be able to better orchestrate the data movement and incremental loads that way. Then, if you need to add/remove/update data sources, it is not done in the cube; it's all done at the database/Data Factory level.
Just use one database for all the customers and differentiate each customer with a customer_id column.
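A minimal T-SQL sketch of that consolidated layout, with hypothetical schema, table, and column names (Customer42 stands in for one of the ~90 source databases); the customer_id column is what keeps the sources apart:

-- One consolidated copy of each customer table, keyed by customer as well.
CREATE TABLE staging.orders (
    customer_id INT NOT NULL,  -- which source database the row came from
    order_id    INT NOT NULL,
    order_date  DATE,
    amount      DECIMAL(18, 2),
    PRIMARY KEY (customer_id, order_id)
);

-- The cyclic load job can then upsert from each customer database in turn.
MERGE staging.orders AS t
USING Customer42.dbo.orders AS s
    ON t.customer_id = 42 AND t.order_id = s.order_id
WHEN MATCHED THEN
    UPDATE SET order_date = s.order_date, amount = s.amount
WHEN NOT MATCHED THEN
    INSERT (customer_id, order_id, order_date, amount)
    VALUES (42, s.order_id, s.order_date, s.amount);

The Tabular model then reads from the staging database alone, and adding or dropping a customer is a change to the load job, not to the cube.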

How to make data show as separate words instead of one word from XML in Jaspersoft ad hoc

I have a lookup table that is harvested from an XML file and not physically stored in the MySQL database. Because of that, all the data is rendered as a single word when it is queried using Jaspersoft ad hoc, for example:
ridikill
peon
thegreat
All these lookups should instead read:
ridi kill
pe on
the great
How do I make the data show correctly, in separate words?
You are going to have some trouble doing this exclusively in the ad hoc editor; it simply doesn't have this kind of functionality on its own. You could create a calculated field with the following code in the formula builder:
CaseWhen("RigType" == 'deepwaterdrillship', 'deep water drill ship', "RigType" == 'standardjackup', 'Standard Jack Up',"RigType"=='standardfloater','Standard Floater')
Replace all instances of "RigType" with your original field name. Obviously this will get quite manual if you have a lot of different strings.
If you created a calculated table in the domain/topic that you are using, with logic similar to the code above, that would be more powerful, since you could join it to your other tables. However, as Petter commented, this is a data-source problem, and in my experience it is always better to fix the source if possible.
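If the domain or topic is backed by a database you control, the same mapping can live in a derived (calculated) table instead, which the other tables can then join to. A minimal SQL sketch using the rig-type values from the formula above, with hypothetical table and column names:

-- Hypothetical table rigs(rig_id, rig_type); maps the concatenated XML
-- values to human-readable display text.
SELECT r.rig_id,
       CASE r.rig_type
           WHEN 'deepwaterdrillship' THEN 'deep water drill ship'
           WHEN 'standardjackup'     THEN 'Standard Jack Up'
           WHEN 'standardfloater'    THEN 'Standard Floater'
           ELSE r.rig_type
       END AS rig_type_display
FROM rigs r;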

Filter by zip code, or other location based data retrieval strategies

My little site should be pulling a list of items from a table, using the active user's location as a filter. Think Craigslist, where you search for "dvd" but the results are not drawn from the whole DB; they are filtered by a location you select. My question has two levels:
Should I go a-la-Craigslist and ask users to pick a city-level location? My problem with this is that you need to generate what seems to me a hard-coded, hand-made list of locations.
Or should I go a-la-zip-code, and just ask the user to type his zip code, then pull all items that are in the same zip code or within a certain distance of it?
I think I prefer the zip code way, as it seems the more elegant solution, but how on earth does one go about creating a DB of all zip codes and implementing a function that, given zip code 12345, gets all zip codes within a 1 mile distance?
This should be a fairly common task, as many sites have a need similar to mine, so I am hoping not to reinvent the wheel here.
Getting a Zip Code database is no problem. You can try this free one:
http://zips.sourceforge.net/
Although I don't know how current it is, or you can use one of many providers. We have an annual subscription to ZipCodeDownload.com, and for maybe $100 we get monthly updates with the latest Zip Code data complete with Lat/Longs of the centroid of the zip code.
As for querying for all zips within a certain radius, you are going to need a spatial library of some sort. If you just have a table of zips with lats/longs, you will need a database-oriented mechanism. SQL Server 2008 has the capability built in, and there are open source libraries and commercial libraries that will add such capabilities to SQL Server 2005. The open source database PostgreSQL has a project, PostGIS that adds this capability to that database. It is here: http://postgis.refractions.net/
Other database platforms probably have similar projects, but those are the ones I am aware of. With one of these DB based libraries you should be able to directly query for any zip codes (or any rows of any kind that have lat/long columns) within a given radius.
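To make the plain lat/long mechanism concrete: with nothing more than a zips table, the great-circle distance can be computed in SQL via the spherical law of cosines. A minimal sketch assuming a hypothetical zips(zip, lat, lng) table with coordinates in degrees; 3959 is the Earth's radius in miles:

-- Finds all zip codes within 1 mile of the centroid of zip 12345.
SELECT z.zip
FROM zips z
CROSS JOIN (SELECT lat, lng FROM zips WHERE zip = '12345') o
WHERE 3959 * ACOS(
          SIN(RADIANS(o.lat)) * SIN(RADIANS(z.lat))
        + COS(RADIANS(o.lat)) * COS(RADIANS(z.lat))
          * COS(RADIANS(z.lng) - RADIANS(o.lng))
      ) <= 1;
-- Note: for the origin row itself, rounding can push the ACOS argument just
-- above 1.0; clamp it before calling ACOS in production code.

This scans the whole table, so the usual optimization is a cheap bounding-box pre-filter on lat/lng before the exact distance test.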
If you want to go a different route, you can use spatial tools with a mapping library. There are open source options here as well, such as SharpMap and many others (Google can help out), that can use the free TIGER maps for the United States as the data source. However, this route is somewhat more complicated and possibly less performant if all you need is a radius search.
Finally, you may want to look into a web service. This, as you say, is a common need, and I imagine there are any number of web services that you can subscribe to that can provide all zip codes in a given radius from a provided zip code. A quick Google search turned up this:
http://www.zip-codes.com/free-zip-code-tools.asp#radius
But there are MANY resources to be had for the searching on this subject.
how on earth does one [...] implement the function that, given zip code 12345, gets all zip codes within a 1 mile distance?
Here is a sample on how to do that:
http://www.codeproject.com/KB/cs/zipcodeutil.aspx
Just to be technical... PostGIS isn't a project of the Postgres community; it's a stand-alone project that is built on top of Postgres. If you want help or support with PostGIS, you'll want to go to its community instead of Postgres's.
You can use PostGIS. Additionally, I've used deCarta's mapping libraries. They have technology which allows you to geokey any arbitrary data type. Then you can query these spatially.
disclaimer: I work for deCarta
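With PostGIS specifically, the radius search collapses to a single indexable predicate once the zip centroids are stored as geography points. A minimal sketch with a hypothetical zips(zip, geom) table:

-- geom is geography(Point, 4326); ST_DWithin on geography takes meters,
-- and 1 mile is roughly 1609.34 m.
SELECT z.zip
FROM zips z
JOIN zips o ON o.zip = '12345'
WHERE ST_DWithin(z.geom, o.geom, 1609.34);

A GiST index on geom lets ST_DWithin answer from the index instead of scanning the whole table.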
Wouldn't it be more efficient to just figure out which cities are within a 1 mile radius and store that information in a table? Then you don't have to do calculations in the database all the time.

Managing LINQ to SQL .dbml model complexity

This question is addressed to a degree in this question on LINQ to SQL .dbml best practices, but I am not sure how to add to a question.
One of our applications uses LINQ to SQL, and we currently have one .dbml file for the entire database, which is becoming difficult to manage. We are looking at refactoring it into separate files that are more module/functionality specific, but one problem is that many of the high-level classes would have to be duplicated in several .dbml files, as the associations can't be used across .dbml files (as far as I know), along with the additional partial-class code.
Has anyone grappled with this problem and what recommendations would you make?
Take advantage of the namespace settings. You can get to them in the Properties window by clicking the white space of the O/R designer.
This allows me to have a Users table and a User class for one set of business rules and a second (but the same data store) Users table and a User class for another set of business rules.
Or break up the library, which should also have the effect of changing the namespacing, depending on your company's naming conventions. I've never worked on an enterprise app where I needed access to every single table.
Past a certain size it probably becomes easier to work with the xml instead of the dbml designer.
I have written a tool too! Mine is for scripting changes to .dbml files using C#, so you can rerun them and not lose changes. See my blog http://www.adverseconditionals.com for more details.
The approach that we've used is to keep two .dbml files. One of them holds the stored procs, and all production DB access is done through it. The other sits in a unit-test folder and holds the tables and their relationships; it is used for DB data manipulation and querying in unit tests.
I have written a utility to address exactly that problem. I needed a quick app that lets you select only the database objects you need; in my case I often needed a complex view, but no tables.
http://www.codeplex.com/SqlMetalInclude/
