Our web application stores UTF-8 encoded data in VARCHAR fields. Recently, we have provided our customers access to this data via ODBC using DataDirect's OpenAccess ODBC driver. This is achieved using DataDirect's OpenAccess SDK, writing a C# .NET class to interface with the service. We only allow customers to perform SELECT queries, and we limit results to 100K rows at this time.
This solution really works great, except that fields containing this encoded data appeared as gibberish to the users, understandably. In the next release of our service, I would like to offer users the ability to query using unencoded strings, and subsequently see the unencoded results.
I have solved this by UTF-8 encoding the incoming query, then returning the unencoded results by flagging VARCHAR fields as WVARCHAR. This is actually working really well. It does mean, though, that every VARCHAR column comes back as a WVARCHAR, regardless of whether Unicode characters are present.
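For illustration, the decoding side amounts to something like this in the .NET class (a sketch; how the raw column bytes come out of the driver is omitted):

using System.Text;

// The VARCHAR column actually holds UTF-8 bytes; decode them to a .NET string
// before returning the value flagged as WVARCHAR (i.e. as Unicode).
static string DecodeVarcharUtf8(byte[] rawBytes)
{
    return Encoding.UTF8.GetString(rawBytes);
}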
Many of our customers at this time have adopted the method of creating a linked server in SSMS on their own SQL Server instance, and it is my preferred method of connecting. Since our service limits results to 100K rows, I encourage everyone to use OPENQUERY to perform queries. It seems, though, that there is something I'm missing in the configuration of my Linked Server in SSMS. When string functions (LEFT, RIGHT, SUBSTRING for example) are performed against these WVARCHAR columns, SSMS returns the following error:
OLE DB provider "MSDASQL" for linked server "LOCAL" returned message "Requested conversion is not supported.".
Msg 7341, Level 16, State 2, Line 1
Cannot get the current row value of column "[MSDASQL].ColName" from OLE DB provider "MSDASQL" for linked server "LOCAL".
This would be returned for a query such as this:
SELECT *
FROM OPENQUERY([LOCAL], '
SELECT LEFT(FirstName, 2) AS ColName
FROM dbo.User
')
If I were to remove the LEFT function from this query, the FirstName column would be returned, properly decoded, without error.
This problem does not affect queries in MS Excel, for example. And on the surface, the strings seem to be properly handled by their respective functions as I debug my way through the .NET class which interfaces with the DataDirect product. I've attempted to change all the Server Options on the Linked Server properties, but I've not had any luck finding the right combination. I'm just looking for the right tree to bark up here. Is it my treatment of the results, changing them to WVARCHAR? Or is it some attribute of my SSMS linked server that I'm missing and need to change?
Setting COLLATION COMPATIBLE to TRUE in the linked server properties resolved the problem.
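For anyone who prefers to script it, the same option can be set with sp_serveroption (assuming the linked server is named LOCAL, as above):

-- Equivalent to checking "Collation Compatible" in the linked server properties dialog
EXEC sp_serveroption @server = 'LOCAL',
                     @optname = 'collation compatible',
                     @optvalue = 'true';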
Is there any way we can update the values in a BIRT report which in turn will update the database? We need to present a report generated from Microsoft SQL Server to the client. We tried providing the report in Excel; however, our client changes the format and it is difficult to consume it again in our proprietary tool (which is Microsoft SQL based). Is there any way we can achieve this? The client should update the values in the report and the changes should get reflected in the DB.
While it's possible to write back to the DB from BIRT using a servlet (see the Eclipse Community Forum), I don't know of a way for BIRT to track the changed values.
While it's difficult to compare Excel files, it should be simpler to create CSV files from those Excel files and compare the CSV files independent of Excel formatting changes.
I see the gathering of value changes and writing them back to the DB as a separate workflow, independent of the reporting.
Reporting tools are made for generating output only.
A general automatism concept is impossible if you think about it from a more abstract point of view:
There's data D in the database (usually spread across several tables T1, ..., Tn, and records R1, ..., Rm).
The report output data O = (o1, o2, ...) is the result of a more or less complex (the opposite of trivial) function f(R1, ..., Rm).
Any kind of automatic back-propagation like you dream of would have to know what changing the value of o1 from "spam" to "eggs" means for R1, ..., Rm.
... Or even for records which were not selected by f, for example if the user changed the value of a primary key column.
This is only possible if the function f is bijective (I don't know if the English word is correct), but usually f isn't bijective. Even if it is, inverting a non-trivial function is very hard.
Thus, if you want to let the user change values and persist the changes inside the DB, you need some kind of database UI or some kind of import interface.
Depending on your database, it might be as trivial as letting the user work with Oracle SQL Developer or similar tools which support importing data from Excel sheets.
However, these tools are intended for SQL developers, as the name implies.
OTOH, if all you want is to perform DML statements in BIRT, this is possible indirectly: you can write stored procedures in the database doing the DML work, and call these procedures from BIRT (use a JDBC Stored Procedure Query instead of a JDBC SQL Select Query).
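A minimal sketch of that approach, with hypothetical table, column, and procedure names (T-SQL, since the data lives in Microsoft SQL Server):

-- Hypothetical procedure persisting one corrected value from the report
CREATE PROCEDURE dbo.UpdateReportValue
    @RecordId INT,
    @NewValue NVARCHAR(100)
AS
BEGIN
    UPDATE dbo.SourceRecords
    SET ReportedValue = @NewValue
    WHERE Id = @RecordId;
END

In BIRT, the data set would then be a JDBC Stored Procedure Query along the lines of {call dbo.UpdateReportValue(?, ?)}, with the two parameters bound to report fields.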
I used Access 2007 to convert an MDB database from Access 97 to ACCDB format. For the RecordSource, both use the following query:
This query works as the DataSource for combining two tables into one grid when using the old MDB database, but when I use it with the ACCDB file it only shows the fields from the smaller database, not the combined ones.
In Design mode in VB6, the Data View uses the same query and shows the combined files. When I click Run in Access, it also works there.
I made a simple test program - a form with a grid and a data control. If I use as a record source "select * from Bids" it displays all the Bids table in the grid, but the query doesn't work to add the second table.
Unfortunately, I created this query about a dozen years ago, don't remember how I came up with it in SQL format, and can't make sense of the screenshot above; I've Googled but can't find how to get the SQL command from what's displayed in the screenshot.
I've struggled with the conversion for over a week, one problem after another, researching each by scouring the Web, but I am now at an impasse.
I have some mappings where business entities are populated after transformation logic. The row volumes are on the higher side, and there are quite a few business attributes which are defaulted to certain static values.
Therefore, in order to reduce the data pushed from the mapping, I created a "default" clause on the target table, and stopped feeding those columns from the mapping itself. Now, this works out just fine when I am running the session in "Normal" mode. This effectively gives me target table rows with some columns being fed by the mapping, and the rest taking values based on the "default" clause in the table DDL.
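For illustration, the kind of DDL default I'm relying on (table and column names here are made up):

CREATE TABLE target_table (
    business_key  NUMBER        NOT NULL,
    status_code   VARCHAR2(10)  DEFAULT 'ACTIVE',  -- no longer fed by the mapping
    load_date     DATE          DEFAULT SYSDATE    -- no longer fed by the mapping
);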
However, since we are dealing with the higher end of volumes, I want to run my session in bulk mode (there are no pre-existing indexes on the target tables).
As soon as I switch the session to bulk mode, this particular feature (the default values) stops working. As a result, I get NULL values in the target columns instead of the defined "default" values.
I wonder:
Is this expected behavior?
If not, am I missing some configuration somewhere?
Should I raise a ticket with Oracle, or with Informatica?
My configuration: Informatica 9.5.1 (64-bit) with Oracle 11g R2 (11.2.0.3), running on Solaris (SunOS 5.10).
Looking forward to your help here...
Could be expected behavior.
It seems that bulk mode in Informatica uses the "Direct Path" API in Oracle (see for example https://community.informatica.com/thread/23522 ).
From this document ( http://docs.oracle.com/cd/B10500_01/server.920/a96652/ch09.htm , search for "Defaults on the Direct Path") I gather that:
Default column specifications defined in the database are not available when you use direct path loading. Fields for which default values are desired must be specified with the DEFAULTIF clause. If a DEFAULTIF clause is not specified and the field is NULL, then a null value is inserted into the database.
This could be the reason for this behaviour.
I don't believe that you'll see a great benefit from not including the defaults, particularly in comparison to the benefits of a direct path load. If the data is going to be read-only, then consider compression also.
You should also note that SQL*Net features compression for repeated values in the same column, so even with conventional-path inserts the network overhead is not as high as you might think.
I hope this was not asked here before (I did search around here, and did Google for an answer, but could not find one).
The problem is: I'm using MS Access 2010 to select records from a linked table (there are millions of records in the table). If I specify criteria (e.g. a date) directly (for example date=#1/1/2013#), the query returns in an instant. If I use parameters (add a parameter of type Date/Time and provide the value 1/1/2013 when prompted (or a date in some different format), or reference a control on a form), the query takes minutes to load.
Please let me know if you have any ideas on what could be causing this. I do feel bad about asking such a question and possibly wasting someone's time...
Here's a potential answer; I didn't know this myself and did a little digging.
If performance is important, it may be necessary to prefer dynamic SQL even where parameter queries are suitable, due to how queries are optimized. Generally, Access creates a plan for a new query upon saving. When a query contains a parameter, Access cannot know what value the parameter may contain and has to make a "good guess"; depending on which actual values are later supplied, the plan may be okay or poor, resulting in sub-optimal performance. In contrast, dynamic SQL sidesteps this because the "parameters" are hard-coded into the temporary string, and thus a new plan is compiled with that value, guaranteeing an optimal execution plan. Since compiling a new plan at runtime is very fast, it can be the case that dynamic SQL will outperform parameter queries.
Source: http://www.utteraccess.com/wiki/index.php/Parameter_Query#Performance
Also, if I had to guess: with your parameter query, Access is requesting the ENTIRE table from Oracle and then filtering it down with your WHERE clause, but when the criteria value is specified directly, it actually just loads the matching records and possibly makes use of indexes.
As far as a solution goes, I would build your query string in VBA and then execute it. It opens you up to injection, but you can handle that. So:
Instead of using a saved parameter query object in Access, try to do something like this.
Dim qr As String
Dim rs As DAO.Recordset

' Inline the date value so Access compiles a fresh plan at runtime
qr = "SELECT * FROM myTable WHERE myDate = #" & Me.dateControl & "#;"

' DoCmd.RunSQL only accepts action queries (for those, CurrentDb.Execute qr, dbFailOnError works);
' for this SELECT, open a recordset instead, as you replied:
Set rs = CurrentDb.OpenRecordset(qr)
This would force the engine to make an execution plan at runtime rather than using a saved, potentially suboptimal plan. Let me know if this works out for you; I'd be interested to see.
Of course, the above note about using parameters with Access (JET/ACE) applies ONLY to Access back ends, not ODBC ones like SQL Server or Oracle. Since you pointed out that you're using Oracle here, creating a view or using a pass-through query should resolve this performance issue. One does NOT want to use Access/JET parameters with data coming from a server-based system; at minimum send the server SQL strings, but much better is a pass-through query. Note that pass-through queries are read-only, so if the result set requires editing, you have to create a view and link to that view.
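A minimal sketch of creating such a pass-through query from DAO (the DSN, credentials, and names here are hypothetical):

Dim qdf As DAO.QueryDef
Set qdf = CurrentDb.CreateQueryDef("qryPassThrough")
qdf.Connect = "ODBC;DSN=MyOracleDSN;UID=myuser;PWD=mypassword"
' This SQL is sent to Oracle untouched, so use Oracle syntax here
qdf.SQL = "SELECT * FROM my_table WHERE my_date = DATE '2013-01-01'"
qdf.ReturnsRecords = True
qdf.Close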
My database schema:
table : Terminology (ID (PK), Name, Comments)
table : Content (ID (PK), TerminologyID (FK), Data, LanguageID)
1-to-many relationship between Terminology and Content. One Terminology can have any number of Content rows, based on different language IDs.
The Terminology and Content tables may have millions of records.
Now, even though I fetch only some hundreds of records (pagination) from my client side using WCF Data Services, after 5-6 attempts I get a timeout exception.
_DataService.Terminologies.Expand("Contents").Skip(index1).Take(count).ToList();
If I don't expand my Contents, the query works fine :), but I will not have the Content data.
What is the best way to handle this scenario?
Options...
Is there any performance improvement if I use Include on the server side (I mean, writing a custom WebGet method; see the sketch after this list) over Expand on the client side?
Creating database views and accessing them from the client side.
Creating a stored procedure where I can pass my preferred LanguageID, and calling it from the client side.
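For what it's worth, a minimal sketch of the custom WebGet approach from the first option (all names are hypothetical, assuming a WCF Data Services DataService over Entity Framework):

// Inside your DataService<MyEntities> class
[WebGet]
public IQueryable<Terminology> TerminologiesByLanguage(int languageId)
{
    // Return only terminologies that have content in the requested language
    return CurrentDataSource.Terminologies
        .Where(t => t.Contents.Any(c => c.LanguageID == languageId));
}

// And in InitializeService, expose the operation:
// config.SetServiceOperationAccessRule("TerminologiesByLanguage", ServiceOperationRights.AllRead);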
Is this ADO.NET Data Services built by the default wizard?
In any case, if your client can access the database directly it will be a lot faster, so if the direct DB option is available, take it.
If WCF is the only option, then you will have to create your own implementation of a paging web service, perhaps even with a stored procedure that returns multiple recordsets.
On a side note, I do not see LanguageId in your service query, and that could slow things down a lot.
_DataService.Terminologies.Expand("Contents").Skip(index1).Take(count).ToList();
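If you go the service-operation route, the client call could then look something like this (a sketch matching the hypothetical WebGet above):

var page = _DataService
    .CreateQuery<Terminology>("TerminologiesByLanguage")
    .AddQueryOption("languageId", 1)        // the service operation parameter
    .AddQueryOption("$expand", "Contents")  // still expand, but over a pre-filtered set
    .AddQueryOption("$skip", index1)
    .AddQueryOption("$top", count)
    .ToList();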