I am migrating some of my workflows from MySQL to MonetDB.
One thing that has hampered my progress so far is the lack of FIND_IN_SET functionality in MonetDB:
> SELECT FIND_IN_SET('b', 'a,b,c,d');
2
I was relying on this functionality for converting domain definitions between two alignments.
Any idea how I could get this function in MonetDB with reasonable performance?
You could try using a regular expression. I recommended this to someone using MySQL who wanted to find more than one needle in a comma-delimited haystack; perhaps it could be adapted to MonetDB?
SELECT name FROM table WHERE CONCAT(',', DataID, ',') REGEXP ',(222|777|400),'
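If you need FIND_IN_SET's 1-based position rather than just a match, you can emulate it with standard string functions. This is only a sketch: it assumes your MonetDB version supports POSITION, SUBSTRING, REPLACE, and LENGTH, and that the needle itself contains no commas. Wrap both needle and haystack in commas; the element index is then the number of commas in the prefix that ends at the match:

SELECT CASE
         WHEN POSITION(',b,' IN ',a,b,c,d,') = 0 THEN 0
         ELSE LENGTH(SUBSTRING(',a,b,c,d,' FROM 1 FOR POSITION(',b,' IN ',a,b,c,d,')))
            - LENGTH(REPLACE(SUBSTRING(',a,b,c,d,' FROM 1 FOR POSITION(',b,' IN ',a,b,c,d,')), ',', ''))
       END;
-- returns 2; with real columns you would build the strings as ',' || needle || ',' and ',' || haystack || ','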
AX allows you to enter basic SQL-like expressions into View ranges. For example, in an AOT view's range, for the match value, you could enter (StatRepInterval.Name == 'Weekly'). This works nicely.
However, I need to do a more advanced lookup on a View, using a subquery. Can anyone suggest a way to do this?
This is what I would like to use, but I receive an error: "Query extended range failure: Syntax error near 34."
(StatRepInterval.Name == (SELECT FIRSTONLY StatRepInterval.Name FROM StatRepInterval WHERE StatRepInterval.PrintDirection == 1 ORDER BY StatRepInterval.Name DESC))
I've tried a lot of different variants of the subquery, from straight T-SQL to X++ SQL, but nothing seems to work.
Thanks for the help.
Sub-queries are not supported in query expressions.
This may be solved by using additional datasources with inner or outer joins as you observed.
See the spec and Axaptapedia on query expressions.
I found a way to do this. It isn't pretty, and I'm going to leave the question unanswered for a bit in case someone else has a more graceful solution.
1. Create a source View that contains all the fields I wish to return, plus calculated fields that contain my subquery results.
2. Create a second View that uses the first as a data source and applies all the necessary ranges.
Works pretty nicely. It would probably be inefficient on large tables, but this is a relatively small section of AX.
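In rough SQL terms, the two-view trick looks like the sketch below (a sketch only: the view names are illustrative, MAX(Name) stands in for the FIRSTONLY ... ORDER BY Name DESC lookup, and in AX the calculated field is defined on the view rather than written as SQL):

-- View 1: expose the subquery result as a calculated field
CREATE VIEW StatRepSource AS
SELECT s.*,
       (SELECT MAX(Name)
          FROM StatRepInterval
         WHERE PrintDirection = 1) AS LatestName
  FROM StatRepInterval s;

-- View 2: apply the range against the calculated field
CREATE VIEW StatRepFiltered AS
SELECT *
  FROM StatRepSource
 WHERE Name = LatestName;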
I am looking for a function (or a group of functions) in HSQLDB that does something similar to Oracle's LISTAGG.
I have this as part of a larger select and would like to keep the syntax as similar as possible in HSQLDB:
(SELECT LISTAGG(owner_nm, ', ') WITHIN GROUP (ORDER BY owner_nm)
   FROM OWNERSHIP
  WHERE FK_BIZ_ID = BIZ.BIZ_DATA_ID) AS CURRENT_OWNER
The point of this is that we're trying to use HSQLDB for remote work and Oracle for working on site, in prod, etc., so I want to change the DDLs as little as possible.
Looking at ARRAY_AGG, it doesn't seem to do anything similar (as far as being able to pull from a separate table, as we're doing above with OWNERSHIP). Any suggestions for how I might accomplish this?
group_concat is probably what you are looking for:
http://www.hsqldb.org/doc/2.0/guide/dataaccess-chapt.html#dac_aggregate_funcs
Quote from the manual:
GROUP_CONCAT is a specialised function derived from ARRAY_AGG. This function computes the array in the same way as ARRAY_AGG, removes all the NULL elements, then returns a string that is a concatenation of the elements of the array
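With that, the Oracle fragment above translates almost one-for-one. This sketch assumes HSQLDB 2.x and keeps the OWNERSHIP names from the question; GROUP_CONCAT accepts both an ORDER BY and a SEPARATOR clause:

(SELECT GROUP_CONCAT(owner_nm ORDER BY owner_nm SEPARATOR ', ')
   FROM OWNERSHIP
  WHERE FK_BIZ_ID = BIZ.BIZ_DATA_ID) AS CURRENT_OWNER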
Is there a way to do
UPDATE Item SET start_date = CURRENT_TIMESTAMP
in NHibernate without using HQL/SQL?
I am trying to avoid HQL/SQL because the rest of my code uses the Criteria API. I want to do something like:
var item = session.Get<Item>(id);
item.start_date = DateTime.Now;  // client-side time, not the database's CURRENT_TIMESTAMP
There are two ways, and SQL is the correct one. Either you:
load all entities, change them, update, and commit, or
write a SQL query and let the DBMS handle most of the work.
> I am trying to avoid hql/sql because the rest of my code is in criteria
That is not a valid argument. Criteria is an API intended for relational searches, and it does not support mass updates.
Different tasks, different APIs.
In this case you can use either HQL or SQL, as the syntax is the same. I recommend the former, because you'll be using your entity/property names instead of table/column names.
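A minimal sketch of the HQL bulk update, assuming the entity and property are mapped as Item and start_date (current_timestamp() is translated by your dialect and evaluated by the database, not the client):

// issues a single UPDATE statement; no entities are loaded into the session
session.CreateQuery("update Item set start_date = current_timestamp()")
       .ExecuteUpdate();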
I have an Oracle database with all the "data", and a Solr index where all this data is indexed. Ideally, I want to be able to run queries like this:
select * from data_table where id in ([solr query results for 'search string']);
However, a few key issues arise:
Oracle will not allow more than 1000 items in an IN list (a BIG DEAL, as the list of objects I find is very often > 1000 and will usually be around 50-200k items).
I have tried to work around this with a "split" function that takes a string of comma-separated values and breaks it into array items, but then I hit the 4000-character limit on function parameters in SQL (PL/SQL allows 32k characters, but that's still WAY too limiting for 80,000+ results in some cases).
I am also hitting performance issues with WHERE IN (...); I am told it produces a very slow query, even when the referenced field is indexed.
I've tried chaining ORs to dodge the 1000-item limit (i.e. id IN (1...1000) OR id IN (1001...2000) OR id IN (2001...3000)), and this works, but is very slow.
I am thinking that I should load the Solr client JARs into Oracle and write an Oracle function in Java that calls Solr and pipelines the results back as a list, so that I can do something like:
select * from data_table where id in (select * from table(runSolrQuery('my query text')));
This is proving quite hard, and I am not sure it's even possible.
Things that I can't do:
Store the full data in Solr (security + storage limits).
Use Solr as the controller of pagination and ordering (this is why I am fetching data from the DB).
So I have to cook up a hybrid approach where Solr really acts as the full-text search provider for Oracle. Help! Has anyone faced this?
Check this out:
http://demo.scotas.com/search-sqlconsole.php
This product seems to do exactly what you need.
I'm not a Solr expert, but I assume that you can get the Solr query results into a Java collection. Once you have that, you should be able to use that collection with JDBC. That avoids the limit of 1000 literal items because your IN list would be the result of a query, not a list of literal values.
Dominic Brooks has an example of using object collections with JDBC. You would do something like this.
Create a couple of types in Oracle:
CREATE TYPE data_table_id_typ AS OBJECT (
  id NUMBER
);

CREATE TYPE data_table_id_arr AS TABLE OF data_table_id_typ;
In Java, you can then create an appropriate STRUCT array, populate it from Solr, and bind it to the SQL statement:
SELECT *
FROM data_table
WHERE id IN (SELECT * FROM TABLE( CAST (? AS data_table_id_arr)))
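A sketch of the Java side, assuming the Oracle JDBC driver (createStruct is standard JDBC; createOracleArray is an Oracle extension) and that the Solr ids have already been collected into a list; the class and variable names are illustrative:

import java.sql.Array;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.List;
import oracle.jdbc.OracleConnection;

public class SolrIdLookup {
    public static void query(OracleConnection conn, List<Long> solrIds) throws SQLException {
        // one STRUCT per id, matching data_table_id_typ(id NUMBER)
        Object[] structs = new Object[solrIds.size()];
        for (int i = 0; i < solrIds.size(); i++) {
            structs[i] = conn.createStruct("DATA_TABLE_ID_TYP", new Object[] { solrIds.get(i) });
        }
        // wrap the STRUCTs in the collection type data_table_id_arr
        Array ids = conn.createOracleArray("DATA_TABLE_ID_ARR", structs);
        String sql = "SELECT * FROM data_table WHERE id IN "
                   + "(SELECT t.id FROM TABLE(CAST(? AS data_table_id_arr)) t)";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setArray(1, ids);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    // process each matching row here
                }
            }
        }
    }
}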
Instead of using a long BooleanQuery, you can use TermsFilter (it works like RangeFilter, but the items don't have to be contiguous).
Like this (first fill your TermsFilter with terms):
TermsFilter termsFilter = new TermsFilter();
// loop through your values and add a term to the filter for each one
for (String value : values) {
    termsFilter.addTerm(new Term("<field-name>", value));
}
then search the index like this:
DocList parentsList = searcher.getDocList(new MatchAllDocsQuery(), searcher.convertFilter(termsFilter), null, 0, 1000);
where searcher is a SolrIndexSearcher (see the javadoc for more info on the getDocList method):
http://lucene.apache.org/solr/api/org/apache/solr/search/SolrIndexSearcher.html
Two solutions come to mind.
First, look into using the Oracle-specific Java extensions to JDBC. They allow you to pass in an actual array/list as an argument. You may need to create a stored proc (it has been a while since I had to do this), but if this is a focused use case, it shouldn't be overly burdensome.
Second, if you are still running into a boundary like the 1000-object limit, consider using the "rows" setting when querying Solr and leveraging its inherent pagination feature.
I've used this bulk-fetching method with stored procs to fetch large quantities of data that needed to be put into Solr. Involve your DBA. If you have a good one and use the Oracle-specific extensions, I think you can attain very reasonable performance.
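If you go the pagination route, the Solr side is just the start and rows request parameters; the host, core, and field list in this sketch are illustrative:

http://localhost:8983/solr/select?q=<your+query>&fl=id&start=0&rows=1000
http://localhost:8983/solr/select?q=<your+query>&fl=id&start=1000&rows=1000

Each page of ids can then be bound through the collection-based query above, keeping every statement within Oracle's limits.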
I am sick and tired of Googling for a solution to case-insensitive search on Sybase ASE (Sybase data and column names are case-sensitive). The Sybase documentation proudly says there is only one way to do such a search: the UPPER and LOWER functions. But, as the adage goes, these have performance problems, and believe me they are right; if your table holds a lot of data, performance is so bad you will never use UPPER and LOWER again. My question to fellow developers is: how do you tackle this?
P.S. Please don't advise changing the sort order or moving to another database; in the real world, developers don't control the databases.
Try creating a functional index on the lowered column, like:
CREATE INDEX INDX_MY_SEARCH ON TABLE_NAME (LOWER(MyField))
Add an additional upper- or lowercase column to your select statement. Example:
select col1, upper(col1) upp_col1 from table1 order by upp_col1
If you cannot change the sort order on the database (the best option), then indexes on the mixed-case fields will not help. There is a way to do this and keep performance, if the number of fields is manageable: add an extra column, MyFieldLower, and use a trigger to keep it filled with a lowercased copy of MyField.
Then the query is:
WHERE MyFieldLower = LOWER(#MySearch)
This will use indexing.
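A sketch of that shadow-column setup (the table name, key column, and sizes are illustrative, and @MySearch stands in for the #MySearch placeholder above):

ALTER TABLE MyTable ADD MyFieldLower varchar(255) NULL

-- keep the shadow column in sync on every insert or update
CREATE TRIGGER trg_MyTable_lower ON MyTable
FOR INSERT, UPDATE
AS
UPDATE MyTable
SET MyFieldLower = LOWER(inserted.MyField)
FROM MyTable, inserted
WHERE MyTable.id = inserted.id

CREATE INDEX IX_MyTable_lower ON MyTable (MyFieldLower)

The search then becomes an indexable equality:

SELECT * FROM MyTable WHERE MyFieldLower = LOWER(@MySearch)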