Selecting table in Oracle database in SPARQL - oracle

I'm using a SPARQL query to get rows from the table TRIPLES:
"SELECT * WHERE { ?s ?p ?o }"
But I get an error saying that the table or view doesn't exist. I think it doesn't work because the query contains no information about which table to select from. Am I right?

You're correct - you need to add a FROM clause, as in
SELECT * FROM TRIPLES WHERE ...
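If you are issuing plain SQL against the table, a minimal sketch would be the following, assuming TRIPLES has columns named S, P and O (adjust to your actual column names):
SELECT s, p, o
FROM TRIPLES;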
Share and enjoy

Related

Hibernate + Oracle Group By Results in ORA-00979: not a GROUP BY expression error

I have the following Hibernate HQL query:
select t from Term t join ApprovedCourse ap on t.id = ap.term.id group by t order by t desc
It's failing with the
ORA-00979: not a GROUP BY expression
error because Oracle insists that all select values be in the group by. Hibernate, of course, is hiding the various fields of the Term object from us, letting us deal with it as a Term and not Term.id. (This query works on Postgres, by the way. Postgres is more liberal about its group by requirements.)
Hibernate is producing the following SQL:
select term0_.id as id1_12_, term0_.semester_id as semester_id2_12_, term0_.year_id as year_id3_12_
from term term0_
inner join approved_course approvedco1_
on (term0_.id=approvedco1_.term_id)
group by term0_.id
order by term0_.id desc
I've tried just removing the select t from the start of the query, but then Hibernate assumes that I'm selecting both the Term and ApprovedCourse objects, and that makes things worse.
So how do I make this work in a Hibernate way?
I found that I could get what I want by replacing the group by clause with a distinct in the select clause. Here's the resulting query:
select distinct(t) from Term t join ApprovedCourse ap on t.id = ap.term.id order by t desc
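Roughly, the distinct version should push the de-duplication into the generated SQL instead of relying on GROUP BY; a sketch of the expected shape, using only the columns shown above and omitting Hibernate's column aliases:
select distinct term0_.id, term0_.semester_id, term0_.year_id
from term term0_
inner join approved_course approvedco1_
on (term0_.id=approvedco1_.term_id)
order by term0_.id desc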

Hive Joins query

I have two tables in hive:
Table 1:
1,Nail,maher,24,6.2
2,finn,egan,23,5.9
3,Hadm,Sha,28,6.0
4,bob,hope,55,7.2
Table 2:
1,Nail,maher,24,6.2
2,finn,egan,23,5.9
3,Hadm,Sha,28,6.0
4,bob,hope,55,7.2
5,john,hill,22,5.5
6,todger,hommy,11,2.2
7,jim,cnt,99,9.9
8,will,hats,43,11.2
Is there any way in Hive to retrieve the new data in table 2 that doesn't exist in table 1?
In other database tools you would use a left/right join, but that doesn't exist in Hive. Any suggestions on how this could be achieved?
If you are using Hive version >= 0.13 you can use this query:
SELECT * FROM A WHERE (A.firstname, A.lastname ...) NOT IN (SELECT B.firstname, B.lastname ... FROM B);
But I'm not sure if Hive supports multiple columns in the IN clause.
If not, something like this could work:
SELECT * FROM A WHERE A.firstname NOT IN (SELECT B.firstname FROM B) AND A.lastname NOT IN (SELECT B.lastname FROM B) ...;
It might be wiser to concatenate the fields together before testing for NOT IN:
SELECT *
FROM t2
WHERE CONCAT(t2.firstname, t2.lastname, CAST(t2.val1 AS STRING), CAST(t2.val2 AS STRING)) NOT IN
      (SELECT CONCAT(t1.firstname, t1.lastname, CAST(t1.val1 AS STRING), CAST(t1.val2 AS STRING))
       FROM t1)
Performing sequential NOT IN sub-queries may give you erroneous results.
From the above example, a new record with the values ('nail','egan',28, 7.2) would not show up as new with sequential NOT IN statements.
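Since the question mentions left/right joins: a LEFT OUTER JOIN with a NULL check is another common way in Hive to get the rows of t2 that have no exact match in t1. A sketch, assuming the columns are named firstname, lastname, val1 and val2:
SELECT t2.*
FROM t2
LEFT OUTER JOIN t1
  ON  t2.firstname = t1.firstname
  AND t2.lastname  = t1.lastname
  AND t2.val1      = t1.val1
  AND t2.val2      = t1.val2
WHERE t1.firstname IS NULL;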

Hive join without common field

I have the following tables:
Table1:
user_name Url
Rahul www.cric.info.com
ranbir www.rogby.com
sahil www.google.com
banit www.yahoo.com
Table2:
Keyword category
cric sports
footbal sports
google search
I want to search Table1 by matching against the keywords in Table2. I can do this with a case statement and the query works, but it is not the right approach, because each time I add a new search keyword I have to add another case branch.
select user_name,
       case when url like '%cric%' then 'sports'
            else 'undefined'
       end as category
from table1;
Thanks, I found the solution for this approach. First we need to do the join, and after that we need to filter the records:
select user_name, url, keyword, category
from (select table1.user_name, table1.url, table2.keyword, table2.category
      from table1 cross join table2) a
where a.url like concat('%', a.keyword, '%');
Not sure about more current versions, but I've run into a similar problem... the primary issue is that Hive only supports equi-join statements... when you apply logic to either side of the join, it has difficulty translating into a Map Reduce function.
The alternative method, if you have a reliably structured field, is that you can create a matching key from the larger field. For example, if you know that you're looking for your keyword to exist in the second position of a dot-delimited URI, you could do something like:
select
    t1.Uri
  , t1.matchKey
from (
    select Uri, split(Uri, "\\.")[1] as matchKey
    from Table1
) t1
join Table2 on Table2.keyword = t1.matchKey
;
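For example, with the sample data above, the second dot-delimited token of the URL is exactly the keyword:
-- returns 'cric', which then matches the keyword 'cric' in Table2
select split('www.cric.info.com', "\\.")[1];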

Oracle: function based index using dynamic values

I have a complex SQL query. One of the simpler parts of the query looks like this:
Query 1:
SELECT *
FROM table1 t1, table2 t2
WHERE t1.number = t2.number
AND UPPER(t1.name) = UPPER(t2.name)
AND t1.prefix = p_in_prefix;
Query 2:
SELECT *
FROM table1 t1, table2 t2
WHERE t1.number = t2.number
AND UPPER(t1.name) = UPPER(p_in_prefix || t2.name)
AND t1.prefix = p_in_prefix;
I have a function-based index on table1 as (number, UPPER(name)) and a function-based index on table2 as (number, UPPER(name)). p_in_prefix is an input parameter (basically a number).
Because of these indexes, Query 1 runs efficiently. But Query 2 has a performance problem, because in Query 2 't2.name' is prefixed with p_in_prefix.
I cannot create a function-based index for Query 2, because p_in_prefix is an input parameter and I don't know, at index creation time, what values it might hold. How do I resolve the performance issue in this scenario? Any hint/idea would be appreciated. If you require more information, please let me know.
Thanks.
Use AND UPPER(t1.name) = UPPER(p_in_prefix) || UPPER(t2.name).
As you have a function-based index on UPPER(NAME) of table2, the query needs an operand with exactly the same expression in order to make use of that index.
Using UPPER(p_in_prefix || t2.name) will not use the function-based index, because it does not match the indexed expression UPPER(NAME). Note that using UPPER(t2.name) does not cause any problems, as t2 is just a table alias.
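Applied to Query 2, the rewritten predicate looks like this (everything else is unchanged from the question):
SELECT *
FROM table1 t1, table2 t2
WHERE t1.number = t2.number
AND UPPER(t1.name) = UPPER(p_in_prefix) || UPPER(t2.name)
AND t1.prefix = p_in_prefix;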
Along with this, you can also pass an optimizer hint in your query in order to instruct the optimizer to use the index.
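For example, assuming the function-based index on table2 is named table2_upper_name_idx (a hypothetical name), the hint could look like:
SELECT /*+ INDEX(t2 table2_upper_name_idx) */ *
FROM table1 t1, table2 t2
WHERE t1.number = t2.number
AND UPPER(t1.name) = UPPER(p_in_prefix) || UPPER(t2.name)
AND t1.prefix = p_in_prefix;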
For more information read "Oracle Database 11g SQL" by Jason Price.
Also see the Oracle documentation on function-based indexes and on optimizer hints.

Entity Framework 4 generated queries are joining full tables

I have two entities: Master and Details.
When I query them, the resulting query to database is:
SELECT [Extent2]."needed columns listed here", [Extent1]."needed columns listed here"
FROM (SELECT * [Details]."all columns listed here"...
FROM [dbo].[Details] AS [Details]) AS [Extent1]
LEFT OUTER JOIN [dbo].[Master] AS [Extent2] ON [Extent1].[key] = [Extent2].[key]
WHERE [Extent1].[filterColumn] = @p__linq__0
My question is: why isn't the filter applied in the inner query? How can I get EF to generate that kind of query? I've tried a lot of EF and LINQ expressions.
What I need is something like:
SELECT <anything needed>
FROM Master LEFT JOIN Details ON Master.key = Details.Key
WHERE filterColumn = @param
I'm getting a full scan on both tables, and in my production environment I have millions of rows in each table.
Thanks a lot !!
Sometimes the Entity Framework does not produce the best query. You can do a few of the following to optimize:
Modify the LINQ statement (test with LINQPad)
Create a stored proc and map the stored proc to return an entity
Create a view that handles the join and map the view to a new entity (sketched below)
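As a sketch of the view option, something along these lines could be mapped to a new entity so the join and the filter column live in one queryable object (the view name and the alias column names are hypothetical; include whatever columns your entity needs):
CREATE VIEW dbo.MasterDetailsView AS
SELECT m.[key] AS MasterKey,
       d.[key] AS DetailKey,
       d.filterColumn AS FilterColumn
FROM dbo.[Master] AS m
LEFT JOIN dbo.[Details] AS d
       ON m.[key] = d.[key];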
