How to search for an array element in SurrealDB

I'm trying to perform a query that evaluates whether an array includes or contains a specific value or set of values.
SELECT * FROM table
WHERE data = ['value_a']
https://surrealdb.com/docs/surrealql/statements/select
I've read the official documentation and tried several queries based on the examples, but I couldn't find any function or way to build this query; nothing I tried has worked.
My expected behaviour is to match a specific value or a set of values, like the IN operator in a relational SQL database:
https://www.w3schools.com/sql/sql_in.asp

There is a CONTAINS operator:
https://surrealdb.com/docs/surrealql/operators#contains
The query should look something like:
SELECT * FROM table WHERE data CONTAINS 'value_a'
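If you also need to match against a set of values (like SQL IN), SurrealQL has CONTAINSANY and CONTAINSALL operators as well; a sketch using the same placeholder table and field names:
SELECT * FROM table WHERE data CONTAINSANY ['value_a', 'value_b']
This should return records whose data array contains at least one of the listed values; CONTAINSALL requires all of them to be present.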

Related

How to query a text[] column in Supabase?

I want to retrieve rows based on a value being present in a text column defined as a multidimensional array in Supabase.
The table looks like the following.
I am trying to query these records using the following URLs:
https://DATABASE_URL.supabase.co/rest/v1/test_db?data=in.({"1"})
https://DATABASE_URL.supabase.co/rest/v1/test_db?data=in.(1)
But it doesn't seem to work. The error message was operator does not exist: text[] ~~ unknown, with the hint No operator matches the given name and argument types. You might need to add explicit type casts.
Any help will be appreciated!
Thanks in advance.
To filter by the values inside of an array, you can use the cs operator, which is equivalent to @> (contains) in PostgreSQL.
For instance, this query will retrieve all the rows that have the value "1" present in the array of the data column.
https://DATABASE_URL.supabase.co/rest/v1/test_db?data=cs.{"1"}
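For reference, on the SQL side this should correspond to PostgreSQL's array containment operator, roughly (assuming data is a text[] column):
SELECT * FROM test_db WHERE data @> ARRAY['1'];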

How to get a list of attributes from a JSONata query

I want to know, for a given JSONata query, how we can find the list of attributes with their paths. Is there any API/function to get this kind of metadata about the query? For example, for the following query I want to find out the attribute names "Account.Order.Product.Price" and "Account.Order.Product.Quantity":
$sum(Account.Order.Product.(Price * Quantity))
This may not be possible with simple regex or string parsing, as the query can contain functions and operators as well.
Atul

Faster search for a substring through a large document

I have a CSV file of more than 1M records written in English plus another language. I have to make a UI that takes a keyword, searches through the document, and returns the records where that keyword appears. I look for the keyword in two columns only.
Here is how I implemented it:
First, I made a Postgres database for the data stored in the CSV file. Then I made a classic website where the user can enter a keyword. This is the SQL query that I use (in Spring Boot):
SELECT * FROM table WHERE col1 LIKE %:keyword% OR col2 LIKE %:keyword%;
Right now it is working perfectly fine, but I was wondering how to make the search faster. Was using SQL, instead of a classic document search, the better choice?
If the document is only searched once and then thrown away, it's overhead to load it into a database. Instead, you can search the file directly using a parallel stream over Files.lines() from java.nio, which uses multiple threads to search the file concurrently:
// Files.lines() takes a Path; use try-with-resources so the underlying file is closed
List<Record> result;
try (Stream<String> lines = Files.lines(Paths.get("some/path"))) {
    result = lines
            .parallel()
            .unordered()
            .map(l -> lineToRecord(l))
            .filter(r -> r.getCol1().contains(keyword) || r.getCol2().contains(keyword))
            .collect(Collectors.toList());
}
NOTE: you need to provide the lineToRecord() method and the Record class yourself.
If the document is going to be searched over and over again, then you can think about indexing it. This means pre-processing the document to suit the search requirements; in this case, the keywords of col1 and col2. An index is like a map in Java, e.g.:
Map<String, Record> col1Index
But since you have "LIKE" semantics, this is not so easy: it's not as simple as splitting the string by whitespace, because the keyword could match a substring. So in this case it might be best to look for a tool to help. Typically this would be something like Solr/Lucene.
Databases can also provide similar functionality, e.g.: https://www.postgresql.org/docs/current/pgtrgm.html
For LIKE queries, you should look at the pg_trgm extension with the gin_trgm_ops operator class. You shouldn't need to change the query at all, just build the index on each column, or maybe one multi-column index.
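A minimal sketch of what building those indexes could look like, using the col1/col2 column names from the question and a placeholder table name your_table:
CREATE EXTENSION IF NOT EXISTS pg_trgm;
CREATE INDEX your_table_col1_trgm_idx ON your_table USING gin (col1 gin_trgm_ops);
CREATE INDEX your_table_col2_trgm_idx ON your_table USING gin (col2 gin_trgm_ops);
With those in place, the existing LIKE '%keyword%' queries can use the trigram indexes instead of scanning every row.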

How to construct a subquery in the form of SELECT * FROM (<subquery>) ORDER BY column;?

I am using gorm to interact with a Postgres database. I'm trying to ORDER BY a query that uses DISTINCT ON, and this question documents how it's not that easy to do. So I need to end up with a query in the form of:
SELECT * FROM (<subquery>) ORDER BY column;
At first glance it looks like I need to use db.QueryExpr() to turn the query I have into an expression and build another query around it. However, it doesn't seem like gorm has an easy way to directly specify the FROM clause. I tried using db.Model(expr) or db.Table(fmt.Sprint(expr)), but Model seems to be completely ignored, and fmt.Sprint(expr) doesn't return exactly what I expected, since expressions contain a few private fields. If I could turn the original query into a completely parsed string then I could use db.Table(query), but I'm not sure I can generate the query as a string without running it.
If I have a fully built gorm query, how can I wrap it in another query to do the ORDER BY I'm trying to do?
If you want to write raw SQL (including SQL that contains a subquery) that will be executed and the results scanned into an object using gorm, you can use the .Raw() and .Scan() methods:
query := `
SELECT sub.*
FROM (<subquery>) sub
ORDER BY sub.column;`
db.Raw(query).Scan(&result)
You pass .Scan() a pointer to an object that is structured like the resulting rows, very similarly to how you would use .First(). .Raw() can also have data added to the query by using ? placeholders in the query and passing the values as comma-separated arguments to the function:
query := `
SELECT sub.*
FROM (<subquery>) sub
WHERE
sub.column1 = ?
AND sub.column2 = ?
ORDER BY sub.column;`
db.Raw(query, val1, val2).Scan(&result)
For more information on how to use the SQL builder, .Raw(), and .Scan(), take a look at the examples in the documentation: http://gorm.io/advanced.html#sql-builder
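As an aside, if you are on a newer GORM version (gorm.io/gorm, v2), a *gorm.DB value can also be passed as a subquery argument to Table(), which avoids writing the outer SQL by hand. A rough sketch, assuming GORM v2 and placeholder model/column names (Record, group_col, created_at):
subQuery := db.Model(&Record{}).Select("DISTINCT ON (group_col) *").Order("group_col, created_at DESC")
var results []Record
db.Table("(?) AS sub", subQuery).Order("sub.created_at DESC").Find(&results)
Here the (?) placeholder is expanded into the generated SQL of subQuery before the outer query runs.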

Many-to-many relationship in MongoDB

I'm writing a small torrent indexer in Ruby (here) and would love to support MongoDB as an option for the database. Currently, I have the database set up with a many-to-many relationship between tags and torrents.
How would I format a query that gets all the torrent_ids from a map table that match to all the tags in a given list?
I did this in SQL like this:
select torrent_id, count(*) num from tagmap where tag_id in (tag1, tag2, tag3, tag4) group by torrent_id having num = 4
EDIT: I am right now working only with the collection with torrent_id and tag_id. That's all it has in there. So I'm mapping ids to ids and naught more.
It's better to create a collection for the mapping, consisting of tag_ids and torrent_ids. Whenever you add a torrent, also add the torrent's tags to the torrenttags collection. The index should be on tag_id.
You can use the following query syntax to get the mapping documents whose tag_id matches any of the tags in a given list:
db.tagmap.find({tag_id:{$in: ['tag1','tag2','tag3','tag4']}});
For aggregation (group by, count) you need to use MapReduce.
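On newer MongoDB versions (2.2+), the aggregation framework can also express the SQL query above directly; a rough sketch, assuming the same tagmap collection and field names:
db.tagmap.aggregate([
  { $match: { tag_id: { $in: ['tag1', 'tag2', 'tag3', 'tag4'] } } },
  { $group: { _id: "$torrent_id", num: { $sum: 1 } } },
  { $match: { num: 4 } }
]);
This groups the matching mappings by torrent_id and keeps only the torrents that matched all four tags.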
