How do I count the number of elements in a Wix Database? - velo

I'm using Corvid by Wix and I created a data collection with multiple elements in the table. How can I count the number of elements from the front-end?

Most of your questions can be answered with a quick Google search or by reading the Corvid documentation: https://www.wix.com/corvid/reference
For this specific question, see .count(): https://www.wix.com/corvid/reference/wix-data.WixDataQuery.html#count
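For example, a minimal front-end snippet (assuming your collection is named "MyCollection"; swap in your own collection name) could look like this:

import wixData from 'wix-data';

$w.onReady(async function () {
    // count() returns a Promise that resolves to the number of items matching the query;
    // with no filters applied, that's every item in the collection
    const count = await wixData.query("MyCollection").count();
    console.log(`The collection contains ${count} items`);
});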

Related

Google Sheets Extract Data from Table and make a row per data set

I'm stuck with Google Sheets.
Situation:
I have a data table with projects. Each project has a few attributes, most importantly which team members have worked on the project this month.
Goal:
I need to convert the data to a new table that is built up differently. I need one row per project per active team member.
Sample data and goal:
https://docs.google.com/spreadsheets/d/1QcNPsvHX8hBNUpCJiutof8yD8ukFYcCXM_pLNrQmDUs/edit?usp=sharing (can edit)
As you can see, SEO and Island now have two rows instead of one, as Jan AND Chris have worked on the projects this month.
Approach:
I tried FILTER, QUERY (with pivot) and thought about scripting (basically it's an iteration over the matrix B3:E8...). However, I am not particularly skilled at Sheets and am very thankful for your help. Thanks a billion, guys!!!
You can do this in a fairly standard way by using Textjoin to join together the corresponding column headers and other data for the non-blank cells, then splitting the result first into rows and then into columns with the Transpose and Split functions:
=ArrayFormula(split(transpose(split(textjoin("¶",true,if(B3:E8="","",A3:A8&"|"&F3:F8&"|"&G3:G8&"|"&H3:H8&"|"&I3:I8&"|"&B2:E2)),"¶")),"|"))

Random exhaustive (non-repeating) selection from a large pool of entries

Suppose I have a large (300-500k) collection of text documents stored in the relational database. Each document can belong to one or more (up to six) categories. I need users to be able to randomly select documents in a specific category so that a single entity is never repeated, much like how StumbleUpon works.
I don't really see a way I could implement this using slow NOT IN queries with a large number of users and documents, so I figured I might need to implement some custom data structure for this purpose. Perhaps there is already a paper describing some algorithm that might be adapted to my needs?
Currently I'm considering the following approach:
Read all the entries from the database
Create a linked-list-based index for each category from the IDs of the documents belonging to that category, then shuffle it
Create a Bloom Filter containing all of the entries viewed by a particular user
Traverse the index with the iterator, using the Bloom filter to skip already-viewed items and pick an unviewed one (sketched below).
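A rough JavaScript sketch of that idea, with a plain Set standing in for the Bloom filter and a generator standing in for the iterator:

// Fisher-Yates shuffle of the per-category id index
function shuffle(ids) {
    for (let i = ids.length - 1; i > 0; i--) {
        const j = Math.floor(Math.random() * (i + 1));
        [ids[i], ids[j]] = [ids[j], ids[i]];
    }
    return ids;
}

// Walks the shuffled index and skips ids the user has already viewed
function* unseenDocuments(categoryIds, viewed) {
    for (const id of shuffle(categoryIds.slice())) {
        if (!viewed.has(id)) {      // in the real design this would be the Bloom filter lookup
            yield id;
        }
    }
}

// Usage: const next = unseenDocuments(idsForCategory, viewedByUser).next().value;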
If you track, via a table, which entries the user has seen, try this. I'm going to use MySQL because that's the quickest example I can think of, but the gist should be clear.
On a link being 'used'...
insert into viewed (userid, url_id) values ("jj", 123)
On looking for a link...
select p.url_id
from pages p left join viewed v on v.url_id = p.url_id
where v.url_id is null
order by rand()
limit 1
This makes the database do a one-for-one join, and you're limiting your query to return only one entry that the user has not seen yet.
Just a suggestion.
Edit: It is possible to make this one operation, but there's no guarantee that the URL will be passed successfully to the user.
It depends on how users get their random entries.
Option 1:
A user pages through some entities and stops after a couple of them. For example, the user sees the current random entity, moves on to the next one, reads it, continues like that a couple of times, and that's it.
The next time this user (or another) gets an entity from this category, the set of already-viewed entities is cleared, so you are allowed to return an entity that has already been viewed.
For that option I would recommend saving a (hash) set of already-viewed entity ids; every time a user asks for a random entity, choose one randomly from the DB and check that it is not already in the set.
Because the set is so small and your data is so big, the chance of picking an already-viewed id is tiny, so this takes O(1) most of the time.
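A minimal sketch of that in JavaScript, assuming a pickRandomIdFromDb(category) helper that runs whatever random-row query you prefer:

// Small per-<user, category> set of already-viewed ids
const viewed = new Set();

async function nextRandomEntity(category) {
    while (true) {
        const id = await pickRandomIdFromDb(category);  // hypothetical DB helper
        if (!viewed.has(id)) {        // almost always true while the set stays small
            viewed.add(id);
            return id;
        }
    }
}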
Option 2:
A user pages through the entities, and the viewed entities are saved across all users and across every visit to your page.
In that case you will probably end up using all the entities in each category, and saving all the viewed entities plus checking whether an entity has been viewed will take some time.
For that option I would get all the ids for the category, shuffle them, and store them in a linked list. When you want a random unviewed entity, just take the head of the list and delete it (O(1)).
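Sketched in JavaScript, with a plain array instead of a linked list (popping from the end of a pre-shuffled array is also O(1)):

// Fetch all ids for the category once, then shuffle them (Fisher-Yates)
const pool = idsForCategory.slice();
for (let i = pool.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [pool[i], pool[j]] = [pool[j], pool[i]];
}

function nextUnviewed() {
    // The pool is already shuffled, so taking from the end still yields a
    // uniformly random unviewed entity; undefined once the category is exhausted
    return pool.pop();
}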
I assume that for any given <user, category> pair, the number of documents viewed is pretty small relative to the total number of documents available in that category.
So can you just store indexed triples <user, category, document> indicating which documents have been viewed, and then just take an optimistic approach with respect to randomly selected documents? In the vast majority of cases, the randomly selected document will be unread by the user. And you can check quickly because the triples are indexed.
I would opt for a pseudorandom approach:
1.) Determine the number of elements in the category to be viewed (SELECT COUNT(*) WHERE ...).
2.) Pick a random number in the range 1 ... count.
3.) Select a single document (SELECT * FROM ... WHERE [same as when counting] ORDER BY [some stable order]). Depending on the SQL dialect in use, there are different clauses that can be used to retrieve only the part of the result set you want (MySQL's LIMIT clause, SQL Server's TOP clause, etc.).
If the number of documents is large, the chance of serving the same user the same document twice is negligibly small. Using the scheme described above, you don't have to store any state information at all.
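For example, in MySQL-flavoured SQL with a hypothetical db.query(sql, params) helper and an assumed documents table (all names here are placeholders for your own schema and client):

async function randomDocument(categoryId) {
    // Step 1: count the documents in the category
    const [{ n }] = await db.query(
        "SELECT COUNT(*) AS n FROM documents WHERE category_id = ?", [categoryId]);
    // Step 2: random position in 0 .. n-1
    const offset = Math.floor(Math.random() * n);
    // Step 3: fetch just that row, using a stable order
    const [doc] = await db.query(
        "SELECT * FROM documents WHERE category_id = ? ORDER BY id LIMIT 1 OFFSET ?",
        [categoryId, offset]);
    return doc;
}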
You may want to consider a nosql solution like Apache Cassandra. These seem to be ideally suited to your needs. There are many ways to design the algorithm you need in an environment where you can easily add new columns to a table (column family) on the fly, with excellent support for a very sparsely populated table.
Edit: one of many possible solutions below:
Create a CF (column family, i.e. table) for each category (creating these on the fly is quite easy).
Add a row to each category CF for each document belonging to the category.
Whenever a user hits a document, you add a column named after that user to the row and set it to true. Obviously this table will be huge, with millions of columns and probably quite sparsely populated, but no problem: reading it is still constant time.
Now finding a new document for a user in a category is simply a matter of selecting any row where that user's column is null.
You should get constant-time writes and reads, amazing scalability, etc., if you can accept Cassandra's "eventually consistent" model (i.e., it is not mission critical that a user never gets a duplicate document).
I've solved a similar problem in the past by indexing the relational database into a document-oriented form using Apache Lucene. This was before the recent rise of NoSQL servers and is basically the same thing, but it's still a valid alternative approach.
You would create a Lucene Document for each of your texts with a textId (relational database id) field and multi valued categoryId and userId fields. Populate the categoryId field appropriately. When a user reads a text, add their id to the userId field. A simple query will return the set of documents with a given categoryId and without a given userId - pick one randomly and display it.
Store a user's past X selections in a cookie or something.
Return the last selections to the server with the user's new criteria.
Randomly choose one of the texts satisfying the criteria until it is not a member of the last X selections of the user.
Return this choice of text and update the list of last X selections.
I would experiment to find the best value of X, but I have in mind something like 16.
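A hedged sketch of that loop (fetchRandomText and the cookie handling are placeholders for your own code):

const X = 16;   // how many past selections to remember

async function nextText(criteria, lastSelections) {   // lastSelections: ids read back from the cookie
    let text;
    do {
        text = await fetchRandomText(criteria);        // hypothetical: any random pick matching the criteria
    } while (lastSelections.includes(text.id));
    lastSelections.push(text.id);                      // remember this choice...
    if (lastSelections.length > X) lastSelections.shift();   // ...and drop the oldest beyond X
    return text;                                       // caller writes lastSelections back into the cookie
}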

Query large Document Library to get 100 Items only

(SharePoint 2010, Visual Studio, C#)
I have a large SharePoint Document Library called LargeLib (and am concerned about performance).
I have about 100 IDs and I have to extract respective items (with just 3 columns ID, Name, Author).
The CAML query seems to be very large, as there is no "IS IN" clause possible in CAML; I would have to repeat the same CAML lines of code a hundred times. Will this be a good option? I wish I could pass it an array of IDs.
Do we have any other performance friendly option?
Thanks a lot in advance as I am stuck on this one.
This SO question seems to be the same as what you're asking. He solved it by building a function for nesting OR nodes...
One of the answers discusses using the <In> node for SharePoint 2010, so I guess you're in luck:
http://msdn.microsoft.com/en-us/library/ff625761.aspx
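The shape of such a query is roughly as follows (the field and the values here are only placeholders):

<Where>
  <In>
    <FieldRef Name='ID' />
    <Values>
      <Value Type='Counter'>1</Value>
      <Value Type='Counter'>2</Value>
      <Value Type='Counter'>3</Value>
    </Values>
  </In>
</Where>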
HTH

Comment System using Redis Database System

I am trying to build a comment system using a Redis database. I am currently using hashes to store the comment data, but the problem I am facing is that after 10 or 12 comments, the comments lose their order and start appearing randomly. Does anyone know what data type should be used for building a commenting system with Redis? Currently my hashes are of the form:
postid:comments commentid:userid "Testcomment"
Thanks, Any help will be appreciated.
Hashes are set up for quick access by key rather than retrieval in order. If you need items in a particular order, try a list or sorted set.
The reason it appears to work at first is an optimization for small sets: when you only have a small number of items, a list is the most efficient structure, so that is what Redis uses internally. When you get more items, an actual hash map is needed for efficient querying, and Redis rearranges the data so that it is ordered by hash rather than by insertion order.
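For example, with a sorted set scored by comment id (the key and member names here are only illustrative):

ZADD post:42:comments 1 comment:1
ZADD post:42:comments 2 comment:2
ZRANGE post:42:comments 0 -1

ZRANGE returns the members in score order, so the comments come back in the order they were added; each member can then be used as the key of the hash that holds that comment's fields.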
With my web app, I am using a format like this.
(appname):(postid):(comment id) - The hash of the posts
(appname):(postid):count - The latest comment id
And then I query the (appname):(postid):count key to get the amount of times I should run a loop that gets the contents of the (appname):(postid):(comment id) hash.
Example Code
$c = $redis->get('(appname):(postid):count');
for ($i = 0; $i < $c; $i++) {
    var_dump($redis->hgetall('(appname):(postid):'.$i));
}

Remove duplicates from custom entities in Microsoft Dynamics CRM

Has anyone found a good way to either merge or remove duplicates that are in custom entities? In our case we have two custom entities, literature history and subscriptions which relate contacts back to a custom entity named literature.
I can run a duplicate detection job, but this returns thousands of records and deleting them one at a time is impractical at best. We would like to either be able to merge them or just delete the duplicates. However, much Google searching has not turned up any good suggestions other than "you can write something."
Okay, but where do I even get started? Should I be bulk deleting from the duplicate detection job? Should I try just writing a quick and dirty C# program with the SDK? Is there a way to merge custom entities that just requires some magical workflow voodoo?
EDIT: FYI, what I eventually did was set the deletion state code, using some fun SQL to quickly find the duplicates:
UPDATE T1 SET DeletionStateCode = 2
FROM New_subscriptionhistory T1 INNER JOIN New_subscriptionhistory T2 ON t1.New_LiteratureId = T2.New_LiteratureId AND t1.New_ContactId = t2.New_ContactId
AND t1.CreatedOn > t2.CreatedOn AND t1.statecode = 0 AND t2.statecode = 0
You should look into creating a Bulk Delete Job using the SDK.
Here's a short tutorial.
I won't say with certainty that this is the only or the best way, but we've used SQL queries in the _MSCRM database, setting the DeletionStateCode of any duplicated entity to 2.
