Advanced Find compare columns - dynamics-365

I need to get a list of records from the child entity of a case and run a workflow where the owner of the child is not equal to the owner of the parent (case). I don't see an option in Advanced Find to compare columns between the parent and the child.
Something like: current (owner) does not equal related parent (owner)

Unfortunately this is not possible. It is only possible to compare columns that reside in the same table/record.
See MS Docs - Use column comparison in queries.


Removing a dynamic list of columns in powerquery

I'm working on a tool to help my team identify changes in some data files. Long story short, I managed to put something together (I'm quite the beginner with Power Query and M) that works well, but it lacks user friendliness.
The issue is that not all team members need the tool to check for differences in all columns (different people, different interests). To manage this, I used the following to remove all the unneeded columns before doing the compare:
= Table.RemoveColumns(myTable, {"col1", "col2", "col3", ...})
This works, but if you want to change the configuration you need to go into the code and modify the list.
My question is the following: is there any way to integrate a dynamic list into this code? I.e. have that list of columns in an easy-to-use table, tick/untick the ones you want, and have the code remove the rest?
If your intent is to allow the user to select columns without entering the Query Editor, then you may benefit from using a parameter table as described here: http://www.excelguru.ca/blog/2014/11/26/building-a-parameter-table-for-power-query/ . You should be able to expose a 2-column × N-row table to the user with some predefined column names/numbers. You can use data validation to constrain user inputs to a binary on/off behavior ( https://support.office.com/en-us/article/Apply-data-validation-to-cells-29fecbcc-d1b9-42c1-9d76-eff3ce5f7249 ).
(P.S. Based on your description of your goals, the Inquire add-in may already offer the functionality you are looking for.)
Probably the easiest way is to use "Choose Columns" on the Home tab in the Query Editor and then rename the generated step like:
#"CHOOSE COLUMNS HERE ----->" = Table.SelectColumns(Source,{"Column1", "Column2", "Column3", "Column5", "Column7", "Column8", "Column9", "Column10"})
Then when you want to adjust the selected columns, you can click the small gear icon next to that step, and a popup will show up from which you can do your (un)ticking.
Alternatively, if you use multiple queries with the same selection, you can create an additional query that outputs a list, like:
let
    Source = Table.FromList(List.Transform({1..10}, each "Column" & Text.From(_)), null, {"Available Columns"}),
    Transposed = Table.Transpose(Source),
    #"CHOOSE COLUMNS HERE ----->" = Table.SelectColumns(Transposed, {"Column2", "Column3", "Column5", "Column6", "Column8", "Column9", "Column10"}),
    TransposedBack = Table.Transpose(#"CHOOSE COLUMNS HERE ----->"),
    ConvertedToList = TransposedBack[Column1]
in
    ConvertedToList
And then use that list in your queries, like:
= Table.SelectColumns(#"Transposed Table",SelectedColumns)
where SelectedColumns is the name of the query with the selected columns.

Kibana - How to display log as table

I'm testing Kibana 4 for a project.
I have created an index from my database table, which is composed of 3 fields:
Date
User
Action
I would like to display my index as a simple table (3 column, N rows) in my dashboard.
I tried to use the "Data table" visualization, but I can't find a way to display my results without any metrics (Count, Sum, etc.).
Maybe it's pretty simple and I missed something... is there a way to do this?
Regards,
On the Discover tab, create a view that has just the fields you want and then save that as a search.
On the Dashboard tab, click on Edit then hit the + Create new button to add a widget, but if you look at the top, there's a Searches tab. Select that and add your saved search in.
[Elastic 7.x / 2019 Update]
I was a bit confused when I read @Alcanzar's answer, so I am sharing a slightly more beginner-friendly step-by-step how-to here:
STEP 1 : Create the Index Pattern
STEP 2 : Go to the Discover view and create a search on your index
Select each column you want to include/add in your view by clicking "add" on it. (The confusing part is that until you do that, you will have a "scrambled" view listing everything in a jumbled way.)
STEP 3 : Go to the Dashboard view and add a panel based on your saved search
The trick is to select the specific columns you want to include... and voilà!
Don't forget to save your view; this will help a lot in the process.
In Kibana 7.5.0 you can do it as follows:
Go to Discover section
Select fields you are interested in
Click on Save to save your discover search so you can use it in visualizations and dashboards
Click on Dashboard and create a new dashboard
Click on Add and select the panel
There is no step 6
The accepted solution has its pros (if, for simplicity, you see your index as a table, this is the only way to deal with rows naturally) but also cons (it allows the user to see too much information by expanding the records that appear in the table, and users cannot get an export of the values).
So if you plan to build tables for reports seen by users who should not see everything and may want exports of the data, I recommend a different (hacky) approach using Table visualizations:
Say you have three columns A, B and C:
If there are no duplicates in the combined values of A and B, you can use these two as aggregation fields and then set a Max or Top hit metric for C.
If even A, B and C together have duplicates, then you can use all three as aggregation fields and add a Count metric, which gives you the number of repeated rows. This solution makes some sense: instead of repeating the same row n times, the table simply tells you that the row occurs n times.
If A and B have duplicates but the combination of A, B and C is unique, then there is, afaik, no elegant solution. You have to use all three as aggregation fields, but then you end up with a dummy metric at the end (e.g. a count that is always equal to 1).
Why we have to go through all of this is another question...

Crystal: Sort by multiple groups

Good afternoon all;
Currently I have a crystal report that displays as such;
{ReceivingHospital}
{CallTtype} {Date} {SendingHospital} {Time1} {Time2}
I would like it to break down by receiving hospital, then beneath that show all "Major" call types and sum them, then all "Moderate" call types with a sum, and then all "Minor" call types with a sum, while keeping all the associated details listed in that same order. I was thinking I could add multiple group headers and place the call type in them, but that does not seem to be working.
Any ideas would be greatly appreciated.
First, you need to create a group for {ReceivingHospital} and then a second group for {CallType}. You can then create a group sum based on {CallType}. If it still does not work and you are working with multiple tables, check whether you have joined your tables correctly.

Random exhaustive (non-repeating) selection from a large pool of entries

Suppose I have a large (300-500k) collection of text documents stored in the relational database. Each document can belong to one or more (up to six) categories. I need users to be able to randomly select documents in a specific category so that a single entity is never repeated, much like how StumbleUpon works.
I don't really see a way to implement this using slow NOT IN queries with a large number of users and documents, so I figured I might need to implement some custom data structure for this purpose. Perhaps there is already a paper describing some algorithm that might be adapted to my needs?
Currently I'm considering the following approach (sketched below):
Read all the entries from the database.
Create a linked-list-based index for each category from the IDs of the documents belonging to that category, and shuffle it.
Create a Bloom filter containing all of the entries viewed by a particular user.
Traverse the index using an iterator, using the Bloom filter to skip items that have already been viewed.
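Here is a minimal sketch of that approach in Python, using a plain set as a stand-in for the per-user Bloom filter (a real Bloom filter would bound memory at the cost of rare false positives); the document IDs are made up for illustration:

import random

# Hypothetical IDs of the documents in one category, as read from the database.
category_doc_ids = [101, 102, 103, 104, 105]

# Build the shuffled per-category index once.
index = list(category_doc_ids)
random.shuffle(index)

# Stand-in for the per-user Bloom filter of viewed document IDs.
viewed = set()

def next_document(index, viewed):
    """Return the next not-yet-viewed document ID, or None when exhausted."""
    for doc_id in index:
        if doc_id not in viewed:  # the Bloom filter membership test
            viewed.add(doc_id)
            return doc_id
    return None

print(next_document(index, viewed))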
If you track, via a table, which entries the user has seen... try this. I'm going to use MySQL because that's the quickest example I can think of, but the gist should be clear.
On a link being 'used'...
insert into viewed (userid, url_id) values ("jj", 123)
On looking for a link...
select p.url_id
from pages p
left join viewed v on v.url_id = p.url_id and v.userid = 'jj'
where v.url_id is null
order by rand()
limit 1
This causes the database to do a one-for-one join, and you're limiting your query to return only one entry that the user has not yet seen.
Just a suggestion.
Edit: It is possible to make this a single operation, but there's no guarantee that the URL will be passed successfully to the user.
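For reference, here is a self-contained version of the same anti-join pattern using Python's built-in sqlite3 module (the tables follow the example above; note that SQLite spells the random sort RANDOM() instead of rand(), and the user condition lives in the join so that other users' views don't hide pages):

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE pages (url_id INTEGER PRIMARY KEY, url TEXT);
    CREATE TABLE viewed (userid TEXT, url_id INTEGER);
    INSERT INTO pages VALUES (123, 'a.html'), (124, 'b.html'), (125, 'c.html');
    INSERT INTO viewed VALUES ('jj', 123);
""")

# Pick one page this user has not seen yet, at random.
row = conn.execute("""
    SELECT p.url_id
    FROM pages p
    LEFT JOIN viewed v ON v.url_id = p.url_id AND v.userid = ?
    WHERE v.url_id IS NULL
    ORDER BY RANDOM()
    LIMIT 1
""", ("jj",)).fetchone()

print(row)  # (124,) or (125,), never (123,)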
It depends on how users get their random entries.
Option 1:
A user pages through some entities and stops after a couple of them. For example, the user sees the current random entity, moves to the next one, reads it, repeats this a couple of times, and that's it.
The next time this user (or another) gets an entity from this category, the set of already-viewed entities is cleared, and you may return an entity that was viewed before.
In that option I would recommend saving a (hash) set of already-viewed entity IDs; every time the user asks for a random entity, randomly choose one from the DB and check that it is not already in the set.
Because the set is so small and your data is so big, the chance of drawing an already-viewed ID is tiny, so it will take O(1) time in most cases.
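A sketch of that optimistic loop in Python; pick_random_id is a hypothetical stand-in for whatever random-row query you run against the DB:

import random

def pick_random_id(category_ids):
    # Hypothetical stand-in for a random-row query against the database.
    return random.choice(category_ids)

def next_unseen(category_ids, seen):
    """Retry until we draw an ID not in the user's small 'seen' set.

    Because len(seen) is tiny compared to len(category_ids), the first
    draw almost always succeeds, giving O(1) expected time."""
    while True:
        doc_id = pick_random_id(category_ids)
        if doc_id not in seen:
            seen.add(doc_id)
            return doc_id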
Option 2:
A user pages through the entities, and the viewed entities are remembered across all users and across every visit to your page.
In that case you will probably exhaust all the entities in each category, and saving all the viewed entities plus checking whether an entity has been viewed will take some time.
In that option I would get all the IDs for the category, shuffle them, and store them in a linked list. When you want a random not-yet-viewed entity, just take the head of the list and delete it (O(1)).
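A sketch of option 2 in Python, where collections.deque provides the O(1) take-the-head operation described (the IDs are made up):

import random
from collections import deque

doc_ids = [101, 102, 103, 104, 105]  # hypothetical IDs for one category
random.shuffle(doc_ids)
queue = deque(doc_ids)

def next_unviewed(queue):
    """The head of the shuffled list is the next never-served entity: O(1)."""
    return queue.popleft() if queue else None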
I assume that for any given <user, category> pair, the number of documents viewed is pretty small relative to the total number of documents available in that category.
So can you just store indexed triples <user, category, document> indicating which documents have been viewed, and then take an optimistic approach with randomly selected documents? In the vast majority of cases, a randomly selected document will be unread by the user, and you can check that quickly because the triples are indexed.
I would opt for a pseudorandom approach:
1.) Determine the number of elements in the category to be viewed (SELECT COUNT(*) WHERE ...).
2.) Pick a random number in the range 1 ... count.
3.) Select a single document (SELECT * FROM ... WHERE [same as when counting] ORDER BY [stable order]). Depending on the SQL dialect in use, there are different clauses that can be used to retrieve only the part of the result set you want (MySQL's LIMIT clause, SQL Server's TOP clause, etc.).
If the number of documents is large, the chance of serving the same user the same document twice is negligibly small. Using the scheme described above, you don't have to store any state information at all.
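The same three steps, sketched with Python's sqlite3, where LIMIT 1 OFFSET n plays the role of MySQL's LIMIT or SQL Server's TOP (the docs table and its columns are hypothetical):

import random
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE docs (id INTEGER PRIMARY KEY, category TEXT, body TEXT);
    INSERT INTO docs (category, body) VALUES
        ('science', 'doc 1'), ('science', 'doc 2'), ('science', 'doc 3');
""")

# 1.) Determine the number of elements in the category.
(count,) = conn.execute(
    "SELECT COUNT(*) FROM docs WHERE category = ?", ("science",)).fetchone()

# 2.) Pick a random offset in 0 .. count - 1.
offset = random.randrange(count)

# 3.) Select a single document at that offset, using a stable order.
row = conn.execute(
    "SELECT * FROM docs WHERE category = ? ORDER BY id LIMIT 1 OFFSET ?",
    ("science", offset)).fetchone()
print(row)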
You may want to consider a nosql solution like Apache Cassandra. These seem to be ideally suited to your needs. There are many ways to design the algorithm you need in an environment where you can easily add new columns to a table (column family) on the fly, with excellent support for a very sparsely populated table.
edit: one of many possible solutions below:
Create a CF (column family, i.e. table) for each category (creating these on the fly is quite easy).
Add a row to each category CF for each document belonging to the category.
Whenever a user hits a document, add a column named after the user to that document's row and set it to true. Obviously this table will be huge, with millions of columns, and probably quite sparsely populated, but that's no problem; reading it is still constant time.
Now finding a new document for a user in a category is simply a matter of selecting any row where that user's column is null.
You should get constant-time writes and reads, amazing scalability, etc., if you can accept Cassandra's "eventually consistent" model (i.e. it is not mission critical that a user never gets a duplicate document).
I've solved a similar problem in the past by indexing the relational database into a document-oriented form using Apache Lucene. This was before the recent rise of NoSQL servers, and it's basically the same thing, but it's still a valid alternative approach.
You would create a Lucene Document for each of your texts with a textId (relational database id) field and multi-valued categoryId and userId fields. Populate the categoryId field appropriately. When a user reads a text, add their id to the userId field. A simple query will return the set of documents with a given categoryId and without a given userId - pick one randomly and display it.
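The query logic, modeled in plain Python over dicts as a stand-in for the actual Lucene index (field names follow the description above):

import random

# Each dict mimics a Lucene Document with multi-valued fields.
docs = [
    {"textId": 1, "categoryId": {"news"},   "userId": {"alice"}},
    {"textId": 2, "categoryId": {"news"},   "userId": set()},
    {"textId": 3, "categoryId": {"sports"}, "userId": {"alice"}},
]

def unread_in_category(docs, category, user):
    """Equivalent of: categoryId:<category> AND NOT userId:<user>"""
    return [d for d in docs
            if category in d["categoryId"] and user not in d["userId"]]

candidates = unread_in_category(docs, "news", "alice")
print(random.choice(candidates)["textId"] if candidates else None)  # -> 2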
Store a user's past X selections in a cookie or something similar.
Return the last selections to the server along with the user's new criteria.
Randomly choose one of the texts satisfying the criteria until it is not a member of the user's last X selections.
Return this text and update the list of last X selections.
I would experiment to find the best value of X, but I have in mind something like X = 16.
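A sketch of that sliding window in Python, with X = 16 as suggested; the cookie round-trip is elided, and matching_texts stands for whatever satisfies the user's criteria (assumed to hold more than X entries, or the retry loop would never finish):

import random

X = 16  # window size; experiment to find the best value

def choose(matching_texts, last_x):
    """Pick a text outside the user's last X selections, then update the window."""
    choice = random.choice(matching_texts)
    while choice in last_x:
        choice = random.choice(matching_texts)
    last_x.append(choice)
    del last_x[:-X]  # keep only the most recent X selections
    return choice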

How do I implement threaded comments?

I am developing a web application that supports threaded comments. I need the ability to rearrange the comments based on the number of votes received (identical to how threaded comments work on Reddit).
I would love to hear input from the SO community on how to do this.
How should I design the comments table?
Here is the structure I am using now:
Comment
id
parent_post
parent_comment
author
points
What changes should be done to this structure?
How should I get the details from this table to display them in the correct manner?
(Implementation in any language is welcome. I just want to know how to do it in the best possible manner)
What do I need to take care of while implementing this feature so that there is less load on the CPU/database?
Thanks in advance.
Storing trees in a database is a subject with many different solutions. It depends on whether you want to retrieve a sub-hierarchy as well (i.e. all children of item X), or whether you just want to grab the entire set of hierarchies and build the tree in O(n) time in memory using a dictionary.
Your table has the advantage that you can fetch all comments on a post in one go, by filtering on the parent post. As you've defined the comment's parent in the textbook/naive way, you have to build the tree in memory (see below). If you want to obtain the tree from the DB, you need a different way to store it:
See my description of a pre-calc based approach here:
http://www.llblgen.com/tinyforum/GotoMessage.aspx?MessageID=17746&ThreadID=3208
or by using balanced trees described by CELKO here:
or yet another approach:
http://www.sqlteam.com/article/more-trees-hierarchies-in-sql
If you fetch everything in a hierarchy into memory and build the tree there, it can be more efficient, because the query is pretty simple: select .. from Comment where ParentPost = #id ORDER BY ParentComment ASC
After that query, you build the tree in memory with just one dictionary, which keeps track of the tuple (CommentID, Comment). You now walk through the result set and build the tree on the fly: for every comment you run into, you look up its parent comment in the dictionary, attach it there as a child, and then store the comment currently being processed in that dictionary as well.
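A sketch of that single-pass build in Python, assuming the rows arrive with parents before their children (as the ORDER BY above arranges when comments are keyed in insertion order):

def build_tree(rows):
    """rows: iterable of (comment_id, parent_comment_id, body) tuples,
    ordered so that a parent appears before its children."""
    by_id = {}   # the single dictionary: CommentID -> comment node
    roots = []
    for comment_id, parent_id, body in rows:
        node = {"id": comment_id, "body": body, "children": []}
        by_id[comment_id] = node
        if parent_id is None:
            roots.append(node)               # top-level comment
        else:
            by_id[parent_id]["children"].append(node)
    return roots

# Example: comment 3 replies to comment 1.
print(build_tree([(1, None, "first!"), (2, None, "second"), (3, 1, "reply")]))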
Couple things to also consider...
1) When you say "sort like reddit" based on rank or date, do you mean the top-level or the whole thing?
2) When you delete a node, what happens to the branches? Do you re-parent them? In my implementation, I'm thinking the editors will decide: either hide the node and display it as "comment hidden" along with the visible children, hide the comment and its children, or nuke the whole tree. Re-parenting should be easy (just set the children's parent to the deleted node's parent), but anything involving the whole tree seems tricky to implement in the database.
I've been looking at the ltree module for PostgreSQL. It should make database operations involving parts of the tree a bit faster. It basically lets you set up a field in the table that looks like:
ltreetest=# select path from test where path <@ 'Top.Science';
path
------------------------------------
Top.Science
Top.Science.Astronomy
Top.Science.Astronomy.Astrophysics
Top.Science.Astronomy.Cosmology
However, it doesn't ensure any kind of referential integrity on its own. In other words, you can have a record for "Top.Science.Astronomy" without having a record for "Top.Science" or "Top". But what it does let you do is stuff like:
-- hide the children of Top.Science
UPDATE test SET hide_me=true WHERE path <@ 'Top.Science';
or
-- nuke the cosmology branch
DELETE FROM test WHERE path <@ 'Top.Science.Cosmology';
If combined with the traditional "comment_id"/"parent_id" approach using stored procedures, I'm thinking you can get the best of both worlds. You can quickly traverse the comment tree in the database using your "path" and still ensure referential integrity via "comment_id"/"parent_id". I'm envisioning something like:
CREATE TABLE comments (
    comment_id SERIAL PRIMARY KEY,
    parent_comment_id int REFERENCES comments(comment_id) ON UPDATE CASCADE ON DELETE CASCADE,
    thread_id int NOT NULL REFERENCES threads(thread_id) ON UPDATE CASCADE ON DELETE CASCADE,
    path ltree NOT NULL,
    comment_body text NOT NULL,
    hide boolean NOT NULL DEFAULT false
);
The path string for a comment would look like:
<thread_id>.<parent_id_#1>.<parent_id_#2>.<parent_id_#3>.<my_comment_id>
Thus a root comment of thread "102" with a comment_id of "1" would have a path of:
102.1
And a child whose comment_id is "3" would be:
102.1.3
Some children of "3", having ids of "31" and "54", would be:
102.1.3.31
102.1.3.54
To hide the node "3" and its kids, you'd issue this:
UPDATE comments SET hide=true WHERE path <@ '102.1.3';
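A sketch of how those path strings would be built on insert, and of the descendant test that the <@ operator performs, in plain Python (in the database this would live in the stored procedures mentioned above):

def make_path(thread_id, parent_path, comment_id):
    """Root comments get '<thread_id>.<id>'; replies append their own id."""
    prefix = parent_path if parent_path else str(thread_id)
    return "{}.{}".format(prefix, comment_id)

def is_descendant_or_self(path, ancestor):
    """Rough equivalent of ltree's  path <@ ancestor  test."""
    return path == ancestor or path.startswith(ancestor + ".")

root = make_path(102, None, 1)      # '102.1'
child = make_path(102, root, 3)     # '102.1.3'
print(is_descendant_or_self("102.1.3.31", child))  # True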
I dunno though--it might add needless overhead. Plus I don't know how well maintained ltree is.
Your current design is basically fine for small hierarchies (fewer than a thousand items).
If you want to fetch at a certain level or depth, add a 'level' column to your structure and compute it as part of the save.
If performance is an issue, use a decent cache.
I'd add the following new fields to the above table:
thread_id: identifier for all comments attached to a specific object
date: the comment date (allows fetching the comments in order)
rank: the comment rank (allows fetching the comments ordered by ranking)
Using these fields you'll be able to:
fetch all comments in a thread in a single op
order comments in a thread either by date or rank
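For instance, with those fields in place the whole thread comes back in one query; a sqlite3 sketch, where the table layout follows the original structure plus the fields above:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE comment (
        id INTEGER PRIMARY KEY,
        thread_id INTEGER, parent_comment INTEGER,
        author TEXT, points INTEGER, date TEXT, rank INTEGER
    );
    INSERT INTO comment VALUES
        (1, 7, NULL, 'ann', 5, '2009-01-01', 2),
        (2, 7, 1,    'bob', 9, '2009-01-02', 1);
""")

# All comments in thread 7 in a single op, ordered by rank.
for row in conn.execute(
        "SELECT * FROM comment WHERE thread_id = ? ORDER BY rank", (7,)):
    print(row)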
Unfortunately, if you want to keep your DB queries close to the SQL standard, you'll have to recreate the tree in memory. Some DBs offer special queries for hierarchical data (e.g. Oracle).
./alex
