FileNet P8 ACCE Sweep job filter

I'm trying to set up a sweep job that moves documents from one class to a different class, but I only want to test right now -- not move ALL documents.
I was trying to add a filter to only pull over certain documents to test this before I pull the trigger, but it isn't working (ALL documents get listed in the results when I run this as a preview).
The current filter I have is:
[DocumentTitle] like '%Z*%'
Any ideas what I need to do to change the filter to only have this run on the subset of documents I want??

Please clarify the points below so we can resolve your issue:
1) Is your sweep job based on the Java API / .NET API? or
2) Is it based on the FEM (Enterprise Manager) tool?
The filter [DocumentTitle] LIKE '%Z%' will match every document whose title contains a 'Z'. Try filtering down to a single document by its Id first to fetch one record; once that is successful, test with multiple records.
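For example, a filter that restricts the sweep to a single known document might look like the line below (the GUID is just a placeholder for a real document Id; double-check the GUID-literal syntax against the CE SQL reference for your release):
Id = {0A1B2C3D-1111-2222-3333-444455556666} // hypothetical: match one specific document by its Id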
Thanks,
Habi

Sweep jobs typically take a filter condition that is essentially the part after the WHERE clause of a search. The easiest approach is therefore to go to the search view, build your search, switch to the SQL view tab, and copy everything after WHERE into your sweep's filter condition.
Here are examples of filter conditions:
VersionStatus = 4 //All superseded documents
DateCreated < NOW() - TimeSpan(365, 'Days') //All documents that were created at least a year ago
StorageArea = OBJECT('{5E2BE09A-F4B1-49E2-A229-77FE32E5FEF1}') //All content in a specific storage area
VersionStatus = 4 AND DateCreated < NOW() - TimeSpan(365, 'Days') AND ContentSize > (1024 * 1024 * 500) //Complex logical expression
One final point, regarding your comment that you "only want to test right now -- not move ALL documents": sweeps have a Sweep Mode property that defines how the sweep will execute; in your case you need to set it to Preview.

Related

Neo4j building initial graph is slow

I am trying to build out a social graph between 100k users. Users can sync other social media platforms or upload their own contacts. Building each relationship takes about 200ms. Currently, I have everything uploaded on a queue so it can run in the background, but ideally, I can complete it within the HTTP request window. I've tried a few things and received a few warnings.
Added an index to the field pn
I am getting the warning "This query builds a cartesian product between disconnected patterns." I understand why I am getting it, but no relationship exists yet; that is exactly what this initial call is building.
MATCH (p1:Person {userId: "....."}), (p2:Person) WHERE p2.pn = "....." MERGE (p1)-[:REL]->(p2) RETURN p1, p2
Any advice on how to make it faster? Ideally, each relationship creation is around 1-2ms.
You may want to EXPLAIN the query and make sure that NodeIndexSeeks are being used, and not NodeByLabelScan. You also mentioned an index on :Person(pn), but you have a lookup on :Person(userId), so you might be missing an index there, unless that was a typo.
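If the :Person(userId) index really is missing, it is a one-liner to add, and you can PROFILE the lookup afterwards to confirm a NodeIndexSeek is used (older CREATE INDEX syntax shown, matching the other answer below; adjust for your Neo4j version):
CREATE INDEX ON :Person(userId);
PROFILE MATCH (p1:Person {userId: "....."}) RETURN p1;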
Regarding the cartesian product warning, you can disregard it: the cartesian product is necessary to get the two nodes between which the relationship is created. This should be a 1 x 1 = 1 row operation, so it only becomes costly if multiple nodes are matched per side or if index lookups aren't being used.
If these are part of some batch load operation, you may want to apply your query in batches. If a user loads 100 contacts, you do NOT want to execute 100 separate queries, each adding a single contact. Instead, pass the list of contacts as a parameter, then UNWIND the list and run the query once to process the entire batch (a driver-side sketch of passing the parameter appears after the blog link below).
Something like:
UNWIND $batch as row
MATCH (p1:Person {pn: row.p1}), (p2:Person {pn: row.p2})
MERGE (p1)-[:REL]->(p2)
RETURN p1, p2
It's usually okay to batch 10k or so entries at a time, though you can adjust that depending on the complexity of the query.
Check out this blog entry for how to apply this approach.
https://dzone.com/articles/tips-for-fast-batch-updates-of-graph-structures-wi
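To make the parameter passing concrete, here is a rough driver-side sketch (Python driver assumed; the URI, credentials and pn values are placeholders):

from neo4j import GraphDatabase

# Hypothetical connection details; replace with your own.
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

# One row per relationship to create; p1/p2 hold the pn values of the two people.
batch = [
    {"p1": "+15550000001", "p2": "+15550000002"},
    {"p1": "+15550000001", "p2": "+15550000003"},
]

query = """
UNWIND $batch AS row
MATCH (p1:Person {pn: row.p1}), (p2:Person {pn: row.p2})
MERGE (p1)-[:REL]->(p2)
"""

with driver.session() as session:
    session.run(query, batch=batch)

driver.close()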
You can use the index you created on Person by suggesting a planner hint.
Reference: https://neo4j.com/docs/cypher-manual/current/query-tuning/using/#query-using-index-hint
CREATE INDEX ON :Person(pn);
MATCH (p1:Person {userId: "....."})
WITH p1
MATCH (p2:Person) USING INDEX p2:Person(pn)
WHERE p2.pn = "....."
MERGE (p1)-[:REL]->(p2)
RETURN p1, p2

Complex search query from 2 documents

I'm new to Elasticsearch and I need to execute a complex query, but I need some help.
Here is my use case:
I would like to recommend a new place to each of my users every day.
However:
The place must be open on the current day of the week
The chosen place must be near the user (closer places get a higher score)
The place should not be one of the last 10 places the user has already visited or been suggested (if a place appears in the user's last 10 visits, it should get a lower score)
My first guess is to have 2 document types, as follows (a rough mapping sketch follows the list):
user_history
user_id
place_id
date
place
place_id
opening_days (array with week days the place opens)
location (geo position of the place)
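To make that concrete, here is roughly how I picture the mapping for the place index (sketch only; the index name, field types and the Python client call are assumptions, using a recent elasticsearch-py client without mapping types):

from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # assumed local cluster

# Hypothetical mapping for the place index.
es.indices.create(
    index="place",
    mappings={
        "properties": {
            "place_id": {"type": "keyword"},
            "opening_days": {"type": "keyword"},  # e.g. "monday", "tuesday", ...
            "location": {"type": "geo_point"},
        }
    },
)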
Given a user with position [lat, lon] and id user_1, what search query could I execute to retrieve X places sorted by score? (A better score means the place is near the user and not among the last 10 places the user has already been.)
This seems like a basic query, but I can't figure out how to "mix" data from user_history and place to get the places I want.
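For what it is worth, the rough shape I imagine for the per-user search is something like the sketch below, assuming the last 10 place_ids have already been fetched from user_history in a separate query (the day, coordinates and ids are placeholders; this still does not solve the single-query "mix" problem I am asking about):

# Hypothetical per-user search: filter by opening day, score by distance,
# and down-weight (rather than exclude) recently visited places.
response = es.search(
    index="place",  # same "es" client as in the mapping sketch above
    size=3,
    query={
        "function_score": {
            "query": {
                "bool": {
                    "filter": [{"term": {"opening_days": "monday"}}]
                }
            },
            "functions": [
                # Closer places decay less and therefore score higher.
                {"gauss": {"location": {"origin": {"lat": 48.85, "lon": 2.35}, "scale": "2km"}}},
                # Places among the user's last 10 visits get a much lower score.
                {"filter": {"terms": {"place_id": ["place_42", "place_17"]}}, "weight": 0.1},
            ],
            "score_mode": "multiply",
        }
    },
)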
But that's not all!
With this query, if I want to attribute a place to each user, I need 3 steps:
retrieve all users (with their position)
for each user, search for the best place
once I have this place, add it to the user_history
This seems like a very time-consuming task. Is it possible to simplify it with fewer Elasticsearch queries?
For instance, having something like this:
retrieving for each user his best place (with 1 query: search over all users and find the best place for each)
add the place to the history
Or even better:
retrieving for each user his best place and adding it to the history (with 1 query that performs all 3 tasks above)
I don't know if it's possible to create queries that complex. That's why I need your help to tell me if it's possible and how it could be accomplished.

Grafana Templating

I am wondering if there is a way to limit the number of rows generated from Grafana templating.
I can have a drop-down variable "$x" on my Grafana page, and in the row editor I can say repeat the row for every value under $x.
Based on different criteria, x can have anywhere between 1 and roughly 160 values, but I only need to look at about 10 at a time. I am wondering if I can control or change the number of rows shown somewhere in Grafana.
I can of course manually select items from the $x drop-down to show only a few rows, but it's a matter of having only, say, 10 items selected right when the page loads.
Please let me know if I am not describing the problem correctly or if I need to clarify more.
Thanks,
Karan
As far as I know there is no direct option for this, but there are some ways you may be able to achieve what you want.
You could select your ~10 default entries and then save the dashboard; this stores the selected values in the dashboard's JSON (or you can modify the dashboard's JSON directly).
You could use the Regex field in the template settings to filter some of your values and split them into groups this way (one variable per regex group; a small example follows this list).
You could change your data in Elasticsearch to add extra fields that you can split on.
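As a small example of the regex option (the value names are made up): if $x takes values like host-000 through host-159, putting a regex such as /host-00[0-9]/ in the variable's Regex field keeps only host-000 through host-009 in the drop-down, and you can create one variable per such group.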
See PR #5616, as Daniel Lee mentioned.
In general, I think you will get a faster response to this from the Grafana team directly on GitHub.

kibana 4 discover table in dashboard [duplicate]

I'm testing Kibana 4 for a project.
I have created an index from my database table, which is composed of 3 fields:
Date
User
Action
I would like to display my index as a simple table (3 column, N rows) in my dashboard.
I tried to use the "Data table" visualization, but I can't find a way to display my results without any metrics (Count, Sum, etc...).
Maybe it's pretty simple and I missed something... is there a way to do this?
Regards,
On the Discover tab, create a view that has just the fields you want and then save that as a search.
On the Dashboard tab, click on Edit then hit the + Create new button to add a widget, but if you look at the top, there's a Searches tab. Select that and add your saved search in.
[Elastic 7.x / 2019 Update]
I was a bit confused when I read @Alcanzar's answer, so I am sharing a slightly more beginner-friendly step-by-step how-to here:
STEP 1 : Create the Index Pattern
STEP 2 : Go to the Discover view and create a search on your index
Select each column you want to include in your view by clicking "add" on it. (The confusing part is that until you do that, you will have a "scrambled" view listing everything in a jumbled way.)
STEP 3 : Go to the Dashboard view and add the saved search to your dashboard
The trick is to select only the specific columns you want to include... and voilà!
Don't forget to save your search; this will help a lot in the process.
In Kibana 7.5.0 you can do it as follows:
Go to Discover section
Select fields you are interested in
Click on Save to save your discover search so you can use it in visualizations and dashboards
Click on Dashboard and create a new dashboard
Click on Add and select the panel
There is no step 6
The accepted solution has its pros (if, for simplicity, you see your index as a table, this is the only way to deal with rows naturally) but also cons (it allows the user to see too much information, by expanding the records that appear in the table; users cannot get an export of the values).
So if you plan to build tables for reports seen by users who should not see everything and who may want to export the data, I recommend a different (hacky) approach using Table visualizations:
Say you have three columns A, B and C:
If there are no duplicates considering the combined values of A and B, you can use these two values as aggregation fields and then add a Max or Top hit metric for C.
If even A, B and C together have duplicates, then you can use all three of them as aggregation fields and add a Count metric, which will give you the number of repeated rows. This solution makes some sense, because instead of repeating the same row 'n' times, it just tells you that the row would be repeated 'n' times.
If A and B have duplicates but A, B and C are unique, then there is, afaik, no elegant solution. You have to use the three of them as aggregation fields, but then you would have a dummy metric at the end (e.g. count, always equal to 1).
Why do we have to go through all of this? That is another question...

Resources