I'm trying to dynamically move a document that is in my CMTOS object store in my Content Engine to a case folder in a solution in IBM Advanced Case Manager.
The document is transferred through a web service from a remote server to the CMTOS object store.
I heard about subscriptions ... I mean, creating a document class, creating a subscription (in the Content Engine Manager), and then opening IBM Process Designer through Workplace XT and adding an attachment to my workflow properties, but it doesn't seem to work.
I've been searching for a few days on Google, the Redbooks, ECM Place and IBM developerWorks, but I haven't found any procedure for doing this.
filenet events and subscriptions
The link above contains the how-to for events and subscriptions. You have to create a subscription for the creation event on the class you want to file into the case folder. Then CE will take care of everything else; there is no need for Process Designer and workflows.
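For reference, the event action behind such a subscription is just a Java class implementing the CE event handler interface that does the filing. Here is a minimal sketch, assuming the standard Content Engine Java API; the class name and the hard-coded folder path are placeholders, and a real handler would resolve the target case folder dynamically (e.g. from a property on the document):

    import com.filenet.api.constants.AutoUniqueName;
    import com.filenet.api.constants.DefineSecurityParentage;
    import com.filenet.api.constants.RefreshMode;
    import com.filenet.api.core.Document;
    import com.filenet.api.core.Factory;
    import com.filenet.api.core.Folder;
    import com.filenet.api.core.ObjectStore;
    import com.filenet.api.core.ReferentialContainmentRelationship;
    import com.filenet.api.engine.EventActionHandler;
    import com.filenet.api.events.ObjectChangeEvent;
    import com.filenet.api.util.Id;

    // Hypothetical event action behind a creation subscription on the incoming
    // document class: it files the newly created document into a case folder.
    public class FileToCaseFolderHandler implements EventActionHandler {

        // Placeholder path -- adjust to (or look up) the real case folder.
        private static final String CASE_FOLDER_PATH = "/IBM Case Manager/Target Case Folder";

        public void onEvent(ObjectChangeEvent event, Id subscriptionId) {
            ObjectStore os = event.getObjectStore();
            Document doc = Factory.Document.fetchInstance(os, event.get_SourceObjectId(), null);
            Folder caseFolder = Factory.Folder.fetchInstance(os, CASE_FOLDER_PATH, null);

            // Filing is just creating a containment relationship between folder and document.
            ReferentialContainmentRelationship rcr = caseFolder.file(
                    doc, AutoUniqueName.AUTO_UNIQUE, null,
                    DefineSecurityParentage.DO_NOT_DEFINE_SECURITY_PARENTAGE);
            rcr.save(RefreshMode.NO_REFRESH);
        }
    }

Such a handler is deployed as a code module / event action in the object store and attached to the creation subscription on the document class.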
Some information on Case Manager subscriptions: when you deploy a Case Manager solution, it automatically creates subscriptions in the Case Manager object stores, which in turn take care of filing case documents into the case folder. There is no need to create an additional subscription for filing case documents.
I am stuck in a complicated situation and would appreciate it if somebody could help.
I was testing indexing of blob storage (PDF files) and indexed a copy of my storage in the QA environment, which cost me some money.
My question is: is there a way to use this index in production without indexing everything again?
I found a way to copy the index, and that works fine, but when I add an indexer connected to the production blob storage it starts indexing from scratch again (as I expected). Is there any way to avoid this? Is there any way to tell the indexer to only index from now on?
I tried to reuse the index and the indexer that I already have by changing the subscription to prod, but I have to change the indexer's data source to point at the production blob storage, and in that case I get an error:
Indexer 'filesIndexer' currently references data source 'qafilesds' and cannot be updated to reference a different datasource 'prodfilesds' because it has a non-empty change tracking state, or it is currently in progress. You can use Reset API to reset the indexer's change tracking state when it is no longer in progress, and retry this call.
The simple answer to your first question is to just use the QA index you built.
A more complicated answer is to switch from the pull model you are using now to a push model. From your explanation I assume all of your content comes from blob storage and you have configured an indexer to do the indexing for you. That is known as the pull model.
The alternative is to use the Azure Cognitive Search SDK to write your own application that submits content to the index instead (the push model). In this case you do not use the built-in indexer, only the index itself, so you are free to use whatever logic you want to decide what to index and what to skip. You can even have your storage accounts notify your application with events when content is updated.
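The SDK exists for several languages; a rough Java sketch with azure-search-documents is below. The endpoint, key, index name and field names are placeholders and have to match your own index schema:

    import com.azure.core.credential.AzureKeyCredential;
    import com.azure.search.documents.SearchClient;
    import com.azure.search.documents.SearchClientBuilder;
    import com.azure.search.documents.SearchDocument;

    import java.util.Collections;

    public class PushToIndexExample {
        public static void main(String[] args) {
            // Client bound directly to the existing index -- no indexer, no data source.
            SearchClient client = new SearchClientBuilder()
                    .endpoint("https://<your-service>.search.windows.net")   // placeholder
                    .credential(new AzureKeyCredential("<admin-key>"))       // placeholder
                    .indexName("files")                                      // placeholder
                    .buildClient();

            // A document is just a bag of fields matching the index schema.
            SearchDocument doc = new SearchDocument();
            doc.put("id", "blob-0001");                       // key field (assumed name)
            doc.put("fileName", "report.pdf");
            doc.put("content", "text extracted from the PDF");

            // Your application decides what to submit and when; nothing is re-crawled.
            client.uploadDocuments(Collections.singletonList(doc));
        }
    }

Note that if you stay with the pull model, resetting the indexer (as the error message suggests) clears its change-tracking state, which means the next run will process the whole data source again.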
I am trying to get the information for all the files related to an instance of IBM BPM, but the following query does not work for me, and there is no error in the JavaScript console either. I am using the ECM Document List control, and in its configuration I am adding a variable which contains the query.
"SELECT cmis:name, IBM_BPM_Document_FileNameURL,IBM_BPM_Document_UserId FROM IBM_ WHERE IBM_BPM_Document_ProcessInstanceId = 75774"
Thanks
I would assume that you are using an external ECM server and not the embedded ECM that comes along with IBM BPM/BAW.
Looking at the problem, I would approach the debugging in the following order:
1. Use a relevant ECM browser (ACCE in the case of FileNet) to check whether the documents have a property that holds the value of the instance ID, because by default external ECM servers don't have such a document property.
2. If the documents have such a property, use it in the "WHERE" clause of the query (see the example below). If they don't, talk to whoever maintains the ECM environment to create such a property and make sure it is set properly (to the correct instance ID) for the documents.
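For example, once such a property exists, the query would take the form below; the class and property names are placeholders for whatever your ECM admin has actually defined:

    SELECT cmis:name FROM YourDocumentClass WHERE YourInstanceIdProperty = '75774'

If the property is defined as a string, the value needs the single quotes; if it is numeric, it does not.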
Another solution, if you have access to them, is to use the "BPM Document List" and "BPM File Uploader" coach views, which have a configuration option to associate documents with the current process instance.
I am working on a microservice (Spring Boot) that needs to store some static information that changes infrequently (once per quarter). The data (below) is about company reports and looks like this:
reportId#1: "frequency":"daily", "to":"some email ids"
reportId#2: "frequency":"weekly", "to":"some email ids"
As you can see, an entry in the data is basically a report ID, and the associated attributes are the frequency of the report and the receivers' email IDs.
My question is: what is the best place to store this information? I have some thoughts, and here are my views.
a) A NoSQL DB like MongoDB seems to be a good option. I can create a collection, store the data there and retrieve it once during app startup. But then I wondered whether creating a collection just to store this static info is a good choice.
b) Redis seems to be another good option. I can create a template for the above dataset and store it there, then query Redis by reportId to retrieve the frequency and the recipient list.
c) Store it in a file on the classpath and load it at app startup. The downside is that I will have to redeploy the app with a new version of the file whenever the report listing changes. I believe externalizing this information to either Mongo or Redis is a better option.
d) The app is running in AWS, so I could even store this in a file in an S3 bucket.
I would like to know your views.
Since the config will only change once a quarter, the overhead of a database is not required. You should consider Apache Commons Configuration. It will allow you to reload config changes from files without the need for an application restart.
http://commons.apache.org/proper/commons-configuration///userguide/howto_reloading.html
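A minimal sketch of that reloading setup, assuming Commons Configuration 2.x; the file name, property key and one-minute reload period are placeholders:

    import java.io.File;
    import java.util.concurrent.TimeUnit;

    import org.apache.commons.configuration2.FileBasedConfiguration;
    import org.apache.commons.configuration2.PropertiesConfiguration;
    import org.apache.commons.configuration2.builder.ReloadingFileBasedConfigurationBuilder;
    import org.apache.commons.configuration2.builder.fluent.Parameters;
    import org.apache.commons.configuration2.reloading.PeriodicReloadingTrigger;

    public class ReportConfig {
        public static void main(String[] args) throws Exception {
            Parameters params = new Parameters();

            // Builder that watches reports.properties for changes.
            ReloadingFileBasedConfigurationBuilder<FileBasedConfiguration> builder =
                    new ReloadingFileBasedConfigurationBuilder<FileBasedConfiguration>(PropertiesConfiguration.class)
                            .configure(params.fileBased().setFile(new File("reports.properties")));

            // Check the file for changes once a minute; no application restart needed.
            PeriodicReloadingTrigger trigger = new PeriodicReloadingTrigger(
                    builder.getReloadingController(), null, 1, TimeUnit.MINUTES);
            trigger.start();

            // Always fetch the configuration through the builder so you see fresh values.
            String frequency = builder.getConfiguration().getString("reportId.1.frequency");
            System.out.println(frequency);
        }
    }

If the file lives outside the deployable artifact (on disk, or fetched from S3 to local disk), this also gives you option (c) without the redeploy drawback.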
I'm working with FileNet P8 Content Platform Engine 5.2.1 on my current project. I want to export classes, property templates, the folder hierarchy and other assets from a specific FileNet object store (there are three object stores on the server) via FileNet Deployment Manager. When I try to retrieve the object store data, all existing object stores are retrieved. How can I retrieve only a specific object store via FileNet Deployment Manager? How should I configure my environment for this?
When you "Retrieve Data for Object Store Half Map" you are given a choice of the "Source For Object Store Data".
If you choose "Deploy Data Set File" you will be able to get data from a single OS.
What I do is create a "Security" document: a content-less document object within my desired OS that records all the security information I will need for FDM.
I then create an Export Manifest, adding that record with no include options.
Export that document.
Then run the Retrieval for OS, Security, Service, and Connection Points.
You may want to familiarize yourself with the FDM instructions.
When you create a mapping and load the object store, it creates an XML file which contains a list of object stores. Try modifying that XML and see if it works.
PS: I have not tried this solution.
Regards,
Manju
I'm currently working on a POC with Couchbase, using Spring Data to put & get documents on/off a bucket on a cluster.
As I'm working in a big company, I'm lucky they gave me a bucket, but I still don't have admin rights on the cluster, so I only have access to the bucket.
But digging into the Spring Data documentation, I'm not able to find a way to retrieve documents without creating views on the server (I'm getting errors like "Unknown query param"). With the Couchbase Java SDK I am able to, through N1QL queries, but the use of the Spring Data layer is mandatory.
The answers I found always point me in the server-side function direction, e.g. https://stackoverflow.com/a/30928169/3744307
What I would like to find is a way to add a repository method like
List<Receipt> findReceiptByAccount(String account)
without having to specifically declare the function server-side.
Is this possible, or do I have to send a request to the administrators to create functions for me every time I add a findByX method?
Thanks for your time,
What version of CB is it?
I think that prior to 4.5, N1QL access (which you seem to have) is enough to build your index yourself!
With Spring Data Couchbase 2.x, that would use N1QL in the background, and it would work with a single primary index (although having one index per repository entity class would be best for performance). Maybe you can ask your admin to create that index once?
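For illustration, a repository along those lines might look like the sketch below with Spring Data Couchbase 2.x; the entity, its fields and the finder name are assumptions to adapt to your own model:

    import java.util.List;

    import org.springframework.data.annotation.Id;
    import org.springframework.data.couchbase.core.mapping.Document;
    import org.springframework.data.couchbase.core.query.N1qlPrimaryIndexed;
    import org.springframework.data.repository.CrudRepository;

    // Hypothetical entity; class and field names are illustrative only.
    @Document
    class Receipt {
        @Id
        private String id;
        private String account;

        public String getId() { return id; }
        public String getAccount() { return account; }
    }

    // Derived query methods on the repository are translated to N1QL, so no
    // server-side view has to be declared for this finder to work.
    @N1qlPrimaryIndexed   // relies on (or creates, if index management is enabled) the bucket's primary index
    interface ReceiptRepository extends CrudRepository<Receipt, String> {

        // Roughly: SELECT ... FROM `bucket` WHERE account = $1 AND _class = '...Receipt'
        List<Receipt> findByAccount(String account);
    }

The primary index itself is a one-off N1QL statement (CREATE PRIMARY INDEX ON `yourBucket`), so even without cluster admin rights you only need someone with the right bucket permissions to run it once.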