View list of agents and recipients in OBIEE

I am trying to get a list of all of the agents set up in our OBIEE instance.
I've found the agents in the table S_NQ_JOB.
However, I need the report that the agent is using, and a list of the recipients of that agent. Is that possible?

The analysis used by an agent is stored in S_NQ_JOB_PARAM.
Recipients are stored in the agent's XML, unless they are retrieved by a conditional analysis, in which case you only get the list of recipients once the analysis has actually run.

The XML of the agent is stored in a file in the web catalog file system. You only see the current XML; if the agent is changed or deleted, you won't have the past code.
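For a quick overview, a minimal SQL sketch of the join between the two tables is below. The column names used here (NAME, JOB_PARAM_INDEX, JOB_PARAM) are assumptions based on a typical scheduler schema and may differ in your OBIEE version, so verify them against your own S_NQ_JOB and S_NQ_JOB_PARAM definitions.

-- Sketch only: list each agent and its job parameters; one of the
-- parameter rows holds the catalog path of the analysis the agent runs.
-- Column names NAME, JOB_PARAM_INDEX and JOB_PARAM are assumptions; check your schema.
SELECT j.job_id,
       j.name            AS agent_name,
       p.job_param_index,
       p.job_param       AS parameter_value
FROM   s_nq_job j
       JOIN s_nq_job_param p ON p.job_id = j.job_id
ORDER  BY j.job_id, p.job_param_index;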

Related

How to get a validation error message into an attribute from the ValidateRecord processor in NiFi

I am trying to validate JSON using the ValidateRecord processor via AvroSchemaRegistry. I need to store the validation error message in a SQL table, so I tried to capture the error message in an attribute, but I am unable to do so. Any idea how to do it?
After your ValidateRecord processor, you can route flow files that are 'invalid' to a separate log and from there to your SQL table; you can do the same for those that 'fail'. I am assuming that by 'error message' you mean the 'bulletin' that occurs when the processor can neither validate nor invalidate the flow file based on your schema.
A potential solution to this is to use the SiteToSiteBulletinReportingTask
You can build a dataflow to receive these bulletin events, manipulate them as you want and store them in a location of your choice for your auditing needs.
From the sounds of it, the SiteToSiteBulletinReportingTask should be able to achieve what you want. To implement this, add a SiteToSiteBulletinReportingTask under 'Reporting Tasks' in the NiFi Settings.
You can name your input port and have it flow towards your SQL store and you should have what you're after.
You need to allow NiFi nodes to receive data via site-to-site on the input port and you also need to grant the correct permissions on the root process group so the nodes are able to see the component, view and modify the data.
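For the site-to-site part, the relevant keys live in $NIFI_HOME/conf/nifi.properties; a rough sketch with placeholder values is below (host name, port and security setting are assumptions to adapt to your cluster).

# Placeholder values - enable site-to-site so nodes can receive data on input ports
nifi.remote.input.host=nifi-node1.example.com
nifi.remote.input.secure=true
nifi.remote.input.socket.port=10443
nifi.remote.input.http.enabled=true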
Side note: I would usually log everything, and route all failures and invalid records to log files, which I then put into a store such as HBase or SQL. One suggestion I've seen is to configure the logging subsystem to additionally send specific error categories to a destination of your choice (i.e. active notification versus passive parsing of logs). NiFi leverages a very flexible logback system (an evolution of log4j). The best part: changes to the $NIFI_HOME/conf/logback.xml configuration file do not require an instance restart and are picked up within 30 seconds or less.
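To illustrate that side note, a hedged sketch of an extra appender in $NIFI_HOME/conf/logback.xml is below; the appender name, file paths and the logger class (assumed to be org.apache.nifi.processors.standard.ValidateRecord) are assumptions to adjust for your install.

<!-- Sketch only: write this processor's warnings/errors to their own rolling file -->
<appender name="VALIDATION_ERRORS" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <file>logs/validation-errors.log</file>
    <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
        <fileNamePattern>logs/validation-errors_%d.log</fileNamePattern>
        <maxHistory>10</maxHistory>
    </rollingPolicy>
    <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
        <pattern>%date %level [%thread] %logger{40} %msg%n</pattern>
    </encoder>
</appender>
<!-- Route this processor's log output (assumed class name) to the new appender -->
<logger name="org.apache.nifi.processors.standard.ValidateRecord" level="WARN" additivity="true">
    <appender-ref ref="VALIDATION_ERRORS"/>
</logger>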

Azure Blob Storage lifecycle management - send report or log after run

I am considering using Azure Blob Storage's built-in lifecycle management feature for deleting blobs of a certain age.
However, due to a business requirement, it must be possible to generate a report or log statement after each daily execution of the defined rule set. The report or log must state the number of blobs that were affected, e.g. deleted, during the run.
I have read through the documentation and Googled to see if others have had similar inquiries, but so far without any luck.
So my question: does any of you know if and how I can get the built-in lifecycle management feature to do one of the following after each daily run:
Add a log statement to the storage account containing the Blob storage.
Generate and send a report to an endpoint I define.
If the above can't be done I will have to code the daily deletion job and report generation myself, which surely I can do, but I would like to use the built-in feature if possible.
I summarize the solution below.
If you want to know which blobs are deleted every day, we can configure Diagnostics settings in the storage account. After doing that, we will get logs for the read, write, and delete requests against the blob service. For more detail, please refer to here and here.
Regarding how to enable it, we can use the PowerShell cmdlet Set-AzStorageServiceLoggingProperty.
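As a rough sketch of that cmdlet in use (assuming the Az.Storage module; the account name, key and retention period below are placeholders):

# Placeholder account name and key - replace with your own
$ctx = New-AzStorageContext -StorageAccountName "mystorageaccount" -StorageAccountKey "<account-key>"
# Enable Storage Analytics logging for delete and write requests against the Blob service,
# keeping the logs for 7 days; the logs are written to the $logs container of the account.
Set-AzStorageServiceLoggingProperty -ServiceType Blob -LoggingOperations Delete,Write -RetentionDays 7 -Context $ctx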

OEM 13C Log File Monitoring

I have installed OEM 13c and deployed a couple of agents and want to test out the Log File Monitoring utility. I have enabled it and added a log file to monitor.
When I go to test it out, no alerts are raised when matching entries are written to the log file. On the agent server, I have tailed the file and can see the messages arriving in the log file.
Does anyone have experience adding log files to OEM? I could have configured it wrong. Or are there any troubleshooting steps I can follow to see whether the server is even contacting the agent to read the log file? The status of the agent is good, with no incidents.
Without access to the system, it would be difficult to tell you the exact cause of this issue. However, I can list a few potential causes of this issue that I have experienced personally:
Permissions. The Oracle Enterprise Manager agent is very convoluted when it comes to system permissions on a remote server. The agent can be owned and run as any number of users, but during metric evaluation it may also need sudo or PAM-authentication permissions to access certain entities on the server. Depending on the authentication profiles on that server, this could be the cause of your issue. There are ways to grant the agent access through the PAM stack if that is necessary.
Syntax. The wildcard syntax in the OEM GUI can be a little confusing as well. I would play with the wildcard elements a bit on the "String" component to ensure that it isn't as simple as adding wildcards to the beginning and end of the string. Without diving into the binaries of the agent plugins, it is difficult to assess exactly how the agent evaluates this particular metric.
One suggestion I would have is to go through the agent commands. There are specific commands you can run to manually force an agent to evaluate a particular metric for a particular target. This can allow you to manually trigger the metric collection locally on the server and evaluate what exactly is being performed at the agent level.
On the system I was running (12c) the command was as follows:
emctl control agent runCollection <hostname>:host host_storage
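A couple of other standard agent-side checks (run as the agent owner on the monitored host) are shown below; the runCollection line just repeats the 12c example above.

emctl status agent        # shows whether the agent is up and when it last uploaded to the OMS
emctl upload agent        # forces an immediate upload of collected metric data to the OMS
emctl control agent runCollection <hostname>:host host_storage    # manually trigger a collection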

Send email automatically in Microsoft Dynamics CRM

I have a field in the Quote entity called new_date_expiration (type = Date).
I want to automatically send an email in Microsoft Dynamics CRM when new_date_expiration equals the system date.
Any ideas?
Thanks in advance.
You can do this using the CRM Workflow Automation Tool combined with an on-demand workflow. The setup process is quite simple:
Create a new workflow in CRM (make sure the "run on demand" option is turned on) which generates and sends your email.
Create an advanced find query on your entity which defines the criteria of when you want the email to be sent. In your case it sounds like the condition would be something like Expiration Date [equals] Today.
Once you are satisfied with the advanced find query, copy the source FetchXML for it (a rough sketch of what this might look like appears after these steps).
Configure the CRM Workflow Automation Tool (instructions on how to do this are included in the download package) to run the workflow created in the first step using the FetchXML you just copied.
Set your server to run the tool daily at, for example, 8 AM using Windows Task Scheduler.
Once you have this configured, every day at the specified time the results of the FetchXML query will be pulled and the provided workflow will be run against them. This effectively accomplishes exactly what you want without a bunch of timeouts.
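For reference, a hedged sketch of what the copied FetchXML might look like for that condition is below; the new_date_expiration attribute comes from the question, and the exact XML your advanced find produces will differ in detail.

<!-- Sketch only: quotes whose new_date_expiration is today -->
<fetch>
  <entity name="quote">
    <attribute name="quoteid" />
    <attribute name="name" />
    <filter type="and">
      <condition attribute="new_date_expiration" operator="today" />
    </filter>
  </entity>
</fetch>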

DB job to generate/email Oracle report output

The task is to have an Oracle report generated daily, automatically, and e-mailed to a user.
So I've sort of got this working (it works if I hardcode one of the reports server names below).
I created a job on the database that will generate the report. I'm able to get the report to email as a PDF to the destination with this command:
UTL_HTTP.REQUEST('http://server/reports/rwservlet?server=specific_report_server&report='||p_report_name||'&userid='||p_connstring||'&destype=mail'||p_parameters||'&desname='||p_to_recipientlist||'&cc='||p_cc_recipientlist||'&bcc='||p_bcc_recipientlist||'&subject=%22' || REPLACE(p_subject,' ','%20') || '%22&paramform=no&DESformat=pdf&ENVID='||p_envid);
That works great...
The problem, however, is that my organization has two report servers that are load balanced. Our server team could take down one of the servers without really any warning, so I can't just hardcode the report server name (the ?server= parameter above): it will work for a while, then stop working when that server goes down.
My server team asked me to look for a way to pull the server name from the formsweb.cfg file or from a default.env value within the job (there are parameters in there that hold the server name). The idea is that the 'http://server' piece will direct the report to be run on the appropriate server, and the first part of the job could get the reports server name from the config file on the server where the report is run. I'm not sure whether this is possible from the database level, or how to do it. Any ideas?
Is there a better way that this can be done, perhaps?
If there are two load-balanced servers, that strongly implies that the network folks must have configured some sort of virtual IP (VIP) for the service. You (and everyone else) should be using that VIP rather than a specific server name.
For example, if you have two servers reportA.yourdomain.com and reportB.yourdomain.com, you would almost certainly create a VIP for reports.yourdomain.com that load balances between the two servers (and knows whether one of the servers is down or whether a new reportC server has been added). This VIP would either do the load balancing itself or would point to an actual physical load balancer that distributes the traffic. All applications would reference the reports.yourdomain.com VIP rather than any hard-coded server names.
