LogParser: how to query Exchange Tracking Log for multiple (inactive) addresses? - exchange-server

I have a task to find abandoned mailboxes on my Exchange servers, meaning mailboxes with no activity for the last 90 days.
For that I wrote the following LogParser query:
SELECT
TO_TIMESTAMP(EXTRACT_PREFIX(TO_STRING([#Fields: date-time]),0,'.'),'yyyy-MM-ddThh:mm:ss') AS DATE,
recipient-address as Receiver,
sender-address as Sender
FROM '[LOGFILEPATH]'
WHERE (sender-address='mrsmith@contoso.com' OR recipient-address='mrsmith@contoso.com') AND Date > TO_TIMESTAMP('2017-01-22 22:18:00', 'yyyy-MM-dd hh:mm:ss')
GROUP BY Receiver, Date, Sender
But how do I pass multiple addresses there? If I need to check, say, 50 addresses, how should I pass them to the LogParser query?
Thank you!

Unfortunately, LogParser's query parameters must be specified on the command line, and that's not handy for multi-valued parameters with many values.
You could use a two-step approach instead: first generate the .sql file, populating an IN clause with the list of addresses (note that LogParser separates IN list items with semicolons, not commas), and then run the .sql file.
Your example would become something like this:
... WHERE sender-address IN ('mrsmith@contoso.com'; 'mrbrown@contoso.com'; ...) OR recipient-address IN ('mrsmith@contoso.com'; 'mrbrown@contoso.com'; ...) ...
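For instance, the generated .sql file might look like this (a sketch based on the query above; the addresses are placeholders to be filled in by whatever generates the file):
SELECT
  TO_TIMESTAMP(EXTRACT_PREFIX(TO_STRING([#Fields: date-time]),0,'.'),'yyyy-MM-ddThh:mm:ss') AS Date,
  recipient-address AS Receiver,
  sender-address AS Sender
FROM '[LOGFILEPATH]'
WHERE (sender-address IN ('mrsmith@contoso.com'; 'mrbrown@contoso.com')
  OR recipient-address IN ('mrsmith@contoso.com'; 'mrbrown@contoso.com'))
  AND Date > TO_TIMESTAMP('2017-01-22 22:18:00', 'yyyy-MM-dd hh:mm:ss')
GROUP BY Receiver, Date, Sender
Any scripting language can write this file out, interpolating the same address list into both IN clauses, before running LogParser with the file: syntax (e.g. LogParser file:query.sql; the file name is up to you).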

Related

How to create a single query with multiple WHERE columns whose values are optional

For example:
select * from a where id=#id and date between #date1 and #date2
When I pass #id=1, #date1='09/29/17' and #date2='09/30/17', the output is correct.
When I pass #id=0 (meaning the id is not found in the table), #date1='09/29/17' and #date2='09/30/17', the output should filter on the date range only and ignore the id column.
In other words, I want filters that behave like a shopping website's.
The way you would accomplish this is to use an OR on the ID column, leaving your SQL looking like:
SELECT *
FROM a
WHERE (#id=0 OR id=#id)
AND date BETWEEN #date1 AND #date2
The thing to be careful of with this approach is that, depending on your DBMS, you will likely end up in a scenario where the cached execution plan is optimized for whichever execution path the first call took (often called parameter sniffing). If it's a very big table, this could end up causing problems.
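If your DBMS is SQL Server, for example, one common mitigation is to request a fresh plan per execution with OPTION (RECOMPILE) (a sketch, using T-SQL @-parameters in place of the # placeholders above):
SELECT *
FROM a
WHERE (@id = 0 OR id = @id)
  AND date BETWEEN @date1 AND @date2
OPTION (RECOMPILE); -- a plan is compiled for the actual parameter values on each run
This trades a little compile time on every call for a plan that matches the parameters actually supplied.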

How to generate sequential numbers using UCM Oracle iDOC Script?

I want to create a metadata field for a certain Check-In Profile. This field is Info Only and looks like this:
IFAP-XXXX.DD.MMM/YY
I already have done this code:
<$dprDefaultValue="IFAP-" & formatDateWithPattern(dateCurrent(),"MMM/yy")$>
And the output is: IFAP-.01Jan/16
What I need is to put a sequential number where "XXXX" is, starting at 0800 and incrementing every time a user checks in. For example: IFAP-0801.01.Jan/16. How can I do that?
Getting a unique sequence number can be challenging. One way would be to write a custom service that executes a query against the database (which controls the sequence) and responds with the number. You could then call <$executeService("MY_CUSTOM_SEQUENCE_SERVICE")$> to get the value.
One of the issues with the above approach is what happens if the check-in fails (due to a filter or something else): then you have accidentally used up a value.
Another approach would be to use a database trigger to replace XXXX with the next value from the same database sequence.
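As a sketch of the database side (the sequence and column names here are made up for illustration), an Oracle sequence starting at 0800, plus a query the custom service or trigger could run to get a zero-padded value, might look like:
CREATE SEQUENCE ifap_seq START WITH 800; -- hypothetical sequence name
-- returns '0800', '0801', ... as a zero-padded four-digit string
SELECT LPAD(TO_CHAR(ifap_seq.NEXTVAL), 4, '0') AS ifap_number FROM dual;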

Listing messages with more precision than yyyy/mm/dd

I'm trying to get all the messages sent to a user after a certain point in time, by using the Gmail API. I've successfully retrieved messages after a certain date by using the query q=after:2015/01/19 in the API Explorer.
I would like to be more specific than this and specify an hour and a minute of the day. Is this possible? I ask since the Advanced Search specification only lists the most common operators.
You can use a search query to list messages after a certain date with second-level accuracy.
Use the search term after:SOME_TIME_IN_SECONDS_SINCE_EPOCH; Gmail supports the after keyword with a Unix timestamp in place of the yyyy/mm/dd format.
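For example, 2015/01/19 00:00:00 UTC is 1421625600 seconds since the epoch, so a query equivalent to the one above, but adjustable down to the second, would be:
q=after:1421625600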

Extract distinct values

I have a big table of emails in which many emails are repeated, and I want to extract the DISTINCT emails from it. I can't do it due to the unavailability of DISTINCT and the limitations of GROUP EACH BY and the TOP function (error: Resources exceeded during query execution).
A simple query with GROUP BY (without aggregate functions) should be enough:
SELECT email
FROM YourTableWithDuplicateEmails
GROUP BY email
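If you also want to see how often each address occurs, the same GROUP BY can carry an aggregate (a minor variation on the query above):
SELECT email, COUNT(*) AS occurrences
FROM YourTableWithDuplicateEmails
GROUP BY email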

Single Database Call With Many Parameters vs Many Database Calls With Few Parameters

I am writing a Content Management System which can store meta-data about different document-types. Each document-type has its own set of meta-data fields. For example a Letter has fields like "To", "From", "ToAddress", "FromAddress" etc whereas a MinutesOfMeeting has fields like "DateHeldOn", "TimeHeldOn", "AttendedBy" etc.
I am saving this information in the database in two kinds of tables: General and Specific. The General table stores information common to all types, such as DocumentOwnerName, DocumentCreatedDate, DocumentSize, etc. The Specific "table" is actually a set of 35 different tables, one for each document-type.
I have a page containing a grid that shows a list of documents, one record per document. Since the grid shows documents of all types, the first row may show a Letter, the second a MinutesOfMeeting, the third a Memo, and so on.
I have also made a search feature where the user can set criteria that determine which documents are retrieved. To make it work, there are four search-related parameters for each field in each of the specific tables, and all of these parameters are passed to a central procedure, which then filters records based on the criteria.
The problem is that, with 35 document-types of roughly 10 fields each, I end up with more than a thousand parameters for the procedure. This is a maintenance nightmare, and I am looking for a solution.
One solution is to deal with each specific table individually, getting back IDs, and then union them. This is fine, except that I have to make 36 separate calls to the database: one for each specific table plus one for the general table.
It all boils down to a simple architecture choice: should I make a single database call passing many parameters, or many database calls passing few parameters? Which approach is preferable, and why?
Edit: The web server and database server are on the same machine, so network speed shouldn't matter.
When designing an API where I need a procedure to take a large number of related parameters, or even a variable list of parameters, I use record types, e.g.:
TYPE param_type IS RECORD (
  ToName      VARCHAR2(200), -- "To" and "From" are reserved words, so these two are renamed
  FromName    VARCHAR2(200),
  ToAddress   VARCHAR2(400), -- the types here are illustrative
  FromAddress VARCHAR2(400),
  DateHeldOn  DATE,
  TimeHeldOn  VARCHAR2(20),
  AttendedBy  VARCHAR2(400)
);
PROCEDURE do_search (in_params IN param_type);
The structure of the record is up to you, of course. If the procedure is coded to ignore the record elements that are NULL, then all the caller needs to do is set those elements that are required, e.g.:
DECLARE
  p param_type;
BEGIN
  p.DateHeldOn := DATE '2012-01-01';
  do_search(p);
END;
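Inside do_search, "ignoring the record elements that are NULL" typically means each filter collapses to true when its parameter is NULL. A minimal sketch of that pattern (the table and column names are assumptions, not from the original):
PROCEDURE do_search (in_params IN param_type) IS
BEGIN
  FOR r IN (
    SELECT m.doc_id                    -- hypothetical ID column
    FROM minutes_of_meeting m          -- hypothetical specific table
    WHERE (in_params.DateHeldOn IS NULL OR m.date_held_on = in_params.DateHeldOn)
      AND (in_params.AttendedBy IS NULL OR m.attended_by = in_params.AttendedBy)
  ) LOOP
    NULL; -- collect r.doc_id into the result set here
  END LOOP;
END;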
