Where in the O365 mail flow are Tenant Allow/Block List rules and journal rules applied?

Based on this diagram, does anyone know at what point the following O365 features are applied?
When are Tenant Allow/Block list exceptions applied to a message?
I think it happens after Anti-Malware but before Mail Flow Rules, based on my testing.
When would a message be journaled if using a legacy journal rule?
Is it within the EOP steps, or afterwards?
I've read endless MS articles but cannot find a clear answer.

Is there a way to hand a document around with Power Automate?

I have a workflow that requires me to hand a file around my team, and each team member needs to do something with this document. They have to do it in a certain order, one after another.
The current solution is that I send an email to the first person with this file and wait until I receive the document back. Then I send the received document to the next person and so on...
I already looked at all the connectors; the email-with-options action from the Outlook connector and the Approvals connector look promising.
Getting the file into the workflow and attaching it to an email is easy, but I have been stuck for quite a few hours now on how to get the received file back into the workflow. I should add that in the ideal case the file goes directly back into the workflow without taking a detour through my mailbox.
There are a bunch of commercial solutions out there, e.g. Adobe Sign, but I would really like to solve this without having to upload my files to some other service and rely on another company (other than Microsoft, obviously).
I would really appreciate any suggestions on how one could solve this task!
Thanks a lot.
Short Answer
You need shared storage that all members of the process can access; the file should then be opened and updated from there.
My recommendation (if your company's Teams/O365 groups are set up well) is to just use a specific folder in your team's SharePoint site (O365 group), which will be accessible via Teams, a browser, or any of the other applications required.
This can then be done in the approval flow you're playing with, or via one or several approval flows within the context of a BPF.
Those methods:
Approval Flow
Business Process Flow (BPF)
Detail
Shared Storage
This won't be hard to sort out. If the people involved are only a few in a larger team and the data is sensitive, create a separate folder and restrict access. Otherwise, you should at least restrict write access to ensure that only the people involved can modify the file.
As mentioned earlier, the only thing that could hold you back is the company's setup with regard to O365 groups, Azure (and normal) AD groups, and the literal teams. But it really shouldn't be an issue for this.
If the group infrastructure is bad, that's still fine: you can just lean into it and create a brand-new team in Teams. Once you've done that, find the new O365 group it creates, and then manage it all from SharePoint (you can even add a tab in the Teams client to manage the process!) to ensure that the permissions are just right.
Approval Flow
Build the logic first. It should be relatively simple:
Person A performs their task and clicks to say it's done.
Person B does theirs, and so on.
Then you can start worrying about the file, and how it's accessed and from where.
This is by far the easiest way to do things, and it allows you to keep things as simple as possible. For the logic, just plot it out step by step; once you have that, look at it and see where you can economise, and either loop elements or use variables so the flow doesn't depend on the specifics you began with.
With any luck, you'll soon have it doing most of the work for you. You can even ensure that copies of the file are made at each stage and are then archived, if you like.
Business Process Flow
This is my preferred option because it codifies the process, and you can make things as complicated as you like in the flow(s) themselves, separately.
The BPF will ably show the organisation how your team performs the task, i.e. Johnny edits, then Billy edits, then Jenna edits. However, at each stage (or for bespoke tasks) you can call on different flows to perform whatever tasks you need performed.
There are positives and negatives to this approach, mainly:
Positive - You can set it up without ANY automation, and you can use it to manage your current manual process.
Positive - Later you can start to introduce the automations you need to process what is required.
Negative - This is advanced stuff, and it's not only difficult to learn, but it's difficult to get right. That said, the end result will be amazing.
I want to share my final solution, based on Eliot Coles' answer and lots of internet research.
Basically, I automated my mailbox, meaning that I use the Outlook connector to send and receive mails and handle the attachments between those steps.
The flow is triggered manually, where the user has to enter the email addresses of all the recipients and select the file to pass around. I then store the recipients in an array to be able to loop over them later. Additionally, a unique ID is generated to identify the emails belonging to this flow later on.
Next, there is a loop over all recipients. The file is sent to the first recipient in the array, and another loop waits for the recipient to reply to the message before continuing with the next one.
Finally, a close look at the "receive-loop". This runs until an email with an attachment arrives from the recipient. All emails filtered by the ID generated earlier are retrieved, and if there is one with an attachment, that attachment is stored in the file variable. If no email matches the criteria, the flow waits for some time and then checks the mailbox again.
At the very end, I send an email back to myself with the last received file, as the workflow is finished at that point.
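For those who prefer to read the control flow as code, here is a minimal Java sketch of the same logic. It is not real Power Automate code; sendMailWithAttachment and findReplyWithAttachment are hypothetical stand-ins for the Outlook connector actions ("Send an email", "Get emails") described above, and the addresses are placeholders.

import java.util.List;

public class PassAroundFlow {
    static final long POLL_INTERVAL_MS = 60_000; // the "Delay" step between mailbox checks

    public static void run(List<String> recipients, String flowId, byte[] file)
            throws InterruptedException {
        for (String recipient : recipients) {
            sendMailWithAttachment(recipient, flowId, file);        // send the current version
            byte[] reply = null;
            while (reply == null) {                                 // the "receive-loop"
                Thread.sleep(POLL_INTERVAL_MS);
                reply = findReplyWithAttachment(recipient, flowId); // filter by the generated ID
            }
            file = reply;                                           // hand the updated file onward
        }
        sendMailWithAttachment("me@example.com", flowId, file);     // final mail back to myself
    }

    // Hypothetical stubs standing in for the Outlook connector actions.
    static void sendMailWithAttachment(String to, String flowId, byte[] file) { }
    static byte[] findReplyWithAttachment(String from, String flowId) {
        return new byte[0]; // stub: pretend a reply with an attachment arrived
    }
}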

Designing microservices in practice

Yet another question on how to or how not to split up a microservice :-D
The scenario:
What do we need?
Sending emails at different points in time within the workflow of an e-commerce order process. These mails will contain order information.
What do we have?
1 x persistence service which retrieves order information
Several services which subscribe to order events and process the relevant use case (e.g. confirmation, delivery, invoice)
1 x service which can be triggered to send a mail
What's the next step?
Designing the architectural component which transforms the order information so that it fits the data structure of the email rendering service.
The current options are:
1) Have each processing service transform the already existing order information for the mail template and send it to the mail rendering service.
2) Have each processing service call a new service which aggregates and transforms the order information and then calls the mail rendering service.
Currently we're not sure yet if the data structures for the mail templates will be mostly common or if there will be differences.
So what do you think of these options in terms of cohesion, coupling and separation of concerns?
Do you need any more information? Any constructive thoughts are welcome!
Your software architecture should reflect your organizational structure; see Conway's law.
Do you have multiple teams, and do you want to minimize dependencies between the teams?
Are "services" large and complex enough to justify them being separated into modules?
Does the size of the product justify having advanced devops in place to orchestrate the microservices?
Do you need the flexibility in terms of deployment and replaceability of individual "services"?
If you can answer yes to most of these questions, it would make sense to go for microservices. Otherwise, you are just making your life complicated.
Frankly, microservices require a lot of coordination overhead, which makes sense only if the product is large enough. Most (small) projects are just fine with a monolithic MVC architecture.
Here is how I propose to proceed; it's how one of my projects' architectures does all SMTP-related stuff.
The API receives an HTTP request.
It persists the data it needs to the database.
It offloads the long-running and memory-intensive processing to the mail builder.
Optionally, the mail builder builds attachment files (XLSX, PDF, etc.).
The mail builder uploads them to the file server.
The mail builder offloads generic SMTP sending to the SMTP service.
I suggested this format because it allows you to scale the instances of each piece (the mail builder will have tons of instances) depending on bottlenecks in your processing pipeline.
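As a rough illustration of where the boundaries in that pipeline sit, here is a minimal Java sketch; all the type and method names are hypothetical, and the point is only that the builder and the sender live behind separate, independently scalable contracts.

import java.util.List;

// Hypothetical contracts for the pipeline above; names are illustrative only.

/** Builds the rendered mail (and any attachments) from persisted order data. */
interface MailBuilder {
    RenderedMail build(long orderId); // the long-running, memory-intensive work lives here
}

/** Generic sender that knows nothing about orders, only about finished mails. */
interface SmtpService {
    void send(RenderedMail mail);
}

/** The finished artifact handed from the builder to the sender. */
record RenderedMail(String to, String subject, String htmlBody, List<String> attachmentUrls) { }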
Given that you have asked this question under microservices, I am assuming you are asking it in reference to cloud-native patterns.
I suggest you start by looking at microservices patterns. An excellent site for these patterns is https://microservices.io/patterns/microservices.html.
Your question does not have the necessary details for anyone to provide educated advice on which patterns are suitable and which are not. So, I suggest you look at these few patterns...
https://microservices.io/patterns/data/shared-database.html
https://microservices.io/patterns/data/database-per-service.html
Also take a look at the event sourcing pattern:
https://microservices.io/patterns/data/event-sourcing.html
Hope this helps.

Where to begin with SNMP agent implementation?

Before I start, I realise there are a few SNMP-related questions here already, but not many seem to have been answered - that could mean I'm asking in the wrong place, but I don't know where else to go at the moment.
I've been reading up as best I can on SNMP for a couple of days but am finding it difficult to get my head around what is meant to be happening. The idea is that eventually we will integrate SNMP into our Java application server, which will allow the end users to incorporate it into their pre-existing Network Management Systems (NMS).
Unfortunately I'm feeling entirely confused by what is meant to be going on. What I understood from talking to the end users (which was unfortunately before any research) was that the monitoring allows their existing NMS to give their admin guys a view of the vital statistics in a tree-type display, giving them feedback on different parts of the system at a high level and allowing them to dig down into specific subsystems.
From reading around, we would implement an 'Agent' which has several defined interfaces allowing GET requests etc. to be processed and responded to. That makes sense, but I am at a loss to work out what the format of the communication is - there don't seem to be any specific examples of what any of the messages look like or how the information is encoded.
More of my confusion, though, is regarding the Management Information Base (MIB). I had wrongly assumed that the interface of the agent would allow the monitored attributes to be requested, and then in turn the values for those attributes, allowing any new agent to be started and detected without any configuration on the NMS end (with the exception of authentication in v3). This, if I understand correctly, is not the case; the agent must instead define MIBs which the NMS can use to determine those attributes. My confusion increases when people refer to thousands of existing MIBs that can be reused, which I don't understand. Is the intention that a single MIB definition can be used to describe a particular attribute of a network device (something simple like whether a router's internet connection is up: yes/no) across many different devices? If so, I don't believe our software has anything to monitor in common with any other device/system, but should we be looking for already existing MIBs anyway? At the moment I don't really see a good rationale for such a system; surely it would be easier for the agent to export that information itself - so I'd appreciate it if someone could enlighten me!
I think it would help if I were able to set up a simple SNMP agent and some sort of client; I could then begin to see the process and eventually inspect the communication between the two, but I am finding it difficult to find anywhere that provides information on doing such a thing. Nagios has been recommended to us as a test 'client'/NMS, but their 'get started quick' section recommends downloading a 600 MB virtual machine - surely there is a quicker way to get started?
Any help or suggestions will be appreciated. I have been through the Wiki page, but it doesn't seem to go into much detail about MIBs, and not having had to deal with anything like the referenced RFCs before, while they may contain all of the information, they seem completely impenetrable to me at the moment. Or are there any books that can be recommended for an overview and implementation of v3?
Thanks for reading and even more thanks if you think you can help!
It seems to me that you have read all the SNMP information piece by piece in a disorganized way. This is highly inadvisable and of course leads to confusion.
What about forgetting what you have learnt so far and diving into a good book such as Essential SNMP?
http://shop.oreilly.com/product/9780596008406.do
Click the Google Preview icon to preview it.
You cannot depend on a network forum to teach you the ABCs; I've found that impractical.
The communications interface is SNMP. That's the protocol used for transmission (usually on top of UDP). The thing that services information requests is an SNMP Agent. The thing that sends information requests is an SNMP Manager.
The definition of what information should be made available by the Agent, and requested by the Manager, goes in a MIB. A MIB is the "glue", a directory of what sort of things any particular system can/should offer. It maps numeric codes to names and types that allow us to make sense of the data, much like how a phone directory maps phone numbers to people's names and addresses.
Generally you would create, ship, and use your own MIBs that describe aspects specific to your own product, but you are supposed to service some standard information requests as well, which are defined in existing MIBs. Yes, there are thousands of other pre-existing MIBs, and the likelihood that you need more than one or two of them is remote. They are typically published versions of MIBs for existing products.
The conventional way to "toy around" is to install Net-SNMP (a software suite that includes an agent implementation and allows you to "bolt on" your own logic and your own MIBs fairly easily), then examine the results using a packet capturer like Wireshark.
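If you want to poke at the agent from the Java side as well (since your product is a Java application server), a minimal manager-side GET using the open-source SNMP4J library looks roughly like the sketch below; the agent address and community string are assumptions for a local Net-SNMP install.

import org.snmp4j.CommunityTarget;
import org.snmp4j.PDU;
import org.snmp4j.Snmp;
import org.snmp4j.event.ResponseEvent;
import org.snmp4j.mp.SnmpConstants;
import org.snmp4j.smi.*;
import org.snmp4j.transport.DefaultUdpTransportMapping;

public class SnmpGetExample {
    public static void main(String[] args) throws Exception {
        // SNMPv2c target: a local agent (e.g. Net-SNMP's snmpd) with the "public" community
        CommunityTarget target = new CommunityTarget();
        target.setCommunity(new OctetString("public"));
        target.setAddress(GenericAddress.parse("udp:127.0.0.1/161"));
        target.setVersion(SnmpConstants.version2c);

        // GET for sysDescr.0 (OID 1.3.6.1.2.1.1.1.0) - the MIB is what tells
        // you that this numeric OID means "system description"
        PDU pdu = new PDU();
        pdu.setType(PDU.GET);
        pdu.add(new VariableBinding(new OID("1.3.6.1.2.1.1.1.0")));

        Snmp snmp = new Snmp(new DefaultUdpTransportMapping());
        snmp.listen();
        ResponseEvent event = snmp.send(pdu, target);
        if (event.getResponse() != null) {
            System.out.println(event.getResponse().getVariableBindings());
        }
        snmp.close();
    }
}

Watching this exchange in Wireshark is a quick way to see how the encoded messages actually look on the wire.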
For a fuller implementation in production you may stick with Net-SNMP, or write your own Agent software, or do what I did and create a hybrid of the two that's a little more flexible and performant but uses Net-SNMP's backend for handling all the low-level SNMP stuff.
Your first step, though, is to read a book or some other teaching material that can clear all your misconceptions, because guesswork won't cut it.
I had success using the samples from this page. Both the shell and Perl NetSNMP code were very straightforward to implement and query.

Design Question for Notification System

The original post is at https://stackoverflow.com/questions/6007097/design-question-for-notification-system
Here is more clarification of the problem: the purpose of the notification system is to get users notified (via email for now) when content of the site has changed or been updated, or a new posting is made. It can be treated as a notification system where people define a rule or keyword for a 3rd-party site, and the notification system goes out, crawls the 3rd-party site, and creates inverted search indexes. A new link or document then shows up for a user-defined keyword or rule (more explanation at the bottom regarding the use case).
For a clarified use case: suppose I am a craigslist user looking for a used vehicle. I define a rule: "Honda Accord", year 1996, and price range from $2000 to $3000.
For the above use case to work, what is the best approach, and how can I leverage open-source technology such as Apache Lucene, Apache Solr, Apache Nutch, and Apache Hadoop to solve this use case?
You can think of it as building a search engine with a rule and keyword notification system on top. I just need some pointers and help on how to integrate these open-source packages to solve the use case.
Any help and pointers will be appreciated. We need three important components:
1) Web Crawler
2) Index Creator
3) Rule or keyword matcher
Any help will be greatly appreciated. I was referring to this wiki, which covers integrating Nutch and Solr for the above purpose: http://wiki.apache.org/nutch/RunningNutchAndSolr
Your question is a big one but I'll take a stab at it as I've designed and implemented systems like this before.
Ignoring user account management, your system will need to provide the means to:
retrieve new prospect data (web spider)
identify and extract pertinent results from prospect data (filtering)
collect, maintain and organize results (storage)
select results based on various metadata (querying)
format results for delivery to users (templating)
deliver formatted results to users (delivery)
If the scope of your project is small (say, fewer than 100 sites requiring spidering per day), you could probably get along with one of the many open-source web spiders, including wget, Nutch, WebSphinx, etc. You might need to provide instrumentation (custom software) for scheduling, monitoring, and control. If your project's scope is larger than this, you may need to "roll your own" spidering solution (custom software). Typically this would be designed as a distributed, parallel architecture.
For simple filtering, regular expressions would suffice, but for more complex tasks requiring knowledge of HTML layout (extract the textual component of the fifth list element (<LI/>) of the fourth table on the page) you'd need to use an XHTML parser. However you proceed, you'll need to provide custom software to conduct filtering based on your users' needs.
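As an illustration of the parser-based approach, here is a minimal Java sketch using the open-source jsoup library (one choice among many HTML parsers) to pull out exactly the element described above; the HTML string is a placeholder for a retrieved page.

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;

public class LayoutFilter {
    public static void main(String[] args) {
        String html = "<html>...</html>"; // placeholder: the document fetched by the spider

        Document doc = Jsoup.parse(html);
        // "the fifth list element of the fourth table on the page",
        // expressed as a CSS-style selector (yields an empty string if absent)
        String text = doc.select("table:nth-of-type(4) li:nth-of-type(5)").text();
        System.out.println(text);
    }
}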
While any database technology can be used to store results extracted from retrieved documents, using an engine optimized for text, like Apache Solr, will allow you to easily expand your search criteria as your needs dictate. Since Solr supports attaching metadata to each document and searching on it, it would be a good choice. You'll also need to provide custom software here to automate this step.
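To make that concrete, here is a sketch of how the craigslist rule from the question could be run against Solr using its Java client, SolrJ; the core name "listings" and the title/year/price/url fields are a hypothetical schema, not something Solr provides out of the box.

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.SolrDocument;

public class RuleMatcher {
    public static void main(String[] args) throws Exception {
        // Hypothetical "listings" core holding documents extracted by the spider
        SolrClient solr = new HttpSolrClient.Builder(
                "http://localhost:8983/solr/listings").build();

        // The user's rule: "Honda Accord", year 1996, price $2000-$3000
        SolrQuery query = new SolrQuery("title:\"honda accord\"");
        query.addFilterQuery("year:1996", "price:[2000 TO 3000]");
        query.setRows(20);

        QueryResponse response = solr.query(query);
        for (SolrDocument doc : response.getResults()) {
            System.out.println(doc.getFieldValue("url")); // candidate result to notify about
        }
        solr.close();
    }
}

Each stored user rule would be translated into a query like this on a schedule, with new hits queued for notification.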
Once you've selected a list of candidate results from Solr, any scripting language could be used to template them into one or more emails and inject them into your mail transport agent (MTA). This also requires custom software to automate the process (and, if required, to inject user-specific data into each message).
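A minimal Java version of that templating-and-delivery step might look like the following, assuming the JavaMail API and an MTA listening on localhost; the addresses and template text are placeholders.

import java.util.Properties;
import javax.mail.Message;
import javax.mail.Session;
import javax.mail.Transport;
import javax.mail.internet.InternetAddress;
import javax.mail.internet.MimeMessage;

public class Notifier {
    public static void main(String[] args) throws Exception {
        // Template a candidate result into an email body (placeholder values)
        String body = String.format("New listing for your rule \"%s\": %s",
                "Honda Accord, 1996, $2000-$3000", "http://example.com/listing/123");

        // Hand the message to a local MTA for delivery
        Properties props = new Properties();
        props.put("mail.smtp.host", "localhost");
        Session session = Session.getInstance(props);

        MimeMessage msg = new MimeMessage(session);
        msg.setFrom(new InternetAddress("alerts@example.com"));
        msg.setRecipient(Message.RecipientType.TO, new InternetAddress("user@example.com"));
        msg.setSubject("New matching listing");
        msg.setText(body);
        Transport.send(msg);
    }
}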
You should probably also look at Google's Custom Search API before diving into crawling the web yourself. That way, Google can help you with returning keyword-based search results, which you could later filter in your application based on your additional algorithms/rules etc., and make the whole thing work.

Logging *Business* Events - use logging framework?

Something here doesn't feel right to me, and so I would like the community's input - perhaps I am approaching this in the wrong way....
Q: Is it appropriate to use traditional infrastructure logging frameworks (like log4net) to log business events?
When I say business events, I mean I want a global log like this:
xx:xx Customer A purchased widget B.
xx:xx Widget B was dispatched from warehouse.
xx:xx Customer B payment declined.
Most traditional infrastructure logging frameworks have event levels something like this:
FATAL
ERROR
WARN
INFO
DEBUG
And of course these messages don't fit well into that. The best description would be INFO, but these are important events, and INFO is of very low importance.
I would still like this as a 'log' (i.e. I don't want to have to extract it from my business objects each time I want to see it).
Seems to me I have two options:
1) Use a framework like log4net and just define a special logger for this (and live with the fact that it doesn't feel right).
2) Provide a service for performing this that doesn't rely on traditional logging services.
I'm leaning towards 2. What has anyone else done in similar situations?
Thanks!
What you're wanting sounds like an auditing service, not a logging service. If I'm right, your goals are to keep track of these business events for historical and maybe even reporting purposes. You can use the details in the audit to, for lack of a better phrase, place blame for events that happen in the system.
I probably wouldn't use a logging system, like log4j, for this purpose. In our system, auditing is a first-class citizen, as a full service.
--
HTH,
Dusty
Leave the logger for things having to do with the program, not the business. It is just a tool to help the developers.
Write your own system to log business events. If it is a business requirement to have a record, you will want something you have control over and you will need to use the logger above to keep track of how it works.
Basically, #2 in your question.
To me, the idea of a Business Event is that it plays a role in some future business processing, anything from actually triggering business actions to simply being available for analytics.
Hence, it has completely different QoS requirements and needs its own API.
Conceivably that initially maps down to logging, but in the future it could go to reliable messaging or a DB.
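As a sketch of what "its own API" might mean in practice (in Java; the names are hypothetical), callers would depend on a small business-event interface, with the initial implementation mapping down to a dedicated log4j logger and later swappable for messaging or a database writer:

import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

/** Hypothetical business-event API: callers depend on this, never on a logging framework. */
interface BusinessEventSink {
    void record(String subject, String action, String object);
}

/** Initial implementation that maps events onto a dedicated log4j logger. */
class Log4jBusinessEventSink implements BusinessEventSink {
    private static final Logger LOG = LogManager.getLogger("business.events");

    @Override
    public void record(String subject, String action, String object) {
        LOG.info("{} {} {}", subject, action, object); // e.g. "Customer A purchased widget B"
    }
}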
These sound like the sorts of things that your customers might potentially want to query or report on from within your application - the obvious choice would be the database.
In particular, traditional logging frameworks wouldn't be suitable here because, for data that you might later want to access from within your application, they allow things that don't really make sense; for example, where the logging is sent may be changeable via the app.config file (which is unhelpful if you then try to read it from a different location).
That said, if a logging framework already lets you do exactly what you want, then there isn't any shame in just using it as your implementation and saving yourself the effort:
class TransactionLogger
{
    // Thin wrapper: business code logs transactions through this class only,
    // so the logging framework underneath can be swapped out later.
    public void Log(Message message)
    {
        // Delegate to whichever logging framework is configured.
        MyLoggingFramework.Log(message.ToString());
    }
}
In my experience, a business event comprises a large or huge number of technical operations behind the scenes, with only certain business events being important to the business.
This creates problems when trying to use a generic logging methodology, so in general, in the systems I've worked on, both are used.
Logging for the technical aspects, and business event logging for the business events.
The business event logging doesn't use the same technology as the technical logging; instead, it logs to a custom-designed history/audit table (sometimes these are split, depending on the required detail), which is designed specifically for each application. (This keeps the auditors and users nice and happy.)
This allows easy reporting, and management of the information, while obviously expanding the scope of each specification slightly.
You could use a logging framework, but what you need is business activity monitoring and event processing software. Off the top of my head, IBM WebSphere Business Monitor provides this capability. It processes Common Base Events (an IBM implementation of the Web Services Distributed Management Web Event Format standard), then takes that data and creates business activity dashboards.
Check out DiALog: A Distributed Model for Capturing Provenance and Auditing Information; apart from the distributed aspect, you can use its subject-predicate-object principle to record business events and afterwards reconstruct specific trails.
Here is a related post of mine: Audit logging and exception management framework.
