What would be the recommended AUTHREC values for the following? Of course, this needs to be tweaked to suit one's requirements.
QUEUE MANAGER
QUEUE (local, remote, dlq)
CHANNEL (server connection, sender, receiver etc.)
I usually tell people to consider three different groups for security:
QMgr-to-QMgr connections, which boil down to "what auths do I grant the MCAUSER of a RCVR/RQSTR/CLUSRCVR channel?" The roles in this category depend on how granular you want network security to be. In general, though, the adjacent QMgrs should NOT have access to the local QMgr's command queues. The channel's MCAUSER gets to connect to the QMgr and inquire, then put and setall on the queues it actually needs. That's it.
Application-to-QMgr connections. By "application" here I mean a business application residing in a locked data center. These applications generally need to connect and inquire on the QMgr, and then put, get, browse, publish and subscribe on queues and topics. Sometimes they need more elevated auths, such as the ability to set the context info for a message, but that is rare and limited to specific queues. (Example grants for this group and the previous one are sketched after the next paragraph.)
Interactive users. Among this group there are various roles such as administrator, anonymous user and everything in between. These may need access to display or even manage objects other than queues and topics.
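To make the first two groups concrete, here is a hedged sketch using MQSC SET AUTHREC commands. The group names (rcvchnl, app1) and object names are purely illustrative assumptions; adjust them to your own naming standard, and note that the same grants can be made with setmqaut.

    * Group 1: MCAUSER of a RCVR/RQSTR/CLUSRCVR channel (illustrative group 'rcvchnl').
    * Connect and inquire on the QMgr only - no access to the command queue.
    SET AUTHREC OBJTYPE(QMGR) GROUP('rcvchnl') AUTHADD(CONNECT,INQ)
    * Put and setall only on the specific queues the channel actually delivers to.
    SET AUTHREC PROFILE('APP1.REQUEST') OBJTYPE(QUEUE) GROUP('rcvchnl') AUTHADD(PUT,SETALL,INQ)
    SET AUTHREC PROFILE('MY.DEAD.LETTER.QUEUE') OBJTYPE(QUEUE) GROUP('rcvchnl') AUTHADD(PUT,SETALL,INQ)

    * Group 2: a business application (illustrative group 'app1').
    SET AUTHREC OBJTYPE(QMGR) GROUP('app1') AUTHADD(CONNECT,INQ)
    SET AUTHREC PROFILE('APP1.REQUEST') OBJTYPE(QUEUE) GROUP('app1') AUTHADD(PUT,GET,BROWSE,INQ)
    SET AUTHREC PROFILE('APP1.EVENTS') OBJTYPE(TOPIC) GROUP('app1') AUTHADD(PUB,SUB)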
Each of these has very different authentication and authorization requirements. They also have different characteristics such as the volume of messages and stability of the accounts and authorizations they require.
They do all have a few things in common, though.
Granting set on the QMgr to anyone with access to the command queue makes them a full administrator.
Granting the ability to create queues (other than dynamic ones) makes the user a full administrator.
Anything that connects or opens an object needs inquire on the QMgr and on the objects it is authorized to use.
Business applications need read/write access to their queues and topics but not other types of objects.
Instrumentation generally needs access to many object types but not to read or update the messages on the business queues.
Human users need at least display access to all objects if you are monitoring security. This is because the only way to enumerate objects (for example to paint a queues screen) is to do the PCF equivalent of DIS objType(*). If the user doesn't have display access to all objects of objType then the QMgr emits authorization errors when the display is issued.
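As an illustration of that last point, a read-only grant for an interactive or monitoring group might look like the MQSC below. The group name (mqmon) and the use of generic profiles are assumptions for the example.

    * Hypothetical read-only grants for a monitoring/interactive group 'mqmon'.
    SET AUTHREC OBJTYPE(QMGR) GROUP('mqmon') AUTHADD(CONNECT,INQ,DSP)
    * Generic profiles so that the PCF equivalent of DIS QLOCAL(*), DIS CHANNEL(*),
    * etc. does not raise authorization errors for any object.
    SET AUTHREC PROFILE('**') OBJTYPE(QUEUE) GROUP('mqmon') AUTHADD(DSP,INQ)
    SET AUTHREC PROFILE('**') OBJTYPE(CHANNEL) GROUP('mqmon') AUTHADD(DSP)
    SET AUTHREC PROFILE('**') OBJTYPE(TOPIC) GROUP('mqmon') AUTHADD(DSP)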
I need to have two outbound queues on two different servers, with their queue managers acting as primary and secondary respectively. If sending to the primary fails, I want to connect to the secondary, but as soon as the primary comes back up the application must send to the primary again. Is there any Spring Boot configuration that can help? We are using WebSphere MQ.
For a Java JMS messaging application in Spring Boot, a connection name list allows for setting multiple host(port) endpoints as comma-separated pairs.
The connection factory will then try these endpoints in turn to find an available queue manager. Unsetting WMQConstants.WMQ_QUEUE_MANAGER means the connection factory will connect to the available queue manager on a given host(port) endpoint regardless of its name.
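A minimal sketch of that configuration with the IBM MQ classes for JMS is shown below; the host names, port and channel name are assumptions for illustration.

    import com.ibm.msg.client.jms.JmsConnectionFactory;
    import com.ibm.msg.client.jms.JmsFactoryFactory;
    import com.ibm.msg.client.wmq.WMQConstants;
    import javax.jms.JMSException;

    public class ConnNameListFactory {
        public static JmsConnectionFactory build() throws JMSException {
            JmsFactoryFactory ff = JmsFactoryFactory.getInstance(WMQConstants.WMQ_PROVIDER);
            JmsConnectionFactory cf = ff.createConnectionFactory();
            cf.setIntProperty(WMQConstants.WMQ_CONNECTION_MODE, WMQConstants.WMQ_CM_CLIENT);
            // Primary endpoint first, secondary second; tried in order on each connection attempt.
            cf.setStringProperty(WMQConstants.WMQ_CONNECTION_NAME_LIST, "hostA(1414),hostB(1414)");
            // Leave the queue manager name unset (empty) so either queue manager is accepted.
            cf.setStringProperty(WMQConstants.WMQ_QUEUE_MANAGER, "");
            cf.setStringProperty(WMQConstants.WMQ_CHANNEL, "APP.SVRCONN"); // assumed channel name
            return cf;
        }
    }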
You'll need to think about the lifecycle of the connection factory as the connection will remain valid once it has been established. In the scenario where the connection factory is bound to the queue manager on hostB(1414) (the second in the list) and then the queue manager on hostA(1414) (the first in the list) becomes available again, nothing would change until the next connection attempt.
It's important to note that where the queue manager endpoints in the connection name list are unrelated, the queues and messages available will not be the same. Multi-Instance queue managers allow a queue manager instance to failover between two host(port) endpoints. For container deployments, IBM MQ Native HA ensures your messaging applications are routed to an active instance of the queue manager.
A CCDT allows for much more sophisticated connection matching options with IBM MQ than the connection name list method outlined above. More information on CCDTs is available in the IBM MQ documentation. For JMS applications you'll need to set the connection factory to use the CCDT, e.g. cf.setStringProperty(WMQConstants.WMQ_CCDTURL, ...), passing the location of the CCDT JSON file.
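For example (the file path below is an assumed location; the CCDT could equally be served over HTTP):

    // Point the connection factory at a CCDT instead of a connection name list.
    cf.setStringProperty(WMQConstants.WMQ_CCDTURL, "file:///var/mqm/ccdt/ccdt.json");
    // The channel and connection details then come from the CCDT entries,
    // so WMQ_CHANNEL and WMQ_CONNECTION_NAME_LIST are not set on the factory.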
It's probably worth me putting up an answer rather than more comments, though most of the ground has been covered by Rich's answer, augmented by Rich's, Morag's and my comments.
Client Reconnection seems the most natural fit for the first part of the use-case, and I'd suggest using a connection name list rather than going to the complexity of using a CCDT for this job. This will ensure that the application connects to your primary queue manager if it's available and the secondary queue manager if the primary isn't available, and will also move applications to the secondary if the primary fails.
Once a connection has been made, in general it won't move. The two exceptions are that in a client reconnection configuration, a connection will be migrated to a different queue manager if communication to the first is broken, and in the context of uniform clusters connections may be requested to migrate to a queue manager to balance load over the cluster. There is no automatic mechanism that I can think of which would force all the connections back from your secondary queue manager to the primary - you'd need to either do something with the application, or in a client reconnect setup you could bounce the secondary queue manager.
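For completeness, enabling automatic client reconnection on the connection factory might look like the following hedged sketch (building on the factory shown earlier; the timeout value is arbitrary):

    // Ask the client to reconnect transparently, to any queue manager in the
    // connection name list, if the connection breaks. 1800 seconds is illustrative.
    cf.setIntProperty(WMQConstants.WMQ_CLIENT_RECONNECT_OPTIONS, WMQConstants.WMQ_CLIENT_RECONNECT);
    cf.setIntProperty(WMQConstants.WMQ_CLIENT_RECONNECT_TIMEOUT, 1800);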
(You could use this forum to request such a feature, but I can't promise either that the request would be accepted or that it would be rapidly acted on.)
If you want to discuss this further, I'd suggest that more dialogue is probably needed to help us understand your scenario properly, so that we can make the most helpful recommendations. The MQ discussion forum may be a useful place for such dialogue, as it doesn't fit well (IMHO) into the StackOverflow model.
We have multiple application environments (development, QA, UAT, etc) that need to connect to fewer provider environments through MQ. For example, the provider only has one test (we'll call it TEST1) environment to which all of the client application environments need to interact. It is imperative that each client environment only receives MQ responses to the messages sent by that respective environment. This is a high volume scenario so correlating message IDs has been ruled out.
Right now TEST1 has a queue set up and is functional, but if one of the client app's environments wants to use it the others have to be shut off so that messaging doesn't overlap.
Does MQ support a model having multiple clients connect to a single queue while preserving the client-specific messaging? If so, where is that controlled (i.e. the channel, queue manager, etc)? If not, is the only solution to set up additional queues for each corresponding client?
Over the many years I have worked with IBM MQ, I have gone back and forth on this issue. I've come to the conclusion that sharing a queue just makes life more difficult. Queues should be handed out like candy on Halloween. If an application team says that they have 10 components to their application then the MQAdmin should give them 10 queues. To the queue manager or server or CPU or hard disk, there is no difference in resource usage.
Also, use an MQ naming standard that makes sense and is easy to apply security to, e.g. for the HR (Human Resources) department:
HR.PAYROLL.SALARY
HR.PAYROLL.DEDUCTIONS
HR.PAYROLL.BENEFITS
HR.EMPLOYEE.DETAILS
HR.EMPLOYEE.REVIEWS
etc...
You could use a selector, such as an MQGET with where applname="myapp", or one based on a specific user-defined property (assuming the sender populates such a property), but that's likely to be worse performance than any retrieval by msgid or correlid. Though you've not given any information to demonstrate that get-by-correlid is actually problematic.
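For reference, a hedged JMS sketch of the two approaches being compared (get-by-correlid versus selecting on an assumed user-defined property named env) could look like this:

    import javax.jms.JMSConsumer;
    import javax.jms.JMSContext;
    import javax.jms.Message;
    import javax.jms.Queue;

    public class ReplySelectors {
        // Retrieval by correlation ID - the usual request/reply pattern.
        static Message getByCorrelId(JMSContext ctx, Queue replyQueue, String correlId) {
            JMSConsumer consumer = ctx.createConsumer(replyQueue, "JMSCorrelationID = '" + correlId + "'");
            return consumer.receive(5000); // 5-second wait, illustrative only
        }

        // Retrieval by an application-defined property; 'env' is an assumed name
        // that the sender would have to populate (e.g. env = 'QA1').
        static Message getByEnvironment(JMSContext ctx, Queue replyQueue, String env) {
            JMSConsumer consumer = ctx.createConsumer(replyQueue, "env = '" + env + "'");
            return consumer.receive(5000);
        }
    }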
And of course any difference between a test and production environment - whether it involves code or configuration - is going to be very risky.
You would not normally share a single destination queue between multiple different application types - multiple queues is far more standard.
I am implementing an event-driven microservice architecture. Imagine the following scenario:
Chat service: Ability to see conversations and send messages. Conversations can have multiple participants.
Registration-login service: Deals with the registration of new users, and login.
User service: Getting/updating user profiles.
The registration-login service emits the following events (registration-new carries the newly created user object):
registration-new
login-success
logout-success
The chat service then listens on registration-new and stores some fields of user in its own redis cache. It also listens on login-success and stores the token, and on logout-success to delete the token.
The user service has the following event: user-updated. When this is fired, a listener in the chat service updates the data corresponding to the user id in redis. Like the chat service, the user service also listens on login-success and logout-success and does the same thing as what the chat service does.
My question is the following: is this a good way to do this? It feels a bit counterintuitive to be sharing data everywhere. I need some advice on this. Thank you!
Seems that there's no other way. Microservices architecture puts a lot of stress on avoiding data sharing so as not to create dependencies. That means that each microservice will have some data duplicated. It also means that there must exist a way of getting data from other contexts. The preferred methods strive for eventual consistency, such as sending messages to event sourcing or AMQP systems and subscribing to them. You can also use synchronous methods (RPC calls, distributed transactions). That creates additional technological dependencies, but if you cannot accept eventual consistency it could be the only way.
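As a rough illustration of the eventual-consistency approach, a listener in the chat service that keeps its own copy of a few user fields might look like the sketch below. Spring AMQP and Redis are used purely as example technologies, and the queue name, event shape and key layout are all assumptions (a JSON message converter is also assumed to be configured).

    import org.springframework.amqp.rabbit.annotation.RabbitListener;
    import org.springframework.data.redis.core.StringRedisTemplate;
    import org.springframework.stereotype.Component;

    @Component
    public class UserEventsListener {

        private final StringRedisTemplate redis;

        public UserEventsListener(StringRedisTemplate redis) {
            this.redis = redis;
        }

        // Minimal event payload for the example.
        public record UserUpdatedEvent(String userId, String displayName) {}

        // The chat service keeps only the user fields it needs, keyed by user id.
        // The queue name "chat.user-updated" is an assumption.
        @RabbitListener(queues = "chat.user-updated")
        public void onUserUpdated(UserUpdatedEvent event) {
            redis.opsForHash().put("user:" + event.userId(), "displayName", event.displayName());
        }
    }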
What channel attributes can be utilized to provide barriers at different points in WebSphere MQ inbound channel processing, as an alternative to simply disabling the channel?
From a security perspective, the approach of using multiple controls on a channel definition provides some redundancy. For example, many years ago researchers found a bug (which IBM quickly fixed) in the channel protocol that allowed the channel to start despite an MCAUSER that should have prevented the connection. If the MQ admin had relied entirely on the MCAUSER value, the channels would have been vulnerable until the fix was applied.
For that reason I generally advise to set multiple controls in the channels that you wish to be disabled. The idea is that if one control fails, the channel fails to a safe state. You could, for example, use CHLAUTH to disable a channel but if you then use the Broker's New Broker Configuration Wizard, the first thing it does (last time I checked) is to disable CHLAUTH. Whoops.
Here are a few of the attributes that can be used to disable a channel (an MQSC sketch follows the list). Remember that if you apply these to a SYSTEM.DEF.* or SYSTEM.AUTO.* channel, you must override the attribute when creating a new legitimate channel, so do not go overboard and use all of them. Or, more precisely, use one control for every layer of tin foil present in your favorite hat. ;-)
Set MCAUSER('*NOBODY') because the asterisk is not a valid character in a user ID.
Set the max message length to something shorter than the MQMD length.
Set SSLCIPH, SSLCAUTH(REQUIRED) and make sure SSLPEER has an impossible value.
Use CHLAUTH rules but make sure that when you create the rules that allow access on legitimate channels that they are not so broad as to authorize access to channels you did not intend.
Specify exits that do not exist.
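As a hedged sketch, hardening the default server-connection template with several independent controls might look like the MQSC below. The cipher, SSLPEER value and exit name are deliberately chosen so that nothing can ever match, and the MAXMSGL floor accepted may vary by version.

    * Deaden SYSTEM.DEF.SVRCONN with several unrelated controls; a legitimate
    * channel definition must explicitly override each of these.
    ALTER CHANNEL(SYSTEM.DEF.SVRCONN) CHLTYPE(SVRCONN) +
          MCAUSER('*NOBODY') +
          MAXMSGL(1) +
          SSLCIPH(TLS_RSA_WITH_AES_256_CBC_SHA256) SSLCAUTH(REQUIRED) +
          SSLPEER('CN=No Such Certificate Ever Issued') +
          SCYEXIT('nosuchlib(NoSuchExit)')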
Note that these address the issue of an attacker who does not already have access to the QMgr. Any attacker with admin access will keep templates on hand from a saveqmgr from each MQ version. This allows the attacker to submit DEFINE commands that contain all the required attributes and thus will not be reliant on the SYSTEM.* objects. The legitimate administrator, however, must use the same approach or at least be cognizant of which attributes must be overridden to define a new channel.
In short, this approach provides perimeter controls and the trade off is in administrative overhead. To be effective, use two or more unrelated controls (CHLAUTH, MCAUSER and invalid exit spec, for example), incorporate the setting into the training for administrators, and do not be tempted to use every possible control because the cost of doing so rises faster than the benefit.
The most appropriate thing to use to provide barriers for inbound channels is the CHLAUTH rules introduced in V7.1. This allows you to block/allow inbound channels based on IP Address (or host name in V8), remote queue manager name or client side user ID, or Certificate Subject's DN (and/or Issuer's DN in V8).
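For example, a hedged pair of CHLAUTH rules (the channel name and address range are assumptions) might be:

    * Back-stop rule: block every inbound connection on every channel by default.
    SET CHLAUTH('*') TYPE(ADDRESSMAP) ADDRESS('*') USERSRC(NOACCESS) +
        DESCR('Default deny-all rule') ACTION(REPLACE)
    * Then allow the specific legitimate channel from a known address range,
    * taking care not to make the profile broader than intended.
    SET CHLAUTH('APP1.SVRCONN') TYPE(ADDRESSMAP) ADDRESS('10.20.30.*') +
        USERSRC(CHANNEL) ACTION(REPLACE)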
I'm using Spring's IMAP mechanism in order to receive emails from my account into my server.
This works like a charm.
Anyhow, a new requirement came up - instead of listening to a single email account I will have to listen on a multiple number of accounts.
I've tried creating a new channel for each of these accounts. It WORKS!
The problem is that each channel I added means a new thread running.
Since I'm talking about a large number of accounts, it is quite an issue.
My question is:
Since all the email accounts I would like to listen to are in the same domain, e.g.:
account1@myDomain.com
account2@myDomain.com
account3@myDomain.com
....
Is it possible to create a single channel with multiple accounts?
Is there any alternative for me other than defining N new channels?
thanks.
Nir
I assume you mean channel adapter, not channel (multiple channel adapters can send messages to the same channel).
No, you can't use a single connection for multiple accounts.
This is a limitation of the underlying internet mail protocols.
If you are using imap idle adapters, yes, this will not scale well because it needs a thread for each. However, if you are only talking about a few 10s of accounts, this is probably not an issue. For a much larger number of accounts, it may be better to use a polled adapter.
But, even so, unless it's a fixed number of accounts, the configuration could be burdensome (but you could programmatically spin up new adapters).
For complex scenarios like this, you may want to consider writing your own "adapter" that uses the JavaMail API directly and manages the connections in a more sophisticated way (but you still need a separate connection for each account). It wouldn't have to be a "real" adapter, just a POJO that interacts with JavaMail. Then, when you receive a message from one of the accounts, send it to a channel using a <gateway/>.
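A very rough sketch of such a POJO, assuming JavaMail and a gateway interface backed by <gateway/>; the interface name, host, folder and polling strategy are all assumptions for illustration:

    import javax.mail.Flags;
    import javax.mail.Folder;
    import javax.mail.Message;
    import javax.mail.Session;
    import javax.mail.Store;
    import javax.mail.search.FlagTerm;
    import java.util.List;
    import java.util.Properties;

    public class MultiAccountMailPoller {

        // Hypothetical gateway interface wired to a channel via <gateway/>.
        public interface InboundMailGateway {
            void newMail(Message message);
        }

        private final InboundMailGateway gateway;
        private final List<String> accounts; // e.g. account1@myDomain.com, ...
        private final String password;       // one credential per account in reality

        public MultiAccountMailPoller(InboundMailGateway gateway, List<String> accounts, String password) {
            this.gateway = gateway;
            this.accounts = accounts;
            this.password = password;
        }

        // Poll each account in turn on one thread; a connection (Store) is still
        // needed per account, but each account does not need its own adapter/thread.
        public void pollAll() throws Exception {
            Session session = Session.getInstance(new Properties());
            for (String account : accounts) {
                Store store = session.getStore("imaps");
                store.connect("imap.myDomain.com", account, password); // assumed host
                try {
                    Folder inbox = store.getFolder("INBOX");
                    inbox.open(Folder.READ_ONLY);
                    // Fetch only messages not yet marked as seen.
                    for (Message m : inbox.search(new FlagTerm(new Flags(Flags.Flag.SEEN), false))) {
                        gateway.newMail(m); // hand off to the integration flow
                    }
                    inbox.close(false);
                } finally {
                    store.close();
                }
            }
        }
    }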