Multiple Postfix output IPs - filter

I have a server with multiple public IP addresses.
I want to send campaign emails on this server.
Sometimes I would like to send mail from a particular IP (a filter on the sender email address determines which IP to use).
The only thing I have found is to install multiple Postfix instances (one per output IP). Is there a better way to do this?
I have a second question: Postfix gives a unique queue ID to each message. If I run several Postfix instances, could those queue IDs collide between two instances?
Thanks

sender_dependent_default_transport_maps is your friend. First, add this to main.cf:
sender_dependent_default_transport_maps = hash:/etc/postfix/sender-transport
Next, create the file /etc/postfix/sender-transport with
@my-sender-domain.com smtp-192-168-0-1:
Any message whose sender address ends in @my-sender-domain.com will use the service smtp-192-168-0-1 (the name can be anything) for sending. Don't forget to run postmap /etc/postfix/sender-transport after editing the file.
And then, add the service to master.cf
smtp-192-168-0-1 unix - - n - - smtp
  -o smtp_bind_address=192.168.0.1
Again, the service name can be anything, but it must match the entry in the hash file. This smtp service will send messages from the IP 192.168.0.1; change it as needed.
Add as many services and hash-file lines as you want. Don't forget to run service postfix restart after that.
There are many other options you can add to the smtp service, like -o smtp_helo_name=my.public.hostname.com, etc.
I just finished setting up a Postfix like this :-)
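To see how the lookup behaves, here is a simplified Ruby model of the map consultation (not Postfix's actual code, which also tries parent domains): an exact sender address wins over a whole-domain entry, and anything unmatched falls back to the default transport. The exact-address entry is a hypothetical addition for illustration.

```ruby
# Simplified model of sender_dependent_default_transport_maps lookup:
# exact sender first, then the sender's domain, else the default transport.
SENDER_TRANSPORT = {
  "noreply@my-sender-domain.com" => "smtp-192-168-0-2", # exact address (hypothetical)
  "my-sender-domain.com"         => "smtp-192-168-0-1", # whole domain, as in the map above
}.freeze

def transport_for(sender, default = "smtp")
  domain = sender.split("@").last
  SENDER_TRANSPORT[sender] || SENDER_TRANSPORT[domain] || default
end
```

Under this model, every sender you have not listed keeps using the stock `smtp` transport, so you only need map entries for the addresses that must leave via a specific IP.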

Related

Reference Conf File within Conf File or Apply Rule to All Listening RSyslog Ports

We have a number of individual conf files, each with its own ruleset bound to its own port. We want to create a single conf file that filters/drops specific things, such as: if the msg is from IP x, drop it; if the msg contains x, drop it. And we want that drop filtering to apply to all listening ports. Is this possible? Should we avoid using rulesets?
We're trying to avoid updating the drop/filter rules in each conf file, for each port, every time the filter is updated.
Would anyone happen to know if either of the following is possible with rsyslog?
Have one conf file that listens on all rsyslog ports and is processed first, without specifying each open port?
Have a conf file that calls another file with a rule in it?
Appreciate any help with this.
Typically, the default configuration file, say /etc/rsyslog.conf, will contain a line near the start saying something like
$IncludeConfig /etc/rsyslog.d/*.conf
or the equivalent RainerScript syntax
include(file="/etc/rsyslog.d/*.conf")
If it is not there, you can add it.
This will include all files matching the glob pattern, in alphabetical order. So you can put any configuration in that directory, for example in files named 00-some.conf, 10-somemore.conf and so on.
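Since the include order controls which configuration runs first, it helps to remember that the ordering is simply lexical on the filename. A toy Ruby model (Ruby chosen only for illustration):

```ruby
# Toy model of $IncludeConfig ordering: matched files are processed
# in lexical (alphabetical) order, so numeric prefixes decide priority.
def include_order(filenames)
  filenames.sort
end
```

This is why the text below suggests putting the global drop rules in the last included file: give it a high numeric prefix (e.g. `99-drops.conf`) and it will run after every input has been declared.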
One file could have lots of input() statements like:
module(load="imtcp" MaxSessions="500")
input(type="imtcp" port="514")
input(type="imtcp" port="10514")
input(type="imtcp" port="20514")
assuming you are expecting incoming TCP connections from remote clients. See the imtcp documentation.
All the data from those remotes will be affected by any following rules.
For example, the last included file in the directory could hold lines like:
if ($msg contains "Password: ") then stop
if ($msg startswith "Debug") then stop
if ($hostname startswith "test") then stop
These will stop further processing of any matching input messages, effectively
deleting them.
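The stop rules above can be modeled in a few lines of Ruby (a sketch of the semantics, not rsyslog itself): each message is tested against the drop conditions in order, and a message survives only if none of them matches.

```ruby
# Toy model of the rsyslog "stop" rules above: a message is kept
# only if no drop condition matches it.
DROP_RULES = [
  ->(m) { m[:msg].include?("Password: ") },
  ->(m) { m[:msg].start_with?("Debug") },
  ->(m) { m[:hostname].start_with?("test") },
].freeze

def keep?(message)
  DROP_RULES.none? { |rule| rule.call(message) }
end
```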
The above inputs are all collected into a single global input queue.
All the if rules are applied to all the messages from that queue.
If you want to, you can partition some of the inputs into a new queue,
and write rules that will only apply to that new independent queue. The rest of the
configuration will know nothing about this new queue and rules.
This is called a ruleset. See
here and
here.
For example, you can have a ruleset called "myrules". Move one or more
inputs into the ruleset by adding the extra option:
input(type="imtcp" port="514" ruleset="myrules")
input(type="imtcp" port="10514" ruleset="myrules")
Move the rules to apply to that queue into a ruleset definition:
ruleset(name="myrules"){
  if ($msg contains "Password: ") then stop
  if ($msg startswith "Debug") then stop
  *.* /var/log/mylogfile
}

amavisd-new rule to call external script

We are using amavisd-new (amavisd-new/oldstable, now 1:2.10.1-4) to filter both incoming and outgoing e-mail.
The thing is, we receive a lot of spam with fake senders from our own domain. Since we have fewer than 100 accounts in our system, is there a plugin that can take the sender address and check it against a list of valid senders?
Thank you a lot.
our system configuration is:
debian stretch
amavisd-new/oldstable,now 1:2.10.1-4
spamassassin/oldstable,oldstable,now 3.4.2-1
If spamassassin checks outgoing email then perhaps a local rule that checks for allowed senders such as:
header LOCAL_WHITELIST From =~ /(me)|(you)|(etc)@mydomain.org/
meta LOCAL_WHITELIST_MATCH ((LOCAL_WHITELIST) == 1)
score LOCAL_WHITELIST_MATCH -1.0
meta LOCAL_WHITELIST_MISS ((LOCAL_WHITELIST) == 0)
score LOCAL_WHITELIST_MISS 1.0
Unfortunately, I have no idea how to apply this only to outgoing email.
It should be straightforward to write a script that automatically generates the whitelist for you and writes the rules above to a whitelist.cf for SpamAssassin. That would be cool, especially if you could have it run automatically after an email account is created or deleted, followed by amavisd-new reload && service amavisd-new restart.
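As a sketch of such a generator (Ruby instead of shell, purely for illustration; account names and domain are placeholders), the whitelist.cf can be built from the list of local accounts:

```ruby
# Sketch of a whitelist.cf generator: build the SpamAssassin rules above
# from a list of local account names and the local domain.
def whitelist_cf(accounts, domain)
  alternation = accounts.map { |a| "(#{a})" }.join("|")
  <<~CF
    header LOCAL_WHITELIST From =~ /#{alternation}@#{domain}/
    meta LOCAL_WHITELIST_MATCH ((LOCAL_WHITELIST) == 1)
    score LOCAL_WHITELIST_MATCH -1.0
    meta LOCAL_WHITELIST_MISS ((LOCAL_WHITELIST) == 0)
    score LOCAL_WHITELIST_MISS 1.0
  CF
end
```

Writing the result to a file under SpamAssassin's local rules directory and reloading amavisd-new would complete the loop.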

How to detect sender and destination of a notification in dbus-monitor?

My goal is to filter notifications coming from different applications (mainly from different browser windows).
I found that with the help of dbus-monitor I can write a small script to filter the notification messages that I am interested in.
The filter script is working well, but I have a small problem:
I am starting with the command
dbus-monitor "interface='org.freedesktop.Notifications', destination=':1.40'"
I had to add destination=':1.40' because on Ubuntu 20.04 I always got the same notification twice.
The following output of
dbus-monitor --profile "interface='org.freedesktop.Notifications'"
demonstrates the reason:
type timestamp serial sender destination path interface member
# in_reply_to
mc 1612194356.476927 7 :1.227 :1.56 /org/freedesktop/Notifications org.freedesktop.Notifications Notify
mc 1612194356.483161 188 :1.56 :1.40 /org/freedesktop/Notifications org.freedesktop.Notifications Notify
As you can see, the sender :1.227 first sends to :1.56; then :1.56 becomes the sender, with :1.40 as the destination. (A simple notify-send hello test message was sent.)
My script works this way, but every time the system boots I have to check the destination number and adjust my script accordingly to get it working.
I have two questions:
how can I discover the destination string automatically? (:1.40 in the example above)
how can I prevent the system from sending the same message twice? (If this question were answered, question 1 would become moot.)
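For question 1, one approach is to parse the `dbus-monitor --profile` output itself, since the header shown above fixes the field order (type timestamp serial sender destination path interface member). A Ruby sketch (field positions assumed from that header):

```ruby
# Extract the destination field of each Notify method call ("mc" lines)
# from `dbus-monitor --profile` output. Field order per the profile header:
# type timestamp serial sender destination path interface member
def notify_destinations(profile_output)
  profile_output.each_line.filter_map do |line|
    f = line.split
    f[4] if f[0] == "mc" && f[7] == "Notify"
  end
end
```

Alternatively, the bus can be asked directly which connection owns the well-known name, e.g. with `dbus-send --session --print-reply --dest=org.freedesktop.DBus /org/freedesktop/DBus org.freedesktop.DBus.GetNameOwner string:org.freedesktop.Notifications`.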

freeswitch - group dialing, registration issue

I am trying to set up group dialing for a given extension.
To the bridge command I pass data specifying two call groups:
group/support|group/sales
Inside the directory I have users assigned to these call groups, some of which are configured with only cellphone numbers by overriding the dial-string parameter (no SIP device).
However, when I try to call, such a user is not dialed because they are not registered (Originate Failed. Cause: USER_NOT_REGISTERED). How can I configure a given user's XML so that FreeSWITCH will not skip them for not being registered?
Thanks,
Matt
You can define dial-string in the user's entry in the directory so that it dials the user's external number. In this example I used the loopback endpoint; you could also define a string with a sofia gateway:
<param name="dial-string" value="[group_confirm_key=1,leg_delay_start=15]loopback/0794070224/${context}"/>
group_confirm_key=1 means the callee has to press 1 to accept the call -- this way you can be sure the call does not land in a cellphone's voicemail.
leg_delay_start=15 delays this leg because I have a SIP desk phone and let it ring alone for the first 15 seconds.
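For context, a minimal directory group entry carrying this override might look like the following sketch (the user id and number are placeholders):

```xml
<group name="support">
  <users>
    <user id="1001">
      <params>
        <!-- override the default dial-string so this user is reached
             via loopback to an external number instead of a SIP registration -->
        <param name="dial-string"
               value="[group_confirm_key=1,leg_delay_start=15]loopback/0794070224/${context}"/>
      </params>
    </user>
  </users>
</group>
```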

Ruby IMAP "changes" since last check

I'm working on an IMAP client using Ruby and Rails. I can successfully import messages, mailboxes, and more. However, after the initial import, how can I detect any changes that have occurred since my last sync?
Currently I am storing the UIDs and the UIDVALIDITY value in the database, comparing them, and searching accordingly. This works, but it doesn't detect deleted messages or changes to message flags, etc.
Do I have to pull all messages every time to detect these changes? How do other IMAP clients do it so quickly (e.g. Apple Mail and Postbox)? My script already takes 10+ seconds per account, even with very few messages:
# select ourself as the current mailbox
@imap_connection.examine(self.location)
# grab all new messages and update them in the database
# if the UIDs are still valid, we will just fetch the newest UIDs
# otherwise, we need to search from when we last synced, which is slower :(
if self.uid_validity.nil? || uid_validity == self.uid_validity
  # for some IMAP servers, if a mailbox is empty, a uid_fetch will fail, so then
  begin
    messages = @imap_connection.uid_fetch(uid_range, ['UID', 'RFC822', 'FLAGS'])
  rescue
    # gmail cries if the folder is empty
    uids = @imap_connection.uid_search(['ALL'])
    messages = @imap_connection.uid_fetch(uids, ['UID', 'RFC822', 'FLAGS']) unless uids.empty?
  end
  messages.each do |imap_message|
    Message.create_from_imap!(imap_message, self.id)
  end unless messages.nil?
else
  query = self.last_synced.nil? ? ['ALL'] : ['SINCE', Net::IMAP.format_datetime(self.last_synced)]
  @imap_connection.search(query).each do |message_id|
    imap_message = @imap_connection.fetch(message_id, ['RFC822', 'FLAGS', 'UID'])[0]
    # don't mark the messages as read
    # @imap_connection.store(message_id, '-FLAGS', [:Seen])
    Message.create_from_imap!(imap_message, self.id)
  end
end
# now assume all UIDs are valid
self.uid_validity = uid_validity
# now remember that we just fetched all those messages
self.last_synced = Time.now
self.save!
There is an IMAP extension for Quick Flag Changes Resynchronization (RFC 4551, CONDSTORE). With this extension it is possible to search for all messages that have been changed since the last synchronization (based on a per-message modification sequence number). However, as far as I know this extension is not widely supported.
There is an informational RFC that describes how IMAP clients should do synchronization (RFC-4549, section 4.3). The text recommends issuing the following two commands:
tag1 UID FETCH <lastseenuid+1>:* <descriptors>
tag2 UID FETCH 1:<lastseenuid> FLAGS
The first command is used to fetch the required information for all unknown mails (without knowing how many mails there are). The second command is used to synchronize the flags for the already seen mails.
AFAIK this method is widely used. Therefore, many IMAP servers contain optimizations in order to provide this information quickly. Typically, the network bandwidth is the limiting factor.
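As a sketch in the question's own language, the two ranges can be built like this; each pair could then be handed to Net::IMAP#uid_fetch (the connection setup is omitted):

```ruby
# Build the two UID FETCH argument pairs that RFC 4549 recommends,
# given the highest UID already seen in this mailbox.
def resync_fetch_args(last_seen_uid)
  [
    ["#{last_seen_uid + 1}:*", %w[UID RFC822 FLAGS]], # tag1: fetch all unknown mail
    ["1:#{last_seen_uid}",     %w[FLAGS]],            # tag2: refresh flags of known mail
  ]
end
```

Any UID present in your database but absent from the second response has been deleted on the server, which also answers the deleted-message detection part of the question.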
The IMAP protocol is brain dead this way, unfortunately. IDLE really should be able to return this kind of information while connected, for example. The FETCH FLAGS suggestion above is the only way to do it.
One thing to be careful of, however: per the spec, stored UIDs are only guaranteed to remain valid while the mailbox's UIDVALIDITY is unchanged, so discard them whenever UIDVALIDITY changes.