Mutt mail program sporadically delivering emails on macOS

First of all, I have to say that this is completely my fault. I did a stupid thing: I sent myself the same email 10,000 times from a shell script. Out of curiosity, really. Who hasn't wondered how long it would take their computer to send 10,000 emails? Nobody? Well, I did.
About 600 of these came through within 5 minutes.
Now, whenever I try to send one email from mutt, sometimes it works and sometimes it doesn't, but when it does make it through, it's usually accompanied by another one or two of those 10,000 emails. It seems like they're still out there, floating around, waiting for me to send another email that they can piggyback on. I've tried sending mails to my own addresses at a few different email providers, and it's just as flaky every time, so I guess mutt is the problem rather than Yahoo/Gmail.
Is there anybody who has encountered a problem like this before and can shed some light on what's going on?
(Using mutt from a terminal on Mac OS X.)

From my experience sending e-mail marketing, I do not think this is an issue with mutt. E-mail can take many different paths to get to your server, and if you are sending 10,000, more than likely one or more of the following will occur:
Your IP address and your sender e-mail address will get classed as spam.
You're sending so much traffic that some servers will just give up and not deliver the e-mail.
Your ISP will see a lot of traffic and block you.
All of the above.
When it comes to sending e-mail you really have to be careful, because it can be a big pain to get taken off spam lists, not to mention that you can upset clients.
It's also always important to include an opt-out link.
My advice is to take a break and wait a day; you will see the e-mails that have not been dropped appear over time, as long as they don't get classed as spam, which I think a lot of them will be.

Related

3DSecure periodically timing out but taking payment

I am experiencing a very frustrating issue with SagePay Direct when a card payment initiates a 3DSecure challenge.
Customers are reporting either a hanging iFrame or a "payment declined" response. What's worse is that in some instances Sage takes the payment, but the user is unaware of this and tries to buy again.
Looking at my logs, my code is working as expected and is loading the iFrame with the returned ACSURL as the src.
After searching the web, it appears to be a known issue with a timeout occurring on the secure merchant issuer that I hand off to.
The trouble I have is that I have no control over the response (or lack thereof) from the issuer, as it's in an iFrame.
Sage have not been very helpful with this problem, only going as far as to say "we have heard of customers who experience this issue".
Does anyone have any experience of this problem and know how to resolve it? I guess the bottom line is to turn off the 3DSecure checks, but this seems counter-productive given the new EU ruling coming into force at some point.
Worth pointing out that this is only affecting a small percentage of my customer base, and a lot of transactions are processing successfully (even with the password challenge), but the customers who experience problems are rightly shouting loudly.
Anyone got any ideas?
Thanks
We process up to 1,000-2,000 transactions daily via SagePay, using the Direct protocol. They are very cheap, but their service is in all honesty fairly terrible. We have a single-digit number of transactions every day that fail in this way. We've also got another provider and don't experience the same issues there.
We have a routine job that asks the SagePay Reporting API about transactions that failed, to see what their current status is (did SagePay get the transaction? Was it successfully authorised? etc.). This API is utterly, utterly terrible and was a nightmare to integrate with, but it's useful, as at least we can refund customers without having to log into the SagePay dashboard.
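To give a flavour of the shape of that job, here is a minimal Python sketch; fetch_transaction_status is a hypothetical stand-in for whatever your Reporting API wrapper exposes, not an actual SagePay call:

import logging

def fetch_transaction_status(vendor_tx_code):
    # Hypothetical wrapper around the SagePay Reporting API; returns
    # e.g. "OK", "NOTAUTHED", or None if SagePay never saw the transaction.
    raise NotImplementedError

def reconcile(failed_tx_codes):
    # For every transaction our site recorded as failed, ask SagePay what
    # actually happened, so we can refund or fulfil without the dashboard.
    for code in failed_tx_codes:
        status = fetch_transaction_status(code)
        if status is None:
            logging.info("%s: SagePay never received it", code)
        elif status == "OK":
            logging.warning("%s: authorised at SagePay but failed on our side; refund needed", code)
        else:
            logging.info("%s: failed at SagePay with status %s", code, status)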
One thing that we discovered (and that isn't documented anywhere on the SagePay site, as far as I can tell) is that you're limited to one transaction at a time, or around 20-30 transactions per minute, by default. If you go over this (a temporary peak or whatever), your transactions queue up and are delayed. If it gets really busy, it completely falls over and takes a while to recover. We had to switch SagePay off entirely for a few hours because of this (we've got backups in place).
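If you need to stay under a cap like that, a client-side throttle helps. Here is a minimal sliding-window limiter sketch in Python; the 20-per-minute figure matches what we observed rather than anything documented, and submit_transaction is a hypothetical gateway call:

import time
from collections import deque

class RateLimiter:
    # Allow at most `rate` calls per `window` seconds (sliding window).
    def __init__(self, rate=20, window=60.0):
        self.rate = rate
        self.window = window
        self.stamps = deque()

    def wait(self):
        now = time.monotonic()
        # Forget submissions that have aged out of the window.
        while self.stamps and now - self.stamps[0] >= self.window:
            self.stamps.popleft()
        if len(self.stamps) >= self.rate:
            # Sleep until the oldest submission leaves the window.
            time.sleep(self.window - (now - self.stamps[0]))
        self.stamps.append(time.monotonic())

limiter = RateLimiter(rate=20, window=60.0)
# for tx in pending_transactions:
#     limiter.wait()
#     submit_transaction(tx)  # hypothetical gateway call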
Anyway, so it turns out our transactions were all being processed on one TID (short for Terminal ID). This is akin to a physical card terminal in a shop which can only process one transaction at a time. We asked SagePay support for more and we now have 10-15.
I hope this helps you. I'd recommend implementing a fallback payment provider in case SagePay fails. A year or two ago they had a three-day (!!!!) outage, which was fairly devastating for us. We now take this seriously!
We've recently had an increase in what I believe may be the same thing. Basically, the customer would be sent off to the 3DS page, then returned to the callback page, but for reasons I can't explain the PHP session wouldn't re-establish. The POST response to the callback page was enough to identify the order and complete it (as we'd taken payment), but the customer would subsequently be prompted to log in again; they'd then see their basket as still having products in it and place a second order (which would go through successfully).
After many hours debugging and making changes I managed to replicate this on a development server whilst using mobile emulation...
Long story short, what I have done is to add:
session_regenerate_id(); // replace the current session ID with a fresh one before the customer is sent off to 3DS
when I perform the initial VSP register cURL request (this is the cURL call where you get given the ACSURL). So far, this seems to be enough to ensure that the session gets re-established when the customer returns to the callback page.

Any better way to watch an IMAP mailbox for updates?

I have tried the IMAP IDLE approach, which works most of the time, but:
Sometimes it misses events, or the status update that IDLE reports is for a delayed message, which confuses my script.
The email provider sometimes closes the IMAP connection; a connection may only last several minutes.
When lots of emails rush in, say one email per second, IDLE misses a lot of events.
I know the email provider is probably mostly to blame for this, but is there a better way to get email notifications that is timely and reliable?
Or should I just do it the hard way and poll for mail in a loop?
IDLE doesn't tell you that there is one new message, it tells you that something happened. It may be one new message, or ten, it may be one message being deleted, or ten, or it may be another change. It's up to you to check. (If you want to test how your code handles it, you can cause large changes using UID COPY and EXPUNGE.)
Connections being closed is also your problem to solve. The IMAP server can close a connection (for good or bad reasons), but usually it's done by a NAT middlebox belonging to the customer. Only the client can reconnect to solve the NAT problem, and solving the NAT problem solves the server problem too, as a side effect.
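To make both points concrete, here is a rough sketch of a resilient IDLE loop using the Python IMAPClient library: treat every IDLE wake-up as "something changed, re-check" rather than "one new message", and reconnect on any failure. The host and credentials are placeholders:

import time
from imapclient import IMAPClient

HOST, USER, PASSWORD = "imap.example.com", "user", "secret"  # placeholders

def watch():
    while True:  # outer loop: reconnect whenever the connection dies
        try:
            with IMAPClient(HOST, ssl=True) as conn:
                conn.login(USER, PASSWORD)
                conn.select_folder("INBOX")
                known = set(conn.search("ALL"))
                while True:
                    conn.idle()
                    # Wake up periodically; RFC 2177 suggests re-issuing
                    # IDLE at least every 29 minutes.
                    conn.idle_check(timeout=25 * 60)
                    conn.idle_done()
                    # IDLE only says "something happened": re-query and diff.
                    current = set(conn.search("ALL"))
                    for msg_id in sorted(current - known):
                        print("new message:", msg_id)
                    known = current
        except Exception as exc:
            print("connection lost (%s); reconnecting in 5s" % exc)
            time.sleep(5)

watch()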

How do live sessions work (with multiple users)?

This has been on my mind for a while now, and I guess I'm finally asking it. My question is: how do live sessions work? For instance, a live chat session, or the live multi-user updater on JSfiddle.net. How do both of these update instantly? In the case of live chat, is it sending an AJAX request to the server every second?
Sorry if my question is misunderstood; it is simply: how do live sessions work with multiple users?
EDIT
How does Stack Overflow do it? Every time something happens I get a notification. Is it checking the database every second to see if something has happened, or is there a better (more efficient) way of going about it?
There are a couple of ways of doing it.
The most common way of doing it nowadays is through WebSockets. You can just google that term and learn about it. Basically, the web server notifies you through a socket whenever it decides to.
Another way is polling. People used to do it like this back in the day. Polling is pretty much the dumb way: constantly (or every other second or so) sending an AJAX request to the web server asking if there is any new content.
Another interesting way is sending a GET request that stays open for a certain amount of time, even after it gets a response. It functions somewhat like a stream opened to a file or connection: it stays open until you close it (or until some other condition is met). I'm not too familiar with this method, but I know Google Drive uses it for its multi-user file editing. Just open two sessions to the same Google Drive document and inspect the page. You'll see in the console that every time you type a block of text it sends a POST, and you'll have at least one GET request pending at all times. At some point it closes, and right away a new one starts.
So in short: WebSockets, polling, and long polling (which is what that last method is usually called).
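For a feel of the difference between the last two, here is roughly what the client side of each might look like in Python with the requests library; the URL is a made-up placeholder:

import time
import requests

URL = "https://example.com/api/updates"  # hypothetical endpoint

def plain_polling():
    # Dumb polling: ask every couple of seconds whether anything is new.
    while True:
        print(requests.get(URL).json())
        time.sleep(2)

def long_polling():
    # Long polling: the server holds each request open until it has news
    # (or its own timeout expires), so updates arrive almost instantly.
    while True:
        try:
            print(requests.get(URL, timeout=65).json())
        except requests.exceptions.Timeout:
            continue  # no news in this window; reconnect straight away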

Sending automated alerts through an XMPP server via the command line? (Windows)

I've spent hours trying to figure out the answer to this and just continue to come up empty-handed. I've set up an XMPP server through OpenFire that is fully functional. My goal in creating the server was to have an alert system for when an event completes on my server. For example, when one of my renders is finished rendering (which takes hours, sometimes days), it has the option of running a command when it's done. This command would then run a .bat file telling a theoretical program to send a message via the Broadcast plugin in OpenFire to all parties involved in the render. So it needs to be able to receive parameters such as %N for the name of the render and %L for its label.
I've located two programs that do exactly what I'm looking for, but one does not work (and, from the sounds of the comments, may never have worked), and the second is seemingly Linux-only. The render server is Windows, as is the OpenFire server, so naturally it would not work. Here are the links, though, so you can get an idea:
http://thwack.solarwinds.com/media/40/orion-npm-content/general/136769/xmpp-command-line-client/
http://manpages.ubuntu.com/manpages/jaunty/man1/sendxmpp.1.html
Basically, the command I want to run is identical to that of the first link:
xmppalert.exe -m "%N is complete." %L#broadcast.myserver
This would broadcast to everyone in the label's group that the named render is complete.
If anyone has any idea how to get either of the above programs working, knows of another way, or simply has a better idea of how to accomplish what I'm trying to do, please let me know. This is something that has been eating at me for two days now.
Thanks.
You can take a look at PoshXMPP, which allows you to use XMPP from PowerShell.
http://poshxmpp.codeplex.com/
Alex
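If PoshXMPP doesn't pan out, another route on Windows is a small Python script using the slixmpp library, called from your .bat file with the message and recipient as arguments. The JID, password, and broadcast address below are placeholders; the exact broadcast JID format depends on how the OpenFire Broadcast plugin is configured:

# send_alert.py
# usage: python send_alert.py "%N is complete." %L@broadcast.myserver
import sys
import slixmpp

class AlertBot(slixmpp.ClientXMPP):
    def __init__(self, jid, password, recipient, body):
        super().__init__(jid, password)
        self.recipient = recipient
        self.body = body
        self.add_event_handler("session_start", self.start)

    async def start(self, event):
        # Once the session is up, send one message and disconnect.
        self.send_presence()
        await self.get_roster()
        self.send_message(mto=self.recipient, mbody=self.body, mtype="chat")
        self.disconnect()

if __name__ == "__main__":
    body, recipient = sys.argv[1], sys.argv[2]
    bot = AlertBot("alerts@myserver", "secret", recipient, body)  # placeholder credentials
    bot.connect()
    bot.process(forever=False)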

Bittorrent protocol 'not available'/'end connection' response?

I like being able to use a torrent app to grab the latest TV show so that I can watch it at my leisure. The problem is that the structure of the protocol tends to cause a lot of incoming noise on my connection for some time after I close the client. Since I also like to play online games, this means I have to make sure my torrent client is shut off about an hour before I want to play (depending on how long the tracker advertises me to the swarm). Otherwise I get a horrible connection to the game because of the persistent flood of incoming torrent requests.
I threw together a small Ruby app to watch the incoming requests so I'd know when the uTP traffic let up:
http://pastebin.com/TbP4TQrK
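The Ruby code itself is behind that link, but the idea is just a UDP listener that timestamps whatever arrives on the torrent port. A rough Python equivalent, assuming the torrent client has exited so the port is free (6881 is only a common default):

import socket
import time

PORT = 6881  # assumed torrent listen port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("", PORT))  # only possible once the client has released the port
sock.settimeout(10.0)

last = time.time()
while True:
    try:
        data, (ip, port) = sock.recvfrom(2048)
        last = time.time()
        print("%s  %3d bytes from %s:%d" % (time.strftime("%H:%M:%S"), len(data), ip, port))
    except socket.timeout:
        print("quiet for %.0f seconds" % (time.time() - last))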
The thought occurred to me, though, that there may be some response that I could send to notify the clients that I'm no longer participating in the swarm and that they should stop sending requests. I glanced over the protocol specifications but I didn't find anything of the sort. Does anyone more familiar with the protocol know if there's such a response?
Thanks in advance for any advice.
If a bunch of peers on the internet have your IP and think that you're in their swarm, they will try to contact you a few times before giving up. There's nothing you can do about that. Telling them to stop one at a time would probably end up using more bandwidth than just ignoring the UDP packets would.
Now, there are a few things you can do to mitigate it, though:
Make sure your client sends stopped requests to all its trackers. This is part of the protocol specification and most clients do this. If this is successful, the tracker won't tell anyone about you past that point. But peers remember having seen you, so it doesn't mean nobody will try to connect to you.
Turn off DHT. The DHT acts much like a tracker, except that it doesn't have the stopped message. It will take something like 15-30 minutes for your IP to time out once it's announced to the DHT.
I think it might also be relevant to ask yourself whether these stray incoming 23-byte UDP packets really matter. Presumably you're not flooded by more than a few per second (probably fewer). Have you made any actual measurements, or is it mostly paranoia that makes you wait for them to let up?
I'm assuming you're playing some latency-sensitive FPS, in which case the server will most likely blast you with at least 10-50 full-MTU packets per second, without any congestion control. I would be surprised if you attracted so many BitTorrent connection attempts that they caused any of the game packets to be dropped.
