Any better way to watch an IMAP mailbox for updates? - go

I have tried the IMAP IDLE approach, which works most of the time, but:
Sometimes it misses events, or the status update IDLE reports turns out to be for a delayed message, which confuses my script.
The email provider sometimes closes the IMAP connection; a connection may last only a few minutes.
When a lot of email rushes in, say one message per second, IDLE misses many events.
I know this is probably mostly the email provider's fault, but is there a better way to get email notifications promptly and reliably?
Or should I just do it the hard way and poll the mailbox in a loop?

IDLE doesn't tell you that there is one new message, it tells you that something happened. It may be one new message, or ten, it may be one message being deleted, or ten, or it may be another change. It's up to you to check. (If you want to test how your code handles it, you can cause large changes using UID COPY and EXPUNGE.)
Connections being closed is also your problem to solve. The IMAP server can close a connection (for good or bad reasons), but usually it's done by a NAT middlebox belonging to the customer. Only the client can reconnect to solve the NAT problem, and solving the NAT problem solves the server problem too, as a side effect.
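In code, that advice amounts to a loop that reconnects and re-synchronizes. Below is a rough sketch of that structure in Go; the imapClient interface is hypothetical (substitute the calls from whatever IMAP package you use, e.g. go-imap), so treat it as an outline rather than a drop-in implementation.

    // Sketch of a reconnect-and-resync watcher. Every IDLE wake-up triggers a
    // full re-check of mailbox state instead of assuming "one new message",
    // and the client is responsible for reconnecting when the link drops.
    package mailwatch

    import (
    	"log"
    	"time"
    )

    // imapClient is a hypothetical stand-in for your IMAP library of choice.
    type imapClient interface {
    	SelectInbox() error
    	// WaitForChange blocks (e.g. via IDLE) until the server reports a change
    	// or the window expires; it returns an error only if the connection died.
    	WaitForChange(window time.Duration) error
    	// FetchUIDsSince returns the UIDs of all messages above lastUID.
    	FetchUIDsSince(lastUID uint32) ([]uint32, error)
    	Close() error
    }

    // Watch keeps a connection alive forever, handing every newly seen UID to handle.
    func Watch(dial func() (imapClient, error), handle func(uid uint32)) {
    	var lastUID uint32
    	for {
    		c, err := dial()
    		if err != nil {
    			log.Printf("dial failed: %v; retrying in 30s", err)
    			time.Sleep(30 * time.Second)
    			continue
    		}
    		if err := c.SelectInbox(); err != nil {
    			c.Close()
    			continue
    		}
    		for {
    			// Re-sync first: many changes may have piled up while we were
    			// offline, and one IDLE notification may cover many messages.
    			uids, err := c.FetchUIDsSince(lastUID)
    			if err != nil {
    				break // connection is bad; reconnect
    			}
    			for _, uid := range uids {
    				handle(uid)
    				if uid > lastUID {
    					lastUID = uid
    				}
    			}
    			// Keep the idle window short so NAT boxes and the provider
    			// don't silently drop an apparently dead connection.
    			if err := c.WaitForChange(5 * time.Minute); err != nil {
    				break // reconnect
    			}
    		}
    		c.Close()
    	}
    }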

Related

What exactly does a HTTP or jquery $.ajax timeout mean?

When I issue an $.ajax request with a timeout: parameter, and the timeout is reached so that the error: callback is invoked, what does that mean?
More specifically:
Does that mean the server received the request but is still processing it? That may mean some effect may occur, so I may have to cancel it on the server, or somehow invalidate data that was already partially written to a database.
Or does that mean I was never able to reach the server at all? This is nice to know, since then I don't have to deal with partial data from a server "save".
Or does that mean the request made it part of the way, and now we've lost track of it? In this case, I'd have to actually ask the server, "Oh hey, about that request I sent a while ago... did you get that one? Yeah? Okay, ignore that last save."
OS commands like tracert make it clear there may be many hops for a TCP connection to go through, so if one becomes unresponsive, it's hard to tell whether the request got through or not. But some protocols require an echo-back before something is considered received (so I'm not sure whether HTTP or Apache is involved in this).
It is how long the client will wait to hear from the server before giving up.
The server may or may not have done its part; the only way for the client to know is to hear back from it. Since you don't want to leave a process or a human waiting forever, a timeout specifies how long to wait for success before giving up.
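To make that concrete, here is a minimal, self-contained sketch in Go's net/http (rather than jQuery, purely so it runs on its own): the client's timeout only bounds how long it waits, and the error it gets back says nothing about whether the server never saw the request, is still working on it, or finished after the client gave up.

    package main

    import (
    	"fmt"
    	"net/http"
    	"net/http/httptest"
    	"time"
    )

    func main() {
    	// A test server whose handler deliberately takes longer than the client's timeout.
    	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
    		time.Sleep(2 * time.Second) // pretend to do slow work (e.g. a database write)
    		fmt.Fprintln(w, "done")     // this still runs even though the client gave up
    	}))
    	defer srv.Close()

    	client := &http.Client{Timeout: 500 * time.Millisecond}
    	resp, err := client.Get(srv.URL)
    	if err != nil {
    		// All the client knows is that it stopped waiting; this error alone
    		// cannot distinguish "never reached the server" from "server is
    		// still working" from "server finished after the deadline".
    		fmt.Println("client saw:", err)
    		return
    	}
    	resp.Body.Close()
    }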

Mutt mail program sporadically delivering emails

First of all, I have to say that this is completely my fault. I did a stupid thing: I sent myself the same email 10,000 times from a shell script. Out of curiosity, really. Who hasn't wondered how long it would take their computer to send 10,000 emails? Nobody? Well, I did.
About 600 of these came through within 5 minutes.
Now, whenever I try to send one email from mutt, sometimes it works and sometimes it doesn't- but, when it does make it through, it's usually accompanied by another one or two of those 10,000 emails. Seems like they're still out there, floating around, waiting for me to send another email that they can piggyback on. I've tried sending mails to my own addresses from a few different email providers, and it's just as flaky every time, so I guess that mutt is the problem, rather than yahoo/gmail.
Is there anybody who has encountered a problem like this before and can shed some light on what's going on?
(Using mutt from a terminal on Mac OS X.)
In my experience with e-mail marketing, I do not think this is an issue with mutt. E-mail can take many different paths to reach your server, and if you send 10,000 messages, a few things are likely to happen:
Your IP address and your sender e-mail address get classed as spam.
You send so much traffic that some servers simply give up and never deliver the e-mail.
Your ISP sees a lot of traffic and blocks you.
All of the above.
When it comes to sending e-mail you really have to be careful, because it can be a big pain to get taken off spam lists, not to mention that you can upset clients.
It's also always important to include an opt-out link.
My advice is to take a break and wait a day; the e-mails that have not been dropped will appear over time, as long as they don't get classed as spam (which I suspect a lot of them will be).

Getting specific errors when TCP connections disconnect in Windows

I'm trying to improve the usefulness of the error reporting in a server I am working on. The server uses TCP sockets, and it runs on Windows.
The problem is that when a TCP link drops due to some sort of network failure, the error code I can get from WSARecv() (or the other Windows socket APIs) is not very descriptive. For most network hiccups, I get either WSAECONNRESET (10054) or WSAETIMEDOUT (10060). But there are about a million things that can cause both of these: the local machine is having a problem, the remote machine or process is having a problem, some intermediate router has a problem, etc. This makes things hard for the server operator, who has no definitive way to investigate because they don't necessarily even know where the failure occurred or who might be responsible.
At the IP level, it's a different story. If the server operator happens to have a network sniffer attached when something bad happens, it's usually pretty easy to sort out what went wrong. For instance, if an intermediate router sent an ICMP unreachable, that router puts its own IP address in the packet, and that's usually enough to track it down. Put another way, Windows killed the connection for a reason, probably because it received a specific packet reporting a specific problem.
However, a large number of failures are experienced in the field, unexpectedly, and it is not realistic to keep a network sniffer attached to a production server at all times. There needs to be a way to track down problems that happen only rarely, intermittently, or randomly.
How can I solve this problem programmatically?
Is there a way to get Windows to cough up a more specific error message? Is there some easy way to capture and mine recent Windows events (perhaps the mechanism Microsoft Network Monitor uses)? One way I've "solved" it before is to keep dumpcap (from Wireshark) running in ring buffer mode and force it to stop capturing when a bad event happens, so that I can mine the capture later.
I'm also open to the possibility that this is not the right way to solve the problem. For instance, perhaps there is some special Windows mode that can be turned on to make it log useful information, which a network administrator could then use to track this down after the fact.
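For what it's worth, the dumpcap ring-buffer approach mentioned above can be automated from the serving host. The sketch below (in Go) starts dumpcap with its ring-buffer options and stops it when the application decides a suspicious socket error has occurred; the interface index, output path, and trigger condition are placeholders to adapt.

    package main

    import (
    	"log"
    	"os/exec"
    )

    func startRingCapture() *exec.Cmd {
    	// 10 files x ~10 MB rolling window; dumpcap overwrites the oldest file.
    	cmd := exec.Command("dumpcap",
    		"-i", "1", // capture interface index (list them with `dumpcap -D`)
    		"-b", "files:10",
    		"-b", "filesize:10240", // per-file size in KB
    		"-w", `C:\captures\server_ring.pcapng`,
    	)
    	if err := cmd.Start(); err != nil {
    		log.Fatalf("could not start dumpcap: %v", err)
    	}
    	return cmd
    }

    func main() {
    	capture := startRingCapture()

    	// ... run the server; when WSARecv reports WSAECONNRESET or WSAETIMEDOUT
    	// on a connection you care about, freeze the evidence:
    	suspiciousErrorSeen := true // placeholder for the real trigger
    	if suspiciousErrorSeen {
    		// Kill is abrupt (TerminateProcess on Windows), so the newest file may
    		// be truncated, but the earlier ring files are intact. Copy the ring
    		// directory somewhere safe before restarting the capture.
    		if err := capture.Process.Kill(); err != nil {
    			log.Printf("failed to stop dumpcap: %v", err)
    		}
    		capture.Wait()
    	}
    }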

How does Google Docs autosave work?

Okay, I know it sounds generic. But I mean on an AJAX level. I've tried using Firebug to track the NET connections and posts and it's a mystery. Does anyone know how they do the instant autosave constantly without DESTROYING the network / browser?
My guess (and this is only a guess) is that Google uses a push service. This seems like the most viable option, given that their chat client (which is also integrated within the window) uses the same approach to deliver "real time" messages with minimal latency.
I'm betting they have a whole setup that manages everything connection-related and sends flags to trigger specific elements. You won't see connection triggers because the initial page visit establishes the connection, which then just stays open for the entire duration you have the page open. E.g.:
You visit the page
The browser establishes a connection to, for example, api.docs.google.com, and that connection remains open
The client-side code then sends various commands and receives an assortment of responses.
These commands are sent back and forth until you either:
Lose the connection (timeout, etc.) in which case it's re-established
The browser window is closed
An example of how I picture a typical exchange:
SERVER:                                  CLIENT:
-------                                  -------
                                         DOC_FETCH mydocument.doc
DOC_CONTENT mydocument.doc 15616 ...
                                         DOC_AUTOSAVE mydocument.doc 24335 ...
                                         IM collaboratorName Hi Joe!
IM_OK collaboratorName OK
AUTOSAVE_OK mydocument.doc OK
Here the DOC_FETCH command is the client saying "I want the data." The server replies with the corresponding DOC_CONTENT <docname> <length> <contents>, and the client later triggers DOC_AUTOSAVE <docname> <length> <content>. Given the number of potential simultaneous requests, I would bet they keep some "context" in the requests/responses so that after something is sent, its reply can be matched up. In this example, the client knows IM_OK matches the second request (IM) and AUTOSAVE_OK matches the first request (AUTOSAVE), something like how AOL's IM protocol works.
Again, this is only a guess.
--
To check this, use something like Ethereal (now Wireshark) and see if you can watch the information transferring in the background.
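To make the matching idea concrete, here is a toy sketch (in Go, and emphatically not Google's actual protocol): each outgoing command carries a sequence number, responses echo it back, and that is what lets several requests be in flight at once over a single long-lived connection. The command names simply mirror the made-up exchange above.

    package main

    import "fmt"

    type command struct {
    	Seq  int
    	Kind string // e.g. "DOC_AUTOSAVE", "IM"
    	Body string
    }

    type response struct {
    	Seq  int
    	Kind string // e.g. "AUTOSAVE_OK", "IM_OK"
    }

    func main() {
    	pending := map[int]command{} // in-flight commands, keyed by sequence number
    	nextSeq := 1

    	send := func(kind, body string) {
    		cmd := command{Seq: nextSeq, Kind: kind, Body: body}
    		pending[cmd.Seq] = cmd
    		nextSeq++
    		// A real client would write this to the open connection.
    		fmt.Printf("-> %d %s %s\n", cmd.Seq, cmd.Kind, cmd.Body)
    	}

    	// Two commands in flight at once, as in the exchange above.
    	send("DOC_AUTOSAVE", "mydocument.doc 24335 ...")
    	send("IM", "collaboratorName Hi Joe!")

    	// Responses may arrive in any order; the sequence number pairs them up.
    	for _, resp := range []response{{Seq: 2, Kind: "IM_OK"}, {Seq: 1, Kind: "AUTOSAVE_OK"}} {
    		cmd := pending[resp.Seq]
    		delete(pending, resp.Seq)
    		fmt.Printf("<- %s answers %s (seq %d)\n", resp.Kind, cmd.Kind, cmd.Seq)
    	}
    }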

Bittorrent protocol 'not available'/'end connection' response?

I like being able to use a torrent app to grab the latest TV show so that I can watch it at my leisure. The problem is that the structure of the protocol tends to cause a lot of incoming noise on my connection for some time after I close the client. Since I also like to play online games sometimes, this means I have to make sure my torrent client is shut off about an hour (depending on how long the tracker advertises me to the swarm) before I want to play a game. Otherwise I get a horrible connection to the game because of the persistent flood of incoming torrent requests.
I threw together a small Ruby app to watch the incoming requests so I'd know when the UTP traffic let up:
http://pastebin.com/TbP4TQrK
The thought occurred to me, though, that there may be some response that I could send to notify the clients that I'm no longer participating in the swarm and that they should stop sending requests. I glanced over the protocol specifications but I didn't find anything of the sort. Does anyone more familiar with the protocol know if there's such a response?
Thanks in advance for any advice.
If a bunch of peers on the internet have your IP and think you're in their swarm, they will try to contact you a few times before giving up. There's nothing you can do about that. Telling them to stop one at a time would probably end up using more bandwidth than just ignoring the UDP packets does.
Now, there are a few things you can do to mitigate it:
Make sure your client sends stopped requests to all its trackers (a sketch of what such an announce looks like follows after this list). This is part of the protocol specification and most clients do this. If this is successful, the tracker won't tell anyone about you past that point. But peers remember having seen you, so it doesn't mean nobody will try to connect to you.
Turn off DHT. The DHT acts much like a tracker, except that it doesn't have the stopped message. It will take something like 15-30 minutes for your IP to time out once it's announced to the DHT.
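For reference, a stopped announce to a plain HTTP tracker is just a GET with event=stopped among the standard announce parameters. The sketch below (in Go) only builds and prints such a URL; the tracker address, info_hash, and peer_id are placeholders a real client would fill in from the torrent it was sharing.

    package main

    import (
    	"fmt"
    	"net/url"
    )

    func main() {
    	announce := "http://tracker.example.com/announce" // placeholder tracker URL

    	params := url.Values{}
    	params.Set("info_hash", string(make([]byte, 20))) // 20-byte SHA-1 of the torrent's info dict (placeholder zeros)
    	params.Set("peer_id", "-XX0001-abcdefghijkl")     // 20-byte peer ID (placeholder)
    	params.Set("port", "6881")
    	params.Set("uploaded", "0")
    	params.Set("downloaded", "0")
    	params.Set("left", "0")
    	params.Set("event", "stopped") // asks the tracker to drop this peer from the swarm

    	// A real client would issue this as an HTTP GET and then shut down.
    	fmt.Println(announce + "?" + params.Encode())
    }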
I think it might also be relevant to ask yourself whether these stray incoming 23-byte UDP packets really matter. Presumably you're not flooded by more than a few per second (probably fewer). Have you made any actual measurements, or is it mostly paranoia that makes you wait for them to let up?
I'm assuming you're playing some latency-sensitive FPS, in which case the server will most likely blast you with at least 10-50 full-MTU packets per second, without any congestion control. I would be surprised if you attract so many bittorrent connection attempts that they cause any of the game packets to be dropped.
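If you want to make that measurement, a few lines of Go will do it: after closing the torrent client, listen on the port it was using and log each stray incoming UDP packet with its size and source, so you can see the actual rate and how quickly it tails off. The port below is a common default and just a placeholder.

    package main

    import (
    	"fmt"
    	"log"
    	"net"
    	"time"
    )

    func main() {
    	conn, err := net.ListenPacket("udp", ":6881") // the port your torrent client advertised
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer conn.Close()

    	buf := make([]byte, 1500)
    	for {
    		n, addr, err := conn.ReadFrom(buf)
    		if err != nil {
    			log.Fatal(err)
    		}
    		// Timestamp, size, and source of every stray packet.
    		fmt.Printf("%s  %3d bytes from %s\n", time.Now().Format("15:04:05"), n, addr)
    	}
    }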
