Is there a detailed specification for Outlook 365 IMAP? Which RFCs does it comply with?

Does anyone know if there is a specification for Outlook 365 that describes its IMAP implementation in detail?
In particular, which RFCs does it comply with? For example, there are many updates to RFC 3501; RFC 3501 lists these RFCs in its "Updated by" section: 4466, 4469, 4551, 5032, 5182, 5738, 6186, 6858, 7817, 8314, 8437, 8474.
In addition, I'm having problems with how Outlook manages IMAP folders. Outlook sends the LIST command but not the LSUB command. LSUB is only sent manually, when the IMAP Folders option in Outlook is used. This is different from how other IMAP clients work.

Outlook 365, like all IMAP servers, advertises its extensions when you connect to it. Here's an example where I send the capability command to ask it:
$ openssl s_client -connect outlook.office365.com:993 -crlf
[…]
* OK The Microsoft Exchange IMAP4 service is ready. [Zm5vcmQK]
a capability
* CAPABILITY IMAP4 IMAP4rev1 AUTH=PLAIN AUTH=XOAUTH2 SASL-IR UIDPLUS MOVE ID UNSELECT CHILDREN IDLE NAMESPACE LITERAL+
a OK CAPABILITY completed.
The server almost certainly advertises more extensions after login. IANA maintains a registry mapping capability names to RFCs.
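As a hedged sketch (parse_capabilities is a made-up helper, not part of any IMAP library), turning such a CAPABILITY line into a set makes it easy to test for a given extension:

```python
def parse_capabilities(line: str) -> set[str]:
    # An untagged response looks like: "* CAPABILITY <name> <name> ..."
    parts = line.split()
    if parts[:2] != ["*", "CAPABILITY"]:
        raise ValueError("not an untagged CAPABILITY response")
    return set(parts[2:])

# The pre-login line from the transcript above:
caps = parse_capabilities(
    "* CAPABILITY IMAP4 IMAP4rev1 AUTH=PLAIN AUTH=XOAUTH2 SASL-IR "
    "UIDPLUS MOVE ID UNSELECT CHILDREN IDLE NAMESPACE LITERAL+"
)
```

Checking e.g. `"UIDPLUS" in caps` against the IANA registry then tells you what the server claims to support at that stage of the session.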

I found that this document
https://learn.microsoft.com/en-us/openspecs/exchange_standards/ms-stanoimap/9e26aea5-bb27-40d2-be9a-c82878c7d567
provides the best spec for how Outlook's IMAP differs from the standard.
I found that Outlook requires the XLIST command to be used in place of LSUB for folders to be displayed. This is strange, since XLIST is NOT listed in the CAPABILITY string.


Time-Stamping servers API (what signtool uses)

What is the input that (code signing) time stamping servers expect? And what format is the reply?
I've searched VeriSign's site (and more) but found nothing but directions to use tools like signtool. What I want is the ability to create something like signtool from scratch.
To create signtool from scratch, start with the osslsigncode source, as it does roughly the same thing.
For RFC 3161 timestamp servers (supported by signtool), the request and response formats are publicly documented and supported by OpenSSL. If you use Wireshark to examine the network packets sent by signtool to the timestamp server, you'll find them very similar to the output of openssl ts.
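To get a concrete feel for the request format, here is a minimal, hedged sketch that hand-encodes an RFC 3161 TimeStampReq (SHA-256 imprint, certReq TRUE) in pure Python. In practice you would use `openssl ts -query -data file -sha256 -cert` or a proper ASN.1 library; the helper names below are made up for illustration.

```python
import hashlib

def der_len(n: int) -> bytes:
    # Short-form DER length is enough for this small structure.
    assert n < 128
    return bytes([n])

def tlv(tag: int, content: bytes) -> bytes:
    # Encode one DER tag-length-value element.
    return bytes([tag]) + der_len(len(content)) + content

def timestamp_request(data: bytes) -> bytes:
    """Build an RFC 3161 TimeStampReq for `data` (SHA-256, certReq=TRUE)."""
    digest = hashlib.sha256(data).digest()
    sha256_oid = bytes.fromhex("0609608648016503040201")  # OID 2.16.840.1.101.3.4.2.1
    alg_id = tlv(0x30, sha256_oid + bytes.fromhex("0500"))  # AlgorithmIdentifier + NULL params
    imprint = tlv(0x30, alg_id + tlv(0x04, digest))         # MessageImprint
    version = bytes.fromhex("020101")                       # version INTEGER 1
    cert_req = bytes.fromhex("0101FF")                      # certReq BOOLEAN TRUE
    return tlv(0x30, version + imprint + cert_req)          # TimeStampReq SEQUENCE
```

The resulting bytes are what gets POSTed to the server with Content-Type application/timestamp-query; the reply is a DER-encoded TimeStampResp.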

What is the purpose of a SMTP VRFY Scanner?

I need some assistance with these types of scanners; there seem to be many of them on the web, but I can't seem to find specific details of what they are meant to achieve.
I understand that they are communicating on the SMTP port, but I am not certain what type of information they are trying to get.
The reason I ask is that I am currently investigating an SMTP VRFY scanner. I have made the scanner connect to a Windows XP system, and it reports
Waiting for SMTP banner
220 testing221 Microsoft ESMTP MAIL Service, Version: 6.0.2600.2180 ready at Sun, 27 Sep 2015 19:04:44 +0100
testing221 corresponds to the domain on the SMTP virtual server on the XP system.
The SMTP VRFY command is intended to allow a sender to verify the correctness of an email address without actually sending an email.
This feature was abused by spammers very early on. As a result, most SMTP servers are configured to ignore the command.
These scanners are effectively useless on the public internet these days. You will find very few, if any, domains configured to support the command.
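To see what such a scanner does on the wire, here is a hedged sketch (probe_vrfy is a made-up helper) using Python's standard smtplib, whose verify() method wraps the VRFY command:

```python
import smtplib

def probe_vrfy(host: str, address: str, port: int = 25) -> tuple[int, bytes]:
    """Send VRFY for `address` and return the server's (code, message) reply.

    250/251 means the server confirmed the address exists, 252 means it
    refuses to confirm or deny (the common hardened configuration), and a
    5xx code means the command is rejected outright.
    """
    with smtplib.SMTP(host, port, timeout=10) as smtp:
        return smtp.verify(address)
```

A scanner simply loops this over a list of candidate usernames, harvesting any address the server confirms.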

HTTPS with ECDHE-ECDSA-AES256-GCM-SHA384 on Windows 2012

I have been a long time reader but this is my first real post on a topic that I couldn't find a solution to.
I am currently hosting a website on Windows 2012 that I would like to get the latest TLS 1.2 ciphersuites running on.
I am aware of how to enable TLS 1.1 and TLS 1.2 in Windows and have done so (via registry edits). I have also changed the cipher order to what I would like it to be.
My question is: how do I actually set up the ECDHE/ECDSA portion of the cipher suite after this step?
When I view the site in the latest Chrome beta (which supports ECDHE and ECDSA in TLS 1.2, provided you use the supported curves), it seems to skip all of the ECDHE ciphersuites.
Is there something else I need to do to get ECDHE/ECDSA properly enabled?
I have read around on the net trying to solve this myself, and some sources mention making copies of your root cert and then modifying them to somehow support ECDHE. Am I barking up the wrong tree?
Thank you in advance for any and all support with this issue.
Edit: adding clarification/progress
After more research, I have found that in order to get ECDSA to work, you need an ECDSA certificate. The only way to get one at this time is to self-sign, as the cert-cartel has not yet come up with proper cross-licensing agreements and fee structures for Elliptic Curve certificates.
Since self-signing is not an option for this site, I have removed all ECDSA suites from the cipher order.
Unfortunately, because all of the AES Galois/Counter Mode (GCM) suites were also ECDSA, this rules those out for the time being.
This leaves me with a strongest cipher suite of ECDHE_RSA_WITH_AES_256_CBC_SHA384_P521, which I BELIEVE is supported by the latest Chrome beta, correct? I can't seem to get Chrome to pick up anything beyond SHA-1. Is there no SHA-2 support, even in the latest beta?
AES-GCM is about how you encrypt the data on your connection; ECDSA or RSA is about how the server identifies itself to the client. There is therefore no reason why you couldn't do AES-GCM encryption with RSA authentication.
RFC 5289 defines the suites needed for that:
https://www.rfc-editor.org/rfc/rfc5289#section-3.2
CipherSuite TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 = {0xC0,0x2F};
CipherSuite TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 = {0xC0,0x30};
CipherSuite TLS_ECDH_RSA_WITH_AES_128_GCM_SHA256 = {0xC0,0x31};
CipherSuite TLS_ECDH_RSA_WITH_AES_256_GCM_SHA384 = {0xC0,0x32};
It's not, however, necessarily easy to find both a client and a server that support them.
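As a quick sanity check that your TLS stack recognizes these suites at all, an OpenSSL-backed Python interpreter can be asked directly (the strings are OpenSSL's spellings of the RFC 5289 names quoted above):

```python
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
# OpenSSL names for TLS_ECDHE_RSA_WITH_AES_{256,128}_GCM_SHA{384,256}
ctx.set_ciphers("ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256")
enabled = [c["name"] for c in ctx.get_ciphers()]
```

If set_ciphers() raises ssl.SSLError, the local OpenSSL build doesn't know the suite at all; whether a given Windows Schannel version actually offers it on the server side is a separate question.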
I had similar experiences with Windows 2008 R2.
Depending on the certificate, the GCM ciphers are offered by the server or not.
With a self-signed ECDSA certificate I got GCM to work, but older browsers
or Windows XP can't connect to such an HTTPS site.
Windows doesn't support any TLS_ECDHE_RSA...GCM... ciphers:
http://msdn.microsoft.com/en-us/library/aa374757(v=vs.85).aspx
Thus normal RSA certificates don't work with GCM under Windows.
Browser compatibility:
http://www.g-sec.lu/sslharden/SSL_comp_report2011.pdf

How can I implement custom verification of an SSL certificate in Ruby's SSLServer?

I'm using SSL to form a trusted connection between two peers. Each peer knows who it expects to be connecting to (or accepting a connection from) at a given time. It should only accept valid certificates, and further, it should only accept certificates with certain attributes (probably by checking the canonical name).
So far, I can get the two sides to talk, based on the example in this question, and its answer. Each side can print out the certificate presented by the other peer.
I'm not sure what the correct way to verify these certificates is, though. The obvious way would be to just look at the certificates after the connection is made and drop the connection if it doesn't meet our expectations.
Is there a more correct way to do this? Is there a callback which is given the peer's presented certificate and can give it a thumbs-up or thumbs-down? Or is the right thing to handle it after SSL is done with its work?
In this case, I am the CA, so trusting the CA isn't an issue. I'm
signing these certificates on behalf of my users. The canonical names
aren't even domain names. Users connect peer-to-peer. I want the
client software I distribute to verify that the connecting user has a
certificate I signed and is the right user.
It sounds like you are running a private PKI. Just load the root of the trust chain into OpenSSL with SSL_CTX_load_verify_locations or SSL_load_verify_locations.
Be sure to use SSL_VERIFY_PEER to ensure OpenSSL performs the verification. The call would probably look like SSL_CTX_set_verify(ctx, SSL_VERIFY_PEER, NULL);. If peer validation fails, then the connect will fail.
There are ways to let the connect succeed and then catch the error later. The trick is to set the verify callback, have the verify callback always return 1, and then call SSL_get_verify_result after the connection is set up. See SSL/TLS Client for an example.
Note: in all cases, you still have to perform name checking manually. OpenSSL currently does not do it (it's in HEAD for OpenSSL 1.1.0). See libcurl or PostgreSQL for some code you can rip.
An example of an SSL/TLS client is provided by OpenSSL on its wiki. See SSL/TLS Client. There's no server code or example at the moment.
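The same pattern carries over to other stacks. As a hedged illustration in Python's ssl module (the question's code is Ruby, and cn_matches / connect_checked are made-up names, but the shape is identical): verify the chain against your private root during the handshake, then check the CN yourself afterwards.

```python
import socket
import ssl

def cn_matches(peercert: dict, expected_cn: str) -> bool:
    # getpeercert() returns the subject as a tuple of RDN tuples.
    for rdn in peercert.get("subject", ()):
        for key, value in rdn:
            if key == "commonName":
                return value == expected_cn
    return False

def connect_checked(host: str, port: int, ca_file: str, expected_cn: str) -> dict:
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.load_verify_locations(ca_file)  # trust only the private CA
    ctx.check_hostname = False          # CNs here are user IDs, not hostnames
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock) as tls:
            cert = tls.getpeercert()    # chain was verified during the handshake
            if not cn_matches(cert, expected_cn):
                raise ssl.SSLError("unexpected peer CN")
            return cert
```

Dropping the connection after a failed post-handshake check, as the question suggests, is a legitimate design; the handshake-time verification still guarantees the certificate was signed by your CA.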
I'm not sure what the correct way to verify these certificates is, though.
The obvious way would be to just look at the certificates after the
connection is made and drop the connection if it doesn't meet our
expectations.
There's a lot to this, and some of it is not obvious. I'm going to break the answer up into parts, but all the parts try to answer your question.
First, you can verify the certificates are well formed. The group responsible in the context of the Web is the CA/Browser Forum. It has baseline and extended requirements for creating certificates:
Baseline Certificate Requirements, https://www.cabforum.org/Baseline_Requirements_V1_1_6.pdf
Extended Validation Certificate Requirements, https://www.cabforum.org/Guidelines_v1_4_3.pdf
In the baseline docs, you will find, for example, that an IP listed as the Common Name (CN) must also be listed in the Subject Alternative Names (SAN). In the extended docs, you will find that private IPs (reserved per RFC 1918) cannot be present in an extended validation (EV) certificate, and EV certificates cannot contain wildcards.
Second, you can perform customary validation according to RFC 5280, Internet X.509 Public Key Infrastructure Certificate and Certificate Revocation List (CRL) Profile, http://www.ietf.org/rfc/rfc5280.txt.
The customary checks are the ones like hostname matching, time period validity checks, and verifying an end-entity or leaf certificate (client or server certificate) chains back to a root. In browsers using CAs, that's any number of hundreds of trusted roots or intermediates.
If you choose to perform revocation checking, then you will probably DoS your application (how is that for obvious!). A mobile client on a 3G network cannot download and process a 30MB CRL - it will surely hang the application. And an application cannot perform an OCSP query when the URL is wrong - that will surely fail.
Also, if you are performing hostname matching that includes wildcards, then care must be taken to handle ccTLDs properly. ccTLDs are like *.eu, *.us, or இலங்கை (nic.lk). There are some 5,000 or so of them, and Mozilla offers a list at http://publicsuffix.org/ (alternately, https://mxr.mozilla.org/mozilla-central/source/netwerk/dns/effective_tld_names.dat?raw=1).
Third, CAs don't warrant anything, so the answers you get from a CA are worthless. If you don't believe me, then check their Certification Practice Statement (CPS). For example, here is an excerpt from Apple's Certification Authority Certification Practice Statement (18 Sept 2013, page 6):
2.4.1. Warranties to Subscribers
The AAI Sub-CA does not warrant the use of any Certificate to any Subscriber.
2.4.2. CA disclaimers of warranties
To the extent permitted by applicable law, Subscriber agreements, if applicable,
disclaim warranties from Apple, including any warranty of merchantability or
fitness for a particular purpose
That means they don't warrant the binding of the public key to the organization through the issuer's signature. And that's the whole purpose of X.509!
Fourth, DNS does not provide authentic answers. So you might get a bad answer from DNS and happily march over to a server controlled by your adversary. Or, 10 of the 13 root DNS servers under US control may collude to give you a wrong answer in the name of US national security.
Trying to get an authentic response from a non-US server is near impossible. The "secure DNS" pieces (sans DNSSEC) are still evolving, and I'm not aware of any mainstream implementations.
In the case of colluding US servers, a quorum won't work because the US holds an overwhelming majority.
The problem here is that you are making security decisions based on input from external services (CA and DNS). Essentially, you are conferring too much trust in untrustworthy actors.
A great treatment of the problems with PKI and PKIX is Dr. Peter Gutmann's Engineering Security at www.cs.auckland.ac.nz/~pgut001/pubs/book.pdf. Be sure to read Chapters 1 and 6. Dr. Gutmann has a witty sense of humor, so it's not dry reading. Another great book is Ross Anderson's Security Engineering at http://www.cl.cam.ac.uk/~rja14/book.html.
You have a couple of defenses with all the problems caused by PKI, PKIX, and CAs. First, you can run a private PKI where you are your own certificate authority. In this case, you are not trusting an outsider. Bad DNS answers and rogue servers should be caught because the server's certificate will not form a valid chain.
Second, you can employ a security diversification strategy. Gutmann writes about it in his Engineering Security book, and you should visit "Security through Diversity" starting on page 292 and the "Risk Diversification for Internet Applications" section on page 296.
Third, you can employ a Trust-On-First-Use (TOFU) or Key Continuity strategy. This is similar to Wendlandt, Andersen and Perrig's Perspectives: Improving SSH-style Host Authentication with Multi-Path Probing, or SSH's StrictHostKeyChecking option. In this strategy, you do the customary checks and then pin the certificate or public key. You can also ask for others' views of the certificate or public key. Unexpected certificate or key changes should set off alarm bells.
OWASP has a treatment of Certificate and Public Key Pinning at https://www.owasp.org/index.php/Certificate_and_Public_Key_Pinning. Note: some places rotate their certificates every 30 days or so, so you should probably pin public keys if possible. The list of frequent rotators includes Google, and it's one of the reasons tools like Certificate Patrol make so much noise.
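A minimal sketch of the pinning check itself (matches_pin is a made-up helper; it fingerprints the DER-encoded certificate, e.g. from Python's getpeercert(binary_form=True), though per the note above pinning the public key survives routine certificate rotation better):

```python
import hashlib

def matches_pin(der_cert: bytes, known_pins: set[str]) -> bool:
    # Compare the SHA-256 fingerprint of the peer's DER-encoded certificate
    # against the set of pins recorded on first use (TOFU) or shipped with
    # the application.
    return hashlib.sha256(der_cert).hexdigest() in known_pins
```

On first contact you record the fingerprint; on every later connection an unexpected fingerprint should abort the connection and set off alarm bells.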

Format of EML files used by System.Net.Mail.MailMessage and Microsoft SMTP Server

I'm trying to wrap my head around the EML files I see generated by System.Net.Mail.MailMessage and generated or consumed by Microsoft's SMTP Server. I've been reading RFCs 5322 and 5321 and trying to make sense of the format.
Granted, the majority of the EML files I see adhere to the message format described in 5322 (or 2822 or 822; how well MS stuck to the standards, I don't know). However, I can't quite decide whether the top portion of the file (the X-Sender and X-Receiver lines) constitutes the "envelope" as described by 5321.
I guess my questions are:
Is there documentation for the portion of this file with X-Sender/X-Receiver lines (above the message contents)?
Are there other "commands" that can be expected in this section?
Is this a "standard" across the board? i.e. can I expect an EML file that is generated by System.Net.Mail.MailMessage to be parsed correctly in any standard SMTP server?
No, there isn't any documentation. This is something done only by the IIS SMTP service, and there aren't any other commands that I'm aware of.
However, the email is still RFC 2822 compliant. It just prepends the message with some X-headers that are still RFC compliant but are recognized by the IIS SMTP service.
The IIS SMTP service will use the X-Sender value as the SMTP MAIL FROM value, and the X-Receiver value as the RCPT TO value.
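Since those X-headers are syntactically ordinary RFC 2822 header fields, any MIME parser handles them. A hedged sketch in Python (the sample message is fabricated for illustration):

```python
from email import message_from_string

# Hypothetical pickup-directory EML with the IIS-specific envelope headers
# prepended above the normal RFC 2822 message headers.
raw = (
    "X-Sender: alice@example.com\r\n"
    "X-Receiver: bob@example.com\r\n"
    "X-Receiver: carol@example.com\r\n"
    "From: alice@example.com\r\n"
    "To: bob@example.com\r\n"
    "Subject: test\r\n"
    "\r\n"
    "Body\r\n"
)
msg = message_from_string(raw)
mail_from = msg["X-Sender"]            # IIS uses this as SMTP MAIL FROM
rcpt_to = msg.get_all("X-Receiver")    # one header per RCPT TO recipient
```

A standards-compliant receiving server would simply treat unknown X-headers as opaque message headers, so such a file still parses everywhere, but only IIS SMTP interprets them as the envelope.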
