Regarding the syntax of hostnames, answers to questions like this often refer to RFC 1123 and RFC 952, but fail to mention RFC 921 which seems to place additional restrictions on hostnames. There are probably a bunch of later RFCs about the DNS (and IDN) which cover constraints on hostnames handled by the DNS.
There is a lot of confusion around the valid syntax of hostnames in general versus hostnames handled by the DNS.
Which RFCs specify the syntax requirements on hostnames and which RFCs specify additional constraints on the hostnames handled by the DNS?
You're correct to cite RFC 1123 and RFC 952, but you've omitted RFC 2181 "Clarifications to the DNS Specification". Specifically, §11 contains this text:
... any binary string whatever can be used as the label of any resource record.
Since a "hostname" is a domain name that has an A record, this text would appear to allow any valid domain name to also be considered a valid hostname.
A couple of years ago I asked one of the authors of this text whether that was the intended interpretation and he confirmed that it was. However that view is not widely accepted and there is still no universally agreed answer within the DNS community to your question of what makes a legal hostname.
P.S. You've misread RFC 1123: at no point does it say that 63 and 255 are lower limits on labels and domain names; they are upper limits. The 63-octet label limit is actually enforced by the DNS wire format, which reserves only 6 bits for the length of a label.
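To make the distinction concrete, here is a minimal sketch of the stricter "classic" RFC 952/1123 letters-digits-hyphen rules (the function name and the 253-character presentation-form limit derived from the 255-octet wire limit are my own choices, not text from the RFCs):

```python
import re

# One label: 1-63 chars, letters/digits/hyphen, no leading/trailing hyphen
LDH_LABEL = re.compile(r"^(?!-)[A-Za-z0-9-]{1,63}(?<!-)$")

def is_classic_hostname(name):
    """Check a name against the traditional RFC 952/1123 LDH rules.

    Note: per the RFC 2181 reading discussed above, the DNS itself
    accepts arbitrary binary labels; this only tests the stricter
    "classic hostname" convention.
    """
    name = name.rstrip(".")            # tolerate a trailing root dot
    if not name or len(name) > 253:    # 255-octet wire limit ~= 253 text chars
        return False
    return all(LDH_LABEL.match(label) for label in name.split("."))
```

Names like `under_score.example` fail this check yet are perfectly valid domain names under the RFC 2181 reading, which is exactly the disagreement described above.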
You could take a look at RFC 1035. It is a purely DNS-focused RFC and explains some of these limitations.
I can't find anything about the subprotocols used with WebSockets. What is the difference between the "chat" and "superchat" subprotocols mentioned in RFC 6455, and where can I find the RFCs for "chat" and "superchat"? Or are they just placeholders?
Neither protocol actually exists, so there's no real difference between them.
These "protocols" were just example names for possible Sec-WebSocket-Protocol header values.
It's much the same as using foo and bar as example names, except they chose chat and superchat.
As previously mentioned, those two names were just meant as examples. The protocol header itself is completely optional, but is meant to provide a way to document what sort of dialogue/application the connection is supposed to handle.
In what seems to me perhaps a bit of over-engineering or premature bureaucracy, the relevant but "non-normative" section of the RFC in fact goes to some length to establish an IANA registry of WebSocket subprotocols. What seems to confuse people is that chat and superchat are not listed there.
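The negotiation itself is simple: the client lists the subprotocols it can speak, and the server echoes back at most one of them. A sketch of the server-side selection (function name and shapes are mine, not from RFC 6455):

```python
def pick_subprotocol(offer_header, server_supported):
    """Pick the first client-offered subprotocol the server supports.

    offer_header is the raw Sec-WebSocket-Protocol request value,
    e.g. "chat, superchat". Returns None if nothing matches, in which
    case the server simply omits the header from its handshake reply.
    """
    for proto in (p.strip() for p in offer_header.split(",")):
        if proto in server_supported:
            return proto
    return None
```

So a server that actually implemented something called "superchat" would answer a `Sec-WebSocket-Protocol: chat, superchat` offer with `Sec-WebSocket-Protocol: superchat` — but nothing in the RFC defines what either protocol would do.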
Most of the examples provided for sending SNMP traps are simple ones like the one below.
snmptrap -v 1 -c public host TRAP-TEST-MIB::demotraps localhost 6 17 '' \
SNMPv2-MIB::sysLocation.0 s "Just here"
Take any MIB file: they contain many complex object groups. For example, systemGroup contains sysLocation, sysName, etc.
Could someone provide examples showing how to send SNMP traps that include such OBJECT-GROUPs? One more question: does SNMPTRAPD support internationalization?
It is really bad practice to define an SNMP notification (trap or inform) in such a way that it contains an entire OBJECT GROUP, or even worse an entire SNMP table. One reason is that you don't really need all those variables anyway. The other is that the packet/PDU is limited by MTU size, so you may not be able to send the data within a single UDP packet.
The proper approach is to include only a few varbinds in the notification; if you need more detail when you receive such a trap, you can initiate a polling cycle to find out what happened.
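To make the MTU argument concrete, here is a back-of-the-envelope sketch. All sizing constants are my own rough approximations, not exact BER encoding math:

```python
def approx_varbind_size(oid, value):
    # Crude BER estimate: ~1 byte per OID sub-identifier plus a few
    # bytes of type/length headers; the value is sized as its text length.
    return len(oid.split(".")) + len(str(value)) + 10

def fits_one_datagram(varbinds, mtu=1500, header_overhead=100):
    # header_overhead loosely covers IP + UDP + SNMP message headers.
    payload = sum(approx_varbind_size(o, v) for o, v in varbinds)
    return payload <= mtu - header_overhead
```

A handful of varbinds fits a 1500-byte Ethernet MTU with room to spare, while a whole interface table's worth of rows easily will not — which is why the polling-on-demand pattern is preferred.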
SNMPTRAPD, and the NET-SNMP library in general, do not support internationalization (Unicode). The library is limited to the ASCII charset only.
There are commercial products on the market, including NetDecision TrapVision and some others, that fully support UTF-8 internationalization.
I have been starting to see uggc/uggcf (rot-13 encoded http/https) links show up in our system.
Are these worth supporting, is there actually a demand for it? The IETF document (link) has not been touched since 2001 and I cannot find much information on them at all.
Is there an area of the world where this is more common? I've only noticed them since we went world-wide.
The document describes it as a method to 'secure' the URL as well as the data. What is the value of ROT13-encoding the data if it can be reversed without a key? HTTPS handles all of this, except for hiding the domain itself.
I know this is an old question, but this is an interesting topic, so I'll try to answer it:
The "Encrypted Hypertext Transfer Protocol -- UGGC/1.0" specification is an April Fools' RFC. The IETF publishes these almost every year on 1 April. ROT13 "encryption" would be pointless: the algorithm has no key, so anyone who knows it can "decrypt" the message, in this case the URL.
So no, it's not worth supporting, and it does not provide any "serious" protection. The only usage I've seen is in some CTFs or hacking/crypto challenges.
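The keylessness is easy to demonstrate — applying ROT13 twice returns the original, so anyone can reverse it:

```python
import codecs

url = "https://example.com/secret"
encoded = codecs.encode(url, "rot_13")
print(encoded)                         # uggcf://rknzcyr.pbz/frperg
# Applying ROT13 again recovers the original -- no key required,
# which is exactly why it offers no real protection.
assert codecs.encode(encoded, "rot_13") == url
```

Note that digits and punctuation pass through unchanged; only the 26 letters are rotated, which is why `uggc`/`uggcf` map so recognizably onto `http`/`https`.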
Does there exist an FTP RFC specification newer than RFC 959?
Several extensions to FTP were presented in later RFCs (rfc2228, rfc2389, rfc3659, rfc4217 and maybe some more), and they are enough for all current FTP-related needs. The only things not adopted as a standard but relatively widespread are implicit FTPS on port 990 and Mode Z compression.
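In practice, RFC 2389's FEAT command is how a client discovers which of those later extensions a server actually implements. A sketch parsing a FEAT reply (the reply text below is made up for illustration):

```python
def parse_feat_reply(reply):
    """Extract feature names from a multi-line FEAT response (RFC 2389)."""
    lines = reply.splitlines()
    # Skip the "211-Features:" opening line and the "211 End" closer;
    # each feature line is indented by one space and may carry parameters.
    return [line.strip().split(" ")[0] for line in lines[1:-1]]

reply = ("211-Features:\n"
         " MDTM\n"
         " SIZE\n"
         " MLST type*;size*;modify*\n"
         " UTF8\n"
         "211 End")
print(parse_feat_reply(reply))   # ['MDTM', 'SIZE', 'MLST', 'UTF8']
```

This is also why the extensions didn't need to "update" RFC 959 itself: feature discovery lets old clients and servers simply ignore what they don't know.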
You can see the current status of RFC 959 in the status section at the top. If some later edition of the standard had replaced RFC 959, then there would be an entry "Obsoleted by" like there is an "Obsoletes" line for RFC 765, the previous version of the standard. However, other documents have since "Updated" the document (you could think of it as patches) and the documents are listed in the status section as "Updated by". There have also been a number of independent extensions that did not need to "update" the standard, like RFC 2428. There is no reliably maintained index of such extension specifications besides the general RFC Index maintained by the RFC Editor.
RFC 822, RFC 3696 and others specify email address formats and how applications should validate them. Unfortunately, in practice virtually nobody adheres to them; most developers invent a regex on the fly or copy-paste one from dubious sources to validate their users' email addresses. As a result, many web services that require email addresses, often as the primary identity of their users, accept only a very limited subset of the addresses the RFCs actually allow.
So, can anything be said about the current state of what is generally considered a "web safe" email address? Is there some common subset that has crystallized over time that's accepted by most services? What's the standard for the HTML 5 email input type, which will hopefully eventually emerge as the default quick front-end validity check for emails?
Please note that I'm not asking what should be done to validate email addresses. Ideally validation should consist of light front-end validation which allows every possible address and possibly some false-positives, followed by a callback validation with the actual email server on the backend. I'm asking instead whether there is any sort of consensus on what the current implementations in the wild regard as valid. If I were to create a regex to validate email addresses (which I'm not, but humor me), what should that be to roughly match what everyone else does? If I were to create a new email address for myself on my own server, what safe subset should I stick to in order to be able to use that address at most web sites?
You pretty much answered your own question in your first paragraph.
TL;DR there is no consensus.
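That said, the WHATWG HTML spec does publish the regular expression browsers are supposed to apply to input type=email, and it is probably the closest thing to a de-facto baseline. A sketch using that pattern (reproduced here as an assumption — double-check it against the current spec text before relying on it):

```python
import re

# Pattern from the WHATWG HTML spec's definition of a "valid email
# address" for <input type="email"> -- a deliberately pragmatic
# subset of what the email RFCs allow (no quoted local parts,
# no comments, no address literals).
HTML5_EMAIL = re.compile(
    r"^[a-zA-Z0-9.!#$%&'*+/=?^_`{|}~-]+"
    r"@[a-zA-Z0-9](?:[a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?"
    r"(?:\.[a-zA-Z0-9](?:[a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?)*$"
)

def is_web_safe_email(addr):
    return HTML5_EMAIL.match(addr) is not None
```

Sticking to addresses this pattern accepts — unquoted ASCII local part, LDH domain labels — is about as "web safe" as you can get today, even though the RFCs allow far more.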