I've got (proprietary) output from a piece of software that I need to parse. Sadly, it contains unescaped user names, and I'm scratching my head trying to figure out whether or not I can describe the files I need to parse using a BNF (or EBNF or ABNF).
The problem, oversimplified (it's really just an example), may look like this:
(data) ::= <username>
<username> ::= (other type of data)
And in some cases, instead of appearing on the left or on the right, the username can also appear in the middle of a line.
The problem is that the username is unescaped and there are not enough restrictions on user names (they're printable ASCII, max 20 chars, and they can't contain line breaks). So "=" would be a perfectly valid username, for example. And so would "= 1 = john = 2" (because users, at sign-up, were allowed to choose any user name they wanted, and these appear unescaped in the output I've got).
I'm asking because my parser choked on some very creative usernames (once again, not in my control; they're "weird" and I need to deal with it) and I cannot find an easy way to deal with this. Also note that I do not know the user names in advance (for example, I don't have access to a database that would contain all the user names that the users created).
So are unrestricted and unescaped user names incompatible with BNF?
P.S.: go easy on me if I made mistakes, it's my first post on Stack Overflow :)
BNF doesn't "care" about user names per se. It works at the token level. If you define a username token, you can describe a grammar using BNF based on it.
Your problem should be solved on the lexer level. The lexer should be smart enough to recognize user names, even when they're not escaped, and pass username tokens to the parser.
In theory you could describe all kinds of user names with a grammar, but this heavily depends on the other things in your language. Is = a valid token in its own right? If it is, how can you tell apart a username that has = in it? I think you'll have to describe the rest of the rules and valid tokens in your language to get a fuller answer here.
It might be possible to work by recognising things that are not usernames and then declaring everything else a username, even if this means parsing from right to left instead of left to right or doing something equally eccentric.
It may be worth looking to see if your input is actually ambiguous: can you find two different situations that lead to identical output being generated? If so, you need to go back and get requirements for which of them to favour, or what sort of error to produce, or whatever. If not, the reason why not might help you work out what your parser or lexer or whatever needs to do.
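To make the "recognize everything that isn't a username" idea concrete, here is a minimal sketch in Python. The line format ("<timestamp> <username> logged in from <ip>") is made up purely for illustration; the point is to anchor the pattern on the fixed parts of the line and let the username group lazily soak up whatever sits between them, whatever characters it contains:

import re

# Hypothetical line: "2013-04-02 12:00:01 = 1 = john = 2 logged in from 10.0.0.1"
# The timestamp prefix and the "logged in from <ip>" suffix are fixed structure;
# everything between them is treated as the (unescaped) username.
LINE = re.compile(
    r"^(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) "    # fixed prefix
    r"(?P<username>.{1,20}?) "                          # anything, up to 20 chars, lazily
    r"logged in from (?P<ip>\d{1,3}(?:\.\d{1,3}){3})$"  # fixed suffix, anchored at end of line
)

def parse_line(line):
    m = LINE.match(line)
    if m is None:
        raise ValueError("line does not match the expected structure: %r" % line)
    return m.group("ts"), m.group("username"), m.group("ip")

print(parse_line("2013-04-02 12:00:01 = 1 = john = 2 logged in from 10.0.0.1"))
# ('2013-04-02 12:00:01', '= 1 = john = 2', '10.0.0.1')

Note that this only works because the fixed structure around the username pins it down; if a username could itself contain something that looks like the fixed suffix, you would hit exactly the ambiguity described above, and no grammar or regex can resolve that for you.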
Related
I am writing an app that has a sign-up form. This article made me doubt everything I knew about human names. My question is: does a person's name necessarily have positive length? Or can I validate names in this way and be confident that I have not denied anyone their identity?
P.S.: one might ask why I am validating at all. The answer is that this is for a school project and proper validation is a part of the mark. The article above proves that a person's name can be pretty much any string of positive length, but I don't know if zero length is OK.
With all types of programming, you have to draw a distinction between what is meaningful in the real world, and what is meaningful for your software solution.
How the data is to be used will dictate what type of validation is required.
For instance, if your software interfaces with a government API, and the government API requires a first name and surname, you should do the same.
If you're interacting with bank accounts, you may have a single string which represents the account name, which may or may not be a human name, but may have other constraints, such as length.
If the name is only to be used for display purposes, maybe there is no point in capturing the name at all, and instead you should capture a preferred display name (which doesn't needlessly assume a certain number of name components).
When writing software, you should aim to make as few assumptions as possible, unless avoiding those assumptions would increase the complexity of your software solution. If the software requires people to have non-empty names, then you should validate at the border that this is true.
In addition, if you were my student, you would have already lost marks for conflating null and an empty string. In this instance, null would represent that you lack data about the name, and an empty string would indicate that the user has specified that their name is empty.
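For example, a minimal sketch in Python (assuming the form layer hands you either a string or null/None; the exact error handling is up to you):

def validate_name(name):
    # null/None: we have no data about the name at all.
    if name is None:
        raise ValueError("name is missing")
    # Empty string: the user explicitly submitted an empty name.
    if len(name) == 0:
        raise ValueError("name must not be empty")
    return name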
Also, if you decide not to validate something, you should at least leave a comment to indicate that you thought of it. If you do something unusual, it's possible a future developer may come along and fix the "bug". In addition, this helps you avoid losing marks.
I do not want to be too strict, as there may be thousands of possible characters in a first name
Normal English letters, accented letters, non-English letters, numbers (??), common punctuation symbols
e.g.
D'souza
D'Anza
M.D. Shah (dots and space)
Al-Rashid
Jatin "Tom" Shah
However, I do not want to accept HTML tags, semicolons, etc.
Is there a list of such characters which are absolutely bad from a web application perspective?
I can then use RegEx to blacklist these characters
Background on my application
It is a Java Servlet-JSP based web app.
Tomcat on Linux with MySQL (and sometimes MongoDB) as a backend
What I have tried so far
// Reject first names that contain any blacklisted character
String regex = "[^<>~##$%;]*";
if (!fname.matches(regex)) {
    throw new InputValidationException("Invalid FirstName");
}
My question is more about the design than the coding ... I am looking for an exhaustive (well, to a good degree of exhaustiveness) list of characters that I should blacklist
A better approach is to accept anything anyone wants to enter and then escape any problematic characters in the context where they might cause a problem.
For instance, there's no reason to prohibit people from using <i> in their names (although it might be highly unlikely that it's a legit name), and it only poses a potential problem (XSS) when you are generating HTML for your users. Similarly, disallowing quotes, semicolons, etc. only makes sense in other scenarios (SQL queries, etc.). If the rules are different in different places and you want to sanitize input, then you need all the rules in the same place (what about whitespace? Are you going to create filenames including the user's first name? If so, maybe you'll have to add that to the blacklist).
Assume that you are going to get it wrong in at least one case: maybe there is something you haven't considered for your first implementation, so you go back and add the new item(s) to your blacklist. You still have users who have already registered with tainted data. So, you can either run through your entire database sanitizing the data (which could take a very very long time), or you can just do what you really have to do anyway: sanitize data as it is being presented for the current medium. That way, you only have to manage the sanitization at the relevant points (no need to protect HTML output from SQL injection attacks) and it will work for all your data, not just data you collect after you implement your blacklist.
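To illustrate the "escape in context" idea, here is a sketch using only Python's standard library; in the question's Java/JSP stack the equivalents would be JSTL's <c:out> for HTML output and PreparedStatement for SQL:

import html
import sqlite3  # stands in for whichever database you actually use

first_name = 'Jatin "Tom" <i>Shah</i>'  # accepted as-is at input time

# Escape for the HTML context only at the moment you generate HTML:
print("<td>%s</td>" % html.escape(first_name))
# <td>Jatin &quot;Tom&quot; &lt;i&gt;Shah&lt;/i&gt;</td>

# For SQL, don't escape by hand at all - pass the value as a parameter:
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (first_name TEXT)")
conn.execute("INSERT INTO users (first_name) VALUES (?)", (first_name,))

The design point is that each output medium has its own escaping rules, so the escaping has to live next to the output, not next to the input.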
I am using Topsy. It returns the titles of the highest-ranking articles on my website as an RSS file which contains each post title with its link. For now I am only taking the post title and trying to search my MySQL database using the following function, like this:
get_post_by_title($postTitle,'post');
But the problem is that Topsy also adds some special characters in the RSS file, for example " ' " is replaced with " ’ ". Because of this, the get_post_by_title() function does not return the post by its title.
EDIT: It returns a post title like this:
iPad Applications In Bloom’s Taxonomy NEXT
Here the single quote is the special character.
Please help me. Thanks
First let's clear up a misconception: that character in your example is not a "special" character. It is Unicode code point U+2019, "RIGHT SINGLE QUOTATION MARK." Its HTML entity reference is &rsquo;. It's an ordinary character - it just happens to be an ordinary character that has no representation in ASCII.

Before getting to an answer to your specific question, I need to tell you to read Joel Spolsky's article "The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!)" - it is just what it says on the tin, and unless you absorb at least a little more knowledge about Unicode, you will keep running into problems like this. Don't fret too much: everyone runs into problems like this until they learn how to deal with text. Unicode isn't "hard" so much as it is "prone to exposing unconscious assumptions we make about how text works." †
Now, to your question.
If I'm reading you right, what's happening is that you have posts with non-ASCII characters in their titles, such as ’, which aren't showing up when you search for them with get_post_by_title() (it seems like you're using something similar to the accepted answer on this question - is that right?). There are two paths to a solution: store the titles in a format that's easier for you to search, or use a searching method that can find non-ASCII characters.
Storing the titles differently would require that you run them through PHP's built-in htmlentities() function before storing them in your WordPress DB - you would also want to make sure that you convert characters with no HTML entity equivalent to '\xNN' form, and to make sure that your DB's collation/charset is set to UTF-8 or another Unicode-aware encoding. This will be a nontrivial amount of effort. ‡
Using a different searching method doesn't require tinkering with your DB or digging into WordPress internals, but it does require very careful fiddling with the search string. You'll need to either use the exact character you're looking for in a search, expressed as a '\xNN' character reference if necessary, or use wildcards carefully in the search.
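To illustrate the second path (the site runs PHP/WordPress, so treat this Python snippet purely as a sketch of the idea): either search with the exact U+2019 character against a UTF-8 table, or normalize the "curly" punctuation on both sides before comparing.

# -*- coding: utf-8 -*-
title_from_rss = u"iPad Applications In Bloom\u2019s Taxonomy"

def normalize_quotes(s):
    # Map typographic quotation marks to their ASCII look-alikes.
    return (s.replace(u"\u2019", "'")    # right single quotation mark
             .replace(u"\u2018", "'")    # left single quotation mark
             .replace(u"\u201c", '"')    # left double quotation mark
             .replace(u"\u201d", '"'))   # right double quotation mark

print(normalize_quotes(title_from_rss))  # iPad Applications In Bloom's Taxonomy

Whichever direction you normalize in, the stored titles and the search string have to agree, which is really the heart of the problem.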
Either way, good luck. It may be possible to offer more specific advice if more of your code is visible.
†: By the way, your life with regard to Unicode will also get much, much easier if you use better languages than PHP and better databases than MySQL. WordPress is inextricably tied to PHP and MySQL: PHP & MySQL are both woefully, horrendously, hilariously bad at handling Unicode issues correctly. Your life as a programmer will get better if you extirpate PHP & MySQL from it.
‡: Seriously, PHP is atrociously bad at this, and MySQL is in a shoelaces-tied-together state of fumbling. Avoid them.
remove from wp-config.php
//define('DB_CHARSET', 'utf8');
//define('DB_COLLATE','utf8_unicode_ci');
You can easily remove special characters using preg_replace, see this post -> http://code-tricks.com/filter-non-ascii-characters-using-php/
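The linked post does that filtering with preg_replace in PHP; the same idea as a rough Python sketch (note that stripping characters loses information, unlike normalizing or escaping them):

import re

def strip_non_ascii(s):
    # Drop every character outside the printable ASCII range.
    return re.sub(r"[^\x20-\x7e]", "", s)

print(strip_non_ascii(u"Bloom\u2019s Taxonomy"))  # Blooms Taxonomy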
I'm doing an audit of a system, which the developers insist is SQL injection proof. This they achieve by stripping out the single-quotes in the login form - but the code behind is not parameterized; it's still using literal SQL like so:
username = username.Replace("'", "");
var sql = "select * from user where username = '" + username + "'";
Is this really secure? Is there another way of inserting a single quote, perhaps by using an escape character? The DB in use is Oracle 10g.
Maybe you can also fail them because not using bind variables will have a very negative impact on performance.
A few tips:
1- The ' character is not the only way to quote a string in Oracle. Try this:
select q'#Oracle's quote operator#' from dual;
2- Another tip, from the "Innocent Code" book: don't massage invalid input to make it valid (by escaping or removing). Read the relevant section of the book for some very interesting examples. A summary of the rules is here.
Have a look at the testing guide here: http://www.owasp.org/index.php/Main_Page. That should give you more devious test scenarios, perhaps enough to prompt a reassessment of the SQL coding strategy :-)
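To show what the bind-variable approach mentioned above looks like, here is a minimal sketch with the python-oracledb driver (the question's app is in another language, and the connection details and table name here are placeholders):

import oracledb  # python-oracledb driver

conn = oracledb.connect(user="app", password="secret", dsn="dbhost/orclpdb1")
cur = conn.cursor()

# The username is passed as a bind variable, never concatenated into the SQL
# text, so a value like "x' or '1'='1" is just data. The statement text also
# stays constant, which lets Oracle reuse the parsed cursor (the performance
# point made above).
cur.execute(
    "select * from app_users where username = :username",
    {"username": "O'Brian"},
)
rows = cur.fetchall()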
No, it is not secure. SQL injection doesn't require a single-quote character to succeed. You can use AND, OR, JOIN, etc. to make it happen. For example, suppose a web application has a URL like this: http://www.example.com/news.php?id=100.
You can do many things if the ID parameter is not properly validated. For example, if its type is not checked, you could simply use something like this: ?id=100 AND INSERT INTO NEWS (id, ...) VALUES (...). The same is valid for JOIN, etc. I won't teach how to exploit it because not all readers have good intentions like you appear to have. So, for those planning to use a simple REPLACE, be aware that this WILL NOT prevent an attack.
So, no one can have a name like O'Brian in their system?
The single quote check won't help if the parameter is numeric - then 1; DROP TABLE user;-- would cause some trouble =)
I wonder how they handle dates...
If the mechanism for executing queries got smart like PHP, and limited queries to only ever run one query, then there shouldn't be an issue with injection attacks...
What is the client language? That is, we'd have to be sure exactly what the datatype of username is and what the Replace method does with regard to that datatype. Also how the actual concatenation works for that datatype. There may be some character set translation that would translate some quote-like character in UTF-8 to a "regular" quote.
For the very simple example you show it should just work, but the performance will be awful (as per Thilo's comment). You'd need to look at the options for cursor_sharing.
For this SQL
select * from user where username = '[blah]'
As long as [blah] didn't include a single quote, it should be interpreted as a single CHAR value. If the string was more than 4000 bytes, it would raise an error, and I'd be interested to see how that was handled. Similarly an empty string, or one consisting solely of single quotes. Control characters (end-of-file, for example) might also give it some issues, but that might depend on whether they can be entered at the front end.
For a username, it would be legitimate to limit the character set to alphanumerics, and possibly a limited set of punctuation (dot and underscore, perhaps). So if you did take a character-filtering approach, I'd prefer to see a whitelist of acceptable characters rather than blacklisting single quotes, control characters, etc.
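A whitelist check along those lines might look like this (a sketch in Python; the exact set of allowed characters and the length limit are assumptions to adjust to your own rules):

import re

# Allow only alphanumerics, dot and underscore, 1 to 20 characters.
VALID_USERNAME = re.compile(r"^[A-Za-z0-9._]{1,20}$")

def is_valid_username(username):
    return VALID_USERNAME.match(username) is not None

print(is_valid_username("john_doe"))       # True
print(is_valid_username("x' or '1'='1"))   # False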
In summary, as a general approach it is poor security, and it negatively impacts performance. In particular cases, it might not (probably won't?) expose any vulnerabilities. But you'd want to do a LOT of testing to be sure it doesn't.
I have a python script that receives text messages from users, and processes them as a query. However, some users have signatures automatically appended to their messages, and the script incorrectly treats them as actual content. What's the best programmatic way to recognize and remove these signatures?
(I'd prefer Python, but I'm fine with any other language too, as well as just saying it in pseudocode.)
If the signatures are appended to the body of the message such that they're actually part of the body text, then there are only two ways to remove them:
Heuristics, such as "anything following three dashes must be a signature". These may be effective if you spend some time tuning them (see the sketch after this list).
A classifier. This is a lot of work to set up, and requires that you "train" it by marking some message parts as signatures. These can also be very effective, but like heuristics will never work 100% of the time.
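A sketch of the heuristic approach in Python, assuming the delimiter conventions mentioned in these answers (a line consisting of two or three dashes, possibly followed by a space, marks the start of the signature):

import re

# Matches a signature delimiter line such as "--", "-- " or "---".
SIG_DELIMITER = re.compile(r"^-{2,3}\s*$")

def strip_signature(message):
    body_lines = []
    for line in message.splitlines():
        if SIG_DELIMITER.match(line):
            break  # everything from here on is treated as signature
        body_lines.append(line)
    return "\n".join(body_lines).rstrip()

msg = "what time is the game?\n--\nSent from my phone\nJohn Smith"
print(strip_signature(msg))  # what time is the game?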
If the signature always follows a specific pattern, you should be able to just use a regular expression to trim it off.
However, if the user can set up their signature any way they wish, and there are no leading characters (i.e., -- at the beginning), this is going to be very difficult. The only reliable way to do this would be to know the content of the signature for each user in advance so you can strip it out. Imagine a worst-case scenario: somebody could always send a blank message with a signature that was a fully valid "query". There'd be no way for the script to differentiate that from a "query" message with no signature.