This produces newlines:
%(https://api.foursquare.com/v2/venues/search
?ll=80.914207,%2030.328466&radius=200
&v=20161201&m=foursquare&categoryId=4d4b7105d754a06374d81259
&intent=browse)
This produces spaces:
"https://api.foursquare.com/v2/venues/search
?ll=80.914207,%2030.328466&radius=200
&v=20161201&m=foursquare&categoryId=4d4b7105d754a06374d81259
&intent=browse"
This produces one string:
"https://api.foursquare.com/v2/venues/search"\
"?ll=80.914207,%2030.328466&radius=200"\
"&v=20161201&m=foursquare&categoryId=4d4b7105d754a06374d81259"\
"&intent=browse"
When I want to separate one string on multiple lines to read it better on screen, is it preferred to use the escape character?
My IDE complains that I should use single quoted strings rather than double quoted strings since there is no interpolation.
Normally you'd put something like this on one line, readability be damned, because the alternatives are going to be problematic. There's no way of declaring a string with whitespace ignored, but you can do this:
url = %w[ https://api.foursquare.com/v2/venues/search
?ll=80.914207,%2030.328466&radius=200
&v=20161201&m=foursquare&categoryId=4d4b7105d754a06374d81259
&intent=browse
].join
Where you explicitly remove the whitespace.
I'd actually suggest avoiding this whole mess by properly composing this URI:
uri = url("https://api.foursquare.com/v2/venues/search",
ll: [ 80.914207,30.328466 ],
radius: 200,
v: 20161201,
m: 'foursquare',
categoryId: '4d4b7105d754a06374d81259',
intent: 'browse'
)
Where you have some kind of helper function that properly encodes that using URI or other tools. By keeping your parameters as data, not as encoded strings, for as long as possible, you make it easier to spot bugs as well as to make last-second changes to them.
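A minimal sketch of such a helper, assuming Ruby's standard URI library (the name url and its signature are placeholders, not an existing API):
require 'uri'

# Hypothetical helper: build a URL from a base and a hash of query parameters,
# letting the standard library do the encoding.
def url(base, params)
  uri = URI(base)
  uri.query = URI.encode_www_form(params)
  uri.to_s
end

url('https://api.foursquare.com/v2/venues/search',
    ll: [80.914207, 30.328466].join(','),
    radius: 200, v: 20161201, m: 'foursquare',
    categoryId: '4d4b7105d754a06374d81259', intent: 'browse')
# => "https://api.foursquare.com/v2/venues/search?ll=80.914207%2C30.328466&radius=200&..."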
The answer by @tadman definitely suggests the proper way to do it; I’ll post another approach just for the sake of diversity:
query = "https://api.foursquare.com/v2/venues/search"
"?ll=80.914207,%2030.328466&radius=200"
"&v=20161201&m=foursquare&categoryId=4d4b7105d754a06374d81259"
"&intent=browse"
Yes, without any explicit concatenation operator: four quoted strings in a row, joined by the parser (adjacent string literals are concatenated at parse time). This example may not work typed line by line into irb/pry (due to its REPL nature), but it is the most efficient way to concatenate strings in Ruby, since no intermediate strings are produced at runtime.
Contrived example to test in pry/irb:
value = "a" "b" "c" "d"
I am trying to grasp the concept of Regular Expressions but seem to be missing something.
I want to ensure that someone enters a string that ends with .wav in a field. Should be a pretty simple Regular Expression.
I've tried this...
[RegularExpression(@"$.wav")]
but it seems to be incorrect. Any help is appreciated. Thanks!
$ is the anchor for the end of the string, so $.wav doesn't make any sense. You can't have any characters after the end of the string. Also, . has a special meaning for regex (it just means 'any character') so you need to escape it.
Try writing
\.wav$
If that doesn't work, try
.*\.wav$
(It depends on if the RegularExpression attribute wants to match the whole string, or just a part of it. .* means 'any character, 0 or more times')
Another thing you should consider is what to do with extra whitespace in the field. Users have a terrible habit of adding extra whitespace to inputs; it's why the various .Trim() functions are so important. Here, the RegularExpressionAttribute might be evaluated before you can trim the input, so you might want to write this:
.*\.wav[\s]*$
The [\s]* section means 'any whitespace character (tabs, space, linebreak, etc) 0 or more times'.
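A quick sanity check of that pattern, shown here in Ruby since the regex syntax is the same for this simple case:
pattern = /.*\.wav\s*$/
"recording.wav"   =~ pattern  # => 0   (matches)
"recording.wav  " =~ pattern  # => 0   (trailing whitespace tolerated)
"recording.mp3"   =~ pattern  # => nil (no match)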
You should read a tutorial on regex. It's not so hard to understand for simple problems like this. When I was learning I found this site pretty handy: http://www.regular-expressions.info/
How can I match a balanced pair of delimiters not escaped by backslash (that is itself not escaped by a backslash) (without the need to consider nesting)? For example with backticks, I tried this, but the escaped backtick is not working as escaped.
regex = /(?!<\\)`(.*?)(?!<\\)`/
"hello `how\` are` you"
# => $1: "how\\"
# expected "how\\` are"
And the regex above does not consider a backslash that is escaped by a backslash and is in front of a backtick, but I would like to.
How does StackOverflow do this?
The purpose of this is not much complicated. I have documentation texts, which include the backtick notation for inline code just like StackOverflow, and I want to display that in an HTML file with the inline code decorated with some span material. There would be no nesting, but escaped backticks or escaped backslashes may appear anywhere.
Lookbehind is the first thing everyone thinks of for this kind of problem, but it's the wrong tool, even in flavors like .NET that support unrestricted lookbehinds. You can hack something up, but it's going to be ugly, even in .NET. Here's a better way:
`[^`\\]*(\\.[^`\\]*)*`
The first part starts from the opening delimiter and gobbles up anything that's not the delimiter or a backslash. If the next character is a backslash, it consumes that and the character following it, whatever it may be. It could be the delimiter character, another backslash, or anything else, it doesn't matter.
It repeats those steps as many times as necessary, and when neither [^`\\] nor \\. can match, the next character must be the closing delimiter. Or the end of the string, but I'm assuming the input is well formed. But if it's not well formed, this regex will fail very quickly. I mention that because of this other approach I see a lot:
`(?:[^`\\]+|\\.)*`
This works fine on well-formed input, but what happens if you remove the last backtick from your sample input?
"hello `how\` are you"
According to RegexBuddy, after encountering the first backtick, this regex performed 9,252 distinct operations (or steps) before it could give up and report failure; mine failed in ten steps.
EDIT To extract just the part inside the delimiters, wrap that part in a capturing group. You'll still have to remove the backslashes manually.
`([^`\\]*(?:\\.[^`\\]*)*)`
I also changed the other group to non-capturing, which I should have done from the start. I don't avoid capturing religiously, but if you are using them to capture stuff, any other groups you use should be non-capturing.
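For example, in Ruby (which the question uses), with a gsub to strip the escapes afterwards:
text = 'hello `how\` are` you'
m = text.match(/`([^`\\]*(?:\\.[^`\\]*)*)`/)
m[1]                      # => "how\\` are"
m[1].gsub(/\\(.)/, '\1')  # => "how` are"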
EDIT I think I've been reading too much into the question. On StackOverflow, if you want to include literal backticks in an inline-code segment or a comment, you use three backticks as the delimiter, not just one. Since there's no need to escape backticks, you can ignore backslashes as well. Your regex could turn out to be as simple as this:
```(.*?)```
Dealing with the possibility of false delimiters, you use the same basic technique:
```([^`]*(?:`(?!``)[^`]*)*)```
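An illustrative Ruby one-liner (the sample string is my own):
text = "Use ```a ` inside``` here"
text[/```([^`]*(?:`(?!``)[^`]*)*)```/, 1]  # => "a ` inside"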
Is this what you're after?
By the way, this answer doesn't contradict @nneonneo's comment above. This answer doesn't consider the context in which the match is taking place. Is it in the source code of a program or web page? If it is, did the match occur inside a comment or a string literal? How do I even know the first backtick I found wasn't escaped? Regexes don't know anything about the context in which they operate; that's what parsers are for.
If you don't need nesting, regexes can indeed be a proper tool. Lexers of programming languages, for instance, use regexes to tokenize strings, and strings usually allow their own delimiters as an escaped content. Anything more complicated than that will probably need a full-blown parser though.
The "general formula" is to match an escaped character (\\.) or any character that's valid as content but don't need to be escaped ([^{list of invalid chars}]). A "naïve" solution would be joining them with or (|), but for a more efficient variant see #AlanMoore's answer.
The complete example is shown below, in two variants: the first assumes than backslashes should only be used for escaping inside the string, the second assumes that a backslash anywhere in the text escapes the next character.
`((?:\\.|[^`\\])*)`
(?:\\.|[^`\\])*`((?:\\.|[^`\\])*)`
Working examples here and here. However, as @nneonneo commented (and I endorsed), regexes are not meant to do a complete parse, so you'd better keep things simple if you want them to work out right (do you want to find a token in the text, or do you want to delimit it already knowing where it starts? The answer to that question matters when deciding which strategy works best for your case).
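A small Ruby illustration of the difference between the two variants, using a sample string of my own with an escaped backtick before the real opening delimiter:
text = 'a \` b `code` c'
text[/`((?:\\.|[^`\\])*)`/, 1]                 # => " b "  (the escaped backtick is taken as an opener)
text[/(?:\\.|[^`\\])*`((?:\\.|[^`\\])*)`/, 1]  # => "code" (the escaped backtick is skipped first)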
For coding reasons which would horrify you (I'm too embarrassed to say), I need to store a number of text items in a single string.
I will delimit them using a character.
Which character is best to use for this, i.e. which character is the least likely to appear in the text? Must be printable and probably less than 128 in ASCII to avoid locale issues.
I would choose "Unit Separator" ASCII code "US": ASCII 31 (0x1F)
In the old, old days, most things were done serially, without random access. This meant that a few control codes were embedded into ASCII.
ASCII 28 (0x1C) File Separator - Used to indicate separation between files on a data input stream.
ASCII 29 (0x1D) Group Separator - Used to indicate separation between tables on a data input stream (called groups back then).
ASCII 30 (0x1E) Record Separator - Used to indicate separation between records within a table (within a group). These roughly map to a tuple in modern nomenclature.
ASCII 31 (0x1F) Unit Separator - Used to indicate separation between units within a record. These roughly map to fields in modern nomenclature.
Unit Separator is in ASCII, and there is Unicode support for displaying it (typically the letters "US" in a single glyph), but many fonts don't display it.
If you must display it, I would recommend displaying it in-application, after it was parsed into fields.
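A minimal Ruby sketch of joining and splitting on the unit separator:
US = "\x1F"  # ASCII 31, Unit Separator

fields = ["first", "second, with comma", "third | with pipe"]
packed = fields.join(US)
packed.split(US)  # => ["first", "second, with comma", "third | with pipe"]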
Assuming for some embarrassing reason you can't use CSV I'd say go with the data. Take some sample data, and do a simple character count for each value 0-127. Choose one of the ones which doesn't occur. If there is too much choice get a bigger data set. It won't take much time to write, and you'll get the answer best for you.
The answer will be different for different problem domains: | (pipe) is common in shell scripts, ^ is common in math formulae, and the same is probably true for most other characters.
I personally think I'd go for | (pipe) if given a choice but going with real data is safest.
And whatever you do, make sure you've worked out an escaping scheme!
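A rough Ruby sketch of that character count (the file name and the printable range 0x20-0x7E are my assumptions):
sample = File.read("sample.txt")   # hypothetical sample of your real data

counts = Hash.new(0)
sample.each_byte { |b| counts[b] += 1 if b < 128 }

# Printable ASCII characters that never occur in the sample are candidates.
candidates = (0x20..0x7E).reject { |b| counts[b] > 0 }
puts candidates.map(&:chr).join(" ")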
When using different languages, this symbol: ¬
proved to be the best. However I'm still testing.
Probably | or ^ or ~. You could also combine two characters.
You said "printable", but that can include characters such as a tab (0x09) or form feed (0x0c). I almost always choose tabs rather than commas for delimited files, since commas can sometimes appear in text.
(Interestingly enough the ascii table has characters GS (0x1D), RS (0x1E), and US (0x1F) for group, record, and unit separators, whatever those are/were.)
If by "printable" you mean a character that a user could recognize and easily type in, I would go for the pipe | symbol first, with a few other weird characters (# or ~ or ^ or \, or backtick which I can't seem to enter here) as a possibility. These characters +=!$%&*()-'":;<>,.?/ seem like they would be more likely to occur in user input. As for underscore _ and hash # and the brackets {}[] I don't know.
How about you use a CSV style format? Characters can be escaped in a standard CSV format, and there's already a lot of parsers already written.
Can you use a pipe symbol? That's usually the next most common delimiter after comma or tab delimited strings. It's unlikely most text would contain a pipe, and ord('|') returns 124 for me, so that seems to fit your requirements.
For fast escaping I use stuff like this:
Say you want to concatenate str1, str2 and str3. What I do is:
delimitedStr = str1.Replace("#", "#a").Replace("|", "#p") + "|"
             + str2.Replace("#", "#a").Replace("|", "#p") + "|"
             + str3.Replace("#", "#a").Replace("|", "#p");
Then to retrieve the originals use:
splitStr = delimitedStr.Split("|".ToCharArray());
str1 = splitStr[0].Replace("#p", "|").Replace("#a", "#");
str2 = splitStr[1].Replace("#p", "|").Replace("#a", "#");
str3 = splitStr[2].Replace("#p", "|").Replace("#a", "#");
Note: the order of the Replace calls is important.
It's unbreakable and easy to implement.
Pipe for the win! |
We use ascii 0x7f which is pseudo-printable and hardly ever comes up in regular usage.
Well, it's going to depend on the nature of your text to some extent, but a vertical bar (0x7C) doesn't crop up in text very often.
I don't think I've ever seen an ampersand followed by a comma in natural text, but you can check the file first to see if it contains the delimiter, and if so, use an alternative. If you want to always be able to know that the delimiter you use will not cause a conflict, then do a loop checking the file for the delimiter you want, and if it exists, then double the string until the file no longer has a match. It doesn't matter if there are similar strings because your program will only look for exact delimiter matches.
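In Ruby, that check-and-double loop might look like this (a sketch; data.txt is a stand-in for your file):
text = File.read("data.txt")
delimiter = "|"
delimiter *= 2 while text.include?(delimiter)  # "|" -> "||" -> "||||" until no match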
This can be good or bad (usually bad) depending on the situation and language, but keep in mind that you can always Base64 encode the whole thing. You then don't have to worry about escaping and unescaping various patterns on each side, and you can simply separate and split strings based on a character which isn't used in your Base64 charset.
I have had to resort to this solution when faced with putting XML documents into XML properties/nodes. Properties can't have CDATA blocks in them at all, and nodes escaped as CDATA obviously cannot have further CDATA blocks inside that without breaking the structure.
CSV is probably a better idea for most situations, though.
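A small Ruby sketch of the Base64 idea; the pipe is safe as a separator here because it can never appear in Base64 output:
require 'base64'

items = ["first, with | pipe", "second\nwith newline", "third"]
packed = items.map { |s| Base64.strict_encode64(s) }.join("|")

unpacked = packed.split("|").map { |s| Base64.strict_decode64(s) }
unpacked == items  # => true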
Both pipe and caret are the obvious choices. I would note that if users are expected to type the entire response, caret is easier to find on any keyboard than is pipe.
I've used double pipe and double caret before. The idea of a non-printable character works if you're not hand-creating or modifying the file. For quick random-access storage and retrieval, fixed field widths are used instead: you don't even have to scan the file, you can pull a record directly by offset. This is how databases do some of their storage, although they also manage the space between records and so on, and it introduces the problem of a maximum data-element width. (In the old days an index attached a header defining the width and data type of each element; later, compression with character remapping was introduced, which can get a text file to about 1/8 of its size in transmission. Variable-length character encoding for the win.)
make it dynamic : )
announce your control characters in the file header
for example
delimiter: ~
escape: \
wrapline: $
width: 19
hello world~this i$
s \\just\\ a sampl$
e text~$someVar$~h$
ere is some \~\~ma$
rkdown strikethrou$
gh\~\~ text
would give the strings
hello world
this is \just\ a sample text
$someVar$
here is some ~~markdown strikethrough~~ text
I have implemented something similar: a plaintar text container format, to escape and wrap UTF-16 text in ASCII, as an alternative to MIME multipart messages.
See https://github.com/milahu/live-diff-html-editor
Which style of Ruby string quoting do you favour? Up until now I've always used 'single quotes' unless the string contains certain escape sequences or interpolation, in which case I obviously have to use "double quotes".
However, is there really any reason not to just use double quoted strings everywhere?
Don't use double quotes if you have to escape them. And don't fall into the "single vs double quotes" trap. Ruby has excellent support for arbitrary delimiters for string literals:
Mirror of Site - https://web.archive.org/web/20160310224440/http://rors.org/2008/10/26/dont-escape-in-strings
Original Site -
http://rors.org/2008/10/26/dont-escape-in-strings
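For illustration (my own example, not taken from the linked article): %q(...) behaves like single quotes and %Q(...) like double quotes, so strings full of quote characters need no escaping:
puts %q(it's a "quoted" word)             # it's a "quoted" word
puts %Q(she said "that's #{21 * 2} fine") # she said "that's 42 fine"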
I always use single quotes unless I need interpolation.
Why? It looks nicer. When you have a ton of stuff on the screen, lots of single quotes give you less "visual clutter" than lots of double quotes.
I'd like to note that this isn't something I deliberately decided to do, just something that I've 'evolved' over time in trying to achieve nicer looking code.
Occasionally I'll use %q or %Q if I need in-line quotes. I've only ever used heredocs maybe once or twice.
Like many programmers, I try to be as specific as is practical. This means that I try to make the compiler do as little work as possible by having my code as simple as possible. So for strings, I use the simplest method that suffices for my needs for that string.
<<END
For strings containing multiple newlines,
particularly when the string is going to
be output to the screen (and thus formatting
matters), I use heredocs.
END
%q[Because I strongly dislike backslash quoting when unnecessary, I use %Q or %q
for strings containing ' or " characters (usually with square braces, because they
happen to be the easiest to type and least likely to appear in the text inside).]
"For strings needing interpretation, I use %s."%['double quotes']
'For the most common case, needing none of the above, I use single quotes.'
My first simple test of the quality of syntax highlighting provided by a program is to see how well it handles all methods of quoting.
I use single quotes unless I need interpolation. The argument about it being troublesome to change later when you need interpolation swings in the other direction, too: you have to change from double to single when you find that there was a # or a \ in your string that caused an escape you didn't intend.
The advantage of defaulting to single quotes is that, in a codebase which adopts this convention, the quote type acts as a visual cue as to whether to expect interpolated expressions or not. This is even more pronounced when your editor or IDE highlights the two string types differently.
I use %{.....} syntax for multi-line strings.
I usually use double quotes unless I specifically need to disable escaping/interpolation.
I see arguments for both:
For using mostly double quotes:
The GitHub Ruby style guide advocates always using double quotes.
It's easier to search for a string foobar by searching for "foobar" if you were consistent with quoting. However, I'm not. So I search for ['"]foobar['"] turning on regexps.
For using some combination of single and double quotes:
Know if you need to look for string interpolation.
Might be slightly faster (although so slight it wasn't enough to affect the github style guide).
I used to use single quotes until I knew I needed interpolation. Then I found that I was wasting a lot of time when I'd go back and have to change some single-quotes to double-quotes. Performance testing showed no measurable speed impact of using double-quotes, so I advocate always using double-quotes.
The only exception is when using sub/gsub with back-references in the replacement string. Then you should use single quotes, since it's simpler.
mystring.gsub( /(fo+)bar/, '\1baz' )   # single quotes: the back-reference needs just one backslash
mystring.gsub( /(fo+)bar/, "\\1baz" )  # double quotes: the backslash itself must be escaped
I use single quotes unless I need interpolation, or the string contains single quotes.
However, I just learned the arbitrary delimiter trick from Dejan's answer, and I think it's great. =)
Single quotes preserve the characters inside them literally, while double quotes evaluate escape sequences and interpolation. See the following example:
"Welcome #{#user.name} to App!"
Results:
Welcome Bhojendra to App!
But,
'Welcome #{@user.name} to App!'
Results:
Welcome #{#user.name} to App!