What is the meaning of the following line of code in Go?

var asciiSpace = [256]uint8{'\t': 1, '\n': 1, '\v': 1, '\f': 1, '\r': 1, ' ': 1}
How come we are allowed to have ": 1" in the code above, and what is the meaning of that?

asciiSpace is declared as an array of uint8 with indexes 0 .. 255 (i.e. the full byte range, which covers ASCII), and the values for the indexed elements are set to 1.
The array indexes are given as character literals ('\t', '\n', etc.), so the elements corresponding to the whitespace characters are set to 1; every element not listed defaults to 0.
My guess is you misinterpreted the sequence "index: value" in the composite literal.
A similar example is given in a (randomly chosen) Go Tutorial.
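The keyed-element syntax also means everything not listed defaults to zero, which is why a single lookup works as an is-whitespace test. A minimal, self-contained illustration (the print statements are just for demonstration):

```go
package main

import "fmt"

// asciiSpace maps every byte to 1 if it is an ASCII whitespace
// character and to 0 otherwise. Each "index: value" pair in the
// composite literal sets one element; all unlisted elements keep
// the zero value of uint8.
var asciiSpace = [256]uint8{'\t': 1, '\n': 1, '\v': 1, '\f': 1, '\r': 1, ' ': 1}

func main() {
	fmt.Println(asciiSpace[' ']) // space is whitespace: prints 1
	fmt.Println(asciiSpace['x']) // 'x' is not: prints 0
}
```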


Inequality comparison between strings in Oracle

SELECT LAST_NAME FROM Employees
WHERE last_name < 'King';
The book 'SQL Fundamentals I Exam Guide' says that for the comparison LAST_NAME < 'King' the following conversion occurs, assuming a US7ASCII database character set with AMERICAN NLS settings:
K + i + n + g = 75 + 105 + 110 + 103 = 393.
Then, for each row in the EMPLOYEES table, the LAST_NAME column is similarly converted to a numeric value. If this value is less than 393, the row is selected.
But when I execute the SELECT command above in SQL*Plus, it returns rows (for example 'Greenberg', 'Bernestein') that do not follow the rule mentioned in the book. Are there any settings I need to change to obtain rows that satisfy that rule?
This rule is certainly not valid. If it were, you could swap the characters around and still get the same result, 393. But character ordering matters when comparing words.
To get a single numeric value appropriate for comparison you would have to calculate like this:
K + i + n + g = ((75 × 256 + 105) × 256 + 110) × 256 + 103
But you would exceed the valid range of numeric values for long words. For 7-bit ASCII codes (strictly in the range 0 ... 127) you could also multiply by 128 instead of 256.
--
In reality, the values are compared one by one, i.e. (in pseudo code):
if last_name[0] ≠ 'K' then return last_name[0] < 'K'   -- 75
if last_name[1] ≠ 'i' then return last_name[1] < 'i'   -- 105
if last_name[2] ≠ 'n' then return last_name[2] < 'n'   -- 110
if last_name[3] ≠ 'g' then return last_name[3] < 'g'   -- 103
... where the comparisons stop at the first inequality encountered; if the end of one of the words is reached first, then the lengths of the words are compared.
In other words, the characters of the two words are compared character by character until two different characters are encountered. Then the comparison of these two characters yields the final result.
Take 'Kelvin' < 'King' as an example:
'K' < 'K' ==> false, the characters are equal, therefore continue
'e' < 'i' ==> true, the characters differ, therefore stop
final result = true
Another example, 'King' < 'Kelvin' (words swapped):
'K' < 'K' ==> false, the characters are equal, therefore continue
'i' < 'e' ==> false, the characters differ, therefore stop
final result = false
Another example, 'be' < 'begin':
'b' < 'b' ==> false, the characters are equal, therefore continue
'e' < 'e' ==> false, the characters are equal, therefore continue
end of first word reached, length('be') < length('begin') ==> true
final result = true
The actual comparison of two characters is performed by comparing their numeric values, as you have mentioned already.
If that is actually what the book says, the book is wildly and frighteningly incorrect. If we're talking about the Oracle Press book, I would strongly suspect that you're misreading the explanation because I am hard-pressed to imagine how that mistake could make it through without getting caught by the author, the editor, or a reviewer.
To compare two strings, you do exactly what you do when putting strings in alphabetical order by hand. The string "B" comes after the string "All My Data" and before the string "Changes Constantly". You take the first character of each string, look at its decimal representation ('A' is 65, 'B' is 66, and 'C' is 67), and order based on that. If there is a tie, say "All Data" versus "All Indexes", you move on to the second character, and so on, until you can break the tie: 'D' is 68, which is less than 'I' at 73, so "All Data" < "All Indexes".
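The character-by-character procedure described above can be sketched in Python (an illustration of the comparison rule only, not of Oracle's internals; less_than is a made-up helper name):

```python
def less_than(a: str, b: str) -> bool:
    """Lexicographic comparison: walk both strings in step, stop at the
    first pair of characters that differ; if one string runs out first
    (it is a prefix of the other), the shorter string sorts first."""
    for ca, cb in zip(a, b):
        if ca != cb:
            return ord(ca) < ord(cb)  # first differing character decides
    return len(a) < len(b)            # all shared characters were equal

print(less_than("Kelvin", "King"))     # True:  'e' (101) < 'i' (105)
print(less_than("King", "Kelvin"))     # False: 'i' > 'e'
print(less_than("be", "begin"))        # True:  'be' is a prefix of 'begin'
print(less_than("Greenberg", "King"))  # True:  'G' (71) < 'K' (75)
```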

Why do numeric string comparisons give unexpected results?

'10:' < '1:'
# => true
Can someone explain to me why the result in the above example is true? If I just compare '1:' and '2:' I get the expected result:
'1:' < '2:'
# => true
Strings are compared character by character.
When you compare 1: vs 2:, the comparison begins with 1 vs 2, and the comparison stops there with the expected result.
When you compare 1: vs 10:, the comparison begins with 1 vs 1, and since it is a tie, the comparison moves on to the next comparison, which is : vs 0, and the comparison stops there with the result that you have found surprising (given your expectation that the integers within the strings would be compared).
To do the comparison you expect, use to_i to convert both operands to integers.
It is a character-by-character comparison of the ASCII codes.
'10:' < '1:' compares 49 vs 49 (a tie), then 48 vs 58; since 48 < 58, the result is:
#=> true
'1:' < '2:' compares 49 vs 50; since 49 < 50, the result is:
#=> true
The check proceeds left to right and stops at the first pair of characters that differ.
The first character of each of your two strings is the same. And as Dave said in the comments, the second character of the first, '0', is less than ':', so the first string is less than the second.
Because the ASCII code for 0 is 48, which is smaller than the ASCII code for :, which is 58.
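A short illustration of the to_i fix suggested above (plain Ruby, nothing assumed beyond the strings from the question):

```ruby
a = '10:'
b = '1:'

# String comparison: '1' ties with '1', then '0' (48) vs ':' (58).
string_result = a < b              # => true

# Numeric comparison: to_i parses the leading digits and ignores the rest,
# so '10:' becomes 10 and '1:' becomes 1.
integer_result = a.to_i < b.to_i   # => false
```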

ruby special match variables confusion

This code produces the expected result:
def test_sub_is_like_find_and_replace
  assert_equal "one t-three", "one two-three".sub(/(t\w*)/) { $1[0, 1] }
end
I understand that $1 is a variable for the first match, but I am not clear what the [0,1] is, or why it takes out the last two letters of "two".
This is covered in the String#[] documentation, in particular:
str[start, length] → new_str or nil
So, $1[0, 1] means "take a slice of the string starting at character index 0, of length 1."
The [0,1] can be applied to any string to find 1 character starting at index position 0:
>> "Hello"[0,1]
=> "H"
Just for fun, something other than 0 and 1:
>> "Hello World"[3,5]
=> "lo Wo"
Starts at index position 3, takes 5 characters.
In your case
"two"[0, 1]
you take one character starting at index 0, namely "t". It looks like it removed the last two characters; in reality it kept only the first.

Fastest way to parse fixed length fields out of a byte array in scala?

In Scala, I receive a UDP message, and end up with a DatagramPacket whose buffer has Array[Byte] containing the message. This message, which is all ASCII characters, is entirely fixed length fields, some of them numbers, other single characters or strings. What is the fastest way to parse these fields out of the message data?
As an example, suppose my message has the following format:
2 bytes - message type, either "AB" or "PQ" or "XY"
1 byte - status, either a, b, c, f, j, r, p or 6
4 bytes - a 4-character name
1 byte - sign for value 1, either space or "-"
6 bytes - integer value 1, in ASCII, with leading spaces, eg. "  1234"
1 byte - sign for value 2
6 bytes - decimal value 2
so a message could look like
ABjTst1   5467- 23.87
Message type "AB", status "j", name "Tst1", value 1 is 5467 and value 2 is -23.87
What I have done so far is get an array message: Array[Byte], and then take slices from it,
such as
val msgType= new String(message.slice(0, 2))
val status = message(2).toChar
val name = new String(message.slice(3, 7))
val val1Sign = message(7).toChar
val val1= (new String(message.slice(8, 14)).trim.toInt * (if (val1Sign == '-') -1 else 1))
val val2Sign = message(14).toChar
val val2= (new String(message.slice(15, 21)).trim.toFloat * (if (val2Sign == '-') -1 else 1))
Of course, reused functionality, like parsing a number, would normally go in a function.
This technique is straightforward, but is there a better way to be doing this if speed is important?
Writing your own byte-array-to-primitive conversions would improve speed somewhat (if you're really that in need of speed), since it would avoid making an extra String object. Also, rather than slicing the array (which requires you to make another array), you should use the String constructor
String(byte[] bytes, int offset, int length)
which avoids making the extra copy.
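Putting both suggestions together, here is a sketch of the parser using the offset/length String constructor so no slice copies are made (offsets are taken from the question's code; the function name and tuple return type are just for illustration):

```scala
// Parse the fixed-length fields directly from the byte array.
// new String(bytes, offset, length) reads in place, so no
// intermediate Array[Byte] slices are allocated.
def parse(message: Array[Byte]): (String, Char, String, Int, Float) = {
  val msgType = new String(message, 0, 2)   // bytes 0-1
  val status  = message(2).toChar           // byte 2
  val name    = new String(message, 3, 4)   // bytes 3-6
  val val1    = new String(message, 8, 6).trim.toInt *
                (if (message(7).toChar == '-') -1 else 1)
  val val2    = new String(message, 15, 6).trim.toFloat *
                (if (message(14).toChar == '-') -1 else 1)
  (msgType, status, name, val1, val2)
}
```

For the example message, parse("ABjTst1   5467- 23.87".getBytes) yields ("AB", 'j', "Tst1", 5467, -23.87f).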
I don't have the data to make performance tests, but maybe you have? Did you try pattern matching, with a precompiled pattern?
The numbers in the comment enumerate the opening parens, which correspond to the groups:
// 12 3 4 5 6 7 8 9 10
val pattern = Pattern.compile ("((AB)|(PQ)|(XY))([abcfjrp6])(.{4})([- ])( {0,6}[0-9]{0,6})([- ])([ 0-9.]{1,6})")
def msplit (message: String) = {
  val matcher = pattern.matcher (message)
  if (matcher.find ())
    List (1, 5, 6, 7, 8, 9, 10).foreach (g => println (matcher.group(g)))
}
val s = "ABjTst1 5467- 23.87"
msplit (s)
Pattern/Matcher is of course Javaland - maybe you can find a more Scala-way solution with "...".r
Result:
AB
j
Tst1
5467
-
23.87
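For completeness, here is a sketch of the hinted "...".r route, using Scala's Regex as an extractor in a match expression (the pattern below simplifies the grouping to one capture per field, which is an assumption, not a tested production pattern):

```scala
// One capturing group per field; Regex extractors require the
// pattern to match the entire string.
val pattern =
  """(AB|PQ|XY)([abcfjrp6])(.{4})([- ])( {0,6}[0-9]{1,6})([- ])([ 0-9.]{1,6})""".r

"ABjTst1   5467- 23.87" match {
  case pattern(msgType, status, name, sign1, value1, sign2, value2) =>
    println(s"$msgType $status $name ${sign1.trim}${value1.trim} ${sign2.trim}${value2.trim}")
  case _ =>
    println("no match")
}
```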

Sorting in Lua, counting number of items

Two quick questions (I hope...) with the following code. The script below checks if a number is prime, and if not, returns all the factors for that number, otherwise it just returns that the number prime. Pay no attention to the zs. stuff in the script, for that is client specific and has no bearing on script functionality.
The script itself works almost wonderfully, except for two minor details - the first being that the factor list isn't returned sorted... that is, for 24 it returns 1, 2, 12, 3, 8, 4, 6, and 24 instead of 1, 2, 3, 4, 6, 8, 12, and 24. I can't print it as a table, so it does need to be returned as a list. If it has to be sorted as a table first and then turned into a list, I can deal with that. All that matters is the end result being the list.
The other detail is that I need to check whether there are only two numbers in the list or more. If there are only two numbers, the number is prime (1 and the number itself). The current way I have it does not work. Is there a way to accomplish this? I appreciate all the help!
function get_all_factors(number)
  local factors = 1
  for possible_factor = 2, math.sqrt(number), 1 do
    local remainder = number % possible_factor
    if remainder == 0 then
      local factor, factor_pair = possible_factor, number / possible_factor
      factors = factors .. ", " .. factor
      if factor ~= factor_pair then
        factors = factors .. ", " .. factor_pair
      end
    end
  end
  factors = factors .. ", and " .. number
  return factors
end
local allfactors = get_all_factors(zs.param(1))
if zs.func.numitems(allfactors) == 2 then
  return zs.param(1) .. " is prime."
else
  return zs.param(1) .. " is not prime, and its factors are: " .. allfactors
end
If I understood your problem correctly, I recommend splitting up your logic a bit: first create a table containing the factors, then sort it, and after that create the string representation.
-- Creates a table containing all the factors for a number.
function createFactors(n)
  local factors = {1}
  -- The for loop from your script goes here; when you find a factor,
  -- add it (and its pair) to the table instead of to a string.
  for possible_factor = 2, math.sqrt(n) do
    if n % possible_factor == 0 then
      local factor_pair = n / possible_factor
      table.insert(factors, possible_factor)
      if possible_factor ~= factor_pair then
        table.insert(factors, factor_pair)
      end
    end
  end
  table.insert(factors, n)
  -- Once you've found all the factors, just return the table.
  return factors
end
-- Lua offers a method for sorting tables called table.sort.
local factors = createFactors(139)
table.sort(factors)
-- Your prime check also becomes simple: #factors == 2 means the only
-- factors are 1 and the number itself.
-- There is a method for creating a string representation of a table
-- called table.concat; the first parameter is the table and the second
-- is the separator used to delimit the values.
local result = table.concat(factors, ", ")
Nice answer from ponzao. To put the finishing touches on your result, here's a general-purpose routine for turning a list of strings in Lua into an English string, with "and", that represents the list:
function string.commafy(t, andword)
  andword = andword or 'and'
  local n = #t
  if n == 1 then
    return t[1]
  elseif n == 2 then
    return table.concat { t[1], ' ', andword, ' ', t[2] }
  else
    local last = t[n]
    t[n] = andword .. ' ' .. t[n]
    local answer = table.concat(t, ', ')
    t[n] = last
    return answer
  end
end
Instead of table.concat(factors, ', ') use string.commafy(factors).
