I think I have a bug in JSON.Net - debugging

So I'm a bit of a beginner and I think I've got some unexpected behaviour / a bug, though it may well be operator error rather than anything else; either way I'm stumped and don't know what to do.
I'm reading in a JSON string from
https://beta-api.betfair.com/exchange/betting/rest/v1/en/navigation/lhm.json
I'm parsing it with JSON.Net (v6.0.3 from NuGet), I'll get to how in a minute, but I'm getting an error where two of the supposedly unique objects returned have the same ID, which is something of a problem. While trying to work out where I had mashed it up I looked at the JSON string with the Visual Studio JSON Visualiser, and that is showing two different IDs as expected.
Edit
I've uploaded two pictures, but I had to do it externally, and I've copied in the section of JSON that's relevant.
http://imgur.com/pk2hIJI,SZDSSLh
{
    "children": [
        {
            "children": [
                {
                    "exchangeId": "1",
                    "id": "1.114548892",
                    "name": "Moneyline",
                    "type": "MARKET"
                }
            ],
            "id": "27229997",
            "name": "Hamilton # Calgary",
            "type": "EVENT"
        },
        {
            "children": [
                {
                    "exchangeId": "1",
                    "id": "1.114548889",
                    "name": "Moneyline",
                    "type": "MARKET"
                }
            ],
            "id": "27229996",
            "name": "Toronto # Ottawa",
            "type": "EVENT"
        }
    ],
    "id": "74587734296",
    "name": "Games 18 July",
    "type": "GROUP"
},
To fetch the string I am using an object inherited from HttpClient, with
BFresponce = Await Me.GetAsync(BetFairBetaAddress & RestAddress & Method)
Dim x = Await BFresponce.Content.ReadAsStringAsync 'not normally here, just so I can view the string
Return JsonConvertHelper.DeserializeObject(Of T)(Await BFresponce.Content.ReadAsStreamAsync())
With my own helper function
Public Shared Function DeserializeObject(Of T)(stream As Stream) As T
    Dim serializer As New JsonSerializer()
    Using streamReader As New StreamReader(stream)
        Return serializer.Deserialize(streamReader, GetType(T))
    End Using
End Function
And the class being passed in as T is
Namespace BetFairNS
    Public Class NavigationData
        Public Property name As String
        Public Property id As Single
        Public Property exchangeId As Integer
        Public Property type As NavigationDataType
        Public Property children As List(Of NavigationData)
    End Class
    Public Enum NavigationDataType
        EVENT_TYPE
        GROUP
        [EVENT]
        MARKET
        RACE
    End Enum
End Namespace
So the crux of it is: have I mashed this up somewhere? Or, if it's a bug, what do I do?

There is nothing wrong with Json.Net. The JSON data file you linked to has 260 instances of recurring IDs, all of them in the Horse Racing category. Here are the first 5:
Duplicate id found: 1.114591860
Path 1: ROOT > Horse Racing > 1600m 3yo > 1600m 3yo
Path 2: ROOT > Horse Racing > FRA > Chant (FRA) 14th Jul > 1600m 3yo
Duplicate id found: 1.114591859
Path 1: ROOT > Horse Racing > 1600m 3yo > To Be Placed
Path 2: ROOT > Horse Racing > FRA > Chant (FRA) 14th Jul > To Be Placed
Duplicate id found: 1.114591864
Path 1: ROOT > Horse Racing > 1600m 3yo > 1600m 3yo
Path 2: ROOT > Horse Racing > FRA > Chant (FRA) 14th Jul > 1600m 3yo
Duplicate id found: 1.114591863
Path 1: ROOT > Horse Racing > 1600m 3yo > To Be Placed
Path 2: ROOT > Horse Racing > FRA > Chant (FRA) 14th Jul > To Be Placed
Duplicate id found: 1.114591869
Path 1: ROOT > Horse Racing > 1600m Grp1 > 1600m Grp1
Path 2: ROOT > Horse Racing > FRA > Chant (FRA) 14th Jul > 1600m Grp1
You can check this simply by downloading the file using a web browser, saving it to disk, then opening it with a text editor and searching for the ID values I've listed. Each one appears twice, at different places in the hierarchy.
Does it say somewhere in the API documentation for this site that all IDs in the JSON will be distinct? It looks to me like they simply decided to list the same node at more than one level for browsing convenience (i.e. list all the races directly under "horse racing" and also list them by country/event). You are probably going to need to change your assumptions about the data and adjust your code accordingly.
EDIT
Now that you have shared the actual ID / name of the node that is giving you trouble, the problem is clear. You've declared the id field of your NavigationData class as Single when it should be String. Single is a floating point type, and is not suitable for holding ID values, even if they may have a decimal point in them.
Again, take a closer look at the actual JSON file. If you search for "Hamilton # Calgary", you will see that it has an ID of 27229997. The other node, "Toronto # Ottawa", immediately beneath it, has an ID of 27229996. In your debugger image, the values both show as 27229996.0. The IDs are getting mangled most likely because Single cannot represent the number 27229997 exactly as a binary floating point number (a Single has only a 24-bit mantissa, so not every integer above 16,777,216 is representable), so the closest representable value is being chosen instead. This is a very bad thing when you need an exact representation (as you always do with an ID).
The key point is to use the right tool for the job. You cannot assume that a third-party ID will always be numeric or contain only a single decimal point, and you will never do math operations on an ID. In short, there's no reason to make it a numeric type at all. Declare it as String and that will fix the problem. I would also recommend the same for the exchangeId field, for the same reason.
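As a minimal sketch of that change (assuming the rest of the class stays exactly as posted in the question), the declarations would become:
Namespace BetFairNS
    Public Class NavigationData
        Public Property name As String
        'String preserves IDs such as "1.114548892" and "27229997" exactly as the API sends them
        Public Property id As String
        'Same reasoning: an identifier, not a number you will do arithmetic on
        Public Property exchangeId As String
        Public Property type As NavigationDataType
        Public Property children As List(Of NavigationData)
    End Class
End Namespace
With id as a String, Json.Net simply copies the token text across, so "27229997" and "27229996" stay distinct.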

Related

Is there a way to dynamically generate dropdown options in the Umbraco 7.3 backoffice?

I have a folder structure that looks a little like this (DocumentType in square brackets):
Municipality [Landingpage]
|_Areas [Area]
|_Plot [Plot]
|_Subdivision [Zoning]
| |_Ballig, Falkevej [Zone]
| |_Durup, Torpager [Zone]
| |_...
| |_Vinkel, Vinkelpletvej [Zone]
|_Properties [Properties]
|_Private Homes [Types]
|_Coporate Buildings [Types]
|_Other Buildinglots [Types]
In each [Zone] I would like to have a dropdown called "Type". And that type should be a value matching the name of one of the [Properties] [Types].
Visual example (comma separation = new option in dropdown):
You could try nuPickers (https://our.umbraco.com/packages/backoffice-extensions/nupickers/) - there's an XML data source that lets you do XPath for content (https://github.com/uComponents/nuPickers/wiki/Data-Source-Xml).
You'd be setting "Options Xpath" to something like "//Types" to get all documents of type "Types" in a dropdown. I think (I haven't used nuPickers for a while).

Elasticsearch stop words relative path

Can somebody please tell me what the Elasticsearch documentation means by a relative path to the config directory? I don't see any such directory in my ES installation. I need to find a stop words file which is defined in an ES index like "stopwords_path": "stopwords/slovak.txt", but I can't find any file with this name. Maybe Win 10 is just not able to find it because its search is really poor. Thanks a lot.
As written in the documentation, you should create the file slovak.txt according to this syntax:
A path (either relative to config location, or absolute) to a
stopwords file configuration. Each stop word should be in its own
"line" (separated by a line break). The file must be UTF-8 encoded.
so you should create a slovak.txt file like this:
a
aby
aj
ak
aká
akáže
aké
akého
akéhože
akej
akejže
akému
akémuže
akéže
ako
akom
akomže
akou
akouže
akože
akú
akúže
aký
akých
akýchže
akým
akými
akýmiže
akýmže
akýže
ale
alebo
ani
áno
asi
avšak
až
ba
bez
bezo
bol
bola
boli
bolo
buď
bude
budem
budeme
budeš
budete
budú
by
byť
cez
cezo
čej
či
čí
čia
čie
čieho
čiemu
čím
čími
čiu
čo
čoho
čom
čomu
čou
čože
ďalší
ďalšia
ďalšie
ďalšieho
ďalšiemu
ďalších
ďalším
ďalšími
ďalšiu
ďalšom
ďalšou
dnes
do
ešte
ho
hoci
i
iba
ich
im
iná
iné
iného
inej
inému
iní
inom
inú
iný
iných
iným
inými
ja
je
jeho
jej
jemu
ju
k
ká
kam
kamže
každá
každé
každého
každému
každí
každou
každú
každý
každých
každým
každými
káže
kde
ké
keď
keďže
kej
kejže
kéže
kie
kieho
kiehože
kiemu
kiemuže
kieže
koho
kom
komu
kou
kouže
kto
ktorá
ktoré
ktorej
ktorí
ktorou
ktorú
ktorý
ktorých
ktorým
ktorými
ku
kú
kúže
ký
kýho
kýhože
kým
kýmu
kýmuže
kýže
lebo
leda
ledaže
len
ma
má
majú
mal
mala
mali
mám
máme
máš
mať
máte
medzi
mi
mňa
mne
mnou
moja
moje
mojej
mojich
mojim
mojimi
mojou
moju
možno
môcť
môj
môjho
môže
môžem
môžeme
môžeš
môžete
môžu
mu
musí
musia
musieť
musím
musíme
musíš
musíte
my
na
nad
nado
najmä
nám
nami
nás
náš
naša
naše
našej
nášho
naši
našich
našim
našimi
našou
ne
neho
nech
nej
nejaká
nejaké
nejakého
nejakej
nejakému
nejakom
nejakou
nejakú
nejaký
nejakých
nejakým
nejakými
nemu
než
nič
ničím
ničoho
ničom
ničomu
nie
niečo
niektorá
niektoré
niektorého
niektorej
niektorému
niektorom
niektorou
niektorú
niektorý
niektorých
niektorým
niektorými
nielen
nich
nim
ním
nimi
no
ňom
ňou
ňu
o
od
odo
on
oň
ona
oňho
oni
ono
ony
po
pod
podľa
podo
pokiaľ
popod
popri
potom
poza
práve
pre
prečo
pred
predo
preto
pretože
pri
s
sa
seba
sebe
sebou
sem
si
sme
so
som
ste
sú
svoj
svoja
svoje
svojho
svojich
svojim
svojím
svojimi
svojou
svoju
ta
tá
tak
taká
takáto
také
takéto
takej
takejto
takého
takéhoto
takému
takémuto
takí
taký
takýto
takú
takúto
takže
tam
táto
teba
tebe
tebou
teda
tej
tejto
ten
tento
ti
tí
tie
tieto
tiež
títo
to
toho
tohto
tohoto
tom
tomto
tomu
tomuto
toto
tou
touto
tu
tú
túto
tvoj
tvoja
tvoje
tvojej
tvojho
tvoji
tvojich
tvojim
tvojím
tvojimi
ty
tých
tým
tými
týmto
u
už
v
vám
vami
vás
váš
vaša
vaše
vašej
vášho
vaši
vašich
vašim
vaším
veď
viac
vo
však
všetci
všetka
všetko
všetky
všetok
vy
z
za
začo
začože
zo
že
This file has to be inside ES_PATH_CONF, so on Linux that is /etc/elasticsearch/ and on Windows it is C:\ProgramData\Elastic\Elasticsearch\config. Then you follow relative path notation. So if the file is at C:\ProgramData\Elastic\Elasticsearch\config\slovak.txt, you should set your path this way:
"stopwords_path": "slovak.txt"
If you put it at C:\ProgramData\Elastic\Elasticsearch\config\synonym\slovak.txt, you set:
"stopwords_path": "synonym/slovak.txt"
What this documentation means is that you can provide either an absolute path or a relative path to a text file that defines your own stop words.
If you use a relative path, the file should be inside the config folder of Elasticsearch, where your elasticsearch.yml is present.
If you choose an absolute path, you can store the file in any location that Elasticsearch has access to.
I just reproduced your setup and used the GET settings API to show the current location of this file.
For example:
GET yourindex/_settings
It returns the path which you gave while creating this setting:
{
    "stopwords": {
        "settings": {
            "index": {
                "number_of_shards": "1",
                "provided_name": "stopwords",
                "creation_date": "1587374021579",
                "analysis": {
                    "filter": {
                        "my_stop": {
                            "type": "stop",
                            "stopwords": [
                                "and",
                                "is",
                                "the"
                            ],
                            "stopwords_path": "opster.txt" <-- this is the file location, which in this case is relative
                        }
                    }
                },
                "number_of_replicas": "1",
                "uuid": "EQyF7JydTXGXoebh52yNpg",
                "version": {
                    "created": "7060199"
                }
            }
        }
    }
}
Update: I tried the same thing with an absolute path on my tar installation of Elasticsearch on an Ubuntu EC2 machine, and the same GET index settings call shows that absolute path in the response as well.
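Once an analyzer is wired to the file (for example the hypothetical slovak_text analyzer sketched earlier), you can also sanity-check that the stop words are actually being applied with the _analyze API:
GET /my_index/_analyze
{
    "analyzer": "slovak_text",
    "text": "ako sa hrá futbal"
}
If the file is being read, the stop words "ako" and "sa" from slovak.txt should be missing from the returned tokens, while "hrá" and "futbal" remain.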

Highlight part of code block

I have a very large code block in my .rst file, of which I would like to highlight just a small portion and make it bold. Consider the following rst:
wall of text. wall of text. wall of text.wall of text. wall of text. wall of text.wall of text. wall of text. wall of text.
wall of text. wall of text. wall of text.wall of text. wall of text. wall of text.wall of text. wall of text. wall of text.
**Example 1: Explain showing a table scan operation**::

    EXPLAIN FORMAT=JSON
    SELECT * FROM Country WHERE continent='Asia' and population > 5000000;
    {
      "query_block": {
        "select_id": 1,
        "cost_info": {
          "query_cost": "53.80" # This query costs 53.80 cost units
        },
        "table": {
          "table_name": "Country",
          "access_type": "ALL", # ALL is a table scan
          "rows_examined_per_scan": 239, # Accessing all 239 rows in the table
          "rows_produced_per_join": 11,
          "filtered": "4.76",
          "cost_info": {
            "read_cost": "51.52",
            "eval_cost": "2.28",
            "prefix_cost": "53.80",
            "data_read_per_join": "2K"
          },
          "used_columns": [
            "Code",
            "Name",
            "Continent",
            "Region",
            "SurfaceArea",
            "IndepYear",
            "Population",
            "LifeExpectancy",
            "GNP",
            "GNPOld",
            "LocalName",
            "GovernmentForm",
            "HeadOfState",
            "Capital",
            "Code2"
          ],
          "attached_condition": "((`world`.`Country`.`Continent` = 'Asia') and (`world`.`Country`.`Population` > 5000000))"
        }
      }
    }
When it converts to HTML, it syntax highlights by default (good), but I also want to specify a few lines that should be bold (the ones with comments on them, but possibly others too).
I was thinking of adding a trailing character sequence on the line (e.g. ###) and then writing a post-parser script to modify the generated HTML files. Is there a better way?
The code-block directive has an emphasize-lines option. The following highlights the lines with comments in your code.
**Example 1: Explain showing a table scan operation**

.. code-block:: python
   :emphasize-lines: 7, 11, 12

   EXPLAIN FORMAT=JSON
   SELECT * FROM Country WHERE continent='Asia' and population > 5000000;
   {
     "query_block": {
       "select_id": 1,
       "cost_info": {
         "query_cost": "53.80" # This query costs 53.80 cost units
       },
       "table": {
         "table_name": "Country",
         "access_type": "ALL", # ALL is a table scan
         "rows_examined_per_scan": 239, # Accessing all 239 rows in the table
         "rows_produced_per_join": 11,
         "filtered": "4.76",
         "cost_info": {
           "read_cost": "51.52",
           "eval_cost": "2.28",
           "prefix_cost": "53.80",
           "data_read_per_join": "2K"
         },
         "used_columns": [
           "Code",
           "Name",
           "Continent",
           "Region",
           "SurfaceArea",
           "IndepYear",
           "Population",
           "LifeExpectancy",
           "GNP",
           "GNPOld",
           "LocalName",
           "GovernmentForm",
           "HeadOfState",
           "Capital",
           "Code2"
         ],
         "attached_condition": "((`world`.`Country`.`Continent` = 'Asia') and (`world`.`Country`.`Population` > 5000000))"
       }
     }
   }

Sublime Text - Goto line and column

Currently, the Go to line shortcut (CTRL+G in Windows/Linux) only allows navigating to a specific line.
It would be nice to optionally allow the column number to be specified after comma, e.g.
:30,11 to go to line 30, column 11
Is there any plugin or custom script to achieve this?
Update 3
This is now part of Sublime Text 3 starting in build number 3080:
Goto Anything supports :line:col syntax in addition to :line
For example, you can use :30:11 to go to line 30, column 11.
Update 1 - outdated
I just realized you've tagged this as sublime-text-3 and I'm using 2. It may work for you, but I haven't tested in 3.
Update 2 - outdated
Added some sanity checks and some modifications to GotoRowCol.py
Created github repo sublimetext2-GotoRowCol
Forked and submitted a pull request to commit addition to package_control_channel
Edit 3: All requirements of the package_control repo have been met. This package is now available in the package repository in the application (Install -> GotoRowCol to install).
I too would like this feature. There's probably a better way to distribute this but I haven't really invested a lot of time into it. I read through some plugin dev tutorial really quick, and used some other plugin code to patch this thing together.
Select the menu option Tools -> New Plugin
A new example template will open up. Paste this into the template:
import sublime, sublime_plugin


class PromptGotoRowColCommand(sublime_plugin.WindowCommand):
    def run(self, automatic = True):
        self.window.show_input_panel(
            'Enter a row and a column',
            '1 1',
            self.gotoRowCol,
            None,
            None
        )
        pass

    def gotoRowCol(self, text):
        try:
            (row, col) = map(str, text.split(" "))
            if self.window.active_view():
                self.window.active_view().run_command(
                    "goto_row_col",
                    {"row": row, "col": col}
                )
        except ValueError:
            pass


class GotoRowColCommand(sublime_plugin.TextCommand):
    def run(self, edit, row, col):
        print("INFO: Input: " + str({"row": row, "col": col}))
        # rows and columns are zero based, so subtract 1
        # convert text to int
        (row, col) = (int(row) - 1, int(col) - 1)
        if row > -1 and col > -1:
            # col may be greater than the row length
            col = min(col, len(self.view.substr(self.view.full_line(self.view.text_point(row, 0)))) - 1)
            print("INFO: Calculated: " + str({"row": row, "col": col}))
            self.view.sel().clear()
            self.view.sel().add(sublime.Region(self.view.text_point(row, col)))
            self.view.show(self.view.text_point(row, col))
        else:
            print("ERROR: row or col are less than zero")
Save the file. When the "Save As" dialog opens, it should be in the Sublime Text 2\Packages\User\ directory. Navigate up one level, create the folder Sublime Text 2\Packages\GotoRowCol\, and save the file there with the name GotoRowCol.py.
Create a new file in the same directory, Sublime Text 2\Packages\GotoRowCol\GotoRowCol.sublime-commands, and open it in Sublime Text. Paste this into the file:
[
    {
        "caption": "GotoRowCol",
        "command": "prompt_goto_row_col"
    }
]
Save the file. This should register the GotoRowCol plugin in the Sublime Text system. To use it, hit ctrl + shift + p, then type GotoRowCol and hit ENTER. A prompt will show up at the bottom of the Sublime Text window with two numbers prepopulated: the first one is the row you want to go to, the second one is the column. Enter the values you desire, then hit ENTER.
I know this is a complex operation, but it's what I have right now and is working for me.
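If you would rather trigger the prompt with a keyboard shortcut than through the command palette, a user key binding along these lines should also work (the ctrl+alt+g combination is just an example; pick one that doesn't clash with your setup). Add it via Preferences -> Key Bindings - User:
[
    // example binding for the prompt command defined above; change the keys to taste
    { "keys": ["ctrl+alt+g"], "command": "prompt_goto_row_col" }
]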

Ruby - URL to Markdown

TOTAL rookie here.
I'm working on customizing a script made by Brett Terpstra - http://brettterpstra.com/2013/11/01/save-pocket-favorites-to-nvalt-with-ifttt-and-hazel/
Mine is a different use: I'd like to save my pinboard bookmarks with a specific tag to a file in dropbox in Markdown.
I feed it a text file such as:
Title: Yesterday is over.
URL: http://www.jonacuff.com/blog/want-to-change-the-world-get-doing/
Tags: 2md, 2wcx, 2pdf
Date: June 20, 2013 at 06:20PM
Image: notused
Excerpt: You can't start the next chapter of your life if you keep re-reading the last one.
And it outputs the markdown file.
Everything works great except when the 'excerpt' (see above) is more than one line. Sometimes it's a couple of paragraphs. When that happens, it stops working. When I hit enter from the command line, it's still waiting for more input.
Here's an example of a file that it doesn't work on:
Title: Talking ’bout my Generation.
URL: http://blog.greglaurie.com/?p=8881
Tags: 2md, 2wcx, 2pdf
Date: June 28, 2013 at 09:46PM
Image: notused
Excerpt: Contrast two men from the 19th century: Max Jukes and Jonathan Edwards.
Max Jukes lived in New York. He did not believe in Christ or in raising his children in the way of the Lord. He refused to take his children to church, even when they asked to go. Of his 1,026 descendants:
•300 were sent to prison for an average term of 13 years
•190 were prostitutes
•680 were admitted alcoholics
His family, thus far, has cost the state in excess of $420,000 and has made no contribution to society.
Jonathan Edwards also lived in New York, at the same time as Jukes. He was known to have studied 13 hours a day and, in spite of his busy schedule of writing, teaching, and pastoring, he made it a habit to come home and spend an hour each day with his children. He also saw to it that his children were in church every Sunday. Of his 929 descendants:
•430 were ministers
•86 became university professors
•13 became university presidents
•75 authored good books
•7 were elected to the United States Congress
•1 was Vice President of the United States
Edwards’ family never cost the state one cent.
We tend to think that our decisions only affect ourselves, but they have ramifications for generations to come.
Here's a screenshot of what it looks like after I run the command: https://www.dropbox.com/s/i9zg483k7nkdp6f/Screenshot%202013-11-22%2016.39.17.png
I'm hoping it's something easy. Any ideas?
#!/usr/bin/env ruby
# Works with IFTTT recipe https://ifttt.com/recipes/125999
#
# Set Hazel to watch the folder you specify in the recipe.
# Make sure nvALT is set to store its notes as individual files.
# Edit the $target_folder variable below to point to your nvALT
# notes folder.
require 'date'
require 'open-uri'
require 'net/http'
require 'fileutils'
require 'cgi'

$target_folder = "~/Dropbox/messx/urls2md"

def url_to_markdown(url)
  res = Net::HTTP.post_form(URI.parse("http://heckyesmarkdown.com/go/"), {'u' => url, 'read' => '1'})
  if res.code.to_i == 200
    res.body
  else
    false
  end
end

file = ARGV[0]

begin
  input = IO.read(file).force_encoding('utf-8')
  headers = {}
  input.each_line {|line|
    key, value = line.split(/: /)
    headers[key] = value.strip || ""
  }
  outfile = File.join(File.expand_path($target_folder), headers['Title'].gsub(/["!*?'|]/,'') + ".txt")
  date = Time.now.strftime("%Y-%m-%d %H:%M")
  date_added = Date.parse(headers['Date']).strftime("%Y-%m-%d %H:%M")
  content = "Title: #{headers['Title']}\nDate: #{date}\nDate Added: #{date_added}\nSource: #{headers['URL']}\n"
  tags = false
  if headers['Tags'].length > 0
    tag_arr = headers['Tags'].split(", ")
    tag_arr.map! {|tag|
      %Q{"#{tag.strip}"}
    }
    tags = tag_arr.join(" ")
    content += "Keywords: #{tags}\n"
  end
  markdown = url_to_markdown(headers['URL']).force_encoding('utf-8')
  if markdown
    content += headers['Image'].length > 0 ? "\n\n> #{headers['Excerpt']}\n\n---#{markdown}\n" : "\n\n" + markdown
  else
    content += headers['Image'].length > 0 ? "\n\n![](#{headers['Image']})\n\n#{headers['Excerpt']}\n" : "\n\n" + headers['Excerpt']
  end
  File.open(outfile, 'w') {|f|
    f.puts content
  }
  if tags && File.exists?("/usr/local/bin/openmeta")
    %x{/usr/local/bin/openmeta -a #{tags} -p "#{outfile}"}
  end
  # FileUtils.rm(file)
rescue Exception => e
  puts e
end
How about this? Modify your input.each_line area accordingly:
headers = {}
key = nil
input.each_line do |line|
  match = /^(?<key>\w+)\s*:\s*(?<value>.*)/.match(line)
  value = line
  if match
    key = match[:key].strip
    headers[key] = match[:value].strip
  else
    headers[key] += line
  end
end
First, splitting on just ":" is dangerous, since a colon can appear in the content. Instead, a regex of /^\w+:.*/ (simplified from the code above) will match "Word: Content". Since the lines after "Excerpt:" aren't prefixed, you need to hang on to the last seen key and just append the line when there's no key in it. You may need to add a newline in there, depending on what you're doing with that header information, but it seems to work.
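To see what this does with a multi-line excerpt, here is a small standalone sketch; the three-line input string is made up for illustration:
input = "Title: Talking about my Generation.\n" +
        "Excerpt: Contrast two men from the 19th century.\n" +
        "Max Jukes lived in New York.\n"

headers = {}
key = nil
input.each_line do |line|
  match = /^(?<key>\w+)\s*:\s*(?<value>.*)/.match(line)
  if match
    key = match[:key].strip
    headers[key] = match[:value].strip
  else
    # Continuation line: no "Key:" prefix, so append it to the last seen header
    headers[key] += line
  end
end

puts headers['Title']   # => Talking about my Generation.
puts headers['Excerpt'] # => both excerpt lines, joined with no newline between them,
                        #    which is the "may need to add a newline" caveat above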
