How to elegantly beautify TopoJSON code?

Is there a tool to explore the tree structure of one-line TopoJSON files? (beautify)
{"type":"Topology","transform":{"scale":[0.0015484881821515486,0.0010301030103010299],"translate":-5.491666666666662,41.008333333333354]},"objects": {"level_0001m":{"type":"GeometryCollection","geometries":[{"type":"Polygon","arcs":[[0]],"properties":{"name":1}},{"type":"Polygon","arcs":[[1]],"properties":{"name":1}},{ ... }]},"level_0050m":{ ... }}}
Comments: My current method is to open the TopoJSON .json file in a text editor and manually look for clues while browsing. I end up summarizing the whole thing by hand and keeping a handy note, something like:
{
  "type": "Topology",
  "transform": {
    "scale": [0.0015484881821515486, 0.0010301030103010299],
    "translate": [-5.491666666666662, 41.008333333333354]
  },
  "objects": {
    "level_0001m": {
      "type": "GeometryCollection",
      "geometries": [
        {"type": "Polygon", "arcs": [[0]], "properties": {"name": 1}},
        {"type": "Polygon", "arcs": [[1]], "properties": {"name": 1}},
        { ... }
      ]
    },
    "level_0050m": { ... }
  }
}
But are there more advanced tools to open, explore, and edit TopoJSON?

Try jsbeautifier. I just did it this way.

If you're working on Windows, try JSONedit. It's a generic JSON editor, but it is relatively efficient when handling medium-sized JSON files (like your world-50m.json: 747 kB, 254k nodes, including 165k int and 88k array nodes). Files similar to your notes can be created by deleting array elements after the first few (RMB + "Delete all siblings after node").

http://jsonprettyprint.com/json-pretty-printer.php
I tried this one with a 1.9 MB file and it worked; maybe it will work for you as well.

js-beautify from the command line generates JSON the way I would write it by hand.
https://github.com/einars/js-beautify
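If you'd rather not install anything, Python's built-in json module gives the same kind of result (a minimal sketch; the file names here are hypothetical):

import json

# Load the one-line TopoJSON and re-dump it with indentation.
with open("topo.json") as src:             # hypothetical input file
    topology = json.load(src)

with open("topo_pretty.json", "w") as dst:
    json.dump(topology, dst, indent=2)     # indent=2 expands it into a readable tree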

Use a JSON prettifier, example: http://pro.jsonlint.com
Use http://jsoneditoronline.org

Related

Using a Power Automate flow, how do I convert a JSON array to a delimited string?

In Power Automate I am calling an API which returns this JSON:
{
  "status": "200",
  "Suburbs": [
    {
      "ID": "1000",
      "Name": "CONCORD WEST",
      "Postcode": "2138"
    },
    {
      "ID": "1001",
      "Name": "LIBERTY GROVE",
      "Postcode": "2138"
    },
    {
      "ID": "1002",
      "Name": "RHODES",
      "Postcode": "2138"
    },
    {
      "ID": "3891",
      "Name": "UHRS POINT",
      "Postcode": "2138"
    },
    {
      "ID": "1003",
      "Name": "YARALLA",
      "Postcode": "2138"
    }
  ]
}
Using PA actions, how do I convert this JSON to a String variable that looks like this?
"CONCORD WEST, LIBERTY GROVE, RHODES, UHRS POINT, YARALLA"
I figured out how to do this. I prefer not to use complex code-style expressions in Power Automate flows, as I think they are hard to understand and hard to maintain, so I used standard PA actions where I could.
I parsed the JSON, then used "Select" to pick out the suburb names, then used concat() within a "for each" loop through the Suburbs array. I think Compose could probably be used in place of concat(), but I stopped investigating once I'd found this solution.
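For reference, here is the same parse / select / join logic sketched in plain Python rather than as a flow definition, just to make the intended result explicit (the api_response value below stands in for the body returned by the API call):

import json

api_response = """
{
  "status": "200",
  "Suburbs": [
    {"ID": "1000", "Name": "CONCORD WEST", "Postcode": "2138"},
    {"ID": "1001", "Name": "LIBERTY GROVE", "Postcode": "2138"}
  ]
}
"""

parsed = json.loads(api_response)                # "Parse JSON" step
names = [s["Name"] for s in parsed["Suburbs"]]   # "Select" step: pick out the suburb names
print(", ".join(names))                          # join them with a delimiter
# prints: CONCORD WEST, LIBERTY GROVE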

How can I two-phase split a large JSON file in NiFi?

I'm using NiFi to retrieve data and push it to Kafka. I'm currently in a test phase and I'm using a large JSON file.
My JSON file contains 500K records.
At the moment, I have a GetFile processor to fetch the file and a SplitJson processor.
JsonPath Expression: $..posts.*
This configuration works with a small file containing 50K records, but it crashes on large files.
My JSON file looks like this, with the 500K records in "posts":[]
{
  "meta": {
    "requestid": "request1000",
    "http_code": 200,
    "network": "twitter",
    "query_type": "realtime",
    "limit": 10,
    "page": 0
  },
  "posts": [
    {
      "network": "twitter",
      "posted": "posted1",
      "postid": "id1",
      "text": "text1",
      "lang": "lang1",
      "type": "type1",
      "sentiment": "sentiment1",
      "url": "url1"
    },
    {
      "network": "twitter",
      "posted": "posted2",
      "postid": "id2",
      "text": "text2",
      "lang": "lang2",
      "type": "type2",
      "sentiment": "sentiment2",
      "url": "url2"
    }
  ]
}
I have read some documentation about this problem, but those topics deal with text files, where people suggest chaining several SplitText processors to split the file progressively. With a rigid structure like my JSON, I don't understand how I could do that.
I'm looking for a solution that handles 500K records well.
Unfortunately I think this case (large array inside a record) is not handled very well right now...
SplitJson requires the entire flow file to be read into memory, and it also doesn't have an outgoing split size. So this won't work.
SplitRecord generally would be the correct solution, but currently there are two JSON record readers - JsonTreeReader and JsonPathReader. Both of these stream records, but the issue here is there is only one huge record, so they will each read the entire document into memory.
There have been a couple of efforts around this specific problem, but unfortunately none of them have made it into a release.
This PR, which is now closed, added a new JSON record reader that could stream records starting from a JSON path, which in your case would be $.posts:
https://github.com/apache/nifi/pull/3222
With that reader you wouldn't even do a split, you would just send the flow file to PublishKafkaRecord_2_0 (or whichever appropriate version of PublishKafkaRecord), and it would read each record and publish to Kafka.
There is also an open PR for a new SelectJson processor that looks like it could potentially help:
https://github.com/apache/nifi/pull/3455
Try using the SplitRecord processor in NiFi.
Define Record Reader/Writer controller services in the SplitRecord processor.
Then configure Records Per Split to 1 and use the splits relationship for further processing.
Alternatively, if you want to flatten and fork the record, use the ForkRecord processor in NiFi.
For usage, refer to this link.
I had the same issue with JSON and ended up writing a streaming parser.
Use the ExecuteGroovyScript processor with the following code.
It should split the large incoming file into small ones:
@Grab(group='acme.groovy', module='acmejson', version='20200120')
import groovyx.acme.json.AcmeJsonParser
import groovyx.acme.json.AcmeJsonOutput

def ff = session.get()
if (!ff) return

def objMeta = null
def count = 0

ff.read().withReader("UTF-8") { reader ->
    new AcmeJsonParser().withFilter {
        onValue('$.meta') {
            // just remember it to use later
            objMeta = it
        }
        onValue('$.posts.[*]') { objPost ->
            def ffOut = ff.clone(false)  // clone without content
            ffOut.post_index = count     // add attribute with index
            // write a small json
            ffOut.write("UTF-8") { writer ->
                AcmeJsonOutput.writeJson([meta: objMeta, post: objPost], writer, true)
            }
            REL_SUCCESS << ffOut  // transfer to success
            count++
        }
    }.parse(reader)
}
ff.remove()
ff.remove()
output file example:
{
  "meta": {
    "requestid": "request1000",
    "http_code": 200,
    "network": "twitter",
    "query_type": "realtime",
    "limit": 10,
    "page": 0
  },
  "post": {
    "network": "twitter",
    "posted": "posted11",
    "postid": "id11",
    "text": "text11",
    "lang": "lang11",
    "type": "type11",
    "sentiment": "sentiment11",
    "url": "url11"
  }
}
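The same two-phase idea can also be prototyped outside NiFi with a streaming JSON parser, which is handy for checking split sizes before wiring up the flow. Below is a minimal Python sketch using the ijson library (assumed installed via pip; the file names and chunk size are hypothetical):

import json
import ijson

CHUNK = 10000  # records per output file (assumed value)

with open("large.json", "rb") as src:
    batch, part = [], 0
    for post in ijson.items(src, "posts.item"):  # streams one element of "posts" at a time
        batch.append(post)
        if len(batch) == CHUNK:
            with open(f"posts_part_{part}.json", "w") as out:
                json.dump(batch, out, default=str)  # default=str copes with ijson's Decimal numbers
            batch, part = [], part + 1
    if batch:  # flush the remainder
        with open(f"posts_part_{part}.json", "w") as out:
            json.dump(batch, out, default=str)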

Extract the content of OUTLINE (or AL OUTLINE) from VS Code

Is there any bash command that can extract the content of the OUTLINE or AL OUTLINE section of VS Code and write it to a text document?
I made a VSCode extension to accomplish this.
Extension page
GitHub Repo
Install it, then run Ctrl + Shift + P -> List Symbols
If you don't get a better answer, you can try the Show Functions extension.
It can output a (clickable) list of functions and symbols into a separate editor, which you can then Ctrl-A to copy and paste.
You don't say which languages you are using; I use the following for .js files:
"funcList": {
"doubleSpacing": true,
"filters": [
{
"extensions": [
".js"
],
"native": "/^[a-z]+\\s+\\w+\\s*\\(.*\\)/mgi",
"display": "/\\S* +(\\w+\\s*\\(.*\\))/1",
"sort": 0
}
]
}
which captures and displays the function name and args like:
loadCountryTaxonomy(country)
toggleSearchResultsPanel()
updatetaxArticleQueries(data)
but you can modify the regex to your requirements. I don't try to list symbols other than functions but apparently you can with this extension.
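If you want to sanity-check such a filter regex outside VS Code, a rough Python equivalent of the two patterns above is easy to put together (the sample source string below is made up):

import re

source = """
function loadCountryTaxonomy(country) { }
function toggleSearchResultsPanel() { }
"""

# Mirrors the "native" pattern: a lowercase keyword, whitespace, a name, then (args).
native = re.compile(r"^[a-z]+\s+\w+\s*\(.*\)", re.IGNORECASE | re.MULTILINE)
# Mirrors the "display" pattern: keep only "name(args)".
display = re.compile(r"\S* +(\w+\s*\(.*\))")

for match in native.findall(source):
    shown = display.search(match)
    print(shown.group(1) if shown else match)
# loadCountryTaxonomy(country)
# toggleSearchResultsPanel()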

ZingChart setseriesdata visibility issue

Pretty straightforward question: as soon as I use setseriesdata, my pie chart is no longer visible. I have checked the plot object and the series were updated correctly, but since I don't find a visibility attribute anywhere in the plot object, I am at a loss.
The lack of ZingChart documentation and proper examples does not help either. I'm fairly certain this is a simple scenario to solve, but I've been unable to do so.
zingchart.exec('organismplot', 'setseriesdata', {
  "data": [
    {
      "values": data_update.organisms,
      "text": "active",
      "background-color": "#2d4962",
      "border-width": "1px",
      "shadow": 0,
      "visible": 1
    },
    {
      "values": (data_update.totalorganism - data_update.organisms),
      "text": "passive",
      "background-color": "#2d4962",
      "border-width": "1px",
      "shadow": 0,
      "visible": 0
    }
  ]
});
I'm a member of the ZingChart team, and I'm happy to help you out!
What is the type of data_update.organisms and data_update.totalorganism-data_update.organisms? Make sure that you are passing a single-element array; if those are simply single values, wrap the variables in brackets to create a single-value array for the "values" attribute, e.g.:
"data": [
{
"values":[data_update.organisms], // If data_update.organisms is a single value.
"text":"active",
"background-color":"#2d4962",
"border-width":"1px",
"shadow":0,
"visible":1
},
{
"values":[data_update.totalorganism-data_update.organisms], // Again, single value array.
"text":"passive",
"background-color":"#2d4962",
"border-width":"1px",
"shadow":0,
"visible":0
}
]
I've created a demo using your exact method call, except I've changed the "values" attributes to use a single-value array, which is needed for pie charts. Check out the demo here.
I hope that helps. Let me know if you need some more help!

How do I parse a curly bracket structured file (e.g. Ruby)?

The contents of the file look like this (nesting can be arbitrarily deep):
{
{bla: XBS/333: bla9,1-}
}
{
{q: XBS/333: bla9,1-}
{{}}
{x:{t: QWA/333: C}}
}
How do I parse it into, e.g., an Array or a Hash with Ruby? What do you think is a good data structure to store it in?
(It's a SWIFT-Banking file, if that helps)
Try a proper parser generator, for example Treetop: http://treetop.rubyforge.org/
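If you only need the nesting structure rather than a full grammar, a simple stack-based walk over the braces also works. Here is a minimal illustrative sketch (in Python rather than Ruby, with no handling of unbalanced braces), where each {...} block becomes a nested list:

def parse_braces(text):
    # Turn nested {...} blocks into nested lists of strings (illustration only).
    root = []
    stack = [root]
    buf = ""

    def flush():
        nonlocal buf
        if buf.strip():
            stack[-1].append(buf.strip())
        buf = ""

    for ch in text:
        if ch == "{":
            flush()
            child = []
            stack[-1].append(child)  # attach the new block to its parent
            stack.append(child)      # descend into it
        elif ch == "}":
            flush()
            stack.pop()              # climb back out
        else:
            buf += ch
    return root

sample = "{ {bla: XBS/333: bla9,1-} } { {q: XBS/333: bla9,1-} {{}} {x:{t: QWA/333: C}} }"
print(parse_braces(sample))
# [[['bla: XBS/333: bla9,1-']], [['q: XBS/333: bla9,1-'], [[]], ['x:', ['t: QWA/333: C']]]]

The equivalent in Ruby would use nested Arrays in the same way; a parser generator such as Treetop becomes the better choice once you need to interpret the tokens inside each block.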
