Get top level parent node for any node - algorithm

Given the format at the end of the question, what's the best way to get the top-level name for a given item?
Top-level names are the ones with parentId = 1.
def getTopLevel(name: String): String = {
// Environment(150) -> Environment(150) - since its parentId is 1
// Assassination -> Security - since Assassination(12) -> Terrorism(10) -> Security(2)
}
Here's my current approach, but is there something better?
unmapped = categories.size
Loop through the list while there are still unmapped items:
- build a Map[Int, String] for the top-level names.
- build a Map[Int, Int] that maps an id to its top-level id.
- keep track of the number of unmapped items.
Once the loop exits, I can use both maps to get the job done.
[
{
"name": "Destination Overview",
"id": 1,
"parentId": null
},
{
"name": "Environment",
"id": 150,
"parentId": 1
},
{
"name": "Security",
"id": 2,
"parentId": 1
},
{
"name": "Armed Conflict",
"id": 10223,
"parentId": 2
},
{
"name": "Civil Unrest",
"id": 21,
"parentId": 2
},
{
"name": "Terrorism",
"id": 10,
"parentId": 2
},
{
"name": "Assassination",
"id": 12,
"parentId": 10
}
]

This is actually two questions:
1. parsing JSON into a Scala collection, and
2. using that collection to trace items back to the top parent.
For the first question, you can use play-json. The second part can be handled with a tail-recursive function. Here is the full program that solves both problems:
import play.api.libs.json.{Json, Reads}
case class Node(name: String, id: Int, parentId: Option[Int])
object JsonParentFinder {
def main(args: Array[String]): Unit = {
val s =
"""
|[
| {
| "name": "Destination Overview",
| "id": 1,
| "parentId": null
| },
| {
| "name": "Environment",
| "id": 150,
| "parentId": 1
| },
// rest of the json
|]
|""".stripMargin
implicit val NodeReads: Reads[Node] = Json.reads[Node]
val r = Json.parse(s).as[Seq[Node]]
.map(x => x.id -> x).toMap
println(getTopLevelNode(150, r))
println(getTopLevelNode(12, r))
}
def getTopLevelNode(itemId: Int, nodes: Map[Int, Node], path: List[Node] = List.empty[Node]): List[Node] = {
  val node = nodes(itemId)
  if (node.parentId.isEmpty) // only the root has no parent
    node +: path
  else
    getTopLevelNode(node.parentId.get, nodes, node +: path)
}
}
Output will be:
List(Node(Destination Overview,1,None), Node(Environment,150,Some(1)))
List(Node(Destination Overview,1,None), Node(Security,2,Some(1)), Node(Terrorism,10,Some(2)), Node(Assassination,12,Some(10)))
A few notes:
I have not implemented comprehensive error-handling logic. The implicit assumption is that the only item with parentId == None is the root node; otherwise nodes(itemId).parentId.get could fail.
Also, in creating the map, the assumption is that all items have unique ids.
Another assumption is that every node eventually has a path to the root node. If that is not the case, this will fail, but it should be straightforward to handle such cases by adding more stop conditions.
I am prepending items to the accumulator list (named path here) because the prepend operation on Scala's List takes constant time. You can reverse the resulting list if you need the opposite order, or use another data structure, such as Vector, that builds the path efficiently by appending.
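The original question asks for a name-based lookup (getTopLevel(name: String): String) returning the top-level name, i.e. the ancestor whose parentId is 1. A thin wrapper over the same map covers that; a minimal sketch, assuming node names are unique (the wrapper and the byName map are mine, not part of the question):
def getTopLevel(name: String, nodes: Map[Int, Node]): String = {
  val byName = nodes.values.map(n => n.name -> n).toMap // assumes unique names
  val path = getTopLevelNode(byName(name).id, nodes)
  // path(0) is the root; path(1) is the top-level (parentId = 1) ancestor.
  // For the root itself there is no such ancestor, so fall back to the root.
  if (path.size > 1) path(1).name else path.head.name
}
For example, getTopLevel("Assassination", r) returns "Security", and getTopLevel("Environment", r) returns "Environment".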

Related

GraphQL on clause with enum type

I have a question regarding GraphQL, because I do not know if what I want is possible or not.
I have a simple schema like this:
enum Range{
D,
D_1,
D_7
}
type Data {
id: Int!
levels(range: [Range!]):[LevelEntry]
}
type LevelEntry{
range: Range!
levelData: LevelData
}
type LevelData {
range: Range!
users: Int
name: String
stairs: Int
money: Float
}
Basically I want to write a query that retrieves different attributes for the different entries in the levelData property of the levels array, filtered by some range of levels.
For instance:
data {
"id": 1,
"levels": [
{
"range": D,
"levelData": {
"range": D,
"users": 1
}
},
{
"range": D_1,
"levelData": {
"range": D_1,
"users": 1,
"name": "somename"
}
}
]
This means that for D I want the "range" and "users" properties, and for D_1 the "range", "users", and "name" properties.
I have drafted an example query, but I do not know if this is possible:
query data(range: [D,D_1]){
id,
levels {
range
... on D {
range,
users
}
... on D_1 {
range,
users,
name
}
}
}
Is it possible? If it is, how can I do it?
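One note that may help: inline fragments (... on SomeType) discriminate on GraphQL types, not on enum values, so ... on D will not validate against this schema. A hedged sketch of an alternative that stays within the schema above (assuming data is a root query field): filter with the levels argument the schema already defines, request the union of the fields you need, and pick the per-range fields client-side.
{
  data {
    id
    levels(range: [D, D_1]) {
      range
      levelData {
        range
        users
        name
      }
    }
  }
}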

Null pointer exception while consuming streams

{
"rules": [
{
"rank": 1,
"grades": [
{
"id": 100,
"hierarchyCode": 32
},
{
"id": 200,
"hierarchyCode": 33
}
]
},
{
"rank": 2,
"grades": []
}
]
}
I have JSON like the above, and I'm using streams to return the "hierarchyCode" based on some condition. For example, if I pass 200, my result should print 33. So far I have done something like this:
request.getRules().stream()
.flatMap(ruleDTO -> ruleDTO.getGrades().stream())
.map(gradeDTO -> gradeDTO.getHierarchyCode())
.forEach(hierarchyCode -> {
//I'm doing some business logic here
Optional<SomePojo> dsf = someList.stream()
.filter(pojo -> hierarchyCode.equals(pojo.getId())) // lets say pojo.getId() returns 200
.findFirst();
System.out.println(dsf.get().getCode());
});
In the first iteration it returns the expected 33, but in the second iteration it fails with a NullPointerException instead of just skipping, since the "grades" array is empty that time. How do I handle the null pointer exception here?
You can use the code snippet below, using Java 8:
int valueToFilter = 200;
List<Grade> gradeList = data.getRules().stream()
        .map(Rule::getGrades)
        .filter(grades -> grades != null && !grades.isEmpty())
        .flatMap(Collection::stream)
        .collect(Collectors.toList());
Optional<Grade> optional = gradeList.stream()
        .filter(grade -> grade.getId() == valueToFilter)
        .findFirst();
if (optional.isPresent()) {
    int result = optional.get().getHierarchyCode();
    System.out.println(result);
}
I have created POJOs to match this code; you can try the same approach with your own class structure.
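In case you need the POJOs for this snippet, here is a minimal sketch (class and field names are illustrative, mirroring the JSON above):
import java.util.List;

// Grade mirrors one entry of the "grades" array.
class Grade {
    private int id;
    private int hierarchyCode;
    public int getId() { return id; }
    public int getHierarchyCode() { return hierarchyCode; }
}

// Rule mirrors one entry of the "rules" array.
class Rule {
    private int rank;
    private List<Grade> grades;
    public int getRank() { return rank; }
    public List<Grade> getGrades() { return grades; }
}

// Request mirrors the whole JSON document.
class Request {
    private List<Rule> rules;
    public List<Rule> getRules() { return rules; }
}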

Go: Removing duplicate rows after SQL join result

I'm running a joined SQL query for locations and events (occurring at the locations). In the results, the location data is naturally replicated per row, as there's a one-to-many relationship: one location holds multiple events.
What’s an optimal approach to clean up the multiplied location data?
Staying with a single SQL operation, what makes the most sense is performing a check while looping through the query results (rows).
However I cannot seem to access the locations object to check for a pre-existing location ID.
Edit:
This is the SQL output. As you can see, location data naturally occurs multiple times because it's shared across events. This will eventually be sent out as JSON, with nested structs: one for locations, one for events.
id title latlng id title locationid
1 Fox Thea... 43.6640673,-79.4213863 1 Bob's Event 1
1 Fox Thea... 43.6640673,-79.4213863 2 Jill's Event 1
2 Wrigley ... 43.6640673,-79.4213863 3 Mary's Event 2
3 Blues Bar 43.6640673,-79.4213863 4 John's Event 3
1 Fox Thea... 43.6640673,-79.4213863 5 Monthly G... 1
1 Fox Thea... 43.6640673,-79.4213863 6 A Special... 1
1 Fox Thea... 43.6640673,-79.4213863 7 The Final... 1
The JSON output. As you can see, the location data is multiplied, making for a larger JSON file:
{
"Locations": [
{
"ID": 1,
"Title": "Fox Theatre",
"Latlng": "43.6640673,-79.4213863",
},
{
"ID": 1,
"Title": "Fox Theatre",
"Latlng": "43.6640673,-79.4213863",
},
{
"ID": 2,
"Title": "Wrigley Field",
"Latlng": "43.6640673,-79.4213863",
},
{
"ID": 3,
"Title": "Blues Bar",
"Latlng": "43.6640673,-79.4213863",
},
{
"ID": 1,
"Title": "Fox Theatre",
"Latlng": "43.6640673,-79.4213863",
},
{
"ID": 1,
"Title": "Fox Theatre",
"Latlng": "43.6640673,-79.4213863",
},
{
"ID": 1,
"Title": "Fox Theatre",
"Latlng": "43.6640673,-79.4213863",
}
],
"Events": [
{
"ID": 1,
"Title": "Bob's Event",
"Location": 1
},
{
"ID": 2,
"Title": "Jill's Event",
"Location": 1
},
{
"ID": 3,
"Title": "Mary's Event",
"Location": 2
},
{
"ID": 4,
"Title": "John's Event",
"Location": 3
},
{
"ID": 5,
"Title": "Monthly Gathering",
"Location": 1
},
{
"ID": 6,
"Title": "A Special Event",
"Location": 1
},
{
"ID": 7,
"Title": "The Final Contest",
"Location": 1
}
]
}
Structs:
// Event type
type Event struct {
ID int `schema:"id"`
Title string `schema:"title"`
LocationID int `schema:"locationid"`
}
// Location type
type Location struct {
ID int `schema:"id"`
Title string `schema:"title"`
Latlng string `schema:"latlng"`
}
// LocationsEvents type
type LocationsEvents struct {
Locations []Location `schema:"locations"`
Events []Event `schema:"events"`
}
Function running the query and looping through rows:
func getLocationsEvents(db *sql.DB, start, count int) ([]Location, []Event, error) {
var locations = []Location{}
var events = []Event{}
rows, err := db.Query("SELECT locations.id, locations.title, locations.latlng, events.id, events.title, events.locationid FROM locations LEFT JOIN events ON locations.id = events.locationid LIMIT ? OFFSET ?", count, start)
if err != nil {
return locations, events, err
}
defer rows.Close()
for rows.Next() {
var location Location
var event Event
err := rows.Scan(&location.ID, &location.Title, &location.Latlng, &event.ID, &event.Title, &event.LocationID);
if err != nil {
return locations, events, err
}
// Here I can print locations and see it getting longer with each loop iteration
fmt.Println(locations)
// How can I check if an ID exists in locations?
// Ideally, if location.ID already exists in locations, then only append event, otherwise, append both the location and event
locations = append(locations, location)
events = append(events, event)
}
return locations, events, nil
}
Function called on by router:
func (a *App) getLocationsEventsJSON(w http.ResponseWriter, r *http.Request) {
count := 99
start := 0
if count > 10 || count < 1 {
count = 10
}
if start < 0 {
start = 0
}
locations, events, err := getLocationsEvents(a.DB, start, count)
if err != nil {
respondWithError(w, http.StatusInternalServerError, err.Error())
return
}
var locationsEvents LocationsEvents
locationsEvents.Locations = locations
locationsEvents.Events = events
respondWithJSON(w, http.StatusOK, locationsEvents)
}
Function sending data out as JSON (part of REST API):
func respondWithJSON(w http.ResponseWriter, code int, payload interface{}) {
response, _ := json.Marshal(payload)
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(code)
w.Write(response)
}
UPDATE:
Reverting to doing this with the SQL query, what are the possibilities? Using GROUP BY? Here is an example SQL:
SELECT locations.id, locations.title, locations.latlng, events.id, events.title, events.locationid
FROM locations
LEFT JOIN events ON locations.id = events.locationid
GROUP BY locations.id, events.id
The result set still contains duplicated location data; however, it's nicely grouped and sorted.
Then there's the possibility of sub-queries:
http://www.w3resource.com/sql/subqueries/understanding-sql-subqueries.php but now I'm running multiple SQL queries, something I wanted to avoid.
In reality I don't think I can avoid the duplicated location data when using a single join query like I am. How else would I receive a result set of joined data without having the location data replicated? Having the SQL server send me pre-made JSON as I need it (locations and events separated)? From my understanding, it's better to do that work after receiving the results.
I think you can split your request in two: locations (SELECT * FROM locations) and events (SELECT * FROM events), and then pass them to the JSON marshaller.
These two queries will be very easy and fast for the database to perform, and it will also be easier to cache their intermediate results.
but now I'm running multiple SQL queries, something I wanted to avoid.
Could you please clarify why you want to avoid multiple queries? What task do you want to solve, and what limitations do you have? Sometimes a set of small, easy queries is better than one overcomplicated query.
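If you do want to stay with the single join query, the in-loop check asked about in the question can be done with a set keyed on the location ID. A minimal sketch against the structs above (the seen map and the function name are mine):
// getLocationsEventsDeduped is the query loop from the question, with locations
// deduplicated while scanning the joined rows.
func getLocationsEventsDeduped(db *sql.DB, start, count int) ([]Location, []Event, error) {
    locations := []Location{}
    events := []Event{}
    rows, err := db.Query("SELECT locations.id, locations.title, locations.latlng, events.id, events.title, events.locationid FROM locations LEFT JOIN events ON locations.id = events.locationid LIMIT ? OFFSET ?", count, start)
    if err != nil {
        return locations, events, err
    }
    defer rows.Close()
    seen := make(map[int]bool) // location IDs already appended
    for rows.Next() {
        var location Location
        var event Event
        if err := rows.Scan(&location.ID, &location.Title, &location.Latlng, &event.ID, &event.Title, &event.LocationID); err != nil {
            return locations, events, err
        }
        // Append a location only the first time its ID is seen; always append the event.
        if !seen[location.ID] {
            seen[location.ID] = true
            locations = append(locations, location)
        }
        events = append(events, event)
    }
    return locations, events, rows.Err()
}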
If you are querying the database yourself, you should be able to avoid the duplicates in the first place.
At the end of your query, add "GROUP BY {unique field}".
For example, this should give a unique list of the locations that appear in your event list:
SELECT location.ID, location.Title, location.Latlng
FROM location
INNER JOIN event ON event.LocationID = location.ID
GROUP BY location.ID

How can I do this in painless script Elasticsearch 5.3

We're trying to replicate this ES plugin https://github.com/MLnick/elasticsearch-vector-scoring. The reason is that AWS ES doesn't allow any custom plugins to be installed. The plugin just does dot product and cosine similarity, so I'm guessing it should be really simple to replicate in a painless script. It looks like Groovy scripting is deprecated in 5.0.
Here's the source code of the plugin.
/**
 * @param params index that a scored are placed in this parameter. Initialize them here.
 */
@SuppressWarnings("unchecked")
private PayloadVectorScoreScript(Map<String, Object> params) {
params.entrySet();
// get field to score
field = (String) params.get("field");
// get query vector
vector = (List<Double>) params.get("vector");
// cosine flag
Object cosineParam = params.get("cosine");
if (cosineParam != null) {
cosine = (boolean) cosineParam;
}
if (field == null || vector == null) {
throw new IllegalArgumentException("cannot initialize " + SCRIPT_NAME + ": field or vector parameter missing!");
}
// init index
index = new ArrayList<>(vector.size());
for (int i = 0; i < vector.size(); i++) {
index.add(String.valueOf(i));
}
if (vector.size() != index.size()) {
throw new IllegalArgumentException("cannot initialize " + SCRIPT_NAME + ": index and vector array must have same length!");
}
if (cosine) {
// compute query vector norm once
for (double v: vector) {
queryVectorNorm += Math.pow(v, 2.0);
}
}
}
@Override
public Object run() {
float score = 0;
// first, get the ShardTerms object for the field.
IndexField indexField = this.indexLookup().get(field);
double docVectorNorm = 0.0f;
for (int i = 0; i < index.size(); i++) {
// get the vector value stored in the term payload
IndexFieldTerm indexTermField = indexField.get(index.get(i), IndexLookup.FLAG_PAYLOADS);
float payload = 0f;
if (indexTermField != null) {
Iterator<TermPosition> iter = indexTermField.iterator();
if (iter.hasNext()) {
payload = iter.next().payloadAsFloat(0f);
if (cosine) {
// doc vector norm
docVectorNorm += Math.pow(payload, 2.0);
}
}
}
// dot product
score += payload * vector.get(i);
}
if (cosine) {
// cosine similarity score
if (docVectorNorm == 0 || queryVectorNorm == 0) return 0f;
return score / (Math.sqrt(docVectorNorm) * Math.sqrt(queryVectorNorm));
} else {
// dot product score
return score;
}
}
I'm trying to start with just getting a field from the index, but I'm getting an error.
Here's the shape of my index.
I've enabled the delimited_payload_filter:
"settings" : {
"analysis": {
"analyzer": {
"payload_analyzer": {
"type": "custom",
"tokenizer":"whitespace",
"filter":"delimited_payload_filter"
}
}
}
}
And I have a field called #model_factor to store a vector.
{
"movies" : {
"properties" : {
"#model_factor": {
"type": "text",
"term_vector": "with_positions_offsets_payloads",
"analyzer" : "payload_analyzer"
}
}
}
}
And this is the shape of the document
{
"#model_factor":"0|1.2 1|0.1 2|0.4 3|-0.2 4|0.3",
"name": "Test 1"
}
Here's how I use the script
{
"query": {
"function_score": {
"query" : {
"query_string": {
"query": "*"
}
},
"script_score": {
"script": {
"inline": "def termInfo = doc['_index']['#model_factor'].get('1', 4);",
"lang": "painless",
"params": {
"field": "#model_factor",
"vector": [0.1,2.3,-1.6,0.7,-1.3],
"cosine" : true
}
}
},
"boost_mode": "replace"
}
}
}
And this is the error I got.
"failures": [
{
"shard": 2,
"index": "test",
"node": "ShL2G7B_Q_CMII5OvuFJNQ",
"reason": {
"type": "script_exception",
"reason": "runtime error",
"caused_by": {
"type": "wrong_method_type_exception",
"reason": "wrong_method_type_exception: cannot convert MethodHandle(List,int)int to (Object,String)String"
},
"script_stack": [
"termInfo = doc['_index']['#model_factor'].get('1',4);",
" ^---- HERE"
],
"script": "def termInfo = doc['_index']['#model_factor'].get('1',4);",
"lang": "painless"
}
}
]
The question is how do I access the index field to get #model_factor in painless scripting?
Option 1
Because #model_factor is a text field, it is possible to access it in painless scripting by setting fielddata=true in the mapping. So the mapping should be:
{
"movies" : {
"properties" : {
"#model_factor": {
"type": "text",
"term_vector": "with_positions_offsets_payloads",
"analyzer" : "payload_analyzer",
"fielddata" : true
}
}
}
}
And then it can be scored by accessing doc values:
{
"query": {
"function_score": {
"query" : {
"query_string": {
"query": "*"
}
},
"script_score": {
"script": {
"inline": "return Double.parseDouble(doc['#model_factor'].get(1)) * params.vector[1];",
"lang": "painless",
"params": {
"vector": [0.1,2.3,-1.6,0.7,-1.3]
}
}
},
"boost_mode": "replace"
}
}
}
Problems with Option 1
So it is possible to access the field data value by setting fielddata=true, but in this case the value is the vector index as a term, not the vector value, which is stored in the payload. Unfortunately, it looks like there is no way to access the token payload (where the real vector value is stored) using painless scripting and doc values. See the Elasticsearch source code and another similar question regarding access to term info.
So the answer is that with painless scripting it is NOT possible to access the payload.
I also tried to store the vector values with a simple pattern tokenizer, but when accessing the term vector values the order is not preserved. This is probably the reason the author of the plugin decided to use the term as a string (retrieving position 0 of the vector as the term "0") and then look up the real vector value in the payload.
Option 2
A very simple alternative is to use n fields in the document, each representing one position in the vector. In your example we have a 5-dimensional vector, with the values stored directly as doubles in v0..v4:
{
"#model_factor":"0|1.2 1|0.1 2|0.4 3|-0.2 4|0.3",
"name": "Test 1",
"v0" : 1.2,
"v1" : 0.1,
"v2" : 0.4,
"v3" : -0.2,
"v4" : 0.3
}
and then the painless scripting should be:
{
"query": {
"function_score": {
"query" : {
"query_string": {
"query": "*"
}
},
"script_score": {
"script": {
"inline": "return doc['v0'].getValue() * params.vector[0];",
"lang": "painless",
"params": {
"vector": [0.1,2.3,-1.6,0.7,-1.3]
}
}
},
"boost_mode": "replace"
}
}
}
It should be easy to iterate over the input vector's length and fetch the fields dynamically to calculate the dot product, generalizing the doc['v0'].getValue() * params.vector[0] that I wrote for simplicity.
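A minimal, untested sketch of that generalization (same v0..v4 fields and params as above):
"script_score": {
  "script": {
    "inline": "double dot = 0; for (int i = 0; i < params.vector.size(); ++i) { dot += doc['v' + i].value * params.vector.get(i); } return dot;",
    "lang": "painless",
    "params": {
      "vector": [0.1, 2.3, -1.6, 0.7, -1.3]
    }
  }
}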
Problems with Option2
Option 2 is viable as long as the vector dimension does not get big. I think the default Elasticsearch limit is 1000 fields per index, but it can be changed, also in an AWS environment:
curl -X PUT \
'https://.../indexName/_settings' \
-H 'cache-control: no-cache' \
-H 'content-type: application/json' \
-d '{
"index.mapping.total_fields.limit": 2000
}'
Moreover, the script speed should also be tested on a large number of documents.
Maybe in re-scoring / re-ranking scenarios it is a viable solution.
Option 3
The third option is really an experiment, and the most fascinating one in my opinion.
It tries to exploit the internal Elasticsearch representation of the Vector Space Model and does not use any scripting to score; instead it reuses the default similarity score based on tf/idf.
Lucene, which sits at the core of Elasticsearch, already uses internally a modification of cosine similarity to calculate the similarity score between documents in its Vector Space Model representation of terms, as the formula below, taken from the TFIDFSimilarity javadoc, shows:
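That practical scoring function is:
$$\text{score}(q,d) = \text{coord}(q,d) \cdot \text{queryNorm}(q) \cdot \sum_{t \in q} \Big( \text{tf}(t \in d) \cdot \text{idf}(t)^2 \cdot t.\text{getBoost}() \cdot \text{norm}(t,d) \Big)$$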
In particular, the weights of the vector representing the field are the tf/idf values of the terms of that field.
So we could index a document with term vectors, using the positions of the vector as terms. Repeating a term N times represents the value N at that position, exploiting the tf part of the scoring formula.
This means the domain of the vector should be transformed and rescaled into the positive integers {1, 2, ...}. We start from 1 so that we are sure all the documents contain all the terms, which makes it easier to exploit the formula.
For example, the vector: [21, 54, 45] can be indexed as a field in a document using a simple whitespace analyzer and the following value:
{
"#model_factor" : "0<repeated 21 times> 1<repeated 54 times> 2<repeated 45 times>",
"name": "Test 1"
}
Then to query, i.e. to calculate the dot product, we boost the individual terms that represent the index positions of the vector.
So, using the same example above, the input vector [45, 1, 1] will be transformed into the query:
"should": [
{
"term": {
"#model_factor": {
"value": "0",
"boost": 45
}
}
},
{
"term": {
"#model_factor": "1" // boost:1 by default
}
},
{
"term": {
"#model_factor": "2" // boost:1 by default
}
}
]
norm(t,d) should be disabled in the mapping so that it is not used in the formula above.
The idf part is constant for all the documents, because all of them contain all the terms (all the vectors have the same dimension).
queryNorm(q) is the same for all the documents in the formula above, so it is not a problem.
coord(q,d) is a constant, because all the documents contain all the terms.
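For reference, norms can be switched off directly in the mapping; a sketch, assuming the same #model_factor field indexed with a whitespace analyzer:
{
  "movies" : {
    "properties" : {
      "#model_factor": {
        "type": "text",
        "analyzer" : "whitespace",
        "norms": false
      }
    }
  }
}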
Problems with Option 3
It needs to be tested.
It works only for vectors of positive numbers; see this question on Mathematics Stack Exchange for making it work with negative numbers as well.
It is not exactly the same as a dot product, but it is very close when looking for similar documents based on raw vectors.
Scalability to large vector dimensions can be an issue at query time, because an N-dimensional vector means an N-term query with different boosts.
I will try it in a test index and edit this answer with the results.

How do you quickly find the URL for a Win32 API on MSDN?

How do you quickly find the URL for a Win32 API on MSDN? It's easy for .NET methods -- just add the method name (for example, System.Byte.ToString) to http://msdn.microsoft.com/library/.
However, for Win32 APIs (say GetLongPathName), this doesn't work: http://msdn.microsoft.com/en-us/library/GetLongPathName.
I want to be able to use the URL in code comments or documentation. So the URL one gets with an MSDN or Google search (for example, http://msdn.microsoft.com/library/aa364980.aspx) isn't really what I'm looking for. I'd really like my code comments to look something like:
// blah blah blah. See http://msdn.microsoft.com/en-us/library/GetLongPathName for more information.
What's the magic pixie dust for Win32 APIs? Or does it only work for .NET methods?
Google might be your best bet. I know the msdn site search has time and again pointed me in the wrong direction, but a quick switch to Google ("GetLongPathName site:msdn.microsoft.com") never steers me wrong.
FWIW, if you have MSDN installed locally on your machine, the Zeus editor has a feature to search that local copy.
For example, placing the cursor on the GetLongPathName word within a text document and using the Zeus Help, Quick Help, Current Word menu loads the following MSDN page:
ms-help://MS.VSCC.v80/MS.MSDN.vAug06.en/fileio/fs/getlongpathname.htm
I am using Linkify (by, ahem, me), which lets you link
// see msdn:GetLongPathName
to the Google search japollock mentions.
You could use MSDN search.
http://social.msdn.microsoft.com/Search/en-US/?Refinement=86&Query=GetLongPathName
Refinement=86 stands for Win32 search.
There is an MSDN search GET API (I don't know how new this is):
"https://learn.microsoft.com/api/search?locale=en-us&scoringprofile=semantic-captions&%24top=1&search=" functionName
It returns JSON like this:
{
"results": [
{
"title": "KeClearEvent function (wdm.h) - Windows drivers",
"url": "https://learn.microsoft.com/en-us/windows-hardware/drivers/ddi/wdm/nf-wdm-keclearevent",
"displayUrl": {
"content": "/windows-hardware/drivers/ddi/wdm/nf-wdm-keclearevent",
"hitHighlights": [
{
"start": 41,
"length": 12
}
]
},
"description": "The KeClearEvent routine sets an event to a not-signaled state.",
"descriptions": [
{
"content": "KeClearEvent function (wdm.h) Article 04/18/2022 2 minutes to read In this article The KeClearEvent routine sets an event to a not-signaled state.",
"hitHighlights": [
{
"start": 0,
"length": 12
},
{
"start": 87,
"length": 12
}
]
},
{
"content": "For better performance, use KeClearEvent unless the caller uses the value returned by KeResetEvent to determine what to do next.",
"hitHighlights": [
{
"start": 28,
"length": 12
}
]
}
],
"lastUpdatedDate": "2022-04-18T04:31:00+00:00",
"breadcrumbs": []
}
],
"spellingCorrection": [],
"scopeRemoved": false,
"count": 18,
"nextLink": "https://learn.microsoft.com/api/Search?locale=en-us\u0026search=KeClearEvent\u0026$skip=1\u0026$top=1",
"srcheng": "02"
}
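For example, to try it quickly from a shell (using the KeClearEvent lookup shown in the sample response above):
curl "https://learn.microsoft.com/api/search?locale=en-us&scoringprofile=semantic-captions&%24top=1&search=KeClearEvent"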
If you want it really fast, you can bind it to a hotkey.
AutoHotkey v2:
#SingleInstance force
ListLines 0
KeyHistory 0
SendMode "Input" ; Recommended for new scripts due to its superior speed and reliability.
SetWorkingDir A_ScriptDir ; Ensures a consistent starting directory.
linkFromName(functionName) {
json:=downloadToVar("https://learn.microsoft.com/api/search?locale=en-us&scoringprofile=semantic-captions&%24top=1&search=" functionName)
obj:=JSON_parse(json)
if (obj.results.Length) {
RegExMatch(obj.results[1].title, ".*?(?=\s|$)", &OutputVar)
if (OutputVar.0 = functionName) {
validUrl:=obj.results[1].url
} else if (OutputVar.0 = functionName "W" || OutputVar.0 = functionName "A") {
validUrl:=SubStr(obj.results[1].url, 1, -1) "w"
} else {
; no title match for the function name, so bail out instead of running an unset URL
return
}
; A_Clipboard:=validUrl
Run validUrl
}
}
; linkFromName("GetLongPathNameW") ;works
; linkFromName("GetLongPathName") ;works
linkFromName(A_Clipboard)
Exitapp
f3::Exitapp
downloadToVar(url) {
whr := ComObject("WinHttp.WinHttpRequest.5.1")
whr.Open("GET", url, true)
whr.SetRequestHeader("User-Agent", "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0)")
whr.Send()
; Using 'true' above and the call below allows the script to remain responsive.
whr.WaitForResponse()
return whr.ResponseText
}
JSON_parse(str) {
c_:=1
return JSON_value()
JSON_value() {
char_:=SubStr(str, c_, 1)
Switch char_ {
case "{":
obj_:=Map()
;object
c_++
loop {
skip_s()
if (SubStr(str, c_, 1) == "}") {
c_++
return obj_
}
; key_:=JSON_objKey()
; a or "a"
if (SubStr(str, c_, 1) == "`"") {
RegExMatch(str, "(?:\\.|.)*?(?=`")", &OutputVar, c_ + 1)
key_:=RegExReplace(OutputVar.0, "\\(.)", "$1")
c_+=OutputVar.Len
} else {
RegExMatch(str, ".*?(?=[\s:])", &OutputVar, c_)
key_:=OutputVar.0
c_+=OutputVar.Len
}
c_:=InStr(str, ":", true, c_) + 1
skip_s()
value_:=JSON_value()
obj_[key_]:=value_
obj_.DefineProp(key_, {Value: value_})
skip_s()
if (SubStr(str, c_, 1) == ",") {
c_++, skip_s()
}
}
case "[":
arr_:=[]
;array
c_++
loop {
skip_s()
if (SubStr(str, c_, 1) == "]") {
c_++
return arr_
}
value_:=JSON_value()
arr_.Push(value_)
skip_s()
char_:=SubStr(str, c_, 1)
if (char_ == ",") {
c_++, skip_s()
}
}
case "`"":
RegExMatch(str, "(?:\\.|.)*?(?=`")", &OutputVar, c_ + 1)
unquoted:=RegExReplace(OutputVar.0, "\\(.)", "$1")
c_+=OutputVar.Len + 2
return unquoted
case "0", "1", "2", "3", "4", "5", "6", "7", "8", "9":
RegExMatch(str, "[0-9.]*", &OutputVar, c_)
c_+=OutputVar.Len
return Number(OutputVar.0)
case "t":
;"true"
c_+=4
return {a:1}
case "f":
;"false"
c_+=5
return {a:0}
case "n":
;"null"
c_+=4
return {a:-1}
}
}
skip_s() {
RegExMatch(str, "\s*", &OutputVar, c_)
c_+=OutputVar.Len
}
}
