Aerospike Query Return Highest Value - go

I'm trying to create a query for my Aerospike database that returns the highest value in a specific bin, similar to the way the MAX() function works in MySQL. For example, if I had a set like this:
+--------------+---------+
| filename     | version |
+--------------+---------+
| alphabet.doc |       4 |
| people.doc   |       2 |
| alphabet.doc |       6 |
| people.doc   |       3 |
+--------------+---------+
What I need is to only return the filename with the highest version number. At the moment I can add a filter like this:
stmt := db.NewStatement(DBns, DBset, "filename", "version")
stmt.Addfilter(db.NewEqualFilter("filename", "alphabet.doc"))
// run database query
records := runQuery(stmt)
Anyone know how to do this?

You can apply a Lua user-defined function (UDF) to the query to filter the results efficiently.
For example, here is a Stream UDF that returns the record with the highest version number:
function maxVersion(stream, bin)
  -- The stream function cannot return record objects directly,
  -- so we have to map to a Map data type first.
  local function toArray(rec)
    local result = map()
    result['filename'] = rec['filename']
    result['version'] = rec['version']
    return result
  end
  local function findMax(a, b)
    if a.version > b.version then
      return a
    else
      return b
    end
  end
  return stream : map(toArray) : reduce(findMax)
end
Using the Go client you would execute the function like this:
stmt := NewStatement(ns, set)
recordset, _ := client.QueryAggregate(nil, stmt, "udfFilter", "maxVersion")
for rec := range recordset.Results() {
    res := rec.Record.Bins["SUCCESS"].(map[interface{}]interface{})
    fmt.Printf("filename with max. version: %s (ver. %d)\n", res["filename"], res["version"])
}
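If running a Stream UDF is not an option, you can also do the reduce step on the client after a plain query. Here is a minimal, self-contained sketch of that reduction in Go; the `fileVersion` struct and `maxVersion` helper are hypothetical names, not part of the Aerospike client API:

```go
package main

import "fmt"

// fileVersion mirrors the two bins we care about.
type fileVersion struct {
	Filename string
	Version  int
}

// maxVersion is the client-side equivalent of the UDF's reduce step:
// keep whichever record has the higher version.
func maxVersion(records []fileVersion) fileVersion {
	best := records[0]
	for _, r := range records[1:] {
		if r.Version > best.Version {
			best = r
		}
	}
	return best
}

func main() {
	recs := []fileVersion{
		{"alphabet.doc", 4},
		{"people.doc", 2},
		{"alphabet.doc", 6},
		{"people.doc", 3},
	}
	fmt.Println(maxVersion(recs)) // {alphabet.doc 6}
}
```

The trade-off is that all matching records travel over the network to the client, whereas the Stream UDF reduces them on the server.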
I've uploaded a fully working example as a Gist here: https://gist.github.com/jhecking/b98783bea7564d610ea291b5ac47808c
You can find more information about how to work with Stream UDFs for query aggregation here: http://www.aerospike.com/docs/guide/aggregation.html

Related

Generate random UUIDv4 with Elm

I'm trying to generate random UUID's v4 within a loop:
randomUuid =
    -- TODO: find a way to generate random uuid for variableId

updatedVariables =
    group.variables |> List.map (\variable -> { variable | id = randomUuid })
I read the docs of elm/random and elm/uuid but could not find how to generate a UUID without using a seed.
The only thing I could do is:
newUuid : Random.Seed -> ( String, Random.Seed )
newUuid seed =
    seed
        |> Random.step UUID.generator
        |> Tuple.mapFirst UUID.toString
I see that elm/random has an independentSeed function, but I cannot get it to generate a seed.
The node equivalent of what I'm trying to achieve with randomUuid is:
const { uuid } = require('uuidv4');
const randomUuid = uuid();
I feel like I might be missing some important concept in Elm here but cannot figure that one out on my own. Any help or pointer would be greatly appreciated.
Generating random values is an effect and as such a pure language cannot just perform it.
However, there is a pure version of randomness, which is using random seeds. These have the property that every time you generate a value using the same seed, you get the same value - hence this is just a pure computation and is completely ok in a pure context.
Elm allows you to execute effects as Cmd, the thing you return from your init and update functions. So one option you have is to always return Random.generate GotANewUUID UUID.generator before you need it, then perform your computation when you handle the GotANewUUID msg.
The other option is to keep track of the random seed. You either start with a deterministic one with Random.initialSeed (probably not what you want with UUIDs, as they would be exactly the same on every run of your program), or in your init function you return Random.generate GotNewSeed Random.independentSeed. Then you store the seed in your model.
Every time you need to generate a new UUID, you use your newUuid function above, making sure to store the new seed.
Here's an example:
import Random
import UUID

type alias Thing =
    { id : String
    -- , some other stuff
    }

type alias Model =
    { seed : Random.Seed
    , stuff : List Thing
    }

type Msg
    = GotNewSeed Random.Seed
    | AddAThing Thing
    | AddABunchOfThings (List Thing)

init : () -> ( Model, Cmd Msg )
init flags =
    ( { seed = Random.initialSeed 567876567
      -- Let's start with a deterministic seed
      -- so you don't need to deal with Maybe Seed later
      , stuff = []
      }
    , Random.generate GotNewSeed Random.independentSeed
    )

update : Msg -> Model -> ( Model, Cmd Msg )
update msg model =
    case msg of
        GotNewSeed seed ->
            ( { model | seed = seed }, Cmd.none )

        AddAThing thing ->
            let
                ( newId, newSeed ) =
                    newUuid model.seed
            in
            ( { model
                | stuff = { thing | id = newId } :: model.stuff
                , seed = newSeed
              }
            , Cmd.none
            )

        AddABunchOfThings things ->
            let
                ( newStuff, newSeed ) =
                    List.foldl
                        (\thing ( stuff, seed ) ->
                            newUuid seed
                                |> Tuple.mapFirst
                                    (\id -> { thing | id = id } :: stuff)
                        )
                        ( model.stuff, model.seed )
                        things
            in
            ( { model | stuff = newStuff, seed = newSeed }, Cmd.none )

MySQL escape string

How can I filter input from a URL param, for example
localhost:8080/v1/data/:id
I want to use a filter like mysql_real_escape_string for the id param in Golang. I can't use ? because this filter is dynamic; the param may or may not be used, as in this example:
if status != "99" {
    where = append(where, "vs.stats = '1'")
}
if cari != "" {
    where = append(where, "(lm.title_member like '%"+cari+"%' OR "+
        "lm.nama_member like '%"+cari+"%' )")
}
query := "select vs.*, lm.nama_member from volks_shift vs left join list_member lm on vs.id_m=lm.id_m where vs.id_s=?"
rows, err := s.DB.QueryContext(ctx, query, id_s)
and I want to secure the cari value without using ?
There is no escape function in the database/sql package, see related issue #18478 (and it's also not nice to invoke a mysql-specific function when using a database abstraction layer).
But it is also not needed as you can still use ? in a dynamic query. Just build the query parameters dynamically together with the query, like so:
query := "SELECT vs.*, lm.nama_member" +
    " FROM volks_shift vs LEFT JOIN list_member lm ON vs.id_m=lm.id_m" +
    " WHERE vs.id_s=?"
params := []interface{}{id_s}
if status != "99" {
    query += " AND vs.stats = '1'"
}
if cari != "" {
    query += " AND (lm.title_member LIKE ? OR lm.nama_member LIKE ?)"
    params = append(params, "%"+cari+"%", "%"+cari+"%")
}
rows, err := s.DB.QueryContext(ctx, query, params...)
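The pattern can be packaged as a small, self-contained function (the name `buildQuery` is ours, just for illustration): the query string and the placeholder values grow together, so user input only ever travels through ? parameters and never gets concatenated into SQL.

```go
package main

import "fmt"

// buildQuery assembles the SQL text and its placeholder values in lockstep.
// Every piece of user input (idS, cari) is passed as a ? parameter.
func buildQuery(idS int, status, cari string) (string, []interface{}) {
	query := "SELECT vs.*, lm.nama_member" +
		" FROM volks_shift vs LEFT JOIN list_member lm ON vs.id_m=lm.id_m" +
		" WHERE vs.id_s=?"
	params := []interface{}{idS}
	if status != "99" {
		query += " AND vs.stats = '1'"
	}
	if cari != "" {
		query += " AND (lm.title_member LIKE ? OR lm.nama_member LIKE ?)"
		params = append(params, "%"+cari+"%", "%"+cari+"%")
	}
	return query, params
}

func main() {
	query, params := buildQuery(42, "99", "foo")
	fmt.Println(query)
	fmt.Println(len(params)) // id plus the two LIKE values
}
```

The returned pair is then handed straight to QueryContext, e.g. `s.DB.QueryContext(ctx, query, params...)`.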

Parsing a URL query string into a map of parameters with XPath

What would be the most readable way to parse a URL query string into a { 'param': 'value' } map in XSLT/XPath 3.0?
Note: this is the inverse function of the one described in Building a URL query string from a map of parameters with XPath.
Update: I neglected to mention that the function should support multi-value parameters such as a=1&a=2, and ideally parse them as an xs:string* sequence.
declare namespace map = "http://www.w3.org/2005/xpath-functions/map";
let $querystring := "a=1&b=2&c=3"
return
  ( tokenize($querystring, "&")
    ! (let $param := tokenize(., "=")
       return map:entry($param[1], $param[2]))
  ) => map:merge()
In order to support multiple values, you can supply the $options parameter specifying what to do with duplicates:
declare namespace map = "http://www.w3.org/2005/xpath-functions/map";
let $querystring := "a=1&b=2&a=3"
return
  ( tokenize($querystring, "&")
    ! (let $param := tokenize(., "=")
       return map:entry($param[1], $param[2]))
  ) => map:merge(map:entry('duplicates', 'combine'))
2 more answers by Christian Grün:
let $querystring := "a=1&b=2&a=3"
return map:merge(
  for $query in tokenize($querystring, "&")
  let $param := tokenize($query, "=")
  return map:entry(head($param), tail($param)),
  map { 'duplicates': 'combine' }
)
One more solution (if you don’t wanna use the for clause):
let $querystring := "a=1&b=2&a=3"
return map:merge(
  tokenize($querystring, "&")
  ! array { tokenize(., "=") }
  ! map:entry(.(1), .(2)),
  map { 'duplicates': 'combine' }
)
Let's see: substring to get the part after the ? and strip any trailing #... fragment identifier;
then tokenize on [;&] to get the name=value pairs, which are separated by & or (less commonly) by ;
then substring-before and substring-after, or tokenize again, to split each pair at the = into name and value;
then uridecode the name and the value separately.
let $query := substring-after($uri, '?'),
    $beforefrag := substring-before($query || '#', '#')
return
  tokenize($beforefrag, '[;&]')
  ! [ substring-before(., '='), substring-after(., '=') ]
  ! map:entry(local:uridecode(.(1)), local:uridecode(.(2)))
might give us a sequence of map entries, and we can use map:merge on that.
If we know our input is plausibly encoded, we could use
declare function local:uridecode($input as xs:string?) as xs:string?
{
  parse-xml-fragment(replace($input, '%(..)', '&amp;#x$1;'))
};
but a better version would just replace the two hex characters. It's really unfortunate we don't have a version of replace() that takes a function argument to be called for each matching subexpression, à la Perl's e flag.
and of course you can put that into
(...) => map:merge()

Loop through records in datatable in formula field in Crystal Report

I have a datatable attached to my Crystal Report with the following structure:
TypeId
TypeName
I want to display TypeName in the GroupHeaderSection based on a condition. For example:
if TypeId = 1 then display hans
if TypeId = 2 then display MNHS
I tried the following formula to display records from this datatable
WhilePrintingRecords;
Local NumberVar result := -1;
Local NumberVar i := 1;
Local StringVar inString := "";
While i <= 5 And result = -1 Do
(
    // inString := IIF({DTPMS_RptLocationTr.LocationTypeId} = 1,{DTPMS_RptLocationTr.LocationTypeName},"")
    If {DTPMS_RptLocationTr.LocationTypeId} = 5 Then
        inString := {DTPMS_RptLocationTr.LocationTypeName};
    i := i + 1;
);
inString
Any suggestion on how to solve this?
I found how to solve my issue.
First I changed the way records are returned from the database. I returned the data like this:
Type1 | Type2 | Type3
======================
hans  | MNHS  | nhues
so now I can bind data directly from the datatable to the report header.

Determine country from IP - IPv6

In my project, I have a function in postgres (plpgsql) that determines country from a given ip address:
CREATE OR REPLACE FUNCTION get_country_for_ip(character varying)
  RETURNS character varying AS
$BODY$
declare
  ip ALIAS for $1;
  ccode varchar;
  cparts varchar[];
  nparts bigint[];
  addr bigint;
begin
  cparts := string_to_array(ip, '.');
  if array_upper(cparts, 1) <> 4 then
    raise exception 'gcfi01: Invalid IP address: %', ip;
  end if;
  nparts := array[a2i(cparts[1])::bigint, a2i(cparts[2])::bigint, a2i(cparts[3])::bigint, a2i(cparts[4])::bigint];
  if (nparts[1] is null or nparts[1] < 0 or nparts[1] > 255 or
      nparts[2] is null or nparts[2] < 0 or nparts[2] > 255 or
      nparts[3] is null or nparts[3] < 0 or nparts[3] > 255 or
      nparts[4] is null or nparts[4] < 0 or nparts[4] > 255) then
    raise exception 'gcfi02: Invalid IP address: %', ip;
  end if;
  addr := (nparts[1] << 24) | (nparts[2] << 16) | (nparts[3] << 8) | nparts[4];
  addr := nparts[1] * 256 * 65536 + nparts[2] * 65536 + nparts[3] * 256 + nparts[4];
  select into ccode t_country_code from ip_to_country where addr between n_from and n_to limit 1;
  if ccode is null then
    ccode := '';
  end if;
  return ccode;
end;$BODY$
  LANGUAGE plpgsql VOLATILE
  COST 100;
This may not be the most efficient, but it does the job. Note that it uses an internal table (ip_to_country), which contains data as below (the numbers n_from and n_to are the long values of the start and end of the address ranges):
  n_from  |   n_to   | t_country_code
----------+----------+----------------
        0 | 16777215 | ZZ
 16777216 | 16777471 | AU
...
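The long-value conversion that the plpgsql function performs can be sketched compactly in Go; `ipToLong` is a hypothetical helper name, but the arithmetic is the same shift-and-or used above:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// ipToLong converts a dotted-quad IPv4 address to the integer form
// used by the n_from/n_to columns of the ip_to_country table.
func ipToLong(ip string) (uint32, error) {
	parts := strings.Split(ip, ".")
	if len(parts) != 4 {
		return 0, fmt.Errorf("invalid IP address: %s", ip)
	}
	var addr uint32
	for _, p := range parts {
		n, err := strconv.Atoi(p)
		if err != nil || n < 0 || n > 255 {
			return 0, fmt.Errorf("invalid IP address: %s", ip)
		}
		// Same as (a << 24) | (b << 16) | (c << 8) | d, one octet at a time.
		addr = addr<<8 | uint32(n)
	}
	return addr, nil
}

func main() {
	n, _ := ipToLong("1.0.0.0")
	fmt.Println(n) // 16777216, the start of the AU range above
}
```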
Now we are starting to look at the IPv6 addressing as well - and I need to add similar functionality for IPv6 addresses. I have a similar set of data for IPv6, which looks like this:
   t_start   |                  t_end                  | t_country_code
-------------+-----------------------------------------+----------------
 ::          | ff:ffff:ffff:ffff:ffff:ffff:ffff:ffff   | ZZ
 100::       | 1ff:ffff:ffff:ffff:ffff:ffff:ffff:ffff  | ZZ
...
 2000::      | 2000:ffff:ffff:ffff:ffff:ffff:ffff:ffff | ZZ
...
 2001:1200:: | 2001:1200:ffff:ffff:ffff:ffff:ffff:ffff | MX
...
Now, given an IP address ::1, how do I (1) check that it's a valid IPv6 address and (2) get the corresponding country mapping?
I believe I found the solution. It involves modifying the data first and then some massaging of the input. Here's what worked.
First, the data needs to be converted so that all addresses are in full, unshortened form, with the colon separators removed. The sample data shown in my question is converted to:
t_start | t_end | t_country_code
----------------------------------+----------------------------------+----------------
00000000000000000000000000000000 | 00ffffffffffffffffffffffffffffff | ZZ
01000000000000000000000000000000 | 01ffffffffffffffffffffffffffffff | ZZ
...
20000000000000000000000000000000 | 2000ffffffffffffffffffffffffffff | ZZ
...
20011200000000000000000000000000 | 20011200ffffffffffffffffffffffff | MX
...
This is what is stored in the database.
The next step was to convert the IP address received in the code to be in the same format. This is done in PHP with the following code (assume that $ip_address is the incoming IPv6 address):
$addr_bin = inet_pton($ip_address);
$bytes = unpack('n*', $addr_bin);
$ip_address = implode('', array_map(function ($b) {return sprintf("%04x", $b); }, $bytes));
Now the variable $ip_address will contain the full IPv6 address, for example
:: => 00000000000000000000000000000000
2001:1200::ab => 200112000000000000000000000000ab
and so on.
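The same expansion can be done in Go with the standard library, should the calling code live there instead of PHP; `expandIPv6` is our illustrative name for the helper:

```go
package main

import (
	"encoding/hex"
	"fmt"
	"net"
)

// expandIPv6 returns the full 32-hex-digit form of an IPv6 address,
// matching the format stored in the ipv6_to_country table.
func expandIPv6(s string) (string, error) {
	ip := net.ParseIP(s)
	// Reject unparseable input and plain IPv4 addresses.
	if ip == nil || ip.To4() != nil {
		return "", fmt.Errorf("invalid IPv6 address: %s", s)
	}
	return hex.EncodeToString(ip.To16()), nil
}

func main() {
	full, _ := expandIPv6("2001:1200::ab")
	fmt.Println(full) // 200112000000000000000000000000ab
}
```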
Now you can simply compare this full address with the ranges in the database. I added a second function to the database to deal with IPv6 addresses, which looks like this:
CREATE OR REPLACE FUNCTION get_country_for_ipv6(character varying)
  RETURNS character varying AS
$BODY$
declare
  ip ALIAS for $1;
  ccode varchar;
begin
  select into ccode t_country_code from ipv6_to_country where ip between t_start and t_end limit 1;
  if ccode is null then
    ccode := '';
  end if;
  return ccode;
end;$BODY$
  LANGUAGE plpgsql VOLATILE
  COST 100;
Finally, in my php code I added the code that calls one or the other Postgres function based on the input ip_address.
First, I see a couple of things you are doing that will pose problems. The first is the use of varchar and long to represent IP addresses when PostgreSQL has perfectly valid INET and CIDR types that will do what you want, only better and faster. Note that these do not support GIN indexing properly at present, so you can't do exclusion constraints on them. If you need that, look at the ip4r extension, which does support this.
Note as a patch for now you can cast your varchar to inet. Inet also supports both ipv4 and ipv6 addresses as does cidr, and similar types exist on ip4r.
This will solve the ipv6 validation issue for you, and likely cut down on your storage as well as provide better operational checks and better performance.
As for countries, I am also thinking that the mappings may not be so straight-forward.
