When I make a search query with bleve I always get 10 or fewer results. How can I increase this limit or search across all the data?
Here is a code example where I expect to get 35 results but get 10 (note: the calc.Rand() function returns random bytes):
package search

import (
	"testing"

	"github.com/blevesearch/bleve/v2"
)

func TestSearcher(t *testing.T) {
	mapping := bleve.NewIndexMapping()
	searcher, _ := bleve.New("search/src", mapping)
	for i := 0; i < 35; i++ {
		searcher.Index(string(calc.Rand()), "stuff")
	}
	query := bleve.NewMatchQuery("stuff")
	search := bleve.NewSearchRequest(query)
	searchRez, _ := searcher.Search(search)
	t.Error(len(searchRez.Hits))
}
Result I get:
--- FAIL: TestSearcher (2.33s)
/Users/ ... /search/search_test.go:86: 10
Result I expect:
--- FAIL: TestSearcher (2.33s)
/Users/ ... /search/search_test.go:86: 35
How do I access all 35 values stored by this index?
Set the Size field on the SearchRequest:
Size/From describe how much and which part of the result set to return.
search := bleve.NewSearchRequest(query)
search.Size = 35 // or the page size you want
searchRez, _ := searcher.Search(search)
fmt.Println(len(searchRez.Hits)) // prints 35
Or, for pagination, increase the From field incrementally by the page size. A contrived example:
count, _ := searcher.DocCount()
for i := 0; i < int(count); i += 10 /* page size */ {
	search.From = i
	searchRez, _ := searcher.Search(search)
	fmt.Println(len(searchRez.Hits)) // prints 10 10 10 5
}
With the following sample, I can add a new column that is derived from the row values.
It works well.
package main

import (
	"fmt"
	"strings"

	"github.com/go-gota/gota/dataframe"
	"github.com/go-gota/gota/series"
)

func main() {
	csvStr := `accountId,deposit,Withdrawals
anil0001,50,10
vikas0002,10,10
ravi0003,20,10
user1111,NaN,20`
	df := dataframe.ReadCSV(strings.NewReader(csvStr))
	// Within a row, elements are indexed by their column index.
	indexDeposit := 1
	indexWithdrawals := 2
	// Rapply reads the data by rows.
	// You can access each element of the row using
	// s.Elem(index) or s.Val(index).
	// To browse by columns, use Capply.
	s := df.Rapply(func(s series.Series) series.Series {
		deposit, err := s.Elem(indexDeposit).Int()
		if err != nil {
			return series.Ints("NAN")
		}
		withdrawal, err := s.Elem(indexWithdrawals).Int()
		if err != nil {
			return series.Ints("NAN")
		}
		return series.Ints(deposit - withdrawal)
	})
	// The new series is appended to
	// the data source via a call to Mutate.
	// You can print s to read its content.
	df = df.Mutate(s.Col("X0")).
		Rename("deposit_Withdrawals_diff", "X0")
	fmt.Println(df)
}
But the question is: I want to add an index (row counter) to each row (later on I want to join it with a subset of the data), so I need an index.
Something like:
index,accountId,deposit,Withdrawals
1,anil0001,50,10
2,vikas0002,10,10
3,ravi0003,20,10
4,user1111,NaN,20
I see there are no GetIndex or Index methods on series. How can I add this index?
I did it with a counter variable captured by the closure (I'm not sure it's the best solution for gota, but for a pure Go developer it may be good enough :) ):
index := 0
s := df.Rapply(func(s series.Series) series.Series {
	index++
	return series.Ints(index)
})
df = df.Mutate(s.Col("X0")).
	Rename("index", "X0")
fmt.Println(df)
I solved the Sales by Match problem this way:
package main

import (
	"fmt"
)

func main() {
	var amount int
	_, _ = fmt.Scanf("%d", &amount)
	pairs := 0
	set := make(map[int]bool)
	for i := 0; i < amount; i++ {
		var number int
		_, _ = fmt.Scanf("%d", &number)
		if set[number] {
			set[number] = false
			pairs++
		} else {
			set[number] = true
		}
	}
	println(pairs)
}
I tested it with the following input:
9
10 20 20 10 10 30 50 10 20
Here's the result:
So, as you can see, everything works fine locally. But when I run the tests, they fail.
I don't understand why they don't pass. Can anyone explain what's wrong with my solution? Thanks in advance; I would appreciate any help.
Change println(pairs) to fmt.Print(pairs), because the builtin println writes to stderr, and HackerRank reads the result from stdout.
I'm trying to understand why my code in Go doesn't work the way I thought it would. When I execute this test, it fails:
func TestConversion(t *testing.T) {
	type myType struct {
		a     uint8
		value uint64
	}
	myVar1 := myType{a: 1, value: 12345}
	var copyFrom []byte
	copyFromHeader := (*reflect.SliceHeader)(unsafe.Pointer(&copyFrom))
	copyFromHeader.Data = uintptr(unsafe.Pointer(&myVar1))
	copyFromHeader.Cap = 9
	copyFromHeader.Len = 9
	copyTo := make([]byte, len(copyFrom))
	for i := range copyFrom {
		copyTo[i] = copyFrom[i]
	}
	myVar2 := (*myType)(unsafe.Pointer(&copyFrom[0]))
	myVar3 := (*myType)(unsafe.Pointer(&copyTo[0]))
	if myVar2.value != myVar3.value {
		t.Fatalf("Expected myVar3.value to be %d, but it is %d", myVar2.value, myVar3.value)
	}
}
The output will be:
slab_test.go:67: Expected myVar3.value to be 12345, but it is 57
However, if I increase copyFromHeader.Data by 1 before copying the data, then it all works fine, like this:
copyFromHeader.Data = uintptr(unsafe.Pointer(&myVar1)) + 1
I don't understand why it seems to shift the underlying data by one byte.
There are 7 padding bytes between a and value, because value must be 8-byte aligned. With a length of 9, your slice covers a, the padding, and only the least significant byte of value, which for 12345 (0x3039) is 57 (0x39). When you move copyFrom down by one byte, myVar2.value and myVar3.value are both read from the same shifted bytes (starting with 48, the second byte of 12345), so the comparison passes by accident. It should work if you change 9 to 16, i.e. unsafe.Sizeof(myType{}).
Is there some particular reason you're copying the struct that way?
I want to read a slice and append new data to it. I have a web API in JSON format for reading statistics. I want to grab the statistics, pass them to a function, and write a new slice containing the existing stats from the JSON.
Then I want to append new data at the end of it, and return the "new" slice (struct) with both the old statistics from the JSON and the new data together.
I've tried a lot; this is where I am.
This is what the JSON looks like:
{"24hreward":0,"currentHashrate":0,"hashrate":0,"history":[{"currenthashrate":6024360000,"online":0,"offline":0,"timestamp":0}],"pageSize":30,"payments":null,"paymentsTotal":0,"rewards":[{"blockheight":223115,"timestamp":1518179084,"blockhash":"0x*************ee03802ee52f411102159a7c9268fec4e46571daa07e84","reward":6024360000,"percent":1,"immature":false}],"roundShares":3674,"stats":{"balance":6024360000,"blocksFound":1,"immature":0,"lastShare":1518208862},"sumrewards":[{"inverval":3600,"reward":0,"name":"Last 60 minutes","offset":0},{"inverval":43200,"reward":0,"name":"Last 12 hours","offset":0},{"inverval":86400,"reward":0,"name":"Last 24 hours","offset":0},{"inverval":604800,"reward":0,"name":"Last 7 days","offset":0},{"inverval":2592000,"reward":6024360000,"name":"Last 30 days","offset":0}],"workers":{},"workersOffline":0,"workersOnline":0,"workersTotal":0}
This is the struct:
type HistoryData struct {
	CurrentHashrate int64 `json:"currenthashrate"`
	Online          int64 `json:"online"`
	Offline         int64 `json:"offline"`
	Timestamp       int64 `json:"timestamp"`
}
There you can see "history". I want to read the history and pass it to my function here:
cmds, err := tx.Exec(func() error {
	tx.ZRemRangeByScore(r.formatKey("hashrate", login), "-inf", fmt.Sprint("(", now-largeWindow))
	tx.ZRangeWithScores(r.formatKey("hashrate", login), 0, -1)
	tx.LRange(r.formatKey("lastshares"), 0, 4999)
	tx.ZRevRangeWithScores(r.formatKey("rewards", login), 0, 39)
	tx.ZRevRangeWithScores(r.formatKey("rewards", login), 0, -1)
	tx.ZRevRangeWithScores(r.formatKey("history", login), 0, 64)
	return nil
})
stats["history"] = convertHistoryResults(currentHashrate, online, offline, now, cmds[5].(*redis.ZSliceCmd))
There I want to pass currentHashrate, online, offline, and now into convertHistoryResults, together with cmds[5] from above, but cmds[5] comes back empty. The JSON shows data, but it seems it cannot be read. That's my problem.
And here is the function convertHistoryResults that builds the list for "history" in the JSON:
var count = 0

func convertHistoryResults(currenthashrate int64, online int64, offline int64, now int64, rows ...*redis.ZSliceCmd) []*HistoryData {
	var result []*HistoryData
	log.Println("run1")
	log.Println(count)
	log.Println(rows)
	for _, row := range rows {
		for _, v := range row.Val() {
			log.Println("run3")
			// Allocate a fresh struct per row: reusing one struct and
			// appending &history repeatedly would make every slice
			// entry point to the same value.
			history := HistoryData{}
			fields := strings.Split(v.Member.(string), ":")
			history.CurrentHashrate, _ = strconv.ParseInt(fields[0], 10, 64)
			history.Online, _ = strconv.ParseInt(fields[1], 10, 64)
			history.Offline, _ = strconv.ParseInt(fields[2], 10, 64)
			history.Timestamp, _ = strconv.ParseInt(fields[3], 10, 64)
			result = append(result, &history)
		}
	}
	if count > 0 { // Count = 40 ~4-6min
		log.Println("run2")
		history := HistoryData{
			CurrentHashrate: currenthashrate,
			Online:          online,
			Offline:         offline,
			Timestamp:       now,
		}
		result = append(result, &history)
		//if len(result) > 300 {
		//	result = result[1:]
		//}
		count = 0
	} else {
		log.Println("run4")
		count++
	}
	return result
}
Here I build new HistoryData values: I read the "old" rows and append them, then add the new data gated by count, because I don't want to do this every second.
But my problem is that I cannot read the old "history" from the JSON, and I don't know why.
With "rewards" it works. The code is from https://github.com/sammy007/open-ethereum-pool.
What I tried is to read the "history", pass it to convertHistoryResults to build a new history, and append that back to "history".
I hope someone can help me get the old "history" and rebuild the new "history".
This script is run by many different users, so I must rebuild the "history" for each one running it.
This is the output in the log:
[ZREVRANGE miner:history:0x******************** 0 64 WITHSCORES: []]
You see, WITHSCORES is [].
I also tried it for "rewards", which looks like this:
[ZREVRANGE miner:rewards:0x******************** 0 64 WITHSCORES: [{1.518179084e+09 6024360000:1.000000000:0:0x**********************159a7c9268fec4e46571daa07e84:223115:1518179084}]]
When the RSS feed updates (it doesn't right now, just dummy data), the new items are appended to the feed slice. Over time this could mean it contains millions of items, and I don't want that.
So when there are more than 100 items in the slice, it should delete items starting from the top (item 0). In this example I'm using an RSS file with just 100 items, so the sample code below should delete from the top after 50 items:
package main

import (
	"fmt"
	"time"

	"github.com/SlyMarbo/rss"
)

var feed *rss.Feed
var directory = "./dump"

func main() {
	for {
		checkRSS()
		// Check every minute if feed.Refresh has passed so it runs update()
		time.Sleep(1 * time.Minute)
	}
}

func checkRSS() (*rss.Feed, error) {
	var err error
	// If feed is still empty, fetch it first so we can run update()
	if feed == nil {
		feed, err = rss.Fetch("http://cloud.dgier.nl/api.xml")
	} else {
		err = feed.Update()
	}
	length := len(feed.Items)
	for key, value := range feed.Items {
		fmt.Println(key, value.Title, value.Read)
		if key >= 50 {
			fmt.Println("Item key is > 50")
		}
	}
	fmt.Printf("Current length: %d\n", length)
	fmt.Printf("Refreshing at %s\n", feed.Refresh)
	return feed, err
}
If the number of items in the feed grows over the limit, slice it:
length := len(feed.Items)
if length > limit {
	feed.Items = feed.Items[length-limit:]
}
When the length is over the limit, the new length will be exactly limit.
You don't need a for loop there.
To achieve this you probably want to use subslicing. Say you want to remove x items from feed; you can simply do feed = feed[x:], which yields all items after index x-1 and assigns the result back to the feed slice. If in your actual code you just want to remove the first item, it would be feed = feed[1:].