How to dynamically add a column to container.NewGridWithColumns() - go

How can I add an extra column to a container.NewGridWithColumns in Go/Fyne?
I have a number of items (containers) rendered in a column grid. Then, via a dialog, I want to add an item. The problem is that I cannot find a way to extend the original grid.
My code:
func main() {
	a := app.New()
	w = a.NewWindow("myApp")
	window = container.NewBorder(toolbar(), footer(), nil, nil, content())
	w.SetContent(window)
	w.ShowAndRun()
}
func content() *fyne.Container {
	top := topRow()
	bottom := bottomRow()
	return container.NewGridWithRows(2, top, bottom)
}
var items []*fyne.Container

func bottomRow() *fyne.Container {
	items = nil
	db := sql.NewDB()
	list, err := db.List()
	if err != nil {
		// handle error
	}
	for _, l := range list {
		items = append(items, renderChart(l))
	}
	ct = container.NewGridWithColumns(len(items))
	for _, item := range items {
		ct.Add(item)
	}
	return ct
}
func dlgAdd() {
	entry := widget.NewEntry()
	entry.PlaceHolder = "name"
	e := container.NewGridWithRows(2, entry)
	d := dialog.NewCustomConfirm(
		"Add Item",
		"Add",
		"Cancel",
		e,
		func(v bool) {
			if !v {
				// cancelled
				return
			}
			if entry.Text == "" {
				// no input
				return
			}
			// write entry.Text to the db
			db := sql.NewDB()
			err := db.AddItem(entry.Text)
			if err != nil {
				return
			}
			// report success
			i := dialog.NewInformation("Success", fmt.Sprintf("Item %s added", entry.Text), w)
			i.Show()
			i.SetOnClosed(func() {
				// Here is the problem: how to add an extra column to the container ct,
				// and then add the item from the dialog to the new column.
				// This will not work:
				ct = container.NewGridWithColumns(len(ct.Objects) + 1)
				items = append(items, renderChart(entry.Text))
				for _, item := range items {
					ct.Add(item)
				}
				// This will also not work:
				bottomRow()
				w.Canvas().Refresh(window)
			})
		}, w)
	d.Show()
}
I am really stuck here.

I found the solution:
ct.Add(newObject)
ct.Layout = (container.NewGridWithColumns(len(ct.Objects))).Layout
ct.Refresh()

Yes, Container.Add is the solution; you just needed to save a reference to the Container instead of creating a new one.
Depending on the layout it may not always be necessary to change it. For example, GridWithRows(...) will keep adding columns as the number of items increases (because the row count is fixed).

Related

Handling query results in Golang

Consider the following method which executes a gremlin query in Go and then interprets or parses the results.
func (n NeptuneGremlinGraph) Query(assetID string, version string, entityID string) ([]hz.Component, error) {
	defer utils.TimeTracker(time.Now(), fmt.Sprintf("Graph Query"))
	g := gremlin.Traversal_().WithRemote(n.connection)
	anonT := gremlin.T__
	results, err := g.V(makeId(assetID, version, entityID)).
		Repeat(anonT.As("a").InE().OutV().SimplePath()).
		Emit().Until(anonT.OutE().Count().Is(0)).
		Filter(anonT.HasLabel("Component")).
		Project("entity", "component").
		By(anonT.Select("a").ElementMap()).
		By(anonT.ElementMap()).
		ToList()
	if err != nil {
		return nil, err
	}
	cnt := 0
	for _, r := range results {
		var entityID, componentID, value string
		if m, ok := r.Data.(map[any]any); ok {
			if entity, ok := m["entity"]; ok {
				if entity, ok := entity.(map[any]any); ok {
					if id, ok := entity["id"]; ok {
						if id, ok := id.(string); ok {
							_, _, entityID = splitId(id)
						}
					}
				}
			}
			if component, ok := m["component"]; ok {
				if component, ok := component.(map[any]any); ok {
					if id, ok := component["component_id"]; ok {
						if id, ok := id.(string); ok {
							componentID = id
						}
					}
					if v, ok := component["value"]; ok {
						if v, ok := v.(string); ok {
							value = v
						}
					}
				}
			}
			log.Printf("%s, %s, %s\n", entityID, componentID, value)
		} else {
			log.Printf("not a map\n")
		}
	}
	log.Printf("path cnt = %d\n", cnt)
	return nil, nil
}
Obviously I could add helper methods to clean up the query processing code. But either way the query processing code has to deal with multiple layers of map[any]any and any values.
Am I missing some methods in the driver Result object that make this easier?
The Go GLV does not have any built-in tools to assist in traversing maps. I would suggest not using the ElementMap() step if you do not need the full map. Since it appears the only data you need is the id of both “entity” and “component”, plus the component value, you could simplify the result by using a traversal which selects only those items instead of the full element maps. The following is an example from the Gremlin Console doing something similar against a sample dataset:
gremlin> g.V().repeat(__.as("a").inE().outV().simplePath()).emit().until(__.outE().count().is(0)).filter(__.hasLabel("person")).project("entityID", "personID", "personValue").by(__.select("a").id()).by(__.id()).by(__.values()).toList()
==>{entityID=2, personID=1, personValue=marko}
==>{entityID=3, personID=1, personValue=marko}
==>{entityID=3, personID=4, personValue=josh}
==>{entityID=3, personID=6, personValue=peter}
==>{entityID=4, personID=1, personValue=marko}
==>{entityID=4, personID=1, personValue=marko}
==>{entityID=5, personID=4, personValue=josh}
==>{entityID=4, personID=1, personValue=marko}
This cleans things up, but is obviously not safe, and could lead to panics.
for _, r := range results {
	var entityID, componentID, value string
	if m, ok := r.Data.(map[any]any); ok {
		_, _, entityID = splitId(m["entity"].(map[any]any)["id"].(string))
		componentID = m["component"].(map[any]any)["component_id"].(string)
		value = m["component"].(map[any]any)["value"].(string)
		components = append(components, hz.Component{
			EntityID:    entityID,
			ComponentID: componentID,
			Value:       value,
		})
	} else {
		log.Printf("not a map\n")
	}
}
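If you do stay with ElementMap(), the repeated assertion ladders can at least be collapsed into one small generic helper. This is only a sketch using the standard library; the dig helper below is my own invention, not part of the gremlin-go driver:

```go
package main

import "fmt"

// dig walks nested map[any]any values along the given keys and
// asserts the final value to T. ok is false if any key is missing
// or any type assertion fails, so callers never panic.
func dig[T any](m map[any]any, keys ...any) (T, bool) {
	var zero T
	cur := any(m)
	for _, k := range keys {
		mm, ok := cur.(map[any]any)
		if !ok {
			return zero, false
		}
		cur, ok = mm[k]
		if !ok {
			return zero, false
		}
	}
	v, ok := cur.(T)
	if !ok {
		return zero, false
	}
	return v, true
}

func main() {
	// Shape mimicking one projected result row.
	row := map[any]any{
		"entity":    map[any]any{"id": "asset-1/v1/e42"},
		"component": map[any]any{"component_id": "c7", "value": "red"},
	}
	id, ok := dig[string](row, "entity", "id")
	fmt.Println(id, ok) // asset-1/v1/e42 true
	cid, _ := dig[string](row, "component", "component_id")
	val, _ := dig[string](row, "component", "value")
	fmt.Println(cid, val) // c7 red
	_, ok = dig[string](row, "component", "missing")
	fmt.Println(ok) // false
}
```

The three lookups from the question then become one-liners that cannot panic, because a failed assertion simply yields ok == false.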

gob decoder returns only first element in the array

So I was trying to create a mock DB. In the current implementation I have insert and select: insert adds rows, and select returns them. I decided to use a bytes.Buffer as a block of memory I could serialize a slice of rows into, and deserialize when I call select, but select just returns the first row instead of all the rows in the slice.
main.go
func main() {
	inputBuffer := compiler.NewInputBuffer()
	scanner := bufio.NewScanner(os.Stdin)
	for {
		PrintPrompt()
		scanner.Scan()
		command := scanner.Text()
		inputBuffer.Buffer = command
		if strings.HasPrefix(inputBuffer.Buffer, ".") {
			switch compiler.DoMetaCommand(inputBuffer) {
			case compiler.MetaCommandSuccess:
				continue
			case compiler.MetaCommandUnrecognizedCommand:
				fmt.Printf("Unrecognized command %q \n", inputBuffer.Buffer)
				continue
			}
		}
		var statement compiler.Statement
		switch compiler.PrepareStatement(inputBuffer, &statement) {
		case compiler.PrepareSuccess:
		case compiler.PrepareUnrecognizedStatement:
			fmt.Printf("Unrecognized command at start of %q \n", inputBuffer.Buffer)
			continue
		case compiler.PrepareSyntaxError:
			fmt.Println("Syntax error. Could not parse statement.")
			continue
		}
		compiler.ExecuteStatement(statement)
		fmt.Println("Executed")
	}
}

func PrintPrompt() {
	fmt.Printf("db > ")
}
Above is the code responsible for collecting user input etc.
package compiler

import (
	"bytes"
	"encoding/gob"
	"fmt"
	"log"
	"os"
	"strconv"
	"strings"
)

type Row struct {
	ID       int32
	Username string
	Email    string
}

type Statement struct {
	RowToInsert Row
	Type        StatementType
}

var (
	RowsTable       = make([]Row, 0)
	RowsTableBuffer bytes.Buffer
)

func DoMetaCommand(buffer InputBuffer) MetaCommandResult {
	if buffer.Buffer == ".exit" {
		os.Exit(0)
	} else {
		return MetaCommandUnrecognizedCommand
	}
	return MetaCommandSuccess
}

func PrepareStatement(buffer InputBuffer, statement *Statement) PrepareResult {
	if len(buffer.Buffer) > 6 {
		bufferArguments := strings.Fields(buffer.Buffer)
		if bufferArguments[0] == "insert" {
			statement.Type = StatementInsert
			if len(bufferArguments) < 4 {
				return PrepareSyntaxError
			}
			i, err := strconv.Atoi(bufferArguments[1])
			if err != nil {
				log.Printf("%q is not a valid id\n", bufferArguments[1])
				return PrepareSyntaxError
			}
			statement.RowToInsert.ID = int32(i)
			statement.RowToInsert.Username = bufferArguments[2]
			statement.RowToInsert.Email = bufferArguments[3]
			RowsTable = append(RowsTable, statement.RowToInsert)
			return PrepareSuccess
		}
	}
	if buffer.Buffer == "select" {
		statement.Type = StatementSelect
		return PrepareSuccess
	}
	return PrepareUnrecognizedStatement
}

func ExecuteStatement(statement Statement) {
	switch statement.Type {
	case StatementInsert:
		SerializeRow(RowsTable)
	case StatementSelect:
		DeserializeRow()
	}
}
The code above is for parsing the input and appending the entries into statements; depending on the keyword it's either an insert or a select. [I took the code defining the enums out and left the core logic.]
func SerializeRow(r []Row) {
	encoder := gob.NewEncoder(&RowsTableBuffer)
	err := encoder.Encode(r)
	if err != nil {
		log.Println("encode error:", err)
	}
}

func DeserializeRow() {
	var rowsBuffer = RowsTableBuffer
	rowsTable := make([]Row, 0)
	decoder := gob.NewDecoder(&rowsBuffer)
	err := decoder.Decode(&rowsTable)
	if err != nil {
		log.Println("decode error:", err)
	}
	fmt.Println(rowsTable)
}
So the code above uses a global buffer into which the slice being appended to in PrepareStatement() is encoded after each insert. A select ought to return the slice of all rows, but it just returns the first element for some reason.
Example (in terminal):
If I make 2 inserts:
db > insert 1 john c#mail.com
db > insert 2 collins k#mail.com
Then I make a select:
select
=> it returns [{1 john c#mail.com}] only.
Is there anything I am missing here? Thanks for your support.
So the answer was pretty simple. We were creating a new encoder on every SerializeRow call instead of creating it once; each new encoder starts an independent gob stream in the same buffer, so a single Decode only returned the first encoded value. We pulled the encoder and decoder out of the functions and created them once as globals, encoding a single Row per insert.
var (
	encoder = gob.NewEncoder(&RowsTableBuffer)
	decoder = gob.NewDecoder(&RowsTableBuffer)
)

func SerializeRow(r Row) {
	err := encoder.Encode(r)
	if err != nil {
		log.Println("encode error:", err)
	}
}

func DeserializeRow() {
	var row Row
	for {
		err := decoder.Decode(&row)
		if err != nil {
			// io.EOF once the buffer is drained; anything else is a real error
			if err != io.EOF {
				log.Println("decode error:", err)
			}
			return
		}
		fmt.Printf("%d %s %s\n", row.ID, row.Username, row.Email)
	}
}

Golang for each filtering into a new var

I'm working with a for-each loop over a var of information, filtering it by (a) regexp.MatchString and (b) a time comparison. The filtering works well and I have the data I need, but currently I'm outputting it to the screen via fmt.Println inside the loop. My goal is to take that data and build another var with the now-filtered list. I guess I need to make a new variable and append to it? But how do I return that as something I can use later?
Any assistance is appreciated.
for _, thing := range things {
	if thing.element1 != nil {
		matched, err := regexp.MatchString(z, element1)
		if err != nil {
			fmt.Println(err)
		}
		if matched {
			if timecomparrison(element2, a) {
				// this is the section that needs to go into a new var and be returned
				fmt.Println("****")
				fmt.Println("element1:", element1)
				fmt.Println("element2:", element2)
			}
		}
	}
}
I think you need something like this.
type Thing struct {
	element1 string
	element2 string
}

func filter() []Thing {
	things := []Thing{
		{element1: "element1", element2: "element2"},
	}
	var result []Thing
	regex := `\d+` // placeholder pattern
	date := time.Now()
	for _, thing := range things {
		if thing.element1 != "" { // a string can't be compared to nil
			matched, err := regexp.MatchString(regex, thing.element1)
			if err != nil {
				fmt.Println(err)
			}
			if matched {
				if timeComparison(thing.element2, date) {
					// keep the item in the result instead of only printing it
					fmt.Println("****")
					fmt.Println("element1:", thing.element1)
					fmt.Println("element2:", thing.element2)
					result = append(result, thing)
				}
			}
		}
	}
	return result
}
I cleaned up the code, added a type and some data, fixed some issues, and renamed some things, but you should get the idea :)

How to make goroutines work with anonymous functions returning value in a loop

I am working on a custom script to fetch data from a RackSpace Cloud Files container and build a list of all the files in a given container (the container has around 100 million files). I have been working on parallelizing the code and am currently stuck.
// function to read data from the channel and display it
// currently just displaying, but there will be a lot of processing done on this data
func extractObjectItemsFromList(objListChan <-chan []string) {
	fmt.Println("ExtractObjectItemsFromList")
	for _, c := range <-objListChan {
		fmt.Println(urlPrefix, c, "\t", count)
	}
}
func main() {
	// fetching data using flags
	ao := gophercloud.AuthOptions{
		Username: *userName,
		APIKey:   *apiKey,
	}
	provider, err := rackspace.AuthenticatedClient(ao)
	client, err := rackspace.NewObjectStorageV1(provider, gophercloud.EndpointOpts{
		Region: *region,
	})
	if err != nil {
		logFatal(err)
	}
	// We have the option of filtering objects by their attributes
	opts := &objects.ListOpts{
		Full:   true,
		Prefix: *prefix,
	}
	var objectListChan = make(chan []string)
	go extractObjectItemsFromList(objectListChan)
	// Retrieve a pager (i.e. a paginated collection)
	pager := objects.List(client, *containerName, opts)
	// Not working
	// By default EachPage contains 10000 records
	// Define an anonymous function to be executed on each page's iteration
	lerr := pager.EachPage(func(page pagination.Page) (bool, error) {
		// Get a slice of objects.Object structs
		objectList, err := objects.ExtractNames(page)
		if err != nil {
			logFatal(err)
		}
		for _, o := range objectList {
			_ = o
		}
		objectListChan <- objectList
		return true, nil
	})
	if lerr != nil {
		logFatal(lerr)
	}
	//---------------------------------------------------
	// below code is working
	//---------------------------------------------------
	// working, but only inside the loop; this keeps fetching new pages and showing new records, 10000 per page
	// By default EachPage contains 10000 records
	// Define an anonymous function to be executed on each page's iteration
	lerr := pager.EachPage(func(page pagination.Page) (bool, error) {
		// Get a slice of objects.Object structs
		objectList, err := objects.ExtractNames(page)
		if err != nil {
			logFatal(err)
		}
		for _, o := range objectList {
			fmt.Println(o)
		}
		return true, nil
	})
	if lerr != nil {
		logFatal(lerr)
	}
The first 10000 records are displayed, but then it gets stuck and nothing happens. If I do not use the channel and just run the plain loop, it works perfectly fine, which defeats the purpose of parallelizing.
for _, c := range <-objListChan {
	fmt.Println(urlPrefix, c, "\t", count)
}
Your async worker pops one list from the channel, iterates it, and exits. You need two loops: one reading lists from the channel (for list := range objListChan), the other iterating over each retrieved object list. And close the channel when the pager is done, so the outer range loop can terminate.
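The corrected consumer shape can be sketched with the standard library alone; the pages here are faked as string slices, and sync.WaitGroup stands in for whatever shutdown coordination the real program needs:

```go
package main

import (
	"fmt"
	"sync"
)

// consume drains every list sent on the channel: the outer loop
// receives lists until the channel is closed, the inner loop
// iterates the items of each list.
func consume(lists <-chan []string, wg *sync.WaitGroup) {
	defer wg.Done()
	for list := range lists { // outer loop: one iteration per page
		for _, name := range list { // inner loop: items of that page
			fmt.Println(name)
		}
	}
}

func main() {
	lists := make(chan []string)
	var wg sync.WaitGroup
	wg.Add(1)
	go consume(lists, &wg)

	// Producer: stands in for pager.EachPage sending each page's names.
	pages := [][]string{
		{"obj-1", "obj-2"},
		{"obj-3"},
	}
	for _, p := range pages {
		lists <- p
	}
	close(lists) // lets the consumer's range loop finish
	wg.Wait()
}
```

With `range <-objListChan`, as in the question, the receive happens once and the goroutine returns after the first page, leaving every later send blocked forever on the unbuffered channel.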

Go adding items to array recursively not working

I have been doing some Go programming and it has usually been a lot of fun. Now I have this code I need to port from C# to Go, and it is just not working.
The idea is to fill a tree of employees from a database, but the inner slices are not being filled up on each call.
Better to show the code here:
func (db *DalBase) TitleAllChildren(tx *gorp.Transaction) (items []Title, err error) {
	var dbChildren []entities.Title
	_, err = tx.Select(&dbChildren, "select * from title where idparent is null order by name")
	if err != nil {
		return
	}
	items = make([]Title, 0)
	for i := range dbChildren {
		currItem := &dbChildren[i]
		item := &Title{Id: currItem.Id, Name: currItem.Name}
		err = db.TitleChildrenRecursive(tx, item)
		if err != nil {
			return
		}
		items = append(items, *item)
	}
	return
}

func (db *DalBase) TitleChildrenRecursive(tx *gorp.Transaction, u *Title) (err error) {
	var dbChildren []entities.Title
	_, err = tx.Select(&dbChildren, "select * from title where idparent = $1 order by name", u.Id)
	if err != nil {
		return
	}
	if len(dbChildren) != 0 {
		u.Items = make([]Title, 0)
		for i := range dbChildren {
			currItem := &dbChildren[i]
			item := &Title{Id: currItem.Id, Name: currItem.Name}
			err = db.TitleChildrenRecursive(tx, item)
			if err != nil {
				return
			}
			u.Items = append(item.Items, *item)
		}
	}
	return
}
I have logged the values on each recursive call and they are being filled inside the function, but when it bubbles up to the parent, slices are empty.
I wouldn't like to use pointers to slices, is it possible to implement?
Edit: here is the struct I am trying to fill
type Title struct {
	Id    string  `json:"id"`
	Name  string  `json:"name"`
	Items []Title `json:"items"`
}
You won't need a pointer to a slice, as long as you are passing around a pointer to the struct which contains your slice.
Look closely at the append in the recursive loop: it appends to the child's slice instead of the parent's. u.Items = append(item.Items, *item) should be u.Items = append(u.Items, *item).
There's also no need for u.Items = make([]Title, 0) beforehand, since append works correctly with a nil slice.
You should also consider changing []Title to []*Title, so that if any append operations happen to children items after they are added to the slice, the change is reflected throughout the tree.
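The corrected recursion can be shown without gorp by faking the database as a parent-id map (the map and names below are made up for illustration):

```go
package main

import "fmt"

type Title struct {
	Id    string  `json:"id"`
	Name  string  `json:"name"`
	Items []Title `json:"items"`
}

// childrenOf fakes the database: parent id -> child titles.
// The empty key plays the role of "idparent is null".
var childrenOf = map[string][]Title{
	"": {{Id: "1", Name: "CEO"}},
	"1": {{Id: "2", Name: "CTO"}, {Id: "3", Name: "CFO"}},
	"2": {{Id: "4", Name: "Dev"}},
}

func fillChildren(u *Title) {
	for _, c := range childrenOf[u.Id] {
		child := c
		fillChildren(&child)
		// Append to the PARENT's slice; a nil slice is fine,
		// no make() needed.
		u.Items = append(u.Items, child)
	}
}

func main() {
	root := Title{Id: "", Name: "root"}
	fillChildren(&root)
	fmt.Println(root.Items[0].Name)                   // CEO
	fmt.Println(root.Items[0].Items[0].Name)          // CTO
	fmt.Println(root.Items[0].Items[0].Items[0].Name) // Dev
}
```

With the buggy append(item.Items, *item), each parent's Items would end up holding only the last child and its own subtree, which matches the "slices are empty when it bubbles up" symptom.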
