Using Google Go, I'm trying to sync multiple threads performing an iterative filter on an image. My code basically works as outlined here:
func filter(src *image.Image, dest *image.Image, start, end, runs int, barrier ??) {
for i := 0; i < runs; i++ {
// ... do image manipulation ...
// barrier.Await() would work here
if start == 1 {
// the first thread switches the images for the next iteration step
switchImgs(src, dest)
}
// barrier.Await() again
}
}
func main() {
//...
barrier := sync.BarrierNew(numberOfThreads)
for i := 0; i < numberOfThreads; i++ {
go filter(..., barrier)
	}
}
The problem is that I would need a reusable barrier quite like Java's CyclicBarrier, setting the number of threads as its counter value. Unfortunately, the only implementation similar to a barrier I have found is sync.WaitGroup. The WaitGroup however cannot be reset atomically to its previous counter value. It only offers a normal Wait() function that does not reset the counter value.
Is there any "Go idiomatic" way of achieving what I want or should I rather implement my own CyclicBarrier? Thanks a lot for your help!
You can use sync.Cond to implement a CyclicBarrier; see the source code of Java's CyclicBarrier.
Here is a minimized Go version of CyclicBarrier (no timeout, no thread interrupts):
http://play.golang.org/p/5JSNTm0BLe
type CyclicBarrier struct {
generation int
count int
parties int
trip *sync.Cond
}
func (b *CyclicBarrier) nextGeneration() {
// signal completion of last generation
b.trip.Broadcast()
b.count = b.parties
// set up next generation
b.generation++
}
func (b *CyclicBarrier) Await() {
b.trip.L.Lock()
defer b.trip.L.Unlock()
generation := b.generation
b.count--
index := b.count
//println(index)
if index == 0 {
b.nextGeneration()
} else {
for generation == b.generation {
//wait for current generation complete
b.trip.Wait()
}
}
}
func NewCyclicBarrier(num int) *CyclicBarrier {
b := CyclicBarrier{}
b.count = num
b.parties = num
b.trip = sync.NewCond(&sync.Mutex{})
return &b
}
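A minimal usage sketch (the worker and iteration counts here are illustrative): every goroutine calls Await at the end of each pass, so no goroutine starts pass i+1 until all of them have finished pass i.
func main() {
	const workers = 4
	barrier := NewCyclicBarrier(workers)
	var wg sync.WaitGroup
	wg.Add(workers)
	for w := 0; w < workers; w++ {
		go func(id int) {
			defer wg.Done()
			for i := 0; i < 3; i++ {
				// ... do one filter pass ...
				barrier.Await() // everyone blocks here until the pass is complete
			}
		}(w)
	}
	wg.Wait()
}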
I don't fully understand how CyclicBarrier works, so excuse me if I'm way off.
A very simple wrapper around sync.WaitGroup should do the job, for example:
type Barrier struct {
NumOfThreads int
wg sync.WaitGroup
}
func NewBarrier(num int) (b *Barrier) {
b = &Barrier{NumOfThreads: num}
b.wg.Add(num)
return
}
func (b *Barrier) Await() {
b.wg.Wait()
b.wg.Add(b.NumOfThreads)
}
func (b *Barrier) Done() {
b.wg.Done()
}
func filter(src *image.Image, dest *image.Image, start, end, runs int, barrier *Barrier) {
for i := 0; i < runs; i++ {
// ... do image manipulation ...
//this filter pass is done, say so by calling barrier.Done()
barrier.Done()
barrier.Await()
if start == 1 {
// the first thread switches the images for the next iteration step
//switchImgs(src, dest)
}
barrier.Done()
barrier.Await()
}
}
func main() {
barrier := NewBarrier(5)
for i := 0; i < barrier.NumOfThreads; i++ {
go filter(..., barrier)
}
}
The following program never prints "Full". With fmt.Println(len(choke)) uncommented, the program outputs "Full" when the channel is full.
package main
import (
"fmt"
)
func main() {
choke := make(chan string, 150000)
go func() {
for i := 0; i < 10000000; i++ {
choke <- string(i)
fmt.Println("i=", i)
}
}()
for {
//fmt.Println(len(choke))
if len(choke) >= 150000 {
fmt.Println("Full")
}
}
}
@tim-heckman explained the cause of this behavior in the comments on the original question.
How do I detect a channel is full without using a hot loop?
Use a select statement on the write side. It will write to the channel if there is buffer available or a receiver waiting; it will fall through to the default case if the channel is full.
package main

import "fmt"

func main() {
choke := make(chan string, 150000)
var i int
for {
select {
case choke <- string(i):
i++
default:
fmt.Println("Full")
return
}
}
}
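For completeness, the non-blocking send can be wrapped in a small helper that reports whether the value was accepted. This is a sketch; trySend is not a standard library function.
// trySend attempts a non-blocking send and reports whether it succeeded.
// It returns false when the buffer is full (or, for an unbuffered channel,
// when no receiver is ready).
func trySend(ch chan string, v string) bool {
	select {
	case ch <- v:
		return true
	default:
		return false
	}
}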
I know I can do it like this:
func randomFunc() {
// do stuff
go destroyObjectAfterXHours(4, "idofobject")
// do other stuff
}
func destroyObjectAfterXHours(hours int, id string) {
time.Sleep(time.Duration(hours) * time.Hour)
destroyObject(id)
}
but if we imagine destroyObjectAfterXHours is called a million times within a few minutes, this solution will be very bad.
I was hoping someone could share a more efficient solution to this problem.
I've been thinking about a potential solution where the destruction time and object id are stored somewhere, and one func traverses that list every X minutes, destroys the objects that are due, and removes their id and time info from wherever it is stored. Would this be a good solution?
I worry it would also be a bad solution, since you would then have to traverse a list with millions of items all the time, and then efficiently remove some of the items, etc.
The time.AfterFunc function is designed for this use case:
func randomFunc() {
// do stuff
time.AfterFunc(4*time.Hour, func() { destroyObject("idofobject") })
// do other stuff
}
time.AfterFunc is efficient and simple to use.
As the documentation states, the function is called in a goroutine after the duration elapses. The goroutine is not created up front as in the question.
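Note that time.AfterFunc returns a *time.Timer, so a scheduled destruction can also be cancelled. A sketch, reusing the names from the question:
func randomFunc() {
	// do stuff
	t := time.AfterFunc(4*time.Hour, func() { destroyObject("idofobject") })
	// do other stuff
	// ... later, if the object should be kept after all:
	if t.Stop() {
		// the timer was stopped before firing; destroyObject will not run
	}
}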
So I'd agree with your solution #2 rather than #1.
Traversing a list of a million items is much easier than having a million separate goroutines.
Goroutines are expensive (compared to loops) and take memory and processing time. For example, a million goroutines take about 4 GB of RAM.
Traversing a list, on the other hand, takes very little space and is done in O(n) time.
A good example of this exact approach is go-cache, which deletes its expired elements in a goroutine it runs periodically:
https://github.com/patrickmn/go-cache/blob/master/cache.go#L931
This is a more detailed example of how they did it:
type Item struct {
Object interface{}
Expiration int64
}
func (item Item) Expired() bool {
if item.Expiration == 0 {
return false
}
return time.Now().UnixNano() > item.Expiration
}
func RemoveItem(s []Item, index int) []Item {
	return append(s[:index], s[index+1:]...)
}

func deleteExpired(items []Item) []Item {
	var deletedIndexes []int
	for i, v := range items {
		if v.Expired() {
			deletedIndexes = append(deletedIndexes, i)
		}
	}
	// Remove from the end so the earlier indexes stay valid.
	for i := len(deletedIndexes) - 1; i >= 0; i-- {
		items = RemoveItem(items, deletedIndexes[i])
	}
	return items
}
The above implementation could definitely be improved with a linked list instead of a slice, but this is the general idea.
This is an interesting question. I came up with a solution that uses a heap to maintain the queue of items to be destroyed and sleeps exactly until the next item is up for destruction. I think it is more efficient, but the gain might be slim in some cases. Nonetheless, you can see the code here:
package main
import (
"container/heap"
"fmt"
"time"
)
type Item struct {
	Expiration time.Time
	Object     interface{} // It would make more sense to be *interface{}, but it is not as convenient
}

//MININT is the minimal interval for the delete loop to run. In most cases, it is better set to 0.
const MININT = 1 * time.Second
func deleteExpired(addCh chan Item) (quitCh chan bool) {
quitCh = make(chan bool)
go func() {
h := make(ExpHeap, 0)
var t *time.Timer
item := <-addCh
heap.Push(&h, &item)
t = time.NewTimer(time.Until(h[0].Expiration))
for {
//Check unfinished incoming first
for incoming := true; incoming; {
select {
case item := <-addCh:
heap.Push(&h, &item)
default:
incoming = false
}
}
			if h.Len() > 0 {
				if delta := time.Until(h[0].Expiration); delta >= MININT {
					t.Reset(delta)
				} else {
					t.Reset(MININT)
				}
			}
select {
case <-quitCh:
return
//New Item incoming, break the timer
case item := <-addCh:
heap.Push(&h, &item)
if item.Expiration.After(h[0].Expiration) {
continue
}
if delta := time.Until(item.Expiration); delta >= MININT {
t.Reset(delta)
} else {
t.Reset(MININT)
}
			//Wait until the next item is up for deletion
			case <-t.C:
				for h.Len() > 0 && !h[0].Expiration.After(time.Now()) {
					item := heap.Pop(&h).(*Item)
					destroy(item.Object)
				}
				if h.Len() > 0 {
					if delta := time.Until(h[0].Expiration); delta >= MININT {
						t.Reset(delta)
					} else {
						t.Reset(MININT)
					}
				}
}
}
}()
return quitCh
}
type ExpHeap []*Item
func (h ExpHeap) Len() int {
return len(h)
}
func (h ExpHeap) Swap(i, j int) {
h[i], h[j] = h[j], h[i]
}
func (h ExpHeap) Less(i, j int) bool {
return h[i].Expiration.Before(h[j].Expiration)
}
func (h *ExpHeap) Push(x interface{}) {
item := x.(*Item)
*h = append(*h, item)
}
func (h *ExpHeap) Pop() interface{} {
old, n := *h, len(*h)
item := old[n-1]
*h = old[:n-1]
return item
}
//Actual destroy code.
func destroy(x interface{}) {
fmt.Printf("%v # %v\n", x, time.Now())
}
func main() {
addCh := make(chan Item)
quitCh := deleteExpired(addCh)
for i := 30; i > 0; i-- {
t := time.Now().Add(time.Duration(i) * time.Second / 2)
addCh <- Item{t, t}
}
time.Sleep(7 * time.Second)
quitCh <- true
}
playground: https://play.golang.org/p/JNV_6VJ_yfK
By the way, there are packages like cron for job management, but I am not familiar with them so I cannot speak for their efficiency.
Edit:
I still don't have enough reputation to comment :(
About performance: this code basically uses less CPU, as it only wakes itself when necessary and traverses only the items that are up for destruction instead of the whole list. Based on personal (ACM contest) experience, a modern CPU can roughly run a loop of 10^9 iterations in 1.2 seconds or so, which means that at a scale of 10^6, traversing the whole list takes a bit over 1 millisecond, excluding the actual destruction code AND data copies (which cost a lot on average over thousands of runs, on the scale of 100 milliseconds or so). My code's approach is O(lg N), which at a 10^6 scale is at least a thousand times faster (considering the constant). Please note again that all these numbers are based on experience rather than benchmarks (there were benchmarks, but I am not able to provide them).
Edit 2:
On second thought, I think the plain solution can use a simple optimization:
func deleteExpired(items []Item) (remaining, deleted []Item) {
	tail := len(items)
	for index := 0; index < tail; {
		if items[index].Expired() {
			tail--
			items[index], items[tail] = items[tail], items[index]
		} else {
			index++
		}
	}
	return items[:tail], items[tail:]
}
With this change, it no longer copies data inefficiently and does not allocate extra space.
Edit 3:
Modifying the code from here, I tested the memory use of AfterFunc. On my laptop it is 250 bytes per call, while on the playground it is 69 (I am curious about the reason). With my code, a pointer plus a time.Time is 28 bytes. At the scale of a million, the difference is slim. Using AfterFunc is a much better option.
If it is a one-shot, this can be easily achieved with
// Make the destruction cancelable
cancel := make(chan bool)
go func(t time.Duration, id int){
expired := time.NewTimer(t).C
select {
// destroy the object when the timer is expired
case <-expired:
destroyObject(id)
// or cancel the destruction in case we get a cancel signal
// before it is destroyed
case <-cancel:
fmt.Println("Cancelled destruction of",id)
return
}
}(4*time.Hour, id)
if weather == weather.SUNNY {
cancel <- true
}
If you want to do it every 4 hours:
// Same as above, though since id may have already been destroyed
// once, I name the channel differently
done := make(chan bool)
go func(t time.Duration,id int){
// Sends to the channel every t
tick := time.NewTicker(t).C
// Wrap, otherwise select will only execute the first tick
for{
select {
// t has passed, so id can be destroyed
case <-tick:
destroyObject(id)
// We are finished destroying stuff
case <-done:
fmt.Println("Ok, ok, I quit destroying...")
return
}
}
}(4*time.Hour, id)
if weather == weather.RAINY {
done <- true
}
The idea behind it is to run a single goroutine per destruction job which can be cancelled. Say, you have a session and the user did something, so you want to keep the session alive. Since goroutines are extremely cheap, you can simply fire off another goroutine.
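For the session example specifically, one timer per session combined with Reset avoids spawning a goroutine on every user action. A small self-contained sketch (destroySession and the durations are illustrative):
package main

import (
	"fmt"
	"time"
)

func destroySession(id string) { fmt.Println("destroyed session", id) }

func main() {
	// Schedule destruction of the session in 2 seconds.
	timer := time.AfterFunc(2*time.Second, func() { destroySession("abc") })

	// Simulated user activity: postpone destruction by resetting the timer.
	time.Sleep(1 * time.Second)
	timer.Reset(2 * time.Second)

	time.Sleep(3 * time.Second) // let the timer fire
}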
Here is a simple concurrent map that I wrote for learning purposes:
package concurrent_hashmap
import (
"hash/fnv"
"sync"
)
type ConcurrentMap struct {
buckets []ThreadSafeMap
bucketCount uint32
}
type ThreadSafeMap struct {
mapLock sync.RWMutex
hashMap map[string]interface{}
}
func NewConcurrentMap(bucketSize uint32) *ConcurrentMap {
var threadSafeMapInstance ThreadSafeMap
var bucketOfThreadSafeMap []ThreadSafeMap
for i := 0; i <= int(bucketSize); i++ {
threadSafeMapInstance = ThreadSafeMap{sync.RWMutex{}, make(map[string]interface{})}
bucketOfThreadSafeMap = append(bucketOfThreadSafeMap, threadSafeMapInstance)
}
return &ConcurrentMap{bucketOfThreadSafeMap, bucketSize}
}
func (cMap *ConcurrentMap) Put(key string, val interface{}) {
bucketIndex := hash(key) % cMap.bucketCount
bucket := cMap.buckets[bucketIndex]
bucket.mapLock.Lock()
bucket.hashMap[key] = val
bucket.mapLock.Unlock()
}
// Helper
func hash(s string) uint32 {
h := fnv.New32a()
h.Write([]byte(s))
return h.Sum32()
}
I am trying to write a simple benchmark, and I find that synchronized access works correctly but concurrent access gets
fatal error: concurrent map writes
Here is my benchmark, run with go test -bench=. -race:
package concurrent_hashmap
import (
"testing"
"runtime"
"math/rand"
"strconv"
"sync"
)
// Concurrent does not work
func BenchmarkMyFunc(b *testing.B) {
var wg sync.WaitGroup
runtime.GOMAXPROCS(runtime.NumCPU())
my_map := NewConcurrentMap(uint32(4))
for n := 0; n < b.N; n++ {
go insert(my_map, wg)
}
wg.Wait()
}
func insert(my_map *ConcurrentMap, wg sync.WaitGroup) {
wg.Add(1)
var rand_int int
for element_num := 0; element_num < 1000; element_num++ {
rand_int = rand.Intn(100)
my_map.Put(strconv.Itoa(rand_int), rand_int)
}
defer wg.Done()
}
// This works
func BenchmarkMyFuncSynchronize(b *testing.B) {
my_map := NewConcurrentMap(uint32(4))
for n := 0; n < b.N; n++ {
my_map.Put(strconv.Itoa(123), 123)
}
}
The WARNING: DATA RACE output says that bucket.hashMap[key] = val is causing the problem, but I am confused about why that is possible, since I lock that logic whenever a write happens.
I think I am missing something basic, can someone point out my mistake?
Thanks
Edit1:
Not sure if this helps but here is what my mutex looks like if I don't lock anything
{{0 0} 0 0 0 0}
Here is what it looks like if I lock the write
{{1 0} 0 0 -1073741824 0}
Not sure why my readerCount is a low negative number
Edit:2
I think I found where the issue is, but I am not sure why I have to code it that way.
The issue is
type ThreadSafeMap struct {
mapLock sync.RWMutex // This is causing the problem
hashMap map[string]interface{}
}
it should be
type ThreadSafeMap struct {
mapLock *sync.RWMutex
hashMap map[string]interface{}
}
Another weird thing is that in Put, if I put print statements inside the lock:
bucket.mapLock.Lock()
fmt.Println("start")
fmt.Println(bucket)
fmt.Println(bucketIndex)
fmt.Println(bucket.mapLock)
fmt.Println(&bucket.mapLock)
bucket.hashMap[key] = val
defer bucket.mapLock.Unlock()
then the following output is possible:
start
start
{0x4212861c0 map[123:123]}
{0x4212241c0 map[123:123]}
It's weird because each "start" printout should be followed by four lines of bucket info; you cannot have two "start"s back to back, because that would indicate that multiple threads are accessing the lines inside the lock.
Also, for some reason, each bucket.mapLock has a different address even if I make bucketIndex static, which indicates that I am not even accessing the same lock.
But despite the above weirdness, changing the mutex to a pointer solves my problem.
I would love to find out why I need pointers for the mutex, why the prints seem to indicate that multiple threads are accessing the lock, and why each lock has a different address.
The problem is with the statement
bucket := cMap.buckets[bucketIndex]
bucket now contains a copy of the ThreadSafeMap at that index. Since sync.RWMutex is stored as a value, a copy of it is made during the assignment. A Go map, however, holds a reference to its underlying data structure, so the copy points at the same map. The code therefore locks a copy of the lock while writing to the shared map, which causes the problem.
That's why you don't face any problem when you change sync.RWMutex to *sync.RWMutex. It's better to store pointers to the bucket structs in the slice, as shown:
package concurrent_hashmap
import (
"hash/fnv"
"sync"
)
type ConcurrentMap struct {
buckets []*ThreadSafeMap
bucketCount uint32
}
type ThreadSafeMap struct {
mapLock sync.RWMutex
hashMap map[string]interface{}
}
func NewConcurrentMap(bucketSize uint32) *ConcurrentMap {
var threadSafeMapInstance *ThreadSafeMap
var bucketOfThreadSafeMap []*ThreadSafeMap
for i := 0; i <= int(bucketSize); i++ {
threadSafeMapInstance = &ThreadSafeMap{sync.RWMutex{}, make(map[string]interface{})}
bucketOfThreadSafeMap = append(bucketOfThreadSafeMap, threadSafeMapInstance)
}
return &ConcurrentMap{bucketOfThreadSafeMap, bucketSize}
}
func (cMap *ConcurrentMap) Put(key string, val interface{}) {
bucketIndex := hash(key) % cMap.bucketCount
bucket := cMap.buckets[bucketIndex]
bucket.mapLock.Lock()
bucket.hashMap[key] = val
bucket.mapLock.Unlock()
}
// Helper
func hash(s string) uint32 {
h := fnv.New32a()
h.Write([]byte(s))
return h.Sum32()
}
It's possible to validate the scenario by modifying the function Put as follows
func (cMap *ConcurrentMap) Put(key string, val interface{}) {
//fmt.Println("index", key)
bucketIndex := 1
bucket := cMap.buckets[bucketIndex]
fmt.Printf("%p %p\n", &(bucket.mapLock), bucket.hashMap)
}
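As an aside, the benchmark in the question has a second, unrelated race: the sync.WaitGroup is copied when passed by value, and Add is called inside the goroutine. A sketch of a corrected benchmark under those assumptions (pass a pointer and call Add before spawning):
func BenchmarkMyFunc(b *testing.B) {
	myMap := NewConcurrentMap(uint32(4))
	var wg sync.WaitGroup
	for n := 0; n < b.N; n++ {
		wg.Add(1) // count the goroutine before it starts
		go insert(myMap, &wg)
	}
	wg.Wait()
}

func insert(myMap *ConcurrentMap, wg *sync.WaitGroup) {
	defer wg.Done()
	for i := 0; i < 1000; i++ {
		n := rand.Intn(100)
		myMap.Put(strconv.Itoa(n), n)
	}
}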
I am making a cache wrapper around a database. To account for possibly slow database calls, I was thinking of a mutex per key (pseudo Go code):
mutexes = map[string]*sync.Mutex // instance variable
mutexes[key].Lock()
defer mutexes[key].Unlock()
if value, ok := cache.find(key); ok {
return value
}
value = databaseCall(key)
cache.save(key, value)
return value
However I don't want my map to grow too much. My cache is an LRU and I want to have a fixed size for some other reasons not mentioned here. I would like to do something like
delete(mutexes, key)
when all the locks on the key are done, but... that doesn't look thread-safe to me... How should I do it?
Note: I found the question In Go, can we synchronize each key of a map using a lock per key?, but it has no answer.
A map of mutexes is an efficient way to accomplish this; however, the map itself must also be synchronized. A reference count can be used to keep track of entries in concurrent use and remove them when no longer needed. Here is a working map of mutexes, complete with a test and benchmark.
(UPDATE: This package provides similar functionality: https://pkg.go.dev/golang.org/x/sync/singleflight )
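For the cache-filling use case in particular, singleflight collapses concurrent lookups of the same key into a single call. A minimal sketch, with databaseCall standing in for the slow lookup from the question:
package main

import (
	"fmt"

	"golang.org/x/sync/singleflight"
)

var group singleflight.Group

// databaseCall stands in for the slow database lookup from the question.
func databaseCall(key string) string { return "value-for-" + key }

// get computes the value for key at most once across concurrent callers.
func get(key string) (string, error) {
	v, err, _ := group.Do(key, func() (interface{}, error) {
		return databaseCall(key), nil
	})
	if err != nil {
		return "", err
	}
	return v.(string), nil
}

func main() {
	v, _ := get("user:42")
	fmt.Println(v)
}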
mapofmu.go
// Package mapofmu provides locking per-key.
// For example, you can acquire a lock for a specific user ID and all other requests for that user ID
// will block until that entry is unlocked (effectively your work load will be run serially per-user ID),
// and yet have work for separate user IDs happen concurrently.
package mapofmu
import (
"fmt"
"sync"
)
// M wraps a map of mutexes. Each key locks separately.
type M struct {
ml sync.Mutex // lock for entry map
ma map[interface{}]*mentry // entry map
}
type mentry struct {
m *M // point back to M, so we can synchronize removing this mentry when cnt==0
el sync.Mutex // entry-specific lock
cnt int // reference count
key interface{} // key in ma
}
// Unlocker provides an Unlock method to release the lock.
type Unlocker interface {
Unlock()
}
// New returns an initialized M.
func New() *M {
return &M{ma: make(map[interface{}]*mentry)}
}
// Lock acquires a lock corresponding to this key.
// This method will never return nil and Unlock() must be called
// to release the lock when done.
func (m *M) Lock(key interface{}) Unlocker {
// read or create entry for this key atomically
m.ml.Lock()
e, ok := m.ma[key]
if !ok {
e = &mentry{m: m, key: key}
m.ma[key] = e
}
e.cnt++ // ref count
m.ml.Unlock()
// acquire the entry's lock; this blocks while another goroutine holds it
e.el.Lock()
return e
}
// Unlock releases the lock for this entry.
func (me *mentry) Unlock() {
m := me.m
// decrement and if needed remove entry atomically
m.ml.Lock()
e, ok := m.ma[me.key]
if !ok { // entry must exist
m.ml.Unlock()
panic(fmt.Errorf("Unlock requested for key=%v but no entry found", me.key))
}
e.cnt-- // ref count
if e.cnt < 1 { // if it hits zero then we own it and remove from map
delete(m.ma, me.key)
}
m.ml.Unlock()
// now that map stuff is handled, we unlock and let
// anything else waiting on this key through
e.el.Unlock()
}
mapofmu_test.go:
package mapofmu
import (
"math/rand"
"strconv"
"strings"
"sync"
"testing"
"time"
)
func TestM(t *testing.T) {
r := rand.New(rand.NewSource(42))
m := New()
_ = m
keyCount := 20
iCount := 10000
out := make(chan string, iCount*2)
// run a bunch of concurrent requests for various keys,
// the idea is to have a lot of lock contention
var wg sync.WaitGroup
wg.Add(iCount)
for i := 0; i < iCount; i++ {
go func(rn int) {
defer wg.Done()
key := strconv.Itoa(rn)
// you can prove the test works by commenting the locking out and seeing it fail
l := m.Lock(key)
defer l.Unlock()
out <- key + " A"
time.Sleep(time.Microsecond) // make 'em wait a mo'
out <- key + " B"
}(r.Intn(keyCount))
}
wg.Wait()
close(out)
// verify the map is empty now
if l := len(m.ma); l != 0 {
t.Errorf("unexpected map length at test end: %v", l)
}
// confirm that the output always produced the correct sequence
outLists := make([][]string, keyCount)
for s := range out {
sParts := strings.Fields(s)
kn, err := strconv.Atoi(sParts[0])
if err != nil {
t.Fatal(err)
}
outLists[kn] = append(outLists[kn], sParts[1])
}
for kn := 0; kn < keyCount; kn++ {
l := outLists[kn] // list of output for this particular key
for i := 0; i < len(l); i += 2 {
if l[i] != "A" || l[i+1] != "B" {
t.Errorf("For key=%v and i=%v got unexpected values %v and %v", kn, i, l[i], l[i+1])
break
}
}
}
if t.Failed() {
t.Logf("Failed, outLists: %#v", outLists)
}
}
func BenchmarkM(b *testing.B) {
m := New()
b.ResetTimer()
for i := 0; i < b.N; i++ {
// run uncontended lock/unlock - should be quite fast
m.Lock(i).Unlock()
}
}
I wrote a simple, similar implementation: mapmutex
But instead of a map of mutexes, in this implementation a single mutex is used to guard the map, and each item in the map is used like a 'lock'. The map itself is just a plain ordinary map.
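The idea can be sketched as follows (a simplified illustration, not mapmutex's actual API): one ordinary mutex guards a set of in-use keys, and locking a key retries with a short sleep until the key is free.
package keylock

import (
	"sync"
	"time"
)

type KeyLock struct {
	mu     sync.Mutex
	locked map[string]bool
}

func NewKeyLock() *KeyLock {
	return &KeyLock{locked: make(map[string]bool)}
}

// Lock claims key, retrying with a short sleep while another goroutine holds it.
func (k *KeyLock) Lock(key string) {
	for {
		k.mu.Lock()
		if !k.locked[key] {
			k.locked[key] = true
			k.mu.Unlock()
			return
		}
		k.mu.Unlock()
		time.Sleep(time.Millisecond) // back off and retry
	}
}

// Unlock releases key; the map never holds more entries than currently locked keys.
func (k *KeyLock) Unlock(key string) {
	k.mu.Lock()
	delete(k.locked, key)
	k.mu.Unlock()
}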
I want a set of code to be executed until the user explicitly wants to exit the function. For example: when a user runs the program, they will see 2 options:
Run again
Exit
This will be achieved using a switch-case structure. If the user presses 1, the set of functions associated with 1 will execute, and if the user presses 2, the program will exit. How should I achieve this scenario in Go? In Java, I believe this could be done with a do-while structure, but Go doesn't have a do-while loop. The following is my code, which I tried, but it goes into an infinite loop:
func sample() {
var i = 1
for i > 0 {
fmt.Println("Press 1 to run")
fmt.Println("Press 2 to exit")
var input string
inpt, _ := fmt.Scanln(&input)
switch inpt {
case 1:
fmt.Println("hi")
case 2:
os.Exit(2)
default:
fmt.Println("def")
}
}
}
The program, irrespective of the input, prints only "hi". Could someone please point out what I am doing wrong here?
Thanks.
A do..while can more directly be emulated in Go with a for loop using a bool loop variable seeded with true.
for ok := true; ok; ok = EXPR { }
is more or less directly equivalent to
do { } while(EXPR)
So in your case (note that your original code switched on the first return value of fmt.Scanln, which is the number of items scanned rather than the input itself, so it always matched case 1 and printed "hi"):
var input int
for ok := true; ok; ok = (input != 2) {
n, err := fmt.Scanln(&input)
if n < 1 || err != nil {
fmt.Println("invalid input")
break
}
switch input {
case 1:
fmt.Println("hi")
case 2:
// Do nothing (we want to exit the loop)
// In a real program this could be cleanup
default:
fmt.Println("def")
}
}
Edit: Playground (with a dummied-out Stdin)
Though, admittedly, in this case it's probably overall clearer to just explicitly call (labelled) break, return, or os.Exit in the loop.
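For reference, since a plain break inside a switch only exits the switch in Go, breaking out of the loop from a case needs a label. A sketch:
loop:
	for {
		var input int
		if _, err := fmt.Scanln(&input); err != nil {
			break loop
		}
		switch input {
		case 1:
			fmt.Println("hi")
		case 2:
			break loop // exits the for loop, not just the switch
		default:
			fmt.Println("def")
		}
	}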
When this question was asked, this was a better answer for this specific scenario (little did I know this would be the #1 result when searching Google for "do while loop golang"). For answering this question generically, please see @LinearZoetrope's answer below.
Wrap your function in a for loop:
package main
import (
"fmt"
"os"
)
func main() {
fmt.Println("Press 1 to run")
fmt.Println("Press 2 to exit")
for {
sample()
}
}
func sample() {
var input int
n, err := fmt.Scanln(&input)
if n < 1 || err != nil {
fmt.Println("invalid input")
return
}
switch input {
case 1:
fmt.Println("hi")
case 2:
os.Exit(2)
default:
fmt.Println("def")
}
}
A for loop without any declarations is equivalent to a while loop in other C-like languages. Check out the Effective Go documentation which covers the for loop.
The do...while in Go can be written like this:
func main() {
var value int
for {
value++
fmt.Println(value)
if value%6 == 0 { // the loop body always runs at least once
break
}
}
}
A while loop in Go can be as easy as this:
package main

import "fmt"

func main() {
	for {
		var number float64
		fmt.Print("insert a number equal to or greater than 10: ")
		fmt.Scanf("%f", &number)
		if number >= 10 {
			break
		}
		fmt.Println("sorry, the number is lower than 10... type again!")
	}
}
Consider using "for-break" as a "do-while".
foo.go
package main
import (
"fmt"
)
func main() {
i := 0
for {
i++
if i > 10 {
break
}
fmt.Printf("%v ", i)
}
fmt.Println()
}
shell
$ go run foo.go
1 2 3 4 5 6 7 8 9 10
Maybe not what you're looking for, but if you're trying to do something like this:
int i = 0;
while (i < 10) {
    cout << "incrementing i now" << endl;
    i++;
}
cout << "done";
You'll have to do something like this in Go:
var i = 0
fmt.Println(i)
for {
if i < 10 {
fmt.Println("incrementing i now")
i++
} else {
break
}
}
fmt.Println("done")
sum := 1
for sum < 1000 {
sum += sum
}
Explanation:
The basic for loop has three components separated by semicolons:
- the init statement: executed before the first iteration
- the condition expression: evaluated before every iteration
- the post statement: executed at the end of every iteration
The init and post statements are optional.
So you can just put in the condition expression.
// while (CONDITION) {
//     // code to execute
// }
// In Go:
for CONDITION {
	// code to execute
}
This is one of the cleanest ways:
num := 10
for num > 0 {
// do stuff here
num--
}