Golang error in terminal - macOS

I'm getting Go runtime errors in my macOS terminal when I press Enter multiple times. There's a known issue with macOS Sierra and Go 1.4.2. I don't have Go 1.4.2 installed, but the stack trace refers to it, so my assumption is that some application on the system was built with the old version.
Is there a way I can enable verbose logging or otherwise identify the utility/daemon causing the error? The stack trace is below:
failed MSpanList_Insert 0x370000 0x672c197df0740 0x0
fatal error: MSpanList_Insert
runtime stack:
runtime.throw(0x279a4b)
/usr/local/Cellar/go/1.4.2/libexec/src/runtime/panic.go:491 +0xad fp=0x7fff5fbff000 sp=0x7fff5fbfefd0
runtime.MSpanList_Insert(0x298e28, 0x370000)
/usr/local/Cellar/go/1.4.2/libexec/src/runtime/mheap.c:692 +0x8f fp=0x7fff5fbff028 sp=0x7fff5fbff000
MHeap_FreeSpanLocked(0x295a20, 0x370000, 0x100)
/usr/local/Cellar/go/1.4.2/libexec/src/runtime/mheap.c:583 +0x163 fp=0x7fff5fbff068 sp=0x7fff5fbff028
MHeap_Grow(0x295a20, 0x8, 0x0)
/usr/local/Cellar/go/1.4.2/libexec/src/runtime/mheap.c:420 +0x1a8 fp=0x7fff5fbff0a8 sp=0x7fff5fbff068
MHeap_AllocSpanLocked(0x295a20, 0x1, 0x0)
/usr/local/Cellar/go/1.4.2/libexec/src/runtime/mheap.c:298 +0x365 fp=0x7fff5fbff0e8 sp=0x7fff5fbff0a8
mheap_alloc(0x295a20, 0x1, 0x12, 0x0)
/usr/local/Cellar/go/1.4.2/libexec/src/runtime/mheap.c:190 +0x121 fp=0x7fff5fbff110 sp=0x7fff5fbff0e8
runtime.MHeap_Alloc(0x295a20, 0x1, 0x10000000012, 0x1f2e9)
/usr/local/Cellar/go/1.4.2/libexec/src/runtime/mheap.c:240 +0x66 fp=0x7fff5fbff148 sp=0x7fff5fbff110
MCentral_Grow(0x29d798, 0x0)
/usr/local/Cellar/go/1.4.2/libexec/src/runtime/mcentral.c:197 +0x8b fp=0x7fff5fbff1b0 sp=0x7fff5fbff148
runtime.MCentral_CacheSpan(0x29d798, 0x0)
/usr/local/Cellar/go/1.4.2/libexec/src/runtime/mcentral.c:85 +0x167 fp=0x7fff5fbff1e8 sp=0x7fff5fbff1b0
runtime.MCache_Refill(0x36c000, 0x12, 0x0)
/usr/local/Cellar/go/1.4.2/libexec/src/runtime/mcache.c:90 +0xa0 fp=0x7fff5fbff210 sp=0x7fff5fbff1e8
runtime.mcacheRefill_m()
/usr/local/Cellar/go/1.4.2/libexec/src/runtime/malloc.c:368 +0x57 fp=0x7fff5fbff230 sp=0x7fff5fbff210
runtime.onM(0x1f3ff8)
/usr/local/Cellar/go/1.4.2/libexec/src/runtime/asm_amd64.s:273 +0x9a fp=0x7fff5fbff238 sp=0x7fff5fbff230
runtime.mallocgc(0x120, 0x198de0, 0x0, 0x0)
/usr/local/Cellar/go/1.4.2/libexec/src/runtime/malloc.go:178 +0x849 fp=0x7fff5fbff2e8 sp=0x7fff5fbff238
runtime.newobject(0x198de0, 0x36c000)
/usr/local/Cellar/go/1.4.2/libexec/src/runtime/malloc.go:353 +0x49 fp=0x7fff5fbff310 sp=0x7fff5fbff2e8
runtime.newG(0x3645a)
/usr/local/Cellar/go/1.4.2/libexec/src/runtime/proc.go:233 +0x2a fp=0x7fff5fbff328 sp=0x7fff5fbff310
allocg(0x288380)
/usr/local/Cellar/go/1.4.2/libexec/src/runtime/proc.c:925 +0x1f fp=0x7fff5fbff338 sp=0x7fff5fbff328
runtime.malg(0x8000, 0x288420)
/usr/local/Cellar/go/1.4.2/libexec/src/runtime/proc.c:2106 +0x1f fp=0x7fff5fbff368 sp=0x7fff5fbff338
runtime.mpreinit(0x2887e0)
/usr/local/Cellar/go/1.4.2/libexec/src/runtime/os_darwin.c:137 +0x27 fp=0x7fff5fbff380 sp=0x7fff5fbff368
mcommoninit(0x2887e0)
/usr/local/Cellar/go/1.4.2/libexec/src/runtime/proc.c:201 +0xc9 fp=0x7fff5fbff3a8 sp=0x7fff5fbff380
runtime.schedinit()
/usr/local/Cellar/go/1.4.2/libexec/src/runtime/proc.c:138 +0x55 fp=0x7fff5fbff3d0 sp=0x7fff5fbff3a8
runtime.rt0_go(0x7fff5fbff408, 0x3, 0x7fff5fbff408, 0x0, 0x0, 0x3, 0x7fff5fbff608, 0x7fff5fbff60f, 0x7fff5fbff616, 0x0, ...)
/usr/local/Cellar/go/1.4.2/libexec/src/runtime/asm_amd64.s:95 +0x116 fp=0x7fff5fbff3d8 sp=0x7fff5fbff3d0

Give brew upgrade go a try, as this seems to be resolved in later Go versions. Even if you didn't install Go with brew originally, the install path is the same, so brew should fix it for you.

(Moving my comment as an answer to mark this question as resolved).
Ended up fixing it. I manually went through my utilities and isolated direnv (github.com/direnv/direnv) as the problematic one; I must have compiled it with an old Go version.
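If you want a more systematic way to find binaries built with an old toolchain than checking utilities one by one, here is a minimal sketch using the standard debug/buildinfo package (it needs a Go 1.18+ toolchain to build, and the scan directory is a placeholder). Binaries from before the module era report no build info at all, which is itself a hint that they are old; for those, strings <binary> | grep go1. still works.

package main

import (
    "debug/buildinfo"
    "fmt"
    "os"
    "path/filepath"
)

func main() {
    root := "/usr/local/bin" // hypothetical directory to scan; adjust as needed
    filepath.Walk(root, func(path string, fi os.FileInfo, err error) error {
        if err != nil || fi.IsDir() {
            return nil // skip unreadable entries and directories
        }
        info, rerr := buildinfo.ReadFile(path)
        if rerr != nil {
            return nil // not a Go binary, or too old to carry build info
        }
        fmt.Printf("%s\tbuilt with %s\n", path, info.GoVersion)
        return nil
    })
}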

Related

Rancher 2.5.15 - Panic due to boundsError in eks-operator causing constant restarts

I'm facing an issue with Rancher 2.5.15 where the main Rancher pod and eks-config-operator are constantly crashing and restarting. This started around 2 weeks ago without any change to the deployed Rancher application, which we confirmed by restoring a snapshot taken 5 days before the issue appeared; the problem persisted.
I assume the issue is caused by a registered EKS cluster returning bad cluster information, since it fails in a function called BuildUpstreamClusterState; however, the log doesn't indicate which cluster it may be. Deleting the clusters from Rancher is not an option: several EKS clusters are registered, and since some of them were provisioned from Rancher itself, deleting them from the Rancher UI would also delete them from AWS.
The clusters themselves are working correctly, including the cluster where Rancher is running.
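For reference, a runtime.boundsError of the form "index out of range [0] with length 0" means the code took element 0 of an empty slice. A minimal hypothetical illustration of the pattern (not the actual eks-operator code), with the guard that would avoid the panic:

package main

import "fmt"

// firstSubnet is a hypothetical stand-in for code that assumes an
// API response slice is non-empty; subnets[0] on an empty slice
// panics with: index out of range [0] with length 0
func firstSubnet(subnets []string) (string, error) {
    if len(subnets) == 0 {
        return "", fmt.Errorf("no subnets returned")
    }
    return subnets[0], nil
}

func main() {
    if _, err := firstSubnet(nil); err != nil {
        fmt.Println("handled gracefully:", err)
    }
}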
Full log of the panic event:
E0722 12:59:58.631866 7 runtime.go:78] Observed a panic: runtime.boundsError{x:0, y:0, signed:true, code:0x0} (runtime error: index out of range [0] with length 0)
goroutine 1523 [running]:
k8s.io/apimachinery/pkg/util/runtime.logPanic(0x3658a80, 0xc09820cba0)
/go/pkg/mod/k8s.io/apimachinery#v0.20.6/pkg/util/runtime/runtime.go:74 +0x95
k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
/go/pkg/mod/k8s.io/apimachinery#v0.20.6/pkg/util/runtime/runtime.go:48 +0x86
panic(0x3658a80, 0xc09820cba0)
/usr/local/go/src/runtime/panic.go:965 +0x1b9
github.com/rancher/eks-operator/controller.BuildUpstreamClusterState(0xc037b89ae0, 0x20, 0xc0631001c8, 0x14, 0xc0453237d0, 0xc0818104b8, 0x1, 0x1, 0xc0818104c8, 0xc062c43f00, ...)
/go/pkg/mod/github.com/rancher/eks-operator#v1.0.9/controller/eks-cluster-config-handler.go:865 +0x1931
github.com/rancher/rancher/pkg/controllers/management/clusterupstreamrefresher.BuildEKSUpstreamSpec(0x3f3d538, 0xc04cdf3050, 0xc04a2ad100, 0x30, 0xc0d89d79a0, 0x1)
/go/src/github.com/rancher/rancher/pkg/controllers/management/clusterupstreamrefresher/eks_upstream_spec.go:80 +0xc92
github.com/rancher/rancher/pkg/controllers/management/clusterupstreamrefresher.getComparableUpstreamSpec(0x3f3d538, 0xc04cdf3050, 0xc04a2ad100, 0x30, 0xc0d89d79a0, 0x1)
/go/src/github.com/rancher/rancher/pkg/controllers/management/clusterupstreamrefresher/cluster_upstream_refresher.go:263 +0x14c
github.com/rancher/rancher/pkg/controllers/management/clusterupstreamrefresher.(*clusterRefreshController).refreshClusterUpstreamSpec(0xc04cdb7740, 0xc04a2ad100, 0x38cc6bf, 0x3, 0xa303efde8, 0x59d1700, 0x1)
/go/src/github.com/rancher/rancher/pkg/controllers/management/clusterupstreamrefresher/cluster_upstream_refresher.go:147 +0xef
github.com/rancher/rancher/pkg/controllers/management/clusterupstreamrefresher.(*clusterRefreshController).onClusterChange(0xc04cdb7740, 0xc0065a5200, 0x7, 0xc04a2ad100, 0x37761a0, 0x383d380, 0x7ff83d307a98)
/go/src/github.com/rancher/rancher/pkg/controllers/management/clusterupstreamrefresher/cluster_upstream_refresher.go:75 +0x225
github.com/rancher/rancher/pkg/generated/controllers/management.cattle.io/v3.FromClusterHandlerToHandler.func1(0xc0065a5200, 0x7, 0x3ef5738, 0xc04a2ad100, 0xc04a2ad100, 0x7ff83d307a98, 0xc04a2ad100, 0x1)
/go/src/github.com/rancher/rancher/pkg/generated/controllers/management.cattle.io/v3/cluster.go:105 +0x6b
github.com/rancher/lasso/pkg/controller.SharedControllerHandlerFunc.OnChange(0xc04d858010, 0xc0065a5200, 0x7, 0x3ef5738, 0xc04a2ad100, 0x0, 0xc04a2ad100, 0x0, 0x0)
/go/pkg/mod/github.com/rancher/lasso#v0.0.0-20210408231703-9ddd9378d08d/pkg/controller/sharedcontroller.go:29 +0x4e
github.com/rancher/lasso/pkg/controller.(*SharedHandler).OnChange(0xc001b351a0, 0xc0065a5200, 0x7, 0x3ef5738, 0xc04a2ad100, 0xc022e42c01, 0x0)
/go/pkg/mod/github.com/rancher/lasso#v0.0.0-20210408231703-9ddd9378d08d/pkg/controller/sharedhandler.go:66 +0x123
github.com/rancher/lasso/pkg/controller.(*controller).syncHandler(0xc000dbd340, 0xc0065a5200, 0x7, 0xc032d3be58, 0x0)
/go/pkg/mod/github.com/rancher/lasso#v0.0.0-20210408231703-9ddd9378d08d/pkg/controller/controller.go:210 +0xd1
github.com/rancher/lasso/pkg/controller.(*controller).processSingleItem(0xc000dbd340, 0x3089ba0, 0xc04da70400, 0x0, 0x0)
/go/pkg/mod/github.com/rancher/lasso#v0.0.0-20210408231703-9ddd9378d08d/pkg/controller/controller.go:192 +0xe7
github.com/rancher/lasso/pkg/controller.(*controller).processNextWorkItem(0xc000dbd340, 0x203001)
/go/pkg/mod/github.com/rancher/lasso#v0.0.0-20210408231703-9ddd9378d08d/pkg/controller/controller.go:169 +0x54
github.com/rancher/lasso/pkg/controller.(*controller).runWorker(...)
/go/pkg/mod/github.com/rancher/lasso#v0.0.0-20210408231703-9ddd9378d08d/pkg/controller/controller.go:158
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc010473450)
/go/pkg/mod/k8s.io/apimachinery#v0.20.6/pkg/util/wait/wait.go:155 +0x5f
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc010473450, 0x3ed54c0, 0xc022aa9050, 0x1, 0xc0019563c0)
/go/pkg/mod/k8s.io/apimachinery#v0.20.6/pkg/util/wait/wait.go:156 +0x9b
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc010473450, 0x3b9aca00, 0x0, 0x1, 0xc0019563c0)
/go/pkg/mod/k8s.io/apimachinery#v0.20.6/pkg/util/wait/wait.go:133 +0x98
k8s.io/apimachinery/pkg/util/wait.Until(0xc010473450, 0x3b9aca00, 0xc0019563c0)
/go/pkg/mod/k8s.io/apimachinery#v0.20.6/pkg/util/wait/wait.go:90 +0x4d
created by github.com/rancher/lasso/pkg/controller.(*controller).run
/go/pkg/mod/github.com/rancher/lasso#v0.0.0-20210408231703-9ddd9378d08d/pkg/controller/controller.go:129 +0x33b
panic: runtime error: index out of range [0] with length 0 [recovered]
panic: runtime error: index out of range [0] with length 0
goroutine 1523 [running]:
k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
/go/pkg/mod/k8s.io/apimachinery#v0.20.6/pkg/util/runtime/runtime.go:55 +0x109
panic(0x3658a80, 0xc09820cba0)
/usr/local/go/src/runtime/panic.go:965 +0x1b9
github.com/rancher/eks-operator/controller.BuildUpstreamClusterState(0xc037b89ae0, 0x20, 0xc0631001c8, 0x14, 0xc0453237d0, 0xc0818104b8, 0x1, 0x1, 0xc0818104c8, 0xc062c43f00, ...)
/go/pkg/mod/github.com/rancher/eks-operator#v1.0.9/controller/eks-cluster-config-handler.go:865 +0x1931
github.com/rancher/rancher/pkg/controllers/management/clusterupstreamrefresher.BuildEKSUpstreamSpec(0x3f3d538, 0xc04cdf3050, 0xc04a2ad100, 0x30, 0xc0d89d79a0, 0x1)
/go/src/github.com/rancher/rancher/pkg/controllers/management/clusterupstreamrefresher/eks_upstream_spec.go:80 +0xc92
github.com/rancher/rancher/pkg/controllers/management/clusterupstreamrefresher.getComparableUpstreamSpec(0x3f3d538, 0xc04cdf3050, 0xc04a2ad100, 0x30, 0xc0d89d79a0, 0x1)
/go/src/github.com/rancher/rancher/pkg/controllers/management/clusterupstreamrefresher/cluster_upstream_refresher.go:263 +0x14c
github.com/rancher/rancher/pkg/controllers/management/clusterupstreamrefresher.(*clusterRefreshController).refreshClusterUpstreamSpec(0xc04cdb7740, 0xc04a2ad100, 0x38cc6bf, 0x3, 0xa303efde8, 0x59d1700, 0x1)
/go/src/github.com/rancher/rancher/pkg/controllers/management/clusterupstreamrefresher/cluster_upstream_refresher.go:147 +0xef
github.com/rancher/rancher/pkg/controllers/management/clusterupstreamrefresher.(*clusterRefreshController).onClusterChange(0xc04cdb7740, 0xc0065a5200, 0x7, 0xc04a2ad100, 0x37761a0, 0x383d380, 0x7ff83d307a98)
/go/src/github.com/rancher/rancher/pkg/controllers/management/clusterupstreamrefresher/cluster_upstream_refresher.go:75 +0x225
github.com/rancher/rancher/pkg/generated/controllers/management.cattle.io/v3.FromClusterHandlerToHandler.func1(0xc0065a5200, 0x7, 0x3ef5738, 0xc04a2ad100, 0xc04a2ad100, 0x7ff83d307a98, 0xc04a2ad100, 0x1)
/go/src/github.com/rancher/rancher/pkg/generated/controllers/management.cattle.io/v3/cluster.go:105 +0x6b
github.com/rancher/lasso/pkg/controller.SharedControllerHandlerFunc.OnChange(0xc04d858010, 0xc0065a5200, 0x7, 0x3ef5738, 0xc04a2ad100, 0x0, 0xc04a2ad100, 0x0, 0x0)
/go/pkg/mod/github.com/rancher/lasso#v0.0.0-20210408231703-9ddd9378d08d/pkg/controller/sharedcontroller.go:29 +0x4e
github.com/rancher/lasso/pkg/controller.(*SharedHandler).OnChange(0xc001b351a0, 0xc0065a5200, 0x7, 0x3ef5738, 0xc04a2ad100, 0xc022e42c01, 0x0)
/go/pkg/mod/github.com/rancher/lasso#v0.0.0-20210408231703-9ddd9378d08d/pkg/controller/sharedhandler.go:66 +0x123
github.com/rancher/lasso/pkg/controller.(*controller).syncHandler(0xc000dbd340, 0xc0065a5200, 0x7, 0xc032d3be58, 0x0)
/go/pkg/mod/github.com/rancher/lasso#v0.0.0-20210408231703-9ddd9378d08d/pkg/controller/controller.go:210 +0xd1
github.com/rancher/lasso/pkg/controller.(*controller).processSingleItem(0xc000dbd340, 0x3089ba0, 0xc04da70400, 0x0, 0x0)
/go/pkg/mod/github.com/rancher/lasso#v0.0.0-20210408231703-9ddd9378d08d/pkg/controller/controller.go:192 +0xe7
github.com/rancher/lasso/pkg/controller.(*controller).processNextWorkItem(0xc000dbd340, 0x203001)
/go/pkg/mod/github.com/rancher/lasso#v0.0.0-20210408231703-9ddd9378d08d/pkg/controller/controller.go:169 +0x54
github.com/rancher/lasso/pkg/controller.(*controller).runWorker(...)
/go/pkg/mod/github.com/rancher/lasso#v0.0.0-20210408231703-9ddd9378d08d/pkg/controller/controller.go:158
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc010473450)
/go/pkg/mod/k8s.io/apimachinery#v0.20.6/pkg/util/wait/wait.go:155 +0x5f
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc010473450, 0x3ed54c0, 0xc022aa9050, 0x1, 0xc0019563c0)
/go/pkg/mod/k8s.io/apimachinery#v0.20.6/pkg/util/wait/wait.go:156 +0x9b
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc010473450, 0x3b9aca00, 0x0, 0x1, 0xc0019563c0)
/go/pkg/mod/k8s.io/apimachinery#v0.20.6/pkg/util/wait/wait.go:133 +0x98
k8s.io/apimachinery/pkg/util/wait.Until(0xc010473450, 0x3b9aca00, 0xc0019563c0)
/go/pkg/mod/k8s.io/apimachinery#v0.20.6/pkg/util/wait/wait.go:90 +0x4d
created by github.com/rancher/lasso/pkg/controller.(*controller).run
/go/pkg/mod/github.com/rancher/lasso#v0.0.0-20210408231703-9ddd9378d08d/pkg/controller/controller.go:129 +0x33b

gofabric8 deploy command fails

I'm trying to set up a gofabric8 environment locally on my Mac on top of minikube.
Both were installed using the Homebrew package manager.
Unfortunately, I'm stuck at the very first step.
While executing the gofabric8 deploy command, I get:
panic: assignment to entry in nil map
goroutine 1 [running]:
reflect.mapassign(0x1012a8f80, 0x0, 0x140003400a0, 0x140003400b0)
/opt/homebrew/Cellar/go/1.17.2/libexec/src/runtime/map.go:1328 +0x38
reflect.Value.SetMapIndex({0x1012a8f80, 0x14000050900, 0x195}, {0x10122db40, 0x140003400a0, 0x98}, {0x1012cd440, 0x140003400b0, 0x94})
/opt/homebrew/Cellar/go/1.17.2/libexec/src/reflect/value.go:2051 +0x1e8
github.com/fabric8io/gofabric8/vendor/github.com/imdario/mergo.deepMerge({0x1012a8f80, 0x14000050900, 0x195}, {0x1012a8f80, 0x14000050720, 0x195}, 0x140008ec498, 0x1)
/private/tmp/gofabric8-20211027-33316-1hyodaj/gofabric8-0.4.176/src/github.com/fabric8io/gofabric8/vendor/github.com/imdario/mergo/merge.go:58 +0x7c4
github.com/fabric8io/gofabric8/vendor/github.com/imdario/mergo.deepMerge({0x10139d2a0, 0x140000508c0, 0x199}, {0x10139d2a0, 0x140000506e0, 0x199}, 0x140008ec498, 0x0)
/private/tmp/gofabric8-20211027-33316-1hyodaj/gofabric8-0.4.176/src/github.com/fabric8io/gofabric8/vendor/github.com/imdario/mergo/merge.go:38 +0x4e4
github.com/fabric8io/gofabric8/vendor/github.com/imdario/mergo.Merge({0x1011f6500, 0x140000508c0}, {0x1011f6500, 0x140000506e0})
/private/tmp/gofabric8-20211027-33316-1hyodaj/gofabric8-0.4.176/src/github.com/fabric8io/gofabric8/vendor/github.com/imdario/mergo/merge.go:98 +0x190
github.com/fabric8io/gofabric8/vendor/k8s.io/kubernetes/pkg/client/unversioned/clientcmd.(*DirectClientConfig).getContext(0x1400068ee60)
/private/tmp/gofabric8-20211027-33316-1hyodaj/gofabric8-0.4.176/src/github.com/fabric8io/gofabric8/vendor/k8s.io/kubernetes/pkg/client/unversioned/clientcmd/client_config.go:382 +0x18c
github.com/fabric8io/gofabric8/vendor/k8s.io/kubernetes/pkg/client/unversioned/clientcmd.(*DirectClientConfig).getAuthInfoName(0x1400068ee60)
/private/tmp/gofabric8-20211027-33316-1hyodaj/gofabric8-0.4.176/src/github.com/fabric8io/gofabric8/vendor/k8s.io/kubernetes/pkg/client/unversioned/clientcmd/client_config.go:360 +0x74
github.com/fabric8io/gofabric8/vendor/k8s.io/kubernetes/pkg/client/unversioned/clientcmd.(*DirectClientConfig).getAuthInfo(0x1400068ee60)
/private/tmp/gofabric8-20211027-33316-1hyodaj/gofabric8-0.4.176/src/github.com/fabric8io/gofabric8/vendor/k8s.io/kubernetes/pkg/client/unversioned/clientcmd/client_config.go:394 +0x44
github.com/fabric8io/gofabric8/vendor/k8s.io/kubernetes/pkg/client/unversioned/clientcmd.(*DirectClientConfig).ClientConfig(0x1400068ee60)
/private/tmp/gofabric8-20211027-33316-1hyodaj/gofabric8-0.4.176/src/github.com/fabric8io/gofabric8/vendor/k8s.io/kubernetes/pkg/client/unversioned/clientcmd/client_config.go:109 +0x44
github.com/fabric8io/gofabric8/vendor/k8s.io/kubernetes/pkg/client/unversioned/clientcmd.(*DeferredLoadingClientConfig).ClientConfig(0x140002d62d0)
/private/tmp/gofabric8-20211027-33316-1hyodaj/gofabric8-0.4.176/src/github.com/fabric8io/gofabric8/vendor/k8s.io/kubernetes/pkg/client/unversioned/clientcmd/merged_client_builder.go:105 +0x4c
github.com/fabric8io/gofabric8/vendor/k8s.io/kubernetes/pkg/kubectl/cmd/util.(*ClientCache).getDefaultConfig(0x1400007e7e0)
/private/tmp/gofabric8-20211027-33316-1hyodaj/gofabric8-0.4.176/src/github.com/fabric8io/gofabric8/vendor/k8s.io/kubernetes/pkg/kubectl/cmd/util/clientcache.go:70 +0x14c
github.com/fabric8io/gofabric8/vendor/k8s.io/kubernetes/pkg/kubectl/cmd/util.(*ClientCache).ClientConfigForVersion(0x1400007e7e0, 0x0)
/private/tmp/gofabric8-20211027-33316-1hyodaj/gofabric8-0.4.176/src/github.com/fabric8io/gofabric8/vendor/k8s.io/kubernetes/pkg/kubectl/cmd/util/clientcache.go:92 +0x5c
github.com/fabric8io/gofabric8/vendor/k8s.io/kubernetes/pkg/kubectl/cmd/util.(*ring0Factory).ClientConfig(0x140002f2720)
/private/tmp/gofabric8-20211027-33316-1hyodaj/gofabric8-0.4.176/src/github.com/fabric8io/gofabric8/vendor/k8s.io/kubernetes/pkg/kubectl/cmd/util/factory_client_access.go:180 +0x30
github.com/fabric8io/gofabric8/client.NewClient({0x1014c1800, 0x140002f2750})
/private/tmp/gofabric8-20211027-33316-1hyodaj/gofabric8-0.4.176/src/github.com/fabric8io/gofabric8/client/client.go:29 +0x30
github.com/fabric8io/gofabric8/cmds.deploy({0x1014c1800, 0x140002f2750}, {{0x0, 0x0}, {0x0, 0x0}, {0x0, 0x0}, {0x100f417a4, 0x5}, ...})
/private/tmp/gofabric8-20211027-33316-1hyodaj/gofabric8-0.4.176/src/github.com/fabric8io/gofabric8/cmds/deploy.go:316 +0x38
github.com/fabric8io/gofabric8/cmds.NewCmdDeploy.func2(0x14000273440, {0x1020ffaf0, 0x0, 0x0})
/private/tmp/gofabric8-20211027-33316-1hyodaj/gofabric8-0.4.176/src/github.com/fabric8io/gofabric8/cmds/deploy.go:246 +0x1030
github.com/fabric8io/gofabric8/vendor/github.com/spf13/cobra.(*Command).execute(0x14000273440, {0x1020ffaf0, 0x0, 0x0})
/private/tmp/gofabric8-20211027-33316-1hyodaj/gofabric8-0.4.176/src/github.com/fabric8io/gofabric8/vendor/github.com/spf13/cobra/command.go:603 +0x3bc
github.com/fabric8io/gofabric8/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0x14000254000)
/private/tmp/gofabric8-20211027-33316-1hyodaj/gofabric8-0.4.176/src/github.com/fabric8io/gofabric8/vendor/github.com/spf13/cobra/command.go:689 +0x298
github.com/fabric8io/gofabric8/vendor/github.com/spf13/cobra.(*Command).Execute(...)
/private/tmp/gofabric8-20211027-33316-1hyodaj/gofabric8-0.4.176/src/github.com/fabric8io/gofabric8/vendor/github.com/spf13/cobra/command.go:648
main.main()
/private/tmp/gofabric8-20211027-33316-1hyodaj/gofabric8-0.4.176/src/github.com/fabric8io/gofabric8/gofabric8.go:155 +0x94
Here are my environment details.
gofabric8 version
gofabric8, version 0.4.176 (branch: 'unknown', revision: 'homebrew')
build date: '20211027-19:08:10'
go version: '1.17.2'
Minikube: 'v1.25.2
commit: 362d5fdc0a3dbee389b3d3f1034e8023e72bd3a7'
Any assistance would be appreciated.
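For context, "assignment to entry in nil map" is Go's standard panic for writing into a map that was declared but never initialized; in this trace it fires inside the vendored mergo library via reflect's SetMapIndex while merging kubeconfig structures. A minimal reproduction of the underlying error (unrelated to gofabric8's own logic):

package main

import "fmt"

func main() {
    var m map[string]string // nil map: declared but never initialized

    // m["context"] = "minikube" // would panic: assignment to entry in nil map

    m = make(map[string]string) // initializing with make() fixes it
    m["context"] = "minikube"
    fmt.Println(m)
}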

proto.Marshal out of memory

We use github.com/gogo/protobuf, and the service crashes in proto.Marshal() almost once a day. I did not receive any memory alarm, so this looks like a memory leak. Has anyone encountered this situation?
fatal error: runtime: out of memory
runtime stack:
runtime.throw(0xe25648, 0x16)
/usr/local/go/src/runtime/panic.go:617 +0x72
runtime.sysMap(0xc5cc000000, 0xc490000000, 0x18671b8)
/usr/local/go/src/runtime/mem_linux.go:170 +0xc7
runtime.(*mheap).sysAlloc(0x184e180, 0xc48d3d8000, 0x184e190, 0x62469ec)
/usr/local/go/src/runtime/malloc.go:633 +0x1cd
runtime.(*mheap).grow(0x184e180, 0x62469ec, 0x0)
/usr/local/go/src/runtime/mheap.go:1222 +0x42
runtime.(*mheap).allocSpanLocked(0x184e180, 0x62469ec, 0x18671c8, 0x7f213328e6d8)
/usr/local/go/src/runtime/mheap.go:1150 +0x37f
runtime.(*mheap).alloc_m(0x184e180, 0x62469ec, 0x7f21bed20101, 0x184e190)
/usr/local/go/src/runtime/mheap.go:977 +0xc2
runtime.(*mheap).alloc.func1()
/usr/local/go/src/runtime/mheap.go:1048 +0x4c
runtime.(*mheap).alloc(0x184e180, 0x62469ec, 0xc000000101, 0x7f21bed2a0e0)
/usr/local/go/src/runtime/mheap.go:1047 +0x8a
runtime.largeAlloc(0xc48d3d8000, 0xffffffffffff0100, 0x7f21bed2a0e0)
/usr/local/go/src/runtime/malloc.go:1055 +0x99
runtime.mallocgc.func1()
/usr/local/go/src/runtime/malloc.go:950 +0x46
runtime.systemstack(0x0)
/usr/local/go/src/runtime/asm_amd64.s:351 +0x66
runtime.mstart()
/usr/local/go/src/runtime/proc.go:1153
goroutine 521107 [running]:
runtime.systemstack_switch()
/usr/local/go/src/runtime/asm_amd64.s:311 fp=0xc0982eae30 sp=0xc0982eae28 pc=0x45cdf0
runtime.mallocgc(0xc48d3d8000, 0x0, 0x1700, 0xc492070000)
/usr/local/go/src/runtime/malloc.go:949 +0x872 fp=0xc0982eaed0 sp=0xc0982eae30 pc=0x40e6d2
runtime.growslice(0xca6f00, 0xc492070000, 0x1c43, 0x2000, 0xc48d3d6e73, 0x0, 0x0, 0x0)
/usr/local/go/src/runtime/slice.go:175 +0x151 fp=0xc0982eaf38 sp=0xc0982eaed0 pc=0x4462a1
xxxx/vendor/github.com/gogo/protobuf/proto.(*Buffer).EncodeStringBytes(...)
xxxx/vendor/github.com/gogo/protobuf/proto.Marshal(0xf58840, 0xc45eda2a00, 0x2d, 0x1, 0xc1e72e7801, 0x1, 0xc4860713e0)
/opt/go/src/xxxx/vendor/github.com/gogo/protobuf/proto/encode.go:236 +0x92 fp=0xc0982eb9a8 sp=0xc0982eb960 pc=0x79dd12
xxxx/feature_server_user/get_feature.(*GetUserFeatureAction).Handler(0x1864ca8, 0xc46a746a00, 0x17, 0x200, 0x1, 0x0, 0x0, 0x0, 0x0, 0x0)
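Note the size in the goroutine stack: runtime.mallocgc is being asked for 0xc48d3d8000 bytes, roughly 844 GB, while growing the encode buffer. That points at a single pathologically large (or corrupted) message rather than a slow leak, and a fatal "runtime: out of memory" cannot be caught with recover(). A defensive sketch, assuming gogo's proto.Size API and a hypothetical per-message cap:

package main

import (
    "fmt"

    "github.com/gogo/protobuf/proto"
    "github.com/gogo/protobuf/types"
)

// maxMessageSize is a hypothetical cap; tune it to the service's real limits.
const maxMessageSize = 64 << 20 // 64 MB

// safeMarshal computes the encoded size first, so an absurdly large
// message is rejected with an ordinary error instead of a fatal OOM
// inside proto.Marshal.
func safeMarshal(msg proto.Message) ([]byte, error) {
    if size := proto.Size(msg); size > maxMessageSize {
        return nil, fmt.Errorf("refusing to marshal %d-byte message", size)
    }
    return proto.Marshal(msg)
}

func main() {
    b, err := safeMarshal(&types.StringValue{Value: "hello"})
    fmt.Println(len(b), err)
}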

runtime: out of memory with memory still available

An extremely weird problem happened recently. My program was running in a Docker container with 8 GB of memory, and 4 GB was in use when the panic happened. There was more than 4 GB of memory available, so why could this happen? Below is the panic stack output.
# ulimit -m
unlimited
# go version
1.6.2
I tried running the getAllCombinationComplex code path repeatedly, and memory usage grew to about 5 GB, but the panic did not happen again.
fatal error: runtime: out of memory
runtime stack:
runtime.throw(0xd28a90, 0x16)
/usr/local/go1.6.2/src/runtime/panic.go:547 +0x90
runtime.sysMap(0xf409e80000, 0x3ecc390000, 0x434a00, 0x1080918)
/usr/local/go1.6.2/src/runtime/mem_linux.go:206 +0x9b
runtime.(*mheap).sysAlloc(0x1066280, 0x3ecc390000, 0xdfc4d6d680)
/usr/local/go1.6.2/src/runtime/malloc.go:429 +0x191
runtime.(*mheap).grow(0x1066280, 0x1f661c8, 0x0)
/usr/local/go1.6.2/src/runtime/mheap.go:651 +0x63
runtime.(*mheap).allocSpanLocked(0x1066280, 0x1f661c4, 0x10677f0)
/usr/local/go1.6.2/src/runtime/mheap.go:553 +0x4f6
runtime.(*mheap).alloc_m(0x1066280, 0x1f661c4, 0xffffff0100000000, 0x7ff85dbfddd0)
/usr/local/go1.6.2/src/runtime/mheap.go:437 +0x119
runtime.(*mheap).alloc.func1()
/usr/local/go1.6.2/src/runtime/mheap.go:502 +0x41
runtime.systemstack(0x7ff85dbfdde8)
/usr/local/go1.6.2/src/runtime/asm_amd64.s:307 +0xab
runtime.(*mheap).alloc(0x1066280, 0x1f661c4, 0x10100000000, 0x41587c)
/usr/local/go1.6.2/src/runtime/mheap.go:503 +0x63
runtime.largeAlloc(0x3ecc386800, 0x7ff800000000, 0x6)
/usr/local/go1.6.2/src/runtime/malloc.go:766 +0xb3
runtime.mallocgc.func3()
/usr/local/go1.6.2/src/runtime/malloc.go:664 +0x33
runtime.systemstack(0xc820028000)
/usr/local/go1.6.2/src/runtime/asm_amd64.s:291 +0x79
runtime.mstart()
/usr/local/go1.6.2/src/runtime/proc.go:1051
goroutine 63798322149 [running]:
runtime.systemstack_switch()
/usr/local/go1.6.2/src/runtime/asm_amd64.s:245 fp=0xea27f3c798 sp=0xea27f3c790
runtime.mallocgc(0x3ecc386800, 0xa572e0, 0x0, 0xec6c1e8000)
/usr/local/go1.6.2/src/runtime/malloc.go:665 +0x9eb fp=0xea27f3c870 sp=0xea27f3c798
runtime.newarray(0xa572e0, 0x3ecc38680, 0x2)
/usr/local/go1.6.2/src/runtime/malloc.go:798 +0xc9 fp=0xea27f3c8b0 sp=0xea27f3c870
runtime.makeslice(0xa38840, 0x3ecc38680, 0x3ecc38680, 0x0, 0x0, 0x0)
/usr/local/go1.6.2/src/runtime/slice.go:32 +0x165 fp=0xea27f3c900 sp=0xea27f3c8b0
indexdb/db_template.XCludeList.getAllCombinationComplex(0xea4382c480, 0x8, 0x9, 0xe4ec1dc460, 0x4, 0x4, 0x0, 0x0, 0x0)
/home/scmbuild/workspaces_cluster/index/src/indexdb/db_template/db.go:145 +0x371 fp=0xea27f3ca88 sp=0xea27f3c900
indexdb/db_template.XCludeList.getAllCombinationComplex(0xea4382c480, 0x8, 0x9, 0xe4ec1dc438, 0x5, 0x5, 0x0, 0x0, 0x0)
/home/scmbuild/workspaces_cluster/index/src/indexdb/db_template/db.go:141 +0x2ee fp=0xea27f3cc10 sp=0xea27f3ca88
indexdb/db_template.XCludeList.getAllCombinationComplex(0xea4382c480, 0x8, 0x9, 0xe4ec1dc410, 0x6, 0x6, 0x0, 0x0, 0x0)
/home/scmbuild/workspaces_cluster/index/src/indexdb/db_template/db.go:141 +0x2ee fp=0xea27f3cd98 sp=0xea27f3cc10
indexdb/db_template.XCludeList.getAllCombinationComplex(0xea4382c480, 0x8, 0x9, 0xe4ec1dc3e8, 0x7, 0x7, 0x0, 0x0, 0x0)
/home/scmbuild/workspaces_cluster/index/src/indexdb/db_template/db.go:141 +0x2ee fp=0xea27f3cf20 sp=0xea27f3cd98
indexdb/db_template.XCludeList.getAllCombinationComplex(0xea4382c480, 0x8, 0x9, 0xe4ec1dc3c0, 0x8, 0x8, 0x0, 0x0, 0x0)
/home/scmbuild/workspaces_cluster/index/src/indexdb/db_template/db.go:141 +0x2ee fp=0xea27f3d0a8 sp=0xea27f3cf20
indexdb/db_template.XCludeList.GetAllCombinationString(0xea4382c480, 0x8, 0x9, 0x0, 0x0, 0x0)
/home/scmbuild/workspaces_cluster/index/src/indexdb/db_template/db.go:114 +0x530 fp=0xea27f3d398 sp=0xea27f3d0a8
indexdb/mem.MemDB.QueryCountersFullMatchByTags(0xead2bf0003, 0x38, 0xead2bf0043, 0x10, 0xea4382c480, 0x8, 0x9, 0x0, 0x0, 0x0, ...)
/home/scmbuild/workspaces_cluster/index/src/indexdb/mem/mem.go:391 +0x1f5 fp=0xea27f3d670 sp=0xea27f3d398
indexdb/mem.(*MemDB).QueryCountersFullMatchByTags(0x107e7b8, 0xead2bf0003, 0x38, 0xead2bf0043, 0x10, 0xea4382c480, 0x8, 0x9, 0x0, 0x0, ...)
<autogenerated>:10 +0x112 fp=0xea27f3d6d8 sp=0xea27f3d670
router.QCountersFullMatchByTags(0xd2745f5ef0)
/home/scmbuild/workspaces_cluster/index/src/router/http.go:505 +0x54a fp=0xea27f3d8d8 sp=0xea27f3d6d8
github.com/gin-gonic/gin.(*Context).Next(0xd2745f5ef0)
/home/scmbuild/workspaces_cluster/index/deps/src/github.com/gin-gonic/gin/context.go:110 +0x7a fp=0xea27f3d908 sp=0xea27f3d8d8
github.com/gin-gonic/gin.RecoveryWithWriter.func1(0xd2745f5ef0)
/home/scmbuild/workspaces_cluster/index/deps/src/github.com/gin-gonic/gin/recovery.go:45 +0x51 fp=0xea27f3d930 sp=0xea27f3d908
github.com/gin-gonic/gin.(*Context).Next(0xd2745f5ef0)
/home/scmbuild/workspaces_cluster/index/deps/src/github.com/gin-gonic/gin/context.go:110 +0x7a fp=0xea27f3d960 sp=0xea27f3d930
github.com/gin-gonic/gin.(*Engine).handleHTTPRequest(0xd4d0bd2360, 0xd2745f5ef0)
/home/scmbuild/workspaces_cluster/index/deps/src/github.com/gin-gonic/gin/gin.go:337 +0x2fd fp=0xea27f3dae8 sp=0xea27f3d960
github.com/gin-gonic/gin.(*Engine).ServeHTTP(0xd4d0bd2360, 0x7ff7611f00b0, 0xdb6777f1e0, 0xd6e25a0700)
/home/scmbuild/workspaces_cluster/index/deps/src/github.com/gin-gonic/gin/gin.go:301 +0x197 fp=0xea27f3db60 sp=0xea27f3dae8
net/http.serverHandler.ServeHTTP(0xd129858080, 0x7ff7611f00b0, 0xdb6777f1e0, 0xd6e25a0700)
/usr/local/go1.6.2/src/net/http/server.go:2081 +0x19e fp=0xea27f3dbc0 sp=0xea27f3db60
net/http.(*conn).serve(0xce774f1400)
/usr/local/go1.6.2/src/net/http/server.go:1472 +0xf2e fp=0xea27f3df88 sp=0xea27f3dbc0
runtime.goexit()
/usr/local/go1.6.2/src/runtime/asm_amd64.s:1998 +0x1 fp=0xea27f3df90 sp=0xea27f3df88
created by net/http.(*Server).Serve
/usr/local/go1.6.2/src/net/http/server.go:2137 +0x44e
Analyzing the stack trace, it looks like you are attempting to allocate a far-too-large slice here, in db_template.XCludeList.getAllCombinationComplex():
indexdb/db_template.XCludeList.getAllCombinationComplex(0xea4382c480, 0x8, 0x9, 0xe4ec1dc460, 0x4, 0x4, 0x0, 0x0, 0x0)
/home/scmbuild/workspaces_cluster/index/src/indexdb/db_template/db.go:145 +0x371 fp=0xea27f3ca88 sp=0xea27f3c900
This calls into runtime.makeslice():
runtime.makeslice(0xa38840, 0x3ecc38680, 0x3ecc38680, 0x0, 0x0, 0x0)
/usr/local/go1.6.2/src/runtime/slice.go:32 +0x165 fp=0xea27f3c900 sp=0xea27f3c8b0
The source code for runtime.makeslice() for Go 1.6 is here: slice.go:
func makeslice(t *slicetype, len64, cap64 int64) slice {
...
}
And it is called with params:
runtime.makeslice(0xa38840, 0x3ecc38680, 0x3ecc38680, 0x0, 0x0, 0x0)
The second value is the length, which is
0x3ecc38680 = 16857138816
You are attempting to create a slice with more than 16×10⁹ elements. Even if the element type of the slice required only the minimum of 1 byte (excluding zero-sized types), this would be roughly 16 GB, and that is just a lower bound. Obviously this operation cannot succeed with 8 GB of RAM available.
Also, please update your Go: 1.6.2 is almost 3 years old and no longer supported (it doesn't even receive security patches).
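Since a fatal "runtime: out of memory" aborts the process and cannot be intercepted with recover(), the practical fix is to bound the combination count before allocating. A minimal sketch with a hypothetical limit (the names are illustrative, not from the original code):

package main

import "fmt"

// maxCombinations is a hypothetical upper bound; choose one that fits
// in the memory actually available to the process.
const maxCombinations = 1 << 24

// allocCombinations validates n before make() can request tens of GB.
func allocCombinations(n int) ([]string, error) {
    if n < 0 || n > maxCombinations {
        return nil, fmt.Errorf("combination count %d out of range", n)
    }
    return make([]string, 0, n), nil
}

func main() {
    _, err := allocCombinations(16857138816) // the length seen in the trace
    fmt.Println(err)
}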

Why does Go 1.8 fail to allocate the heap bitmap on armv7 when importing zmq4?

I'm using Go 1.8 on an armv7 target, and everything works until I import ZeroMQ; then not even this works:
package main

import (
    zmq "github.com/pebbe/zmq4"

    "fmt"
)

func main() {
    fmt.Println(zmq.Version())
}
Running it on the target gives the following:
./zmq
fatal error: runtime: out of memory
runtime stack:
runtime.throw(0xc8d3d, 0x16)
/usr/lib/go/src/runtime/panic.go:596 +0x70 fp=0x7eb018a0 sp=0x7eb01894
runtime.sysMap(0x6d752000, 0x74d4000, 0x1, 0x146b60)
/usr/lib/go/src/runtime/mem_linux.go:227 +0xb0 fp=0x7eb018c4 sp=0x7eb018a0
runtime.(*mheap).mapBits(0x139d58, 0x74d26000)
/usr/lib/go/src/runtime/mbitmap.go:159 +0x94 fp=0x7eb018dc sp=0x7eb018c4
runtime.(*mheap).sysAlloc(0x139d58, 0x100000, 0x30414)
/usr/lib/go/src/runtime/malloc.go:428 +0x2dc fp=0x7eb0191c sp=0x7eb018dc
runtime.(*mheap).grow(0x139d58, 0x8, 0x0)
/usr/lib/go/src/runtime/mheap.go:774 +0xd0 fp=0x7eb0194c sp=0x7eb0191c
runtime.(*mheap).allocSpanLocked(0x139d58, 0x1, 0x0)
/usr/lib/go/src/runtime/mheap.go:678 +0x468 fp=0x7eb0196c sp=0x7eb0194c
runtime.(*mheap).alloc_m(0x139d58, 0x1, 0x10, 0x0, 0x0)
/usr/lib/go/src/runtime/mheap.go:562 +0xdc fp=0x7eb019a4 sp=0x7eb0196c
runtime.(*mheap).alloc.func1()
/usr/lib/go/src/runtime/mheap.go:627 +0x3c fp=0x7eb019c0 sp=0x7eb019a4
runtime.systemstack(0x7eb019d4)
/usr/lib/go/src/runtime/asm_arm.s:278 +0xa8 fp=0x7eb019c4 sp=0x7eb019c0
runtime.(*mheap).alloc(0x139d58, 0x1, 0x10, 0x100, 0x1)
/usr/lib/go/src/runtime/mheap.go:628 +0x60 fp=0x7eb019ec sp=0x7eb019c4
runtime.(*mcentral).grow(0x13ab98, 0x0)
/usr/lib/go/src/runtime/mcentral.go:212 +0x84 fp=0x7eb01a18 sp=0x7eb019ec
runtime.(*mcentral).cacheSpan(0x13ab98, 0x146b68)
/usr/lib/go/src/runtime/mcentral.go:93 +0x104 fp=0x7eb01a5c sp=0x7eb01a18
runtime.(*mcache).refill(0x649e5000, 0x10, 0x146b68)
/usr/lib/go/src/runtime/mcache.go:122 +0x7c fp=0x7eb01a70 sp=0x7eb01a5c
runtime.(*mcache).nextFree.func1()
/usr/lib/go/src/runtime/malloc.go:525 +0x24 fp=0x7eb01a80 sp=0x7eb01a70
runtime.systemstack(0x7eb01aa4)
/usr/lib/go/src/runtime/asm_arm.s:278 +0xa8 fp=0x7eb01a84 sp=0x7eb01a80
runtime.(*mcache).nextFree(0x649e5000, 0x649e5010, 0x7eb01acc, 0x1ebd4, 0x21c40)
/usr/lib/go/src/runtime/malloc.go:526 +0x9c fp=0x7eb01ab0 sp=0x7eb01a84
runtime.mallocgc(0xf0, 0xc2548, 0x1, 0x64a26001)
/usr/lib/go/src/runtime/malloc.go:678 +0x8c0 fp=0x7eb01b08 sp=0x7eb01ab0
runtime.newobject(0xc2548, 0x1384a0)
/usr/lib/go/src/runtime/malloc.go:807 +0x2c fp=0x7eb01b1c sp=0x7eb01b08
runtime.malg(0x8000, 0x2710)
/usr/lib/go/src/runtime/proc.go:2821 +0x1c fp=0x7eb01b38 sp=0x7eb01b1c
runtime.mpreinit(0x138730)
/usr/lib/go/src/runtime/os_linux.go:302 +0x1c fp=0x7eb01b44 sp=0x7eb01b38
runtime.mcommoninit(0x138730)
/usr/lib/go/src/runtime/proc.go:540 +0x94 fp=0x7eb01b5c sp=0x7eb01b44
runtime.schedinit()
/usr/lib/go/src/runtime/proc.go:476 +0x40 fp=0x7eb01b78 sp=0x7eb01b5c
runtime.rt0_go(0x7eb01d14, 0x76ee4000, 0x7eb01d14, 0x1, 0x5cd10, 0x7eb01c58, 0x76f8e000, 0xcef6f281, 0xc6a63e08, 0x128, ...)
/usr/lib/go/src/runtime/asm_arm.s:61 +0x84 fp=0x7eb01bb8 sp=0x7eb01b78
Tracing shows the offending mmap:
mmap2(0x74c54000, 1048576, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x74c54000
mmap2(0x6d77e000, 122511360, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = -1 ENOMEM (Cannot allocate memory)
The first one is fine; that's the chunk size. The second is where it allocates the heap bitmap, and it should probably be about 16x smaller.
I tried debugging, but it works under GDB. Maybe GDB does some virtualization?
gdbserver :1000 zmq
Process zmq created; pid = 817
Listening on port 1000
Remote debugging from host 10.0.0.36
4 2 1
Child exited with status 0
So far, all pure Go programs I've tested work as well.
Why does importing ZeroMQ cause it to fail? Is there a bug in the ARM port of Go?
Fixed by https://github.com/golang/go/commit/bb6309cd63b35a81a8527efaad58847a83039947
libzmq must contain global constructors that adjust the process memory map before the Go runtime starts.
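For anyone wondering how a mere import can change the memory map: cgo links the C library into the binary, and any C global constructor it registers runs before the Go runtime initializes its heap. A minimal illustration of the mechanism (hypothetical code, not what libzmq actually does):

package main

/*
#include <stdio.h>

// A C global constructor runs before the Go runtime's schedinit(),
// so it can allocate memory or otherwise reshape the address space
// before Go reserves its heap and bitmap.
__attribute__((constructor))
static void before_go_runtime(void) {
    puts("C constructor ran first");
}
*/
import "C"

import "fmt"

func main() {
    fmt.Println("Go runtime started after the C constructor")
}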
