Using Golang with Gin, pgxpool and issue when connecting from docker container - cockroachdb

I have written a simple Go CRUD example that connects to CockroachDB using pgxpool/pgx.
All the CRUD operations are exposed as a REST API using the Gin framework.
Using curl or Postman, the operations (GET/POST/DELETE) work fine and the data is reflected in the database.
Next I dockerized this simple app and tried to run it. The application seems to get stuck in the code below:
func Connection(conn_string string) gin.HandlerFunc {
	log.Println("Connection: 0", conn_string)
	config, err := pgxpool.ParseConfig(conn_string)
	if err != nil {
		log.Fatal(err)
	}
	// Note: config may only be used after the error check above,
	// since it is nil when ParseConfig fails.
	log.Println("Connection: 1", config.ConnString())
	log.Println("Connection: 2")
	pool, err := pgxpool.ConnectConfig(context.Background(), config) // gets stuck here
	if err != nil {
		log.Fatal(err)
	}
	log.Println("Connection: 3")
	return func(c *gin.Context) {
		c.Set("pool", pool)
		c.Next()
	}
}
The code freezes after printing Connection: 2 at the line
pool, err := pgxpool.ConnectConfig(context.Background(), config)
After a few minutes, I get this error:
FATA[0120] failed to connect to host=192.165.xx.xxx user=user_name database=dbname: dial error (timeout: dial tcp 192.165.xx.xxx:5432: i/o timeout).
Below is my Dockerfile:
FROM golang as builder
WORKDIR /catalog
COPY main.go ./
COPY go.mod ./
COPY go.sum ./
RUN go get .
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o catalog .
# deployment image
FROM scratch
#FROM alpine:3.17.1
# copy ca-certificates from builder
COPY --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/ca-certificates.crt
WORKDIR /bin/
COPY --from=builder /catalog .
CMD [ "./catalog" ]
#CMD go run /catalog/main.go
EXPOSE 8080
Note: I tried getting into the container shell and could ping the target IP 192.165.xx.xxx.
Please let me know why pgxpool fails to connect to the DB from the Docker container but works on the host (Ubuntu) without any issue.

Update-2: The real issue was passing the arguments while starting the application. Once the arguments were passed correctly, it started working.
Update-1: I still saw issues while running the query and could reproduce them outside Docker as well.
I could fix it by upgrading to pgxpool v5 instead of v4.
All I did was
go get -u github.com/jackc/pgx/v5/pgxpool, updating the import in the code as well,
and it worked as expected.
This could be a known bug, but I could not find any related issue to include in this post.
Below is the final working code:
func Connection(conn_string string) gin.HandlerFunc {
	log.Println("Connection: 0", conn_string)
	config, err := pgxpool.ParseConfig(conn_string)
	if err != nil {
		log.Fatal(err)
	}
	log.Println("Connection: 1", config.ConnString())
	log.Println("Connection: 2")
	//pool, err := pgxpool.ConnectConfig(context.Background(), config) // v4
	pool, err := pgxpool.NewWithConfig(context.Background(), config) // v5
	if err != nil {
		log.Fatal(err)
	}
	log.Println("Connection: 3")
	return func(c *gin.Context) {
		c.Set("pool", pool)
		c.Next()
	}
}
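Separately from the fix above, pgx lets you bound the connection attempt so a misconfigured address fails fast with a clear dial error instead of blocking for minutes. A minimal sketch, assuming pgx v5; the DSN here is a placeholder built from the host in this question:

```go
package main

import (
	"context"
	"log"
	"time"

	"github.com/jackc/pgx/v5/pgxpool"
)

func main() {
	config, err := pgxpool.ParseConfig("postgres://user_name:secret@192.165.xx.xxx:5432/dbname")
	if err != nil {
		log.Fatal(err)
	}
	// Fail the TCP/TLS handshake after 5s instead of hanging.
	config.ConnConfig.ConnectTimeout = 5 * time.Second

	// v5 pools connect lazily, so bound an explicit Ping to verify connectivity.
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	pool, err := pgxpool.NewWithConfig(ctx, config)
	if err != nil {
		log.Fatal(err)
	}
	defer pool.Close()

	if err := pool.Ping(ctx); err != nil {
		log.Fatal(err) // surfaces quickly, e.g. "dial tcp ...: i/o timeout"
	}
	log.Println("connected")
}
```

With a bounded timeout, an unreachable host is reported within seconds, which makes the "stuck after Connection: 2" symptom much easier to diagnose.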

Related

Datastore calls in Trace Golang

When I was using go111, I had traces of all my Datastore calls (similar to the image below). But as soon as I upgraded to go115 and started using cloud.google.com/go/datastore, I lost this information completely. I tried to set up telemetry by adding this in my main:
projectID := os.Getenv("GOOGLE_CLOUD_PROJECT")
exporter, err := texporter.NewExporter(texporter.WithProjectID(projectID))
if err != nil {
	log.Fatalf("texporter.NewExporter of '%v': %v", projectID, err)
}
tp := sdktrace.NewTracerProvider(sdktrace.WithBatcher(exporter))
defer tp.ForceFlush(bgCtx)
otel.SetTracerProvider(tp)
But this didn't work. Am I missing anything to tell the datastore library to export those calls?
Thank you!
I finally found https://github.com/GoogleCloudPlatform/golang-samples/blob/master/trace/trace_quickstart/main.go
and realized I was missing the following:
trace.RegisterExporter(exporter)
This solved my problem. Then, on localhost, I also added the following so that every request is sampled:
trace.ApplyConfig(trace.Config{DefaultSampler: trace.AlwaysSample()})
And to make sure all incoming requests are traced, I wrapped my handler:
httpHandler := &ochttp.Handler{
	// Use the Google Cloud propagation format.
	Propagation: &propagation.HTTPFormat{},
}
if err := http.ListenAndServe(":"+port, httpHandler); err != nil {
	log.Fatal(err)
}

dial tcp i/o timeout with HTTP GET request

I'm running into an error and must be overlooking something.
How can I debug this? Is something dropping connections?
I read the following:
golang - Why net.DialTimeout get timeout half of the time?
Go. Get error i/o timeout in server program
golang get massive read tcp ip:port i/o timeout in ubuntu 14.04 LTS
Locating the "read tcp" error in the Go source code
Getting sporadic "http: proxy error: read tcp: i/o timeout" on Heroku
Error created here:
https://github.com/golang/go/blob/b115207baf6c2decc3820ada4574ef4e5ad940ec/src/net/net.go#L179
Goal:
Send a Get request to a url.
Expected result:
return body in JSON.
Encountered problem:
I/O timeout
It works in Postman
Edit:
I added a modified timeout...
Edit2: traced error
Postman request:
GET /v2/XRP-EUR/candles?interval=1h HTTP/1.1
Host: api.bitvavo.com
Postman Result (1440 rows):
[
[
1609632000000,
"0.17795",
"0.17795",
"0.17541",
"0.17592",
"199399.874013"
],
[
1609628400000,
"0.17937",
"0.18006",
"0.17622",
"0.17852",
"599402.631894"
],
[
1609624800000,
"0.18167",
"0.18167",
"0.17724",
"0.17984",
"579217.962574"
],.....
Code:
package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
	"time"
)

func main() {
	url := "https://api.bitvavo.com/v2/XRP-EUR/candles?interval=1h"
	method := "GET"

	client := &http.Client{
		Timeout: time.Second * 60,
	}

	req, err := http.NewRequest(method, url, nil)
	if err != nil {
		fmt.Println(err)
		return
	}
	res, err := client.Do(req)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer res.Body.Close()

	body, err := ioutil.ReadAll(res.Body)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(string(body))
}
result:
Get "https://api.bitvavo.com/v2/XRP-EUR/candles?interval=1h": dial tcp 65.9.73.10:443: i/o timeout
I hit this issue when building inside Docker containers.
Not sure why, but after a docker swarm leave --force and a systemctl restart docker, the build worked.
In my case it was the local environment: a firewall was not allowing Go to dial TCP.
It still allowed the URL to be resolved to an IP, though (DNS).
Solution:
Change the local firewall settings,
Check Docker/Kubernetes/reverse-proxy settings

OpenShift API - cannot use config

I am trying to connect to an OpenShift/K8s cluster from inside a running pod via the Go API, following the tutorial from here.
Currently I have a problem creating the OpenShift build client, whose constructor takes a previously created rest.InClusterConfig() as an argument. This should work, since it is shown in the example, but I get this error:
cannot use restconfig (type *"k8s.io/client-go/rest".Config) as type *"github.com/openshift/client-go/vendor/k8s.io/client-go/rest".Config in argument to "github.com/openshift/client-go/build/clientset/versioned/typed/build/v1".NewForConfig
I am a little confused, since rest.InClusterConfig() returns a *Config. This is accepted by corev1client.NewForConfig(), which expects a *rest.Config. But buildv1client.NewForConfig() also expects a *rest.Config - just not exactly the restconfig I am creating with rest.InClusterConfig().
Where is my mistake? Bonus points for: I am taking my first steps with the API, and all it should do is generate a second pod from an image with some parameters applied. Do I need the buildv1client client? This is pretty much Kubernetes core functionality.
The problem is that the package exists both in the vendored folder vendor/ and on your $GOPATH, so the compiler treats the two copies of rest.Config as distinct types. Vendoring "github.com/openshift/client-go" should solve your problem.
To answer your second question, for the use case you have described, not really. If you want to create an OpenShift build then yes you need to use the client as this API object does not exist in Kubernetes. If you want to simply create a Pod then you don't need the build client. A simple example for the API reference might look as follows:
package main

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	corev1client "k8s.io/client-go/kubernetes/typed/core/v1"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfig := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
		clientcmd.NewDefaultClientConfigLoadingRules(),
		&clientcmd.ConfigOverrides{},
	)
	namespace, _, err := kubeconfig.Namespace()
	if err != nil {
		panic(err)
	}
	restconfig, err := kubeconfig.ClientConfig()
	if err != nil {
		panic(err)
	}
	coreclient, err := corev1client.NewForConfig(restconfig)
	if err != nil {
		panic(err)
	}
	_, err = coreclient.Pods(namespace).Create(&v1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name: "example",
		},
		Spec: v1.PodSpec{
			Containers: []v1.Container{
				{
					Name:    "ubuntu",
					Image:   "ubuntu:trusty",
					Command: []string{"echo"},
					Args:    []string{"Hello World"},
				},
			},
		},
	})
	if err != nil {
		panic(err)
	}
}

go: randomly terminate pods in kubernetes cluster

I would like to randomly shut down pods in a Kubernetes cluster with Go. I already wrote code that lets me log in to the server and run commands.
Now I need to read all the available pods in the cluster, choose some randomly, and terminate them. (I am new to Go.)
Could you please help me with this?
This is how I am running commands on the cluster/server:
cli.ExecuteCmd("kubectl get pods")

// Use one connection per command.
// Catch in the client when required.
func (cli *SSHClient) ExecuteCmd(command string) {
	conn, err := ssh.Dial("tcp", cli.Hostname+":22", cli.Config)
	if err != nil {
		logrus.Infof("%s#%s", cli.Config.User, cli.Hostname)
		logrus.Info("Hint: Add your key to the ssh agent: 'ssh-add ~/.ssh/id_rsa'")
		logrus.Fatal(err)
	}
	session, _ := conn.NewSession()
	defer session.Close()

	var stdoutBuf bytes.Buffer
	session.Stdout = &stdoutBuf
	err = session.Run(command)
	if err != nil {
		logrus.Fatalf("Run failed: %v", err)
	}
	logrus.Infof("> %s", stdoutBuf.Bytes())
}
Use the k8s.io/client-go package (GitHub link) to list the Kubernetes pods, then delete some of them at random.
The client.CoreV1().Pods() methods provide both List and Delete.
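A rough sketch of that approach with client-go, assuming the program runs inside the cluster and targets the "default" namespace (both assumptions you would adapt):

```go
package main

import (
	"context"
	"fmt"
	"math/rand"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// In-cluster config; use clientcmd.BuildConfigFromFlags with a
	// kubeconfig file when running outside the cluster.
	config, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	ctx := context.Background()
	pods, err := clientset.CoreV1().Pods("default").List(ctx, metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	if len(pods.Items) == 0 {
		fmt.Println("no pods found")
		return
	}

	// Pick one pod at random and delete it.
	victim := pods.Items[rand.Intn(len(pods.Items))]
	err = clientset.CoreV1().Pods(victim.Namespace).Delete(ctx, victim.Name, metav1.DeleteOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("deleted pod", victim.Name)
}
```

Using the API directly avoids shelling out to kubectl over SSH, and the service account the pod runs under only needs list/delete permissions on pods.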

Travis CI failing when trying to test Golang HTTP Server

I'm very new to Travis and Go. I have a test for an HTTPS server; it runs fine when I run go test -v ./... on my local machine, but it fails most of the time on Travis with a getsockopt: connection refused error when trying to connect to the server. It should be listening on https://localhost:8081. Is there something in my .travis.yml I can do to prevent this?
Here is my .travis.yml
language: go
go:
- 1.6
- tip
matrix:
allow_failures:
- go: tip
before_install:
- go get -v github.com/golang/lint/golint
install:
- go get -v -d -t ./...
Here's my server creation code:
func (webserver *WebServer) Start(keyLocation string, certLocation string) <-chan error {
	errors := make(chan error, 1)
	go func() {
		defer close(errors)
		errors <- http.ListenAndServeTLS(fmt.Sprintf(":%v", webserver.config.WebServerPort), certLocation, keyLocation, nil)
	}()
	return errors
}
And the client code:
func createHTTPClient(t *testing.T) *http.Client {
	t.Log("Creating a test client...")
	tr := &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}
	t.Log("Created a test client")
	return &http.Client{Transport: tr}
}
Sample request with client
request, _ := http.NewRequest(httpmethod, fmt.Sprintf("https://localhost:%d/token", port), nil)
client.Do(request)
Sample starting the server in a test
errors := server.Start(testKeyLocation, testCertLocation)

// Handle errors from the server.
go func() {
	select {
	case err := <-errors:
		if err != nil {
			t.Fatalf("Error with server: %s", err.Error())
		}
	}
}()
You have no synchronization between starting the server and trying to connect. Adding a time.Sleep after starting the server should highlight the issue.
One way to reduce the window where the server isn't ready is to create the net.Listener synchronously, and then add the open listener to the http.Server config before starting the server. The httptest.Server can do this for you, as well as bind to random ports to prevent conflicts during tests, and using local test TLS certificates.
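A sketch of the httptest.Server approach using only the standard library; the /token handler here is a stand-in for the real one:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
)

// fetchToken spins up a TLS test server on a random port, issues a GET
// against it, and returns the response body. httptest.NewTLSServer only
// returns once the listener is open, so there is no startup race to
// synchronize against.
func fetchToken() (string, error) {
	mux := http.NewServeMux()
	mux.HandleFunc("/token", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprint(w, "ok")
	})
	ts := httptest.NewTLSServer(mux)
	defer ts.Close()

	// ts.Client() already trusts the test server's self-signed
	// certificate, so no InsecureSkipVerify is needed.
	res, err := ts.Client().Get(ts.URL + "/token")
	if err != nil {
		return "", err
	}
	defer res.Body.Close()
	body, err := io.ReadAll(res.Body)
	return string(body), err
}

func main() {
	body, err := fetchToken()
	if err != nil {
		panic(err)
	}
	fmt.Println(body)
}
```

Because the port is chosen by the listener rather than hard-coded to 8081, parallel CI jobs cannot collide, which also removes a common source of flakiness on shared build machines.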
