I'm very new to Travis and Go. I have a test for an HTTPS server; it runs fine when I run go test -v ./... on my local machine, but it fails most of the time on Travis with a getsockopt: connection refused error when trying to connect to the server, which should be listening on https://localhost:8081. Is there something I can do in my .travis.yml to prevent this?
Here is my .travis.yml
language: go

go:
  - 1.6
  - tip

matrix:
  allow_failures:
    - go: tip

before_install:
  - go get -v github.com/golang/lint/golint

install:
  - go get -v -d -t ./...
Here's my server creation code:
func (webserver *WebServer) Start(keyLocation string, certLocation string) <-chan error {
	errors := make(chan error, 1)
	go func() {
		defer close(errors)
		errors <- http.ListenAndServeTLS(fmt.Sprintf(":%v", webserver.config.WebServerPort), certLocation, keyLocation, nil)
	}()
	return errors
}
And the client code:
func createHTTPClient(t *testing.T) *http.Client {
	t.Log("Creating a test client...")
	tr := &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}
	t.Log("Created a test client")
	return &http.Client{Transport: tr}
}
Sample request with the client:
request, _ := http.NewRequest(httpmethod, fmt.Sprintf("https://localhost:%d/token", port), nil)
client.Do(request)
Sample of starting the server in a test:
errors := server.Start(testKeyLocation, testCertLocation)

// Handle errors from the server
go func() {
	select {
	case err := <-errors:
		if err != nil {
			t.Fatalf("Error with server: %s", err.Error())
		}
	}
}()
You have no synchronization between starting the server and trying to connect. Adding a time.Sleep after starting the server should highlight the issue.
One way to reduce the window where the server isn't ready is to create the net.Listener synchronously, and then add the open listener to the http.Server config before starting the server. httptest.Server can do this for you; it also binds to a random port to prevent conflicts between tests and uses a local test TLS certificate.
Related
I have written a simple Go CRUD example connecting to CockroachDB using pgxpool/pgx.
All the CRUD operations are exposed as a REST API using the Gin framework.
Using the curl command or Postman, the operations (GET/POST/DELETE) work fine and the data is reflected in the database.
Next I dockerized this simple app and tried to run it. The application seems to get stuck in the code below:
func Connection(conn_string string) gin.HandlerFunc {
	log.Println("Connection: 0", conn_string)
	config, err := pgxpool.ParseConfig(conn_string)
	if err != nil {
		log.Fatal(err)
	}
	log.Println("Connection: 1", config.ConnString())
	log.Println("Connection: 2")
	pool, err := pgxpool.ConnectConfig(context.Background(), config) // gets stuck here
	if err != nil {
		log.Fatal(err)
	}
	log.Println("Connection: 3")
	return func(c *gin.Context) {
		c.Set("pool", pool)
		c.Next()
	}
}
The code seems to freeze after printing Connection: 2 at the line
pool, err := pgxpool.ConnectConfig(context.Background(), config)
After a few minutes, I get an error:
FATA[0120] failed to connect to host=192.165.xx.xxx user=user_name database=dbname: dial error (timeout: dial tcp 192.165.xx.xxx:5432: i/o timeout)
Below is my Dockerfile:
FROM golang as builder
WORKDIR /catalog
COPY main.go ./
COPY go.mod ./
COPY go.sum ./
RUN go get .
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o catalog .
# deployment image
FROM scratch
#FROM alpine:3.17.1
# copy ca-certificates from builder
COPY --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/ca-certificates.crt
WORKDIR /bin/
COPY --from=builder /catalog .
CMD [ "./catalog" ]
#CMD go run /catalog/main.go
EXPOSE 8080
Note: I got into the container's shell and could ping the target IP 192.165.xx.xxx.
Please let me know why pgxpool fails to connect to the DB from inside the docker container but works on the host (Ubuntu) without any issue.
Update-2: The real issue was how the arguments were passed when starting the application. Once the arguments were passed correctly, it started working.
Update-1: I still saw issues while running the query and could reproduce them outside docker as well.
I fixed it by upgrading to pgxpool v5 instead of v4.
All I did was run
go get -u github.com/jackc/pgx/v5/pgxpool
and use it in the code as well, and it worked as expected.
This could be a known bug, but I could not find any related issue to include in this post.
Below is the final code that is working
func Connection(conn_string string) gin.HandlerFunc {
	log.Println("Connection: 0", conn_string)
	config, err := pgxpool.ParseConfig(conn_string)
	if err != nil {
		log.Fatal(err)
	}
	log.Println("Connection: 1", config.ConnString())
	log.Println("Connection: 2")
	//pool, err := pgxpool.ConnectConfig(context.Background(), config)
	pool, err := pgxpool.NewWithConfig(context.Background(), config)
	if err != nil {
		log.Fatal(err)
	}
	log.Println("Connection: 3")
	return func(c *gin.Context) {
		c.Set("pool", pool)
		c.Next()
	}
}
I'm running into an error and must be overlooking something. How can I debug this? Is something dropping connections?
I read the following:
golang - Why net.DialTimeout get timeout half of the time?
Go. Get error i/o timeout in server program
golang get massive read tcp ip:port i/o timeout in ubuntu 14.04 LTS
Locating the "read tcp" error in the Go source code
Getting sporadic "http: proxy error: read tcp: i/o timeout" on Heroku
Error created here:
https://github.com/golang/go/blob/b115207baf6c2decc3820ada4574ef4e5ad940ec/src/net/net.go#L179
Goal:
Send a GET request to a URL.
Expected result:
The response body returned as JSON.
Encountered problem:
i/o timeout
It works in Postman
Edit:
I added a modified timeout...
Edit2: traced error
Postman request:
GET /v2/XRP-EUR/candles?interval=1h HTTP/1.1
Host: api.bitvavo.com
Postman Result (1440 rows):
[
  [
    1609632000000,
    "0.17795",
    "0.17795",
    "0.17541",
    "0.17592",
    "199399.874013"
  ],
  [
    1609628400000,
    "0.17937",
    "0.18006",
    "0.17622",
    "0.17852",
    "599402.631894"
  ],
  [
    1609624800000,
    "0.18167",
    "0.18167",
    "0.17724",
    "0.17984",
    "579217.962574"
  ], ...
Code:
package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
	"time"
)

func main() {
	url := "https://api.bitvavo.com/v2/XRP-EUR/candles?interval=1h"
	method := "GET"

	client := &http.Client{}
	client.Timeout = time.Second * 60

	req, err := http.NewRequest(method, url, nil)
	if err != nil {
		fmt.Println(err)
		return
	}

	res, err := client.Do(req)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer res.Body.Close()

	body, err := ioutil.ReadAll(res.Body)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(string(body))
}
result:
Get "https://api.bitvavo.com/v2/XRP-EUR/candles?interval=1h": dial tcp 65.9.73.10:443: i/o timeout
I ran into this issue when building inside docker containers.
Not sure why, but after a docker swarm leave --force and a systemctl restart docker, the build worked.
Local environment: the firewall was not allowing Go to dial TCP. It still allowed the URL to be resolved to an IP (DNS), though.
Solution:
Change the firewall settings locally,
Check Docker/Kubernetes/reverse proxy settings
I am trying to make a telnet client using the go-telnet library. I can connect to the server, but I expected to receive some data prompting me to log in with a user and password.
However, I don't get any message. All I have managed so far is to send a message to the server, which the server prints.
If I connect using a regular telnet client, the first thing I have to do is log in with a user and password. I want to replicate this with my own client.
I don't see any examples on GitHub of how to send or receive messages, so I am a little confused.
Here is my current code:
func main() {
	err := telnet.DialToAndCall("192.168.206.226:23", caller{})
	if err != nil {
		fmt.Println(err)
		os.Exit(1)
	}
}

type caller struct{}

func (c caller) CallTELNET(ctx telnet.Context, w telnet.Writer, r telnet.Reader) {
	scanner := bufio.NewScanner(os.Stdin)
	for scanner.Scan() {
		fmt.Println(scanner.Text())
	}
}
Are there any other steps I need to do when connecting? Or am I doing something wrong?
edit (reading part):
//var data []byte
for {
	//numBytes, err := conn.Read(data)
	reader := bufio.NewReader(os.Stdin)
	fmt.Println(reader.ReadString('\n'))
}
I would like to randomly shut down pods in a Kubernetes cluster with Go. I have already written code that logs in to the server and runs commands.
Now I need to read all the available pods in the cluster, choose some randomly, and terminate them. (I am new to Go.)
Could you please help me with this?
This is how I am running commands on the cluster/server
cli.ExecuteCmd("kubectl get pods")
// Use one connection per command.
// Catch it in the client when required.
func (cli *SSHClient) ExecuteCmd(command string) {
	conn, err := ssh.Dial("tcp", cli.Hostname+":22", cli.Config)
	if err != nil {
		logrus.Infof("%s#%s", cli.Config.User, cli.Hostname)
		logrus.Info("Hint: Add your key to the ssh agent: 'ssh-add ~/.ssh/id_rsa'")
		logrus.Fatal(err)
	}
	defer conn.Close()

	session, err := conn.NewSession()
	if err != nil {
		logrus.Fatal(err)
	}
	defer session.Close()

	var stdoutBuf bytes.Buffer
	session.Stdout = &stdoutBuf
	if err := session.Run(command); err != nil {
		logrus.Fatalf("Run failed: %v", err)
	}
	logrus.Infof(">%s", stdoutBuf.Bytes())
}
Use the k8s.io/client-go package (GitHub link) to list Kubernetes pods, and then delete some of them at random.
Use the client.CoreV1().Pods() methods to list and delete pods.
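The random-selection part is independent of how you talk to the cluster. A minimal sketch that picks n distinct names at random; you would feed it the pod names returned by client-go's Pods().List, or from parsing kubectl get pods -o name output (the pod names below are invented):

```go
package main

import (
	"fmt"
	"math/rand"
)

// pickRandomPods returns n distinct names chosen at random from pods.
// If n exceeds the number of pods, all pods are returned.
func pickRandomPods(pods []string, n int) []string {
	shuffled := append([]string(nil), pods...) // copy so the input is untouched
	rand.Shuffle(len(shuffled), func(i, j int) {
		shuffled[i], shuffled[j] = shuffled[j], shuffled[i]
	})
	if n > len(shuffled) {
		n = len(shuffled)
	}
	return shuffled[:n]
}

func main() {
	pods := []string{"web-1", "web-2", "api-1", "api-2", "worker-1"}
	for _, name := range pickRandomPods(pods, 2) {
		// In the real chaos test this is where you would delete the pod, e.g.
		// client.CoreV1().Pods(ns).Delete(ctx, name, metav1.DeleteOptions{}).
		fmt.Println("would delete", name)
	}
}
```

Shuffling a copy and slicing the prefix guarantees the chosen pods are distinct, which a naive loop of rand.Intn picks would not.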
I'm developing an API for blogs and online publishing websites to build a recommendation engine for their content.
Since my API returns the same JSON for the same URL request, I decided to use Redis as a cache for high-traffic websites, passing the URL as the key and the JSON as the value. I am developing this API in Go and have been using redigo to talk to our Redis instance. The way I architected my system is to check the URL of the query sent by the client (blog) and look it up in Redis. If the response for that URL is not cached, I do a 301 redirect to another API that applies the logic to generate the JSON response for that particular URL and also sets the Redis cache.
However, while testing whether Redis is working properly, I realised that it misses the cache far more often than I would like. It is definitely caching the JSON response mapped to the URL, as confirmed by a simple GET in redis-cli, but after 3-4 hits I see Redis missing the cache again. I'm still very new to Go and the caching world, so I'm not sure if I'm missing something in my implementation. I would also like to know under what circumstances a Redis instance can miss caches. It can't be a timeout, because the Redis docs say "By default recent versions of Redis don't close the connection with the client if the client is idle for many seconds: the connection will remain open forever." So I'm not sure what exactly is happening with my setup. The relevant part of my code is below:
package main

import (
	"flag"
	"fmt"
	"log"
	"net/http"
	"time"

	"github.com/garyburd/redigo/redis"
)

var (
	port          int
	folder        string
	pool          *redis.Pool
	redisServer   = flag.String("redisServer", "redisip:22121", "")
	redisPassword = flag.String("redisPassword", "", "")
)

func init() {
	flag.IntVar(&port, "port", 80, "HTTP Server Port")
	flag.StringVar(&folder, "folder", "www", "Serve this folder")
}

func newPool(server, password string) *redis.Pool {
	return &redis.Pool{
		MaxIdle:     3,
		MaxActive:   25000,
		IdleTimeout: 30 * time.Second,
		Dial: func() (redis.Conn, error) {
			c, err := redis.Dial("tcp", server)
			if err != nil {
				return nil, err
			}
			return c, err
		},
		TestOnBorrow: func(c redis.Conn, t time.Time) error {
			_, err := c.Do("PING")
			return err
		},
	}
}
func main() {
	flag.Parse()
	pool = newPool(*redisServer, *redisPassword)
	httpAddr := fmt.Sprintf(":%v", port)
	log.Printf("Listening to %v", httpAddr)
	http.HandleFunc("/api", api)
	http.Handle("/static/", http.StripPrefix("/static/", http.FileServer(http.Dir(folder))))
	log.Fatal(http.ListenAndServe(httpAddr, nil))
}

func api(w http.ResponseWriter, r *http.Request) {
	link := r.URL.Query().Get("url")
	fmt.Println(link)
	heading := r.URL.Query().Get("heading")
	conn := pool.Get()
	defer conn.Close()
	reply, err := redis.String(conn.Do("GET", link))
	if err != nil {
		fmt.Printf("Error for link %v: %v\n", heading, err)
		http.Redirect(w, r, "json-producing-api", 301)
	}
	fmt.Fprint(w, reply)
}
I should also mention that in the code above, my Redis instance is actually a twemproxy client (built by Twitter) that proxies three Redis instances running behind it on three different ports. Everything seemed to work normally yesterday, and I ran a successful load test with 5k concurrent requests. However, when I checked the log today, some queries were missing in Redis and were being redirected to my json-producing-api, and I could see the redigo: nil error. I'm totally confused as to what exactly is going wrong. Any help will be greatly appreciated.
EDIT: As per the discussions below, here is the code I use to set data in Redis:
func SetIntoRedis(key string, value string) bool {
	// returns true if successfully set, returns false in case of an error
	conn := pool.Get()
	_, err := conn.Do("SET", key, value)
	if err != nil {
		log.Printf("Error Setting %v : %v", key, err)
		return false
	}
	return true
}
Configuration of my twemproxy client
leaf:
  listen: 0.0.0.0:22121
  hash: fnv1a_64
  distribution: ketama
  redis: true
  auto_eject_hosts: true
  server_retry_timeout: 3000
  server_failure_limit: 3
  servers:
    - 127.0.0.1:6379:1
    - 127.0.0.1:6380:1
    - 127.0.0.1:6381:1
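One setting worth double-checking in this configuration: with auto_eject_hosts: true, twemproxy temporarily removes a backend from the ketama ring after server_failure_limit consecutive failures, and keys are then rehashed onto the remaining servers. Until the ejected server rejoins, GETs for keys that lived on it are sent to a different backend and come back as redigo: nil, which looks exactly like a cache miss. If you prefer stable key placement over automatic failover, a configuration along these lines keeps the ring fixed (a sketch; whether disabling ejection is right depends on your availability needs):

```
leaf:
  listen: 0.0.0.0:22121
  hash: fnv1a_64
  distribution: ketama
  redis: true
  # Keep the hash ring stable: a briefly failing server no longer
  # causes its keys to be remapped onto the other backends.
  auto_eject_hosts: false
  servers:
    - 127.0.0.1:6379:1
    - 127.0.0.1:6380:1
    - 127.0.0.1:6381:1
```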