Libcontainer - Running multiple processes in the container - go

I am trying to implement something to the effect of docker run and docker exec using libcontainer.
I have been able to create a container and run a process inside it with the following code:
func Run(id string, s *specs.LinuxSpec, f *Factory) (int, error) {
    ...
    container, err := f.CreateContainer(id, config)
    if err != nil {
        return -1, err
    }
    process := newProcess(s.Process)
    tty, err := newTty(s.Process.Terminal, process, rootuid)
    defer tty.Close()
    if err != nil {
        return -1, err
    }
    defer func() {
        if derr := Destroy(container); derr != nil {
            err = derr
        }
    }()
    handler := NewSignalHandler(tty)
    defer handler.Close()
    if err := container.Start(process); err != nil {
        return -1, err
    }
    return handler.forward(process)
}
This works (I believe), but the problem comes when I have to run additional processes inside the same container. For example, while a container is already running (its main process in the foreground), how can I achieve what Docker allows you to do with docker exec?
I have the following code:
func Exec(container libcontainer.Container, process *libcontainer.Process, onData func(data []byte), onErr func(err error)) (int, error) {
    reader, writer := io.Pipe()
    process.Stdin = os.Stdin
    rootuid, err := container.Config().HostUID()
    if err != nil {
        return -1, err
    }
    tty, err := newTty(true, process, rootuid)
    defer tty.Close()
    if err != nil {
        return -1, err
    }
    handler := NewSignalHandler(tty)
    defer handler.Close()
    // Redirect process output
    process.Stdout = writer
    process.Stderr = writer
    // Todo: Fix this, it waits for the main process to exit before it starts
    if err := container.Start(process); err != nil {
        return -1, err
    }
    go func(reader io.Reader) {
        scanner := bufio.NewScanner(reader)
        for scanner.Scan() {
            onData(scanner.Bytes())
        }
        if err := scanner.Err(); err != nil {
            onErr(err)
        }
    }(reader)
    return handler.forward(process)
}
This also works, but the problem is that it waits for the main process to exit before it runs. Sometimes it does run, but my memory usage climbs to 100% after calling that function 5 - 7 times to run a simple whoami command.
I'm pretty sure I am doing something wrong, I just don't know what. Or is my understanding of containers failing me?
I used this project as a reference:
https://github.com/opencontainers/runc

It's probably better to use Docker as a reference for your case, because it uses the same libcontainer.Container object both for starting a container and for exec'ing new processes into it. You can find the code interacting with libcontainer here:
https://github.com/docker/docker/tree/master/daemon/execdriver/native
Also, it's better to post the whole code, so people can try it, debug it, and help you.
EDIT:
Here is example code for running multiple processes in a container: https://gist.github.com/anonymous/407eb530c0cb6c87ec9f
I ran it like
go run procs.go path-to-busybox
You can see with ps that there are indeed multiple processes in the container.
Feel free to ask if you have any questions.
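In case the gist link goes stale, here is a rough sketch of the same idea, using the API names that already appear in the snippets above (exact signatures vary between libcontainer versions, so treat this as an outline rather than a drop-in implementation):

// Sketch: start a second process in an already-running container, reusing
// the same libcontainer.Container value (roughly what `docker exec` does).
func runSecondProcess(container libcontainer.Container) error {
    second := &libcontainer.Process{
        Args:   []string{"whoami"},
        Env:    []string{"PATH=/bin:/usr/bin:/sbin:/usr/sbin"},
        Stdin:  os.Stdin,
        Stdout: os.Stdout,
        Stderr: os.Stderr,
    }
    // Start attaches the process to the container's existing namespaces
    // instead of creating a new container.
    if err := container.Start(second); err != nil {
        return err
    }
    // Wait only for this process; the container's init keeps running.
    _, err := second.Wait()
    return err
}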

Related

execute functions in specific order

I have the following function
func (c *Connection) ConnectVpn() error {
    cmd := exec.Command(c.Command, c.Args...)
    var password bytes.Buffer
    password.WriteString(os.Getenv("PASSWORD"))
    cmd.Stdin = &password
    cmd.Stdout = os.Stdout
    cmd.Stderr = os.Stderr
    err := cmd.Start()
    if err != nil {
        return err
    }
    return err
}
This function calls the openconnect binary and connects to a private VPN so I can reach a specific server (it works fine).
The problem is that cmd.Start() runs the command asynchronously, so the next function, checkCertificate(), is called before the VPN is connected and therefore fails.
When I use cmd.Run() instead, the process does not go to the background: it is not supposed to finish, so cmd.Wait() never returns.
days, err := domain.CheckCertificate()
if err != nil {
    log.Fatalln(err)
}
I have tried using channels to sync the two, but then checkCertificate() still runs before the VPN is up and I can't reach the server I need.
Any idea how I could keep ConnectVpn() running in the foreground and still signal my other function that the VPN is connected and it can run?
I have tried sending openconnect to the background with cmd.Process.Signal(syscall.SIGTSTP), but when I bring it back it breaks the main function.
I have implemented a function that checks whether the VPN is connected, as was suggested.
Now the other functions are only triggered once the VPN is up.
package main

import (
    "fmt"
    "gitlabCertCheck/request"
    "gitlabCertCheck/teams"
    "gitlabCertCheck/vpn"
    "net/http"
    "os"
    "strconv"
)

func checker() {
    client := http.Client{}
    _, err := client.Get(os.Getenv("HTTPS_HOST"))
    if err != nil {
        checker()
    }
    return
}

func main() {
    args := []string{"--user", os.Getenv("USERNAME"), "--authgroup", "default", "--background", os.Getenv("VPN_SERVER")}
    conn := vpn.NewConnection("openconnect", args)
    err := conn.ConnectVpn()
    if err != nil {
        fmt.Println(err)
        os.Exit(1)
    }
    checker()
    client := request.Request{}
    domain := client.NewRequest(os.Getenv("HOST"), os.Getenv("PORT"))
    days, err := domain.CheckCertificate()
    if err != nil {
        fmt.Println(err)
    }
    card := teams.Card{}
    card.CustomCard.Title = "Certificate Alert"
    card.CustomCard.Text = "Certificate will expire in " + strconv.Itoa(days) + " days"
    card.NewCard(card.CustomCard)
    err = card.SendMessageCard(os.Getenv("TEAMS_WEBHOOK"))
    if err != nil {
        fmt.Println(err)
    }
}
Thank you everyone
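One caveat with the checker above: it retries by calling itself recursively with no delay, so it spins while the VPN comes up and grows the stack on every failed attempt. A bounded polling loop is a safer variant; this is only a sketch (it needs the "time" import, and the attempt count and interval are arbitrary):

// checkerWithRetry polls the HTTPS_HOST endpoint until it answers or the
// retry budget is exhausted. Hypothetical replacement for checker().
func checkerWithRetry(attempts int, delay time.Duration) error {
    client := http.Client{Timeout: 5 * time.Second}
    for i := 0; i < attempts; i++ {
        resp, err := client.Get(os.Getenv("HTTPS_HOST"))
        if err == nil {
            resp.Body.Close()
            return nil // VPN route is reachable
        }
        time.Sleep(delay)
    }
    return fmt.Errorf("host not reachable after %d attempts", attempts)
}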

How to understand if exec.Cmd was canceled

I'm trying to return a specific error when the command was canceled via its context.
After investigating ProcessState, I understood that an exit code of -1 means the process was terminated by a signal:
https://golang.org/pkg/os/#ProcessState.ExitCode
but maybe there is a more elegant way?
Maybe I can surface this error from the cancel function?
Or maybe the exit code alone isn't good enough to tell whether the command was canceled?
var (
    CmdParamsErr      = errors.New("failed to get params for execution command")
    ExecutionCanceled = errors.New("command canceled")
)

func execute(m My) error {
    filePath, args, err := cmdParams(m)
    if err != nil {
        log.Infof("cmdParams: err: %v\n, m: %v\n", err, m)
        return CmdParamsErr
    }

    var out bytes.Buffer
    var errStd bytes.Buffer

    ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
    defer cancel()

    cmd := exec.CommandContext(ctx, filePath, args...)
    cmd.Stdout = &out
    cmd.Stderr = &errStd

    err = cmd.Run()
    if err != nil {
        if cmd.ProcessState.ExitCode() == -1 {
            log.Warnf("execution was canceled by signal, err: %v\n", err)
            err = ExecutionCanceled
            return err
        } else {
            log.Errorf("run failed, err: %v, filePath: %v, args: %v\n", err, filePath, args)
            return err
        }
    }
    return err
}
exec.ExitError doesn't provide any reason for the exit code (there is no relevant struct field nor an Unwrap method), so you have to check the context directly:
if ctx.Err() != nil {
    log.Println("canceled")
}
Note that this is a slight race because the context may be canceled just after the command failed for a different reason, but there is nothing you can do about that.
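Applied to the execute function from the question, that check could look roughly like this (a sketch only; it keeps the ExecutionCanceled sentinel defined above and assumes err comes from cmd.Run as in the original):

err = cmd.Run()
if err != nil {
    if ctx.Err() != nil {
        // The context was canceled or timed out, so the kill almost
        // certainly came from exec.CommandContext.
        log.Warnf("execution was canceled, err: %v\n", err)
        return ExecutionCanceled
    }
    log.Errorf("run failed, err: %v, filePath: %v, args: %v\n", err, filePath, args)
    return err
}
return nil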
There is no straightforward or elegant way to figure out if a process was killed because a context was canceled. The closest you can come is this:
func run() error {
    ctx, cancel := context.WithTimeout(context.Background(), 1*time.Second)
    defer cancel()

    cmd := exec.CommandContext(ctx, "bash", "-c", "exit 1")

    // Start() returns an error if the process can't be started. It will return
    // ctx.Err() if the context is expired before starting the process.
    if err := cmd.Start(); err != nil {
        return err
    }

    if err := cmd.Wait(); err != nil {
        if e, ok := err.(*exec.ExitError); ok {
            // If the process exited by itself, just return the error to the
            // caller.
            if e.Exited() {
                return e
            }
            // We know now that the process could be started, but didn't exit
            // by itself. Something must have killed it. If the context is done,
            // we can *assume* that it has been killed by the exec.Command.
            // Let's return ctx.Err() so our user knows that this *might* be
            // the case.
            select {
            case <-ctx.Done():
                return ctx.Err()
            default:
                return e
            }
        }
        return err
    }
    return nil
}
The problem here is that there might be a race condition, so returning ctx.Err() might be misleading. For example, imagine the following scenario:
1. The process starts.
2. The process is killed by an external actor.
3. The context is canceled.
4. You check the context.
At this point, the function above would return ctx.Err(), but this might be misleading because the reason why the process was killed is not because the context was canceled. If you decide to use a code similar to the function above, keep in mind this approximation.

script command execution hung forever in go program

func Run() error {
    log.Info("In Run Command")
    cmd := exec.Command("bash", "/opt/AlterKafkaTopic.sh")
    stdout, err := cmd.StdoutPipe()
    if err != nil {
        return err
    }
    if err = cmd.Start(); err != nil {
        return err
    }
    f, err := os.Create(filepath.Join("/opt/log/", "execution.log"))
    if err != nil {
        return err
    }
    if _, err := io.Copy(f, stdout); err != nil {
        return err
    }
    if err := cmd.Wait(); err != nil {
        return err
    }
    return f.Close()
}
I am trying to execute a bash script from Go code. The script changes some Kafka topic properties. But the execution hangs at io.Copy(f, stdout) and never continues past it.
This program is running on an RHEL 7.2 server.
Could someone suggest where I am going wrong?
From the docs:
Wait will close the pipe after seeing the command exit.
In other words, io.Copy exits when Wait() is called, but Wait is never called because it's blocked by Copy. Either run Copy in a goroutine, or simply assign f to cmd.Stdout:
f, err := os.Create(filepath.Join("/opt/log/", "execution.log"))
// TODO: Handle error
defer f.Close()
cmd := exec.Command("bash", "/opt/AlterKafkaTopic.sh")
cmd.Stdout = f
err = cmd.Run()
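Wrapped up as a complete function, that simpler variant could look like this (a sketch with minimal error handling, reusing the same paths as the question):

func Run() error {
    // Write the command's output straight to the log file; with no pipe
    // involved there is nothing for Copy and Wait to deadlock on.
    f, err := os.Create(filepath.Join("/opt/log/", "execution.log"))
    if err != nil {
        return err
    }
    defer f.Close()

    cmd := exec.Command("bash", "/opt/AlterKafkaTopic.sh")
    cmd.Stdout = f

    return cmd.Run()
}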

Is there a good way to cancel a blocking read?

I've got a command I have to run via the OS, let's call it 'login', that is interactive, so I have to read from stdin and pass the input to the command's stdin pipe for it to work correctly. The only problem is that the goroutine blocks on a read from stdin, and I haven't found a way to cancel a Reader in Go so that it doesn't hang on that blocking call. For example, from the user's perspective, after the command appears to have completed, you still get the opportunity to type into stdin once more (only then does the goroutine move past the blocking read and exit).
Ideally I would like to avoid having to parse output from the command's StdoutPipe, as that makes my code fragile and prone to error if the strings printed by the login command were to change.
loginCmd := exec.Command("login")
stdin , err := loginCmd.StdinPipe()
if err != nil {
return err
}
out, err := loginCmd.StdoutPipe()
if err != nil {
return err
}
if err := loginCmd.Start(); err != nil {
return err
}
ctx, cancel := context.WithCancel(context.TODO())
var done sync.WaitGroup
done.Add(1)
ready := make(chan bool, 1)
defer cancel()
go func(ctx context.Context) {
reader := bufio.NewReader(os.Stdin)
for {
select {
case <- ctx.Done():
done.Done()
return
default:
//blocks on this line, if a close can unblock the read, then it should exit normally via the ctx.Done() case
line, err :=reader.ReadString('\n')
if err != nil {
fmt.Println("Error: ", err.Error())
}
stdin.Write([]byte(line))
}
}
}(ctx)
var bytesRead = 4096
output := make([]byte, bytesRead)
reader := bufio.NewReader(out)
for err == nil {
bytesRead, err = reader.Read(output)
if err != nil && err != io.EOF {
return err
}
fmt.Printf("%s", output[:bytesRead])
}
if err := loginCmd.Wait(); err != nil {
return err
}
cancel()
done.Wait()
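One common way to sidestep the blocking read entirely (not specific to this thread, just standard exec.Cmd usage) is to hand os.Stdin to the command directly, so there is no copying goroutine to cancel:

loginCmd := exec.Command("login")

// Let the child read the terminal's stdin itself; no goroutine sits in a
// blocking Read that would need to be canceled.
loginCmd.Stdin = os.Stdin
loginCmd.Stdout = os.Stdout
loginCmd.Stderr = os.Stderr

if err := loginCmd.Run(); err != nil {
    return err
}

The trade-off is that the program no longer sees or filters the input, so this only fits cases where the interaction can pass straight through to the user.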

How to create Redis Transaction in Go using go-redis/redis package?

I want to execute multiple Redis commands in a transaction using MULTI and EXEC, so I can DISCARD them if something bad happens.
I've been looking for an example of how to do a Redis transaction with the go-redis/redis package and found nothing.
I also looked into the documentation here and found nothing related to doing a Redis transaction like this with that package. Or maybe I'm missing something, because godoc mostly explains each function in a package with a one-liner.
Even though I found some examples of Redis transactions using other Go Redis libraries, I don't want to modify my program to use another library, since the effort to port the whole application would be much larger.
Can anyone help me do this with the go-redis/redis package?
You get a Tx value for a transaction when you use Client.Watch
err := client.Watch(func(tx *redis.Tx) error {
    n, err := tx.Get(key).Int64()
    if err != nil && err != redis.Nil {
        return err
    }
    _, err = tx.Pipelined(func(pipe *redis.Pipeline) error {
        pipe.Set(key, strconv.FormatInt(n+1, 10), 0)
        return nil
    })
    return err
}, key)
You can find an example of how to create a Redis transaction here:
Code:
pipe := rdb.TxPipeline()

incr := pipe.Incr("tx_pipeline_counter")
pipe.Expire("tx_pipeline_counter", time.Hour)

// Execute
//
// MULTI
// INCR tx_pipeline_counter
// EXPIRE tx_pipeline_counter 3600
// EXEC
//
// using one rdb-server roundtrip.
_, err := pipe.Exec()
fmt.Println(incr.Val(), err)
Output:
1 <nil>
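Regarding the DISCARD part of the question: a minimal sketch, assuming the same go-redis version as in the example above. If the function passed to TxPipelined returns an error before Exec is issued, the queued commands are dropped client-side and never sent, which covers the "something bad happens" case (somethingBadHappened is a hypothetical condition here):

_, err := rdb.TxPipelined(func(pipe redis.Pipeliner) error {
    pipe.Incr("tx_pipeline_counter")
    if somethingBadHappened { // hypothetical condition
        // Returning an error here means MULTI/EXEC is never issued, so
        // the queued commands are discarded.
        return errors.New("aborting transaction")
    }
    pipe.Expire("tx_pipeline_counter", time.Hour)
    return nil
})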
And if you prefer to use WATCH (optimistic locking), you can see an example here.
Code:
const routineCount = 100

// Transactionally increments key using GET and SET commands.
increment := func(key string) error {
    txf := func(tx *redis.Tx) error {
        // get current value or zero
        n, err := tx.Get(key).Int()
        if err != nil && err != redis.Nil {
            return err
        }

        // actual operation (local in optimistic lock)
        n++

        // runs only if the watched keys remain unchanged
        _, err = tx.TxPipelined(func(pipe redis.Pipeliner) error {
            // pipe handles the error case
            pipe.Set(key, n, 0)
            return nil
        })
        return err
    }

    for retries := routineCount; retries > 0; retries-- {
        err := rdb.Watch(txf, key)
        if err != redis.TxFailedErr {
            return err
        }
        // optimistic lock lost
    }
    return errors.New("increment reached maximum number of retries")
}

var wg sync.WaitGroup
wg.Add(routineCount)
for i := 0; i < routineCount; i++ {
    go func() {
        defer wg.Done()
        if err := increment("counter3"); err != nil {
            fmt.Println("increment error:", err)
        }
    }()
}
wg.Wait()

n, err := rdb.Get("counter3").Int()
fmt.Println("ended with", n, err)
Output:
ended with 100 <nil>
