How can I "SCRIPT FLUSH" with Redigo? - go

I'm trying to flush the scripts using the "SCRIPT FLUSH" command by running code like this:
c.Send("SCRIPT FLUSH")
c.Flush()
spew.Dump(c.Receive())
But I get this output:
(interface {}) <nil>
(redis.Error) (len=33) ERR unknown command 'SCRIPT FLUSH'
When I run the command from the command line, I get an OK response.
How can I solve this problem?

Use two arguments:
c.Send("SCRIPT", "FLUSH")
c.Flush()
spew.Dump(c.Receive())
Also, use Do instead of the Send/Flush/Receive calls:
spew.Dump(c.Do("SCRIPT", "FLUSH"))
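Putting it together, a minimal runnable sketch using Do with error handling (the server address and the gomodule import path are assumptions; adjust them for your setup):

package main

import (
	"fmt"
	"log"

	"github.com/gomodule/redigo/redis"
)

func main() {
	// Connect to a local Redis server (address assumed for this sketch).
	c, err := redis.Dial("tcp", "localhost:6379")
	if err != nil {
		log.Fatal(err)
	}
	defer c.Close()

	// The command name and subcommand are separate arguments.
	reply, err := redis.String(c.Do("SCRIPT", "FLUSH"))
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(reply) // prints "OK"
}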

Related

Cloud Run Golang container issue/misunderstanding

I'm trying to do a report of all the objects in all the projects we have in Cloud Storage across our org. I'm using this repo from Google Professional Services, as it does exactly what we want: https://github.com/GoogleCloudPlatform/professional-services/tree/main/tools/gcs2bq
We want to use containers instead of just the Go code on a Cloud Function, mainly for portability.
Locally everything is good and the program behaves as expected, but when I try it in Cloud Run things get tricky. From what I understand, the Go part needs to listen on a port, which I added at the beginning of main so the container can be deployed, which it is:
// Determine port for HTTP service.
port := os.Getenv("PORT")
if port == "" {
	port = "8080"
	log.Printf("defaulting to port %s", port)
}

// Start HTTP server.
log.Printf("listening on port %s", port)
if err := http.ListenAndServe(":"+port, nil); err != nil {
	log.Fatal(err)
}
But as you can see in the repo, the first file called is run.sh, which sets environment variables and then calls the main.go. It successfully completes its task, which is getting the sizes of the different files. But after that, run.sh doesn't "resume" and go on to the part where it uploads the data to a BigQuery table, which works locally.
Here is the part of the run.sh file where I have the problem. Note: I don't get errors from executing ./gcs2bq. Note 2: every environment variable has a correct value.
./gcs2bq $GCS2BQ_FLAGS || error "Export failed!" 2 <- doesn't get past this line
gsutil mb -p "${GCS2BQ_PROJECT}" -c standard -l "${GCS2BQ_LOCATION}" -b on "gs://${GCS2BQ_BUCKET}" || echo "Info: Storage bucket already exists: ${GCS2BQ_BUCKET}"
gsutil cp "${GCS2BQ_FILE}" "gs://${GCS2BQ_BUCKET}/${GCS2BQ_FILENAME}" || error "Failed copying ${GCS2BQ_FILE} to gs://${GCS2BQ_BUCKET}/${GCS2BQ_FILENAME}!" 3
bq mk --project_id="${GCS2BQ_PROJECT}" --location="${GCS2BQ_LOCATION}" "${GCS2BQ_DATASET}" || echo "Info: BigQuery dataset already exists: ${GCS2BQ_DATASET}"
bq load --project_id="${GCS2BQ_PROJECT}" --location="${GCS2BQ_LOCATION}" --schema bigquery.schema --source_format=AVRO --use_avro_logical_types --replace=true "${GCS2BQ_DATASET}.${GCS2BQ_TABLE}" "gs://${GCS2BQ_BUCKET}/${GCS2BQ_FILENAME}" || \
  error "Failed to load gs://${GCS2BQ_BUCKET}/${GCS2BQ_FILENAME} to BigQuery table ${GCS2BQ_DATASET}.${GCS2BQ_TABLE}!" 4
gsutil rm "gs://${GCS2BQ_BUCKET}/${GCS2BQ_FILENAME}" || error "Failed deleting gs://${GCS2BQ_BUCKET}/${GCS2BQ_FILENAME}!" 5
rm -f "${GCS2BQ_FILE}"
I'm fairly new to containers and Cloud Run, and even after reading projects and documentation I'm not sure what I'm doing wrong. Is it normal that the .sh is "stuck" when calling the main.go? I can provide more details/explanation if needed.
Okay, so for anyone who encounters a similar situation, this is how I made it work.
The container isn't supposed to stop, so there is no exit; it just keeps serving from the main function.
That means that when run.sh called the executable, the executable kept listening, never exited, and the rest of the task never completed. So the solution here is to "recode" everything past that call in Go, directly in main.go.
The run.sh is then useless, so I used another .go file that listens for HTTP requests and then calls the code that gathers the data and sends it to BigQuery.
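For illustration, a minimal sketch of that pattern; runExport is a hypothetical stand-in for the gather-and-load logic that used to live past the executable call in run.sh:

package main

import (
	"log"
	"net/http"
	"os"
)

// runExport is a hypothetical placeholder for the rewritten work:
// gather the object sizes, then load the resulting file into BigQuery.
func runExport() error {
	// ... gcs2bq logic and the BigQuery load, ported to Go ...
	return nil
}

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		// Each request triggers one export run; the server itself never exits.
		if err := runExport(); err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		w.Write([]byte("export done\n"))
	})

	port := os.Getenv("PORT")
	if port == "" {
		port = "8080"
	}
	log.Printf("listening on port %s", port)
	log.Fatal(http.ListenAndServe(":"+port, nil))
}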

Bash HTTPie works when calling STRAVA script through command line but not through crontab

I'm new to shell scripting and I have a Bash script pulling in data from the Strava API and manipulating/reading it using jq.
When I copy and paste the first line of code (the one pulling in data) into the command line, it works. When I run bash strava.sh, the entire program works. But when I execute the program through crontab, I get the following error:
usage: http [--json] [--form] [--pretty {all,colors,format,none}]
[--style STYLE] [--print WHAT] [--headers] [--body] [--verbose]
[--all] [--history-print WHAT] [--stream] [--output FILE]
[--download] [--continue]
[--session SESSION_NAME_OR_PATH | --session-read-only SESSION_NAME_OR_PATH]
[--auth USER[:PASS]] [--auth-type {basic,digest}]
[--proxy PROTOCOL:PROXY_URL] [--follow]
[--max-redirects MAX_REDIRECTS] [--timeout SECONDS]
[--check-status] [--verify VERIFY]
[--ssl {ssl2.3,tls1,tls1.1,tls1.2}] [--cert CERT]
[--cert-key CERT_KEY] [--ignore-stdin] [--help] [--version]
[--traceback] [--default-scheme DEFAULT_SCHEME] [--debug]
[METHOD] URL [REQUEST_ITEM [REQUEST_ITEM ...]]
http: error: unrecognized arguments: https://www.strava.com/oauth/token client_id=xxx client_secret=xxx refresh_token=xxx grant_type=refresh_token
Here's what the line looks like in my script:
access_token=$(http POST "https://www.strava.com/oauth/token" client_id="xxx" client_secret="xxx" refresh_token="xxx" grant_type="refresh_token" | jq -r '.access_token')
When running through crontab, the above error is printed starting at the first line (i.e. the line given above), so I'm fairly certain the problem lies in that line. What am I doing wrong?
The HTTPie manual (https://httpie.io/docs/cli/best-practices) advises the use of:
--ignore-stdin
For "non-interactive invocations".
Possibly a path issue - are there multiple copies of http installed?
Is there a "%" anywhere in your parameters? Crontab interprets % as a newline, so if you'll have to escape it - "%%".
As an aside - please put your subshell inside "s, lest one day strava returns something like "AC0f4;rm * 0cd-4b203"
access_token="$( http POST ...

How to pass a single quote to exec.Command in Go

When I type kubectl run my-release-kafka-client --restart='Never' --image docker.io/bitnami/kafka:2.7.0-debian-10-r68 --namespace default --command -- sleep infinity in bash, it works perfectly. However, if I run it through exec.Command(), it says invalid restart policy. The workaround is to make it --restart=Never. However, I'd like to know why this happens.
out, _ := exec.Command("kubectl", "run", "my-release-kafka-client", "--restart='Never'", "--image", "docker.io/bitnami/kafka:2.7.0-debian-10-r68", "--namespace", "default", "--command", "--", "sleep", "infinity").CombinedOutput()
fmt.Println(string(out))
Result:
error: invalid restart policy: 'Never'
See 'kubectl run -h' for help and examples
This is because exec.Command does not invoke a shell: each argument string is passed to the program verbatim, with no quote removal. So when you give "--restart='Never'", the final value kubectl sees is --restart='Never', not --restart=Never, which is also pretty clear from your error message: error: invalid restart policy: 'Never'.
In bash, the shell strips the single quotes before kubectl ever sees them. That's why kubectl here is looking for the restart policy 'Never' (quotes included) instead of Never, and that's the reason for your error message.
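In other words, drop the quotes entirely; exec.Command already delivers each string as exactly one argument. A sketch of the corrected call:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// No shell is involved, so pass the flag value without inner quotes.
	out, err := exec.Command("kubectl", "run", "my-release-kafka-client",
		"--restart=Never",
		"--image", "docker.io/bitnami/kafka:2.7.0-debian-10-r68",
		"--namespace", "default",
		"--command", "--", "sleep", "infinity").CombinedOutput()
	if err != nil {
		log.Println(err)
	}
	fmt.Println(string(out))
}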

shell script echo back fatal error

I am using Elixir's Porcelain to invoke a shell script, in which I have a command like:
#!/usr/bin/env bash
aws s3 sync frontend/dist s3://$S3_BUCKET --delete
echo
Now, if the command fails (because of a wrong bucket) it displays:
fatal error: An error occurred (InvalidBucketName) when calling the
ListObjects operation: The specified bucket is not valid.
But it doesn't return this "fatal error" message back to Porcelain. How can I echo this error back?
Edit:
Porcelain code:
Porcelain.shell(". #{Path.join(:code.priv_dir(:hub), "scripts/copy_site_to_s3.sh")}")
I know a possible solution would be to use exec instead of shell, but this is more of an example; I have a couple of slightly more complicated but similar shell scripts facing the same problem.
Another script/example (I am testing failures). Invoking with:
result = Porcelain.shell(". #{Path.join(:code.priv_dir(:hub), "scripts/git_clone_pull.sh")} #{github}")
IO.inspect result
Script:
if cd frontend; then git reset --hard && git pull; else git clone $1 frontend; fi
It properly fails with:
fatal: Authentication failed for
'https://github.com/x/frontend.git/'
But the Porcelain result fails to capture the message:
%Porcelain.Result{err: nil, out: "", status: 128}
If you check the documentation for the Porcelain.{exec,shell}/3 options, you'll see:
:err — specify the way stderr will be passed back to Elixir.
Possible values are the same as for :out. In addition, it accepts the atom :out which denotes redirecting stderr to stdout.
Caveat: when using Porcelain.Driver.Basic, the only supported values are nil (stderr will be printed to the terminal) and :out.
Emphasis is mine. That caveat can easily be demonstrated in a less cumbersome environment, without involving AWS or any other 3rd parties:
iex|1 ▶ Porcelain.shell("ls --gg", err: {:append, "error.log"})
#⇒ ls: unrecognized option '--gg'
# Try 'ls --help' for more information.
# %Porcelain.Result{err: {:append, "error.log"}, out: "", status: 2}
iex|2 ▶ ls "error.log"
# [ERROR] No such file or directory error.log
But we still have the :out option!
iex|3 ▶ Porcelain.shell(">&2 echo 'error'", err: :out)
%Porcelain.Result{err: :out, out: "error\n", status: 0}
iex|4 ▶ Porcelain.shell("ls --gg", err: :out)
%Porcelain.Result{
  err: :out,
  out: "ls: unrecognized option '--gg'\nTry 'ls --help' for more information.\n",
  status: 2
}
Luckily, even the Basic driver can redirect :err to :out. That said, you have two options:
use the err: :out parameter, pattern match on status > 0, and examine standard output, or
use the Porcelain.Driver.Goon driver and deal with your stderr stream like a pro.

golang exec incorrect behavior

I'm using the following code segment to get the XML definition of a virtual machine running on the Xen hypervisor. The code executes the command virsh dumpxml Ubuntu14, which gives the XML of the VM named Ubuntu14:
virshCmd := exec.Command("virsh", "dumpxml", "Ubuntu14")
var virshCmdOutput bytes.Buffer
var stderr bytes.Buffer
virshCmd.Stdout = &virshCmdOutput
virshCmd.Stderr = &stderr
err := virshCmd.Run()
if err != nil {
	fmt.Println(err)
	fmt.Println(stderr.String())
}
fmt.Println(virshCmdOutput.String())
This code always goes into the error condition for the given domain name and I get the following output.
exit status 1
error: failed to get domain 'Ubuntu14'
error: Domain not found: no domain with matching name 'Ubuntu14'
But if I run the standalone command virsh dumpxml Ubuntu14, I get the correct XML definition.
I would appreciate it if someone could give me some hints about what I'm doing wrong. My host machine is Ubuntu 16.04 and my Go version is go1.6.2 linux/amd64.
I expect you are running virsh as a different user in these two scenarios, and since you don't provide any URI, it is connecting to a different libvirtd instance. If you run virsh as non-root, it will usually connect to qemu:///session, but if you run virsh as root, it will usually connect to qemu:///system. VMs registered against one URI will not be visible when connecting to the other URI.
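One quick way to test that from Go is to pass the URI explicitly, so both invocations talk to the same libvirtd instance (a sketch; qemu:///system is an assumption, adjust for your setup):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Force a specific libvirt URI instead of relying on the per-user default.
	out, err := exec.Command("virsh", "--connect", "qemu:///system",
		"dumpxml", "Ubuntu14").CombinedOutput()
	if err != nil {
		fmt.Println(err)
	}
	fmt.Println(string(out))
}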
BTW, if you're using Go, you'd be much better off using the native Go library bindings for libvirt instead of exec'ing virsh. Your "virsh dumpxml" invocation is pretty much equivalent to this:
import (
	"github.com/libvirt/libvirt-go"
)

conn, err := libvirt.NewConnect("qemu:///system")
dom, err := conn.LookupDomainByName("Ubuntu14")
xml, err := dom.GetXMLDesc(0)
(obviously do error handling too)
