How to use the Tarantool command "cartridge replicasets join" in Docker with multiple containers? - tarantool

I have a test environment with a Cartridge cluster.
The environment is started with docker-compose (using the tarantool 2.10.3 Docker image with cartridge-cli inside).
container-1:
  instance-1-1
  instance-1-2
container-2:
  instance-2-1
  instance-2-2
After starting all instances in container-1, a Bash script executes the following commands:
sh# cartridge replicasets join --replicaset group-1 instance-1-1
sh# cartridge replicasets join --replicaset group-2 instance-1-2
All OK
But after starting container-2 and calling the same commands, an error occurs:
sh# cartridge replicasets join --replicaset group-1 instance-2-1
• Join instance(s) instance-2-1 to replica set group-1
⨯ Failed to connect to Tarantool instance: Failed to dial: dial unix /opt/tarantool/tmp/run/test.instance-1-1.control: connect: no such file or directory
In the web UI everything is OK, but I want to use the CLI (or something similar) for automation.

The problem seems to be that cartridge-cli only works with the local instances.yml file.
If, after starting all containers, I edit instances.yml inside container-1 (adding the instances from container-2), then everything works fine.
But this is a strange workaround.
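For reference, a minimal sketch of that workaround, run inside container-1. The config path, the "test" application prefix and the advertise URIs below are assumptions based on the socket path in the error above:

# append the container-2 instances to the local instances.yml (hostnames/ports are placeholders)
cat >> /opt/tarantool/instances.yml <<'EOF'
test.instance-2-1:
  advertise_uri: container-2:3301
test.instance-2-2:
  advertise_uri: container-2:3302
EOF
# now the joins can be issued from container-1
cartridge replicasets join --replicaset group-1 instance-2-1
cartridge replicasets join --replicaset group-2 instance-2-2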

As correctly noted, cartridge-cli works locally (on the host where it is run). There are plans to fix this in the tt CLI, which is currently under development (release scheduled for 2023 Q1) and will replace cartridge-cli and tarantoolctl by combining and extending their functionality.
See:
https://github.com/tarantool/tt#working-with-tt-daemon-experimental

Related

Not able to forward DB connection to host.minikube.internal

Problem
I'm trying to forward an Oracle database connection to a pod on minikube. I'm using a kubectl port-forward command to connect from my local machine to Oracle successfully, and I'm trying to use host.minikube.internal the way I would use localhost on my local machine to connect to the database.
Essentially: pod -> host.minikube.internal:1526 -> localhost:1526 -> Oracle DB:1526. However, when I try to run a query on the pod using this connection information, I receive the error: ORA-12541: TNS:no listener.
What should I do to connect?
Context
It previously worked using Docker Desktop. All the connection and forwarding information was the same, except that I changed host.docker.internal to host.minikube.internal, following advice in the second answer to this question that pointed to this documentation in minikube. I don't know why this simple swap would not work.
I'm on macOS Ventura 13.0 (22A380), using Kubernetes 1.26.1 and minikube v1.28.0.
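A couple of quick checks that may help narrow this down (the throwaway pod name and the busybox image are arbitrary; also note that kubectl port-forward binds to 127.0.0.1 by default, which is not reachable from the minikube VM unless you pass --address):

# from inside the cluster: confirm the name actually resolves
kubectl run dnscheck --rm -it --image=busybox --restart=Never -- nslookup host.minikube.internal
# on the Mac host: confirm something is listening on 1526 and check which address it is bound to
lsof -nP -iTCP:1526 -sTCP:LISTEN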

Sequelize migrations - Google cloud build trigger

I am currently trying to deploy a TypeScript/Sequelize project with Google Cloud Build.
I am connecting through a Unix socket and the Cloud SQL proxy.
The app is deployed, and a test running "sequelize.authenticate()" seems to be working.
Migrations against localhost seem to be working as well.
I have written a Cloud Build trigger that does the following:
- builds a simple Docker image
- pushes the simple Docker image
- npm install
- downloads the cloud_sql_proxy
- initiates the cloud_sql_proxy
The next step would be to migrate a simple table to my Cloud SQL database.
please check out my drawing for further details: https://excalidraw.com/#json=LnvpSjngbk7h1F0RzBgUP,HPwtVWgh-sFgrmvfU9JK0A
If I try to run "npx sequelize-cli db:migrate", Cloud Build gives the following message: ERROR: connect ENOENT /cloudsql/xxxxxxx/.s.PGSQL.5432
But if I replace the command with "npx sequelize-cli --version", it simply prints out the version and moves on with the rest of the trigger operations.
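For comparison, a minimal sketch of starting the proxy and running the migration in the same build step / shell, so the socket actually exists when sequelize-cli runs; the instance connection name is a placeholder:

./cloud_sql_proxy -dir=/cloudsql -instances=my-project:europe-west1:my-instance &   # creates /cloudsql/my-project:europe-west1:my-instance/.s.PGSQL.5432
sleep 5   # give the proxy a moment to create the Unix socket
npx sequelize-cli db:migrate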

Error syncing pod on starting Beam - Dataflow pipeline from docker

We are constantly getting an error while starting our Beam Go SDK pipeline (driver program) from a Docker image, although the same pipeline works when started locally or from a VM instance. We are using the Dataflow runner for our pipeline and Kubernetes to deploy.
LOCAL SETUP:
We have the GOOGLE_APPLICATION_CREDENTIALS variable set to the service account for our GCP cluster. When running the job locally, it gets submitted to Dataflow and completes successfully.
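For context, a generic sketch of how a Beam Go driver is typically launched against Dataflow; the credential path, project, region, bucket and file names are placeholders, not the actual job:

export GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account.json
go run ./main.go \
  --runner dataflow \
  --project my-gcp-project \
  --region us-central1 \
  --staging_location gs://my-bucket/staging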
DOCKER SETUP:
The build image used is FROM golang:1.14-alpine. When we package the same program with a Dockerfile and try to run it, it fails with the error:
User program exited: fork/exec /bin/worker: no such file or directory
On checking Stackdriver logs for more details, we see this:
Error syncing pod 00014c7112b5049966a4242e323b7850 ("dataflow-go-job-1-1611314272307727-
01220317-27at-harness-jv3l_default(00014c7112b5049966a4242e323b7850)"),
skipping: failed to "StartContainer" for "sdk" with CrashLoopBackOff:
"back-off 2m40s restarting failed container=sdk pod=dataflow-go-job-1-
1611314272307727-01220317-27at-harness-jv3l_default(00014c7112b5049966a4242e323b7850)"
We found a reference to this error in the Dataflow common errors doc, but it is too generic to figure out what's failing. After multiple retries, we were able to rule out any permission / access related issues with the pods. We weren't sure what else could be the problem.
After multiple attempts, we decided to start the job manually from a new Debian 10 based VM instance, and it worked. This brought to our attention that we were using an Alpine-based golang image in Docker, which may not have all the required dependencies installed to start the job.
On the golang Docker Hub page, we found golang:1.14-buster, where buster is the codename for Debian 10. Using that for the Docker build solved the issue. Self-answering here to help anyone else facing the same issue.
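The change itself is a one-line edit to the Dockerfile base image; a sketch of the rebuild (the local image tag is arbitrary):

# in the Dockerfile: FROM golang:1.14-buster   (instead of golang:1.14-alpine)
docker build -t beam-go-driver:buster .
docker run --rm beam-go-driver:buster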

Cassandra Detected unreadable sstables(data not caches)

ERROR [main] 2017-08-04 13:24:21,949 CassandraDaemon.java:638 - Detected unreadable sstables /opt/cassandra/data/some_key_space/ep_lc_events-adc44160dbe611e6953689bcd3ed73aa/mc-547-big-Summary.db, and many others...
This happened after I upgraded Cassandra to version 3 and, after a while, downgraded it back to version 2.
When I run this command: sudo service cassandra status
I get the following message:
could not access pidfile for Cassandra
In /var/log/cassandra/system.log I see the log entries quoted at the beginning.
PS: note that everything is happening on an Amazon EC2 instance.
Well, I just upgraded back to version 3, used cassandra-unloader to export all data, then downgraded back to version 2 and used cassandra-loader to import all data. But if you were lucky enough to have backups and snapshots, this would not be an obstacle for you.
PS. Afterwards, I had to run nodetool resetlocalschema to reset the local schema and resynchronize.
PPS. Here you can find how to do that:
https://github.com/brianmhess/cassandra-loader
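A rough sketch of that export/import flow with cassandra-unloader / cassandra-loader; the file path and the column list are made up and need to match the real table schema:

# while the node is back on 3.x: export the affected table
cassandra-unloader -f stdout -host 127.0.0.1 -schema "some_key_space.ep_lc_events(id,event_time,payload)" > /tmp/ep_lc_events.csv
# ...downgrade Cassandra to the 2.x version...
# import the data again and resynchronize the local schema
cassandra-loader -f /tmp/ep_lc_events.csv -host 127.0.0.1 -schema "some_key_space.ep_lc_events(id,event_time,payload)"
nodetool resetlocalschema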
I also got the same error, but it was due to switching between Cassandra 4.0.0 and version 3.11 and back again while using Docker.
Update the version to the right one for the sstables, or delete the data volume:
docker-compose logs cassandra        # confirm it is the unreadable-sstables error
docker volume ls                     # find the data volume
docker ps                            # check which containers are still running
docker-compose down
docker volume rm testapp_cassandra   # remove the volume holding the incompatible sstables
docker volume ls
docker-compose up

MariaDB clashing with MySQL on Travis-CI

I have a test suite that runs on Travis CI and requires MariaDB (it breaks on MySQL). The pre-test scripts call the mysql command but run the statements against MariaDB, since the client command is the same for both.
echo "CREATE DATABASE test1" | mysql -u travis
The tests on worker v2.5.0 were passing just fine (https://travis-ci.org/stems/join-monster/jobs/256751422). However, the tests started running on a later version of the worker, v2.9.3, and failing without any changes to the code (https://travis-ci.org/stems/join-monster/jobs/260001701). According to the system build information, this new version seems to install MySQL in addition to MariaDB. Now when I run my mysql command, it runs against MySQL instead of MariaDB and breaks the build.
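One quick way to confirm which server the mysql client is actually talking to (nothing Travis-specific assumed here):

mysql -u travis -e "SELECT VERSION();"   # MariaDB reports a version string containing "MariaDB"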
I need one of the following:
- to go back to a previous version of the worker (I can't find any info on how to do this in the Travis docs);
- to specify that I want to run commands against and connect to MariaDB, NOT MySQL;
- to tell Travis not to install MySQL, to avoid the clash.
Any of these solutions would be appreciated.
Fixed it by switching the Ubuntu version back to 12.04 (Precise) rather than 14.04 (Trusty), which had become the new default.
In the .travis.yml
dist: precise
