Error adding a node to Docker UCP

I'm trying to add a node to a fresh UCP install using the copy-paste from the web UI and getting this error on the node:
FATA[0000] The join command is no longer used. To join a node to the swarm, go to the UCP web UI,
or run the 'docker swarm join' command in the node you want to join.
I'm unable to find any reference to this error or any documentation about 'join' being deprecated.

The join command has been deprecated.
After you have installed UCP, go to Resources -> Nodes -> Create a Node.
There you will get a docker swarm join command with a swarm token and the manager URL. Run it on the worker/manager node you want to join to the main UCP manager.
Alternatively, from the CLI, run docker swarm init --advertise-addr <HostName of the manager> on the manager. It will print a join command with a swarm token; run that on the worker/manager node you want to join to the main UCP manager.
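For reference, a minimal sketch of that CLI flow (the hostname and token below are placeholders, not values from this setup):
# On the manager, initialize the swarm (if the UCP install has not already done so):
docker swarm init --advertise-addr manager.example.com
# Still on the manager, print the join command for workers:
docker swarm join-token worker
# On the node you want to add, run the printed command, for example:
docker swarm join --token SWMTKN-1-<token> manager.example.com:2377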

Related

How to use the Tarantool command <cartridge replicasets join> in Docker with multiple containers?

I have a test stand with a Cartridge cluster.
The stand is started with docker-compose (using the tarantool 2.10.3 Docker image with cartridge-cli inside).
container-1:
instance-1-1
instance-1-2
container-2:
instance-2-1
instance-2-2
After starting all instances in container-1, a Bash script executes these commands:
sh# cartridge replicasets join --replicaset group-1 instance-1-1
sh# cartridge replicasets join --replicaset group-2 instance-1-2
Both commands succeed.
But after starting container-2 and calling the same commands, an error occurs:
sh# cartridge replicasets join --replicaset group-1 instance-2-1
• Join instance(s) instance-2-1 to replica set group-1
⨯ Failed to connect to Tarantool instance: Failed to dial: dial unix /opt/tarantool/tmp/run/test.instance-1-1.control: connect: no such file or directory
In the web UI everything works, but I want to use the CLI (or something similar) for automation.
The problem seems to be that cartridge-cli only works with the local instances.yml file.
If, after starting all containers, I edit instances.yml in container-1 (adding the instances from container-2), then everything works fine.
But this seems like a strange design.
As correctly noted, cartridge-cli works locally (on the host where it is run). There are plans to fix this in tt cli, which is currently under development (release scheduled for 2023 Q1) and will replace cartridge-cli and tarantoolctl by combining and extending their functionality.
See:
https://github.com/tarantool/tt#working-with-tt-daemon-experimental
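As a rough sketch of the workaround described above (the app name test is taken from the socket path in the error; the file location, ports, and the container-2 hostname are assumptions):
# Inside container-1, declare the container-2 instances in the local instances.yml:
cat >> /opt/tarantool/instances.yml <<'EOF'
test.instance-2-1:
  advertise_uri: container-2:3301
  http_port: 8083
test.instance-2-2:
  advertise_uri: container-2:3302
  http_port: 8084
EOF
# Now the join commands can resolve them:
cartridge replicasets join --replicaset group-1 instance-2-1
cartridge replicasets join --replicaset group-2 instance-2-2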

Can't create external initiators from chainlink CLI

We're trying to set up external initiators for our Chainlink containers deployed in a GKE cluster, according to the docs: https://docs.chain.link/docs/external-initiators-in-nodes/
I log into the pod:
kubectl exec -it -n chainlink chainlink-75dd5b6bdf-b4kwr -- /bin/bash
And there I attempt to create external initiators:
root@chainlink-75dd5b6bdf-b4kwr:/home/root# chainlink initiators create xxx xxx
No help topic for 'initiators'
I don't even see initiators in the chainlink CLI options:
root@chainlink-75dd5b6bdf-b4kwr:/home/root# chainlink
NAME:
chainlink - CLI for Chainlink
USAGE:
chainlink [global options] command [command options] [arguments...]
VERSION:
0.9.10#7cd042c1a94c57219ed826a6eab46752d63fa67a
COMMANDS:
admin Commands for remotely taking admin related actions
attempts, txas Commands for managing Ethereum Transaction Attempts
bridges Commands for Bridges communicating with External Adapters
config Commands for the node's configuration
job_specs Commands for managing Job Specs (jobs V1)
jobs Commands for managing Jobs (V2)
keys Commands for managing various types of keys used by the Chainlink node
node, local Commands for admin actions that must be run locally
runs Commands for managing Runs
txs Commands for handling Ethereum transactions
help, h Shows a list of commands or help for one command
GLOBAL OPTIONS:
--json, -j json output as opposed to table
--help, -h show help
--version, -v print the version
Chainlink version 0.9.10.
Could you please clarify what I am doing wrong?
You need to make sure you have the FEATURE_EXTERNAL_INITIATORS environment variable set to true in your .env file, like so:
FEATURE_EXTERNAL_INITIATORS=true
This will open up access to the initiators command in the Chainlink CLI and you can resume the instructions from there.
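If the node's environment is managed by Kubernetes rather than a local .env file, one way to set the variable (the deployment and namespace names below are guesses based on the pod name) is roughly:
# Set the flag on the deployment; this triggers a rolling restart of the pod:
kubectl set env deployment/chainlink -n chainlink FEATURE_EXTERNAL_INITIATORS=true
# Then exec into the new pod and retry:
chainlink initiators create xxx xxx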

Jenkins through Docker: How to configure my own host as an agent for Jenkins?

I'm using Jenkins with pipelines on a Mac mini. All builds are working fine with Docker agents (backend, frontend, Android app, etc.).
The only thing I haven't been able to achieve is to use the Mac mini itself as a build agent/slave for the iOS app (I need to build on macOS). Jenkins itself runs in Docker as well, so I would need to connect to the host (the OS of the Mac mini) and use that as an agent.
I know one option would be to install Jenkins directly on the host instead of using Docker, but I would prefer to keep Jenkins running in a Docker container.
Does someone have experience with this, or know any good documentation on how to set this up?
Go to Manage Jenkins > Manage Nodes > New Node.
Configure a node.
Go to the list of nodes.
Select your newly configured node. It should be offline at this moment.
Run the java command displayed on the interface on your host machine.
Your host machine is now an agent (slave).
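For reference, the launch command shown on the node's page typically looks something like the sketch below; the URL, node name, secret, and work directory are placeholders, and the URL should point at the port your Jenkins container publishes on the Mac mini:
# Run this on the Mac mini itself (the Docker host), not inside the Jenkins container:
java -jar agent.jar -jnlpUrl http://localhost:8080/computer/mac-mini/jenkins-agent.jnlp -secret <secret-from-node-page> -workDir /Users/jenkins/agent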

New servers aren't displayed in Docker WebLogic 12.2.1

I'm trying to create new servers linked to the adminServer in WebLogic, so I followed this documentation.
I managed to successfully create the wlsadmin server (container name), and then I tried to create two other servers:
docker run -d --link wlsadmin:wlsadmin -p 7002:7001 -p 7003:7002 1221-domain createServer.sh
docker run -d --link wlsadmin:wlsadmin -p 7004:7001 1221-domain createServer.sh
They get created successfully, but in the admin console under Environment > Servers they aren't displayed at all, while under Environment > Machines two new machines are created.
docker network inspect bridge shows that the containers are on the same network, and docker ps shows that the containers are running (I can also get inside them).
docker logs doesn't show any errors.
This means I cannot install any .war files.
Any idea what is wrong with the setup?
WebLogic version: 12.2.1
Docker version: 17.03.1-ce
I found an alternative solution: add the servers manually (previously the servers were added automatically).
Steps:
Go to Environment > Machines.
Select a machine and go to the Servers tab.
Add a new server (check the container IP with docker inspect; see the example below).
Click on the server and add it to the Docker cluster.
Go to Environment > Servers and click on the Control tab.
Start the new server.
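For the docker inspect step above, a quick way to print a container's IP on the default bridge network (the container name is whatever docker ps shows for your managed-server container):
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <managed-server-container>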

Join a RethinkDB Cluster using ReQL instead of command line argument

I'm using Docker, which by default runs the RethinkDB process with only the --bind all argument.
Joining a cluster requires the --join argument or a configuration file. Doing this with Docker would seemingly require building a new Docker image for this purpose.
How can I join a cluster using ReQL directly (thus eliminating the need to create a new Docker image)? Ideally I could simply connect to the lone instance, add a row to a system table (like server_status), and the instance would connect to the newly entered external instance.
I could repeat this process for each node in the cluster. That would make things simpler when nodes come up and go down; otherwise I would have to restart each RethinkDB process.
In Docker, we can override the CMD that invokes the RethinkDB process with a custom command to customize how RethinkDB runs. Instead of simply calling docker run rethinkdb, we can pass a rethinkdb command that joins the first node.
Example using the official RethinkDB Docker image:
docker run --rm -it -p 9080:8080 rethinkdb
Then we can inspect its IP address (assume it's 172.17.0.2) and start a second one:
docker run --rm -it -p 9081:8080 rethinkdb rethinkdb --join 172.17.0.2:29015 --bind all
Visit the RethinkDB dashboard and you should see two nodes now.
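A small variation that avoids looking up the address by hand (the container name rdb1 is arbitrary): name the first container and query its bridge IP before starting the second node.
# Start the first node with a known name:
docker run --rm -it --name rdb1 -p 9080:8080 rethinkdb
# In another shell, print its IP on the default bridge network:
docker inspect -f '{{.NetworkSettings.IPAddress}}' rdb1
# Join the second node to that address:
docker run --rm -it -p 9081:8080 rethinkdb rethinkdb --join <rdb1-ip>:29015 --bind all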
