Heroku and RabbitMQ - unable to run multiple worker dynos

I am using the CloudAMQP add-on for Heroku. As RabbitMQ needs a unique node name for each of its processes, I run into a warning when I scale my worker dynos from 1 to 2 or more:
/app/.heroku/python/lib/python3.6/site-packages/kombu/pidbox.py:71: UserWarning: A node named coworker@fstrk.io is already using this process mailbox!
Maybe you forgot to shutdown the other node or did not do so properly?
Or if you meant to start multiple nodes on the same host please make sure
you give each node a unique node name!
My Procfile line looks like this:
coworker: celery -l info -A getmybot worker -Q slack -c ${COWORKER_PROCESSES:-4} --hostname coworker@fstrk.io --without-gossip --without-mingle --without-heartbeat
How do I go about it?

Try changing --hostname coworker@fstrk.io to --hostname coworker@%%h.
More details in the official docs:
http://docs.celeryproject.org/en/latest/reference/celery.bin.worker.html
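For reference, a sketch of the full corrected Procfile line, keeping the app, queue, and concurrency settings from the question (the doubled %% is carried over from the answer so that celery itself receives %h, which it expands to each dyno's unique hostname):
coworker: celery -l info -A getmybot worker -Q slack -c ${COWORKER_PROCESSES:-4} --hostname coworker@%%h --without-gossip --without-mingle --without-heartbeat
Since every dyno gets a distinct hostname, each worker then registers under a unique node name and the pidbox warning disappears.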

Related

Can't create external initiators from chainlink CLI

We're trying to set up external initiators for our Chainlink containers deployed in a GKE cluster according to the docs: https://docs.chain.link/docs/external-initiators-in-nodes/
I log into the pod:
kubectl exec -it -n chainlink chainlink-75dd5b6bdf-b4kwr -- /bin/bash
And there I attempt to create external initiators:
root@chainlink-75dd5b6bdf-b4kwr:/home/root# chainlink initiators create xxx xxx
No help topic for 'initiators'
I don't even see initiators in the Chainlink CLI options:
root@chainlink-75dd5b6bdf-b4kwr:/home/root# chainlink
NAME:
chainlink - CLI for Chainlink
USAGE:
chainlink [global options] command [command options] [arguments...]
VERSION:
0.9.10@7cd042c1a94c57219ed826a6eab46752d63fa67a
COMMANDS:
admin Commands for remotely taking admin related actions
attempts, txas Commands for managing Ethereum Transaction Attempts
bridges Commands for Bridges communicating with External Adapters
config Commands for the node's configuration
job_specs Commands for managing Job Specs (jobs V1)
jobs Commands for managing Jobs (V2)
keys Commands for managing various types of keys used by the Chainlink node
node, local Commands for admin actions that must be run locally
runs Commands for managing Runs
txs Commands for handling Ethereum transactions
help, h Shows a list of commands or help for one command
GLOBAL OPTIONS:
--json, -j json output as opposed to table
--help, -h show help
--version, -v print the version
Chainlink version 0.9.10.
Could you please clarify what I am doing wrong?
You need to make sure you have the FEATURE_EXTERNAL_INITIATORS environment variable set to true in your .env file as such:
FEATURE_EXTERNAL_INITIATORS=true
This will open up access to the initiators command in the Chainlink CLI and you can resume the instructions from there.
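Since the pod in the question runs on GKE, the variable can also be injected into the deployment itself rather than a .env file; a sketch using kubectl (the deployment name chainlink is inferred from the pod name and may differ in your cluster):
kubectl set env deployment/chainlink -n chainlink FEATURE_EXTERNAL_INITIATORS=true
After the pod restarts with the flag enabled, the initiators command appears in the CLI and chainlink initiators create should work as documented.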

NEAR Mainnet Archival Node Setup

I tried setting up a NEAR mainnet archival node using Docker by following this documentation - https://github.com/near/nearup#building-the-docker-image. The docker run command in the document does not specify any port.
So I ran docker run without any port, and while the neard node runs, docker ps does not show any port for it.
I did not find any docs on the node APIs. Can we use the RPC APIs - https://docs.near.org/docs/api/rpc - to query the node?
Docker run command used to set up archival mainnet node:
sudo docker run -d -v $PWD:/root/.near --name nearup nearprotocol/nearup run mainnet
The JSON RPC on nearcore is exposed on port 3030.
As for running an archival node, you might be interested in this doc page: https://docs.near.org/docs/roles/integrator/exchange-integration#steps-to-start-archive-node
P.S. nearup is considered somewhat dated, though it is still in use.
I have updated the documentation for nearup to specify the port binding for RPC now: https://github.com/near/nearup#building-the-docker-image
You can use the following command:
docker run -v $HOME/.near:/root/.near -p 3030:3030 --name nearup nearprotocol/nearup run mainnet
And you can validate that nearup is running and that the RPC /status endpoint is available by running:
docker exec nearup nearup logs
and
curl 0.0.0.0:3030/status
Also, please make sure that you have changed ~/.near/mainnet/config.json to contain:
{
...
"archive": true,
...
}
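To confirm the node answers RPC queries at all, you can POST a JSON-RPC request to port 3030; a sketch using the documented block method:
curl -s -X POST http://localhost:3030 \
  -H 'Content-Type: application/json' \
  -d '{"jsonrpc": "2.0", "id": "dontcare", "method": "block", "params": {"finality": "final"}}'
On a node with "archive": true, the same call with an old "block_id" in params instead of "finality" should return historical blocks rather than an unknown-block error.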

Could not delete a DC/OS service that failed to deploy

I deployed a service in DC/OS (the service is Cassandra). The deployment failed and it kept retrying. Under DC/OS > Services > Tasks I could see a new task being created every few minutes, but they all had the status "Failed". Under the Debug tab I could see the TASK_FAILED state with an error message about how I had misconfigured the service (I picked a user that does not exist).
So I wanted to destroy the service and start over again.
Under Services, I clicked the menu on the service and selected "Delete". The command was accepted and the status changed to "Deleting", but then it stayed there forever.
If I checked the Tasks tab, I could see that DC/OS was still attempting to start the service every few minutes.
Now how do I delete the service? Thanks!
As per the latest DC/OS Cassandra service docs, you should uninstall it using the dcos CLI:
dcos package uninstall --app-id=<service-name> cassandra
If you are using DC/OS 1.9 or an older version, then follow the steps below to uninstall the service:
$ MY_SERVICE_NAME=<service-name>
$ dcos package uninstall --app-id=$MY_SERVICE_NAME cassandra
$ dcos node ssh --master-proxy --leader "docker run mesosphere/janitor /janitor.py \
-r $MY_SERVICE_NAME-role \
-p $MY_SERVICE_NAME-principal \
-z dcos-service-$MY_SERVICE_NAME"
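Once the janitor run completes, a quick sanity check is to list what Marathon and Mesos still know about (dcos marathon app list and dcos service --inactive are standard DC/OS CLI commands):
$ dcos marathon app list
$ dcos service --inactive
Neither listing should still show the Cassandra service; if it lingers, re-running the janitor with the same role, principal, and znode arguments should clear the leftover state.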

Hooking into a running Heroku Phoenix application

Last night I was tinkering with Elixir, running code on both of my machines at home. But when I woke up, I asked myself: can I actually do the same using the heroku run command?
I think it should theoretically be entirely possible if set up properly. Obviously heroku run iex --sname name executes and gives me access to a shell (without a functioning backspace, which is irritating), but I haven't accessed my app yet.
Each time I executed the command it gave me a different machine. I guess that's how Heroku achieves sandboxing. I was also trying to find a way to determine the address of my app's machine, but haven't had any luck yet.
Can I actually connect to the dyno running the code and evaluate expressions on it, like you would with iex -S mix phoenix.server locally?
Unfortunately it's not possible.
To interconnect Erlang VM nodes you'd need the EPMD port (4369) to be open.
Heroku doesn't allow opening custom ports, so it's not possible.
In case you'd want to establish a connection between your Phoenix server and an Elixir node, you'd have to:
Two nodes on the same machine:
Start Phoenix using iex --name phoenix@127.0.0.1 -S mix phoenix.server
Start iex --name other_node@127.0.0.1
Establish a connection using Node.ping from other_node:
iex(other_node@127.0.0.1)1> Node.ping(:'phoenix@127.0.0.1')
(should return :pong not :pang)
Two nodes on different machines
Start Phoenix using some external address
iex --name phoenix@195.20.2.2 --cookie someword -S mix phoenix.server
Start second node
iex --name other_node@195.20.2.10 --cookie someword
Establish a connection using Node.ping from other_node:
iex(other_node@195.20.2.10)1> Node.ping(:'phoenix@195.20.2.2')
(should return :pong not :pang)
Both nodes should contact each other at the addresses they usually see each other on in the network (the full external IP when on different networks, 192.168.X.X when in the same local network, 127.0.0.1 when on the same machine).
If they're on different machines, they must also have the same cookie value set, because by default each node uses the automatically generated cookie in your home directory. You can check it by running:
cat ~/.erlang.cookie
Lastly, you've got to make sure that the EPMD port 4369 is open, because the Erlang VM uses it for internode data exchange.
As a side note: if you leave it open, keep your cookie as private as possible, because anyone who knows it has absolute power over your machine.
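A small related precaution: the Erlang VM refuses to use a cookie file that is readable by other users, so if you ever copy it between machines, restore the restrictive permissions:
chmod 400 ~/.erlang.cookie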
When you execute heroku run it will start a new one-off dyno which is a temporary instance that is deprovisioned when you finish the heroku run session. This dyno is not a web dyno and cannot receive inbound HTTP requests through Heroku's routing layer.
From the docs:
One-off dynos can never receive HTTP traffic, since the routers only route traffic to dynos named web.N.
https://devcenter.heroku.com/articles/one-off-dynos#formation-dynos-vs-one-off-dynos
If you want your phoenix application to receive HTTP requests you will have to set it up to run on a web dyno.
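For completeness, a minimal Procfile sketch for a web dyno (assuming a standard Mix project; it is mix phoenix.server on the older Phoenix version used in the question, and Heroku supplies the PORT your endpoint must bind to):
web: mix phx.server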
It has been a while since the question was asked, but someone might still find this answer valuable.
As of 2021 Heroku allows forwarding multiple ports, which makes it possible to remsh into a running Erlang VM node. It depends on how you deploy your application, but in general you will need to:
Give your node a name and a cookie (i.e. --name "myapp@127.0.0.1" --cookie "secret")
Tell it exactly which port the node should bind to, so you know which port to forward (i.e. --erl "-kernel inet_dist_listen_min 9000 -kernel inet_dist_listen_max 9000")
Forward EPMD and Node ports by running heroku ps:forward 9001:4369,9000
Remsh into your node: ERL_EPMD_PORT=9001 iex --cookie "secret" --name console@127.0.0.1 --remsh "myapp@127.0.0.1"
Eventually you should start your server with something like this (if you are still using the Mix tool): MIX_ENV=prod elixir --name "myapp@127.0.0.1" --cookie "secret" --erl "-kernel inet_dist_listen_min 9000 -kernel inet_dist_listen_max 9000" -S mix phx.server --no-halt
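Put together, a hypothetical end-to-end session could look like this, using the names from the list above and running the forward and remsh commands from your local machine in two terminals:
# terminal 1: forward local 9001 to EPMD (4369) and the distribution port 9000
heroku ps:forward 9001:4369,9000
# terminal 2: attach a remote shell through the forwarded ports
ERL_EPMD_PORT=9001 iex --cookie "secret" --name console@127.0.0.1 --remsh "myapp@127.0.0.1"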
If you are using Releases, most of the setup has already been done for you by the Elixir team.
To verify that the EPMD port has been forwarded correctly, try running epmd -port 9001 -names. The output should be:
epmd: up and running on port 4369 with data:
name myapp@127.0.0.1 at port 9000
You may follow my notes on how I do it for Dockerized releases (there is a bit more hassle involved): https://paveltyk.medium.com/elixir-remote-shell-to-a-dockerized-release-on-heroku-cc6b1196c6ad

Kafka on EC2 instance for integration testing

I'm trying to set up some integration tests for a part of our project that makes use of Kafka. I've chosen to use the spotify/kafka Docker image, which contains both Kafka and ZooKeeper.
I can run my tests (and they pass!) on a local machine if I run the Kafka container as described on the project site. When I try to run it on my EC2 build server, however, the container dies. The final fatal error is "INFO gave up: kafka entered FATAL state, too many start retries too quickly".
My suspicion is that it doesn't like the address passed in. I've tried using both the public and the private IP address that EC2 provides, but the results are the same either way, just as with localhost.
Any help would be appreciated. Thanks!
It magically works now, even though I'm still doing exactly what I was doing before. However, in order to help others who might come along, I will post what I did to get it to work.
I created the following shell script and have Jenkins run it as a build step.
#!/bin/bash
# Start the Kafka container only if it doesn't exist yet; otherwise reuse it.
if ! docker inspect -f 1 test_kafka &>/dev/null; then
  docker run -d --name test_kafka -p 2181:2181 -p 9092:9092 \
    --env ADVERTISED_HOST=localhost --env ADVERTISED_PORT=9092 spotify/kafka
fi
Even though localhost resolves to the private IP address, it seems to accept it now. The if block just tests whether the container already exists, and reuses it if so.
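As a cheap smoke test before the integration tests run, you can probe the two advertised ports (ruok is ZooKeeper's standard four-letter health command; it answers imok when healthy):
echo ruok | nc localhost 2181
nc -z localhost 9092 && echo "kafka port open"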
