Can't create external initiators from the Chainlink CLI

We're trying to add external initiators to our Chainlink containers deployed in a GKE cluster, following the docs: https://docs.chain.link/docs/external-initiators-in-nodes/
I log into the pod:
kubectl exec -it -n chainlink chainlink-75dd5b6bdf-b4kwr -- /bin/bash
And there I attempt to create external initiators:
root@chainlink-75dd5b6bdf-b4kwr:/home/root# chainlink initiators create xxx xxx
No help topic for 'initiators'
I don't even see initiators among the Chainlink CLI options:
root@chainlink-75dd5b6bdf-b4kwr:/home/root# chainlink
NAME:
chainlink - CLI for Chainlink
USAGE:
chainlink [global options] command [command options] [arguments...]
VERSION:
0.9.10@7cd042c1a94c57219ed826a6eab46752d63fa67a
COMMANDS:
admin Commands for remotely taking admin related actions
attempts, txas Commands for managing Ethereum Transaction Attempts
bridges Commands for Bridges communicating with External Adapters
config Commands for the node's configuration
job_specs Commands for managing Job Specs (jobs V1)
jobs Commands for managing Jobs (V2)
keys Commands for managing various types of keys used by the Chainlink node
node, local Commands for admin actions that must be run locally
runs Commands for managing Runs
txs Commands for handling Ethereum transactions
help, h Shows a list of commands or help for one command
GLOBAL OPTIONS:
--json, -j json output as opposed to table
--help, -h show help
--version, -v print the version
Chainlink version 0.9.10.
Could you please clarify what I am doing wrong?

You need to make sure you have the FEATURE_EXTERNAL_INITIATORS environment variable set to true in your .env file as such:
FEATURE_EXTERNAL_INITIATORS=true
This will open up access to the initiators command in the Chainlink CLI and you can resume the instructions from there.
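For example, a minimal sketch (the initiator name and URL below are placeholders; the node must be restarted after changing the environment so the flag takes effect):
# In the .env file (or the pod/container environment):
FEATURE_EXTERNAL_INITIATORS=true
# Then, inside the pod:
chainlink initiators create my-initiator http://my-initiator-svc:8080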

Related

How to configure Docker Swarm using Jenkins?

I have got an assignment. The assignment is: "Write a shell script to install and configure Docker Swarm (one master/leader and one node) and automate the process using Jenkins." I am new to this technology and finding it difficult to proceed. Can anyone help me by explaining the step-by-step process of how to proceed?
@Rajnish Kumar Singh, have you tried to check resources online? I understand you are very new to this technology, but googling some keywords like
what is docker swarm
what is jenkins
etc. would definitely help.
Having said that, you basically need to do the below set of steps to complete your assignment:
Prerequisites
Two or more Ubuntu 20.04 servers
(You can use any Linux distro, like Ubuntu, Red Hat, etc., but make sure your install and execution commands change accordingly. Here we need two nodes mainly to configure the manager and worker node cluster.)
E.g.:
manager --- 132.92.41.4
worker --- 132.92.41.5
You can create these nodes with any public cloud provider, e.g. AWS EC2 instances or GCP VMs.
Next, you need to do the below set of steps:
Configure Hosts
Install Docker-ce
Docker Swarm Initialization
You can refer this article for more info https://www.howtoforge.com/tutorial/ubuntu-docker-swarm-cluster/
This completes first part of your assignment.
Next, you can create one small shell script and include all those install and configuration commands in it. A shell script is basically a collection of Linux commands: instead of running each command separately, you run the script alone and the whole setup is done for you.
You can create the script using the touch command:
touch docker-swarm-install.sh
Give the script the proper permissions to make it executable:
chmod +x docker-swarm-install.sh
Next, include in the script all the install + configure commands you used earlier for the Docker Swarm setup (you can refer to the link shared above); a minimal sketch is below.
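A rough sketch of what the script body could look like, assuming Ubuntu 20.04 and the example manager IP above (adapt packages and IPs to your environment):
#!/bin/bash
set -e
# Install Docker (distro package; you can also follow the official docker-ce repo setup)
sudo apt-get update
sudo apt-get install -y docker.io
# Initialize the swarm on the manager node (132.92.41.4 in the example above)
sudo docker swarm init --advertise-addr 132.92.41.4
# Print the join command that the worker (132.92.41.5) must run
sudo docker swarm join-token worker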
Now, when your script is ready, you can configure it in a Jenkins job; whenever the Jenkins job runs, the script executes and the Docker Swarm cluster gets created.
You need a Jenkins server. Jenkins is open-source software; you can install it on any public cloud instance (e.g. AWS EC2).
Reference : https://devopsarticle.com/how-to-install-jenkins-on-aws-ec2-ubuntu-20-04/
Next, once the installation is complete, you need to configure a job in Jenkins.
Reference : https://www.toolsqa.com/jenkins/jenkins-build-jobs/
Add your 'docker-swarm-install.sh' as a build step in the created job.
Reference : https://faun.pub/jenkins-jobs-hands-on-for-the-different-use-cases-devops-b153efb483c7
If all the setup is successful, then when you run your Jenkins job, your Docker Swarm cluster will get created; see the build-step sketch below.
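For example, the 'Execute shell' build step of a freestyle job could simply contain (assuming the script is checked into the job's workspace):
chmod +x docker-swarm-install.sh
./docker-swarm-install.sh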

Run a PowerShell script on Azure AKS nodes

I have a PowerShell script that I want to run on some Azure AKS nodes (running Windows) to deploy a security tool. The software vendor does not provide a DaemonSet for this. How would I get it done?
Thanks a million
Abdel
A similar question has been asked here. User philipwelz wrote:
Hey,
although there could be ways to do this, I would recommend that you don't. The reason is that your AKS setup should not allow executing scripts inside containers directly on AKS nodes. This would imply a huge security issue IMO.
I suggest finding a way to execute your script directly on your nodes, for example with PowerShell remoting or any way that suits you.
BR,
Philip
This user is right. You should avoid executing scripts on your AKS nodes. In your situation, if you want to deploy Prisma Cloud, you need to follow this doc. You are right that install scripts work only on Linux:
Install scripts work on Linux hosts only.
But for the Windows and macOS software you have specific YAML files:
For macOS and Windows hosts, use twistcli to generate Defender DaemonSet YAML configuration files, and then deploy it with kubectl, as described in the following procedure.
The entire procedure is described in detail in the document I have quoted. Pay attention to step 3 and step 4. As you can see, there is no need to run any PowerShell script:
STEP 3:
Generate a defender.yaml file, where:
The following command connects to Console (specified in --address) as user <ADMIN> (specified in --user), and generates a Defender DaemonSet YAML config file according to the configuration options passed to twistcli. The --cluster-address option specifies the address Defender uses to connect to Console.
$ <PLATFORM>/twistcli defender export kubernetes \
--user <ADMIN_USER> \
--address <PRISMA_CLOUD_COMPUTE_CONSOLE_URL> \
--cluster-address <PRISMA_CLOUD_COMPUTE_HOSTNAME>
- <PLATFORM> can be linux, osx, or windows.
- <ADMIN_USER> is the name of a Prisma Cloud user with the System Admin role.
and then STEP 4:
kubectl create -f ./defender.yaml
I think that the above answer is not completely correct.
The twistcli command does not export a DaemonSet for Windows nodes. The PLATFORM option selects the OS of the computer on which the command runs.
After testing, I have concluded that there is no Docker image for Prisma Cloud for Windows Kubernetes nodes, as on Windows it is deployed as a service on the OS, not as a container (as on Linux). Wrapping up, the DaemonSet does not work on Windows hosts.
I believe the only solution is this -> Windows
This is the PowerShell script that Wytrzymały Wiktor mentioned.
Unfortunately this cannot be automated easily, as you have to deploy an Azure VM per AKS cluster (in the same network), then RDP into the AKS Windows node and run the script.
If anyone has another suggestion or solution, feel free to share.

NEAR Mainnet Archival Node Setup

I tried setting up a NEAR mainnet archival node using Docker by following this documentation: https://github.com/near/nearup#building-the-docker-image. The docker run command in the document does not specify any port.
So I ran docker run without any port; when I check with docker ps it does not show any port, but the neard node runs.
I did not find any docs on the node APIs. Can we use the archival APIs (https://docs.near.org/docs/api/rpc) to query the node?
Docker run command used to set up archival mainnet node:
sudo docker run -d -v $PWD:/root/.near --name nearup nearprotocol/nearup run mainnet
JSON RPC on nearcore is exposed on port 3030.
As for running an archival node, you might be interested in this doc page: https://docs.near.org/docs/roles/integrator/exchange-integration#steps-to-start-archive-node
P.S. nearup is considered somewhat old, though it is still in use.
I have updated the documentation for nearup to specify the port binding for RPC now: https://github.com/near/nearup#building-the-docker-image
You can use the following command:
docker run -v $HOME/.near:/root/.near -p 3030:3030 --name nearup nearprotocol/nearup run mainnet
And you can validate nearup is running and the RPC /status endpoint is available by running:
docker exec nearup nearup logs
and
curl 0.0.0.0:3030/status
Also, please make sure that you have changed ~/.near/mainnet/config.json to contain the variable:
{
  ...
  "archive": true,
  ...
}
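Once the node is running, you can also query it over JSON RPC directly; for example, a sketch using the block method from the RPC docs linked in the question:
# Fetch the latest final block via JSON RPC on port 3030
curl -s -X POST 0.0.0.0:3030 \
  -H 'Content-Type: application/json' \
  -d '{"jsonrpc": "2.0", "id": "dontcare", "method": "block", "params": {"finality": "final"}}'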

How to run an Azure container job under a specific user in the container

Running a container job in an Azure pipeline, I use a Docker image (conan) which expects the build commands to be run as the conan user.
While I'm able to bootstrap the container in Azure with --user root without issues, using the options:
resources:
  containers:
  - container: builder
    image: conanio/clang8
    options: --user root
When I run a job:
jobs:
- job: do_that
  container: builder
  steps:
  - task: Bash@3
    inputs:
      targetType: inline
      script: whoami
      noProfile: false
      noRc: false
I see that the user is 1001, which has been created by the Azure bootstrap. I cannot use sudo / su since the user is not allowed to use sudo. I ask myself: how would I ever run as a different user? The conan user has a specific ENV setup due to Python shims for conan, special setup in ~/.conan, and all those kinds of things.
The exact steps azp runs automatically during "container initialization" (right after docker create), using docker exec, are:
# Grant user 'conan' SUDO privilege and allow it run any command without authentication.
groupadd azure_pipelines_sudo
usermod -a -G azure_pipelines_sudo conan
su -c "echo '%azure_pipelines_sudo ALL=(ALL:ALL) NOPASSWD:ALL' >> /etc/sudoers"
# Allow user 'conan' to run any docker command without SUDO.
stat -c %g /var/run/docker.sock          # look up the gid that owns the docker socket (117 here)
bash -c "cat /etc/group"                 # inspect the existing groups
groupadd -g 117 azure_pipelines_docker   # create a group with that gid
usermod -a -G azure_pipelines_docker conan
The semantic idea is:
extract which user the image is designed to run as by default (in our case conan / 1000)
create a group azure_pipelines_sudo
grant this user sudo permissions without password requirements
grant the conan user permission to access the Docker socket, i.e. run docker-in-docker commands
Seeing this setup, I really wonder why the docker exec statement is then effectively run with something along the lines of
docker exec -u 1001 ..
So when the actual job is run, it is not using the user conan (1000), i.e. the one configured to have all the capabilities like sudo / docker access. If that is by design, why do steps 2-4 of the setup at all?
This looks like either a design flaw, a bug, or just a huge misunderstanding on my side.
I have seen this question, but despite what the title suggests, it is a very different question.
Right now, this is not possible. I'm not sure what this whole concept is about, but for me that is not only an issue, it is a showstopper, because one cannot work around it.
Even though it is a simple answer - at least it is one.
Update:
This is not available at present.
For your concern:
So effectively when the actual job is run, it is not using the user conan (1000) - so the one being configured to have all the capabilities like sudo / docker access - if that is by design, why doing the setup 2-4 at all?
There is some related info in our official doc that may relate to this: Azure Pipelines will docker create an awaiting container and docker exec a series of commands which expect the container to always be up and running. https://learn.microsoft.com/en-us/azure/devops/pipelines/process/container-phases?view=azure-devops&tabs=yaml#linux-based-containers
There are three different authentication tokens used by an agent:
Agent registration token: used only when registering the agent in the agent pool
Listener OAuth token: used by the agent when listening for new jobs
Job-specific OAuth token: used by the agent when running an individual job
Even though that linked question is not the same as yours, there is a comment on it that is correct:
The agent itself is set up to run as a user. When the build runs, it runs in a container as user "containeradministrator" (root), which is a docker user.
What you would like seems to be running a Docker container as a non-root user. This is actually not related to the Azure DevOps Service side; it is more related to Docker.
Kindly check whether this helps: Connect to docker container as user other than root & this blog. A plain-Docker sketch of the idea follows.
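For plain Docker (outside of Azure Pipelines), switching users at exec time is straightforward; a sketch, using the image and user from the question:
# Create and start a long-running container from the image
docker create --name builder conanio/clang8 sleep infinity
docker start builder
# Run a command as the image's intended default user instead of uid 1001
docker exec -u conan builder whoami   # prints: conan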

Start AWS EC2 instance, run commands, stream logs to console and terminate

Trying to run a few steps of CI/CD in an EC2 instance. Please don't ask for reasons.
Need to:
1) Start an instance using AWS CLI. Set few environment variables.
2) Run few bash commands.
3) Stream the output of the above commands into the console of the calling script.
4) If any of the commands fail, need to fail the calling script as well.
5) Terminate the instance.
There is an SO thread which indicates that streaming the output is not that easy. [1]
What I would do, if I had to implement this task:
Start the instance using the cli command aws ec2 run-instances and using an AMI which has the AWS SSM agent preinstalled. [2]
Run your commands using AWS SSM. [3] This has the benefit that you can run any number of commands you want, whenever you want (i.e. the commands do not have to be specified at instance launch, but can be chosen afterwards). You also get the status code of each command. [4]
Use the CloudWatch integration in SSM to stream your command output to CloudWatch logs. [5]
Stream the logs from CloudWatch to your own instance. [6]
Note: instead of streaming the command output via CloudWatch, you could also periodically poll the SSM API using aws ssm get-command-invocation. [7] A sketch of this approach follows the reference list.
Reference
[1] How to check whether my user data passing to EC2 instance working or not?
[2] Working with SSM Agent - AWS Systems Manager
[3] Walkthrough: Use the AWS CLI with Run Command - AWS Systems Manager
[4] Understanding Command Statuses - AWS Systems Manager
[5] Streaming AWS Systems Manager Run Command output to Amazon CloudWatch Logs | AWS Management Tools Blog
[6] how to view aws log real time (like tail -f)
[7] get-command-invocation — AWS CLI 1.16.200 Command Reference
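A hedged sketch of step 2 combined with the polling alternative from the note (it assumes the instance ID from the launch step and a running SSM agent; the command itself is a placeholder):
INSTANCE_ID=i-1234567890abcdef0   # from aws ec2 run-instances
COMMAND_ID=$(aws ssm send-command \
  --instance-ids "$INSTANCE_ID" \
  --document-name "AWS-RunShellScript" \
  --parameters 'commands=["echo hello from the instance"]' \
  --query "Command.CommandId" --output text)
# Poll until the command finishes; fail the caller on anything but Success.
while true; do
  STATUS=$(aws ssm get-command-invocation \
    --command-id "$COMMAND_ID" --instance-id "$INSTANCE_ID" \
    --query "Status" --output text)
  case "$STATUS" in
    Success) break ;;
    Failed|Cancelled|TimedOut) echo "SSM command status: $STATUS" >&2; exit 1 ;;
    *) sleep 5 ;;
  esac
done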
Approach 1.
Start an instance using AWS CLI.
aws ec2 start-instances --instance-ids i-1234567890abcdef0
Set few environment variables.
Use EC2 user data to set environment variables & run commands, as sketched below.
..
Run your other logic / scripts
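A sketch of such a user-data script (passed via --user-data when launching the instance; note that by default user data runs only at the first boot, as root, and for an existing stopped instance it can be updated with aws ec2 modify-instance-attribute):
#!/bin/bash
export MY_ENV_VAR="some-value"   # illustrative environment variable
# ... your build / test commands here ...
echo "CI steps finished"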
To terminate the instance, run the below commands on the instance itself:
instanceid=`curl http://169.254.169.254/latest/meta-data/instance-id`
aws ec2 terminate-instances --instance-ids $instanceid
Approach 2.
Use Python boto3, or a tool such as Chef's Test Kitchen, for the CI part.
