Error in communication between salt-minion and salt-master - amazon-ec2

I have installed a minion and a master on two EC2 instances and started configuring them using the configuration docs provided by SaltStack (link below):
https://docs.saltstack.com/en/latest/ref/configuration/index.html
Then I ran into an error like the one below:
(error screenshot omitted)
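Since the screenshot didn't survive, here is a minimal way to verify master/minion connectivity on EC2; the commands are standard Salt CLI, while the master address and the security-group note are assumptions based on Salt's defaults:

# On the minion: point /etc/salt/minion at the master, e.g. a line such as
#   master: <master-private-ip-or-dns>
sudo systemctl restart salt-minion

# On the master: accept the minion's key and test the channel.
sudo salt-key -L          # list accepted/pending keys
sudo salt-key -A -y       # accept all pending keys
sudo salt '*' test.ping   # should return True for each minion

# On EC2, the master's security group must allow inbound TCP 4505-4506
# (Salt's default publish/return ports) from the minion.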

Related

DataHub installation on Minikube failing: "no matches for kind "PodDisruptionBudget" in version "policy/v1beta1"" on elasticsearch setup

I'm following the DataHub-with-Kubernetes deployment guide from the documentation: https://datahubproject.io/docs/deploy/kubernetes
Setting up the local cluster with Minikube, I started by following the prerequisites section of the guide.
At first I tried changing some of the default values to run it locally (I had already installed it successfully on Google Kubernetes Engine, so I was trying different setups).
But on the first step of the installation I received this error:
Error: INSTALLATION FAILED: unable to build kubernetes objects from release manifest: resource mapping not found for name: "elasticsearch-master-pdb" namespace: "" from "": no matches for kind "PodDisruptionBudget" in version "policy/v1beta1"
ensure CRDs are installed first
The steps I followed after installing Minikube were exactly the ones presented on the page:
helm repo add datahub https://helm.datahubproject.io/
helm install prerequisites datahub/datahub-prerequisites
The error happens on step 2 (the helm install of the prerequisites).
At first I reverted to the default configuration to rule out a mistake in my new values, but the error remained.
I expected that following the exact default steps would make the installation succeed locally, just as it did on GKE.
I got help in the DataHub Slack community and figured out a way to fix this error.
It was simply a Kubernetes version mismatch; I fixed it by forcing Minikube to start with Kubernetes 1.19.0:
minikube start --kubernetes-version=v1.19.0
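For context, PodDisruptionBudget moved to policy/v1 in Kubernetes 1.21 and the policy/v1beta1 version was removed in 1.25, so a chart that still emits the v1beta1 kind only installs on older clusters; pinning Minikube to 1.19.0 sidesteps that. A quick way to check what a cluster actually serves (standard kubectl, shown as a sketch):

kubectl api-versions | grep policy     # lists the policy/* API versions the cluster serves
kubectl explain poddisruptionbudget    # shows the API version kubectl resolves PDBs to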

Keystone exception during Openstack's Freezer installation

I'm trying to install OpenStack on a virtual machine for a project, but I'm having issues with the last two steps described in https://docs.openstack.org/freezer/latest/install/install-ubuntu.html#finalize-installation.
First I installed Devstack following the instructions in https://docs.openstack.org/devstack/latest/#install-linux and now I'm trying to add the Freezer plugin. I followed the guide, but when I run the command to start the freezer scheduler I get the following error:
stack@node1:~/freezer/freezer$ sudo freezer-scheduler --config-file scheduler.conf start
2021-09-09 18:10:38.213 2377889 ERROR freezer.scheduler.freezer_scheduler [-] Could not find requested endpoint in Service Catalog.: keystoneauth1.exceptions.catalog.EndpointNotFound: Could not find requested endpoint in Service Catalog.
Could not find requested endpoint in Service Catalog.
Running openstack endpoint list shows these endpoints:
(endpoint list screenshot omitted)
I believe the exception means that no Keystone endpoint is found, but there is one in the list (it was added automatically during the Devstack installation).
Running the scheduler with the Keystone auth options passed explicitly on the command line:
sudo freezer-scheduler --config-file /etc/freezer/scheduler.conf --os-auth-url http://vip:5000/v3 --os-project-name xxxxxx --os-username xxxxxx --os-password xxxxxx --os-user-domain-name default --os-project-domain-name default start
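Keystone clients also read the standard OS_* environment variables, so the same credentials can be exported once instead of repeated as flags. This is a sketch using the same masked placeholder values as above, and it assumes freezer-scheduler picks the variables up the way other OpenStack clients do:

export OS_AUTH_URL=http://vip:5000/v3
export OS_PROJECT_NAME=xxxxxx
export OS_USERNAME=xxxxxx
export OS_PASSWORD=xxxxxx
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_DOMAIN_NAME=default
sudo -E freezer-scheduler --config-file /etc/freezer/scheduler.conf start   # -E preserves the exported variables under sudo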

Error syncing pod on starting Beam - Dataflow pipeline from docker

We consistently get an error when starting our Beam Go SDK pipeline (driver program) from a Docker image, even though the same program works when started locally or from a VM instance. We use the Dataflow runner for our pipeline and Kubernetes to deploy.
LOCAL SETUP:
We have the GOOGLE_APPLICATION_CREDENTIALS variable set to a service account for our GCP cluster. When we run the job locally, it gets submitted to Dataflow and completes successfully.
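A minimal sketch of the kind of local launch described above; the key file path, project, region, and bucket are illustrative placeholders, not the poster's actual values:

export GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account.json
go run ./main.go \
  --runner=dataflow \
  --project=my-gcp-project \
  --region=us-central1 \
  --staging_location=gs://my-bucket/staging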
DOCKER SETUP:
The build image used is FROM golang:1.14-alpine. When we package the same program with a Dockerfile and try to run it, it fails with this error:
User program exited: fork/exec /bin/worker: no such file or directory
On checking Stackdriver logs for more details, we see this:
Error syncing pod 00014c7112b5049966a4242e323b7850 ("dataflow-go-job-1-1611314272307727-
01220317-27at-harness-jv3l_default(00014c7112b5049966a4242e323b7850)"),
skipping: failed to "StartContainer" for "sdk" with CrashLoopBackOff:
"back-off 2m40s restarting failed container=sdk pod=dataflow-go-job-1-
1611314272307727-01220317-27at-harness-jv3l_default(00014c7112b5049966a4242e323b7850)"
We found a reference to this error in the Dataflow common errors doc, but it is too generic to tell what's failing. After multiple retries, we ruled out any permission/access-related issues with the pods. We're not sure what else could be the problem here.
After multiple attempts, we started the job manually from a fresh Debian 10 based VM instance, and it worked. That made us realize the Alpine-based golang image in our Dockerfile may be missing dependencies required to start the job.
On the golang Docker Hub page we found golang:1.14-buster, where buster is the codename for Debian 10. Using that image for the docker build solved the issue. Self-answering here to help anyone else facing the same problem.
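In practice the fix is just swapping the base image in the Dockerfile and rebuilding; the image tags come from the answer above, while the Dockerfile line placement and the image name are illustrative:

# In the Dockerfile, replace the Alpine base image:
#   FROM golang:1.14-alpine
# with the Debian 10 (buster) one:
#   FROM golang:1.14-buster
docker build -t my-beam-driver .   # image name is illustrative

A likely reason Alpine breaks here is that it ships musl libc rather than glibc, so a dynamically linked worker binary can fail with exactly this kind of fork/exec ... no such file or directory error.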

Sandbox IP mapping not working on HDP Sandbox

I have downloaded the latest HDP 2.6.5 sandbox from the Hortonworks website. Following the instructions in the section 'MAP SANDBOX IP TO YOUR DESIRED HOSTNAME IN THE HOSTS FILE' from the link
https://hortonworks.com/tutorial/learning-the-ropes-of-the-hortonworks-sandbox/#determine-ip-address-of-your-sandbox
I added the details to the hosts file. When I try the site sandbox.hortonworks.com, or any of the others mentioned in the mapping, I get the error 'Site can't be reached'.
I am using a MacBook and running the sandbox on VirtualBox. I am able to log in to the command line at http://localhost:4200 and to Ambari at http://localhost:8080/#/login.
I just want to know why I cannot get it working using sandbox.hortonworks.com.
Just add the port after your link:
sandbox.hortonworks.com:4200
or
sandbox.hortonworks.com:8080
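Putting the two pieces together: the hosts-file entry only maps the hostname to an IP, while the services still listen on the forwarded ports, so the port has to stay in the URL. A typical mapping for the VirtualBox sandbox looks like this (127.0.0.1 is the tutorial's NAT-forwarding default and an assumption here):

# /etc/hosts on the Mac host
127.0.0.1   sandbox.hortonworks.com

# Then browse with the port included:
#   http://sandbox.hortonworks.com:8080/#/login   (Ambari)
#   http://sandbox.hortonworks.com:4200           (web shell)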

Setting up a three tier environment in puppet

These are my files: Nodes.pp and site.pp (file contents not included).
I need to set up the infrastructure in the diagram, and I would like to use Puppet automation to do so. I would need to:
Create 4 VMs: one for the DB, one web server, one load balancer, and one master
Set them up with Puppet agent (see the bootstrap sketch after this list)
Find the appropriate modules/cookbooks from the community sites (Puppet Forge / Chef Supermarket)
Configure the nodes using recipes/classes fetched from the community sites
Provide configuration parameters so that all these nodes connect to each other
The end goal is to have a working WordPress setup.
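A minimal sketch of the agent-to-master bootstrap for the second step; the hostnames are illustrative, and the ca subcommands assume Puppet 6+ (older releases use puppet cert instead):

# On each agent VM: point the agent at the master and request a certificate.
sudo puppet config set server puppet-master.example.com --section agent
sudo puppet agent --test

# On the master: list and sign the pending certificate request.
sudo puppetserver ca list
sudo puppetserver ca sign --certname agent1.example.com

# Back on the agent: a second run should now fetch and apply the catalog.
sudo puppet agent --test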
I got stuck in the master/agent configuration process. I have a Puppet master and 3 agents up and running, but whenever I run puppet agent --test on an agent, it throws an error. I look forward to the community's help.
The error I am getting is:
[root@agent1 vagrant]# puppet agent --noop --test
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Loading facts
Error: Could not retrieve catalog from remote server: Error 400 on SERVER: Could
Warning: Not using cache on failed catalog
Error: Could not retrieve catalog; skipping run
First, take a look at the Puppet master logs.
Second, the error message is too short; something is missing after
Error: Could not retrieve catalog from remote server: Error 400 on SERVER: Could
The text after the "Could" can be helpful ;)
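A sketch of how to recover the missing part of the message, assuming a standard Puppet Server install (the log path is the puppetlabs default and may differ on your system):

# On the agent: rerun with debug output to capture the full 400 response.
sudo puppet agent --test --debug

# On the master: watch the server log while the agent runs.
sudo tail -f /var/log/puppetlabs/puppetserver/puppetserver.log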
