Setting up a three-tier environment in Puppet - Vagrant

These are my files:
Nodes.pp file
site.pp file
I need to set up the infrastructure in the diagram, and I would like to use Puppet automation to do so. I need to:
Create 4 VMs: one for the DB, one web server, one load balancer, and one Puppet master
Set them up with the Puppet agent
Find the appropriate modules/cookbooks on the community site (Puppet Forge / Chef Supermarket)
Configure the nodes using recipes/classes fetched from the community sites
Provide configuration parameters so that all these nodes connect to each other
 
The end goal is to have a working WordPress setup.
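For reference, a minimal sketch of how each agent can be pointed at the master; the FQDN puppet.example.com is a placeholder, adjust it to your master VM's hostname:
# on each agent: point the agent at the master and request a certificate
sudo puppet config set server puppet.example.com --section agent
sudo puppet agent --test
# on the master: sign the pending agent certificates
sudo puppet cert sign --all    # on Puppet 6+ use: sudo puppetserver ca sign --all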
I got stuck with the master-agent configuration process. I have a Puppet master and 3 agents up and running, but whenever I run puppet agent --test on an agent, it throws an error. I look forward to the community's help.
The error I am getting is...
[root@agent1 vagrant]# puppet agent --noop --test
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Loading facts
Error: Could not retrieve catalog from remote server: Error 400 on SERVER: Could
Warning: Not using cache on failed catalog
Error: Could not retrieve catalog; skipping run

First, take a look at the Puppet master logs.
Second: the error message is too short. Something is missing after
"Error: Could not retrieve catalog from remote server: Error 400 on SERVER: Could". The text after the "Could" can be helpful ;)
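As a sketch of where to look, assuming a puppetserver-based master (log paths differ on older Passenger/WEBrick setups):
# on the master: the full 400 message usually shows up here
sudo tail -n 100 /var/log/puppetlabs/puppetserver/puppetserver.log
# on the agent: re-run with more detail to capture the complete error
sudo puppet agent --test --debug 2>&1 | tee agent-run.log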

Related

Ambari 2.7.4 Installation Wizard problems

Ambari is built. The network on the virtual machines is set. I am trying to install a cluster with the installation wizard of the Ambari UI, but I could not get past "Get Started" to "Select Version".
There is this error in the logs:
Could not load repo results
java.io.IOException: Server returned HTTP response code: 403 for URL: http://s3.amazonaws.com/dev.hortonworks.com/HDP/hdp_urlinfo.json
I found a question with the same problem, which was not resolved.
Screenshot from UI: (not included)
It looks like you missed some steps and prerequisites before installing Ambari. 127.0.0.1 should not be used to access the UI; the Ambari docs require you to use FQDNs for all nodes and hosts.
Additionally, the 403 error above results from using versions of Ambari/HDP that Cloudera has moved behind a paywall. A username/password is now required to access those assets.
You should try with Ambari 2.7.4 and repos and artifacts that are not behind a paywall.
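A sketch of the FQDN prerequisite; the hostname, domain and IP below are placeholders:
sudo hostnamectl set-hostname ambari-master.example.com
echo "192.168.56.10 ambari-master.example.com ambari-master" | sudo tee -a /etc/hosts   # repeat for every node
hostname -f   # must print the FQDN on every host; use that FQDN (not 127.0.0.1) to reach the Ambari UI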

Laravel AWS Elastic Beanstalk deployment error - Out of memory error

I deployed my application on AWS Elastic Beanstalk. Initially I deployed it directly from the AWS console: after configuring everything, I zipped my code and uploaded it through the console, and at that time it worked perfectly.
But now, when I try to deploy with the CLI (eb deploy), it shows an error:
Creating application version archive "app-xxxxxxxxx".
Uploading: [##################################################] 100% Done...
2020-03-14 18:51:49 INFO Environment update is starting.
2020-03-14 18:51:55 INFO Deploying new version to instance(s).
2020-03-14 18:52:22 ERROR [Instance: i-xxxxxxxxx] Command failed on instance. Return code: 255 Output: (TRUNCATED)...ar/src/Composer/DependencyResolver/GenericRule.php on line 36
Fatal error: Out of memory (allocated 809508864) (tried to allocate 8192 bytes) in phar:///opt/elasticbeanstalk/support/composer.phar/src/Composer/DependencyResolver/GenericRule.php on line 36.
Hook /opt/elasticbeanstalk/hooks/appdeploy/pre/10_composer_install.sh failed. For more detail, check /var/log/eb-activity.log using console or EB CLI.
2020-03-14 18:52:22 INFO Command execution completed on all instances. Summary: [Successful: 0, Failed: 1].
2020-03-14 18:52:22 ERROR Unsuccessful command execution on instance id(s) 'i-xxxxxxxxx'. Aborting the operation.
2020-03-14 18:52:23 ERROR Failed to deploy application.
From the Internet I tried all of these things:
1) I extended the memory limit in php.ini, but it still doesn't work
2) I created an .ebextensions folder and added some configuration, but that also doesn't work
My guess: initially I deployed manually, so at that time the zip included the vendor folder as well. Now, when I deploy with the CLI, it doesn't include the vendor folder; instead it runs composer install on the instance.
So I think that's why I'm facing this issue.
Please let me know if there is anything else I should do.
This is a configuration inside the Elastic Beanstalk environment.
To increase the memory limit:
Access the Elastic Beanstalk section
Open your environment
Go to Configuration, then Software
Find the Memory limit field. The default value is 1024M.
Update the value to what you want
Apply the change
Redeploy your application
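The same setting can also be applied without the console; a hedged sketch using the AWS CLI, where the environment name and value are placeholders:
aws elasticbeanstalk update-environment \
  --environment-name my-laravel-env \
  --option-settings Namespace=aws:elasticbeanstalk:container:php:phpini,OptionName=memory_limit,Value=1536M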
When deploying with Elastic Beanstalk, we should check the environment configuration. You can set memory_limit in the Software tab (Configuration).
Follow:
Access Elastic Beanstalk
Open your environment
Go to Configuration, then the Software tab
Find the Memory limit field.
Change the value to what you want
Redeploy
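Alternatively, the limit can be committed with the code as an .ebextensions config file so every deployment picks it up; a minimal sketch assuming the PHP platform (file name and value are arbitrary):
mkdir -p .ebextensions
cat > .ebextensions/01-php-memory.config <<'EOF'
option_settings:
  aws:elasticbeanstalk:container:php:phpini:
    memory_limit: 1536M
EOF
git add .ebextensions/01-php-memory.config
git commit -m "Raise PHP memory_limit for composer install"
eb deploy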

Composer-Rest-Server not connecting

I am testing a business network I created. I ran composer-rest-server and all worked fine, then shut the server down as suggested in the developer guide. I then used yo hyperledger-composer to create the skeleton of the Angular app. Now the Angular app is showing in the local browser, but the composer-rest-server is not.
Expected Behavior:
I should be able to start the composer-rest-server on localhost:3000 and the Angular app as well.
Actual Behavior:
I get this message:
Discovering types from business network definition ...
Connection fails: Error: Error trying to ping. Error: Error trying to query chaincode. Error: Connect Failed
It will be retried for the next request.
Exception: Error: Error trying to ping. Error: Error trying to query chaincode. Error: Connect Failed
Error: Error trying to ping. Error: Error trying to query chaincode. Error: Connect Failed
at _checkRuntimeVersions.then.catch (/home/node/.nvm/versions/node/v6.11.2/lib/node_modules/composer-rest-server/node_modules/composer-connector-hlfv1/lib/hlfconnection.js:696:34)
Your Environment
composer-cli@0.11.3
generator-hyperledger-composer@0.11.3
composer-rest-server@0.11.3
Docker version 17.06.0-ce, build 02c1d87
docker-compose version 1.13.0, build 1719ceb
The Problem
If you kill your Fabric instance using ./stopFabric (after starting it with the ./startFabric command), then all the containers that were part of the business network are killed as well, and therefore you need to reinstall the .bna and start the network again. (The development flow provided is purposely volatile for rapid development.)
The Solution
1.) Type docker ps to see all of your running containers. You should see none if you are getting that error because your peer is not responding to pings
2.) Open a separate terminal, navigate to where you have fabric-dev-servers, and run ./startFabric. This will start all the containers, like your network Certificate Authority, the peer, the orderer, etc.
3.) Return to your project in another terminal. Do Steps 1 & 2 from the developer tutorial (you likely won't need to do Step 3 since you already imported the network administrator identity while going through the tutorial).
4.) Run composer network ping --card admin@tutorial-network. The ping should go through.
5.) Run docker ps. You should see 4 containers running
6.) Run composer-rest-server and follow the steps from the tutorial.
7.) Run cd tutorial-network-app to switch to where your angular application is (or wherever you generated it with the yo command)
8.) Navigate to http://localhost:3000 and everything should work.
Any other questions or problems just reply here and I can help.
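A condensed recap of the sequence above, assuming the fabric-dev-servers scripts and the admin@tutorial-network card name from the tutorial:
cd ~/fabric-dev-servers
./startFabric.sh                      # restart the CA, orderer, peer and couchdb containers
docker ps                             # the Fabric containers should be listed again
# re-deploy your .bna per Steps 1 & 2 of the developer tutorial (exact commands depend on your composer version)
composer network ping --card admin@tutorial-network
composer-rest-server                  # answer the prompts, then open http://localhost:3000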
The expected behaviour is that the REST server is already running (the generator uses LoopBack to spin up a REST server; that's why you shut down the previous REST server). It's described here https://hyperledger.github.io/composer/unstable/tutorials/developer-guide.html under 'Generate your Skeleton Web Application'.
After you have created the application (following completion of the yo hyperledger-composer questions and after providing the answers), you run it using npm start from within the generated application directory. Your app is accessible at http://localhost:4200.

Impala: The Cloudera Manager Agent got an unexpected response from this role's web server

I have done a Hadoop cluster installation with Cloudera Manager. After this installation, the Impala status has become bad.
I have the following error for the master node:
Web Server Status
and this one for nodes with the Impala daemon:
Impala Daemon Ready Check, Web Server Status
Looking into the logs, I have found some errors:
The health test result for IMPALAD_WEB_METRIC_COLLECTION has become bad: The Cloudera Manager Agent got an unexpected response from this role's web server.
Looking into cloudera-scm-agent.log, there are these errors:
1261 Monitor-HostMonitor throttling_logger ERROR (29 skipped) Failed to collect NTP metrics
I tried to install NTP (sudo apt-get install ntp), but after this installation HDFS, Hive, YARN and other services go bad; removing it, only Impala goes bad.
MainThread agent ERROR Failed to connect to previous supervisor.
Another error is this:
Monitor-GenericMonitor throttling_logger ERROR Error fetching metrics at 'http://nodo-1:50075/jmx'
I checked all the hostnames and they seem correct...
So, what is this problem? How can I solve it?
I also had a problem with NTP. The problem still existed after installing NTP, but when I ran sudo service ntp restart the error was fixed.
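A sketch of that fix on a Debian/Ubuntu host, with a quick check that time is actually syncing:
sudo apt-get install -y ntp
sudo service ntp restart
ntpq -p    # the peer list should show the host syncing with upstream NTP servers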

Knife ec2: need to avoid re-bootstrapping of server after hostname change

I might be doing something wrong, but here is the situation. Standalone Chef server 12.3.0. CentOS 6.3 running on AWS.
During execution of knife bootstrap I am applying the hostname::default recipe to change the server's FQDN, along with some other recipes. Everything seems to be fine: the Chef server shows the newly bootstrapped instance, but the Node Name column is still showing the old FQDN, something like ip-x-x-x-x.aws-region-name.compute.internal.
Then, when I SSH into this host and run chef-client, I get the following error:
[ec2-user@newHostName ~]$ sudo chef-client
Starting Chef Client, version 12.3.0
Chef encountered an error attempting to load the node data for "newHostName"
Authentication Error:
----------------
Failed to authenticate to the chef server (http 401).
Server Response:
----------------
Failed to authenticate as 'newHostName'. Ensure that your node_name and client key are correct.
Relevant Config Settings:
-------------------------
chef_server_url "https://chefServerDomain/organizations/organizationName"
node_name "newHostName"
client_key "/etc/chef/client.pem"
If these settings are correct, your client_key may be invalid, or
you may have a chef user with the same client name as this node.
[2015-05-04T12:36:03-07:00] FATAL: Stacktrace dumped to /var/chef/cache/chef-stacktrace.out
Chef Client failed. 0 resources updated in 0.962848623 seconds
[2015-05-04T12:36:03-07:00] ERROR: 401 "Unauthorized"
[2015-05-04T12:36:03-07:00] FATAL: Chef::Exceptions::ChildConvergeError: Chef run process exited unsuccessfully (exit code 1)
I have checked closed issue #8 on GitHub, according to which I need to manually change the client.rb file and include the node_name parameter. At the same time, the Chef client.rb documentation indicates that I should not do that:
node_name is used to determine which configuration should be applied
and to set the client_name (which is the name used when
authenticating to a Chef server). The default value is set
automatically to be the FQDN of the chef-client, as detected by
Ohai. In general, leaving this setting blank and letting Ohai
assign the FQDN of the node as the node_name during each chef-client
run is the recommended approach.
After cleaning up the /etc/chef/* folder, removing this instance from the Chef server and re-bootstrapping the EC2 instance, I was able to make it work. The FQDN was displayed correctly in the Chef server under the Node Name column as newServerName.
Could you please advise what I should do to avoid double bootstrapping?
Pass the node name you want the node to use with "-N hostname" to the bootstrap command. Then it will register properly with the final node name.
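A hedged sketch of what that bootstrap call can look like; the IP, SSH user and run-list are placeholders based on the question:
# -N sets the node/client name registered with the Chef server, so the node
# keeps its final hostname from the first run and never needs re-bootstrapping
knife bootstrap 203.0.113.10 -x ec2-user --sudo \
  -N newHostName \
  -r 'recipe[hostname::default]'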
