Foreman with puppet node.rb error 404 Not Found - ruby

I have installed Foreman 1.2 with Puppet, and after the installation I registered my Puppet smart proxy in Foreman.
When I run the following command:
[root@puppet ~]# puppet agent -t
Warning: Unable to fetch my node definition, but the agent run will continue:
Warning: Error 400 on SERVER: Failed to find puppet.example.com via exec: Execution of '/etc/puppet/node.rb puppet.example.com' returned 1: --- false
Info: Retrieving plugin
Error: Could not retrieve catalog from remote server: Error 400 on SERVER: Failed when searching for node puppet.example.com: Failed to find puppet.example.com via exec: Execution of '/etc/puppet/node.rb puppet.example.com' returned 1: --- false
Warning: Not using cache on failed catalog
Error: Could not retrieve catalog; skipping run
I tried the following:
[root@puppet ~]# /etc/puppet/node.rb puppet.example.com
--- false
Error retrieving node puppet.example.com: Net::HTTPNotFound
foreman.log (debug):
Started GET "/node/puppet.example.com?format=yml" for 10.101.20.15 at 2014-03-25 21:01:47 -0400
Processing by HostsController#externalNodes as YML
Parameters: {"name"=>"puppet.example.com"}
Setting Load (1.3ms) SELECT `settings`.* FROM `settings` WHERE `settings`.`name` = 'restrict_registered_puppetmasters' ORDER BY name LIMIT 1
Setting Load (0.3ms) SELECT `settings`.* FROM `settings` WHERE `settings`.`name` = 'require_ssl_puppetmasters' ORDER BY name LIMIT 1
SmartProxy Load (0.5ms) SELECT `smart_proxies`.* FROM `smart_proxies` INNER JOIN `features_smart_proxies` ON `features_smart_proxies`.`smart_proxy_id` = `smart_proxies`.`id` INNER JOIN `features` ON `features`.`id` = `features_smart_proxies`.`feature_id` WHERE `features`.`name` = 'Puppet' ORDER BY smart_proxies.name
Setting Load (0.3ms) SELECT `settings`.* FROM `settings` WHERE `settings`.`name` = 'trusted_puppetmaster_hosts' ORDER BY name LIMIT 1
Verifying request from ["puppet.example.com"] against ["puppet.example.com"]
User Load (0.4ms) SELECT `users`.* FROM `users` WHERE `users`.`login` = 'admin' LIMIT 1
Setting current user thread-local variable to admin
Host::Managed Load (0.7ms) SELECT `hosts`.* FROM `hosts` WHERE `hosts`.`type` IN ('Host::Managed') AND `hosts`.`certname` = 'puppet.example.com' LIMIT 1
Host::Managed Load (0.6ms) SELECT `hosts`.* FROM `hosts` WHERE `hosts`.`type` IN ('Host::Managed') AND `hosts`.`name` = 'puppet.example.com' LIMIT 1
Completed 404 Not Found in 25ms (ActiveRecord: 4.1ms)
Am I missing something? Do I need to create the host first in the Foreman GUI? I don't understand the concept of node.rb.

First, you should check the contents of node.rb. There are a number of variables in it that need to be set for it to work. It looks like this hasn't been done, since the Net::HTTPNotFound means the lookup against your Foreman server came back with a 404 Not Found.
Second, yes and no -- the host needs to be defined in Foreman first. If the host doesn't exist in Foreman yet, Foreman "may" create it -- it really depends on how you've set up Foreman.
If memory serves, a non-existent host will be created when the facts are uploaded by node.rb (if that is enabled). If you're just running it from the command line, then no facts are being uploaded and the host isn't being created.
For your testing, ensure the host is created in Foreman. Then test node.rb (after you check that the variables in it are set properly).
EDIT:
Your last question: node.rb's main function is to fetch the YAML-formatted configuration for a server from Foreman and hand it to Puppet. Secondarily, it also uploads facts from the server to Foreman, which can be used to classify the server in Foreman.
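For reference, those variables live in a SETTINGS hash near the top of node.rb. The exact keys differ between Foreman versions, so treat the following as an illustrative sketch only; the URL and certificate paths are placeholders you need to replace with your own:
SETTINGS = {
  # URL of your Foreman server; node.rb requests /node/<fqdn>?format=yml from it
  :url       => "https://foreman.example.com",
  # Puppet directory used for caching and for the SSL files below
  :puppetdir => "/var/lib/puppet",
  # whether to upload facts to Foreman on each run
  :facts     => true,
  :timeout   => 10,
  # client SSL settings, needed if Foreman restricts access to registered puppetmasters
  # (all three paths are placeholders)
  :ssl_ca    => "/var/lib/puppet/ssl/certs/ca.pem",
  :ssl_cert  => "/var/lib/puppet/ssl/certs/puppet.example.com.pem",
  :ssl_key   => "/var/lib/puppet/ssl/private_keys/puppet.example.com.pem",
}
After editing, run /etc/puppet/node.rb puppet.example.com by hand again; once the settings are correct and the host exists in Foreman, it should print the node's YAML (classes, parameters, environment) instead of --- false.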

Basically you have to make sure that the master and the agent can resolve each other's names (either via /etc/hosts or DNS). This error is usually raised when the master cannot resolve the agent's name (e.g. puppet.example.com).
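For example, an entry in /etc/hosts on the master along these lines (the IP address here is only illustrative, taken from the foreman.log excerpt above), or an equivalent A record in DNS:
10.101.20.15    puppet.example.com    puppet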

Related

Chef: Re-using a previously registered node (AMI) and bootstrapping it - not working

Our team is trying to scale out our current Elasticsearch cluster. In doing this, we took an AMI of a current Elasticsearch node and used that AMI to create the potential 4th node. In the past, Chef was used to configure new Elasticsearch nodes; however, the designer of those recipes has left our team and we are at a loss. When I try to bootstrap the new host, I get the error below:
Recipe Compile Error in /var/chef/cache/cookbooks/av_elastic/recipes/elastic_cluster.rb
================================================================================
Net::HTTPServerException
------------------------
400 "Bad Request"
Cookbook Trace:
---------------
/var/chef/cache/cookbooks/av_elastic/recipes/elastic_cluster.rb:134:in `from_file'
Relevant File Content:
----------------------
/var/chef/cache/cookbooks/av_elastic/recipes/elastic_cluster.rb:
127: message "Block devices available to Elasticsearch: #{devices}"
128: level :warn
129: end
130:
131: ## Gather Available Nodes within same es-cluster-name, Chef Environment, and elastic_cluster role. Exclude marvel nodes
132: elasticsearch_cluster_nodes = Array.new
133: elasticsearch_cluster_node_names = Array.new
134>> search(:node, "chef_environment:#{node.chef_environment} AND roles:*elastic_cluster AND es-cluster-name:#{node['es-cluster-name']} NOT roles:*elastic_marvel").each do |node|
135: elasticsearch_cluster_nodes << node
136: elasticsearch_cluster_node_names << node['hostname']
137: end
138:
Using the debug option, I can see this in the chef-client output:
[2020-11-24T15:58:00+00:00] DEBUG: ---- HTTP Response Body ----
[2020-11-24T15:58:00+00:00] DEBUG: {"error":["invalid search query: 'chef_environment:production AND roles:*elastic_cluster AND es-cluster-name: NOT roles:*elastic_marvel'"]}
[2020-11-24T15:58:00+00:00] DEBUG: ---- End HTTP Response Body -----
[2020-11-24T15:58:00+00:00] DEBUG: Chef::HTTP calling Chef::HTTP::ValidateContentLength#handle_response
[2020-11-24T15:58:00+00:00] DEBUG: Expected JSON response, but got content-type ''
Since we are re-using a previous host, I've already updated the /etc/hosts and /etc/hostname files and removed the /etc/chef/client.pem file. I think the issue is with authentication, but I can't prove it. I also think that something might be left behind on this host that still identifies it as the other host (the one the AMI was created from).
The currently running Elasticsearch nodes, which use the same recipes as the new host, are all working as designed. Any ideas on how to fix this? Thank you in advance.
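One detail worth noting from the debug output: in the rejected query, es-cluster-name: interpolates to an empty value, which suggests node['es-cluster-name'] is not set on the rebuilt host rather than an authentication failure. A minimal, hypothetical guard placed just above the search call in elastic_cluster.rb would make that explicit:
# Hypothetical guard: the debug output shows "es-cluster-name:" interpolating to nothing,
# so fail early with a readable message if the attribute is missing on this node.
if node['es-cluster-name'].nil? || node['es-cluster-name'].to_s.empty?
  raise "node['es-cluster-name'] is not set on #{node['hostname']}; " \
        "the cluster search query cannot be built without it"
end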

Exported resources not working with puppet

I've written a module to set up the Prometheus node_exporter (here called ni_trending). Now I need to add the FQDNs of all nodes to a simple file, so declaring an exported resource makes a lot of sense here. PuppetDB is configured and working.
Here's the declaration, within my config.pp:
@@node_exporter { "${listen_address}":
hostname => $ni_trending::hostname,
listen_port => $ni_trending::listen_port,
}
When the module is applied on the node I get the following error:
Error: Could not retrieve catalog from remote server: Error 500 on
SERVER: Server Error: Evaluation Error: Error while evaluating a
Resource Statement, Invalid export in Class[Ni_trending]: {} is not a
resource on node ydixken-dev01.berlin.ni
Within the ni_trending module I'm collecting all the exported resources via:
Node_exporter <<| |>>
What is missing here?

Hive Browser Throwing Error

I am trying to run some basic queries in the Hive editor in the Hue browser, but it returns the following error, whereas my Hive CLI works fine and is able to execute queries. Could someone help me?
Fetching results ran into the following error(s):
Bad status for request TFetchResultsReq(fetchType=1,
operationHandle=TOperationHandle(hasResultSet=True,
modifiedRowCount=None, operationType=0,
operationId=THandleIdentifier(secret='r\t\x80\xac\x1a\xa0K\xf8\xa4\xa0\x85?\x03!\x88\xa9',
guid='\x852\x0c\x87b\x7fJ\xe2\x9f\xee\x00\xc9\xeeo\x06\xbc')),
orientation=4, maxRows=-1):
TFetchResultsResp(status=TStatus(errorCode=0, errorMessage="Couldn't
find log associated with operation handle: OperationHandle
[opType=EXECUTE_STATEMENT,
getHandleIdentifier()=85320c87-627f-4ae2-9fee-00c9ee6f06bc]",
sqlState=None,
infoMessages=["*org.apache.hive.service.cli.HiveSQLException:Couldn't
find log associated with operation handle: OperationHandle
[opType=EXECUTE_STATEMENT,
getHandleIdentifier()=85320c87-627f-4ae2-9fee-00c9ee6f06bc]:24:23",
'org.apache.hive.service.cli.operation.OperationManager:getOperationLogRowSet:OperationManager.java:229',
'org.apache.hive.service.cli.session.HiveSessionImpl:fetchResults:HiveSessionImpl.java:687',
'sun.reflect.GeneratedMethodAccessor14:invoke::-1',
'sun.reflect.DelegatingMethodAccessorImpl:invoke:DelegatingMethodAccessorImpl.java:43',
'java.lang.reflect.Method:invoke:Method.java:606',
'org.apache.hive.service.cli.session.HiveSessionProxy:invoke:HiveSessionProxy.java:78',
'org.apache.hive.service.cli.session.HiveSessionProxy:access$000:HiveSessionProxy.java:36',
'org.apache.hive.service.cli.session.HiveSessionProxy$1:run:HiveSessionProxy.java:63',
'java.security.AccessController:doPrivileged:AccessController.java:-2',
'javax.security.auth.Subject:doAs:Subject.java:415',
'org.apache.hadoop.security.UserGroupInformation:doAs:UserGroupInformation.java:1657',
'org.apache.hive.service.cli.session.HiveSessionProxy:invoke:HiveSessionProxy.java:59',
'com.sun.proxy.$Proxy19:fetchResults::-1',
'org.apache.hive.service.cli.CLIService:fetchResults:CLIService.java:454',
'org.apache.hive.service.cli.thrift.ThriftCLIService:FetchResults:ThriftCLIService.java:672',
'org.apache.hive.service.cli.thrift.TCLIService$Processor$FetchResults:getResult:TCLIService.java:1553',
'org.apache.hive.service.cli.thrift.TCLIService$Processor$FetchResults:getResult:TCLIService.java:1538',
'org.apache.thrift.ProcessFunction:process:ProcessFunction.java:39',
'org.apache.thrift.TBaseProcessor:process:TBaseProcessor.java:39',
'org.apache.hive.service.auth.TSetIpAddressProcessor:process:TSetIpAddressProcessor.java:56',
'org.apache.thrift.server.TThreadPoolServer$WorkerProcess:run:TThreadPoolServer.java:285',
'java.util.concurrent.ThreadPoolExecutor:runWorker:ThreadPoolExecutor.java:1145',
'java.util.concurrent.ThreadPoolExecutor$Worker:run:ThreadPoolExecutor.java:615',
'java.lang.Thread:run:Thread.java:745'], statusCode=3), results=None,
hasMoreRows=None)
This error could be due either to HiveServer2 not running or to Hue not having access to hive_conf_dir.
Check whether HiveServer2 has been started and is running. It uses port 10000 by default:
netstat -ntpl | grep 10000
If it is not running, start HiveServer2:
$HIVE_HOME/bin/hiveserver2
Also check the Hue configuration file, hue.ini. The hive_conf_dir property must be set under the [beeswax] section. If it is not set, add this property under [beeswax]:
hive_conf_dir=$HIVE_HOME/conf
Restart supervisor after making these changes.
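For reference, the relevant block in hue.ini might end up looking roughly like this; the host, port, and path are illustrative values, and hive_conf_dir is the one setting the steps above require:
[beeswax]
  hive_server_host=localhost
  hive_server_port=10000
  hive_conf_dir=/etc/hive/conf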

Logstash install error: can't get unique system GID (no more available GIDs)

I am trying to install Logstash with yum on a Red Hat VM. I already have the logstash.repo file set up according to the guide, and I ran
yum install logstash
but I get the following error after it downloads everything:
...
logstash-2.3.2-1.noarch.rpm | 72 MB 00:52
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
groupadd: Can't get unique system GID (no more available GIDs)
useradd: group 'logstash' does not exist
error: %pre(logstash-1:2.3.2-1.noarch) scriptlet failed, exit status 6
Error in PREIN scriptlet in rpm package 1:logstash-2.3.2-1.noarch
error: install: %pre scriptlet failed (2), skipping logstash-1:2.3.2-1
Verifying : 1:logstash-2.3.2-1.noarch 1/1
Failed:
logstash.noarch 1:2.3.2-1
Complete!
I can't find much information about this. Any suggestions?
groupadd determines GIDs for the creation of regular groups from the /etc/login.defs file.
On my CentOS 6 box, /etc/login.defs contains the following lines:
#
# Min/max values for automatic gid selection in groupadd
#
GID_MIN 500
GID_MAX 60000
For system accounts, add these two lines to your /etc/login.defs:
# System accounts
SYS_GID_MIN 100
SYS_GID_MAX 499
I updated the SYS_GID_MAX value and it worked for me.

Informatica error 1417 :: Task not yet registered with this service process

I am getting the following error while running a workflow in Informatica.
Session task instance [worklet.session] : [TM_6775 The master DTM process was unable to connect to the master service process to update the session status with the following message: error message [ERROR: The session run for [Session task instance [worklet.session]] and [ folder id = 206, workflow id = 16042, workflow run id = 65095209, worklet run id = 65095337, task instance id = 13272 ] is not yet registered with this service process.] and error code [1417].]
This error comes up randomly for many other sessions when they are run through the workflow as a whole. However, if I "start task" the failed task the next time, it runs successfully.
Any help is much appreciated.
Just an idea to try if you use versioning: check that everything is checked in correctly. If the mapping, workflow, or worklet is checked out, then you and Informatica will run different versions, which may cause the behaviour to differ when you start it manually.
Informatica will always use the checked-in version and you will always use the checked-out version.
