Opscode Nagios Cookbook Not pulling Cloud IP address - ruby

I am trying to generate the Nagios hosts.cfg file from the standard Opscode nagios cookbook. Using the standard recipe, I keep getting the same error from the following lines:
Chef::Mixin::Template::TemplateError (undefined method `[]' for nil:NilClass) on line #19:
17: if node['cloud'].nil? && !n['cloud'].nil?
18: ip = n['cloud']['public_ipv4'].include?('.') ? n['cloud']['public_ipv4'] : n['ipaddress']
19: elsif !node['cloud'].nil? && n['cloud']['provider'] != node['cloud']['provider']
20: ip = n['cloud']['public_ipv4'].include?('.') ? n['cloud']['public_ipv4'] : n['ipaddress']
21: else
22: ip = n['ipaddress']
The full file is here:
http://pastebin.com/FqcdUnSE
The comments in the original file were as follows:
<% # decide whether to use internal or external IP addresses for this node
# if the nagios server is not in the cloud, always use public IP addresses for cloud nodes.
# if the nagios server is in the cloud, use private IP addresses for any
# cloud servers in the same cloud, public IPs for servers in other clouds
# (where other is defined by node['cloud']['provider'])
# if the cloud IP is nil then use the standard IP address attribute. This is a work around
# for OHAI incorrectly identifying systems on Cisco hardware as being in Rackspace

Instead of trying to fix this, I did the following workaround:
define host {
  use        server
  address    <%= node['ipaddress'] %>
  host_name  <%= node[node['nagios']['host_name_attribute']] %>
  hostgroups <%= node['nagios']['server_role'] %>,<%= node['os'] %>
}
This works, though it bypasses the cloud IP selection entirely.
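For reference, the template's original branching can be made nil-safe rather than bypassed. Below is a minimal sketch of that logic as a plain Ruby helper (not the cookbook's actual code; `server` and `monitored` stand in for the template's `node` and `n`):

```ruby
# Nil-safe version of the template's IP-selection logic. The original
# crashed on its line 19 because the elsif branch read n['cloud']['provider']
# even when the searched node had no 'cloud' attribute at all.
def monitored_address(server, monitored)
  cloud     = monitored['cloud']
  public_v4 = cloud && cloud['public_ipv4']

  if server['cloud'].nil? && cloud
    # Nagios server is not in the cloud: prefer the node's public IPv4
    public_v4 && public_v4.include?('.') ? public_v4 : monitored['ipaddress']
  elsif server['cloud'] && cloud && cloud['provider'] != server['cloud']['provider']
    # Different cloud providers: use the public IPv4
    public_v4 && public_v4.include?('.') ? public_v4 : monitored['ipaddress']
  else
    # Same cloud, or no usable cloud data: fall back to the standard attribute
    monitored['ipaddress']
  end
end
```

With this shape, a cloud Nagios server monitoring a non-cloud node (the case that crashed the template) simply falls through to `monitored['ipaddress']`.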

Related

Chef: Re-using a previously registered node (AMI) and bootstrapping it - not working

Our team is trying to scale out our current Elastic Search cluster. In doing this, we took an AMI of a current elastic node and used that AMI to create the potential 4th elastic node. In the past, Chef was used to configure new elastic nodes. However, the designer of those recipes left our team and we are at a loss. When I try to bootstrap the new host, I get the below error:
Recipe Compile Error in /var/chef/cache/cookbooks/av_elastic/recipes/elastic_cluster.rb
================================================================================
Net::HTTPServerException
------------------------
400 "Bad Request"
Cookbook Trace:
---------------
/var/chef/cache/cookbooks/av_elastic/recipes/elastic_cluster.rb:134:in `from_file'
Relevant File Content:
----------------------
/var/chef/cache/cookbooks/av_elastic/recipes/elastic_cluster.rb:
127: message "Block devices available to Elasticsearch: #{devices}"
128: level :warn
129: end
130:
131: ## Gather Available Nodes within same es-cluster-name, Chef Environment, and elastic_cluster role. Exclude marvel nodes
132: elasticsearch_cluster_nodes = Array.new
133: elasticsearch_cluster_node_names = Array.new
134>> search(:node, "chef_environment:#{node.chef_environment} AND roles:*elastic_cluster AND es-cluster-name:#{node['es-cluster-name']} NOT roles:*elastic_marvel").each do |node|
135: elasticsearch_cluster_nodes << node
136: elasticsearch_cluster_node_names << node['hostname']
137: end
138:
Using the debug option, I can see this in the chef-client output:
[2020-11-24T15:58:00+00:00] DEBUG: ---- HTTP Response Body ----
[2020-11-24T15:58:00+00:00] DEBUG: {"error":["invalid search query: 'chef_environment:production AND roles:*elastic_cluster AND es-cluster-name: NOT roles:*elastic_marvel'"]}
[2020-11-24T15:58:00+00:00] DEBUG: ---- End HTTP Response Body -----
[2020-11-24T15:58:00+00:00] DEBUG: Chef::HTTP calling Chef::HTTP::ValidateContentLength#handle_response
[2020-11-24T15:58:00+00:00] DEBUG: Expected JSON response, but got content-type ''
Since we are re-using a previous host, I've already updated the /etc/hosts and /etc/hostname files and removed the /etc/chef/client.pem file. I suspect the issue is with authentication, but I can't prove it. I also suspect something left behind on this host still identifies it as the original node (the one the AMI was created from).
The current running elastic nodes, that are using the same recipes as the new host, are all working/running per design. Any ideas on how to fix? Thank you in advance
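One hint from the debug output above: the rejected query contains `es-cluster-name:` with no value after the colon, which suggests the new node is missing the `es-cluster-name` attribute rather than failing authentication. A minimal sketch of building that query with a guard (plain Ruby with illustrative names, not the cookbook's code):

```ruby
# Builds the search query from the recipe, failing loudly when the
# es-cluster-name attribute is missing. An empty value produces exactly
# the "invalid search query" 400 seen in the chef-client debug log.
def cluster_search_query(environment, cluster_name)
  if cluster_name.nil? || cluster_name.to_s.strip.empty?
    raise ArgumentError, "node attribute 'es-cluster-name' is not set"
  end
  "chef_environment:#{environment} AND roles:*elastic_cluster " \
    "AND es-cluster-name:#{cluster_name} NOT roles:*elastic_marvel"
end
```

If the guard fires on the new host, the fix is to set the attribute (via role, environment, or node data) the way the existing cluster members have it set.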

JMeter - Listen to a port on a given IP

I want to run a JMeter test that listens on a port at a given IP and prints the messages sent to that port. I have tried using this:
SocketAddress inetSocketAddress = new InetSocketAddress(InetAddress.getByName("<client ipAddress>"), <port number>)
def server = new ServerSocket()
server.bind(inetSocketAddress)
while (true) {
    server.accept { socket ->
        log.info('Someone is connected')
        socket.withStreams { input, output ->
            def line = input.newReader().readLine()
            log.info('Received message: ' + line)
        }
        log.info('Connection processed')
    }
}
But this gives me the error "Cannot assign requested address: JVM_Bind".
Is there an alternative way to approach this? Or what changes do I need to make for the current approach to work?
The code itself is correct and should work as-is; the error points at the environment rather than the script. Evidence:
as per the BindException documentation
Signals that an error occurred while attempting to bind a socket to a local address and port. Typically, the port is in use, or the requested local address could not be assigned.
So I can think of two possibilities:
Your <client ipAddress> is not correct, or is not an address assigned to the machine running the script, so it cannot be bound.
Something is already listening on the <port number>. Two applications cannot listen on the same port: the first bind succeeds and the second fails.
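The same constraint can be demonstrated outside the JVM. A small Ruby sketch (an illustration of the principle, not JMeter code): a listening socket can only bind to an address that is assigned to a local interface.

```ruby
require 'socket'

# Attempts to bind a listening socket to the given address.
# Port 0 asks the OS for any free port, so only the address matters.
def try_bind(address)
  server = TCPServer.new(address, 0)
  server.close
  'bound ok'
rescue SystemCallError => e
  "bind failed: #{e.class}"  # e.g. Errno::EADDRNOTAVAIL
end

puts try_bind('127.0.0.1')  # loopback is local, so this binds
puts try_bind('192.0.2.1')  # TEST-NET address, not assigned locally
```

`Errno::EADDRNOTAVAIL` here is the direct analogue of the JVM's "Cannot assign requested address: JVM_Bind".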
More information:
Fixing java.net.BindException: Cannot assign requested address: JVM_Bind in Tomcat, Jetty
Apache Groovy - Why and How You Should Use It

Setting address of whois service for ruby whois gem

Using the ruby whois gem, how do I set the server address of the whois service?
When I set bind_host, I get an error:
> whois_client = Whois::Client.new(bind_host: "192.0.47.59", bind_port: 43)
=> #<Whois::Client:0x00000008188e7e50 #timeout=10, #settings={:bind_host=>"192.0.47.59", :bind_port=>43}>
> record = whois_client.lookup('wandajackson.com')
Whois::ConnectionError: Errno::EADDRNOTAVAIL: Can't assign requested address - bind(2) for "192.0.47.59" port 43
from (irb):4
I'm pretty sure bind_host doesn't refer to the host used for the whois lookup; it refers to the local adapter that the client socket binds to on the machine running your code. By default it binds to 0.0.0.0, i.e. all adapters on the local server.
If you want the whois gem to use a custom server address for whois lookups, it appears you have to specify it in one of the following ways:
# Define a server for the .com TLD
Whois::Server.define :tld, "com", "your.whois.server.address"
Whois.whois("google.com")

# Define a new server for a range of IPv4 addresses
Whois::Server.define :ipv4, "10.0.0.0/8", "your.whois.server.address"
Whois.whois("10.0.0.1")

# Define a new server for a range of IPv6 addresses
Whois::Server.define :ipv6, "2001:2000::/19", "your.whois.server.address"
Whois.whois("2001:2000:85a3:0000:0000:8a2e:0370:7334")
These examples were taken from https://www.rubydoc.info/gems/whois/Whois/Server.

Chef::Exceptions::ValidationFailed error during EncryptedDataBagItem.load due to supposed regex mismatch

I'm bootstrapping a node with a cookbook that worked fine with chef-client as of November. Unfortunately, the following code:
45: #Configure PostgreSQL cluster -- create pertinent databases, users, and groups based on uploaded, decrypted shell here-document.
47>> here_doc_name = Chef::EncryptedDataBagItem.load("database_configs", "tlcworx_#{node["tlcworx_db"]["environment"]}")["filename"]
48: here_doc_content = Chef::EncryptedDataBagItem.load("database_configs", "tlcworx_#{node["tlcworx_db"]["environment"]}")["content"]
49:
50: open("#{node["tlcworx_db"]["tmp_dir"]}/#{here_doc_name}", 'w') { |f| f.puts here_doc_content }
has raised the following error, which halts the bootstrap:
Chef::Exceptions::ValidationFailed: Option data_bag's value {"encrypted_data"=>"PffgOkpIpdoEJO8khrUOUQwqv2/vqrtzOf1U/z/a5xD4KqSH2/CkD1zHndzW\nwJL1\n", "iv"=>"d/kiiPRQWQoKBTU5WF8NPw==\n", "version"=>1, "cipher"=>"aes-256-cbc"} does not match regular expression /^[\-[:alnum:]_]+$/
Obviously, I'm supplying the same --secret-file as I did back then via the knife CLI argument. Running knife data bag edit database_configs tlcworx_uat --secret-file /path/to/secret.pem decrypts the content correctly and doesn't error out. I've never seen this error before; the other instances of it I've found involve direct CLI operations where the data bag item isn't named, unlike this case. Again, this happens only during bootstrap, when the server's chef-client is communicating with the remote Chef server.
I was hoping someone could provide some insight as to what could be causing the error. Chef client version is 12.7.2.
Thanks in advance for any help on the matter!
For the future: we're pretty sure this is a side effect of a bug where DataBagItem.to_hash mutates its data. It will be fixed in the next release of Chef.
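Independently of that bug, the snippet in the question loads the same encrypted item twice; loading it once and reusing the hash halves the server round-trips. A runnable sketch of the pattern, with a stand-in class (and hypothetical data) replacing Chef::EncryptedDataBagItem so it works outside Chef:

```ruby
# Stand-in for Chef::EncryptedDataBagItem: maps [bag, item] pairs to
# already-decrypted hashes. The bag/item names mirror the question; the
# filename/content values are made up for illustration.
class FakeEncryptedDataBagItem
  ITEMS = {
    ['database_configs', 'tlcworx_uat'] =>
      { 'filename' => 'configure.sh', 'content' => "#!/bin/sh\necho setup" }
  }.freeze

  def self.load(bag, item)
    ITEMS.fetch([bag, item])  # raises KeyError for unknown items
  end
end

# One load, two reads, instead of two identical load calls.
item             = FakeEncryptedDataBagItem.load('database_configs', 'tlcworx_uat')
here_doc_name    = item['filename']
here_doc_content = item['content']
```

In the real recipe the same shape applies: assign the result of a single `Chef::EncryptedDataBagItem.load` call to a local variable, then index it for `"filename"` and `"content"`.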

Anyone get working FTPS/FTP::TLS Under Ruby 1.9.3?

I've tried several gems and examples and cannot get this working. The most promising gems were double-bag-ftps and FTPFXP: I can connect, but I cannot transfer files in either active or passive mode.
sample code with ftpfxp:
@conn2 = Net::FTPFXPTLS.new
@conn2.passive = true
@conn2.debug_mode = true
@conn2.connect('192.168.0.2', 990)
@conn2.login('myuser2', 'mypass2')
@conn2.chdir('/')
@conn2.get("data.txt")
@conn2.close
sample code with double-bag:
ftps = DoubleBagFTPS.new
ftps.ssl_context = DoubleBagFTPS.create_ssl_context(:verify_mode => OpenSSL::SSL::VERIFY_NONE)
ftps.connect('192.168.0.2')
ftps.login('myuser2', 'mypass2')
ftps.chdir('/')
ftps.get("data.txt")
ftps.close
sample error with double-bag:
~/.rbenv/versions/1.9.3-p385/lib/ruby/gems/1.9.1/gems/double-bag-ftps-0.1.0/lib/double_bag_ftps.rb:148:in `connect': Broken pipe - SSL_connect (Errno::EPIPE)
Sample error with ftpfxp:
~/.rbenv/versions/1.9.3-p385/lib/ruby/1.9.1/net/ftp.rb:206:in `initialize': No route to host - connect(2) (Errno::EHOSTUNREACH)
Any recommendation that does not involve calling external commands?
Thanks.
I've solved the issue. The server was returning a private IP address when connecting in passive mode with explicit TLS, so I added a check to Double-Bag-FTPS: if the IP returned by the server is private, fall back to the original public IP address.
GitHub Pull request
So if someone has the same issue, maybe this is the answer. I hope it helps someone else :)
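The private-IP fallback described above can be sketched in a few lines (illustrative names; the actual change lives in the Double-Bag-FTPS pull request):

```ruby
require 'ipaddr'

# RFC 1918 private IPv4 ranges.
PRIVATE_RANGES = %w[10.0.0.0/8 172.16.0.0/12 192.168.0.0/16]
                 .map { |cidr| IPAddr.new(cidr) }.freeze

# Returns the host to use for the passive-mode data connection: if the
# address from the server's PASV reply is private (and therefore likely
# unreachable from outside), reuse the host we originally connected to.
def passive_host(pasv_host, original_host)
  private_ip = PRIVATE_RANGES.any? { |range| range.include?(IPAddr.new(pasv_host)) }
  private_ip ? original_host : pasv_host
end
```

For example, a PASV reply of 192.168.0.5 while connected to a public address would be replaced by that public address, which is exactly the failure mode described in the answer.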
