Get variable from PuppetMaster config from ruby functions - ruby

I wrote a simple function for my Puppet module. It makes some requests using the PuppetDB API, and I need the IP address of the PuppetDB server. Is there a correct way to get the PuppetMaster's connection settings for PuppetDB so I can obtain the PuppetDB server's address, or should I parse puppet.conf by hand?

Parsing puppetdb.conf by hand would be the least desirable way to go about it.
Looking at the code that loads the config, it should be possible to access it using
settings_value = Puppet::Util::Puppetdb.config['main'][setting_name]
for configuration options from the [main] section.
Looking at even more of the code, you should also be able to use
Puppet::Util::Puppetdb.server
Puppet::Util::Puppetdb.port
I'm not entirely sure whether those APIs are available from parser functions, but it's worth a shot.
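For illustration, a minimal sketch using the old-style parser function API might look like the following. The function name puppetdb_host and the file path are placeholders, and it assumes the puppetdb-terminus code (and with it Puppet::Util::Puppetdb) is on the master's load path:

# <your_module>/lib/puppet/parser/functions/puppetdb_host.rb (hypothetical name and path)
require 'puppet/util/puppetdb'

# Returns the PuppetDB host this master is configured to talk to;
# Puppet::Util::Puppetdb.port would give the port in the same way.
Puppet::Parser::Functions.newfunction(:puppetdb_host, :type => :rvalue,
  :doc => "Host name of the PuppetDB server used by this master.") do |args|
  Puppet::Util::Puppetdb.server
end

You could then call puppetdb_host() from a manifest, or reuse the same accessor directly inside your existing function.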

Related

How to enable proxy using azure::storage?

azure::storage provides a class operation_context which has set_proxy() and set_default_proxy() methods. However, I could not find any info on how to use them.
For example, how do I make sure that a cloud_blob_client created afterwards will use that operation_context?
I'm assuming that the static version of the method (i.e. set_default_proxy()) will affect all operation_context instances.
Will all cloud_blob_client instances use it?
Using web::http::client::http_client with a proxy is more straightforward. I can use the following code to configure the client with a proxy:
http_client_config config;
config.set_proxy(web_proxy(web_proxy::use_default));
http_client client(uri, config);
set_default_proxy() worked for me as long as web_proxy was configured with an explicit proxy address. That is, web_proxy::use_default and web_proxy::use_discovery did not work, although I had proxy settings defined for the current user on Windows.

existdb: identify database server

We have a number of (developer) existDb database servers, and some staging/production servers.
Each has its own configuration, and these differ slightly.
We need to select which configuration to load and use in queries.
The configuration is to be stored in an XML file within the repository.
However, when syncing content between the servers, a single burnt-in XML file is not sufficient, since it gets overwritten when copying from another server.
For this, we need the physical name of the actual database server.
The only function I found, request:get-server-name, is not quite stable, since a single eXist server can be accessed through a number of different URLs (localhost, intranet, or external). That leads to unnecessary duplication of the configuration, one copy for each external URL...
(Accessing local files in the file system is neither secure nor fast.)
How to get the physical name of the existDb server from XQuery?
I'm sorry, but I don't fully understand your question. Are you talking about eXist's default conf.xml, or your own configuration file that you need to store in a VCS repo? Should the XQuery be executed on one instance and trigger an event in all the others, or just some, or...? Without some code it is difficult to see why and when something gets overwritten.
You could try console:jmx-token, which does not vary depending on the URL (at least it shouldn't).
You might also find it much easier to use a Docker-based approach, either with multiple instances coordinated via docker-compose, or to keep the individual configs from interfering with each other when moving from dev to staging to production: https://github.com/duncdrum/exist-docker
If I understand correctly, you basically want to be able to get the hostname or the IP address of a server from XQuery. If the functions in the XQuery Request module are not doing as you wish, then another option would be to set a Java System Property when starting eXist-db. This system property could be the internal DNS name or IP of your server, for example: -Dour-server-name=server1.mydomain.com
From XQuery you could then read that Java System property using util:system-property("our-server-name").

Using maketorrent in libtorrent examples

So I am trying to build an application that uses libtorrent. However, before I start I would like to make sure that I have compiled the lib correctly and that I have a functioning environment for testing.
I am currently running a VM with opentracker and I try to connect using the example client in libtorrent.
First I start by creating a .torrent file using libtorrent (I am currently not sitting in front of a computer with libtorrent available so I might be remembering the exact commands a bit wrong):
maketorrent.exe dummy.txt -t "http://10.XXX.XXX.XXX/announce"
This gives me a .torrent file called a.torrent. Opening the file, everything looks OK: the bencoding is correct and the announce address is there.
Next I try to add it to the example client hoping it starts to seed:
client_test.exe a.torrent
Everything starts up OK, but no tracker is found. Then if I press t to show tracker information I see an error (maybe not the exact phrasing):
Alert: {null} unsupported URL protocol
OK, so maybe something is wrong with how I built libtorrent. So I get the Halite client instead, since that is also supposed to be built upon libtorrent. But there I have the same problem.
So I had a look at the code and found where this error message is generated. The code checks whether I am supplying an address using the HTTP or HTTPS protocol, which I am. So could it be that I am not able to use a bare IP address, or am I doing something wrong?
I found the problem. It was not a problem with the IP address or the torrent itself; it was a caching problem.
The first time I added the torrent I used http:\XXX.XXX.XXX.XXX instead of http://XXX.XXX.XXX.XXX, which didn't work. However, whatever change I made to the torrent file after that did not stick. It kept falling back to that original file until I removed the .resume folder.

How do I get ruby to honor a local hosts file?

I have an RSpec test suite that I use to test our internal and public-facing API. Usually all I have to do to test the service is set up my parameters (e.g. test URLs), and from there the tests connect to the required service and do their thing.
My question is, how do I get Ruby to honor my hosts file entries? In this specific scenario I'm trying to hit our pre-live servers, which use the same URLs as our live environment but obviously are on an entirely different IP cluster.
Unless you are doing some very low-level stuff, Ruby will not perform DNS name resolution by itself; it will simply call the appropriate OS API. So you need to figure out how to configure your operating system to use a local hosts file.
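If you want to double-check which address Ruby will actually connect to once the hosts file is edited, you can resolve the name through the OS resolver from a quick script. This is just a sanity-check sketch; api.example.com stands in for whatever hostname your test URLs use:

require 'socket'

# Addrinfo.getaddrinfo goes through the operating system's resolver
# (getaddrinfo), the same path the standard socket / Net::HTTP stack
# takes, so it reflects any entries added to the local hosts file.
Addrinfo.getaddrinfo('api.example.com', 443, nil, :STREAM).each do |ai|
  puts ai.ip_address
end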

Windows - Private hosts file for a certain environment

I have an application running on a dev server and connecting to a dev-db machine hosting an Oracle instance.
Now I'm deploying the application on a prod/prod-db pair of machines.
Since the dev-db URL is hardcoded inside the Java code, the just-copied binaries still point to dev-db. As a quick workaround I added a line to the Windows hosts file on prod so that dev-db now points to the prod-db IP address. It works, but I'm not very satisfied with this global-scope solution.
I was wondering if there is a way to make a hosts file "private" to a certain environment, i.e. only valid in the scope of my running application.
No, there's no way to do this, and it's a bad approach anyway.
You should instead fix the real problem, which is the hard-coding of the address inside your Java code. Put such things in a properties file, and use a different properties file for production.
