Puppet functions exec Windows command line - windows

Currently I am trying to automate the start mode of Windows server services. I tried to use puppetlabs-registry, but realized that the module didn't work as I expected.
Basically, I have a list of Windows services that I need to update on each server, but on some servers a given service might not exist. puppetlabs-registry will simply create the key if it does not exist, which is not the expected behaviour. It should instead work as follows:
Check whether the service exists on the server
If it does, update the start mode as specified in the manifest/Hiera
If it does not exist, do nothing and skip to the next service
From what I know, it seems the only way to check whether the service key exists is with a custom function. I already tried to write a custom function using win32/registry, but was unsuccessful, getting errors such as "Win32API not supported". Another way I can think of is using the reg command line tool to check whether the key exists. Here is the Puppet custom function:
module Puppet::Parser::Functions
  newfunction(:check_winservice_exist, :type => :rvalue) do |args|
    unless args.length > 0
      raise Puppet::ParseError, "check_winservice_exist(): wrong number of arguments (#{args.length}; must be > 0)"
    end
    service_name = args[0]

    command = "reg query HKEY_LOCAL_MACHINE\\SYSTEM\\CurrentControlSet\\Services\\#{service_name} /f DisplayName"
    # system returns true if the command exited with status 0, false otherwise
    result = system command
    result
  end
end
When I run the simplified Ruby script from the command line, it works and returns the expected value. But when I use the above script as a Puppet custom function, it always returns empty.
This is my first time writing a Puppet custom function, so I am not sure what I did wrong here. Please advise whether there is another alternative I can use to resolve the issue, or what I did wrong in the function script.
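One detail that may matter here: system returns only true, false, or nil and sends the command's output to the agent's stdout rather than into the function's return value. Below is a minimal sketch of a variant that captures the exit status explicitly with backticks; this is an assumption about the cause, not a verified fix:

# inside the function body, after service_name = args[0]
command = "reg query \"HKEY_LOCAL_MACHINE\\SYSTEM\\CurrentControlSet\\Services\\#{service_name}\" /f DisplayName 2>&1"
output = `#{command}`   # backticks capture the command output as a String (could be logged for debugging)
$?.success?             # last expression: a real boolean indicating whether the key was found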

I managed to resolve this issue by using a custom fact, as suggested by Matt. Just sharing the custom fact script that I used. It might not be perfect, as I am still not really proficient in Ruby.
require 'win32/registry'

Facter.add(:winservices) do
  confine :kernel => "windows"
  setcode do
    keyname = 'SYSTEM\CurrentControlSet\Services'
    access  = Win32::Registry::KEY_ALL_ACCESS
    arr = []
    winservices_list = []
    Win32::Registry::HKEY_LOCAL_MACHINE.open(keyname, access) do |reg|
      # collect every subkey name under Services
      reg.each_key { |key, wtime| arr.push key }
      arr.each do |service|
        service_key = "SYSTEM\\CurrentControlSet\\Services\\#{service}"
        begin
          Win32::Registry::HKEY_LOCAL_MACHINE.open(service_key, access) do |service_reg|
            # only keep services that actually have a Start value
            service_reg['Start']
            winservices_list.push service
          end
        rescue
          # key has no Start value or cannot be opened; skip it
        end
      end
    end
    winservices_list
  end
end
And it works simply by checking whether the service name is in the array or not:
if $service_name in $facts['winservices'] {
  service { "${service_name}":
    provider => 'windows',
    enable   => $start_real,
  }
}

Related

Changing key trust level (validity) with GPGME

GPGME provides information about a key's trust level in the owner_trust field, which is of type gpgme_validity_t. However, I could not find a function in the documentation or the gpgme.h header file that allows me to change the validity of a key.
The GnuPG command line tool certainly allows changing the trust level of a key:
$ gpg --edit-key alice@example.com
> trust
Does the GPGME library even support changing the owner_trust field? If so, how do I use it?
I am using the newest version of GPGME, which is 1.16.0 (commit hash 1021c8645555502d914afffaa3707609809c9459).
It should be possible to use gpgme_op_interact to accomplish this.
The following demonstrates the process with the Python bindings, but analogous code should be possible to write with the C API.
import gpg

def trust_at(level):
    done = False

    def interact_cb(status, arg):
        nonlocal done
        if status in ('KEY_CONSIDERED', 'GOT_IT', ''):
            return
        if status == 'GET_LINE':
            if arg == 'keyedit.prompt':
                if done:
                    return 'quit'
                done = True
                return 'trust'
            if arg == 'edit_ownertrust.value':
                return level
        # needed if we set trust level to 5
        if (status, arg) == ('GET_BOOL', 'edit_ownertrust.set_ultimate.okay'):
            return 'y'
        assert False

    return interact_cb

with gpg.Context() as gnupg:
    key = gnupg.get_key(FINGERPRINT)
    gnupg.interact(key, trust_at('4'))

Save Google Cloud Speech API operation(job) object to retrieve results later

I'm struggling to use the Google Cloud Speech API with the Ruby client (v0.22.2).
I can execute long running jobs and can get results if I use
job.wait_until_done!
but this locks up a server for what can be a long period of time.
According to the API docs, all I really need is the operation name(id).
Is there any way of creating a job object from the operation name and retrieving it that way?
I can't seem to create a functional new job object such that I can use the id from @grpc_op.
What I want to do is something like:
speech = Google::Cloud::Speech.new(auth_credentials)
job = speech.recognize_job file, options
saved_job = job.to_json #Or some element of that object such that I can retrieve it.
Later, I want to do something like....
job_object = Google::Cloud::Speech::Job.new(saved_job)
job_object.reload!
job_object.done?
job_object.results
Really hoping that makes sense to somebody.
I'm struggling quite a bit with Google's Ruby clients, because everything seems to be translated into objects that are much more complex than what is required to use the API.
Is there some trick that I'm missing here?
You can monkey-patch this functionality into the version you are using, but I would advise upgrading to google-cloud-speech 0.24.0 or later. With those more current versions you can use Operation#id and Project#operation to accomplish this.
require "google/cloud/speech"
speech = Google::Cloud::Speech.new
audio = speech.audio "path/to/audio.raw",
encoding: :linear16,
language: "en-US",
sample_rate: 16000
op = audio.process
# get the operation's id
id = op.id #=> "1234567890"
# construct a new operation object from the id
op2 = speech.operation id
# verify the jobs are the same
op.id == op2.id #=> true
op2.done? #=> false
op2.wait_until_done!
op2.done? #=> true
results = op2.results
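Building on that, here is a minimal sketch of the save-now-fetch-later flow the question asks about: persist only the id string and rebuild the operation in a later process. The plain-file storage is an assumption for illustration; any database column would do:

require "google/cloud/speech"

# process 1: start the job and persist only its id
speech = Google::Cloud::Speech.new
op = speech.audio("path/to/audio.raw",
                  encoding: :linear16, language: "en-US", sample_rate: 16000).process
File.write("operation.id", op.id)

# process 2, possibly much later: rebuild the operation from the saved id and poll it
speech = Google::Cloud::Speech.new
op = speech.operation File.read("operation.id").strip
if op.done?
  results = op.results
  p results   # inspect or persist the recognition results
end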
Update: since you can't upgrade, you can monkey-patch this functionality into an older version using the workaround described in GoogleCloudPlatform/google-cloud-ruby#1214:
require "google/cloud/speech"
# Add monkey-patches
module Google
Module Cloud
Module Speech
class Job
def id
#grpc.name
end
end
class Project
def job id
Job.from_grpc(OpenStruct.new(name: id), speech.service).refresh!
end
end
end
end
end
# Use the new monkey-patched methods
speech = Google::Cloud::Speech.new

audio = speech.audio "path/to/audio.raw",
                     encoding: :linear16,
                     language: "en-US",
                     sample_rate: 16000

job = audio.recognize_job

# get the job's id
id = job.id #=> "1234567890"

# construct a new job object from the id
job2 = speech.job id

# verify the jobs are the same
job.id == job2.id #=> true

job2.done? #=> false
job2.wait_until_done!
job2.done? #=> true

results = job2.results
OK, I have a very ugly way of solving the issue.
Get the id of the operation from the job object:
operation_id = job.grpc.grpc_op.name
Get an access token so you can use the REST API manually:
require "stringio"
require "googleauth"

json_key_io = StringIO.new(ENV["GOOGLE_CLOUD_SPEECH_JSON_KEY"])
authorisation = Google::Auth::ServiceAccountCredentials.make_creds(
  json_key_io: json_key_io,
  scope: "https://www.googleapis.com/auth/cloud-platform"
)
token = authorisation.fetch_access_token!
Make an API call to retrieve the operation details.
The response will include "done" => true once the results are in, along with the results themselves. If "done" => true isn't there yet, you'll have to poll again later until it is.
require "httparty"

HTTParty.get(
  "https://speech.googleapis.com/v1/operations/#{operation_id}",
  headers: { "Authorization" => "Bearer #{token['access_token']}" }
)
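To make the polling step concrete, here is a small sketch of checking the parsed JSON body returned by that call. The exact response layout ("response" => { "results" => [...] }) is an assumption based on the v1 REST operation format, so verify it against a real response:

response = HTTParty.get(
  "https://speech.googleapis.com/v1/operations/#{operation_id}",
  headers: { "Authorization" => "Bearer #{token['access_token']}" }
)

if response["done"]
  # once finished, the recognition results live under the "response" key
  results = response["response"]["results"]
  p results
else
  # not finished yet; poll again later (e.g. from a scheduled job)
end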
There must be a better way of doing that. It seems such an obvious use case for the Speech API.
Anyone from Google in the house who can explain a much simpler/cleaner way of doing it?

How to create publicly readable object store containers with ruby openstack gem?

I have tried to create publicly readable OpenStack object store containers like this:
os = OpenStack::Connection.create(...)
container = os.create_container(container_name)
container.set_metadata({'X-Container-Read' => '.r:*'})
Using my code above, the newly created containers are private.
What is the correct way to create containers with public read permissions with the ruby openstack gem?
You can try the following approach: redefine the create_container method so that the container is created with the right headers.
class MyStack < OpenStack::Swift::Connection
  def create_container(containername)
    super
    # re-issue the PUT with the public read/write headers set
    path = "/#{URI.encode(containername.to_s)}"
    @connection.req("PUT", path, :headers => { "Content-Length" => "0", "X-Container-Read" => ".r:*", "X-Container-Write" => ".r:*" })
    OpenStack::Swift::Container.new(self, containername)
  end
end
These "X-Container-Read" => ".r:*", "X-Container-Write" => ".r:*" header value you need to set.
or
container.set_metadata({"X-Container-Read" => ".r:*", "X-Container-Write" => ".r:*"})
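Putting that together, a brief usage sketch (assuming os is an authenticated OpenStack::Connection as in the question; the container name is made up):

os = OpenStack::Connection.create(...)  # credentials elided, as in the question
container = os.create_container("my-public-container")
# make the container world-readable (drop X-Container-Write unless public writes are really wanted)
container.set_metadata({ "X-Container-Read" => ".r:*", "X-Container-Write" => ".r:*" })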
Here's what I ended up doing:
module PubliclyReadableContainerMonkeyPatch
  def create_publicly_readable_container(containername)
    raise OpenStack::Exception::InvalidArgument.new("Container name cannot contain '/'") if containername.match("/")
    raise OpenStack::Exception::InvalidArgument.new("Container name is limited to 256 characters") if containername.length > 256

    path = "/#{URI.encode(containername.to_s)}"
    @connection.req("PUT", path, :headers => { "Content-Length" => "0", "X-Container-Read" => ".r:*" })
    OpenStack::Swift::Container.new(self, containername)
  end
end

OpenStack::Swift::Connection.include PubliclyReadableContainerMonkeyPatch

os = OpenStack::Connection.create(...)
container = os.create_publicly_readable_container(container_name)
Works for me. :)

Is it reasonable to use Resque (Ruby) to manage external long-running commands (and log tasks)?

I have to run bash heavy-job.sh <data-num> (which takes 0.5~2 days) frequently on my computer to process data located at ~/a/data/num. The script calls a few sub-processes sequentially and writes a log to ~/a/result/num.log. I have done this manually until now.
I wanted to visualize the processed tasks and their status (success or fail), etc. as an HTML table. I wrote a simple Sinatra app to render a table that shows
the list of ~/a/data/num to be processed
whether ~/a/result/num.log exists or not (process not launched / processing / done)
its status (whether the log file contains the word "error" or not)
I found that it would be convenient if I could launch bash heavy-job.sh <data-num> from the Sinatra app, log the tasks (and info like time, date, etc.) and their args (heavy-job takes some optional args), and show them as an HTML table.
So I need something that manages jobs and logs them to files (or a db).
First I wrote the code below as a test (not yet integrated with my system), but later I found that Resque is what I wanted. I am a beginner and not sure whether my decision is reasonable.
My questions are:
Is it reasonable to use Resque to manage external long-running commands (and log tasks)?
Or should I use another tool (not necessarily a Ruby tool)?
(extra) Should the task manager and the Sinatra app work separately (and communicate with each other over REST or something), or not?
The jobs are not critical, since I can retry tasks manually later if they fail.
I am not good at English and my question may be misleading. I appreciate any help :).
class TaskSpawn
  def initialize
    @pids = []
  end

  def spawn(command, options = {})
    # run each job in its own process group so the whole group can be managed
    opt = { :pgroup => true }.merge(options)
    @pids << Kernel.spawn(command, opt)
  end

  def pids
    @pids.clone
  end

  def waitany_nohang
    delete_idx = nil
    ret = nil
    @pids.each_with_index do |p, idx|
      pid, status = Process.waitpid2(p, Process::WNOHANG)
      unless pid.nil?
        delete_idx = idx
        ret = [pid, status]
        break
      end
    end
    if delete_idx
      @pids.delete_at(delete_idx)
      ret
    else
      # no task finished
      nil
    end
  end

  def waitall
    ret = Process.waitall
    raise "internal error" if ret.size != pids.size
    ret
  end
end
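For comparison, here is a minimal sketch of what the spawn-and-log step could look like as a Resque job. It is only an illustration, not a drop-in replacement: the HeavyJob class, queue name and log-line format are made up, and it assumes the resque gem with a Redis-backed worker running.

require "resque"

# Hypothetical Resque job wrapping the external command.
class HeavyJob
  @queue = :heavy_jobs

  def self.perform(data_num, extra_args = [])
    log_path = File.expand_path("~/a/result/#{data_num}.log")
    started  = Time.now
    # run the external script, sending stdout and stderr to the per-task log
    ok = system("bash", "heavy-job.sh", data_num.to_s, *extra_args,
                [:out, :err] => log_path)
    File.open(log_path, "a") { |f| f.puts "#{ok ? 'done' : 'error'} (started #{started}, finished #{Time.now})" }
    raise "heavy-job.sh failed for #{data_num}" unless ok  # a raise marks the job as failed in Resque
  end
end

# From the Sinatra app, enqueue instead of spawning directly:
# Resque.enqueue(HeavyJob, 42, ["--some-optional-arg"])

One design note: with Resque the Sinatra app and the workers already run as separate processes and communicate through Redis, which speaks to the extra question about keeping the task manager and the web app apart.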

Ruby: Dynamically defining classes based on user input

I'm creating a library in Ruby that allows the user to access an external API. That API can be accessed via either a SOAP or a REST API. I would like to support both.
I've started by defining the necessary objects in different modules. For example:
soap_connection = Library::Soap::Connection.new(username, password)
response = soap_connection.create Library::Soap::LibraryObject.new(type, data, etc)
puts response.class # Library::Soap::Response

rest_connection = Library::Rest::Connection.new(username, password)
response = rest_connection.create Library::Rest::LibraryObject.new(type, data, etc)
puts response.class # Library::Rest::Response
What I would like to do is allow the user to specify that they only wish to use one of the APIs, perhaps something like this:
Library::Modes.set_mode(Library::Modes::Rest)
rest_connection = Library::Connection.new(username, password)
response = rest_connection.create Library::LibraryObject.new(type, data, etc)
puts response.class # Library::Response
However, I have not yet discovered a way to dynamically set, for example, Library::Connection based on the input to Library::Modes.set_mode. What would be the best way to implement this functionality?
Murphy's law prevails; I found an answer right after posting the question to Stack Overflow.
This code seems to have worked for me:
module Library
  class Modes
    Rest = 1
    Soap = 2

    def self.set_mode(mode)
      case mode
      when Rest
        Library.const_set "Connection", Class.new(Library::Rest::Connection)
        Library.const_set "LibraryObject", Class.new(Library::Rest::LibraryObject)
      when Soap
        Library.const_set "Connection", Class.new(Library::Soap::Connection)
        Library.const_set "LibraryObject", Class.new(Library::Soap::LibraryObject)
      else
        raise ArgumentError, "#{mode} is not a valid Library::Mode"
      end
    end
  end
end
A quick test:
Library::Modes.set_mode(Library::Modes::Rest)
# Library::Connection is an anonymous subclass of the REST connection
puts Library::Connection.ancestors.include?(Library::Rest::Connection) # true
c = Library::Connection.new(username, password)
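A variation on the same idea, in case the anonymous subclasses get in the way (for example when comparing classes directly): point the constants at the existing implementation classes instead of wrapping them. This is an alternative sketch, not part of the answer above, and it assumes the module layout from the question:

module Library
  def self.mode=(mode)
    impl = case mode
           when :rest then Library::Rest
           when :soap then Library::Soap
           else raise ArgumentError, "#{mode} is not a valid Library mode"
           end
    # alias the chosen implementation's classes directly
    # (changing the mode again will print an "already initialized constant" warning)
    const_set(:Connection, impl::Connection)
    const_set(:LibraryObject, impl::LibraryObject)
  end
end

Library.mode = :rest
Library::Connection.equal?(Library::Rest::Connection) #=> true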
