Is it possible to generate a new Zabbix event from a Ruby script? I was advised to use the zabbix_sender utility, but I couldn't find any example.
Also, I couldn't find any API related to event creation.
First of all, create an item of type Zabbix trapper; then you can send values to its key with zabbix_sender. You can send a single value, or a batch of collected data with timestamps. Here are the docs.
Alternatively, you can write your own Zabbix trap sender; the protocol is quite simple. I did the same in Python, with no external commands.
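A rough Ruby sketch of that protocol (server, host and item key are placeholders; the framing is the standard "ZBXD" header followed by a JSON body):

require 'socket'
require 'json'

def zabbix_send(server, host, key, value, port = 10051)
  payload = JSON.generate(
    request: 'sender data',
    data: [{ host: host, key: key, value: value.to_s }]
  )

  TCPSocket.open(server, port) do |sock|
    # Framing: "ZBXD\x01", 8-byte little-endian payload length, then the JSON body
    sock.write("ZBXD\x01")
    sock.write([payload.bytesize].pack('Q<'))
    sock.write(payload)

    sock.read(5)                     # skip the "ZBXD\x01" header of the reply
    len = sock.read(8).unpack1('Q<') # reply length
    JSON.parse(sock.read(len))       # => {"response"=>"success", "info"=>"..."}
  end
end

zabbix_send('zabbix_server', 'monitored-host', 'my.trapper.key', 42)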
The Zabbix API does not allow item value updates by design.
I have investigated a couple of options* and my preferred way to update a Zabbix item value from Ruby is the approach found in miyucy's gist.
Example with a simple error handler:
s = Zabbix::Sender.new 'zabbix_server', 10051
zabbix_resp = s.send('host', item_key, value)
# "info" looks like "processed: 1; failed: 0; total: 1; seconds spent: ...",
# so the second ";"-separated field tells you whether the value was rejected.
if zabbix_resp["response"] != "success" || zabbix_resp["info"].split(';')[1] != " failed: 0"
  puts "error: zabbix responded with #{zabbix_resp} when trying to update #{item_key}"
end
*One example of an alternative way: calling the zabbix_sender binary.
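A rough sketch of that (flag names per the zabbix_sender man page; server, host and key are placeholders):

require 'open3'

out, status = Open3.capture2e(
  'zabbix_sender',
  '-z', 'zabbix_server',  # Zabbix server or proxy
  '-s', 'monitored-host', # host name as configured in Zabbix
  '-k', 'my.trapper.key', # trapper item key
  '-o', '42'              # value to send
)
puts "zabbix_sender failed: #{out}" unless status.success?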
Specifically, I want to filter logs and send a warning email.
First I tried ommail, but unfortunately that module only supports mail servers that do not require authentication, and mine does.
So I tried omprog instead: I wrote a Python script that logs on to my mail server; it receives one parameter, which is the log line, and sends it as the mail body.
Then I hit the problem: I cannot pass the log to my script. If I try it like this, $msg is treated as a literal string:
if $fromhost-ip == "x.x.x.x" then {
    action(type="omprog"
           binary="/usr/bin/python3 /home/elancao/Python/sendmail.py $msg")
}
I searched the official docs and found this sample:
module(load="omprog")
action(type="omprog"
       binary="/path/to/log.sh p1 p2 --param3=\"value 3\""
       template="RSYSLOG_TraditionalFileFormat")
but in the sample they pass a literal string "p1", not a dynamic parameter.
Can you please help? Thanks a lot!
The expected use of omprog is for your program to read stdin, where it will find the full default RSYSLOG_FileFormat template data (date, host, tag, msg). This is useful, as it means you can write your program so that it is started only once and then loops, handling messages as they arrive.
This cuts down on the overhead of restarting your program for each message, and makes it react faster. However, if you prefer, your program can exit after reading one line, and then rsyslog will restart it for the next message. (You may want to implement confirmMessages=on).
If you just want the msg part as data, you can use template=... in the action to specify your own minimal template, for example:
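A minimal sketch (the template name is illustrative; the script is assumed to read one line per message from stdin in a loop):

module(load="omprog")

template(name="MsgOnly" type="string" string="%msg%\n")

if $fromhost-ip == "x.x.x.x" then {
    action(type="omprog"
           binary="/usr/bin/python3 /home/elancao/Python/sendmail.py"
           template="MsgOnly")
}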
If you really must have the msg as an argument, you can use the legacy filter syntax:
^program;template
This will run program once for each message, passing it as argument the output of the template. This is not recommended.
If the omprog script is doing nothing, or is not saving anything to a file, the problem is that:
rsyslog sends the full message to that script, so you need to define or use a template;
your script needs to read from STDIN and return an exit status.
Example in Perl with omprog:
# my $input = join('-', @ARGV);  # not working; omprog does not pass the message in ARGV (I lost 5 hours of my life on this)
my $input = <STDIN>;             # this is what you need
Hope this is what the Perl/Python/rsyslog community needs.
I have a collection of Person documents, stored in a legacy MongoDB server (2.4) and accessed with the mongoid gem via the Ruby MongoDB driver.
If I perform a
Person.where(email: 'some.existing.email@server.tld').first
I get a result (let's assume I store the id in a variable called "the_very_same_id_obtained_above")
If I perform a
Person.find(the_very_same_id_obtained_above)
I get a
Mongoid::Errors::DocumentNotFound
exception
If I use the JavaScript syntax to perform the query, the result is found:
Person.where("this._id == #{the_very_same_id_obtained_above}").first # this works!
I'm currently trying to migrate the data to a newer version: I am mongorestore-ing onto Amazon DocumentDB (MongoDB 3.6 compatible) for testing, and the issue remains.
One thing I noticed is that those object ids are peculiar:
5ce24b1169902e72c9739ff6 this works anyway
59de48f53137ec054b000004 this requires the trick
The run of zeroes toward the end of the id seems to be highly correlated with the problem (I have no idea why).
That's the default:
# Raise an error when performing a #find and the document is not found.
# (default: true)
raise_not_found_error: true
Source: https://docs.mongodb.com/mongoid/current/tutorials/mongoid-configuration/#anatomy-of-a-mongoid-config
If this doesn't answer your question, it's very likely the find method is overridden somewhere in your code!
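To see what the setting controls (a sketch; some_missing_id is a placeholder for an id that matches no document):

# With the default raise_not_found_error: true
Person.find(some_missing_id)              # raises Mongoid::Errors::DocumentNotFound
Person.where(_id: some_missing_id).first  # => nil, never raises

# With raise_not_found_error: false, Person.find(some_missing_id) returns nil instead.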
This question is related to this one: Tracking Upload Progress of File to S3 Using Ruby aws-sdk.
However, since there is no clear solution there, I was wondering if there's a better/easier way (if one exists) of getting file upload progress with S3 using Ruby in 2018?
In my current setup I'm basically creating a new Resource, fetching my bucket and calling upload_file, but I haven't yet found any option for passing a block that would yield some kind of progress.
...
@connection = Aws::S3::Resource.new
@s3_bucket = @connection.bucket(bucket)
@s3_bucket.object(path).upload_file(data, {acl: 'public-read'})
...
Is there a way to do this using the newest sdk-for-ruby v3?
Any help (or even better a small example) would be great.
The example Trevor gives in https://stackoverflow.com/a/12147709/153886 is not hacky from what I can see - just wiring things together. The SDK simply does not provide a feature for passing progress details on all operations. Plus, Trevor is the maintainer of the Ruby SDK at AWS so I trust his judgement.
Expanding on his example (note this is the older aws-sdk v1 interface):
bar = ProgressBar.create(:title => "Uploading action", :starting_at => 0, :total => file.size)
obj = s3.buckets['my-bucket'].objects['object-key']
obj.write(:content_length => file.size) do |writable, n_bytes|
  writable.write(file.read(n_bytes))
  bar.progress += n_bytes
end
If you want to have a progress block right in the upload_file method, I believe you will need to open a PR to the SDK. It is not that strange that this is not the case for Ruby (or for any other runtime) because, for example, there could be an optimisation in the HTTP client library that uses IO.copy_stream from your source body argument to the destination socket, which does not relay progress anywhere.
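In the meantime, a rough workaround (not an SDK feature; ProgressIO and its callback are illustrative) is to wrap the source file in an IO-like object that reports bytes as the SDK reads them and to pass it as the body of a plain put. Note the SDK may read the body more than once (e.g. while computing checksums or signing), which is why the counter resets on rewind.

require 'aws-sdk-s3'

class ProgressIO
  def initialize(io, &on_progress)
    @io = io
    @read = 0
    @on_progress = on_progress
  end

  def size
    @io.size
  end

  def rewind
    @read = 0
    @io.rewind
  end

  def read(length = nil, buffer = nil)
    chunk = @io.read(length, buffer)
    if chunk
      @read += chunk.bytesize
      @on_progress.call(@read, size)
    end
    chunk
  end
end

file = File.open(data, 'rb')
body = ProgressIO.new(file) { |sent, total| puts "#{sent}/#{total} bytes" }

Aws::S3::Resource.new
  .bucket(bucket)
  .object(path)
  .put(body: body, acl: 'public-read')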
I am looking for the syntax to show both the message headers (to, from, subject, date) and the message size when issuing an IMAP command via OpenSSL or telnet.
Currently, I am using:
. fetch 1:* (body[header.fields (from to subject date)])
and
. fetch 1:* (rfc822.size)
I am using these as separate commands, but I was wondering if there is a way to integrate them into a single command. I haven't been able to figure it out myself and wonder if anybody here knows of a way.
You should be able to put both data items in the list:
. fetch 1:* (rfc822.size body[header.fields (from to subject date)])
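If you end up scripting this, the same combined fetch works from Ruby's Net::IMAP (host and credentials are placeholders; BODY.PEEK avoids setting the \Seen flag):

require 'net/imap'

imap = Net::IMAP.new('imap.example.com', ssl: true)
imap.login('user@example.com', 'password')
imap.examine('INBOX') # read-only SELECT

fields = 'BODY.PEEK[HEADER.FIELDS (FROM TO SUBJECT DATE)]'
(imap.fetch(1..-1, ['RFC822.SIZE', fields]) || []).each do |msg|
  puts msg.attr['RFC822.SIZE']
  puts msg.attr['BODY[HEADER.FIELDS (FROM TO SUBJECT DATE)]']
end

imap.logout
imap.disconnect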
I'm using splunk-client to extract results from splunk. Here's the code:
query = "sourcetype=collection #{order_id}"
search = @splunk_client.search(query)
search.wait
The search is happening fine, and it seems like I'm doing everything according to the example (https://github.com/cbrito/splunk-client), but I get this error on the 'search.wait' line:
Undefined namespace prefix: //s:key[@name='isDone']
Any ideas what could be going wrong? Running these commands in irb works fine. Is there some sort of blocking issue?
There is currently very little error checking which occurs within the gem itself. The reason for the error is that wait looks for the status of the isDone key to change to true.
Since your credentials were not properly set up in the first place, the gem creates a search object with an invalid session. The search does not fail immediately, because enough of a response comes back from Splunk for Nokogiri to process it into an object, just without a Splunk search sid.
In the future I should likely raise an exception if a proper sid is not returned to avoid confusion.
Source: I wrote the gem.
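Until that lands, a rough caller-side guard (a sketch based on the gem README's usage; whether the job object exposes a sid reader depends on the gem version, so treat that accessor as an assumption):

require 'splunk_client'

splunk = SplunkClient.new('username', 'password', 'splunk-host')
search = splunk.search("sourcetype=collection #{order_id}")

# If authentication silently failed, the job has no search id; fail fast here
# rather than inside wait's XML parsing.
raise 'Splunk search returned no sid; check credentials and host' if search.sid.to_s.empty?

search.wait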
I found the issue: the splunk client wasn't authenticating properly, so search was actually a broken SplunkJob object (with a nil username and authentication key). It's strange that no error was raised until the wait call, but on inspecting the search object, one of its fields showed that it was malformed.