Is it possible to emit a PostgreSQL-style type cast in Arel instead of doing it in Ruby in a Type::Value descendant? - rails-activerecord

I would like to use a column of type regclass in a PostgreSQL database. Apparently Rails does not know this type, as I get unknown OID 2205: failed to recognize type of 'relname'. It will be treated as String. Treating it as a String is fine for reading; the problems occur when trying to store values back into the database.
From what I understand, one typically defines a descendant of the Type::Value class to perform casts back and forth. This is somewhat cumbersome for this cast, as one usually writes 'some_table'::regclass in SQL; doing that in Ruby would require another round trip to the database.
Is it possible to make Arel(?) emit such constructs in the final SQL statement? So if I do Something.find(relname: 'some_table'), it emits ... WHERE "relname" = $1::regclass, that is, it casts on the PostgreSQL side if I supplied a String. Or better yet, WHERE "relname"::text = $1.
I know I can do where("relname::text = ?", 'some_table'). However, if I use #first_or_create!, I have to assign the attribute explicitly. I was curious whether this could be more seamless.

One potential way of accomplishing this with Arel is to generate a CAST function.
# Replace `Model` with your ActiveRecord model
def relname_as_text
  Arel::Nodes::NamedFunction.new(
    'CAST',
    [
      Arel::Nodes::As.new(
        Model.arel_table[:relname],
        Arel::Nodes::SqlLiteral.new('text')
      )
    ]
  )
end
Model.where(relname_as_text.eq('some_table'))
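If you want the lookup side of #first_or_create! to use the cast, a minimal usage sketch (assuming a Model class with a relname column; the attribute for the create step is still passed explicitly, as in the question):

# Hypothetical usage: the Arel node drives the WHERE clause,
# the explicit attribute drives creation.
Model.where(relname_as_text.eq('some_table'))
     .first_or_create!(relname: 'some_table')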

Related

How should I merge a hash while using 'first_or_create'?

I have this hash, which is built dynamically:
additional_values = {"grouping_id"=>1}
I want to merge it with this record object after creation via first_or_create:
result = model.where(name: 'test').first_or_create do |record|
  # I'm trying to merge any record attributes that exist in my hash:
  record.attributes.merge(additional_values)
  # This works, but it sucks:
  # record.grouping_id = data['grouping_id'] if model.name == 'Grouping'
end
# Not working:
# result.attributes => {"id"=>1, "name"=>"Test", "grouping_id"=>nil}
I understand that if the record already exists (returned via 'first'), it won't be updated. That would be a nice option, and recommendations on it are welcome, but the table was just dropped and recreated, so that's not the issue here.
What am I missing?
I also tried using to_sym, resulting in:
additional_values = {:grouping_id=>1}
...just in case there was some weirdness I didn't know about. It didn't make a difference.
The problem is that Hash#merge returns a new hash, and you aren't doing anything with that hash; you're just throwing it away. I would also suggest sticking to the ActiveRecord methods for updating attributes rather than manipulating the underlying hash, such as assign_attributes or, if you want to save the record, update. That said, you may find create_with, which can be used with find_or_create_by, useful here:
model.create_with(additional_values).find_or_create_by(name: 'test')
I can't find any documentation that I like (if any at all) for first_or_create in recent Rails versions, but if you prefer it to find_or_create_by, then going by the Rails 3 documentation for first_or_create, you should be able to do without the create_with:
model.where(name: 'test').first_or_create(additional_values)
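If you would rather keep the block form from the question, a minimal sketch of the assign_attributes route mentioned above (assuming the model relation and additional_values hash from the question):

result = model.where(name: 'test').first_or_create do |record|
  # assign_attributes mutates the new record in place,
  # unlike Hash#merge, which returns a new hash that is then discarded
  record.assign_attributes(additional_values)
end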

Is there any way to recreate an object from the output of its Object.inspect method?

While working with the open-source ELK stack, we have run into an issue where one of the Logstash inputs, snmptrap, formats data in a way that is unusable for us. Within the SNMPv1_Trap class there is an instance variable called agent_address which is stored as an SNMP::IpAddress. For anyone familiar with the way SNMP works, the agent address is extremely important in determining where an SNMP trap originated when using trap relays on your network.
The problem can be seen when you take a look at an event generated by Logstash upon receiving a trap. Mainly, the inspect method of the agent_address variable is dumping data that does not match anything valid.
A sample event looks kind of like this:
#<SNMP::SNMPv1_Trap:0x2db53346 @enterprise=[1.3.6.1.4.1.6827.10.17.3.1.1.1], @timestamp=#<SNMP::TimeTicks:0x2a643dd1 @value=0>, @varbind_list=[#<SNMP::VarBind:0x2d5043a5 @name=[1.0], @value=#<SNMP::Integer:0x29fb6a4a @value=1>>], @specific_trap=1000, @source_ip=\"192.168.87.228\", @agent_addr=#<SNMP::IpAddress:0x227a4011 @value=\"\\xC0\\xA8V\\xFE\">, @generic_trap=6>
We know, however, that the IpAddress object used in SNMP::SNMPv1_Trap can return a nicely formatted string representing the IPv4 address it stores.
For example:
require 'snmp'
include SNMP
address = IpAddress.new("192.168.86.254")
puts address
will yield 192.168.86.254 whereas:
require 'snmp'
include SNMP
address = IpAddress.new("192.168.86.254")
puts address.inspect
will yield:
#<SNMP::IpAddress:0x0000000168ae88 @value="\xC0\xA8V\xFE">
This is the expected behaviour of an object whose .inspect method has not been overridden.
Obviously the IPv4 address in @value is not useful to us in this form: it shows only three hex escape sequences (\xC0 = 192, \xA8 = 168, \xFE = 254), with the remaining octet appearing as the literal character 'V', which suggests some strange encoding. The same thing occurs whenever an octet string representing an IPv4 address is sent as a variable binding.
Unfortunately, aside from writing our own SNMP input, there is no interface-level access to this object. The object we receive via 'event' contains the inspect string, not the object itself. Therefore, the easiest apparent way to get the information we need would be to reconstruct the SNMPv1_Trap object and then make our own calls to it via Object#send.
If I have the raw, unformatted, default string dump returned by Object#inspect, is there any way to physically recreate the object used to produce this inspect dump on the fly?
For example, given the string dump:
#<Integer:0x2737476 @value=1>
is it possible to recreate an Integer object with a field whose value is 1? If this is possible, is there also a way to recreate nested objects the same way? For example, given the string:
#<SNMP::SNMPv1_Trap:0x2ef73621 @value=1, @agent_address=#<SNMP::IpAddress:0x0000000168ae88 @value="\xC0\xA8V\xFE">>
would it be possible to end up with an object that looks like the following?
SNMP::SNMPv1_Trap {
  @value : 1
  @agent_address : SNMP::IpAddress {
    @value : "\xC0\xA8V\xFE"
  }
}
If I have the raw, unformatted and default string dump returned by Object#inspect, is there any way to physically recreate the object used to make this inspect dump on the fly?
No. inspect is intended for debugging purposes to be read by humans.
It is not guaranteed to be machine-readable. It is not guaranteed to be the same across different Ruby versions. It is not guaranteed to be the same across different Ruby implementations. It isn't even guaranteed to be the same across different versions of the same Ruby implementation implementing the same Ruby version. Heck, I don't even think it is guaranteed to be the same across two runs!
It is not a serialization format.
There are plenty of serialization formats specifically for Ruby (Marshal) or generically (XML, YAML, JSON, and of course ASN.1), but inspect isn't it.
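Purely to illustrate the distinction being drawn here, a minimal sketch of a real serialization round trip next to inspect, assuming the ruby-snmp gem from the question is available:

require 'snmp'

address = SNMP::IpAddress.new("192.168.86.254")

# inspect: a debugging string for humans, not meant to be parsed back
p address.inspect

# Marshal: an actual serialization format with a reversible round trip
blob     = Marshal.dump(address)
restored = Marshal.load(blob)
puts restored  # => 192.168.86.254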

Mongoengine Django Rest Framework - Serializer Error - ReferenceField is not JSON serializable

Everything works great until the ObjectID value of the ReferenceField no longer points to a valid document. Then the ObjectID is left as the value, and json doesn't know how to serialize it.
How do I deal with invalid ReferenceFields?
E.g.
class Food(Document):
    name = StringField()
    owner = ReferenceField("Person")

class Person(Document):
    first_name = StringField()
    last_name = StringField()

...
p = Person(...)
apple = Food(name="apple", owner=p)
p.delete()  # might be the wrong method, but you get the idea
At this point, attempting to fetch a list of foods via the REST API will fail with the "is not JSON serializable" error, since apple.owner no longer points to an owner that exists.
Since you are using DRF with mongoengine, you must be using django-rest-framework-mongoengine.
Apparently, it's a bug in django-rest-framework-mongoengine. Check this open issue on GitHub, which was reported recently regarding the same problem.
https://github.com/umutbozkurt/django-rest-framework-mongoengine/issues/91
One way is to write your own JSONEncoder for this. This link might help.
Another option is to use the json_util module of PyMongo. It provides explicit BSON conversion to and from JSON.
As per the json_util docs:
This module provides two helper methods dumps and loads that wrap the native json methods and provide explicit BSON conversion to and from json. This allows for specialized encoding and decoding of BSON documents into Mongo Extended JSON's Strict mode. This lets you encode / decode BSON documents to JSON even when they use special BSON types.

Stored procedure OUT params in JRuby using Sequel

I'm trying to call a stored procedure in a DB2 database that has output params and also returns a cursor. I can get this done using JDBC through JRuby, but I'd like to extend Sequel to do it, because of the nicer interface. I've gotten this far:
Sequel::JDBC::Database.class_eval do
  def call_test
    sql = "{call ddd.mystoredproc(?)}"
    result = {}
    synchronize do |conn|
      cps = conn.prepare_call(sql)
      cps.register_out_parameter(1, Types::INTEGER)
      result[:success] = cps.execute
      result[:outparam_val] = cps.get_int(1)
      if result[:success]
        dataset.send(:process_result_set, cps.get_data_set) do |row|
          yield row
        end
      end
      # rescue block
    end
  end
end
This gets me a ResultSet that I have to work with in a very Java-ish way, though, not a nice Sequel::Dataset object. I know this code doesn't make sense - I'm just using it to experiment with values, so at one point I was returning the result hash and seeing what it contained. If I can get something that works, I will clean it up and make it more flexible. It looks like the log_yield method just logs the sql and yields to the block, so I don't know how anything else is getting converted to a Sequel::Dataset. Doing something like DB[:ddd__sometable] will return a dataset that I can loop through, but I can't figure out how and at what point the underlying Java ResultSet is getting changed over, or how to do it myself.
edit: Since Sequel::Database can create a dummy Dataset, and the Sequel::JDBC::Dataset has a private method that converts a result set and yields it to a block, the above is what I have now. This works, but I'm absolutely positive that there has to be a better way.
Sequel seems like the best database library for Ruby, which is why I'm trying to work with it, but if there are alternatives that are nicer than using straight JDBC, I'd like to know about them, too.
Sequel doesn't currently support OUT params in stored procedures on JDBC, so what you are currently doing is probably best.
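For what it's worth, a minimal sketch of how the experiment from the question could be tidied up while staying on raw JDBC; the procedure name and OUT-parameter position come from the question, the statement methods come from java.sql.CallableStatement via JRuby's Java integration, and process_result_set is the private Sequel::JDBC::Dataset helper the asker already relies on:

Sequel::JDBC::Database.class_eval do
  def call_with_out_param(sql)
    synchronize do |conn|
      cps = conn.prepare_call(sql)
      begin
        cps.register_out_parameter(1, java.sql.Types::INTEGER)
        rows = []
        if cps.execute
          # convert the JDBC ResultSet into hashes via Sequel's own helper
          dataset.send(:process_result_set, cps.get_result_set) { |row| rows << row }
        end
        [cps.get_int(1), rows]
      ensure
        cps.close
      end
    end
  end
end

out_value, rows = DB.call_with_out_param("{call ddd.mystoredproc(?)}")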

Marshal class: is there a way to find out whether data is already serialized or not?

I am using the Marshal class to serialize a Ruby object with dump() and load(). Everything works well, but when a value unrelated to any serialized data is passed, load() raises the expected and logical error:
incompatible marshal file format (can't be read)
format version 4.8 required; 45.45 given
What I need is to check whether the data has already been serialized before loading it. My goal is to avoid this error and do something else instead.
OK. I've encountered a very similar problem and, based on hints from this post http://lists.danga.com/pipermail/memcached/2007-December/006062.html, I've figured out that this happens when you either try to load data that was never marshalled, or the data was stored improperly (e.g. not in a binary field in the database).
In my case specifically, I used a text column instead of a binary one in the database, and the marshalled data got mangled.
Changing the column type from text to binary helped. Unfortunately you cannot convert the old (corrupted) data, so you have to drop the column and create it again as binary.
Maybe just rescue from the error?
begin
  Marshal.load("foobar")
rescue TypeError
  # not a marshalled object, do something else
  puts "warning: could not load ..."
end
I have applied Padde's approach, but wrapped it in a method that does the job for me and returns the object, either pre-existing or newly created, as follows:
def get_serialized_banner
  begin
    @banner_obj = Marshal.load(self.path)
  rescue TypeError
    self.path = Marshal.dump(Banner.new)
    self.save
    @banner_obj = Marshal.load(self.path)
  end
  return @banner_obj
end
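As an alternative to rescuing TypeError, one could also check the two version bytes that Marshal.dump writes before attempting the load; a minimal sketch (the marshalled? helper name is made up for illustration, and the check is a heuristic, not bulletproof):

def marshalled?(data)
  # Marshal output starts with the format version, 4.8 on current MRI
  data.is_a?(String) &&
    data.bytesize >= 2 &&
    data.getbyte(0) == Marshal::MAJOR_VERSION &&
    data.getbyte(1) <= Marshal::MINOR_VERSION
end

marshalled?(Marshal.dump(Object.new)) # => true
marshalled?("foobar")                 # => false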
