AES Decryption in ruby and activerecord

I have super ugly code that looks like this:
class User < ActiveRecord::Base
  self.table_name = 'users'

  def get_password
    @test_password = User.find_by_sql "SELECT CAST(AES_DECRYPT(Pass, 'kkk') AS CHAR(50)) Pass FROM prod.sys_users WHERE Owner = '" + @owner + "' AND User = '" + @user + "'"
    @test_password[0].Pass
  end
end
This code works, but it makes me sick, since it is not written according to Ruby coding style. So I decided to fix this code, and here is what I have so far:
class User < ActiveRecord::Base
  self.table_name = 'users'

  def get_pass
    User.where(Owner: @owner, User: @user).pluck(:Pass).first
  end
end
So, I am getting the encrypted password; how can I decrypt it?
I tried OpenSSL, but the key 'kkk' here is too short.
How can I resolve this issue?

In a situation like this, you might be better off converting the field values entirely. This could be done in a migration and once it's done, you never have to be concerned about how MySQL has stored the data. It's also one step toward database independence.
So, the migration would basically do 3 things:
add a flag column to track which records have been converted
iterate over each record, converting the encrypted value and setting the flag
remove the flag column once all records have been processed
The migration might look like this:
class ConvertMySqlEncryptedData < ActiveRecord::Migration
  # Local proxy class to prevent interaction issues with the real User class
  class User < ActiveRecord::Base
  end

  def up
    # Check whether the flag has already been created (indicates that the migration may have failed midway through)
    unless column_exists?(:users, :encrypted_field_converted)
      # Add the flag field to the table
      change_table :users do |t|
        t.boolean :encrypted_field_converted, null: false, default: false
      end
      # Add an index to make the update step go much more quickly
      add_index :users, :encrypted_field_converted, unique: false
    end

    # Make sure that ActiveRecord can see the new column
    User.reset_column_information

    # Set up AES 256-bit cipher-block-chaining symmetric encryption
    alg = "AES-256-CBC"
    digest = Digest::SHA256.new
    digest.update("symmetric key")
    key = digest.digest
    aes = OpenSSL::Cipher.new(alg)
    aes.encrypt
    aes.key = key
    aes.iv = aes.random_iv
    key64 = Base64.encode64(key) # Base64 copy of the key, in case it needs to be stored elsewhere

    # Don't update timestamps
    ActiveRecord::Base.record_timestamps = false

    begin
      # Cycle through the users that haven't yet been updated, pulling the decrypted value back with each row
      User.where(encrypted_field_converted: false)
          .select("id, Pass, encrypted_field_converted, CAST(AES_DECRYPT(Pass, 'kkk') AS CHAR(50)) AS decrypted_pass")
          .each do |user|
        # Re-encrypt the password with OpenSSL AES, based on the setup above
        aes.reset
        new_pass = aes.update(user.decrypted_pass) + aes.final
        # Update the password on the row, and set the flag to indicate that conversion has occurred
        user.update_attributes(Pass: new_pass, encrypted_field_converted: true)
      end

      # All rows processed: remove the tracking flag, per step 3 above
      remove_index :users, :encrypted_field_converted
      remove_column :users, :encrypted_field_converted
    ensure
      # Reset timestamp recording
      ActiveRecord::Base.record_timestamps = true
    end
  end

  def down
    # To undo or not undo, that is the question...
  end
end
This was off the top of my head, so there may be issues with the encryption. Structure-wise, it should be in good shape, and it takes into account a number of things:
Provides incremental database processing by using a flag to indicate progress
Uses an index on the flag field to improve query performance, particularly if multiple runs are required to complete processing
Avoids updating the updated_at column to prevent overwriting prior values that may be useful to keep (this is not a material change, so updated_at doesn't require updating)
Plucks only the pass field, so that transfer overhead is minimized
Now, you can query pass and encrypt/decrypt as needed by the application. You can document and support the field at the application level, rather than rely on the database implementation.
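For example, once the values are stored as OpenSSL-encrypted bytes, decryption can live entirely in the application. The following is only a minimal sketch, assuming the same algorithm and digest-derived key as the migration above, and that the IV was persisted somewhere the application can read it back (the method names here are just illustrative):

require 'openssl'
require 'digest'

ALG = "AES-256-CBC"

# Derive the same 256-bit key the migration used
def password_key
  Digest::SHA256.digest("symmetric key")
end

# Decrypt a value that was encrypted during the migration
def decrypt_pass(ciphertext, iv)
  aes = OpenSSL::Cipher.new(ALG)
  aes.decrypt
  aes.key = password_key
  aes.iv  = iv
  aes.update(ciphertext) + aes.final
end

Note that the migration above generates a random IV, so in practice you would persist that IV alongside the data or in configuration so it can be passed back in here.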
I spent a few years consulting and doing database conversions, either from one database product to another or as part of significant version upgrades. Moving to database-independent code also lets development use lighter-weight databases (e.g. SQLite) or test viability with other products when upscaling is needed. Avoiding database-specific features, like the MySQL encryption, will save you (or your employer) a LOT of money and hassle in the long run. Database independence is your friend; embrace it and use what ActiveRecord provides to you.

Related

How to use the variable in one action to another action

In my Ruby on Rails controller, I have two methods in the same controller:
class NotificationsController < ApplicationController
  def first
    variable_one = xxxx
  end

  def second
    # do something
  end
end
I want to set a variable in the method first and then use it in the method second. I tried assigning it to the session hash (session[:variable_one] = variable_one) and accessing it in the method second, but session[:variable_one] turns out to be nil there. These two methods don't have corresponding views, so I cannot add a link_to and pass parameters. The first method cannot be set as a before_action either.
Could you please offer some suggestions on this problem? Thanks so much.
The issue is that the session is stored via a cookie, and therefore it is specific to one device. So you will have one session between the Rails app and your frontend, and another session between the Rails app and Twilio (the Twilio session will probably reset between each request). Basically, they're totally separate contexts.
Possibly you could figure out how to pass the information along via Twilio - see https://www.twilio.com/docs/voice/how-share-information-between-your-applications - but as a general-purpose workaround, you could just store the value in a database column.
First, make a migration to add the column:
add_column :users, :my_variable, :string
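For instance, the whole migration might look something like this (the file name and class name are just placeholders):

# db/migrate/XXXXXXXXXXXXXX_add_my_variable_to_users.rb
class AddMyVariableToUsers < ActiveRecord::Migration[6.0] # match the [x.y] to your Rails version
  def change
    add_column :users, :my_variable, :string
  end
end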
Set this value in the first endpoint:
def first
  current_user.update my_variable: "xxxx"
end
Then read it from the second:
def second
  # first you would need to load the user, then you can read the value:
  my_variable = current_user.my_variable
  # you could set the db value back to nil here if you wanted
  current_user.update my_variable: nil
end

Rails 4 update Type when migrating to Single Table Inheritance

Rails 4.0.4, Ruby 2.1.2
I want to use STI like so:
User < ActiveRecord::Base
Admin < User
But currently I have:
User < ActiveRecord::Base
Info < ActiveRecord::Base
So I changed my models and then started writing my migration. In my migration, I first add a column to allow STI:
add_column :users, :type, :string
Then I want to update the Users currently in the database to be Admins:
# Place I'm currently stuck
Then I move all my Info records into the Users table
Info.all.each { |info| User.create(name: info.name, email: info.email) }
Everything seems to work except turning the previous Users into Admins. Here are some things I've tried:
# Seems to work, but doesn't actually save the type value
User.all.each do |user|
  user.becomes!(Admin)
  user.save! # evaluates to true, doesn't have any errors
end

# Seems to work, but doesn't actually save the type value
# I've also tried a combo of this one and the above one
User.all.each do |user|
  user.type = "Admin"
  user.save! # evaluates to true, doesn't have any errors
end

User.all.each do |user|
  user = user.becomes!(Admin)
  user.save! # evaluates to true, doesn't have any errors
end

# Seems to work, but doesn't actually save the type value
User.all.each do |user|
  user.update_attributes(type: "Admin")
end
Each time the local user variable seems to have the correct type ("Admin"), and save evaluates to true, but when I check Admin.count or the users' type values, it is always nil. I know you're not supposed to change them, but this is just to migrate the data over to STI, and then I'll be able to start creating Users or Admins with the proper class.
At the very least I think Rails should raise an error, set an error, or somehow let the developer know it's failing the save calls.
It turns out that while update_attributes doesn't work for type (I haven't researched why yet), update_column does work.
So the migration simply becomes:
User.all.each do |user|
  user.update_columns(type: "Admin")
end
The reason this works and the other updates don't can probably be traced back to either callbacks or validations not being run. I have no callbacks that would prevent it, but maybe there are default Rails ones for type.
http://apidock.com/rails/ActiveRecord/Persistence/update_columns
If you had more rows in the database, iterating over each user like this would become quite slow, as it issues a separate SQL update for each user.
Generally you could use User.update_all(field: value) to do this in one SQL call, but there is another reason to avoid that: if the User model is later removed, the migration will no longer run.
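For comparison, the single-statement version that still references the model would look something like this (and it stops working if the User class is ever removed):

User.where(type: nil).update_all(type: "Admin")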
One way to update all rows at once without referencing the model is to use raw SQL in the migration:
def up
  execute "UPDATE users SET type = 'Admin' WHERE type IS NULL"
end

Does adding a new object in a has_one relationship not update the association?

I have 2 models, for example:
class Report ...
  belongs_to :answer_sheet
end

class AnswerSheet ...
  has_one :report
end
When I do a:
@answersheet.report = Report.create(:data => 'bleah')
@answersheet.save

# and then create another report and assign it to the same @answersheet
# assuming at this stage @answersheet is already reloaded
@answersheet.report = Report.create(:data => 'new data')
@answersheet.save

# (irb) @answersheet.report returns the first report with the data 'bleah' and not
# the one with the new data.
Is this supposed to be the correct behavior?
If I want to update the association to the later report, how should I go about doing it?
It took me a few tries to see what you were talking about, but I got it now.
Take a look at the SQL and you'll find ActiveRecord is doing a SELECT and then adding an ORDER ... ASC and LIMIT 1. There can be more than one report record that refers to the same answer_sheet.
You can prevent this situation by adding a validation that checks for uniqueness of answer_sheet_id.
You should also start using save! and create! (note the bang operators) so exceptions are thrown during validation.
Lastly, calling Report.create followed by @answersheet.save performs two database transactions, whereas Report.new followed by @answersheet.save would perform just one.
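A rough sketch of those two suggestions, using the model and attribute names from the question:

class Report < ActiveRecord::Base
  belongs_to :answer_sheet

  # Only one report may reference a given answer sheet
  validates :answer_sheet_id, uniqueness: true
end

# The bang variants raise on validation failure instead of quietly returning false
report = Report.create!(data: 'new data', answer_sheet: @answersheet)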

Datamapper: report why I can't destroy record

I'm setting up my db model using DataMapper and dm-constraints. I have two models with a many-to-many relationship, but when I try to destroy one, the only message I get is false.
Is it possible to get DataMapper to give me more feedback on which relationship exactly is causing the problem?
With datamapper 1.2.1:
def why_you_no_destroy? model
  preventing = []
  model.send(:relationships).each do |relationship|
    next unless relationship.respond_to?(:enforce_destroy_constraint)
    preventing << relationship.name unless relationship.enforce_destroy_constraint(model)
  end
  preventing
end
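Call it with the instance that refuses to be destroyed; it returns the names of the relationships blocking the destroy. A hypothetical usage, borrowing the List/todos models from the answer below:

list = List.get(1)
why_you_no_destroy?(list)
# => [:todos]   (hypothetical output)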
Unfortunately DM doesn't provide a way to report why a destroy failed.
Most of the time the destroy fails because of associations; DM has a mechanism to avoid orphan records.
To avoid this kind of failure, you can use dm-constraints (https://github.com/datamapper/dm-constraints) to set up true database-level foreign key references, which default to protect but can be set to cascade deletes instead.
class List
  has n, :todos, :constraint => :destroy # (or :destroy!)
end
Sadly, dm-constraints currently only supports PostgreSQL and MySQL.
For other databases, you can check all the associations manually, delete them first, and then delete the model.
You can get information on DataMapper errors like this:

unless model.destroy
  model.errors.each do |error|
    p error
  end
end
Sometimes that doesn't tell you anything though, in which case you can put your code inside a begin/rescue block, e.g.:

begin
  model.destroy
rescue Exception => exc
  p exc
end

Runtime changing model with mongodb/mongoid

I have to add several fields to a Mongoid model. I know there are no migrations with MongoDB, but if I go on without dropping the DB (making Rails "regenerate" the DB entirely), it doesn't display or use the new fields at all!
What's the best way to go here? Is there something softer than dropping and recreating the MongoDB database?
Thanks in advance
luca
In general it should be possible to update old documents with the new fields at runtime. There is no need for migrations in MongoDB.
You may want to write rake tasks to update your old documents with the new fields and default values.
You can find those documents by checking for the new fields that, by default, have a nil value.
Update
Easy style:
If you define a new field with a default value, that value will be used until you set a different one:
app/models/my_model.rb
class MyModel
  include Mongoid::Document

  field :name, type: String
  field :data, type: String

  # NEW FIELD
  field :note, type: String, default: "no note given so far!"
end
If you query your database you should get your default value for documents which didn't have this field before your extension:
(rails console)
MyModel.first
#=> #<MyModel …other fields…, note: "no note given so far!">
I tested this with a fresh Rails stack and a current Mongoid on Ruby 1.9.2 - it should work with other stacks, too.
More complicated/complex style:
If you didn't set a default value, you'll get nil for this new field.
app/models/my_model.rb
class MyModel
  include Mongoid::Document

  field :name, type: String
  field :data, type: String

  # NEW FIELD
  field :note, type: String
end
(rails console)
MyModel.first
#=> #<MyModel …other fields…, note: nil>
Then you could set up a rake task and migration file like in this example:
lib/tasks/my_model_migration.rake:
namespace :mymodel do
  desc "MyModel migration task"
  task :migrate => :environment do
    require "./db/migrate.rb"
  end
end
db/migrate.rb:
# Enumerator of documents without a valid :note field (= nil)
olds = MyModel.where(note: nil)

olds.each do |doc|
  # or whatever your desired default value should be
  doc.note = "(migration) no note given yet"
  # the rescue is only a failsafe statement in case something goes wrong
  doc.save! rescue puts "Could not modify doc #{doc.id}/#{doc.name}"
end
Run this migration with rake mymodel:migrate.
This is only a starting point and you can extend this to a full mongoid migration engine.
The :environment dependency in task :migrate => :environment is necessary; otherwise rake won't load your models.
It is a little ridiculous to say that you don't need migrations with MongoDB or Mongoid. Any sophisticated app needs to be refactored from time to time, and that can mean pulling fields out of disparate documents into a new one.
Writing one-off rake tasks is far less convenient and more error-prone than having migrations be part of your deploy script, so that they always run in every environment.
https://github.com/adacosta/mongoid_rails_migrations brings AR-style migrations to Mongoid.
You might need them less often, but you will certainly need them as an app grows.
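Roughly, a migration with that gem is written in the same AR style; this is only an approximation based on the gem's conventions (the exact base class and rake integration come from the gem, so check its README for your version), with a data backfill borrowed from the example above:

class BackfillNoteOnMyModel < Mongoid::Migration
  def self.up
    # Backfill the new field on existing documents
    MyModel.where(note: nil).each do |doc|
      doc.update_attribute(:note, "no note given so far!")
    end
  end

  def self.down
    # nothing to undo for a data backfill
  end
end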
Below is a nice code example for a data migration script using Mongoid and the Ruby Mongo driver - to be used when your updated model no longer matches production data.
http://pivotallabs.com/users/lee/blog/articles/1548-mongoid-migrations-using-the-mongo-driver
I wish we would stop using "no migrations with Mongoid" as a slogan. It turns people toward MongoDB for the wrong reasons, and it's only partially true. No schema, true, but data still needs to be maintained, which IMO is harder with MongoDB than with RDBMSs. There are other, great reasons for choosing MongoDB, and it depends on your problem.
