brainspec/enumerize throwing MySQL error for default value - ActiveRecord

I'm working on a Rails 4 app and have begun using the brainspec/enumerize gem. I have an integer status column in the database and would like to enumerate it in my model.
Below is the snippet I use to set it up. Unfortunately, my tests (which were all passing previously) now fail when creating a Partner because the row can't be saved: it can't assign a default status of NULL. I'm not sure where NULL comes from, since the database itself (MySQL) has a default value of 0 and, as you can see below, the back-end side instructs a default of 4, i.e. :incomplete.
enumerize :status, in: [
  :pending,    # 0 account has a pending billing request (but is not yet open)
  :active,     # 1 account has an active base subscription
  :suspended,  # 2 account has been suspended (e.g. after a base subscription decline)
  :expired,    # 3 base subscription has expired
  :incomplete, # 4 partner application process incomplete
  :closed,     # 5 account has been permanently closed
  :cancelled   # 6 account has been cancelled by user (but is still unexpired)
], default: :incomplete
Here is the ActiveRecord/MySQL error.
PartnerTest#test_create_with_nested_attributes:
ActiveRecord::StatementInvalid: Mysql2::Error: Column 'status' cannot be null: UPDATE `partner` SET `primary_contact_id` = 3, `status` = NULL WHERE `partner`.`id` = 3
test/models/partner_test.rb:9:in `block in <class:PartnerTest>'
Furthermore, I know that the default value (:incomplete) is being picked up by Enumerize: if I throw gibberish into the default (default: :asdoiasoas), it baulks.
I'm using the master branch so that it works with Rails 4.
Gemfile
gem 'enumerize', :github => 'brainspec/enumerize'

According to the brainspec/enumerize README, you should provide an integer value for each status, like:
enumerize :status, in: {
  pending:    0, # account has a pending billing request (but is not yet open)
  active:     1, # account has an active base subscription
  suspended:  2, # account has been suspended (e.g. after a base subscription decline)
  expired:    3, # base subscription has expired
  incomplete: 4  # partner application process incomplete
  # And so on...
}, default: :incomplete
As you provided only the key but not the value, it was setting it to nil/NULL.
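A plain-Ruby sketch of the difference (this only illustrates the name-to-integer mapping; it is not enumerize's internals):

```ruby
# With the hash form, every enum name has an integer to store in the column.
value_map = {
  pending: 0, active: 1, suspended: 2, expired: 3,
  incomplete: 4, closed: 5, cancelled: 6
}

value_map[:incomplete] # => 4, the integer written to the status column
# A name without a mapped value has nothing to store -> NULL in the column:
value_map[:asdoiasoas] # => nil
```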

How to scale down instances based on their uptime with Apache Marathon?

I find myself in a situation where I need to scale down container instances based on their actual lifetime. It looks like fresh instances are removed first when scaling down through Marathon's API. Is there any configuration I'm not aware of to implement this kind of strategy or policy when scaling down instances on Apache Marathon?
As of right now I'm using marathon-lb-autoscale to automatically adjust the number of running instances. What actually happens under the hood is that marathon-lb-autoscale performs a PUT request updating the instances property of the current application when req/s increases or decreases.
scale_list.each do |app, instances|
  req = Net::HTTP::Put.new('/v2/apps/' + app)
  if !@options.marathonCredentials.empty?
    req.basic_auth(@options.marathonCredentials[0], @options.marathonCredentials[1])
  end
  req.content_type = 'application/json'
  req.body = JSON.generate('instances' => instances)
  Net::HTTP.new(@options.marathon.host, @options.marathon.port).start do |http|
    http.request(req)
  end
end
I don't know if the upgradeStrategy configuration is taken into account when scaling down instances. With default settings I cannot get the expected behaviour to work.
{
  "upgradeStrategy": {
    "minimumHealthCapacity": 1,
    "maximumOverCapacity": 1
  }
}
ACTUAL
instance 1
instance 2
PUT /v2/apps/my-app {instances: 3}
instance 1
instance 2
instance 3
PUT /v2/apps/my-app {instances: 2}
instance 1
instance 2
EXPECTED
instance 1
instance 2
PUT /v2/apps/my-app {instances: 3}
instance 1
instance 2
instance 3
PUT /v2/apps/my-app {instances: 2}
instance 2
instance 3
One can specify a killSelection directly inside the application's config: YoungestFirst kills the youngest tasks first, and OldestFirst kills the oldest ones first.
Reference: https://mesosphere.github.io/marathon/docs/configure-task-handling.html
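Concretely, the application definition might look something like this (a sketch only; the exact enum spelling for killSelection can vary between Marathon versions, so check the linked docs for yours):

```json
{
  "id": "/my-app",
  "instances": 2,
  "killSelection": "OldestFirst",
  "upgradeStrategy": {
    "minimumHealthCapacity": 1,
    "maximumOverCapacity": 1
  }
}
```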

How to get order username and provisionDate for all SoftLayer machines using Ruby?

Using Ruby I'm making a call like:
client = SoftLayer::Client.new(:username => user, :api_key => api_key, :timeout => 999999)
client['Account'].object_mask("mask[id, hostname, fullyQualifiedDomainName, provisionDate, datacenter[name], billingItem[recurringFee, associatedChildren[recurringFee], orderItem[description, order[userRecord[username], id]]], tagReferences[tagId, tag[name]], primaryIpAddress, primaryBackendIpAddress]").getHardware
But only some machines return a provisionDate and only some return orderItem information. How can I consistently get this information for each machine? What would cause one machine to return this data and another machine to not?
Example output:
{"fullyQualifiedDomainName"=>"<removed_by_me>",
"hostname"=>"<removed_by_me>",
"id"=>167719,
"provisionDate"=>"",
"primaryBackendIpAddress"=>"<removed_by_me>",
"primaryIpAddress"=>"<removed_by_me>",
"billingItem"=>
{"recurringFee"=>"506.78",
"associatedChildren"=>
[<removed_by_me>]},
"datacenter"=>{"name"=>"dal09"},
"tagReferences"=>
[{"tagId"=>139415, "tag"=>{"name"=>"<removed_by_me>"}},
{"tagId"=>139417, "tag"=>{"name"=>"<removed_by_me>"}},
{"tagId"=>140549, "tag"=>{"name"=>"<removed_by_me>"}}]}
To be clear, most machines return this data so I'm trying to understand why some do not.
Please see the following provisioning steps; here is the flow to consider:
1. Order a server
Result:
* An orderId is assigned to the server
* createDate has a new value
* activeTransaction = null
* provisionDate = null
2. The order is approved
Result:
* activeTransaction <> null
* provisionDate = null
3. The server is provisioned
Result:
* activeTransaction = null
* provisionDate has a new value
* billingItem has a new value
To check whether your machines still have an activeTransaction, please execute:
https://[username]:[apikey]@api.softlayer.com/rest/v3/SoftLayer_Hardware_Server/[server_id]/getActiveTransaction
Method: GET
Now, after reviewing your example response: this server had some problems completing the provisioning, so that step was finished manually, but the provisionDate was never set (please open a ticket if you want the provisionDate to be set). This is a special case, and I can see that one other server behaves similarly. The remaining servers that don't have a provisionDate still have activeTransaction <> null, which means they are not provisioned yet.
EDIT:
Another property that can tell you a machine has already been provisioned, even while some other kind of transaction is running, is hardwareStatus; it should have the value ACTIVE.
https://[username]:[apikey]@api.softlayer.com/rest/v3/SoftLayer_Account/getHardware?objectMask=mask[id, hostname, fullyQualifiedDomainName, provisionDate,hardwareStatus]
Method: GET
The response should be something like this:
{
  "fullyQualifiedDomainName": "myhostname.softlayer.com",
  "hostname": "myhostname",
  "id": 1234567,
  "provisionDate": "2015-06-29T00:21:39-05:00",
  "hardwareStatus": {
    "id": 5,
    "status": "ACTIVE"
  }
}
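The flow above can be put into code. Below is a small hypothetical helper (the provisioned? name is mine, not part of the softlayer_api gem) that classifies a hardware record hash as returned by getHardware, treating a machine as provisioned when it has a provisionDate or an ACTIVE hardwareStatus (which covers the manually-completed case):

```ruby
# Hypothetical helper -- not part of the softlayer_api gem.
def provisioned?(hw)
  status = hw.dig('hardwareStatus', 'status')
  date   = hw['provisionDate']
  (!date.nil? && !date.empty?) || status == 'ACTIVE'
end

servers = [
  { 'id' => 1, 'provisionDate' => '2015-06-29T00:21:39-05:00',
    'hardwareStatus' => { 'status' => 'ACTIVE' } },
  { 'id' => 2, 'provisionDate' => '', 'hardwareStatus' => nil } # still in a transaction
]

# Machines that are still mid-transaction (not yet provisioned):
pending = servers.reject { |hw| provisioned?(hw) }
```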

Generate expiring activator token or a key hash in rails manually

I'm trying to verify a link that will expire in a week. I have an activator_token stored in the database, which will be used to generate the link in this format: http://www.example.com/activator_token. (And not activation tokens generated by Devise or Authlogic.)
Is there a way to make this activator token expire (in a week or so) without comparing it against updated_at or some other date column? Something like an encoded token that returns nil when decoded after a week. Can any existing Ruby modules do this? I don't want to store the generated date in the database or in an external store like Redis and compare it with Time.now. I want it to be very simple, and wanted to know if something like this already exists before writing the logic again.
What you want to use is: https://github.com/jwt/ruby-jwt .
Here is some boilerplate code so you can try it out yourself.
require 'jwt'

# Generate your keys when deploying your app.
# Doing so using a rake task might be a good idea.
# How to persist and load the keys is up to you!
rsa_private = OpenSSL::PKey::RSA.generate 2048
rsa_public = rsa_private.public_key

# Do this when you are about to send the email.
exp = Time.now.to_i + 4 * 3600 # expires in 4 hours; use 7 * 24 * 3600 for a week
payload = { exp: exp, discount: '9.99', email: 'user@example.com' }

# When generating an invite email, this is the token you want to incorporate in
# your link as a parameter.
token = JWT.encode payload, rsa_private, 'RS256'
puts token
puts token.length

# This goes into your controller.
begin
  # token = params[:token]
  decoded_token = JWT.decode token, rsa_public, true, { :algorithm => 'RS256' }
  puts decoded_token.first
  # continue with your business logic
rescue JWT::ExpiredSignature
  # Handle expired token, e.g. inform the user his invite link has expired!
  puts 'Token expired'
end
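If you'd rather avoid a gem and RSA key management, the same idea can be sketched with only the Ruby standard library: an HMAC-signed token that carries its own expiry, where decoding returns nil for tampered or expired tokens (the helper names and SECRET constant here are mine, not an existing API):

```ruby
require 'openssl'
require 'base64'
require 'json'

# Assumption: in a real app, load this from your secrets, not a literal.
SECRET = 'change-me'.freeze

# Encode a payload plus an expiry timestamp, signed with HMAC-SHA256.
def generate_token(payload, ttl_seconds)
  body = Base64.urlsafe_encode64(
    JSON.generate(payload.merge('exp' => Time.now.to_i + ttl_seconds))
  )
  "#{body}.#{OpenSSL::HMAC.hexdigest('SHA256', SECRET, body)}"
end

# Returns the payload hash, or nil if the token was tampered with or expired.
def decode_token(token)
  body, sig = token.split('.', 2)
  return nil unless body && sig
  expected = OpenSSL::HMAC.hexdigest('SHA256', SECRET, body)
  return nil unless expected == sig # use a constant-time compare in production
  payload = JSON.parse(Base64.urlsafe_decode64(body))
  return nil if payload['exp'] < Time.now.to_i
  payload
end

token = generate_token({ 'activator' => 'abc123' }, 7 * 24 * 3600) # one week
decode_token(token)       # payload hash with 'activator' and 'exp'
decode_token(token + 'x') # => nil (signature mismatch)
```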

MongoDB return codes meaning (ruby driver)

I'm calling a collection update from the Ruby driver against MongoDB and get a return code of 117.
How do I generally interpret the error codes that I get?
If you are using safe mode, the update method returns a hash containing the output of getLastError. However, when you are not using safe mode, we simply return the number of bytes that were sent to the server.
# setup connection & get handle to collection
connection = Mongo::Connection.new
collection = connection['test']['test']
# remove existing documents
collection.remove
=> true
# insert test document
collection.insert(:_id => 1, :a => 1)
=> 1
collection.find_one
=> {"_id"=>1, "a"=>1}
# we sent a message with 64 bytes to a mongod
collection.update({_id: 1},{a: 2.0})
=> 64 # number of bytes sent to server
# with safe mode we updated one document -- output of getLastError command
collection.update({_id: 1},{a: 3.0}, :safe => true)
=> {"updatedExisting"=>true, "n"=>1, "connectionId"=>19, "err"=>nil, "ok"=>1.0}
This is something that could be made clearer in the documentation. I will update it for the next ruby driver release.
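To act on this distinction in code, here is a small hypothetical helper (the update_ok? name is mine) that interprets both return shapes: an Integer (bytes sent) without :safe, and a getLastError hash with :safe:

```ruby
# Hypothetical helper -- interprets the two return shapes of Collection#update
# in the legacy Ruby driver (Integer without :safe, getLastError hash with :safe).
def update_ok?(result)
  case result
  when Integer
    result > 0                      # bytes sent; no server-side confirmation
  when Hash
    result['err'].nil? && result['ok'] == 1.0
  else
    false
  end
end

update_ok?(64)                                                   # => true
update_ok?({ 'updatedExisting' => true, 'n' => 1,
             'err' => nil, 'ok' => 1.0 })                        # => true
update_ok?({ 'err' => 'duplicate key', 'code' => 11000,
             'ok' => 1.0 })                                      # => false
```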

What is a dirty resource?

I just started using DataMapper.
I am trying to update an object. I get the object/model using its id:
u1 = User.get(1)
u1.name = "xyz"
u1.update
which throws an error/raises an exception. I tried again:
u1 = User.get(1)
and after that:
u1.update({:name => "xyz"})
returns false, and dirty? returns true.
After that, any call to update fails, saying the resource is dirty.
I can do a save by:
u1.name = "xyz"
u1.save
Here are my questions:
What should I be using: save or update?
Should I call get(id) even to change just one field?
When should I use update, and what is the syntax: user.update({ ... }) or user.name = "xyz"; user.update?
What is dirty?, and once I make an object dirty, do I have to fetch the object fresh from the database into the variable?
When you fetch a resource from the db and then change its attributes, the resource becomes 'dirty'. This means the resource is loaded into memory, its state has changed, and the changes can be persisted to the db.
You use #save to persist changes made to a loaded resource, and you use #update when you want to immediately persist changes without marking the resource's state 'dirty'. Here's an example session:
User.create(:name => 'Ted')
# update user via #save
user = User.get(1)
user.name = 'John'
user.dirty? # => true
user.save
# update user via #update
user = User.get(1)
user.update(:name => 'John')
user.dirty? # => false
