Ruby on Rails: How to create a table for a new model

I use
rails generate model mynewmodel string:name string:description
to generate a new model. How do I deploy this new model to my development database? I already have a number of tables in my SQLite db.
I have tried
rake db:migrate
but it seems to have trouble generating this new table in the db.
Update: added error message
== CreateMynewmodels: migrating ===============================================
-- create_table(:mynewmodels)
rake aborted!
An error has occurred, this and all later migrations canceled:
undefined method `name' for #<ActiveRecord::ConnectionAdapters::TableDefinition:0x3ad5c50>
Tasks: TOP => db:migrate
Thanks

The order of your fieldname:type combo is incorrect. Try
rails generate model mynewmodel name:string description:string

The error is in rails generate model mynewmodel string:name string:description.
You should swap string and name:
rails generate model mynewmodel name:string description:string

Use name:string instead of string:name, and the same for description.

A great article for advanced usage:
Advanced Rails model generators
Note that you have to wrap a parameter like price:decimal{10,2} in quotes. This is vital; the generator may behave incorrectly if you don't.
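The fieldname:type convention above can be illustrated with a small parser sketch (an illustration only, not Rails' actual generator code): each argument is split on the colon into a column name and a column type, which is why string:name would create a column literally called string.

```ruby
# Simplified sketch of how each generator argument is interpreted
# (illustration only, not Rails' real parser).
def parse_attribute(arg)
  name, type = arg.split(":")
  { name: name, type: type || "string" }  # type defaults to string if omitted
end

parse_attribute("name:string")  # => {name: "name", type: "string"}
parse_attribute("string:name")  # => {name: "string", type: "name"} - wrong way round
```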


ActiveRecord::ValueTooLong with Ruby Gem Mail

When using the Mail gem to receive an email, I sometimes get the following error:
ActiveRecord::ValueTooLong
The part that is causing it is curr_mail.body.decoded.
How can I get this running on MySQL?
When I set a maximum size on the body, everything works fine:
curr_mail.body.decoded[0..5000]
emails.each do |curr_mail|
  Email.create subject: curr_mail.subject, content: curr_mail.body.decoded,
               from: curr_mail.from.first, to: curr_mail.to.first, date: curr_mail.date,
               messageId: curr_mail.message_id
end
The exception ActiveRecord::ValueTooLong already tells you what is wrong: the content column in the emails table is too short for a decoded email body.
Changing the content column's type to text, or setting content's length to a larger number, will solve this problem.
Run rails g migration change_content_of_emails_to_text to generate a migration file.
You can write something like this in the generated migration file:
class ChangeContentOfEmailsToText < ActiveRecord::Migration[5.2]
  def change
    change_column :emails, :content, :text
  end
end
Then run the bundle exec rake db:migrate command.
Edit:
I just realized this might not be a Rails specific question.
If this is a non-Rails project, the idea is the same.
Go to mysql console and run:
ALTER TABLE emails MODIFY content TEXT;
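If altering the schema isn't an option, a fallback is to truncate the decoded body before saving. A minimal sketch, assuming a MySQL TEXT column (65,535 is TEXT's byte ceiling; adjust for your actual column, and the helper name is illustrative):

```ruby
# Truncate a string to at most max_bytes bytes; scrub("") drops any
# partial multibyte character left at the cut point.
MAX_CONTENT_BYTES = 65_535  # MySQL TEXT limit

def truncate_content(str, max_bytes = MAX_CONTENT_BYTES)
  str.byteslice(0, max_bytes).scrub("")
end

truncate_content("a" * 100_000).bytesize  # => 65535
```

You would then create the record with content: truncate_content(curr_mail.body.decoded) instead of the raw decoded body.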

Delete record from console in Hanami

In Rails you can do:
rails c
record = Record.where(name: 'Test Record').first
record.destroy
How can you do the same in Hanami? I've been reading through the docs, but I'm struggling to see how to run console commands as in Rails to interact with the database objects.
You can do:
$ hanami c
UserRepository.new.users.where(name: "Test Record").delete
When a class inherits from Hanami::Repository:
delete(id) – deletes the record corresponding to the given id
In Hanami, use delete instead of destroy.
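The underlying difference is the pattern: Rails' Active Record puts destroy on the record itself, while Hanami's repository pattern puts delete(id) on the repository. A framework-free, in-memory sketch of that repository shape (the class and method names here are illustrative, not Hanami's implementation):

```ruby
# Minimal in-memory illustration of the repository pattern Hanami follows:
# the repository, not the record, owns persistence operations like delete(id).
class UserRepository
  def initialize(records = [])
    @records = records
  end

  def first_by_name(name)
    @records.find { |r| r[:name] == name }
  end

  def delete(id)
    @records.reject! { |r| r[:id] == id }
    nil
  end

  def count
    @records.size
  end
end

repo = UserRepository.new([{ id: 1, name: "Test Record" }, { id: 2, name: "Other" }])
record = repo.first_by_name("Test Record")
repo.delete(record[:id])
repo.count  # => 1
```

This is why the console session above goes through UserRepository.new rather than calling destroy on a record.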

How to use Bigquery streaming insertall on Ruby Rails

EDIT: Fixed - for Ruby, use "insert_all" instead of the "insertAll" that the API reference specifies. The API docs for Ruby need updating.
I'm using v0.6.4 of the google-api-client gem and trying to create a streaming insert, but keep getting the following error:
google_bigquery.rb:233:in undefined method `insertAll' for #<Google::APIClient::Resource:0xcbc974 NAME:tabledata> (NoMethodError)
My code is as follows:
def streaming_insert_data_in_table(table, dataset=DATASET)
  body = {"rows"=>[
    {"json"=> {"person_id"=>1, "name"=>"john"}},
    {"json"=> {"person_id"=>2, "name"=>"doe"}},
  ]}
  result = @client.execute(
    :api_method=> @bigquery.tabledata.insert_all,
    :parameters=> {
      :projectId=> @project_id.to_s,
      :datasetId=> dataset,
      :tableId=> table},
    :body_object=> body,
  )
  puts result.body
end
Could someone tell me if insertAll has been implemented in the google-api-client gem? I have tried 'insert', as that is what table, dataset, etc. use, and I get the same error. I can, however, run tabledata.list perfectly fine. I've tried digging through the gem source code and didn't get anywhere with that.
Is the body object that I created correct, or do I need to alter it?
Any help is much appreciated.
Thanks in advance and have a great day.
OK, so I fixed it and updated the code in the question. For Ruby, the method is called "insert_all". Also note that the table and schema must be created BEFORE the insert_all. This is different from the "jobs.insert" method, which will create the table if it doesn't exist.
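The request body itself is plain Ruby data, so it can be built with a small helper; a sketch (row contents and the helper name are illustrative) that wraps each record hash in the "json" key the tabledata streaming-insert format expects:

```ruby
# Build the streaming-insert body: each row hash is wrapped under "json",
# and the whole set goes under "rows" (per the tabledata insert format).
def streaming_body(rows)
  { "rows" => rows.map { |row| { "json" => row } } }
end

streaming_body([
  { "person_id" => 1, "name" => "john" },
  { "person_id" => 2, "name" => "doe" },
])
# => {"rows"=>[{"json"=>{"person_id"=>1, "name"=>"john"}},
#              {"json"=>{"person_id"=>2, "name"=>"doe"}}]}
```

The result can then be passed as the :body_object in the execute call above.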

How to get the journal/notes for a Redmine issue in Ruby?

I am using the redmine_client (0.0.1) gem to look up issues in Redmine, but I get an error when I do something like this:
puts issue.journals.inspect # error - undefined method `journals' for #
I am following this example: https://gist.github.com/552610
Any idea what might be the problem? All the other fields work fine except for journal.
Journals is an association to an Array of Journal objects. I don't know if I've added Journal support to that gem yet (I don't use it much anymore). Try forking the gem and adding a new journal.rb class like issue.rb.

Runtime changing model with mongodb/mongoid

I have to add several fields to a Mongoid model. I know there are no migrations with MongoDB, but if I go on without dropping the DB and making Rails "regenerate" it entirely, it doesn't display or use the new fields at all!
What's the best way to go here? Is there something softer than dropping and recreating the MongoDB database?
Thanks in advance
luca
In general it should be possible to update old documents with the new fields at runtime. There is no need for migrations in MongoDB.
You may want to write rake tasks to update your old documents with the new fields and default values.
You can find these documents by checking the new fields, which by default have a nil value.
Update
Easy style:
If you define a new field with a default value, this value will be used until you set a different one:
app/models/my_model.rb
class MyModel
  include Mongoid::Document
  field :name, type: String
  field :data, type: String
  # NEW FIELD
  field :note, type: String, default: "no note given so far!"
end
If you query your database, you should get your default value for documents which didn't have this field before your extension:
(rails console)
MyModel.first
#=> #<MyModel …other fields…, note: "no note given so far!">
I tested this with a fresh Rails stack and a current Mongoid on Ruby 1.9.2 - it should work with other stacks, too.
More complicated/complex style:
If you didn't set a default value, you'll get nil for this new field.
app/models/my_model.rb
class MyModel
  include Mongoid::Document
  field :name, type: String
  field :data, type: String
  # NEW FIELD
  field :note, type: String
end
(rails console)
MyModel.first
#=> #<MyModel …other fields…, note: nil>
Then you could set up a rake task and migration file like in this example:
lib/tasks/my_model_migration.rake:
namespace :mymodel do
desc "MyModel migration task"
task :migrate => :environment do
require "./db/migrate.rb"
end
end
db/migrate.rb:
olds = MyModel.where(note: nil)
# Enumerator of documents without a valid :note field (= nil)
olds.each do |doc|
  doc.note = "(migration) no note given yet"
  # or whatever your desired default value should be
  doc.save! rescue puts "Could not modify doc #{doc.id}/#{doc.name}"
  # the rescue is only a failsafe statement if something goes wrong
end
Run this migration with rake mymodel:migrate.
This is only a starting point and you can extend this to a full mongoid migration engine.
The task :migrate => :environment do … is necessary, otherwise rake won't load models.
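The backfill logic at the heart of that migration can be tested outside MongoDB; a plain-Ruby sketch of the same idea (hashes stand in for documents here, an assumption for illustration):

```ruby
# Fill a default into every document missing the field (nil), leaving
# documents that already have a value untouched - the core of the rake task.
def backfill(docs, field, default)
  docs.each { |doc| doc[field] = default if doc[field].nil? }
end

docs = [{ name: "a", note: nil }, { name: "b", note: "kept" }]
backfill(docs, :note, "(migration) no note given yet")
docs.map { |d| d[:note] }
# => ["(migration) no note given yet", "kept"]
```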
It is a little ridiculous to say that you don't need migrations with MongoDB or Mongoid. Any sophisticated app needs to be refactored from time to time, and that can mean pulling fields out of disparate documents into a new one.
Writing one-off rake tasks is far less convenient and more error-prone than having migrations be part of your deploy script, so that they always run in every environment.
https://github.com/adacosta/mongoid_rails_migrations brings AR style migrations to mongoid.
You might need them less often, but you will certainly need them as an app grows.
Here is a nice code example of a data migration script using Mongoid and the Ruby Mongo driver, to be used when your updated model no longer matches production data:
http://pivotallabs.com/users/lee/blog/articles/1548-mongoid-migrations-using-the-mongo-driver
I wish we would stop using "no migrations with Mongoid" as a slogan. It turns people to MongoDB for the wrong reasons, and it's only partially true. No schema, true, but data still needs to be maintained, which IMO is harder with MongoDB than with RDBMSs. There are other, great reasons for choosing MongoDB, and it depends on your problem.
