I have coded an app in Ruby which queries the YouTube website for keywords taken from a specific DB. After the YouTube query is done, the resulting videos' IDs and titles are inserted into two different tables. When I run the code I get this exception:
C:/RailsInstaller/Ruby1.9.3/lib/ruby/gems/1.9.1/gems/sqlite3-1.3.8-x86-mingw32/lib/sqlite3/database.rb:91:in `initialize': near "'tag:youtube.com,2008:video:3taEuL4EHAg'": syntax error (SQLite3::SQLException)
Is this error caused by invalid characters in my queries? Should the SQLite gem that I used handle these?
If you need the code here it is:
require "sqlite3"
require "youtube_it"
database = SQLite3::Database.new("data.db")
client = YouTubeIt::Client.new(:dev_key =>"myyoutubekey")
result = database.query("SELECT `keyphrase` FROM `keyphrases`")
result.each do |array|
  array.each do |result|
    results = client.videos_by(:query => "#{result}", :page => 1, :per_page => 10)
    results.videos.each do |videoone|
      database.query("INSERT OR IGNORE INTO `videos` (videoID,cachedTitle) VALUES ('#{videoone.video_id}','videoone.title')")
      rezultat = database.query("SELECT `id` FROM `keyphrases` WHERE `keyphrase` = '#{result}' ")
      rezultat.each do |n|
        id = n.to_s.delete("[").delete("]").to_i
        database.query("INSERT INTO `keyphrase2videos` (keyphraseID,videoID) VALUES ('#{id}','#{videoone.video_id}'")
      end
    end
  end
end
p "Recored Entered"
I was missing a closing ")" in the VALUES clause of the second INSERT statement.
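Beyond the missing parenthesis, two things are worth noting: 'videoone.title' in the first INSERT is a quoted literal rather than an interpolation, and the quoting worry in the question (invalid characters in titles and the like) can be sidestepped entirely with bound parameters. A minimal sketch of the two INSERTs rewritten that way, assuming the same videos and keyphrase2videos tables:
# Sketch only: bound parameters let sqlite3 handle quoting/escaping of the
# video id and title values, so no manual interpolation is needed.
database.execute("INSERT OR IGNORE INTO videos (videoID, cachedTitle) VALUES (?, ?)",
                 [videoone.video_id, videoone.title])
database.execute("INSERT INTO keyphrase2videos (keyphraseID, videoID) VALUES (?, ?)",
                 [id, videoone.video_id])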
How do I display the result set of a Postgres query using only the pg gem? I want it in tabular form, like the pgAdmin GUI tool shows it. The code below does not help, and the documentation is not clear, so I am unable to figure out another way. Please help!
require 'pg'
conn = PGconn.connect("db.corp.com", 5432, '', '', "schema", "user", "pass")
sql = 'select * from tbl limit 2'
res = conn.exec(sql)
res.each do |row|
  row.each do |column|
  end
end
gem list -
pg (0.9.0.pre156 x86-mswin32)
ruby - 1.8.7
Steps -
1. Get the list of column names in the result set (type PGResult).
2. Iterate over each row (hash) of the result set.
3. For each row, look up the value (hash value) of every column found in step 1.
Then print the results as CSV. I don't think this is efficient, but it gets the job done.
require 'pg'
conn = PGconn.connect("db.corp.com", 5432, '', '', "schema", "user", "pass")
sql = 'select * from tbl limit 2'
res = conn.exec(sql)
rows_count = res.num_tuples
column_names = res.fields
col_header = column_names.join(', ')
puts col_header
for i in 0..rows_count-1
  row_hash = res[i]
  row_arr = []
  column_names.each do |col|
    row_arr << row_hash[col]
  end
  row = row_arr.join(', ')
  puts row
end
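If you want real CSV quoting rather than a plain join, the standard csv library can do the escaping for you; a minimal sketch reusing res and column_names from above (CSV.generate_line handles embedded commas and quotes):
require 'csv'
# header row, then one properly quoted line per result row
puts CSV.generate_line(column_names)
res.each do |row_hash|
  puts CSV.generate_line(column_names.map { |col| row_hash[col] })
end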
I would like to use the Google bigquery gem (https://rubygems.org/gems/bigquery) to create an Array of table names. So far, this is what I have written:
require 'json'
bqRepsonse = bq.tables('myDataSet')
bqRepsonseCleaned = bqRepsonse.to_s.gsub("=>", ":")
data = JSON.parse(bqRepsonseCleaned)
tableListing = []
data["tableID"]["type"].each do |item|
case item["type"]
when 'TABLE'
bqTableList << item["tableId"]
else
end
end
If I print bqResponse, I get this result:
[{"kind"=>"bigquery#table",
"id"=>"curious-idea-532:dataset_test_4.TableA",
"tableReference"=>{"projectId"=>"curious-idea-532",
"datasetId"=>"dataset_test_4", "tableId"=>"TableA"}, "type"=>"TABLE"},
{"kind"=>"bigquery#table",
"id"=>"curious-idea-532:dataset_test_4.TableB",
"tableReference"=>{"projectId"=>"curious-idea-532",
"datasetId"=>"dataset_test_4", "tableId"=>"TableB"}, "type"=>"TABLE"},
{"kind"=>"bigquery#table",
"id"=>"curious-idea-532:dataset_test_4.TableC",
"tableReference"=>{"projectId"=>"curious-idea-532",
"datasetId"=>"dataset_test_4", "tableId"=>"TableC"}, "type"=>"TABLE"},
{"kind"=>"bigquery#table",
"id"=>"curious-idea-532:dataset_test_4.TableD",
"tableReference"=>{"projectId"=>"curious-idea-532",
"datasetId"=>"dataset_test_4", "tableId"=>"TableD"}, "type"=>"TABLE"}]
And running the code throws an error:
`[]': no implicit conversion of String into Integer (TypeError)
Not sure where to correct this. My desired outcome is:
tableListing = ["TableA","TableB","TableC","TableD"]
Thanks in advance for your advice.
Try this:
require 'json'
string = '[{"kind": "bigquery#table", "id": "curious-idea-532:dataset_test_4.TableA", "tableReference" : {"projectId":"curious-idea-532", "datasetId":"dataset_test_4", "tableId":"TableA"}, "type":"TABLE"}, {"kind":"bigquery#table", "id":"curious-idea-532:dataset_test_4.TableB", "tableReference":{"projectId":"curious-idea-532", "datasetId":"dataset_test_4", "tableId":"TableB"}, "type":"TABLE"}, {"kind":"bigquery#table", "id":"curious-idea-532:dataset_test_4.TableC", "tableReference":{"projectId":"curious-idea-532", "datasetId":"dataset_test_4", "tableId":"TableC"}, "type":"TABLE"}, {"kind":"bigquery#table", "id":"curious-idea-532:dataset_test_4.TableD", "tableReference":{"projectId":"curious-idea-532", "datasetId":"dataset_test_4", "tableId":"TableD"}, "type":"TABLE"}]'
data = JSON.parse(string)
tableListing = []
# Here we are iterating over the data instead of its child element
data.each do |item|
  case item["type"]
  when 'TABLE'
    tableListing << item["tableReference"]["tableId"]
  else
  end
end
puts tableListing
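As a side note, if bq.tables already returns an array of hashes like the one printed in the question, the to_s/gsub/JSON.parse round trip shouldn't be necessary at all; a sketch working on the response directly, assuming that structure:
# Hedged sketch: filter the TABLE entries and pull out each tableId.
tableListing = bqRepsonse.
  select { |item| item["type"] == "TABLE" }.
  map    { |item| item["tableReference"]["tableId"] }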
I am trying to write a rake task for importing a CSV file into multiple models. The code compiles without error, but I get this error message when I attempt to run it:
rake aborted!
NameError: undefined local variable or method `csv' for main:Object
/Users/rickcasey/Projects/Programming/wfrails/lib/tasks/import_partial.rake:28:in `block in '
Here is the script:
desc "Imports the CSV file "
task :import_partial => :environment do
require 'csv'
csv.foreach('public/partial.csv', :headers => true) do |row|
# create records in independent tables
# create the Company object
this_company_name = row.to_hash.slice(*%w[county_name])
if !(Company.exists?(company_name: this_company_name))
Companies.create(row.to_hash.slice(*%w[company_name operator_num]))
end
thecompany = Company.find(this_company_name)
company_id = thecompany.id
# create the County object
this_county_name = row.to_hash.slice(*%w[county])
if !(County.exists?(county_name: this_county_name))
Counties.create(county_name: this_county_name)
end
thecounty = County.find(this_county_name)
county_id = thecounty.id
# create the GasType object
this_gastype_name = row.to_hash.slice(*%w[gas_type])
if !(GasType.exists?(gastype_name: this_gastype_name))
GasType.create(gastype_name: this_gastype_name)
end
thegastype = GasType.find(this_gastype_name)
gastype_id = thegastype.id
# create the Field object
this_field_name = row.to_hash.slice(*%w[field])
if !(Field.exists?(field_name: this_field_name))
Field.create(field_name: this_field_name, field_code: field_code)
end
thefield = Field.find(this_field_name)
field_id = thefield.id
# create the Formations object
this_formation_name = row.to_hash.slice(*%w[formation])
if !(Formation.exists?(formation_name: this_formation_name))
Counties.create(formation: this_formation_name, formation_code: formation_code)
end
theformation = Formation.find(this_formation_name)
formation_id = theformation.id
# debugging:
puts "company_id:", company_id
puts "county_id:", county_id
puts "gastype_id:", gastype_id
puts "field_id:", field_id
puts "formation_id:", formation_id
# create records in dependent tables:
# Use the record id's from above independent table create records containing foreign keys:
#Facilities.create(row.to_hash.slice(*%w[dir_e_w dir_n_s dist_e_w dist_n_s facility_name facility_num ground_elev lat long meridian qtrqtr range sec twp utm_x utm_y])
#Wells.create(row.to_hash.slice(*%w[api_county_code api_seq_num first_prod_date form_status_date formation_status sidetrack_num spud_date status_date td_date test_date wbmeasdepth wbtvd well_bore_status well_name])
end
end
My environment is: ruby 2.1.2p95, Rails 4.1.1
This is quite unclear to me, and I have not yet found an example of a similar error with an answer I understand. Any help much appreciated!
I believe the error is in this line
csv.foreach('public/partial.csv', :headers => true) do |row|
It should be
CSV.foreach('public/partial.csv', :headers => true) do |row|
The class name is uppercase, so it's CSV.foreach, not csv.foreach.
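As a quick sanity check, here is a minimal sketch of CSV.foreach with :headers => true; the column name used below is only an assumption about your file:
require 'csv'

CSV.foreach('public/partial.csv', :headers => true) do |row|
  # with :headers => true each row can be read like a hash keyed by header name
  puts row['company_name']     # assumes partial.csv has a company_name column
  puts row.to_hash.inspect
end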
I am trying to follow a tutorial on big data; it reads data from a keyspace defined with cqlsh.
This piece of code runs successfully:
require 'rubygems'
require 'cassandra'
db = Cassandra.new('big_data', '127.0.0.1:9160')
# get a specific user's tags
row = db.get(:user_tags,"paul")
###
def tag_counts_from_row(row)
  tags = {}
  row.each_pair do |pair|
    column, tag_count = pair
    #tag_name = column.parts.first
    tag_name = column
    tags[tag_name] = tag_count
  end
  tags
end
###
# insert a new user
db.add(:user_tags, "todd", 3, "postgres")
db.add(:user_tags, "lili", 4, "win")
tags = tag_counts_from_row(row)
puts "paul - #{tags.inspect}"
but when I write this part to output everyone's tags I get an error.
user_ids = []
db.get_range(:user_tags, :batch_size => 10000) do |id|
  # user_ids << id
end
rows_with_ids = db.multi_get(:user_tags, user_ids)
rows_with_ids.each do |row_with_id|
  name, row = row_with_id
  tags = tag_counts_from_row(row)
  puts "#{name} - #{tags.inspect}"
end
The error is:
line 33: warning: multiple values for a block parameter (2 for 1)
I think the error may have come from incompatible versions of Cassandra and Ruby. How can I fix it?
It's a little hard to tell which line is 33, but it looks like the problem is that get_range yields two values, and your block only takes the first one. If you only care about the row keys and not the columns, you should use get_range_keys instead.
It looks like you do in fact care about the column values because you fetch them out again using db.multi_get. This is an unnecessary additional query. You can update your code to something like:
db.get_range(:user_tags, :batch_size => 10000) do |id, columns|
  tags = tag_counts_from_row(columns)
  puts "#{id} - #{tags.inspect}"
end
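And if you really only needed the row keys (the get_range_keys case mentioned above), a hedged sketch, assuming your gem version's get_range_keys returns the keys for the same options as get_range:
# Assumed API: get_range_keys returns just the row keys, so no block or
# separate collection loop is needed.
user_ids = db.get_range_keys(:user_tags, :batch_size => 10000)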
I am using ruby-dbi to access an MS SQL database. The problem is that whenever I select more than one row from the DB, the result contains the correct number of items, but all of them are the same when they shouldn't be:
irb(main):001:0> require 'dbi'
=> true
irb(main):010:0> db=DBI.connect('dbi:ODBC:dataSource', 'userName', '****')
=> #<DBI::DatabaseHandle:0xff3df8 @handle=#<DBI::DBD::ODBC::Database:0xff3e88 @handle=#<ODBC::Database:0xff3f30>, @attr={}>, @trace_output=nil, @trace_mode=nil, @convert_types=true, @driver_name="odbc">
irb(main):009:0> db.select_all('select distinct top 10 id from rawdata')
=> [[308], [308], [308], [308], [308], [308], [308], [308], [308], [308]]
The problem seems to be the same as the one discussed here, but the solution proposed there (using an alias) didn't work for me (or maybe I misunderstood it).
How can I fix this?
I'm using DBI 0.4.5, and Ruby 1.9.2 on Windows.
That looks kind of strange, because select_all is supposed to return DBI::Row objects. Try:
rows = db.select_all('select distinct top 10 id from rawdata')
rows.each do |row|
  printf "ID: %d\n", row["id"]
end
I can only recommend: go for TinyTds
https://github.com/rails-sqlserver/tiny_tds
It's:
- easier to install and configure
- faster
- more stable
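A minimal sketch of the same query with TinyTds (host, credentials, and database name below are placeholders):
require 'tiny_tds'

client = TinyTds::Client.new(:username => 'userName', :password => '****',
                             :host => 'dbhost', :database => 'mydb')
result = client.execute('select distinct top 10 id from rawdata')
result.each do |row|
  puts row['id']   # each row comes back as a hash keyed by column name
end
client.close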
In the end, after realizing (at least partially) what the post I linked in the question was talking about, I modified the file row.rb in the DBI source code:
I removed the code
if RUBY_VERSION =~ /^1\.9/
  def __getobj__
    @arr
  end

  def __setobj__(obj)
    @delegate_dc_obj = @arr = obj
  end
else
and the accompanying end, and I also removed the inheritance: < DelegateClass(Array).
I had the same problem on a MS-SQL database with ruby 1.9.2p180 (2011-02-18)
This is how I solved it:
def myDBIexecute(dbhash, query)
  begin
    # open the connection
    conn = DBI.connect('DBI:ODBC:' + dbhash['datasource'].to_s, dbhash['username'].to_s, dbhash['password'].to_s)
    sth = conn.prepare(query)
    sth.execute()
    outputme = []
    while row = sth.fetch
      mrow = {}
      sth.column_names.each { |aname|
        mrow[aname] = row[aname].to_s
      }
      outputme << mrow
    end
    sth.finish
    return outputme
  rescue DBI::DatabaseError => e
    puts "Error code: #{e.err}"
    puts "Error message: #{e.errstr}"
  ensure
    # disconnect from server
    conn.disconnect if conn
  end
end
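Hypothetical usage of the helper above, with placeholder connection details matching the question:
rows = myDBIexecute(
  { 'datasource' => 'dataSource', 'username' => 'userName', 'password' => '****' },
  'select distinct top 10 id from rawdata'
)
rows.each { |r| puts r['id'] }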