Calling the column_name in the table while connecting via Oracle DB - Ruby

I am trying to get a value from the table (EMPLOYEE) by connecting through
the Oracle database. Since there are hundreds of values in one column, I need to iterate over the table to get the exact value.
I have code that works if I use the index number, such as row[1], but I
want to use the column name "first name" instead of row[1]. Below is
the code I have, which works.
Code:
def load_borrower
  connection = OCI8.new('usrname', 'pwd', '//host:portno/sid')
  connection.exec("SELECT BI_PREFIX, BI_FNAME, BI_MNAME, BI_LNAME, B.BI_SUFFIX, BI_ID_TYPE, BI_ID_NUMBER, BI_DOB, B1.*, R.*, M.*, C.*, L.* FROM EMPLOYEE, SC_BORROWERPREF_NEW S1, BORROWER_NEW B, BORROWERPREF_NEW B1, RES_ADD R, MAIL_ADD M, CLOS_ADD C, LLORD_ADD L WHERE S.SCENARIO_ID = S1.SCENARIO_ID AND S1.PREF_ID = B1.PREF_ID AND B1.BORROWER_ID = B.BORROWER_ID AND B1.PREF_ID = R.RES_PREF_ID AND B1.PREF_ID = M.MAIL_PREF_ID AND B1.PREF_ID = C.CLOS_PREF_ID AND B1.PREF_ID = L.LLORD_PREF_ID AND S.RELEASE_ID = '1' AND S.SCENARIO_NO = '2' ORDER BY S1.SC_BORROWERPREF_ID") do |row|
    $BI_PREFIX = row[0].to_s
    $BI_FNAME = row[1].to_s
    $BI_MNAME = row[2].to_s
    $BI_LNAME = row[3].to_s
    $BI_SUFFIX = row[4].to_s
    $BI_BI_ID_TYPE = row[5].to_s
    $BI_BI_ID_NUMBER = row[6].to_s
    $BI_DOB = row[7].to_s
    $BI_EMAIL = row[9].to_s
    $BI_CELL_PH = row[11].to_s
    $BI_WORK_PH = row[12].to_s
    $BI_PREF_CONT = row[13].to_s
    $BI_MAR_STATUS = row[16].to_s
    $BI_EMP_STATUS = row[23].to_s
    $BI_EDUC_YEARS = row[17].to_s
    $BI_NUM_DEPEND = row[21].to_s
  end
end
Now I'm calling the function defined above:
load_borrower
The code above works fine right now. But as you can see, I am reading the values from the db table as row[5], row[24], and so on, which is tedious and time consuming even though it works. So I was wondering if there is a method or command that lets me use the column name to get the value from the row, such as row['Emp_id'], instead of having to find out the index of every column.
I am not sure if this is a drawback of Ruby, since it treats the table from the db as an array, and maybe that's why we can't specify a column name.

Firstly, it appears you are a bit confused by the boundaries and separations between the various bits of technology you are using. There is no Watir in the code you provided, NONE. It's all pure Ruby and a tiny bit of stuff from the OCI8 gem. A gem is a standard way that Ruby folks use to distribute code libraries and programs written in the Ruby language. See HERE for more info to better understand what a gem is and how they are used.
Watir is another Ruby gem that is for driving web browsers, and you might be using it elsewhere in your code, but it doesn't relate to this question or OCI8 other than both of them being Ruby code libraries distributed as gems. So let's leave it aside so as to not confuse things.
The behavior you are seeing is how the OCI8 gem works, NOT anything to do with Ruby specifically. If you want something more elegant, then look into different gems that have been created for doing db access with Ruby, for example ActiveRecord, which was suggested in another answer already. The OCI8 Gem only returns an array if you have the results feeding into a block like you do in your current code. Otherwise the results are in an object called a Cursor, and you can use the cursor's fetch_hash method to get fetched data as a Hash. The hash keys are column names. (see http://ruby-oci8.rubyforge.org/en/api_OCI8Cursor.html)
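For example, a minimal untested sketch along those lines (reusing the connection details and a couple of the columns from the question):
connection = OCI8.new('usrname', 'pwd', '//host:portno/sid')
cursor = connection.exec("SELECT BI_PREFIX, BI_FNAME FROM EMPLOYEE")
# fetch_hash returns each row as a Hash keyed by column name, or nil when there are no more rows
while row = cursor.fetch_hash
  puts row['BI_FNAME']
end
cursor.close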
Allow me to strongly recommend that you spend a little time learning a bit more about the Ruby language before you tear much further into your current project. Given the nature of the coding you seem to be doing, I'd advise you to read Brian Marick's book "Everyday Scripting with Ruby"; that's going to give you a much better understanding of the technology you are using, and you'll understand better when we toss around terms like 'hash' as I just did.
If you will allow a bit of general advice on how you are interfacing with your database: IMHO, you should be taking advantage of the db by constructing a query that returns JUST the data you want, instead of grabbing huge amounts of data and trying to parse through it manually. It's a better use of the resource, uses less memory, takes less time to transfer the info from the db, and no matter how good your parsing code might be, it won't be as good as what the Oracle people wrote. Let the db do the heavy lifting; that's what it's there for.
If what you are dealing with here is data to drive your testing, or validate results, rather than construct one huge monolithic array, I'd recommend you use a much more modular approach. Use one global variable such as the EMP_ID of the current user you are testing with or against, and have the test code get query results for just the values needed for each validation, or a small logical group of validations like the parts of an address. It's a lot easier to build up stuff that way on a case by case basis working as you go, instead of trying to write the whole data retrieval bit in one giant piece that will be a nightmare to maintain.
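As a rough, untested illustration of that modular approach (the helper name is made up; the table and column come from the join in the question's query):
def fetch_res_address(connection, pref_id)
  # Pull back only the residential-address row for one borrower preference
  cursor = connection.exec("SELECT R.* FROM RES_ADD R WHERE R.RES_PREF_ID = :1", pref_id)
  row = cursor.fetch_hash
  cursor.close
  row
end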
As it stands, all your test code that is verifying function or validating how the site works is going to be tightly coupled to a big monolithic piece that fetches the data from the db. That creates a lot of dependencies and makes your test code hard to maintain. If you deal with things in a more modular way, where each validation step retrieves just the data it needs, then it's a lot easier to expand or modify your test code as the site or database changes.

If you had an array containing the column names then you could zip it up with the row array and build a hash:
Hash[column_names.zip( row )]
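In the OCI8 case, the cursor can supply those column names; a minimal untested sketch:
cursor = connection.exec("SELECT BI_PREFIX, BI_FNAME FROM EMPLOYEE")
column_names = cursor.get_col_names
while row = cursor.fetch
  # Pair each column name with its value for this row
  row_hash = Hash[column_names.zip(row)]
  puts row_hash['BI_FNAME']
end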
I would recommend using ActiveRecord for this, though.

This should work:
connection = OCI8.new('usrname', 'pwd', '//host:portno/sid')
cursor = connection.exec("SELECT BI_PREFIX ...")
cols = cursor.get_col_names
while r = cursor.fetch
  $BI_PREFIX = r[cols.index('BI_PREFIX')].to_s
  ...
end

Related

Accessing and scraping sporadically available Wikipedia sections

I need to fetch some data but I'm completely stumped after trying a few things.
I want to access the Airlines & Destinations section from the Albuquerque_International_Sunport wiki page - keep in mind, I'll be going through a prepopulated list of airports for this data.
There are multiple "types" of airlines: Passenger, Cargo, and sometimes there are other (sub?)sections; other times there are none.
Articles for multiple airports will be accessed automatically - including some less known airports. This means I need to:
Check if "Airlines & Destinations" section exists
Take all data inside of any table
Scrape it; otherwise do nothing
I've tried using the Ruby wikipedia-client gem; however, the .raw_data method isn't even returning the section data.
Next, I went to Wikipedia's API: unless I am mistaken, it doesn't return "section" names! This doesn't seem right, but I wasn't able to get it working.
So I suppose that leaves Nokogiri. I can grab and parse the pages fine, but:
How would I go about detecting the presence of the "Airlines & Destinations" section and getting all table data BEFORE the end of the section? I have a suspicion I need some tricky XPath for this.
Seems to be the only viable solution.
Any thoughts welcome. Putting a bounty on this question when I can.
Edit: Perhaps it's better to simply grab a list of all airlines in the world somehow and match them against the HTML? Seems like it could be computationally expensive.
Well, I'm not an expert user of Nokogiri but maybe this can give you some idea.
require 'nokogiri'
require 'open-uri'

page = Nokogiri::HTML(open("https://en.wikipedia.org/wiki/Albuquerque_International_Sunport"))

# this is the passenger table
page.xpath('//*[@id="mw-content-text"]/div/table[2]/tr').each do |tr|
  p tr.text
  puts "-" * 50
end

# this is the cargo table
page.xpath('//*[@id="mw-content-text"]/div/table[3]/tr').each do |tr|
  p tr.text
  puts "-" * 50
end
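For the section-detection part of the question, here is a rough, untested sketch. Wikipedia section headings carry an id derived from the title, so "Airlines_and_destinations" is an assumption and should be checked against the live page; the idea is to walk forward from that heading and collect tables until the next h2 starts another section.
require 'nokogiri'
require 'open-uri'

doc = Nokogiri::HTML(open("https://en.wikipedia.org/wiki/Albuquerque_International_Sunport"))

# The heading id is an assumption -- verify it on the actual article.
heading = doc.at_css('span#Airlines_and_destinations')

if heading
  node = heading.parent   # the h2 element wrapping the heading span
  tables = []
  # Collect every table until the next h2, which marks a new section.
  while (node = node.next_element)
    break if node.name == 'h2'
    tables << node if node.name == 'table'
  end
  tables.each do |table|
    table.css('tr').each { |tr| p tr.text.strip }
  end
else
  # Section not present for this airport: do nothing, as required.
end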

Performing MongoDB's cursor.forEach() in ruby

I just started experimenting with Ruby's Sinatra a couple of days ago. I'm trying to query a MongoDB; the find_one() method works very well, but when trying to get more than one document (i.e. when using find()) a cursor is returned. I'm used to using the cursor.forEach() method to iterate through all the returned documents, but as I am new to Ruby, I am having a hard time figuring it out.
Would be great if you can point me in the right direction, also if you know of a Mongo/Ruby command dictionary or cheat sheet, I would really appreciate it.
Some code to help with the matter:
# The following code is intentionally formatted the way it is (i.e. the
# case-insensitive regex, the way I'm calling the database); all that is
# irrelevant, but it's there to show you what I'm doing; I might be
# screwing up somewhere.

# works fine, returns JSON of required document
settings.mongo_db['col'].find_one({"key" => /#{value}/i}).to_json

# returns cursor, need to iterate
settings.mongo_db['col'].find({"key" => /#{value}/i}).to_json
Your replies/thoughts are much appreciated.
Well, generally in Ruby, in order to iterate you just use .each, but since you just want to return your cursor results as JSON, turn the statement around:
JSON.generate( settings.mongo_db['col'].find({"key" => /#{value}/i}).to_a )
So that should serialize as an array of documents.
Also see other methods in the JSON package.
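If you do want forEach-style iteration rather than serializing everything at once, a minimal untested sketch using the same collection and query from the question:
# Roughly the Ruby counterpart of cursor.forEach(): each yielded document is a Hash
settings.mongo_db['col'].find({"key" => /#{value}/i}).each do |doc|
  puts doc["key"]
end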

Path to dynamic object?

I have a system_settings table which has a key and value columns. The key looks something like general.site.something.config and the value is a simple string.
I'd like to have a static class which, upon initialization, reads the settings and caches the values. Furthermore, I'd like to be able to access the settings in an OO way, such as SystemSetting.CACHE.General.Site.Something.Config in order to pull back the value for that key. Basically turning the rows in the table into a tree.
Is there an easy way to do this in Ruby 1.8.7?
TL;DR: No. No easy (read: 'built-in') way, at least.
The syntax you want is not the way things happen in Ruby (without over-plumbing, that is). To see the over-plumbing I'm referring to, have a look at the code I wrote for this example, which demonstrates some of the functionality you want. I wouldn't suggest using it, though, and that's also why I'm not posting it here.
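For illustration only (this is not the dynamic-method plumbing referred to above, and the rows hash is hypothetical): a plain nested Hash gives you most of the tree behaviour, at the cost of ['...'] access instead of dotted method calls, and works on Ruby 1.8.7.
# Hypothetical rows as they might come back from the system_settings table
rows = { 'general.site.something.config' => 'some value' }

tree = {}
rows.each do |key, value|
  parts = key.split('.')
  leaf  = parts.pop
  # Walk/create the nested hashes for each path segment
  node = parts.inject(tree) { |h, part| h[part] ||= {} }
  node[leaf] = value
end

tree['general']['site']['something']['config']  # => "some value"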

How can I scrape, parse and crawl files in Ruby?

I have a number of data files to process from a data warehouse that have the following format:
:header 1 ...
:header n
# remarks 1 ...
# remarks n
# column header 1
# column header 2
DATA ROWS
(Example: "#### ## ## ##### ######## ####### ###afp## ##e###")
The data is separated by white spaces and has both numbers and other ASCII chars. Some of those pieces of data will be split up and made more meaningful.
All of the data will go into a database, initially an SQLite db for development, and then pushed up to another, more permanent, storage.
These files will actually be pulled in via HTTP from the remote server, and I will have to crawl a bit to get some of them, as they span folders and many files.
I was hoping to get some input on what the best tools and methods may be to accomplish this the "Ruby way", as well as how to abstract some of this out. Otherwise, I'll probably tackle it similarly to how I would in Perl or the other approaches I've taken before.
I was thinking along the lines of using OpenURI to open each url, then if input is HTML collect links to crawl, otherwise process the data. I would use String.scan to break apart the file appropriately each time into a multi-dimensional array parsing each component based on the established formatting by the data provider. Upon completion, push the data into the database. Move on to next input file/uri. Rinse and repeat.
I figure I must be missing some libs that those with more experience would use to clean up and speed up this process dramatically and make the script much more flexible for reuse on other data sets.
Additionally, I will be graphing and visualizing this data as well as generating reports, so perhaps that should be considered too.
Any input on a better approach or libs to simplify this?
Your question focuses a lot on "low level" details -- parsing URLs and so on. One key aspect of the "Ruby Way" is "Don't reinvent the wheel." Leverage existing libraries. :)
My recommendation? First, leverage a crawler such as spider or anemone. Second, use Nokogiri for HTML/XML parsing. Third, store the results. I recommend this because you might do different analyses later and you don't want to throw away the hard work of your spidering.
Without knowing too much about your constraints, I would look at storing your results in MongoDB. After thinking about this, I did a quick search and found a nice tutorial, Scraping a blog with Anemone and MongoDB.
I've written probably a bajillion spiders and site analyzers and find that Ruby has some nice tools that should make this an easy process.
OpenURI makes it easy to retrieve pages.
URI.extract makes it easy to find links in pages. From the docs:
Description
Extracts URIs from a string. If block given, iterates through all matched URIs. Returns nil if block given or array with matches.
require "uri"
URI.extract("text here http://foo.example.org/bla and here mailto:test#example.com and here also.")
# => ["http://foo.example.com/bla", "mailto:test#example.com"]
Simple, untested, logic to start might look like:
require "openuri"
require "uri"
urls_to_scan = %w[
http://www.example.com/page1
http://www.example.com/page2
]
loop do
break if urls_to_scan.empty?
url = urls_to_scan.shift
html = open(url).read
# you probably want to do something to make sure the URLs are not
# pointing outside the site you're walking.
#
# Something like:
#
# URI.extract(html).select{ |u| u[%r{^http://www\.example\.com}i] }
#
new_urls = URI.extract(html)
if (new_urls.any?)
urls_to_scan += new_urls
else
; # parse your file as data using the content in html
end
end
Unless you own the site you're crawling, you want to be kind and gentle: don't run as fast as possible, because it's not your pipe. Pay attention to the site's robots.txt file or risk being banned.
There are true web-crawler gems for Ruby, but the basic task is so simple I never bother with them. If you want to check out other alternatives, visit some of the links to the right for other questions on SO that touch on this subject.
If you need more power or flexibility, the Nokogiri gem makes short work of parsing HTML, allowing you to use CSS accessors to search for tags of interest. There are some pretty powerful gems for making it easy to grab pages such as typhoeus.
Finally, while ActiveRecord, which is recommended in some comments, is nice, finding documentation for using it outside of Rails can be difficult or confusing. I recommend using Sequel. It is a great ORM, very flexible, and well documented.
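As a small, untested illustration of the Sequel suggestion (the database file, table, and column names are made up for the example, not taken from your data format):
require 'sequel'

# Development store as mentioned in the question: a local SQLite file
DB = Sequel.sqlite('warehouse_dev.db')   # file name is hypothetical

DB.create_table? :records do
  primary_key :id
  String :source_url
  String :raw_line
end

# Insert one parsed row; in practice this would run inside the parsing loop
DB[:records].insert(:source_url => 'http://www.example.com/page1',
                    :raw_line   => 'first parsed line')

puts DB[:records].count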
Hi, I would start by taking a very close look at the gem called Mechanize before firing up any basic open-uri stuff, because open-uri's functionality is built into Mechanize. It's a brilliant, fast, and easy-to-use gem for automating web crawling. Since your data format is pretty strange (at least compared to JSON, XML or HTML), I don't think you will get any use out of the built-in parser (it's called Nokogiri and is extremely smart as well), but you could still take a look at it. In the end, after crawling and fetching the resources, you will probably have to go with some good old regular expression stuff.
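A minimal, untested sketch of the Mechanize approach (the URLs are placeholders):
require 'mechanize'

agent = Mechanize.new
page = agent.get('http://www.example.com/page1')

# Follow-up candidates: every link on the page (filter to stay on-site in practice)
page.links.each do |link|
  puts link.href
end

# For the non-HTML data files, the raw body is available for regex/scan parsing
raw = agent.get('http://www.example.com/data/file1.txt').body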
Good luck!

LINQ Conflict Detection: Setting UpdateCheck attribute

I've been reading up on LINQ lately to start implementing it, and there's a particular thing as to how it generates UPDATE queries that bothers me.
Creating the entity code automatically using SQLMetal or the Object Relational Designer, apparently all fields for all tables will get the attribute UpdateCheck.Always, which means that for every UPDATE and DELETE query, I'll get a SQL statement like this:
UPDATE table SET a = 'a' WHERE a='x' AND b='x' ... AND z='x', ad infinitum
Now, call me a purist, but this seems EXTREMELY inefficient to me, and it feels like a bad idea anyway, even if it weren't inefficient. I know the fetch will be done by the clustered primary key, so that's not slow, but SQL still needs to check every field after that to make sure it matches.
Granted, in some very sensitive applications something like this can be useful, but for the typical web app (think Stack Overflow), it seems like UpdateCheck.WhenChanged would be a more appropriate default, and I'd personally prefer UpdateCheck.Never, since LINQ will only update the actual fields that changed, not all fields, and in most real cases the second person editing something wins anyway.
It does mean that if two people manage to edit the same field of the same row in the small window between reading that row and firing the UPDATE, then the conflict that would otherwise be detected won't be raised. But in reality that's a very rare case. The one thing we may want to guard against, two people changing the same thing, won't be caught by this, because they won't click Submit at the exact same time anyway, so there will be no conflict at the time the second DataContext reads and updates the record (unless the DataContext is left open and stored in Session when the page is shown, or some other seriously bad idea like that).
However, as rare as the case is, I'd really like not to be getting exceptions in my code every now and then when this happens.
So my first question is, am I wrong in believing this? (again, for "typical" web apps, not for banking applications)
Am I missing some reason why having UpdateCheck.Always as default is a sane idea?
My second question is, can I change this in a civilized way? Is there a way to tell SQLMetal or the ORD which UpdateCheck attribute to set?
I'm trying to avoid the situation where I have to remember to run a tool I'll have to make that'll take some regexes and edit all the attributes in the file directly, because it's evident that at some point we'll run SQLMetal after an update to the DB, we won't run this tool, and all our code will break in very subtle ways that we probably won't find while testing in dev.
Any suggestions?
War stories are more than welcome; I'd love to learn from other people's experiences with this.
Thank you very much!
Well, to answer the first question - I agree with you. I'm not a big fan of this "built in" optimistic concurrency, especially if you have timestamp columns or any fields which are not guaranteed to be the same after an update occurs.
To address the second question: I don't know of any way to override SqlMetal's default approach (UpdateCheck = Always), so we ended up writing a tool which sets UpdateCheck = Never for the appropriate columns. (We're using a batch file to call SqlMetal and afterwards run the tool.)
Oh, while I think of it - it was also a treat to find that SqlMetal models relationships to set a foreign key to null instead of "Delete On Null" (for join tables in particular). We had to use the same post-generation tool to set these appropriately too.