How can I QUICKLY get a string from one of the first couple lines of a long CSV at a remote URL? - ruby

I'm working on an assignment where I retrieve several stock prices from online, using Yahoo's stock price system. Unfortunately, the Yahoo API I'm required to use returns a .csv file that apparently contains a line for every single day that stock has been traded, which is at least 5 thousand lines for the stocks I'm working with, and over 10 thousand lines for some of them (example).
I only care about the current price, though, which is in the second line.
I'm currently doing this:
require 'open-uri'

def get_ticker_price(stock)
  open("http://ichart.finance.yahoo.com/table.csv?s=#{stock}") do |io|
    io.read.split(',')[10].to_f
  end
end
…but it's really slow.
1. Is all the delay coming from getting the file, or is there some from the way I'm handling it? Is io.read reading the entire file?
2. Is there a way to download only the first couple of lines from the Yahoo CSV file?
3. If the answers to questions 1 & 2 don't render this one irrelevant, is there a better way to process it that doesn't require looking at the entire file (assuming that's what io.read is doing)?

You can use the date-range query-string parameters to reduce the data to just the current date.
Example for MO on 7/13/2012 (the start/end month parameters are zero-indexed, 00–11):
http://ichart.finance.yahoo.com/table.csv?s=MO&a=06&b=13&c=2012&d=6&e=13&f=2012&g=d
API description here:
http://etraderzone.com/free-scripts/47-historical-quotes-yahoo.html
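For what it's worth, here is a minimal sketch combining the date-range trick with reading only the first data row, so the response stays tiny. It assumes the historical CSV columns are Date,Open,High,Low,Close,Volume,Adj Close (so Close is index 4), and io.gets will return nil on days with no trading data, so you may want to guard for that:
require 'open-uri'
require 'date'

# Sketch: request a single day's range so the CSV has only a header and one row.
# Assumes columns Date,Open,High,Low,Close,Volume,Adj Close (Close is index 4).
def get_ticker_price(stock, day = Date.today)
  params = "s=#{stock}&a=#{day.month - 1}&b=#{day.day}&c=#{day.year}" \
           "&d=#{day.month - 1}&e=#{day.day}&f=#{day.year}&g=d"
  open("http://ichart.finance.yahoo.com/table.csv?#{params}") do |io|
    io.gets                     # skip the header line
    row = io.gets               # first (and only) data row; nil on non-trading days
    row && row.split(',')[4].to_f
  end
end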

Related

Can't Group DateTime by Hour or Dump Result Apache Pig

I'm working on a project that requires me to find the temporal average (e.g. hour, day, month) for multiple datasets and then do calculations on those averages. The issue I am running into is that Apache Pig will not group by the time nor dump the DateTime values. I've tried several solutions posted here on Stack Overflow and elsewhere to no avail. I've also read over the documentation and am unable to find a solution.
Here is my code so far:
data = LOAD 'TestData' USING PigStorage(',');
t_data = foreach data generate (chararray)$0 as date, (double)$305 as w_top, (double)$306 as t_top, (double)$310 as w_mid, (double)$311 as t_mid, (double)$315 as w_bot, (double)$316 as t_bot, (double)$319 as pressure;
times = FOREACH t_data GENERATE ToDate(date,'YYYY-MM-ddThh:mm:ss.s') as (date), w_top, t_top, w_mid, t_mid, w_bot, t_bot, pressure;
grp_hourly = GROUP times by GetHour(date);
average = foreach grp_hourly generate flatten(group), times.date, AVG(times.w_top), AVG(times.t_top), AVG(times.w_mid), AVG(times.t_mid), AVG(times.w_bot), AVG(times.t_bot);
And some sample lines from the data:
2011-01-06 15:00:00.0 ,0.07225,-11.36384,-0.045,-11.24599,0.036,-12.44104,1021.707
2011-01-06 15:00:00.1 ,0.09975,-11.34448,-0.0325,-11.26053,0.041,-12.45392,1021.694
2011-01-06 15:00:00.2 ,0.15375,-11.35576,-0.02975,-11.26536,0.01025,-12.44748,1021.407
2011-01-06 15:00:00.3 ,-0.00225,-11.42034,-0.03775,-11.28477,-0.013,-12.44429,1021.764
2011-01-06 15:00:00.4 ,0.01625,-11.33965,-0.0395,-11.27989,-0.0395,-12.42172,1021.484
What I Currently Get as Output:
I get a file with one average of every variable I feed into Apache Pig, without a date and time (most likely the average of each variable over the entire data set). I need the averages for each hour, and I need them printed with the date and time in the output. Any tips would be appreciated. Sorry if my post is messy; I don't post to Stack Overflow often.
The date and time pattern string in ToDate doesn't exactly match your data. You have YYYY-MM-ddThh:mm:ss.s, but your data looks like 2011-01-06 15:00:00.0. You need to match the spaces in your data, and since your hours are on the 24-hour clock, you need to use HH instead of hh. Check out the documentation for Java's SimpleDateFormat class. Try this pattern string instead:
times = FOREACH t_data GENERATE ToDate(date,'yyyy-MM-dd HH:mm:ss.s ') as date;
To debug your code, try dumping right after creating the relation times instead of at the end since it seems like the problem is with ToDate().
Savage's answer was correct. The issue in my code was that the closing quotation mark came immediately after the date and time string, leaving out the trailing space present in the data. So instead of writing mine like this:
(date,'YYYY-MM-ddThh:mm:ss.s')
It should be written like this:
(date,'YYYY-MM-ddThh:mm:ss.s ')

Is it easy to modify this Python code to use pandas, and would it help if I did?

I have written a Python 2.7 script that reads a CSV file and then does some standard deviation calculations. It works absolutely fine, but it is very, very slow: a CSV I tried with 100 million lines took around 28 hours to complete. I did some googling and it appears that using the pandas module might make this quicker.
I have posted part of the code below. Since I am pretty much a novice when it comes to Python, I am unsure whether using pandas would actually help at all, and if it did, whether the function would need to be completely rewritten.
Some context for the CSV file: it has 3 columns; the first column is an IP address, the second is a URL and the third is a timestamp.
import csv

def parseCsvToDict(filepath):
    with open(filepath) as f:
        ip_dict = dict()
        csv_data = csv.reader(f)
        f.next()  # skip header line
        for row in csv_data:
            if len(row) == 3:  # some lines in the csv have more/fewer than the 3 fields they should, so this is a cheat to get the script working by ignoring any wrong data
                current_ip, URI, current_timestamp = row
                epoch_time = convert_time(current_timestamp)  # convert each time to epoch
                if current_ip not in ip_dict.keys():
                    ip_dict[current_ip] = dict()
                if URI not in ip_dict[current_ip].keys():
                    ip_dict[current_ip][URI] = list()
                ip_dict[current_ip][URI].append(epoch_time)
    return ip_dict
Once the above function has finished, the data is passed to another function that calculates the standard deviation for each IP/URL pair (using numpy.std).
Do you think that using pandas may increase the speed and would it require a complete rewrite or is it easy to modify the above code?
The following should work:
import pandas as pd
colnames = ["current_IP", "URI", "current_timestamp", "dummy"]
df = pd.read_csv(filepath, names=colnames)
# Remove incomplete and redundant rows:
df = df[~df.current_timestamp.isnull() & df.dummy.isnull()]
Notice this assumes you have enough RAM. In your code you are already assuming you have enough memory for the dictionary, but the dictionary may be significantly smaller than the memory used by the DataFrame above, for two reasons.
If it is because most lines are dropped, then just parse the CSV in chunks: the skiprows and nrows arguments are your friends, followed by pd.concat.
If it is because IPs/URLs are repeated, then you will want to transform the IPs and URLs from normal columns into indices: parse in chunks as above, and on each chunk do
indexed = df.set_index(["current_IP", "URI"]).sort_index()
I expect this will indeed give you a performance boost.
EDIT: ... including a performance boost to the calculation of the standard deviation (hint: df.groupby())
I will not be able to give you an exact solution, but here are a couple of ideas.
Based on your numbers, you read 100000000 / 28 / 60 / 60, i.e. approximately 1000 lines per second. That is not really slow, but I believe that just reading such a big file can cause a problem.
So take a look at this performance comparison of ways to read a huge file. Basically it suggests that doing this:
file = open("sample.txt")
while 1:
lines = file.readlines(100000)
if not lines:
break
for line in lines:
pass # do something
can give you something like a 3x read boost. I also suggest you try defaultdict instead of your "if key not in dict, create [], otherwise append" pattern.
And last, not related to Python: working in data analysis, I have found an amazing tool for working with CSV/JSON. It is csvkit, which allows you to manipulate CSV data with ease.
In addition to what Salvador Dali said in his answer: If you want to keep as much of the current code of your script, you may find that PyPy can speed up your program:
“If you want your code to run faster, you should probably just use PyPy.” — Guido van Rossum (creator of Python)

Bizarre field switch between CLI and file output in Ruby

I'm having a strange issue with a Ruby script I'm working with... in this script I parse an iTunes library XML file and build objects for Artists, Albums and Tracks. In my Album class, I have two numeric fields, YEAR and TRACK_COUNT.
My script parses the two fields correctly; for example, here is its output:
#<Album:0x007f59b1472a18 #compilation=false, #title="Straight Out Of Hell", #year=2013, #track_count=13, #trackList=[], #coverList=[]>
When I output this same object to a file, it gets mangled, transforming to this (here in JSON format):
{"compilation":false,"title":"Straight Out Of Hell","year":13,"track_count":13,"trackList":[],"coverList":[]}]
As you can see, the YEAR field gets overwritten with the value of the TRACK_COUNT field... I'm going crazy over this, as I don't change this field between these outputs!
UPDATE
As asked by @Amadan...
http://pastebin.com/1FUuvaCr Biblioteca.xml (EXCERPT)
http://pastebin.com/F8wgu6bz Track.rb
http://pastebin.com/3qhd4TRU Song.rb
http://pastebin.com/RNf5S7AZ dependencies.rb
http://pastebin.com/haXPpJgN Cover.rb
http://pastebin.com/1JYtT1nn Artist.rb
http://pastebin.com/qsgLsAJa Album.rb
http://pastebin.com/eiUAMfwR app.rb (MAIN SCRIPT)
This is happening because your source file is not as clean as you believe it to be. In some albums in the source XML, "Track Count" and "Year" are appearing on the same line, without a recognized line break between them. So you might have a line like this:
<key>Track Count</key><integer>12</integer><key>Year</key><integer>2006</integer>
When your if-else-if ladder asks if "track count" appears in the line, it does, so you're grabbing the first <integer>something</integer> match on the line. This works fine. But when you try to extract the year out of this line, you're again asking for the first <integer> on the line, which is the Track Count.
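A tiny illustration of that first-match behaviour (the regex here is just a stand-in for whatever extraction the script actually does):
line = '<key>Track Count</key><integer>12</integer><key>Year</key><integer>2006</integer>'
line[/<integer>(\d+)<\/integer>/, 1]  #=> "12" (so the Year branch also ends up with the Track Count)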
The bigger problem is that you're attempting to parse an XML file line-by-line, and that's not how they're meant to be read. Install the nokogiri gem and call this:
data = Nokogiri::XML(File.read('Biblioteca.xml'))
Now you can get to any information contained in the document. The official tutorials on using Nokogiri are here: http://www.nokogiri.org/tutorials/
Use this method to parse your file:
require 'nokogiri'

def parse(filename)
  xml = Nokogiri::XML(File.read(filename))
  # The <key> elements whose text is a 4-digit number are the track IDs in this library
  songs = xml.css('dict key').select { |key| key.text =~ /^[0-9]{4}$/ }
  songs.map do |song|
    info = {}
    song.next_element.css('key').each do |attribute|
      info[attribute.text] = attribute.next_element.text
    end
    info
  end
end
This will create a list of song hashes. Here are some examples for how to use it:
# load the two songs in your example file
songs = parse('Biblioteca.xml')
# Get the year of the first song
songs[0]['Year'] #=> "2006"
# Get the Track Count of the second song's album
songs[1]['Track Count'] #=> "12"
# Get the Name of the second song
songs[1]['Name'] #=> 'Baby Come On'
# Get the Album name of the second song
songs[1]['Album'] #=> 'When Your Heart Stops Beating'
From here, you can easily put info into your song objects. Let me know if you have any more questions.
I've found a library for iTunes' dodgy plist XML standard... Nokogiri-plist... working fine now :D

How do I get a file's age in days in Ruby?

How would I get the age of a file in days in Ruby?
NOTE that I need a way to accurately get the age of a given file; this means that leap years need to be taken into account.
I need this for a program that removes files after they reach a certain age in days, such as files that are 20 days or older.
And by age, I mean the last access time of a given file, so if a file hasn't been accessed in the past 20 days or more, it gets deleted.
In Perl, I know that you can use Date::Calc to calculate a date in terms of days since 1 AD, and I used to have a Common Lisp program that used the Common Lisp implementation of Date::Calc, but I don't have that anymore, so I've been looking for an alternative, and Ruby seems to have the required capability.
path = '/path/to/file'
(Time.now - File.stat(path).mtime).to_i / 86400.0
#=> 1.001232
Here is the implementation of my comment above; it returns a floating-point number expressing the number of days that have passed.
I know it is an old question, but I needed the same thing and came up with this solution, which might be helpful for others.
As the difference is in days, there is no need to deal directly with seconds.
require 'date'
age_in_days = (Date.today - File.mtime(path).to_date).to_i
if age_in_days > 20
  # delete logs
end
If using Rails, you can take advantage of ActiveSupport:
if File.mtime(path) < 20.days.ago
  # delete logs
end
If you aren't using Rails, Eduardo's solution above would be my pick.
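Since the question defines age by last access time rather than modification time, here is a minimal sketch along those lines. The glob pattern and the 20-day threshold are placeholders, and note that atime is only reliable if the filesystem isn't mounted with noatime:
require 'date'

# Sketch: delete files that haven't been accessed in the last 20 days.
# The glob pattern is a placeholder; adjust it to your own directory.
MAX_AGE_DAYS = 20

Dir.glob('/path/to/logs/*.log').each do |path|
  age_in_days = (Date.today - File.atime(path).to_date).to_i
  File.delete(path) if age_in_days >= MAX_AGE_DAYS
end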

How to fetch all records using NCBI Batch Entrez

I have over 200,000 accessions in a flat file, for which I need to retrieve the relevant entries from NCBI.
I use Batch Entrez (http://www.ncbi.nlm.nih.gov/sites/batchentrez) to do the job, but I encountered several problems:
The initial file was split into multiple sub-files, each containing 4000 lines. But it seems Batch Entrez has some size limitation on the returned file. For example, if the first 1000 accessions all have tens of thousands of lines, which reach the size limitation, then the remaining 3000 accessions will be rejected and won't be searched.
One possible solution that comes to mind is to split the file into more sub-files and search each individually; however, this requires too much manual effort.
So I am just wondering if there is any other solution, or any code could be used.
Thanks in advance
Your problem sounds like a good fit for a Bio* toolkit. This is a solution using BioSmalltalk:
| giList gbReader |
giList := (BioObject openFullFileNamed: 'd:\Batch_entrez_1.txt') contents lines.
gbReader := BioNCBIGenBankReader new.
gbReader
    genBankRecordsFrom: 'nuccore'
    format: #setModeXML
    uids: giList.
(BioGBSeqCollection newFromXMLCollection: gbReader searchResults)
    collect: [: e | BioParser
        tokenizeNcbiXmlBlast: e contents
        nodes: #('GBAuthor' 'GBSeq_definition') ]
To execute/debug the script, just select it and a right-click will open the Smalltalk world-menu.
The API automatically splits and fetches your accession list (contained in Batch_entrez_1.txt in the script), respecting the NCBI Entrez post limits to avoid penalties.
The result format is XML (which is an "easy" format to parse or to filter for specific fields), although it could be any of the retrieval modes supported by Entrez; for example, setting #setModeText will answer an ASN.1 representation. Replace 'nuccore' with the database you want to query. Finally, choose the fields you are interested in; in the script I have chosen 'GBAuthor' and 'GBSeq_definition', but you are free to choose any of the available nodes.
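If you would rather script the same chunk-and-fetch idea yourself without a Bio* toolkit, here is a rough Ruby sketch that POSTs accession chunks directly to NCBI's E-utilities efetch endpoint. The input file name, chunk size, output path, rettype and sleep interval are all assumptions; check the current E-utilities usage guidelines (and consider epost + efetch) before running it against 200,000 accessions:
require 'net/http'
require 'uri'

# Sketch: fetch GenBank records for a large accession list in chunks.
# 'Batch_entrez_1.txt', the chunk size of 200 and 'results.gb' are placeholders.
accessions = File.readlines('Batch_entrez_1.txt').map(&:strip).reject(&:empty?)
uri = URI('https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi')

File.open('results.gb', 'w') do |out|
  Net::HTTP.start(uri.host, uri.port, use_ssl: true) do |http|
    accessions.each_slice(200) do |chunk|
      body = URI.encode_www_form('db' => 'nuccore', 'id' => chunk.join(','),
                                 'rettype' => 'gb', 'retmode' => 'text')
      out.write(http.post(uri.path, body).body)
      sleep 1  # stay under NCBI's request-rate limits
    end
  end
end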
