How can I generate SQL inserts from pipe-delimited data? - elisp

Given a set of delimited data in the following format:
1|Star Wars: Episode IV - A New Hope|1977|Action,Sci-Fi|George Lucas
2|Titanic|1997|Drama,History,Romance|James Cameron
In elisp, how can I generate SQL insert statements in this format?
insert into table
values(1,"Star Wars: Episode IV - A New Hope",1977,"Action,Sci-Fi","George Lucas",0);
insert into table
values(2,"Titanic",1997,"Drama,History,Romance","James Cameron",0);
To simplify the problem, let's allow a parameter that tells which
columns are numeric (0) or text (1), e.g. 0,1,0,1,1.
Here's how I would do it in Perl.
my @ctypes = qw/0 1 0 1 1/;
while (<>) {
    chomp;
    my @F = split('\|', $_);
    my @types = @ctypes;    # copy, so each row consumes a fresh list
    print "insert into table values(";
    foreach my $col (@F) {
        my $type = shift(@types);
        print($type == 1 ? '"' . $col . '"' : $col);
        print ",";
    }
    print "0);\n";
}

To insert the data at the end of a buffer:
(require 'cl)

(defun* insert-statements (rows ctypes &key (table-name "table") (delimiter "|"))
  (let* ((values-template
          (mapconcat (lambda (type) (if (= type 1) "\"%s\"" "%s")) ctypes ","))
         (template (format "insert into %s values(%s);\n" table-name values-template)))
    (mapc (lambda (row)
            (insert (apply #'format template (split-string row delimiter))))
          rows)))
(let ((data "1|Star Wars: Episode IV - A New Hope|1977|Action,Sci-Fi|George Lucas
2|Titanic|1997|Drama,History,Romance|James Cameron")
(ctypes '(0 1 0 1 1)))
(save-excursion
(goto-char (point-max))
(insert-statements (split-string data "\n") ctypes)))
This solution may not scale as well as the Perl one, since it builds the whole row list in memory rather than streaming line by line.
Alternatively, we could send the statements straight to a database using Emacs' SQL mode (sql.el):
(require 'cl)

(defun* insert-statements (rows ctypes &key (table-name "table") (delimiter "|"))
  (let* ((values-template
          (mapconcat (lambda (type) (if (= type 1) "\"%s\"" "%s")) ctypes ","))
         (template (format "insert into %s values(%s);" table-name values-template)))
    (mapc (lambda (row)
            (sql-send-string (apply #'format template (split-string row delimiter))))
          rows)))
(sql-sqlite)
(sql-send-string "create table test (id, title, yr, genre, director);")

(let ((data "1|Star Wars: Episode IV - A New Hope|1977|Action,Sci-Fi|George Lucas
2|Titanic|1997|Drama,History,Romance|James Cameron")
      (ctypes '(0 1 0 1 1)))
  (insert-statements (split-string data "\n") ctypes :table-name 'test))

(sql-send-string "select title from test;")
(sql-send-string "drop table test;")

You can write a simple program that goes through each line, splits on the | delimiter, and generates the insert statements. Alternatively, if you are familiar with MySQL, you can import the data into a MySQL table using the LOAD DATA command. You will need to have the data in a file, though.
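For example, a minimal sketch of the LOAD DATA route (the file path and the movies table name are hypothetical, and the table must already exist):

LOAD DATA LOCAL INFILE '/tmp/movies.txt'
INTO TABLE movies
FIELDS TERMINATED BY '|'
LINES TERMINATED BY '\n';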


Need to extend elisp function

All,
I must suck at eLisp. Banged this first function out in no time.
(defun sort-lines-reverse (beg end)
  "Sort lines in reverse order."
  (interactive
   (if (use-region-p)
       (list (region-beginning) (region-end))
     (list (point-min) (point-max))))
  (if (and beg end)
      (sort-lines 1 beg end)))
Works perfectly. Hosed this next one:
(defun sort-numeric-fields-reverse (field beg end)
"sort numeric fields in reverse order"
(interactive
(if (use-region-p)
(list (read-number "Field number: " 1) (region-beginning) (region-end))
(list (read-number "Field number: " 1) (point-min) (point-max)))
(message "calling if")
(if (and beg end)
((message "inside if")
(sort-numeric-fields field beg end)
(reverse-region beg end))
)
))
No runs, no hits, no errors. I don't see a single message displayed in *Messages*, though I do get my field number prompt.
A snippet of randomly generated test data if one so wishes.
8 412086510
8 744308263
8 1482781895
8 995992436
1 1021762533
1 897682569
1 963686690
1 166565707
1 2096612583
1 829723388
1 587753131
1 264251710
32 139885828
32 781244288
Adding insult to injury, in my KDE Neon environment C-M-x doesn't do squat to bring up the Lisp debugger.
The only real difference between these two functions is that this one prompts for a field number, and inside the if I run two functions instead of one. After getting the first one to work, the second should have been a cakewalk.
Help would be appreciated.
Two issues:
a missing ) at the end of interactive, after (if (use-region-p) ...) — as written, the message and if forms are part of the interactive spec
a missing progn around the body of (if (and beg end) ...) — the extra parentheses try to call the return value of (message "inside if") as a function
(The progn turns out to be superfluous below, because if has been replaced by when.)
Corrected version:
(defun sort-numeric-fields-reverse (field beg end)
  "Sort numeric fields in reverse order."
  (interactive
   (if (use-region-p)
       (list (read-number "Field number: " 1) (region-beginning) (region-end))
     (list (read-number "Field number: " 1) (point-min) (point-max))))
  (message "calling if")
  (when (and beg end)
    (message "inside if")
    (sort-numeric-fields field beg end)
    (reverse-region beg end)))
EDIT: Code changed: if-progn replaced with when according to hint from @phils.
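For reference, the two forms behave identically; when is just if plus an implicit progn with no else branch:

(if (and beg end)
    (progn
      (sort-numeric-fields field beg end)
      (reverse-region beg end)))
;; same as
(when (and beg end)
  (sort-numeric-fields field beg end)
  (reverse-region beg end))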
Hint: using an adequate editor makes the typing easy and gives you control over parentheses.

How to execute batch SQL update query in Clojure?

How to execute the following query for thousand rows as a single batch call using a prepared statement under the hood?
(clojure.java.jdbc/execute! db ["UPDATE person SET zip = ? WHERE zip = ?" 94540 94546])
Does clojure.java.jdbc have an appropriate function or something else for that?
Found the answer. The applicable function is clojure.java.jdbc/db-do-prepared with the :multi? option enabled: the statement is prepared once, each vector supplies one group of parameters, and the groups are submitted together as a JDBC batch.
(clojure.java.jdbc/db-do-prepared db
  ["UPDATE person SET zip = ? WHERE zip = ?"
   [94540 94546]
   [94541 94547]
   ...]
  {:multi? true})
Here is the way I did it for one of my projects:
(require '[clojure.java.jdbc :as sql])

(defn- db-do-cmd
  [command]
  (sql/db-do-commands @db-url command))

(defn- create-update-phone-sql
  [{:keys [fname lname dob phone]}]
  (let [where-init (format "UPDATE person SET phone = %s WHERE " (escape-quote-and-null phone))
        where-rest (apply str (interpose " AND "
                                         [(str "person.dob" (escape-quote-and-null-for-where dob))
                                          (str "person.fname" (escape-quote-and-null-for-where fname))
                                          (str "person.lname" (escape-quote-and-null-for-where lname))]))]
    (str where-init where-rest)))

(defn batch-update!
  [coll]
  (db-do-cmd (map create-update-phone-sql coll)))
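The escape-quote-and-null helpers aren't shown above; a plausible reconstruction (not the original code) would be:

(defn- escape-quote-and-null
  "Render a value for direct SQL interpolation; nil becomes NULL."
  [v]
  (if (nil? v)
    "NULL"
    (str "'" (clojure.string/replace (str v) "'" "''") "'")))

(defn- escape-quote-and-null-for-where
  "Render the comparison half of a WHERE clause; nil becomes IS NULL."
  [v]
  (if (nil? v)
    " IS NULL"
    (str " = " (escape-quote-and-null v))))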

How to extract element from html in Racket?

I want to extract the URLs from reddit. My code is:
#lang racket
(require net/url)
(require html)
(define reddit (string->url "http://www.reddit.com/r/programming/search?q=racket&sort=relevance&restrict_sr=on&t=all"))
(define in (get-pure-port reddit #:redirections 5))
(define response-html (read-html-as-xml in))
(define content-0 (list-ref response-html 0))
(close-input-port in)
The content-0 above is
(element
(location 0 0 15)
(location 0 0 82)
...
I'm wondering how to extract specific content from it.
Usually it's more convenient to deal with HTML as x-expressions instead of the html module's structs.
Also you should probably use call/input-url to handle closing the port automatically.
You can combine both of these ideas by defining a read-html-as-xexpr function and using it like this:
#lang racket/base
(require html
         net/url
         xml)

(define (read-html-as-xexpr in) ;; input-port? -> xexpr?
  (caddr
   (xml->xexpr
    (element #f #f 'root '()
             (read-html-as-xml in)))))

(define reddit (string->url "http://www.reddit.com/r/programming/search?q=racket&sort=relevance&restrict_sr=on&t=all"))

(call/input-url reddit
                get-pure-port
                read-html-as-xexpr)
That will return a big x-expression like:
'(html
((lang "en") (xml:lang "en") (xmlns "http://www.w3.org/1999/xhtml"))
(head
()
(title () "programming: search results")
(meta
((content " reddit, reddit.com, vote, comment, submit ")
(name "keywords")))
(meta
((content "reddit: the front page of the internet") (name "description")))
(meta ((content "origin") (name "referrer")))
(meta ((content "text/html; charset=UTF-8") (http-equiv "Content-Type")))
... snip ...
How to extract specific pieces of that?
For simple HTML where I don't expect the overall structure to change, I will often just use match.
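For instance, a minimal match sketch that pulls the page title out of an xexpr shaped like the output shown above (call it xe; this assumes the title element is the first child of head, as it is here):

(require racket/match)

(match xe
  [(list 'html _ (list 'head _ (list 'title _ title) _ ...) _ ...)
   title])
;; => "programming: search results"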
However a more correct and robust way to go about it is to use the xml/path module.
UPDATE: I noticed your question started by asking about extracting URLs. Here's the example updated to use se-path*/list to get all the href attributes of all the <a> elements:
#lang racket/base
(require html
         net/url
         xml
         xml/path)

(define (read-html-as-xexprs in) ;; (-> input-port? xexpr?)
  (caddr
   (xml->xexpr
    (element #f #f 'root '()
             (read-html-as-xml in)))))

(define reddit (string->url "http://www.reddit.com/r/programming/search?q=racket&sort=relevance&restrict_sr=on&t=all"))

(define xe (call/input-url reddit
                           get-pure-port
                           read-html-as-xexprs))

(se-path*/list '(a #:href) xe)
Result:
'("#content"
"http://www.reddit.com/r/announcements/"
"http://www.reddit.com/r/Art/"
"http://www.reddit.com/r/AskReddit/"
"http://www.reddit.com/r/askscience/"
"http://www.reddit.com/r/aww/"
"http://www.reddit.com/r/blog/"
"http://www.reddit.com/r/books/"
"http://www.reddit.com/r/creepy/"
"http://www.reddit.com/r/dataisbeautiful/"
"http://www.reddit.com/r/DIY/"
"http://www.reddit.com/r/Documentaries/"
"http://www.reddit.com/r/EarthPorn/"
"http://www.reddit.com/r/explainlikeimfive/"
"http://www.reddit.com/r/Fitness/"
"http://www.reddit.com/r/food/"
... snip ...

clojure.java.jdbc/query large resultset lazily

I'm trying to read millions of rows from a database and write them to a text file.
This is a continuation of my question database dump to text file with side effects
My problem now seems to be that the logging doesn't happen until the program completes. Another indicator that I'm not processing lazily is that the text file isn't written at all until the program finishes.
Based on an IRC tip, my issue likely has to do with :result-set-fn defaulting to doall in the clojure.java.jdbc/query part of the code.
I have tried replacing this with a for comprehension, but still find that memory consumption is high, as it pulls the entire result set into memory.
How can I have a :result-set-fn that doesn't pull everything in the way doall does? And how can I write the log file progressively as the program runs, rather than dumping everything once the -main execution is finished?
(let [db-spec local-postgres
      sql "select * from public.f_5500_sf"
      log-report-interval 1000
      fetch-size 100
      field-delim "\t"
      row-delim "\n"
      db-connection (doto (j/get-connection db-spec) (.setAutoCommit false))
      statement (j/prepare-statement db-connection sql :fetch-size fetch-size)
      joiner (fn [v] (str (join field-delim v) row-delim))
      start (System/currentTimeMillis)
      rate-calc (fn [r] (float (/ r (/ (- (System/currentTimeMillis) start) 100))))
      row-count (atom 0)
      result-set-fn (fn [rs] (lazy-seq rs))
      lazy-results (rest (j/query db-connection [statement]
                                  :as-arrays? true
                                  :row-fn joiner
                                  :result-set-fn result-set-fn))] ; }}}
  (.setAutoCommit db-connection false)
  (info "Started dbdump session...")
  (with-open [^java.io.Writer wrtr (io/writer "output.txt")]
    (info "Running query...")
    (doseq [row lazy-results]
      (.write wrtr row)))
  (info (format "Completed write with %d rows" @row-count)))
I took the recent fixes for clojure.java.jdbc by putting [org.clojure/java.jdbc "0.3.0-beta1"] in my project.clj dependencies listing. This one enhances/corrects the :as-arrays? true functionality of clojure.java.jdbc/query described here.
I think this helped somewhat; however, I may still have been able to override the :result-set-fn with vec.
The core issue was resolved by tucking all row logic into :row-fn. The initial OutOfMemory problems had to do with iterating through j/query result sets rather than defining the specific :row-fn.
New (working) code is below:
(defn -main []
  (let [; {{{
        db-spec local-postgres
        source-sql "select * from public.f_5500"
        log-report-interval 1000
        fetch-size 1000
        row-count (atom 0)
        field-delim "\u0001" ; unlikely to be in source feed,
                             ; although i should still check in
                             ; replace-newline below (for when "\t"
                             ; is used especially)
        row-delim "\n" ; unless fixed-width, target doesn't
                       ; support non-printable chars for recDelim like
        db-connection (doto (j/get-connection db-spec) (.setAutoCommit false))
        statement (j/prepare-statement db-connection source-sql
                                       :fetch-size fetch-size
                                       :concurrency :read-only)
        start (System/currentTimeMillis)
        rate-calc (fn [r] (float (/ r (/ (- (System/currentTimeMillis) start) 100))))
        replace-newline (fn [s] (if (string? s) (clojure.string/replace s #"\n" " ") s))
        row-fn (fn [v]
                 (swap! row-count inc)
                 (when (zero? (mod @row-count log-report-interval))
                   (info (format "wrote %d rows" @row-count))
                   (info (format "\trows/s %.2f" (rate-calc @row-count)))
                   (info (format "\tPercent Mem used %s " (memory-percent-used))))
                 (str (join field-delim (doall (map #(replace-newline %) v))) row-delim))] ; }}}
    (info "Started database table dump session...")
    (with-open [^java.io.Writer wrtr (io/writer "./sql/output.txt")]
      (j/query db-connection [statement] :as-arrays? true
               :row-fn #(.write wrtr (row-fn %))))
    (info (format "\t\t\tCompleted with %d rows" @row-count))
    (info (format "\t\t\tCompleted in %s seconds" (float (/ (- (System/currentTimeMillis) start) 1000))))
    (info (format "\t\t\tAverage rows/s %.2f" (rate-calc @row-count)))
    nil))
Other things I experimented with (with limited success) involved timbre logging and turning off standard out; I wondered whether running under a REPL might cache the results before displaying them back to my editor (vim fireplace), and whether that was using a lot of the memory.
Also, I added logging of free memory with (.freeMemory (java.lang.Runtime/getRuntime)). I wasn't familiar enough with VisualVM to pinpoint exactly where my issue was.
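The memory-percent-used helper called from row-fn above isn't shown; a minimal sketch of what it could look like, using the same Runtime API (a reconstruction, not the original):

(defn memory-percent-used
  "Percent of the max heap currently in use."
  []
  (let [rt (java.lang.Runtime/getRuntime)
        used (- (.totalMemory rt) (.freeMemory rt))]
    (format "%.1f%%" (* 100.0 (/ used (.maxMemory rt))))))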
I am happy with how it works now, thanks everyone for your help.
You can use prepare-statement with the :fetch-size option. Otherwise, the query itself is eager despite the results being delivered in a lazy sequence.
prepare-statement requires a connection object, so you'll need to explicitly create one. Here's an example of how your usage might look:
(let [db-spec local-postgres
      sql "select * from big_table limit 500000"
      fetch-size 10000 ;; or whatever's appropriate
      cnxn (doto (j/get-connection db-spec)
             (.setAutoCommit false))
      stmt (j/prepare-statement cnxn sql :fetch-size fetch-size)
      results (rest (j/query cnxn [stmt]))]
  ;; ...
  )
Another option
Since the problem seems to be with query, try with-query-results. It's considered deprecated but is still there and works. Here's an example usage:
(let [db-spec local-postgres
      sql "select * from big_table limit 500000"
      fetch-size 100 ;; or whatever's appropriate
      cnxn (doto (j/get-connection db-spec)
             (.setAutoCommit false))
      stmt (j/prepare-statement cnxn sql :fetch-size fetch-size)]
  (j/with-query-results results [stmt] ;; binds the results to `results`
    (doseq [row results]
      ;; ...
      )))
I have found a better solution: declare a cursor and fetch chunks of data from it inside a transaction (in PostgreSQL, a cursor only lives for the duration of its transaction). Example:
(db/with-tx
  (db/execute! "declare cur cursor for select * from huge_table")
  (loop []
    (when-let [rows (-> "fetch 10 from cur" db/query not-empty)]
      (doseq [row rows]
        (process-a-row row))
      (recur))))
Here, db/with-tx, db/execute! and db/query are my own shortcuts, declared in a db namespace:
(def ^:dynamic *db*
  {:dbtype "postgresql"
   :connection-uri "<some db url>"})

(defn query [& args]
  (apply jdbc/query *db* args))

(defn execute! [& args]
  (apply jdbc/execute! *db* args))

(defmacro with-tx
  "Runs a series of queries in a transaction."
  [& body]
  `(jdbc/with-db-transaction [tx# *db*]
     (binding [*db* tx#]
       ~@body)))

Extracting URLs from an Emacs buffer?

How can I write an Emacs Lisp function to find all hrefs in an HTML file and extract all of the links?
Input:
<html>
<a href="http://www.stackoverflow.com" _target="_blank">StackOverFlow</a>
<h1>Emacs Lisp</h1>
<a href="http://news.ycombinator.com" _target="_blank">Hacker News</a>
</html>
Output:
http://www.stackoverflow.com|StackOverFlow
http://news.ycombinator.com|Hacker News
I've seen the re-search-forward function mentioned several times during my search. Here's what I think I need to do, based on what I've read so far:
(defun extra-urls (file)
...
(setq buffer (...
(while
(re-search-forward "http://" nil t)
(when (match-string 0)
...
))
I took Heinzi's solution and came up with the final solution that I needed. I can now take a list of files, extract all URL's and titles, and place the results in one output buffer.
(defun extract-urls (fname)
  "Extract HTML href URLs and titles to buffer 'new-urls.csv' in | separated format."
  (setq in-buf (set-buffer (find-file fname))) ; Save for clean up
  (goto-char (point-min))  ; Need to do this in case the buffer is already open
  (setq u1 '())
  (while (re-search-forward "^.*<a href=\"\\([^\"]+\\)\"[^>]+>\\([^<]+\\)</a>" nil t)
    (when (match-string 0)                     ; Got a match
      (setq url (match-string 1))              ; URL
      (setq title (match-string 2))            ; Title
      (setq u1 (cons (concat url "|" title "\n") u1)))) ; Build the list of URLs
  (kill-buffer in-buf)                         ; Don't leave a mess of buffers
  (with-current-buffer (get-buffer-create "new-urls.csv") ; Send results to new buffer
    (mapc 'insert u1))
  (switch-to-buffer "new-urls.csv"))           ; Finally, show the new buffer
;; Create a list of files to process
(mapc 'extract-urls '("/tmp/foo.html"
                      "/tmp/bar.html"))
If there is at most one link per line and you don't mind some very ugly regular expression hacking, run the following code on your buffer:
(defun getlinks ()
  (beginning-of-buffer)
  (replace-regexp "^.*<a href=\"\\([^\"]+\\)\"[^>]+>\\([^<]+\\)</a>.*$" "LINK:\\1|\\2")
  (beginning-of-buffer)
  (replace-regexp "^\\([^L]\\|\\(L[^I]\\)\\|\\(LI[^N]\\)\\|\\(LIN[^K]\\)\\).*$" "")
  (beginning-of-buffer)
  (replace-regexp "
+" "
")
  (beginning-of-buffer)
  (replace-regexp "^LINK:\\(.*\\)$" "\\1"))
It replaces all links with LINK:url|description, deletes all lines containing anything else, deletes empty lines, and finally removes the "LINK:".
Detailed HOWTO: (1) Correct the bug in your example html file by replacing <href with <a href, (2) copy the above function into Emacs scratch, (3) hit C-x C-e after the final ")" to load the function, (4) load your example HTML file, (5) execute the function with M-: (getlinks).
Note that the linebreaks in the third replace-regexp are important. Don't indent those two lines.
You can use the xml library; examples of using the parser are found here. To parse your particular file, the following does what you want:
(defun my-grab-html (file)
  (interactive "fHtml file: ")
  (let ((res (car (xml-parse-file file)))) ; 'car because xml-parse-file returns a list of nodes
    (mapc (lambda (n)
            (when (consp n) ; don't operate on the whitespace, xml preserves whitespace
              (let ((link (cdr (assq 'href (xml-node-attributes n)))))
                (when link
                  (insert link)
                  (insert "|")
                  (insert (car (xml-node-children n))) ; grab the text for the link
                  (insert "\n")))))
          (xml-node-children res))))
This does not recursively parse the HTML to find all the links, but it should get you started in the direction of the general solution.
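For the general case, here is a minimal sketch of the recursive walk (my-collect-links is a hypothetical name; it builds on the same xml-node-* accessors and assumes each link's first child is its text):

(defun my-collect-links (node)
  "Return a list of \"url|text\" strings for every <a href> under NODE."
  (when (consp node) ; skip whitespace strings
    (let ((link (cdr (assq 'href (xml-node-attributes node)))))
      (append
       (when (and link (stringp (car (xml-node-children node))))
         (list (concat link "|" (car (xml-node-children node)))))
       ;; recurse into every child element
       (apply #'append
              (mapcar #'my-collect-links (xml-node-children node)))))))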
