Cannot read Unicode .csv into R - Windows

I have a .csv file, which contains the following data:
"Ա","Բ"
1,10
2,20
I cannot read it into R so that the column names are displayed like they are in the file.
d <- read.csv("./Data/1.csv", fileEncoding="UTF-8")
head(d)
This produces the following:
> d <- read.csv("./Data/1.csv", fileEncoding="UTF-8")
Warning messages:
1: In read.table(file = file, header = header, sep = sep, quote = quote, :
invalid input found on input connection './Data/1.csv'
2: In read.table(file = file, header = header, sep = sep, quote = quote, :
incomplete final line found by readTableHeader on './Data/1.csv'
> head(d)
[1] X.
<0 rows> (or 0-length row.names)
Meanwhile, doing the same without specifying the fileEncoding produces this:
> d <- read.csv("./Data/1.csv")
> head(d)
Ô. Ô²
1 1 10
2 2 20
When I run the "file" utility to find out the encoding of the file, it says it is UTF-8:
Data\1.csv: UTF-8 Unicode text, with CRLF line terminators
I am using RStudio, Windows 7, R version 2.15.2, 32-bit.

I wrote a longer answer on the same issue here: R on Windows: character encoding hell.
Quick answer: using the parameter encoding instead of fileEncoding should fix your first issue. The characters may not display correctly in the console or in RStudio's table view, but you will be able to use them in formulas.
d <- read.csv("./Data/1.csv", encoding="UTF-8")
head(d)
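If the header still looks wrong after this, it can help to check what encoding R has recorded for the names. A minimal sketch using base R's Encoding(), assuming the names arrived as undeclared UTF-8 bytes:
d <- read.csv("./Data/1.csv", encoding = "UTF-8", stringsAsFactors = FALSE)
# see which encoding R has recorded for the column names
Encoding(names(d))
# if they are reported as "unknown" but really hold UTF-8 bytes,
# declaring the encoding can fix how they are handled downstream
Encoding(names(d)) <- "UTF-8"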
Having saved your table into a UTF-8 file:
> test2 <- read.csv("test2.csv", header = FALSE, sep = ",", quote = "\"", dec = ".", fill = TRUE, comment.char = "", encoding = "UTF-8")
Warning message:
In read.table(file = file, header = header, sep = sep, quote = quote, :
incomplete final line found by readTableHeader on 'test2.csv'
This is how it looks in the console and in the RStudio view:
> test2
V1 V2
1 <U+0531> <U+0532>
2 1 10
3 2 20
Importantly, however, you are able to manipulate the data within R. In my case, the character Ա entered in the script window has UTF-8 encoding, and grep correctly finds it in your table:
> Encoding("Ա")
[1] "UTF-8"
> grep("Ա", as.character(test2[1,1]))
[1] 1
You may need to find encoding variants that work with your locale settings, or possibly change the settings themselves; unfortunately I am not sure where that is done.
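As a starting point for inspecting those settings, base R exposes the current locale; a minimal sketch (the Sys.setlocale() value shown is a hypothetical Windows example, and available locale names vary by system):
# show the locale R currently uses for character handling
Sys.getlocale("LC_CTYPE")
# hypothetical example: switch to the Western-European codepage on Windows
# Sys.setlocale("LC_CTYPE", "English_United States.1252")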
You might not be able to make it look pretty at every stage, but it is definitely possible to get it working in a Windows 7 environment.

I tried two ways to replicate your problem.
I copied the characters above into RStudio and saved them to a CSV with this code:
write.csv(c("Ա","Բ",
1,10,
2,20), "test.csv")
df <- read.csv("test.csv")
This worked fine.
Then I thought, well, maybe R is cheating when I save it to CSV with R? So I just pasted the characters into a text file and saved it as a CSV. This approach didn't have problems either.
Here's my session info:
sessionInfo()
R version 3.0.1 (2013-05-16)
Platform: x86_64-pc-linux-gnu (64-bit)
locale:
[1] LC_CTYPE=en_CA.UTF-8 LC_NUMERIC=C LC_TIME=en_CA.UTF-8
[4] LC_COLLATE=en_CA.UTF-8 LC_MONETARY=en_CA.UTF-8 LC_MESSAGES=en_CA.UTF-8
[7] LC_PAPER=C LC_NAME=C LC_ADDRESS=C
[10] LC_TELEPHONE=C LC_MEASUREMENT=en_CA.UTF-8 LC_IDENTIFICATION=C
attached base packages:
[1] stats4 grid stats graphics grDevices utils datasets methods base
other attached packages:
[1] party_1.0-9 modeltools_0.2-21 strucchange_1.4-7 sandwich_2.2-10 zoo_1.7-10
[6] GGally_0.4.4 reshape_0.8.4 plyr_1.8 ggplot2_0.9.3.1
loaded via a namespace (and not attached):
[1] coin_1.0-23 colorspace_1.2-2 dichromat_2.0-0 digest_0.6.3
[5] gtable_0.1.2 labeling_0.2 lattice_0.20-23 MASS_7.3-29
[9] munsell_0.4.2 mvtnorm_0.9-9995 proto_0.3-10 RColorBrewer_1.0-5
[13] reshape2_1.2.2 scales_0.2.3 splines_3.0.1 stringr_0.6.2

I had the same problem and found out that the file was corrupted.
I opened the file with OpenOffice and saved it back using the "UTF-8" character set (you need to tick the "Edit filter settings" box), then imported it with read.csv() (no encoding or fileEncoding option), and it worked fine.
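If you would rather do that re-save from R itself, base R's iconv() plus a UTF-8 connection can be sketched as follows; this assumes the broken file is actually latin1, which you would need to verify for your own data:
# convert from the assumed source encoding (latin1 here) to UTF-8
txt <- iconv(readLines("1.csv", warn = FALSE), from = "latin1", to = "UTF-8")
# write the converted text back out through a UTF-8 connection
con <- file("1-utf8.csv", open = "w", encoding = "UTF-8")
writeLines(txt, con)
close(con)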

Related

Why does Windows have issues with the encoding, but Linux doesn't?

For my toy project mpu I have two CI solutions running:
Travis / Linux: Works
Azure / Windows: Fails
It fails with this message:
_______________________________ test_read_json ________________________________

    def test_read_json():
        path = "files/example.json"
        source = pkg_resources.resource_filename(__name__, path)
        data_real = read(source)
        data_exp = {
            "a list": [1, 42, 3.141, 1337, "help", "�"],
            "a string": "bla",
            "another dict": {"foo": "bar", "key": "value", "the answer": 42},
        }
>       assert data_real == data_exp
E       AssertionError: assert {'a list': [1... answer': 42}} == {'a list': [1... answer': 42}}
E         Omitting 2 identical items, use -vv to show
E         Differing items:
E         {'a list': [1, 42, 3.141, 1337, 'help', '€']} != {'a list': [1, 42, 3.141, 1337, 'help', '�']}
E         Use -v to get the full diff

tests\test_io.py:175: AssertionError
Why can it read the € sign from the JSON, but within the test it fails? (Python 3.6)
I assume that the read function which is used in the test wraps open in some way or another.
TL;DR Try adding encoding='utf8' to the call to open.
In my experience, Windows does not always play nicely with non-ASCII characters when reading files unless the encoding is set explicitly.
Also, it does not help that the default value for encoding is platform-dependent:
encoding is the name of the encoding used to decode or encode the
file. This should only be used in text mode. The default encoding is
platform dependent (whatever locale.getpreferredencoding() returns),
but any text encoding supported by Python can be used. See the codecs
module for the list of supported encodings.
Some tests (run on Windows 10, Python 3.7, where locale.getpreferredencoding() returns cp1252):
test.csv
€
with open('test.csv') as f:
    print(f.read())
# â‚¬   (the UTF-8 bytes of € decoded with the cp1252 default)

with open('test.csv', encoding='utf8') as f:
    print(f.read())
# €

Trying to get all paths in a YAML file

I've got an input YAML file (test.yml) as follows:
# sample set of lines
foo:
  x: 12
  y: hello world
  ip_range['initial']: 1.2.3.4
  ip_range[]: tba
  array['first']: Cluster1
array2[]: bar
The source contains square brackets for some keys (possibly empty).
I'm trying to get a line by line list of all the paths in the file, ideally like:
foo.x: 12
foo.y: hello world
foo.ip_range['initial']: 1.2.3.4
foo.ip_range[]: tba
foo.array['first']: Cluster1
array2[]: bar
I've used the yamlpaths library and the yaml-paths CLI, but can't get the desired output. Trying this:
yaml-paths -m -s =foo -K test.yml
outputs:
foo.x
foo.y
foo.ip_range\[\'initial\'\]
foo.ip_range\[\]
foo.array\[\'first\'\]
Each path is on its own line, but the output contains all the escape characters (\). Removing the -m option ("expand matching parent nodes") fixes that problem, but then the output is no longer one path per line:
yaml-paths -s =foo -K test.yml
gives:
foo: {"x": 12, "y": "hello world", "ip_range['initial']": "1.2.3.4", "ip_range[]": "tba", "array['first']": "Cluster1"}
Any ideas how I can get one path entry per line, but without the escape characters? I was also wondering whether there is anything for path querying in the ruamel modules?
Your "paths" are nothing more than the joined string representation of the keys (and probably indices) of the
mappings (and potentially sequences) in your YAML document.
That can be trivially generated from data loaded from YAML with a recursive function:
import ruamel.yaml

yaml_str = """\
# sample set of lines
foo:
  x: 12
  y: hello world
  ip_range['initial']: 1.2.3.4
  ip_range[]: tba
  array['first']: Cluster1
array2[]: bar
"""

def pathify(d, p=None, paths=None, joinchar='.'):
    # recursively walk mappings and sequences, joining keys with joinchar
    if p is None:
        paths = {}
        pathify(d, "", paths, joinchar=joinchar)
        return paths
    pn = p
    if p != "":
        pn += joinchar
    if isinstance(d, dict):
        for k in d:
            v = d[k]
            pathify(v, pn + k, paths, joinchar=joinchar)
    elif isinstance(d, list):
        for idx, e in enumerate(d):
            pathify(e, pn + str(idx), paths, joinchar=joinchar)
    else:
        paths[p] = d

yaml = ruamel.yaml.YAML(typ='safe')
paths = pathify(yaml.load(yaml_str))
for p, v in paths.items():
    print(f'{p} -> {v}')
which gives:
foo.x -> 12
foo.y -> hello world
foo.ip_range['initial'] -> 1.2.3.4
foo.ip_range[] -> tba
foo.array['first'] -> Cluster1
array2[] -> bar
While Anthon's answer certainly produces the output you were after, I think your question was specifically about how to get the yaml-paths command to produce the desired output. I'll address that original question.
As of version 3.5.0, the yamlpath project's yaml-paths command supports a --noescape option which removes the escape symbols from output. Using your input file and the new option, you may find this output more to your liking:
$ yaml-paths --nofile --expand --keynames --noescape --values --search='=~/.*/' test.yml
foo.x: 12
foo.y: hello world
foo.ip_range['initial']: 1.2.3.4
foo.ip_range[]: tba
foo.array['first']: Cluster1
array2[]: bar
Note:
- Using the --values option includes the value with each YAML Path.
- For interest, I changed the --search expression to match every node in the input file rather than only the "foo" data.
- The default output (without --noescape) produces YAML Paths that can be used as direct input to other YAML Path parsers and processors; setting --noescape renders human-friendly paths that may not work as downstream YAML Path input.
Disclaimer: I am the author of the yamlpath project. Should you ever run into issues or have questions about it, please visit the project's GitHub project site and engage me via Issues (bugs and feature requests) or Discussions (questions). Thank you!

pandoc-citeproc as API: processCites' does not add references

I have a small text file in Markdown:
---
title: postWithReference
author: auf
date: 2010-07-29
keywords: homepage
abstract: |
  What are the objects of
  ontologists.
bibliography: "/home/frank/Workspace8/SSG/site/resources/BibTexLatex.bib"
csl: "/home/frank/Workspace8/SSG/site/resources/chicago-fullnote-bibliography-bb.csl"
---
An example post. With a reference to [@Frank2010a] and more [@navratil08].
## References
and process it in Haskell with processCites', which takes a single argument, namely the Pandoc data resulting from readMarkdown. The bibliography and the CSL style should be taken from the input file.
The process does not produce errors, but the result of processCites' is the same text as the input; references are not treated at all. For the same input, the references are resolved by standalone pandoc, which rules out errors in the bibliography and the CSL style:
pandoc -f markdown -t html --filter=pandoc-citeproc -o p1.html postWithReference.md
The issue is therefore in the API. The code I have is:
markdownToHTML4 :: Text -> PandocIO Value
markdownToHTML4 t = do
    pandoc <- readMarkdown markdownOptions t
    let meta2 = flattenMeta (getMeta pandoc)
    -- test if a bibliography is present and apply
    let bib = Just $ meta2 ^? key "bibliography" . _String
    pandoc2 <- case bib of
        Nothing -> return pandoc
        _ -> do
            res <- liftIO $ processCites' pandoc -- :: Pandoc -> IO Pandoc
            when (res == pandoc) $
                liftIO $ putStrLn "*** markdownToHTML3 result without references ***"
            return res
    htmltex <- writeHtml5String html5Options pandoc2
    let withContent = meta2 & _Object . at "contentHtml" ?~ String htmltex
    return withContent

getMeta :: Pandoc -> Meta
getMeta (Pandoc m _) = m
What do I misunderstand? Are there any reader options necessary for citeproc? The bibliography is a BibLaTeX file.
I found a comment in the hakyll code which I cannot understand in light of the code there; perhaps somebody knows what the intention is:
-- We need to know the citation keys, add then *before* actually parsing the
-- actual page. If we don't do this, pandoc won't even consider them
-- citations!
I have a workaround (not an answer to the original question; I still hope that somebody can identify my error!). It is simple to call the standalone pandoc with System.readProcess, pass the text in, and get the result back, without even reading and writing files:
processCites2x :: Maybe FilePath -> Maybe FilePath -> Text -> ErrIO Text
-- process the cites in the text (not with the API)
-- using a system call, because the standalone pandoc works with
-- call: pandoc -f markdown -t html --filter=pandoc-citeproc
-- with the input text on stdin and the result on stdout
-- the csl and bib files are taken from the text, not from the arguments
processCites2x _ _ t = do
    putIOwords ["processCite2"] -- - filein\n", showT styleFn2, "\n", showT bibfn2]
    let cmd = "pandoc"
    let cmdargs = ["--from=markdown", "--to=html5", "--filter=pandoc-citeproc"]
    let cmdinp = t2s t
    res :: String <- callIO $ System.readProcess cmd cmdargs cmdinp
    return . s2t $ res
    -- errors are properly caught and reported in ErrIO
t2s and s2t are conversion utilities between String and Text, ErrIO is ErrorT Text IO, and callIO is essentially liftIO with error handling.
The original problem was very simple: I had not included the option Ext_citations in the markdownOptions. When it is included, the example works (thanks to help I received from the pandoc-citeproc issue page). The referenced code is updated...

Convert markdown italics and boldface to latex

I want to be able to convert markdown italics and boldface to their LaTeX versions on the fly (i.e., give a text string(s), return a text string(s)). I thought: easy. Wrong! (Which it still may be.) See the silly business and the error I tried at the bottom.
What I have (note the starting asterisk that's been escaped as in markdown):
x <- "\\*note: I *like* chocolate **milk** too ***much***!"
What I would like:
"*note: I \\emph{like} chocolate \\textbf{milk} too \\textbf{\\emph{much}}!"
I'm not attached to regex but would prefer a base solution (though not essential).
Silly business:
helper <- function(ins, outs, x) {
    gsub(paste0(ins[1], ".+?", ins[2]), paste0(outs[1], ".+?", outs[2]), x)
}
helper(rep("***", 2), c("\\textbf{\\emph{", "}}"), x)
Error in gsub(paste0(ins[1], ".+?", ins[2]), paste0(outs[1], ".+?", outs[2]), :
invalid regular expression '***.+?***', reason 'Invalid use of repetition operators'
I have this toy that Ananda Mahto helped me make, if it's helpful. You can access it from reports via wheresPandoc <- reports:::wheresPandoc.
EDIT Per Ben's comments I tried:
action <- paste0(" echo ", x, " | ", wheresPandoc(), " -t latex ")
system(action)
*note: I *like* chocolate **milk** too ***much***! | C:\PROGRA~2\Pandoc\bin\pandoc.exe -t latex
EDIT2 Per Dason's comments I tried:
out <- paste("echo", shQuote(x), "|", wheresPandoc(), " -t latex"); system(out)
system(out, intern = T)
> system(out, intern = T)
\*note: I *like* chocolate **milk** too ***much***! | C:\PROGRA~2\Pandoc\bin\pandoc.exe -t latex
The lack of pipes on Windows made this tricky, but you can get around it using input to provide the stdin:
> x = system("pandoc -t latex", intern=TRUE, input="\\*note: I *like* chocolate **milk** too ***much***!")
> x
[1] "*note: I \\emph{like} chocolate \\textbf{milk} too \\textbf{\\emph{much}}!"
Note that I am working on Windows. From ?system:
This means that redirection, pipes, DOS internal commands, ... cannot be used
and the note from ?system2
Note
system2 is a more portable and flexible interface than system,
introduced in R 2.12.0. It allows redirection of output without
needing to invoke a shell on Windows, a portable way to set
environment variables for the execution of command, and finer control
over the redirection of stdout and stderr. Conversely, system (and
shell on Windows) allows the invocation of arbitrary command lines.
Using system2
system2('pandoc', '-t latex', input = '**em**', stdout = TRUE)
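Wrapping that call in a small function gives the string-in, string-out conversion the question asked for. A minimal sketch; md2latex is a hypothetical helper name, and it assumes pandoc is on the PATH:
# hypothetical convenience wrapper around pandoc
md2latex <- function(x) {
    system2("pandoc", c("-f", "markdown", "-t", "latex"),
            input = x, stdout = TRUE)
}
md2latex("\\*note: I *like* chocolate **milk** too ***much***!")
# [1] "*note: I \\emph{like} chocolate \\textbf{milk} too \\textbf{\\emph{much}}!"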

Ruby libxml: format XMLParser to expand closing tags [duplicate]

libxml2 (for C) is not preserving empty elements in their original form on a save. It replaces <tag></tag> with <tag/> which is technically correct but causes problems for us.
xmlDocPtr doc = xmlParseFile("myfile.xml");
xmlSaveFile("mynewfile.xml", doc);
I've tried playing with the various options (using xmlReadFile), but none seem to affect the output. One post here mentioned disabling tag compression, but the example was for Perl and I've found no analogue for C.
Is there an option to disable this behavior?
Just found this enum in the xmlsave module documentation:
Enum xmlSaveOption {
    XML_SAVE_FORMAT = 1 : format save output
    XML_SAVE_NO_DECL = 2 : drop the xml declaration
    XML_SAVE_NO_EMPTY = 4 : no empty tags
    XML_SAVE_NO_XHTML = 8 : disable XHTML1 specific rules
    XML_SAVE_XHTML = 16 : force XHTML1 specific rules
    XML_SAVE_AS_XML = 32 : force XML serialization on HTML doc
    XML_SAVE_AS_HTML = 64 : force HTML serialization on XML doc
    XML_SAVE_WSNONSIG = 128 : format with non-significant whitespace
}
Maybe you can refactor your application to use this module for serialization and experiment a little with these options, especially XML_SAVE_NO_EMPTY.
Your code may look like this:
xmlSaveCtxt *ctxt = xmlSaveToFilename("mynewfile.xml", "UTF-8",
                                      XML_SAVE_FORMAT | XML_SAVE_NO_EMPTY);
if (!ctxt || xmlSaveDoc(ctxt, doc) < 0 || xmlSaveClose(ctxt) < 0)
    // ...deal with the error
