I have a large number in Mathematica. I have even gotten this number into base-16 form using OutputForm[]. Basically, I am trying to write the number out to a file in hex format.
Please keep in mind that I am using 123456 in these examples instead of my 70,000-digit number.
Whenever I write a file using a simple Put[123456, "file.raw"] command, I get a raw data file whose contents are 3132333435360A, i.e. the ASCII digits followed by a line ending.
If I use a Put[OutputForm[BaseForm[123456, 16]], "file.raw"] command, I get a file containing 31653234300A202020202031360A, so the data is in hex format, but it is still written as text rather than as raw data.
I would like the hex form of the number dumped as raw data.
I have tried Export, BinaryWrite, and DumpSave, but I can't figure it out.
I'm just getting a headache, I guess, because I can't see past what I need to do.
One thing I did try was doing:
Export["file.raw", 123456];
But the file is not raw enough. What I mean by that is that there is header data and extra crap.
Would love to get this working. Thanks.
Please let us know what you expect to see in your output file, and what you want to use it for. Do you want something a human can read, or something in a specified format to be used by a computer? Please provide an example.
The two examples using Put[] correctly produce files containing ASCII characters corresponding to the text representations of your inputs, which are human-readable.
I think what you're looking for is IntegerString[_,16]:
In[33]:= IntegerString[123456, 16]
Out[33]= "1e240"
str = OpenWrite[];
WriteString[str, IntegerString[123456, 16]];
Close[str] (* no trailing semicolon: Close returns the file name, so % below refers to it *)
FilePrint[%]
1e240
(Using WriteString instead of Put avoids having the string written out with enclosing quotation marks.)
I have a simple struct containing some stuff, and also a Text field. I was looking at the result of encoding this data using Capnp, and for some reason the value of the text field appears in the encoded output twice! That doesn't seem very efficient or sane. Why does this happen?
Cap'n Proto does not encode text fields twice. To understand what happened in your case, we'd need to see your code.
I would like to extract a line of text but am having difficulty getting the regex right. Any help would be appreciated.
String to extract: KSEA 122053Z 21008KT 10SM FEW020 SCT250 17/08 A3044 RMK AO2 SLP313 T01720083 50005
For some reason Stack Overflow won't let me cut and paste the XML data here, since it includes "<>" characters. Basically, I am trying to extract the data between "raw_text" ... "/raw_text" from an XML document that will always be formatted like the following: http://www.aviationweather.gov/adds/dataserver_current/httpparam?dataSource=metars&requestType=retrieve&format=xml&hoursBeforeNow=3&mostRecent=true&stationString=PHNL%20KSEA
However, the station name, in this case "KSEA", will not always be the same. It will change based on user input into a search variable.
Thanks in advance.
If I can assume that every string you want starts with KSEA, then the answer would be:
.*(KSEA.*?)KSEA.*
Using ? lets .* match as little as possible.
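For example, here is a minimal Ruby sketch of the same lazy-quantifier idea (no language is given in the question, so Ruby is an assumption), anchored on the raw_text tags instead of the station code so it keeps working when the station changes. The xml string is an abbreviated stand-in for the feed returned by the URL above.

station = "KSEA"  # changes based on user input
# Abbreviated, hypothetical sample of the aviationweather.gov response:
xml = "<METAR><raw_text>#{station} 122053Z 21008KT 10SM FEW020 SCT250 17/08 A3044 " \
      "RMK AO2 SLP313 T01720083 50005</raw_text><station_id>#{station}</station_id></METAR>"

# The lazy .*? stops at the first </raw_text> instead of the last closing tag.
metar = xml[%r{<raw_text>(.*?)</raw_text>}m, 1]
puts metar
# => KSEA 122053Z 21008KT 10SM FEW020 SCT250 17/08 A3044 RMK AO2 SLP313 T01720083 50005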
I have a large CSV with a large number of columns. I am trying to count the number of lines using
File.open(file).readlines.to_a.compact.count.to_i
It displays 57 although there are only 56 rows. Upon close examination I found that a part of one line is wrapped to form the next line. How to get the correct count?
You need to show an example of the incoming data if you want us to help beyond generic answers.
To fix the problem, you have to be able to identify the line. We can't help you there because it could look like anything. Making a wild guess, I'd say that one of the columns had an embedded new-line in it, which forces the line to wrap.
If the file is a true CSV file, that column should be wrapped in double quotes, so you could search the file for lines that do NOT end with whatever data type should be in the last column, read the next line, join them, and then rewrite the file. But, again, we have nothing to work with, because your file's format could be a huge number of different things.
Your best bet is to use the CSV class that comes with Ruby, and let it read the file, instead of trying to treat it like a text file. CSV files are text, but they are formatted to maintain the columns and rows, so using the CSV class will give you a better chance of getting at the data.
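As a rough sketch of that approach (assuming the file has valid CSV quoting; the file name and the headers option are placeholders for your actual setup):

require "csv"

# Count logical CSV rows rather than physical lines; a newline embedded in a
# quoted field stays inside its row instead of starting a new one.
row_count = 0
CSV.foreach("file.csv", headers: true) { row_count += 1 }  # headers: true skips the header row
puts row_count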
Looking at your code:
There are a number of ways to count the number of lines in a file, including the easiest which is:
`wc -l /path/to/file`.to_i
if you're using *nix.
Using File.open(file).readlines.to_a is horribly redundant and not fast or scalable if your file is big.
readlines returns an array.
to_a returns an array.
Why turn the array into an array?
readlines loads an entire file into memory, then splits it on line ends into an array. That process can be a lot slower than simply reading the file line-by-line and incrementing a counter, plus "slurping" can make your program crawl if the file is larger than available memory.
See "Why is "slurping" a file not a good practice?" for more information.
compact removes nils from an array. readlines should never return any nils so compact will iterate over the array looking for something that shouldn't exist.
count returns an integer.
to_i converts the receiver to an integer.
In other words, to_i is turning an integer into an integer. Why?
If you want to do it in Ruby instead of using wc -l, do something simple and fast:
lines_in_file = 0
File.foreach(some_file) { lines_in_file += 1 }
After running that, lines_in_file will contain the number of lines read. Memory won't be impacted and it'll run like blue blazes on huge files.
One annoying thing about encoded packages is that they have to be in a separate file. If we want to distribute a simple self-contained app (encoded), we need to supply two files: the app "interface" and the app package.
If I place all the content of the encoded file inside a string and transform that string into an InputStream, I'm halfway to viewing that package content as a file.
But Get, which to my knowledge is the only operation (also used by Needs) that performs the decoding, doesn't work on Streams. It only works on real files.
Can someone figure out a way to Get a Stream?
I'm waiting for Mathematica to arrive on my iPhone, so I couldn't test anything, but why don't you write the string to a temporary file and Get that?
Update
Here's how to do it:
encoded = ToFileName[$TemporaryDirectory, "encoded"];
Export[encoded, "code string", "Text"]; (*export encrypted code to temp file *)
It's important to copy the contents of the code string from the ASCII file containing the encoded code (made earlier using Encode) with an ASCII editor and paste it between the existing empty quotes (""). Mathematica will then automatically escape any backslashes and quotes that may be in the code. I can't do it here in the sample code, as SO's Markdown messes with the string.
Get[encoded] (* get encrypted code and decode *)
DeleteFile[encoded] (* Remove temp file *)
Final Answer
Get doesn't appear to be necessary for decoding. ImportString does work as well:
ImportString["code string", "NB"]
As above, paste your encoded text from an ASCII editor straight between the "" and let MMA do the escaping.
I don't know of a way to Get a Stream, but you could store the encoded data in your single package, write it out to a temp file, then read the temp file back in with Get.
Just to keep things up to date:
Get works with streams since V9.0.
I need to parse some text from pdfs but the pdf formatting results in extremely unreliable spacing. The result is that I have to ignore the spaces and have a continuous stream of non-space characters.
Any suggestions on how to parse the string and put spaces back into the string by guessing?
I'm using ruby. Or should I say I'musingruby?
Edit: I've pulled the text out using pdf-reader. Some of the pdf files are nicely formatted and some are not. An example of text mixed with positioning:
.7aspe-5.5cts-715.1o0.6f-708.5f-0.4aces-721.4that-716.3are-720.0i-1.8mportant-716.3in-713.9soc-5.5i-1.8alcommunica6.6tion6.3.-711.6Althoug6.3h-708.1m-1.9od6.3els-709.3o6.4f-702.8f5.4ace-707.9proc6.6essing-708.2haveproposed-611.2ways-615.5to-614.7deal-613.2with-613.0these-613.9diff10.4erent-613.7tasks,-611.9it-617.1remainsunclear-448.0how-450.7these-443.2mechanisms-451.7might-446.7be-447.7implemented-447.2in-450.3visualOne-418.9model-418.8of-417.3human-416.4face-421.9processing-417.5proposes-422.7that-419.8informa-tion-584.5is-578.0processed-586.1in-583.1specialised-584.7modules-577.0(Breen-584.4et-582.9al.,-582.32002;Bruce-382.1and-384.0Y92.0oung,-380.21986;-379.2Haxby-379.9et-380.5al.,-
and if I print just the string data, it looks like this (I added returns at the end of each line to keep it from messing up the layout here):
'Distinctrepresentationsforfacialidentityandchangeableaspectsoffacesinthehumantemporal
lobeTimothyJ.Andrews*andMichaelP.EwbankDepartmentofPsychology,WolfsonResearchInstitute,
UniversityofDurham,UKReceived23December2003;revised26March2004;accepted27July2004Availab
leonline14October2004Theneuralsystemunderlyingfaceperceptionmustrepresenttheunchanging
featuresofafacethatspecifyidentity,aswellasthechangeableaspectsofafacethatfacilitates
ocialcommunication.However,thewayinformationaboutfacesisrepresentedinthebrainremainsc
ontroversial.Inthisstudy,weusedfMRadaptation(thereductioninfMRIactivitythatfollowsthe
repeatedpresentationofidenticalimages)toaskhowdifferentface-andobject-selectiveregionsofvisualcortexcontributetospecificaspectsoffaceperception'
The data is spit out by callbacks so if I print each string as it is returned it looks like this:
'The
-571.3
neural
-573.7
system
-577.4
underly
13.9
ing
-577.2
face
-573.0
perc
13.7
eption
-574.9
must
-572.1
repr
20.8
esent
-577.0
the
unchangin
14.4
g
-538.5
featur
16.5
es
-529.5
of
-536.6
a
-531.4
face
'
On examination it looks like the true spaces are large negative numbers (< -300) and the false spaces are much smaller positive numbers. Thanks, guys. Just getting to the point where I am asking the question clearly helped me answer it!
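A rough sketch of that observation in Ruby (the tokens array mirrors the callback output above, and the -300 cutoff is just the guess described here, not a general rule):

# Strings interleaved with positioning numbers, as delivered by the callbacks:
tokens = ["The", -571.3, "neural", -573.7, "system", -577.4, "underly", 13.9, "ing"]

text = ""
tokens.each do |tok|
  if tok.is_a?(String)
    text << tok
  elsif tok < -300          # large negative offset => a real word break
    text << " "
  end                       # small offsets are kerning, so they add nothing
end
puts text   # => "The neural system underlying"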
Hmmmm... I'd have to say that guessing is never a good idea. Looking at the problem's root cause and solving that is the answer; anything else is a kludge.
If the spacing is unreliable from the PDF, how is it unreliable? The PDF viewer needs to be able to space the text reliably, so the data is there somewhere; you just need to find it.
EDIT following comment:
The idea of parsing the file using a dictionary (your only other option, really, apart from randomly inserting spaces and hoping for the best) and inserting spaces at identified word boundaries (a real problem when dealing with punctuation, plurals that don't alter the base word, etc.) would, I believe, be a much greater programming challenge than correctly parsing the PDF in the first place. After all, PDF is clearly defined, whereas English is somewhat woolly.
Why not go down the route of existing solutions like ps2ascii on Linux? Call it from your Ruby and pick up the result, along the lines of the sketch below.
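A minimal sketch (assuming ps2ascii, which ships with Ghostscript, is installed and on the PATH; pdftotext from poppler-utils is a common alternative, and the input path is a placeholder):

# Shell out to an existing converter and capture its stdout.
text = `ps2ascii /path/to/input.pdf`
raise "ps2ascii failed" unless $?.success?
puts text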
PDF doesn't only store spaces as space characters; it also uses layout commands for spacing (so it doesn't print a space, but moves the "pen" to the right). Perhaps you should have a look at the PDF Reference (the big PDF at the bottom of the site); Chapter 9, "Text", should be what you're looking for.
EDIT: After reading your comment to Lazarus' answer, this doesn't seem to be what you're looking for. I think you should try to get a word list from somewhere and try to split your text using it. A good strategy would be to do that using recursion, because for example:
"meandyou"
The first word could be "me" or "mean", but if you try "mean", "dyou" doesn't make sense, so it will be "me". The same goes for the next word, which could be "a", "an", or "and"; only "and" makes sense.
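A rough sketch of that recursion in Ruby (the tiny word list is a stand-in; a real dictionary such as /usr/share/dict/words would go there instead, and this version tries shorter prefixes first, backtracking when a split dead-ends):

require "set"

# Toy dictionary; swap in a real word list for actual use.
WORDS = %w[me mean a an and you].to_set

# Returns the list of dictionary words that exactly covers `text`, or nil if no split works.
def segment(text, dict = WORDS)
  return [] if text.empty?
  (1..text.length).each do |len|
    head = text[0, len]
    next unless dict.include?(head)
    rest = segment(text[len..-1], dict)
    return [head, *rest] if rest
  end
  nil
end

p segment("meandyou")   # => ["me", "and", "you"]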
If it were me, I'd go back to the source PDFs and try a different method of extracting the text, such as iText (for Java) or maybe some kind of PDF-to-HTML-to-text conversion tool.