Use Python to generate input to XeTeX - math-mode

I am wondering what a good way would be to use Python to generate an arithmetic worksheet consisting only of 2-digit numbers squared. Specifically, I want to be able to call a Python program that asks me for parameters such as the range of numbers it can draw from to square and the number of questions I want to generate. Once that is done, the program will generate the numbers, then automatically open a .tex file (already containing my preamble and customizations) and, in a loop, write a block like this for each question:
\begin{exer}
n^2
\end{exer}
%%%%%Solution%%%%%%
\begin{solution}
n^2=n^2
\end{solution}
for some integer n.
Once it is done writing the .tex file, it will run XeTeX and output a PDF ready to use and print. Any help or suggestions? Python is preferred but not mandatory.

Actually your problem is so simple that it doesn't require any special magic. However, I would suggest you don't append your generated content to the file that already contains your preamble; good practice is to leave that file untouched and include the generated part instead (you can either copy the preamble at generation time or use TeX's \include).
Now, let's turn to the generation. Python's string formatter is your friend here: use the example you've given as a template and write the result to the file on every iteration. Don't forget to escape the "{" braces (by doubling them), as they are the placeholder symbols used by the formatter.
At the end (a suggestion), you can use the subprocess module to launch XeTeX; depending on your needs, subprocess.call() is enough, or use Popen().
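A minimal Python 3 sketch of the whole loop, assuming the untouched preamble lives in a worksheet.tex that \input's the generated questions (the exer/solution environments come from the question; the file names, the $...$ wrapping and the xelatex invocation are my own assumptions):

import random
import subprocess

# Doubled {{ }} survive str.format as literal LaTeX braces.
TEMPLATE = (
    "\\begin{{exer}}\n"
    "${n}^2$\n"
    "\\end{{exer}}\n"
    "%%%%%Solution%%%%%%\n"
    "\\begin{{solution}}\n"
    "${n}^2={sq}$\n"
    "\\end{{solution}}\n\n"
)

low = int(input("Smallest number to square: "))
high = int(input("Largest number to square: "))
count = int(input("Number of questions: "))

# questions.tex is the generated part; worksheet.tex holds the untouched
# preamble and an \input{questions} line.
with open("questions.tex", "w") as f:
    for _ in range(count):
        n = random.randint(low, high)
        f.write(TEMPLATE.format(n=n, sq=n * n))

# call() is enough here; use Popen() if you want to stream the log yourself.
subprocess.call(["xelatex", "-interaction=nonstopmode", "worksheet.tex"])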

Related

Binary operation between hexadecimal numbers in Ruby

Is there any simple way in Ruby to operate with Hex numbers?
[Updated 2017-02-23] added background.
I created a Ruby parser to analyse C code.
Background: The C code is automatically generated by a Python script that reads from a big configuration file. This Python script uses templates to create C and H files. Basically these C files are configuration for a C project.
The file contains macro definitions, arrays with parameters and operations like:
0X5EEA11 & 0X000100 // checking if the bit 8 is active
Since this code is safety-related, its correctness has to be ensured somehow, so I decided to use Ruby to parse the generated file and compare it back to the original seeds (configuration files, which are Excel lists with thousands of rows).
So I wonder whether I have to convert the numbers to binary and check bit by bit whether the operation is correct.
I also check the result of the executable, verifying that the mask was calculated correctly.
I saw how to convert to hex, but those are actually integers, so I don't think I can do binary operations on them as if they were hex.
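For what it's worth, hex literals are ordinary integers in both Ruby and Python, so the mask check from the generated C code can be reproduced directly with bitwise operators. A minimal sketch, in Python to match the other examples here, using the values from the question:

# Hex literals are plain integers; & works on them directly.
value = 0x5EEA11
mask = 0x000100            # bit 8

print(value & mask)        # 0 here, i.e. bit 8 of 0x5EEA11 is not set
print((value & mask) != 0)

# When parsing the generated text, int() with base 16 accepts the 0X prefix.
parsed = int("0X5EEA11", 16)
assert parsed == value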

How to find foreign language used in "C comments"

I have a large source code base where most of the documentation and source code comments are in English. But one of the minor contributors wrote comments in a different language, spread across various places.
Is there a simple trick that will let me find them? I imagine first a way to extract all comments from the code and generate a single text file (with possible source file / line number info), then pipe this through some language-detection app.
If that matters, I'm on Linux and the current compiler on this project is Clang.
The only thing that comes to mind is to go through all of the code manually and check it yourself. If it's a similar language that doesn't contain foreign letters, consider using an editor with a spellchecker; that way, the text that isn't recognized will get underlined and be easy to spot.
Other than that, I don't see an easy way to go through with this.
You could make a program that reads the files and prints only the comments to another output file, which you then spell-check, but this may be a waste of time, as you would probably be able to spot the comments yourself just as easily.
If you do make a program for that, however, keep in mind that there are three things to check for (a rough sketch follows this list):
If a comment starts with /*, make sure it stops reading when encountering */
If a comment starts with //, only read one line, unless:
the line starting with // ends with \, in which case read the next line as well
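A rough sketch of that extraction in Python (the file name is illustrative, and it deliberately ignores corner cases such as comment markers inside string literals):

import re

# /* ... */ block comments, and // line comments with \ line continuations.
BLOCK = re.compile(r'/\*.*?\*/', re.DOTALL)
LINE = re.compile(r'//(?:\\\n|[^\n])*')

def extract_comments(path):
    with open(path) as f:
        text = f.read()
    comments = []
    for pattern in (BLOCK, LINE):
        for m in pattern.finditer(text):
            line_no = text.count('\n', 0, m.start()) + 1
            comments.append((line_no, m.group()))
    return sorted(comments)

# Print "file:line: comment" so the output can be fed to a language detector.
for line_no, comment in extract_comments('example.c'):
    print('example.c:%d: %s' % (line_no, comment))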
While it is possible to detect a language from a string automatically, you need way more words than fit in a usual comment to do so.
Solution: Use your own eyes and your own brain...

How to save output from mathematica file into some other format and then using it later?

I have a large program in Mathematica, and it generates many outputs. However, I don't want those outputs to be visible in my program; I want to save them as text or in some other format.
Moreover, I also want to be able to call up any of the outputs later and perform specific operations on it (plotting, squaring, etc.).
Please guide me in this respect.
You need to use the Export function. If you want the output suppressed, use semicolons. See the help file on Export for more.
(I don't know what else to say..)
The Export function is the way to go.
For example, if you ever needed 1 billion digits of Pi:
Export["pi-to-1B.txt", N[[Pi], 1000000000]];

How to transform all the numbers in a text file

I have an XML file that contains, among other things, numbers. Something like:
<things>
<a name="cat">
<vecs>(100,20),(200,40),(50,85)</vecs>
</a>
<b name="dog">
<vecs>(0,10),(5,75)</vecs>
<ratio>85.5</ratio>
</b>
... many more elements and numbers ...
</things>
Unfortunately all of the numbers within <vecs> elements in my file are 4 times larger than they should be. I need to multiply them all by 0.25. Numbers in <ratio> and other elements are fine. So, for example, the first <vecs> line above should read:
<vecs>(25,5),(50,10),(12.5,21.25)</vecs>
Is there a convenient solution (e.g. UNIX command line tool, bash script, etc.) to processing the file so that I can find all the numbers that live within a particular context (e.g. between <vecs> and </vecs>), perform a mathematical operation on them, and replace the existing numeric text in each instance with the result of the operation?
And no, I'm not asking you to write a whole program to solve this particular problem in detail. I'm wondering if there is an existing tool for such purposes or a clever combination of existing tools that could accomplish the job.
The problem itself is fairly easy, but the format is uncommon enough that you have to use a general-purpose scripting language to tackle it. For example, in Python you would write something like this:
from __future__ import print_function
import re

def transform(match):
    # Scale both coordinates of an "(x,y)" pair by 0.25.
    return '(%.2f,%.2f)' % (int(match.group(1)) * 0.25,
                            int(match.group(2)) * 0.25)

for line in open('test.xml'):
    if '<vecs>' in line:
        # Only rewrite the numbers on <vecs> lines.
        print(re.sub(r'\((\d+),(\d+)\)', transform, line), end='')
    else:
        print(line, end='')
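If you go this route, redirect the output to a new file (for example python fix_vecs.py > fixed.xml, where fix_vecs.py is whatever you name the script above) rather than trying to overwrite test.xml in place.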
For particular problems like this, your best bet is to learn a scripting language and use it to solve them.
If you want to use Unix tools for this kind of thing, sed and awk are your friends.

convert multiple .txt to multiple ascii files fast - possible in Matlab?

I have over 120 .txt files (all named like s1.txt, s2.txt, ..., s120.txt) that I need to convert to ASCII extension to use in MATLAB.
My .txt (comma-delimited) files look like the following:
20080102,43.0300,3,9.493,569.567,34174.027,34174027
20080102,43.0600,3,9.498,569.897,34193.801,34193801
In MATLAB I wish to use code similar to the following:
for i = svec;
    %# where svec = [1 2 13 15] some random number between 1 and 120.
    eval(['load %mydirectory', eval(['s',int2str(i)]),'.ascii']);
end;
If I am not mistaken I can't use the above command with .txt files and therefore I must use ASCII files.
Since I have a lot of files to convert and they are large in size, is there a quick way to convert all my files via MATLAB, or perhaps there is a great converting software available for Mac on the web? Would anyone have a better suggestion than using the code above?
Adding to nrz's answer:
I'm not sure what you want to do exactly, but know that you can open any file in MATLAB, both as text (ASCII) or in binary mode. The latter can be achieved using fread.
As a side note, you also asked for a better suggestion for your code.
Well, what did you try to achieve with the two eval invocations? Why not call the commands directly? Do this instead:
for i = svec
    load(['%mydirectory\s', int2str(i), '.txt'], '-ascii');
end
I also took the liberty of adding a backslash that I think you had omitted.
In most cases, you'd be better off without using eval. Check the alternatives...
Can you show an example file? Not every text file is valid for the load command. If your file is not in a valid format, changing the filename extension from .txt to .ascii doesn't help at all. In that case the data must either be converted to a format valid for load or, alternatively, loaded into MATLAB by some other means, e.g. by using fscanf or xlsread. The file structure is needed to choose between these two ways of solving this.
See also load command in matlab loading blank file.
A slightly cleaner way:
for i = 1:120
    fname = fullfile('mydirectory', sprintf('s%d.txt', i));
    X = load(fname, '-ascii');
end
