I am writing a Mathematica script and running it in the Linux batch shell. The script produces a list of numbers as its result. I would like to write this list to a file as a single column, without the braces and commas. For this, I tried to use the Export command as
Export["file.txt", A1, "Table"]
but I get the error:
Export::infer: Cannot infer format of file test1.txt
I tried other formats but got the same error.
Could someone please tell me what is wrong and what I can do? Thanks in advance.
From what I understand, you are trying to export the file in the "Table" format. Why don't you try something like this:
Export["file.txt", A1, "Text"]
This:
A1 = {1,2,3};
Export["test.tab", Transpose[{A1}], "Table"];
produces a single column without braces and commas.
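For completeness, here is a sketch of how this might be run from the Linux batch shell, as in the question. The script file name script.m is hypothetical, and the launcher may be called math, MathKernel, or wolfram depending on the Mathematica version:
#!/bin/sh
# script.m contains the two lines above:
#   A1 = {1,2,3};
#   Export["test.tab", Transpose[{A1}], "Table"];
math -script script.m
# test.tab now holds one number per line, with no braces or commas.
cat test.tab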
I am trying to run a command on a list of variables stored as values in a different file. To do that I am creating a new syntax based on the variable names, like this:
WRITE OUT="\Selection.sps"
/"VARIABLE ATTRIBUTE VARIABLES = Final_",Var1," ATTRIBUTE=selectVars('yes')." .
EXECUTE.
The problem is that between Final_ and Var1 I am getting 11 spaces. The file in which I want to use this macro has variable names like Final_Var1 (so in the new file, Final_ is prepended to each variable's name). How can I remove the spaces so that the new variable can be referred to properly? Should I create a new file, or use the COMPUTE and CONCAT commands?
The WRITE command is limited like that - you can't avoid the spaces or trim them within it. For commands like the one you're working on, there is no way to build the command inside the WRITE command itself - you have to build the text in advance and then put it in the WRITE command, like so:
string mycmd(a100).
compute mycmd=concat("VARIABLE ATTRIBUTE VARIABLES = Final_",
ltrim(string(Var1,f4)),
" ATTRIBUTE=selectVars('yes').").
WRITE OUT="\Selection.sps" /mycmd .
exe.
Note that this is not the only way to work on variable lists - you can use Python code within the syntax to build your variable lists more efficiently.
I have found a temporary solution. In order to remove the spaces from the variables, I am creating a new variable using:
*Add a variable to use in .sps file.
NUMERIC A(F4).
COMPUTE A = Var1.
ALTER TYPE A (A25).
STRING CMD (A100).
COMPUTE CMD = CONCAT("VARIABLE ATTRIBUTE VARIABLES = Final_", LTRIM(A), " ATTRIBUTE=selectVars('yes').").
EXECUTE.
WRITE OUT="File location\Selection.sps" /CMD.
EXECUTE.
and now a macro can be created using Selection.sps.
If a simpler way exists, please let me know!
So, I am writing a shell script and I am running a command that gives me an output like:
{"a":"some_text","b":some_other_text","c":"even_more_text"}
Now, I am not sure how to parse it. I basically need the value of "c", i.e. "even_more_text", in a variable, but the solutions I have found on the internet have not worked yet. Thanks in advance.
The output you pasted here is not valid JSON; check it with https://jsonformatter.curiousconcept.com/. The opening double quote is missing before "some_other_text". If you add it, you can then easily parse the output with jq:
./your_script.sh | jq -r ".c"
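If you cannot fix the command that produces the output, one option is to repair the JSON on the fly before piping it to jq. A minimal sketch, assuming the only defect is a missing opening quote before a value (the sed pattern is illustrative, not a general JSON repair):
raw='{"a":"some_text","b":some_other_text","c":"even_more_text"}'
# Insert the missing opening quote after ':' wherever the value does not
# start with '"', then extract the value of "c" with jq.
c=$(printf '%s' "$raw" | sed 's/:\([a-z_]\)/:"\1/g' | jq -r '.c')
echo "$c"    # prints: even_more_text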
I have a problem reading ASCII files into Origin 9.1. My ASCII file looks like the one below (note that there is one space before, two spaces between, and one space after the numbers):
C:\amiX_TimeHist_P1.dat:
0,19325E-02 0,10000E+00
0,97679E-11 0,99997E-11
0,19769E+10 0,10025E+00
0,39169E+00 0,11636E+00
0,47918E+00 0,13156E+00
Later I want to do the following with a .scr file, but for now I am typing the following into the LabTalk script window in Origin 2015:
open -w C:\amiX_TimeHist_P1.dat;
That command works, but the numbers come out in the wrong format.
When I read the file with the import wizard or with ASCII import, I can choose several options so that the numbers fit correctly into my columns, but this has to happen automatically.
Is there a way to read an ASCII file, including setting import parameters, from a script?
Instead of open, you should use impASC to import ASCII data; you can then pass options to the command. In your case the following should work:
impASC fname:=C:\amiX_TimeHist_P1.dat options.FileStruct.DataStruct:=2 options.FileStruct.MultipleDelimiters:=" " options.FileStruct.NumericSeparator:=1;
If you just type impASC in your script window, a dialog box opens in which you can edit the import options and display the corresponding script command.
I have a JSON file and want to read it using Apache Pig.
I tried using the regular JsonLoader, but it seems JsonLoader works only with single-line JSON. Then I tried Elephant Bird, but I am still not able to see the results correctly. Can anyone please suggest a solution?
Input :
{"employees":[
{"firstName":"John", "lastName":"Doe"},
{"firstName":"Anna", "lastName":"Smith"},
{"firstName":"Peter", "lastName":"Jones"}
]}
Note: I don't want to convert the input into a single line.
Script:
A = LOAD 'input' USING com.twitter.elephantbird.pig.load.JsonLoader('-nestedLoad');
B = FOREACH A GENERATE FLATTEN($0#'employees');
Dump B;
The expected result should be:
([firstName#John,lastName#Doe])
([firstName#Anna,lastName#Smith])
([firstName#Peter,lastName#Jones])
As mentioned in the comments by siva, the answer is basically that you do need to change your input to a single line.
JsonLoader and the Elephant Bird loader will always work only with a single line; they will not work with multiline input. You need to convert your input to a single line before passing it to Pig. One workaround would be to write a shell script that replaces the multiline input with a single line using the 'sed' command and then calls the Pig script.
This link will show you how to call Pig from a shell script.
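A minimal sketch of that workaround in shell; the file names input.json and flatten.pig are hypothetical, and it assumes the LOAD statement in the Pig script is pointed at the single-line copy:
#!/bin/sh
# Collapse the multiline JSON into a single line by deleting the newlines.
tr -d '\n' < input.json > input_oneline.json
# flatten.pig is the Pig script shown above, with LOAD reading input_oneline.json.
pig flatten.pig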
I have a shell script with this code:
sqlldr $ws_usr_eva/$ws_pass_eva#$ws_esq_eva CONTROL=/$HOME/controlfiles/CONTROL_FILE.CTL LOG=/$HOME/batch/log/LOG_FILE.$fecfile.log DATA=/$HOME/batch/input/INPUT_FILE_$fecfile.txt > /$HOME/batch/log/result_loader_eva.ora
The variables $ws_usr_eva, $ws_pass_eva and $ws_esq_eva are set before I execute sqlldr. I have already checked their contents and they are OK.
When I run the script, it shows me: LRM-00112: multiple values not allowed for parameter 'control'
I tried running the script without the variables, and it works fine:
sqlldr user/pwd#schema CONTROL=/$HOME/controlfiles/CONTROL_FILE.CTL LOG=/$HOME/batch/log/LOG_FILE.$fecfile.log DATA=/$HOME/batch/input/INPUT_FILE_$fecfile.txt > /$HOME/batch/log/result_loader_eva.ora
I have to use variables in the sqlldr call, because the script reads them from a configuration file.
I also tried sqlldr userid=$ws_usr_eva/$ws_pass_eva#$ws_esq_eva ..., but it didn't work.
Can you help me?
Thanks in advance.
In case it helps anybody: you need to enclose your paths (data, control, log) in single quotes. Double quotes will not work (sqlldr will not complain, but rather simply ignore the double quotes).
In your shell, try quoting your variables:
sqlldr "${ws_usr_eva}/${ws_pass_eva}#${ws_esq_eva}" "CONTROL=${HOME}/controlfiles/CONTROL_FILE.CTL" "LOG=${HOME}/batch/log/LOG_FILE.${fecfile}.log" "DATA=${HOME}/batch/input/INPUT_FILE_${fecfile}.txt" > "${HOME}/batch/log/result_loader_eva.ora"
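To see why the quoting matters: a common cause of LRM-00112 here is that a value read from the configuration file carries stray whitespace, so the unquoted expansion is split into extra words and sqlldr sees more than one value for a parameter. A small demonstration with a hypothetical value:
ws_usr_eva='scott '    # note the trailing space, e.g. picked up from the config file
# Unquoted: the shell splits the expansion into two words.
printf '[%s]\n' $ws_usr_eva/tiger#schema
# Quoted: the expansion stays a single argument.
printf '[%s]\n' "$ws_usr_eva/tiger#schema"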