Pig without Hadoop on Windows 7

I am trying to run PigUnit tests on a Windows 7 machine before running the actual Pig script on an Ubuntu cluster, and I am starting to think that my understanding of "withouthadoop" is not correct.
Do I need to install Hadoop to locally run a PigUnit test on a Windows 7 machine?
I installed:
eclipse Juno & ant
cygwin
I set up:
JAVA_HOME=C:\Program Files\Java\jdk1.6.0_39
PIG_HOME=C:\Users\john.doe\Java\eclipse\pig
PIG_CLASSPATH=%PIG_HOME%\bin
Using Eclipse's Ant builder, I ran the jar-all and pigunit-jar targets to create:
pig.jar
pig-withouthadoop.jar
pigunit.jar
Still when I type pig -x local in cygwin I get:
$./pig -x local
cygpath: can't convert empty path
Exception in thread "main" java.io.IOException: Error opening job jar: /usr/lib/pig/pig-withouthadoop.jar
at org.apache.hadoop.util.RunJar.main(RunJar.java:135)
Caused by: java.io.FileNotFoundException: \usr\lib\pig\pig-withouthadoop.jar (the system cannot find the given path)
at java.util.zip.ZipFile.open(Native Method)
at java.util.zip.ZipFile.<init>(ZipFile.java:127)
at java.util.jar.JarFile.<init>(JarFile.java:136)
at java.util.jar.JarFile.<init>(JarFile.java:73)
at org.apache.hadoop.util.RunJar.main(RunJar.java:133)
When I try to run the test from http://pig.apache.org/docs/r0.10.0/test.html#pigunit from within eclipse using the option "Run as JUnit", I get:
java.io.IOException
at org.apache.pig.pigunit.pig.PigServer.registerScript(PigServer.java:62)
at org.apache.pig.pigunit.PigTest.registerScript(PigTest.java:171)
at org.apache.pig.pigunit.PigTest.assertOutput(PigTest.java:267)
at org.apache.pig.pigunit.PigTest.assertOutput(PigTest.java:262)
at da.utils.pigunit.PigUnitExample.testTop2Queries(PigUnitExample.java:72)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229)
at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50)
at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)
I am starting to think that I missed some crucial basic information for running Pig on Windows. I should also say that I am not an experienced user of Windows 7 and Cygwin; I come from the Unix world.
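The FileNotFoundException above points at a jar path that does not exist. A quick sanity check of the variables the question sets (a hypothetical bash helper, not part of Pig; the variable names are the ones from the question):

```shell
# Hypothetical helper: verify the environment before launching pig.
check_pig_env() {
  [ -n "$JAVA_HOME" ] || { echo "JAVA_HOME is not set"; return 1; }
  [ -d "$PIG_HOME" ]  || { echo "PIG_HOME does not exist: $PIG_HOME"; return 1; }
  [ -f "$PIG_HOME/pig-withouthadoop.jar" ] || { echo "pig-withouthadoop.jar not found under PIG_HOME"; return 1; }
  echo "environment looks OK"
}
```

If any of the three checks fails, the launcher script will be handed an empty or wrong path, which matches both the cygpath error and the FileNotFoundException.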

Don't fight it. Install Hadoop HDInsight server on Windows from the Web Platform installer:
http://www.microsoft.com/web/downloads/platform.aspx
It doesn't take long or take up that much space, and the whole shebang is just set up and running for you. I can't get Pig scripts to take parameters, and there's no HBase, but you get HDFS, Pig and Hive. You can even get a whole local cluster going if you just follow: http://social.msdn.microsoft.com/Forums/en-US/hdinsight/thread/885efc22-fb67-4df8-8648-4ff38098dac6/

I have installed Pig 0.12 in Cygwin (I run Windows 7 64-bit) without installing Hadoop. As far as I can see, the steps I followed were:
Install Cygwin64 (with Perl package)
Download pig-0.12.1.tar.gz, copy to home folder
Extract to home folder in cygwin:
$ tar xzf pig-0.12.1.tar.gz
Export JAVA_HOME:
$ export JAVA_HOME=/cygdrive/c/Program\ Files/Java/jre6/
Add pig to path:
$ export PATH=~/pig-0.12.1/bin/:$PATH
Copy pig-0.12.1.jar to pig.jar:
$ cp pig-0.12.1.jar pig.jar
Export PIG_CLASSPATH:
$ export PIG_CLASSPATH=~/pig-0.12.1/pig.jar
Run pig in local mode (start Grunt):
$ pig -x local
There will be a warning:
"cygpath: cannot create short name of C:\cygwin64\home\xxx\pig-0.12.1\logs"
We can remove it simply by running:
$ mkdir logs
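The steps above can be recapped as one shell function (a sketch; it assumes pig-0.12.1.tar.gz was already extracted into the Cygwin home directory, as in the steps above):

```shell
# Sketch of the steps above; run inside Cygwin after extracting the tarball.
setup_pig() {
  export JAVA_HOME="/cygdrive/c/Program Files/Java/jre6/"
  export PATH=~/pig-0.12.1/bin:$PATH
  cp ~/pig-0.12.1/pig-0.12.1.jar ~/pig-0.12.1/pig.jar
  export PIG_CLASSPATH=~/pig-0.12.1/pig.jar
  mkdir -p ~/pig-0.12.1/logs   # pre-creating logs avoids the cygpath warning
}
# then: setup_pig && pig -x local
```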

Like you, I'm trying at the moment to get a functioning Pig installation on a Windows PC using Cygwin, in order to learn Pig Latin using small datasets on a single JVM. Not a huge ask, you would have thought, but the pain is almost unbearable. I come from a Windows background, and the UNIX part is the steep learning curve for me.
The pig-withouthadoop jar doesn't contain Hadoop, so Hadoop needs to be already installed on your machine to use it; pig.jar contains Pig's own version of Hadoop and so is the one to use if Hadoop is not already installed on your machine. This is the way I understand it, and it seems to be borne out by dumping a list of the contents of each .jar to a text file and viewing the results in Notepad++.
When you type pig -x local at Cygwin's dollar prompt, the bash script 'pig' is invoked and run. Have a look at it (from your PIG_HOME) with $ cd bin; $ cat pig. I've been right through it these last few days with vim (!), and near the end of the code is a little fork for Cygwin users, which casts the environment variables, up until now in Unix format, into a form that the Windows version of java.exe will understand when 'exec java ...' is called right at the end of the script. Without this conversion, the Windows java.exe won't understand its parameters:
# cygwin path translation
if $cygwin; then
    CLASSPATH=`cygpath -p -w "$CLASSPATH"`
    PIG_HOME=`cygpath -d "$PIG_HOME"`
    PIG_LOG_DIR=`cygpath -d "$PIG_LOG_DIR"`
fi
Cygpath is a cygwin utility that converts UNIX-style file paths into Windows-style file paths, and vice versa. The error message: "cygpath: can't convert empty path" must come from here, I think. Check that CLASSPATH, PIG_HOME and PIG_LOG_DIR aren't empty, perhaps by placing your own echo commands in the script.
On my machine and installation, there was an error generated here, but not the same as yours. I found replacing -w and -d with -m, which makes cygpath use the C:/Program Files/Java... syntax conversion, worked. But then other problems appear, which I shall leave for my own question.
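What cygpath -m produces can be illustrated with a toy bash reimplementation (illustration only, not the real utility, which handles many more cases):

```shell
# Toy illustration of cygpath -m: /cygdrive/c/... becomes C:/... (mixed style).
to_mixed() {
  local p="$1"
  local drive rest
  drive=$(printf '%s' "$p" | sed -n 's|^/cygdrive/\(.\)/.*|\1|p' | tr 'a-z' 'A-Z')
  rest=$(printf '%s' "$p" | sed 's|^/cygdrive/./||')
  printf '%s:/%s\n' "$drive" "$rest"
}
to_mixed "/cygdrive/c/Program Files/Java/jre6"   # C:/Program Files/Java/jre6
```

The mixed style keeps forward slashes, which survives both bash quoting and the Windows java.exe argument parsing, which is presumably why -m worked where -w and -d did not.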

According to this note [1], it is not possible to use the Hadoop native libraries on windows 7 directly using Cygwin:
Hadoop native library is supported only on *nix platforms only. Unfortunately it is known not to work on Cygwin [...]
I have traced the error message cygpath: can't convert empty path down to the line JAVA_LIBRARY_PATH=`cygpath -w "$JAVA_LIBRARY_PATH"` in hadoop-config.sh, which I commented out following the advice from [2]:
Comment some translation in hadoop-config.sh.
#if $cygwin; then
#HADOOP_PREFIX=`cygpath -w "$HADOOP_PREFIX"`
#HADOOP_LOG_DIR=`cygpath -w "$HADOOP_LOG_DIR"`
#JAVA_LIBRARY_PATH=`cygpath -w "$JAVA_LIBRARY_PATH"`
#fi
Now I get the following error:
Error before Pig is launched -- ERROR 2999: Unexpected internal error.
java.lang.UnsupportedOperationException: Not implemented by the DistributedFileSystem FileSystem implementation
So the conclusion I draw from this, is that Pig, even in local mode, requires the HDFS. And the HDFS requires the Hadoop native libraries. And the native libraries are known not to work on Cygwin. Hence: IMHO, Pig cannot run using Cygwin as it is.

To run PigUnit on Windows 7, I did not install Cygwin.
Thanks to Konstantin Kudryavtsev: I use his FixHadoopOnWindows.runFix() from
http://simpletoad.blogspot.com/2013/05/pigunit-issue-on-windows.html
I call runFix in my setUp, for example:
private static PigTest test;

@BeforeClass
public static void setUp() throws IOException, ParseException {
    try {
        FixHadoopOnWindows.runFix();
        // TODO: load pig script properly
        test = new PigTest("src/pig/example.pig");
        //test.override(...);
    } catch (Exception e) {
        // don't swallow setup failures silently
        e.printStackTrace();
    }
}
I use Maven; the following dependencies are needed:
<dependency>
<groupId>org.apache.hadoop</groupId>
<artifactId>hadoop-core</artifactId>
<version>1.2.1</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.apache.pig</groupId>
<artifactId>pig</artifactId>
<version>0.15.0</version>
</dependency>
<dependency>
<groupId>org.jboss.forge</groupId>
<artifactId>forge-javassist</artifactId>
<version>2</version>
</dependency>
<dependency>
<groupId>org.apache.pig</groupId>
<artifactId>pigunit</artifactId>
<version>0.15.0</version>
<scope>test</scope>
</dependency>

Related

M1 Mac Think Or Swim Native Installation

I'm trying to install Think or Swim on my M1 Max MacBook Pro using this guide:
https://www.reddit.com/r/thinkorswim/comments/oojac1/guide_running_thinkorswim_natively_on_apple/
On step 6 I keep getting this error; any help appreciated, thank you!
java -jar launcher.jar
java.lang.reflect.InvocationTargetException
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.base/java.lang.reflect.Method.invoke(Method.java:568)
    at com.devexperts.jnlp.Launcher.run(Launcher.java:30)
    at java.base/java.lang.Thread.run(Thread.java:833)
Caused by: java.lang.IllegalAccessError: class com.devexperts.jnlp.utils.URLManager$1 (in unnamed module @0x182a48cb) cannot access class sun.security.util.HostnameChecker (in module java.base) because module java.base does not export sun.security.util to unnamed module @0x182a48cb
    at com.devexperts.jnlp.utils.URLManager$1.verify(URLManager.java:48)
    at java.base/sun.net.www.protocol.https.HttpsClient.checkURLSpoofing(HttpsClient.java:653)
    at java.base/sun.net.www.protocol.https.HttpsClient.afterConnect(HttpsClient.java:594)
    at java.base/sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:183)
    at java.base/sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1665)
    at java.base/sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1589)
    at java.base/java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:529)
    at java.base/sun.net.www.protocol.https.HttpsURLConnectionImpl.getResponseCode(HttpsURLConnectionImpl.java:308)
    at com.devexperts.jnlp.updater.HttpResponse.<init>(HttpResponse.java:26)
    at com.devexperts.jnlp.updater.HttpRequest.doRequest(HttpRequest.java:82)
    at com.devexperts.jnlp.updater.HttpRequest.doGetRequest(HttpRequest.java:63)
    at com.devexperts.jnlp.utils.Utils.getVersion(Utils.java:201)
    at com.devexperts.jnlp.updater.ModuleManager.isUptodate(ModuleManager.java:363)
    at com.devexperts.jnlp.UpdateManager.isModuleUptodate(UpdateManager.java:154)
    at com.devexperts.jnlp.UpdateManager.main(UpdateManager.java:442)
    ... 6 more
Before step 6, do this:
Type "cd thinkorswim"
This tells the computer where the folder is, I believe.
The next problem you may encounter is finding the file path for the usergui. Change the 1970.0.70 to the folder name after "usergui".
EG:
"sudo cp ~/Downloads/jna-platform.jar ~/Downloads/thinkorswim/usergui/1970.0.70/jna-platform-3.5.2.jar"
Make sure you go to wherever your file is (I kept mine in Downloads permanently): Downloads -> thinkorswim -> usergui -> 1971.1.2 is the file path for mine.
"sudo cp ~/Downloads/jna-platform.jar ~/Downloads/thinkorswim/usergui/1971.1.2/jna-platform-3.5.2.jar"
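To avoid hard-coding the version folder, a small helper can pick the newest directory under usergui (hypothetical helper, not from the guide; plain sort is lexicographic, which happens to order these folder names correctly but is not a true version sort):

```shell
# Hypothetical helper: print the newest version folder under usergui,
# so the cp command above need not hard-code 1971.1.2.
latest_usergui() {
  ls -1 "$1/usergui" | sort | tail -n 1
}
# usage: sudo cp ~/Downloads/jna-platform.jar \
#   ~/Downloads/thinkorswim/usergui/$(latest_usergui ~/Downloads/thinkorswim)/jna-platform-3.5.2.jar
```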
Once you sort everything out, keep these 3 commands close:
cd ~/Downloads (or wherever the folder is)
cd thinkorswim
sudo java -jar launcher.jar
This is how you're going to relaunch TOS from now onwards.
Enjoy the smoothness. It's truly significant.

Unable to run SparkR in Rstudio

I can't use SparkR in RStudio because I'm getting this error:
Error in sparkR.sparkContext(master, appName, sparkHome, sparkConfigMap, :
  JVM is not ready after 10 seconds
I have tried to search for the solution but cant find one. Here is how I have tried to setup sparkR:
Sys.setenv(SPARK_HOME="C/Users/alibaba555/Downloads/spark") # The path to your spark installation
.libPaths(c(file.path(Sys.getenv("SPARK_HOME"), "R", "lib"), .libPaths()))
library("SparkR", lib.loc="C/Users/alibaba555/Downloads/spark/R") # The path to the lib folder in the spark location
library(SparkR)
sparkR.session(master="local[*]", sparkConfig=list(spark.driver.memory="2g"))
Now execution starts with this message:
Launching java with spark-submit command
C/Users/alibaba555/Downloads/spark/bin/spark-submit2.cmd
sparkr-shell
C:\Users\ALIBAB~1\AppData\Local\Temp\Rtmp00FFkx\backend_port1b90491e4622
And finally after a few minutes it returns an error message:
Error in sparkR.sparkContext(master, appName, sparkHome,
sparkConfigMap, : JVM is not ready after 10 seconds
Thanks!
It looks like the path to your spark library is wrong. It should be something like: library("SparkR", lib.loc="C/Users/alibaba555/Downloads/spark/R/lib")
I'm not sure if that will fix your problem, but it could help. Also, what versions of Spark/SparkR and Scala are you using? Did you build from source?
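One way to verify the suggestion above is to check that the SparkR package actually sits under the lib.loc path (a hypothetical shell check; adjust the root to your actual Spark directory):

```shell
# Hypothetical check: SparkR's R package must live under $SPARK_HOME/R/lib
# for lib.loc to find it.
check_sparkr_lib() {
  if [ -d "$1/R/lib/SparkR" ]; then
    echo "SparkR library found"
  else
    echo "SparkR library missing under $1/R/lib"
  fi
}
```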
What seemed to be causing my issues boiled down to the working directory of our users being a network-mapped drive.
Changing the working directory fixed the issue.
If by chance you are also using databricks-connect make sure that the .databricks-connect file is copied into the %HOME% of each user who will be running Rstudio or set up databricks-connect for each of them.

Downloading spark-csv in Windows

I am a beginner in the Spark world and want to run my Machine Learning algorithms using SparkR.
I installed Spark in standalone mode on my laptop (Win 7 64-bit) and I am able to run Spark (1.6.1), PySpark and start SparkR in Windows following this effective guide: link . Once I started SparkR I began with the famous flights example:
#Set proxy
Sys.setenv(http_proxy="http://user:password#proxy.companyname.es:8080/")
#Set SPARK_HOME
Sys.setenv(SPARK_HOME="C:/Users/amartinezsistac/spark-1.6.1-bin-hadoop2.4")
#Load SparkR and its library
.libPaths(c(file.path(Sys.getenv("SPARK_HOME"),"R", "lib"), .libPaths()))
library(SparkR)
#Set Spark Context and SQL Context
sc = sparkR.init(master="local")
sqlContext <- sparkRSQL.init(sc)
#Read Data
link <- "s3n://mortar-example-data/airline-data"
flights <- read.df(sqlContext, link, source = "com.databricks.spark.csv", header= "true")
Nevertheless, I receive the following error message after the last line:
Error in invokeJava(isStatic = TRUE, className, methodName, ...) :
java.lang.ClassNotFoundException: Failed to find data source: com.databricks.spark.csv. Please find packages at http://spark-packages.org
at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.lookupDataSource(ResolvedDataSource.scala:77)
at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:102)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:119)
at org.apache.spark.sql.api.r.SQLUtils$.loadDF(SQLUtils.scala:160)
at org.apache.spark.sql.api.r.SQLUtils.loadDF(SQLUtils.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.api.r.RBackendHandler.handleMethodCall(RBackendHandler.scala:141)
at org.apache.spark.api.r.RBackendHandler.ch
It seems the reason is that I do not have the spark-csv package installed, which can be downloaded from this page (GitHub link). As on Stack, the advice on the spark-packages.org website (link) is to run $SPARK_HOME/bin/spark-shell --packages com.databricks:spark-csv_2.11:1.4.0, which is for a Linux installation.
My question is: How could I run this code line from Windows 7 cmd in order to download this package?
I also tried an alternative solution for my error message (GitHub), without success:
#In master you don't need spark-csv.
#CSV data source is built into SparkSQL. Just use it as follows:
flights <- read.df(sqlContext, "out/data.txt", source = "com.databricks.spark.csv", delimiter="\t", header="true", inferSchema="true")
Thanks in advance to everyone.
It is the same for Windows. When you start spark-shell from the bin directory, start it this way:
spark-shell --packages com.databricks:spark-csv_2.11:1.4.0

Eclipse still thinks I have RVM installed, but I don't

I'm using Eclim to get auto-completion for Java, Ruby, etc in Vim. It starts an instance of Eclipse. Eclipse still thinks I have RVM installed for some reason (I use rbenv now). Any idea how I should get rid of this configuration problem or work-around this error?
2014-06-22 22:43:10,123 INFO [org.eclim.plugin.jdt.PluginResources] Setting 'JRE_SRC' to '/Library/Java/JavaVirtualMachines/jdk1.8.0_05.jdk/Contents/Home/src.zip'
org.eclipse.core.runtime.CoreException: Exception occurred executing command line.
at org.eclipse.debug.core.DebugPlugin.exec(DebugPlugin.java:875)
at org.eclipse.dltk.internal.launching.execution.LocalExecEnvironment.exec(LocalExecEnvironment.java:72)
at org.eclipse.dltk.launching.ScriptLaunchUtil.runScriptWithInterpreter(ScriptLaunchUtil.java:85)
at org.eclipse.dltk.ruby.internal.launching.RubyGenericInstall$BuiltinsHelper.generateLines(RubyGenericInstall.java:70)
at org.eclipse.dltk.ruby.internal.launching.RubyGenericInstall$BuiltinsHelper.load(RubyGenericInstall.java:171)
at org.eclipse.dltk.ruby.internal.launching.RubyGenericInstall$BuiltinsHelper.getSources(RubyGenericInstall.java:144)
at org.eclipse.dltk.ruby.internal.launching.RubyGenericInstall.getBuiltinModules(RubyGenericInstall.java:246)
at org.eclipse.dltk.internal.core.BuiltinProjectFragment.isSupported(BuiltinProjectFragment.java:97)
at org.eclipse.dltk.internal.core.ScriptProject.computeProjectFragments(ScriptProject.java:673)
at org.eclipse.dltk.internal.core.ScriptProject.computeProjectFragments(ScriptProject.java:605)
at org.eclipse.dltk.internal.core.ScriptProject.computeProjectFragments(ScriptProject.java:565)
at org.eclipse.dltk.internal.core.ScriptProject.getAllProjectFragments(ScriptProject.java:2921)
at org.eclipse.dltk.internal.core.ScriptProject.getAllProjectFragments(ScriptProject.java:2915)
at org.eclipse.dltk.core.search.indexing.core.ProjectRequest.run(ProjectRequest.java:67)
at org.eclipse.dltk.core.search.indexing.AbstractJob.execute(AbstractJob.java:76)
at org.eclipse.dltk.internal.core.search.processing.JobManager.run(JobManager.java:467)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: Cannot run program "/Users/ivan/.rvm/rubies/ruby-2.0.0-p195/bin/ruby" (in directory "/var/folders/nh/07hs5mmj0hs7fdq3181dwpbc0000gn/T/dltk60850.tmp/scripts"): error=2, No such file or directory
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1042)
at java.lang.Runtime.exec(Runtime.java:620)
at org.eclipse.debug.core.DebugPlugin.exec(DebugPlugin.java:871)
... 16 more
Caused by: java.io.IOException: error=2, No such file or directory
at java.lang.UNIXProcess.forkAndExec(Native Method)
at java.lang.UNIXProcess.<init>(UNIXProcess.java:185)
at java.lang.ProcessImpl.start(ProcessImpl.java:134)
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1023)
... 18 more
Eclim uses .buildpath for projects to set configurations. Check to make sure there aren't any remnant rvm / ruby configuration files left in any of your current active/existing projects in Eclipse.
Eclim uses .buildpath files to set the configuration for each project. There was an old project on my hard drive that had a reference to an rvm ruby in the text of the .buildpath file. It can be really hard to track down all old projects without the use of a file find utility such as locate. For this particular issue, I used locate buildpath | xargs grep rvm to search each buildpath file for the string rvm. Because the stack trace does not point to the project or build path file, only by doing this kind of search was I able to resolve the issue.
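The same search can be done with find, for systems where locate's database is missing or stale (a sketch; assumes GNU findutils and that your projects live under the given root):

```shell
# Sketch: list every .buildpath file under a root that mentions rvm.
find_rvm_buildpaths() {
  find "$1" -name .buildpath -exec grep -l "rvm" {} + 2>/dev/null
}
# usage: find_rvm_buildpaths "$HOME"
```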

Leiningen repl EOF exception in project

Just installed Leiningen 2.1.2 (lein.bat) on Windows XP in D:\lein\ and added this dir to the PATH.
Then I started repl
D:\lein>lein repl
and it runs fine.
Also it runs in other dir and can execute commands well.
Then I made a sample project 'helloworld':
D:\lein>lein new app helloworld
Lein made project dir with sample app.
Then I went to the project dir with
D:\lein>cd helloworld
And now I run the command inside the project folder:
D:\lein\helloworld>lein repl
and get this error:
Exception in thread "main" clojure.lang.LispReader$ReaderException: java.lang.RuntimeException: EOF while reading string
at clojure.lang.LispReader.read(LispReader.java:220)
at clojure.core$read.invoke(core.clj:3407)
at clojure.core$read.invoke(core.clj:3405)
at clojure.main$eval_opt$fn__6602.invoke(main.clj:306)
at clojure.main$eval_opt.invoke(main.clj:306)
at clojure.main$initialize.invoke(main.clj:327)
at clojure.main$script_opt.invoke(main.clj:353)
at clojure.main$main.doInvoke(main.clj:440)
at clojure.lang.RestFn.invoke(RestFn.java:3894)
at clojure.lang.Var.invoke(Var.java:527)
at clojure.lang.AFn.applyToHelper(AFn.java:410)
at clojure.lang.Var.applyTo(Var.java:532)
at clojure.main.main(main.java:37)
Caused by: java.lang.RuntimeException: EOF while reading string
at clojure.lang.Util.runtimeException(Util.java:219)
at clojure.lang.LispReader$StringReader.invoke(LispReader.java:461)
at clojure.lang.LispReader.readDelimitedList(LispReader.java:1148)
at clojure.lang.LispReader$ListReader.invoke(LispReader.java:982)
at clojure.lang.LispReader.readDelimitedList(LispReader.java:1148)
at clojure.lang.LispReader$ListReader.invoke(LispReader.java:982)
at clojure.lang.LispReader.readDelimitedList(LispReader.java:1148)
at clojure.lang.LispReader$ListReader.invoke(LispReader.java:982)
at clojure.lang.LispReader.readDelimitedList(LispReader.java:1148)
at clojure.lang.LispReader$ListReader.invoke(LispReader.java:982)
at clojure.lang.LispReader.readDelimitedList(LispReader.java:1148)
at clojure.lang.LispReader$ListReader.invoke(LispReader.java:982)
at clojure.lang.LispReader.read(LispReader.java:185)
... 12 more
Exception in thread "Thread-1" clojure.lang.ExceptionInfo: Subprocess failed {:exit-code 1}
at clojure.core$ex_info.invoke(core.clj:4327)
at leiningen.core.eval$fn__2654.invoke(eval.clj:213)
at clojure.lang.MultiFn.invoke(MultiFn.java:231)
at leiningen.core.eval$eval_in_project.invoke(eval.clj:283)
at leiningen.repl$start_server.invoke(repl.clj:117)
at leiningen.repl$server$fn__6110.invoke(repl.clj:173)
at clojure.lang.AFn.applyToHelper(AFn.java:159)
at clojure.lang.AFn.applyTo(AFn.java:151)
at clojure.core$apply.invoke(core.clj:617)
at clojure.core$with_bindings_STAR_.doInvoke(core.clj:1788)
at clojure.lang.RestFn.invoke(RestFn.java:425)
at clojure.lang.AFn.applyToHelper(AFn.java:163)
at clojure.lang.RestFn.applyTo(RestFn.java:132)
at clojure.core$apply.invoke(core.clj:621)
at clojure.core$bound_fn_STAR_$fn__4102.doInvoke(core.clj:1810)
at clojure.lang.RestFn.invoke(RestFn.java:397)
at clojure.lang.AFn.run(AFn.java:24)
at java.lang.Thread.run(Unknown Source)
REPL server launch timed out.
I feel I missed something or my system is messed up somehow. Anyone have ideas?
SOLVED
Installed JDK instead of JRE and it works ok.
ALSO
The problem can arise when a java.exe from a JRE comes first on the PATH. I cleaned my system by recursively checking where java.exe appears and removing it from the PATH everywhere except the JDK path.
It can be:
under the windows\system32 folder (just delete it from there)
under JRE paths (remove those paths from the PATH variable)
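The cleanup idea can be illustrated in bash (illustration only; on Windows you would edit PATH in the Environment Variables dialog, and the entry names are hypothetical):

```shell
# Illustration: drop every PATH entry that points at a JRE, keeping JDK ones.
strip_jre_paths() {
  echo "$1" | tr ':' '\n' | grep -vi 'jre' | paste -sd: -
}
strip_jre_paths "/c/jdk1.7.0/bin:/c/jre6/bin:/c/windows/system32"
# → /c/jdk1.7.0/bin:/c/windows/system32
```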
You have to change the lein version in your lein.bat script from 2.1.3 to 2.1.0 (LEIN_VERSION=2.1.0), then enter the command lein self-install at the command prompt.
This worked for me:
remove directory %home%.lein with all contents/files under it
download win installer http://leiningen-win-installer.djpowell.net/
install leiningen
run repl like this:
[WINKEY + R]
CMD.EXE [ENTER]
C:\> lein repl
Upgrading from jdk 1.6 to jdk 1.7 fixed that problem for me.
