I am trying to run Pentaho Data Integration (version 8.3) on my Windows machine and it is not working.
These are the steps I tried to make it work:
Tried rebooting the machine without success.
Also tried to run the Spoon.bat command directly from the directory where Pentaho is located, but it did not work.
Checked whether my Java installation had changed since the last time it worked; it had not. What could be happening?
In a support chat I read that someone was able to fix the problem by clearing the cache, but they did not explain how to do it. How do I clear the cache?
Have you installed a JDK 1.8 environment?
You should open PowerShell or another terminal and check: java -version
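For example, from a Command Prompt (a sketch; paths and versions vary per machine):
:: confirm which Java the shell resolves and what version it is
java -version
where java
:: confirm JAVA_HOME points at the JDK install directory (it may be unset on some machines)
echo %JAVA_HOME%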
These are the steps to clear the cache in a Windows environment:
Go to C:\Users\youruser\.kettle
Look for the db.cache-* file (I have PDI version 8.3; my file is named db.cache-8.3.0.0-371)
Edit the file with any editor (e.g. Notepad) and erase all of its content
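For example, from a Command Prompt, the same can be done like this (a sketch; adjust the cache file name to match your PDI build):
cd %USERPROFILE%\.kettle
:: truncate the database cache file to zero bytes
type nul > db.cache-8.3.0.0-371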
It worked for me!
First, you should run .\data-integration\set-pentaho-env.bat; it will set up the PDI environment. The most important things are JAVA_HOME and the Java version: PDI 8.3 can only run on Java 1.8 and above.
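For example, from a Command Prompt (a sketch, assuming PDI was unpacked under C:\pentaho; adjust the path to your install):
cd C:\pentaho\data-integration
:: set up the PDI environment (Java detection), then start Spoon from the same window so any errors stay visible
call set-pentaho-env.bat
Spoon.bat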
I am a newbie here. I just love to code and develop my own programs... The day before yesterday I got the idea to set up Hadoop on Windows. I fetched the whole stack but could not install it successfully. I am attaching screenshots together with my query. My Windows version is 8.1, 64-bit.
The screenshot that you provided shows that JAVA_HOME is not set correctly; can you make sure that JAVA_HOME is set properly on your system?
Please verify that the javac and java commands work from your command prompt.
Or can you please provide the content of hadoop-env.cmd, so that we can find out the root cause?
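As a sketch, setting it for the current Command Prompt session would look like this (the JDK path is only an example; use your actual install path, ideally one without spaces):
:: point JAVA_HOME at the JDK install directory, not at its bin subfolder
set "JAVA_HOME=C:\Java\jdk1.8.0_05"
set "PATH=%JAVA_HOME%\bin;%PATH%"
:: both commands should now print a version
java -version
javac -version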
It would be nice if I could download the source code of Spark from GitHub, then build it with sbt on my Windows machine, and use IntelliJ to make small modifications to the code base. I have installed Spark on Windows quite a few times before, but I have just used the packaged tarball, not the source code. Has anyone built the source code on a Windows machine before?
You also need to account for the simple differences between \r\n and \n line endings. So you should use the dos2unix utility and make sure that you are using an up-to-date version of Cygwin when installing and running the Hadoop utilities.
I found the Spark developer tools page and it was very helpful. I needed "build/sbt compile":
http://spark.apache.org/developer-tools.html#reducing-build-times
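For reference, the basic flow from a bash-style shell on Windows (e.g. Git Bash or Cygwin) looks roughly like this; treat it as a sketch, since the exact targets are described on that page:
# fetch the source and compile it with the bundled sbt launcher
git clone https://github.com/apache/spark.git
cd spark
./build/sbt compile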
An update was downloaded automatically by my 2.6.3.RELEASE Build 201411281425.
STS (Spring Tool Suite) asks to install it, and when I click on the pop-up window it does some things and then stops with the following message, which seems to indicate that it wants to delete itself.
I can understand why this fails but I am not sure why STS would think that this was possible.
I could not find any instructions about manually installing the zip file that is available as a download, as an alternative way to upgrade my installation.
How do I fix the automatic install or manually install the zip?
(I am on Windows 7)
Error message:
An error occurred while uninstalling
session context was:
(profile=DefaultProfile, phase=org.eclipse.equinox.internal.p2.engine.phases.Uninstall,
operand=[R]org.springsource.sts.ide.executable.win32.win32.x86_64
3.6.3.201411281415-RELEASE-e44
--> null,
action=org.eclipse.equinox.internal.p2.touchpoint.natives.actions.CleanupzipAction).
Backup of file C:\RAMDrive\spring\STS.exe failed.
File that was copied to backup could not be deleted: C:\RAMDrive\spring\STS.exe
Start the STS you want to upgrade, and before you click on "Check for Upgrades", with STS still running, rename the STS.exe file you just started to something else, like "STS_old.exe". That should do it.
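In other words, something like this from a Command Prompt, using the path from the error message above (a sketch):
:: rename the running executable so the updater can write a fresh STS.exe
ren C:\RAMDrive\spring\STS.exe STS_old.exe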
The message sounds strange; I've never seen this before. To install a fresh copy of STS, just download the ZIP file from the download page (the one that matches your operating system; pick the right 32-bit or 64-bit one, depending on your OS and the JDK you are using). Then unzip it and start STS.exe. That's it.
I downloaded apache-tomcat-7.0.54 from http://tomcat.apache.org/download-70.cgi, the binary distribution's 32-bit Windows zip (pgp, md5).
I went to apache-tomcat-7.0.54\bin\startup.bat on my machine and double-clicked this file, but when I try localhost:8080, Tomcat is not up and doesn't show errors.
I have JDK 1.6 installed, and I have another version, Tomcat 5.5; when I try \apache-tomcat-5.5.27\bin\startup.bat, that Tomcat works perfectly.
Sorry, my English is a little bad. I await your help, thanks so much.
Try to open "Command Prompt", go do apache folder and try to start up it using prompt. Post results, please.
You could post the log file too.
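For example (a sketch, assuming Tomcat was unzipped to C:\apache-tomcat-7.0.54; the JDK path is only an example, use your own):
cd C:\apache-tomcat-7.0.54\bin
:: Tomcat needs to know where Java is installed
set "JAVA_HOME=C:\Program Files\Java\jdk1.6.0_45"
:: run in the foreground so startup errors stay visible in the window
catalina.bat run
If it still fails, the messages will be in that window and in the files under the logs folder.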
I am new to using Coverity and this might not be a very challenging question, but I would appreciate it greatly if someone could guide me through the setup process.
I first ran the following command:
cov-configure --compiler /usr/bin/gcc --comptype gcc
This created a few files pertaining to the above command in my /config directory.
The real problem occurs when I run the cov-install-gui command to set up the Defect Manager and the database: I am not sure what to input for the --datadir option. When I passed in an empty directory (as a mere attempt), it complained that coverity_db does not exist within the empty directory.
It is not clear to me where I can find the coverity_db directory or how to install it.
I feel like I am missing something from the cov-configure command, but I am not sure.
Also, I am using Linux CentOS 5.4 and Coverity Prevent 4.5.
Thanks in advance
You are using an old and no longer supported version of Coverity Prevent (4.5 or older), since you are referencing the Defect Manager.
The current version is 6.0, so you should not be using the version that you are.
The answer to your question is that the data directory is any directory that will be used to hold the results and GUI files, so you can just specify any path that doesn't already exist and cov-install-gui will create the directory and the files it needs in it.
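So, as a sketch, the call could look like this (the path is only an example; the point is that it must not exist yet):
# cov-install-gui will create this directory and the database/GUI files it needs inside it
cov-install-gui --datadir /home/me/coverity_gui_data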