Cloudfoundry setup error - caused by Chef - ruby

I followed the README at https://github.com/cloudfoundry/vcap.
It should work fine, but I got an error like this:
Does anyone know what's going on?
I'm running Ubuntu 10.04.

I have not encountered this problem with the latest version of VCAP. How long has it been since you updated the copy of the VCAP source on the Ubuntu instance?
Can you also post the configuration file you are using, if any?
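If the checkout is stale, updating it before re-running setup is worth trying first. A minimal sketch, assuming vcap was cloned with git into ~/vcap (the path and branch are assumptions):
cd ~/vcap
git pull origin master
# if the repo pulls components in as git submodules, refresh those too:
git submodule update --init --recursive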

Related

Spark doesn't run on Windows anymore

I have Windows 10, and I followed this guide to install Spark and make it work on my OS, along with the Jupyter Notebook tool. I used this command to instantiate the master and import the packages I needed for my job:
pyspark --packages graphframes:graphframes:0.8.1-spark3.0-s_2.12 --master local[2]
However, I later figured out that no worker was being instantiated according to the aforementioned guide, and my tasks were really slow. Therefore, taking inspiration from this, and since I could not find any other way to connect workers to the cluster manager (it was run by Docker), I tried to set everything up manually with the following commands:
bin\spark-class org.apache.spark.deploy.master.Master
The master was instantiated correctly, so I continued with the next command:
bin\spark-class org.apache.spark.deploy.worker.Worker spark://<master_ip>:<port> --host <IP_ADDR>
This returned the following error:
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
21/04/01 14:14:21 INFO Master: Started daemon with process name: 8168@DESKTOP-A7EPMQG
21/04/01 14:14:21 ERROR SparkUncaughtExceptionHandler: Uncaught exception in thread Thread[main,5,main]
java.lang.ExceptionInInitializerError
at org.apache.spark.unsafe.array.ByteArrayMethods.<clinit>(ByteArrayMethods.java:54)
at org.apache.spark.internal.config.package$.<init>(package.scala:1006)
at org.apache.spark.internal.config.package$.<clinit>(package.scala)
at org.apache.spark.deploy.master.MasterArguments.<init>(MasterArguments.scala:57)
at org.apache.spark.deploy.master.Master$.main(Master.scala:1123)
at org.apache.spark.deploy.master.Master.main(Master.scala)
Caused by: java.lang.reflect.InaccessibleObjectException: Unable to make private java.nio.DirectByteBuffer(long,int) accessible: module java.base does not "opens java.nio" to unnamed module @60015ef5
at java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:357)
at java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:297)
at java.base/java.lang.reflect.Constructor.checkCanSetAccessible(Constructor.java:188)
at java.base/java.lang.reflect.Constructor.setAccessible(Constructor.java:181)
at org.apache.spark.unsafe.Platform.<clinit>(Platform.java:56)
... 6 more
From that moment on, none of the commands I used to run before worked anymore; they all returned the error shown above. I guess I messed up some Java stuff, but honestly I do not understand what or where.
My java version is:
java version "16" 2021-03-16
Java(TM) SE Runtime Environment (build 16+36-2231)
Java HotSpot(TM) 64-Bit Server VM (build 16+36-2231, mixed mode, sharing)
I got the same error just now; the issue seems to be the Java version.
I installed Java, Python, Spark, etc., all the latest versions!
I followed the steps mentioned in the link below:
https://phoenixnap.com/kb/install-spark-on-windows-10
and got the same error as you.
I then downloaded Java SE 8 from the Oracle site:
https://www.oracle.com/java/technologies/javase/javase-jdk8-downloads.html
(specifically jdk-8u281-windows-x64.exe), reset JAVA_HOME, and started spark-shell - it opened perfectly without any issues.
FYI: I have neither Java nor Spark experience, so if anyone feels something is wrong, please correct me. It worked for me, so I'm sharing the same solution here. :)
Thanks,
Karun
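For reference, "resetting JAVA_HOME" on Windows can look like the following. This is a sketch for the current cmd session only; the JDK 8 install path is an assumption (adjust it to wherever the installer put it), and for a permanent change use the System Environment Variables dialog instead:
set "JAVA_HOME=C:\Program Files\Java\jdk1.8.0_281"
set "PATH=%JAVA_HOME%\bin;%PATH%"
rem verify that the right JVM is now picked up:
java -version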
I got a similar error on macOS. The problem was with Java (I was using JDK 17); I had to downgrade or use a different version.
I ended up using this:
https://adoptium.net/releases.html?variant=openjdk11
Download and install it. You might have to remove your JDK 17 version.
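On macOS you may not need to remove JDK 17 at all; when several JDKs are installed, the stock java_home helper can select which one a given shell uses. A minimal sketch:
# list the installed JDKs
/usr/libexec/java_home -V
# point this shell session at the JDK 11 build
export JAVA_HOME=$(/usr/libexec/java_home -v 11)
java -version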
Easiest solution:
the latest version of Java (JDK) is not supported by Spark.
Try installing JDK version 8; this should solve the error.
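If downgrading is not an option, note what the stack trace is actually saying: the JDK 9+ module system is blocking Spark's reflective access to java.nio. Opening that package to unnamed modules is a commonly reported workaround, though running Spark 3.1 on Java 16 is not officially supported; SPARK_DAEMON_JAVA_OPTS is the hook Spark documents for passing JVM options to the standalone master and worker. A sketch, not a guaranteed fix:
set "SPARK_DAEMON_JAVA_OPTS=--add-opens java.base/java.nio=ALL-UNNAMED --add-opens java.base/sun.nio.ch=ALL-UNNAMED"
bin\spark-class org.apache.spark.deploy.master.Master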

Kyma restart issue in local environment

I have installed Kyma version 1.13.0 on Windows, and it works fine as long as I don't restart my machine or Minikube. But when I restart Minikube following the steps in the link below, Kyma stops working:
https://kyma-project.io/docs/latest/root/kyma#installation-install-kyma-locally-stop-and-restart-kyma-without-reinstalling
I need to reinstall Kyma to make it work again.
Any help would be appreciated.
This sounds similar to what I get on my Windows machine.
This is the error I get after restarting Minikube:
stderr:
error execution phase addon/coredns: unable to patch the CoreDNS deployment: Timeout: request did not complete within requested timeout 30s
To see the stack trace of this error execute with --v=5 or higher
If you get the same error, it has been reported as a bug:
https://github.com/kyma-project/cli/issues/455
My workaround for this issue is to get Kyma working by issuing the provision command twice, so give that a try.
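"Issuing the provision command twice" with the Kyma CLI of that era looks like this (a sketch; extra flags such as the Kyma version or VM driver may be needed depending on your setup):
# the first attempt may time out while CoreDNS is being patched (the bug above)
kyma provision minikube
# simply run it again; the second attempt typically completes
kyma provision minikube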

RuntimeError: Java not found

I have downloaded the JDK, set JAVA_HOME, and so on. I can use "javac" at the command line, but when I use it like
nlp=StanfordCoreNLP(r'stanfordnlp',lang='zh')
I get this problem:
builtins.RuntimeError: Java not found.
Maybe you ran out of memory. In my experience, shutting down Java and restarting it can solve this issue.
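Another thing worth checking: javac working does not guarantee that the java launcher itself is visible to the Python process, and the stanfordcorenlp wrapper needs java on the PATH to start its JVM (that lookup mechanism is an assumption here; the JDK path below is only an example):
import os
import shutil

# None here means the Python process cannot see the JVM launcher
print(shutil.which('java'))

# If so, prepend the JDK's bin directory before creating the client
os.environ['PATH'] = r'C:\Program Files\Java\jdk1.8.0_281\bin;' + os.environ['PATH']

from stanfordcorenlp import StanfordCoreNLP
nlp = StanfordCoreNLP(r'stanfordnlp', lang='zh')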

Sonar installation server error

I tried installing Sonar as described in the Sonar installation tutorial, but I am unable to do so due to a server error. This is the error I am getting:
"Unable to get version of server http://localhost:9000: Query: http://localhost:9000/api/server/index"
Can someone please help?
Thanks
You can find some posts on my blog about installing/upgrading SonarQube: http://qualilogy.com/en/category/sonar/sonar-installation/
I recommend you start with a simple installation without any plugins, and then add the plugins and try with Eclipse.
Don't hesitate to ask for further details (say where your installation is failing).
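"Unable to get version of server" usually means nothing is answering at http://localhost:9000, so before re-running the analysis it is worth confirming the server is actually up. A sketch, assuming a standard SonarQube zip layout (the script directory depends on your OS and version; on Windows it is bin\windows-x86-64\StartSonar.bat):
# start the server from the SonarQube install directory
bin/linux-x86-64/sonar.sh start
# verify the exact endpoint the analyzer queries
curl http://localhost:9000/api/server/index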

Facing "No xulrunner found running on port=5000" error when trying to access the console from the ssu app

I'm trying to run the ssu app (https://github.com/wesabe/ssu) on Ubuntu 10.04. The bin/server command executed without any issues, but when I try to access the console (script/console), I get the error below:
No xulrunner found running on port=5000!
I have checked all the services running on the system, and there is no xulrunner service listening on port 5000.
Can anyone let me know what a possible fix for this issue might be?
Thank you
I found the problem; the fix below resolved the issue:
In application/chrome/content/wesabe/download/Controller.coffee, I changed
Server = require 'io/http/Server'
to
Server = require 'io/http/server'
(The Linux filesystem is case-sensitive, so the require path has to match the actual file name exactly.)
