Fuseki 2: Security issue "Information Exposure" - jetty-9

I'm working with Apache Fuseki 2.3.1 on Red Hat Linux as a standalone server:
>> java -Xmx16384M -jar fuseki-server.jar --port=8080 --loc=/space/tdb /ds
The security testing team has raised an "Information Exposure" finding (CWE-200 - http://cwe.mitre.org/data/definitions/200.html); in particular, the Fuseki and Jetty versions are exposed.
For example, if I submit an incorrect query, the response shows:
Error 400: ...
Fuseki - version 2.3.1 ....
Does anyone know how to prevent this issue?

This issue was fixed by the Jena/Fuseki team and the fix will be released in the next version of Fuseki (2.4.0).
See:
Suppress output of "Server:" with version information.
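In the meantime, you can verify whether a given build still discloses the version by inspecting the response headers; a minimal check, assuming the port and dataset name from the question:

# Dump only the response headers and look for the Server header;
# 2.3.1 reports the Jetty version there, after the fix it should be gone
curl -s -D - -o /dev/null http://localhost:8080/ds | grep -i '^Server:'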

Related

Spark doesn't run on Windows anymore

I have Windows 10 and I followed this guide to install Spark and make it work on my OS, together with the Jupyter Notebook tool. I used this command to instantiate the master and import the packages I needed for my job:
pyspark --packages graphframes:graphframes:0.8.1-spark3.0-s_2.12 --master local[2]
However, I later figured out that no worker was actually instantiated by following the aforementioned guide, and my tasks were really slow. Therefore, taking inspiration from this, and since I could not find any other way to connect workers to the cluster manager (it was run by Docker), I tried to set everything up manually with the following commands:
bin\spark-class org.apache.spark.deploy.master.Master
The master was correctly instantiated, so I continued with the next command:
bin\spark-class org.apache.spark.deploy.worker.Worker spark://<master_ip>:<port> --host <IP_ADDR>
This returned the following error:
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
21/04/01 14:14:21 INFO Master: Started daemon with process name: 8168@DESKTOP-A7EPMQG
21/04/01 14:14:21 ERROR SparkUncaughtExceptionHandler: Uncaught exception in thread Thread[main,5,main]
java.lang.ExceptionInInitializerError
at org.apache.spark.unsafe.array.ByteArrayMethods.<clinit>(ByteArrayMethods.java:54)
at org.apache.spark.internal.config.package$.<init>(package.scala:1006)
at org.apache.spark.internal.config.package$.<clinit>(package.scala)
at org.apache.spark.deploy.master.MasterArguments.<init>(MasterArguments.scala:57)
at org.apache.spark.deploy.master.Master$.main(Master.scala:1123)
at org.apache.spark.deploy.master.Master.main(Master.scala)
Caused by: java.lang.reflect.InaccessibleObjectException: Unable to make private java.nio.DirectByteBuffer(long,int) accessible: module java.base does not "opens java.nio" to unnamed module #60015ef5
at java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:357)
at java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:297)
at java.base/java.lang.reflect.Constructor.checkCanSetAccessible(Constructor.java:188)
at java.base/java.lang.reflect.Constructor.setAccessible(Constructor.java:181)
at org.apache.spark.unsafe.Platform.<clinit>(Platform.java:56)
... 6 more
From that moment on, none of the commands I used to run before worked anymore; they all returned the error shown above. I guess I messed up some Java configuration, but honestly I do not understand what or where.
My java version is:
java version "16" 2021-03-16
Java(TM) SE Runtime Environment (build 16+36-2231)
Java HotSpot(TM) 64-Bit Server VM (build 16+36-2231, mixed mode, sharing)
I got the same error just now; the issue seems to be the Java version.
I installed Java, Python, Spark, etc., all at their latest versions, and followed the steps mentioned in the link below:
https://phoenixnap.com/kb/install-spark-on-windows-10
I got the same error as you.
I then downloaded Java SE 8 from the Oracle site:
https://www.oracle.com/java/technologies/javase/javase-jdk8-downloads.html
(specifically jdk-8u281-windows-x64.exe), reset JAVA_HOME to point at it, and started spark-shell: it opened perfectly without any issues.
FYI: I have neither Java nor Spark experience, so if anyone feels something is wrong, please correct me. It worked for me, so I'm providing the same solution here. :)
Thanks,
Karun
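For reference, "reset JAVA_HOME" can be done from an elevated command prompt; the install path below is the default for the jdk-8u281 build mentioned above and is an assumption - adjust it to your machine:

:: Point JAVA_HOME at the JDK 8 installation (path is an assumption)
setx /M JAVA_HOME "C:\Program Files\Java\jdk1.8.0_281"
:: spark-class resolves java via JAVA_HOME when it is set;
:: open a new terminal and confirm which JDK it now points at:
"%JAVA_HOME%\bin\java" -version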
I got a similar error on macOS. The problem was with Java (I was using JDK 17); I had to downgrade or use a different version.
I ended up using this:
https://adoptium.net/releases.html?variant=openjdk11
Download and install it. You might have to remove your JDK 17 version.
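On macOS you can usually switch between installed JDKs without removing anything, via the built-in java_home helper; a sketch, assuming a JDK 11 is installed alongside 17:

# List every installed JDK, then point JAVA_HOME at the 11 release for this shell
/usr/libexec/java_home -V
export JAVA_HOME=$(/usr/libexec/java_home -v 11)
java -version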
Easiest solution: the latest version of Java (JDK) is not supported by Spark. Try installing JDK version 8; this will solve the error.
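If downgrading is not an option, the InaccessibleObjectException points at the strong encapsulation introduced in recent JDKs, and some users work around it by opening the offending modules to the unnamed module. This is an untested sketch (the exact flag set is an assumption, and only later Spark releases officially support newer JDKs), so JDK 8 remains the safe answer:

:: Untested workaround: open java.nio (named in the stack trace) before launching Spark
set JAVA_TOOL_OPTIONS=--add-opens=java.base/java.nio=ALL-UNNAMED --add-opens=java.base/sun.nio.ch=ALL-UNNAMED
bin\spark-class org.apache.spark.deploy.master.Master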

"SolrCore is loading" error when running as a Windows Service

Logged into Windows Server 2016 as Administrator, I can run Solr from the command line: bin\solr.cmd start -p 8983 -f
I have configured Solr to run as a Windows Service - running as the same user, with the same command, same startup directory, etc. However, under load, the following error comes back from the upstream application (Sitecore xConnect, though this shouldn't make a difference):
{metadata={error-class=org.apache.solr.common.SolrException,root-error-class=org.apache.solr.common.SolrException},msg=SolrCore is loading,code=503}
To reiterate, everything works fine when Solr is started from the command line, only when it's run as a Windows Service does it error.
Solr version: 6.6.3
Windows version: Server 2016
Environment: AWS (m5.large EC2 instance)
The Sitecore compatibility table says to use Solr 6.6.1 with Sitecore, but you should still use 6.6.2, as it fixes a bug in Solr 6.6.1 that can affect the installation of SIF. Read here.
I recommend you try again with Solr 6.6.2
It turns out that the service was configured to run without the -f (foreground) flag, so the process would continually stop and re-spawn.
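For comparison, a service wrapper needs Solr running in the foreground so that the wrapper owns the process. Assuming NSSM as the wrapper (the post does not say which one was actually used) and an install under C:\solr, the definition would look roughly like:

rem Hypothetical NSSM setup - service name and paths are assumptions
nssm install Solr "C:\solr\bin\solr.cmd" "start -f -p 8983"
rem Run Solr from its own directory, as when started by hand
nssm set Solr AppDirectory "C:\solr"
nssm start Solr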

non-JRMP server at remote endpoint

I am trying to figure out how to use Oracle NoSQL. I have downloaded and installed version 4.3.11 (with examples). I have started kvlite both with default params and with the following:
java -jar lib/kvstore.jar kvlite -port 5000 -root kvroot -host
When I run the examples as described at https://docs.oracle.com/cd/E26161_02/html/GettingStartedGuide/verifykvlite.html, exceptions are thrown.
Unfortunately, I cannot post the stacktrace as it is on another server that is not accessible from here.
Some of the errors are:
Could not contact any RepNode at: [localhost:5000]
non-JRMP server at remote endpoint
Any assistance would be appreciated.
-Raymond
I suspect you are trying to connect to a secured store without specifying the secure connection parameters. Oracle NoSQL enables security by default. The simplest thing to try is starting kvlite with security disabled.
java -Xmx256m -Xms256m -jar KVHOME/lib/kvstore.jar kvlite -secure-config disable
Also, I noticed you were looking at the docs for an older version. The latest NoSQL is now 4.4.6 and the docs live here - http://docs.oracle.com/cd/NOSQL/html/GettingStartedGuide/kvlite-usage.html
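Once kvlite is up with security disabled, a quick way to confirm the store is reachable before rerunning the examples (a sketch: KVHOME, host, and port mirror the commands above and may differ on your box):

# Ping the freshly started store; the RepNode should report as RUNNING
java -Xmx256m -Xms256m -jar KVHOME/lib/kvstore.jar ping -host localhost -port 5000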
HTH,

Why does the SonarQube server fail to start with the error message "Database relates to a more recent version"?

I'm using Sonar 3.7.2 with the jTDS driver for an MSSQL database. Starting Sonar on Windows gives the following error:
2015.06.26 02:54:53 INFO o.s.s.p.ServerImpl SonarQube Server / 3.7.2 / 1feffde9f95897aa000a7123ba54a8c8757b40d8
2015.06.26 02:54:53 INFO o.s.c.p.Database Create JDBC datasource for jdbc:jtds:sqlserver://enbuild03/sonar;SelectMethod=Cursor
2015.06.26 02:54:54 ERROR o.s.s.p.Platform Database relates to a more recent version of sonar. Please check your settings.
org.sonar.api.utils.MessageException: Database relates to a more recent version of sonar. Please check your settings.
2015.06.26 02:54:57 INFO jruby.rack jruby 1.6.8 (ruby-1.8.7-p357) (2012-09-18 1772b40) (Java HotSpot(TM) 64-Bit Server VM 1.6.0_43) [Windows Server 2008 R2-amd64-java]
2015.06.26 02:54:57 INFO jruby.rack using a shared (threadsafe!) runtime
I'm stuck here, since the Sonar server is not even starting because of the error above. Any help will be appreciated.
It seems that a newer version of Sonar has been run against the DB you created. Can you try creating a new DB and see if that works?
If you have run a more recent version of SonarQube and then downgraded it, you are likely to get this error if both versions are mapped to the same database. If you look in the database you will see the tables created there; once you wipe out the content of the DB and restart SonarQube, this error will be gone.
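Before wiping anything, you can check which schema version the database actually holds. SonarQube of this era tracks applied migrations in a schema_migrations table (table name per the Rails-based migrations of 3.x; worth verifying on your instance), and the server/database names below are taken from the log:

rem Show the highest applied migration; a value newer than what 3.7.2
rem expects explains the "more recent version" failure
sqlcmd -S enbuild03 -d sonar -Q "SELECT MAX(CAST(version AS INT)) FROM schema_migrations"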

Elasticsearch server stops due to a java.io.IOException

I am facing problems with Elasticsearch: I am unable to get results. I checked the log files and found the following error:
ERROR:
2014-10-30 08:52:46,971][DEBUG][action.search.type ] [Lianda] [135] Failed to execute fetch phase
[Error: Runtime.getRuntime().exec("cd").getInputStream(): Cannot run program "cd": java.io.IOException: error=2, No such file or directory]
[Near : {... w InputStreamReader(Runtime.getRuntime().exec("cd" ....}]
Below are the versions I am using:
Elasticsearch version: 0.90.5
Java version: 1.6.0_33 64-bit
Plugin installed: phonetic
The strange thing is that whenever I get this error, I restart the Elasticsearch server and it works again. So I think something is getting overloaded.
Based on the Runtime.getRuntime().exec() call, it could be related to a dynamic scripting vulnerability in the defaults of Elasticsearch prior to version 1.2. See this document on scripting security.
If that is the source of your problem, you can apply a fix in your current version (or upgrade to a newer one). From the link above:
If you are running an Elasticsearch node prior to the 1.2.x release, you can make this change on your system by putting the following setting into elasticsearch.yml:
script.disable_dynamic: true
Then restart each node in your cluster. Dynamic scripting will now be disabled. If you are running Elasticsearch 1.2.x or later, dynamic scripting is already disabled by default.
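To confirm the setting took effect after the restart, you can send a request that relies on an inline script and check that it is rejected; a sketch (the exact error text varies by version):

# An inline-script request should now be refused with a
# "dynamic scripting disabled" style error
curl -XPOST 'http://localhost:9200/_search' -d '{
  "script_fields": { "test": { "script": "1 + 1" } }
}'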
