XSB Runtime errors - MulVal - prolog

I'm trying to convert a Nessus scan XML file to MulVal input using a given conversion script, and I get the following runtime errors:
Error[XSB/Runtime/P]: [Permission (Operation) redefine on imported predicate: lists : member / 2] in compile/1
Error[XSB/Runtime/P]: [Existence (No procedure usermod : vulProperty / 3 exists)]
...and a few more similar 'No procedure usermod : ...' errors.
I haven't worked with XSB/Prolog before, so if anyone has any idea what's going on, or if you want to see some of the source code, please let me know.
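Not an answer, but two quick checks that might narrow this down (a rough sketch only; the file names below are placeholders for the generated input file and the conversion script):
# does the generated MulVal input actually contain any vulProperty/3 facts?
grep -c "vulProperty(" nessus_input.P
# does the conversion code define its own member/2, which would clash with XSB's imported lists:member/2?
grep -n "member(" conversion_script.P
A count of 0 from the first grep would fit the 'No procedure usermod : vulProperty / 3' error, and a local member/2 definition would fit the 'redefine on imported predicate' error.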

Related

“error: "The parameter 'map.net.xml' is not allowed in this context" when I want to create a ".poly.xml" file” in SUMO

I have a question about polyconvert while importing a map from OpenStreetMap.
I successfully completed the netconvert command and tried to get a map.poly.xml.
Then I executed the command:
polyconvert --net-file map.net.xml --osm-files map.osm
--type-file typemap.xml -o map.poly.xml
After that, it shows these errors:
Error: The parameter 'map.net.xml' is not allowed in this context.
Switch or parameter name expected.
Error: Could not parse commandline options. Quitting (on error).
Please help me solve this problem.
Thanks
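The 'Switch or parameter name expected' part means polyconvert saw map.net.xml in a position where it expected an option name, i.e. the --net-file in front of it was not recognized as a switch (one possible cause, not the only one: the double dash or line break was mangled when the command was typed or pasted). It may be worth retyping the whole command as one line, using only the options already shown above:
polyconvert --net-file map.net.xml --osm-files map.osm --type-file typemap.xml -o map.poly.xml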

"semmni" is properly set. (more details) Expected Value : 128 Actual Value : 0

I'm trying to install Oracle 11g, and this happened. Is there a way to fix it?
I have tried rebooting and running the script runfixup.sh, but that still doesn't resolve the problem.
I'm trying to install Oracle 11gR2 on Oracle Linux 7.4.
While the installer is performing prerequisite checks, we get the following error:
This is a prerequisite condition to test whether the OS kernel parameter semmni is properly set.
More details :
Expected Value : 128
Actual Value : 0
Now if I execute as root:
/sbin/sysctl -a | grep sem
kernel.sem = 32000 1024000000 500 128
Which means that semmni=128.
Can somebody tell me what I am doing wrong?
You need to issue the following command for the changes to take effect:
[root@localhost ~]# /sbin/sysctl -p
Then the value (the rightmost one in the output below) can be checked by issuing:
[root@localhost ~]# more /proc/sys/kernel/sem
32000 1024000000 500 128
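For completeness, a minimal sketch of persisting the setting so that sysctl -p has something to apply (the values are the ones already shown above; the fourth field of kernel.sem is semmni):
# make sure /etc/sysctl.conf carries the semaphore settings
grep kernel.sem /etc/sysctl.conf
# expected something like: kernel.sem = 32000 1024000000 500 128
# apply the file without a reboot, then re-check the live value
/sbin/sysctl -p
cat /proc/sys/kernel/sem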

Logstash: "type" : "index_not_found_exception"

I am following the Logstash tutorial.
This command returns the following error:
The following is my configuration:
I have two questions:
What is the right GET request? Can someone give me a correct statement?
Why can I see that an index was created in the Head plugin after I run the XGET? I have never used XPUT.
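Without the actual command and config it is hard to say more, but the GET/PUT difference can be illustrated like this (the index name is just a placeholder; normally the Logstash elasticsearch output creates the index for you once it ships the first events):
# a GET against an index that does not exist yet returns index_not_found_exception
curl -XGET 'http://localhost:9200/logstash-test/_search?pretty'
# -XPUT creates the index explicitly; after that the same GET succeeds
curl -XPUT 'http://localhost:9200/logstash-test'
curl -XGET 'http://localhost:9200/logstash-test/_search?pretty'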

Hive output ends with these 2 warnings. How do I suppress them?

The output of a Hive query that uses UDFs ends with these 2 warnings. How do I suppress them? Please note that the 2 warnings come right after the query output, as part of the output itself.
WARN: The method class org.apache.commons.logging.impl.SLF4JLogFactory#release() was invoked.
WARN: Please see http://www.slf4j.org/codes.html#release for an explanation.
hadoop version
Hadoop 2.6.0-cdh5.4.0
hive --version
Hive 1.1.0-cdh5.4.0
If you use Beeline instead of the Hive CLI, the warnings go away. Not the best solution, but I'm planning to post the same question to the CDH user group to see if it's a bug that can be fixed.
This happens because an added assembly jar contains classes from both jcl-over-slf4j.jar (which is what causes the stdout messages) and slf4j-log4j12.jar.
You can try a couple of things to begin with:
Try removing the assembly jar, if you are using one.
Look at the following link: https://issues.apache.org/jira/browse/HIVE-12179
It suggests that Hive can be made to load the spark-assembly jar only when HIVE_ADD_SPARK_ASSEMBLY = "true".
See also https://community.hortonworks.com/questions/34311/warning-message-in-hive-output-after-upgrading-to.html:
There is also a workaround that avoids touching anything on the Hive side: manually remove the 2 lines from the end of the output files with a shell script (see the sketch just below).
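A rough sketch of that shell-script workaround, assuming the query output has already been redirected into a file (the file names are placeholders, and head -n -2 needs GNU coreutils):
# strip the two trailing WARN lines that Hive appends after the real output
head -n -2 hive_output.txt > hive_output.clean.txt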
I tried setting HIVE_ADD_SPARK_ASSEMBLY=false, but it didn't work.
Finally, I found a question posted in the Cloudera community. See: https://community.cloudera.com/t5/Support-Questions/Warning-message-in-Hive-output-after-upgrading-to-hive/td-p/157141
You could try the following command; it works for me!
hive -S -d ns=$hiveDB -d tab=$t -d dunsCol=$c1 -d phase="$ph1" -d error=$c2 -d ts=$eColumnArray -d reporting_window=$rDate -f $dir'select_count.hsql' | grep -v "^WARN" > $gOutPut 2> /dev/null
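Stripped of the job-specific -d substitutions, the essential pattern is just silent mode plus a grep filter on standard output (the file names here are placeholders):
hive -S -f my_query.hsql | grep -v "^WARN" > my_output.txt 2> /dev/null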

How to bypass permission denied error?

The following example writes a point shapefile to disk. However, I get an error when the script tries to write the shapefile to C:/. I am able to write to an external hard drive (G:/), though. The following is the error I receive in R:
Error in file(out.name, "wb") : cannot open the connection
In addition: Warning message:
In file(out.name, "wb") : cannot open file 'c:/test.shp': Permission denied
How can I bypass or resolve this error?
# available from: cran.r-project.org/web/packages/shapefiles/shapefiles.pdf
# Samples of using the convert.to.shapefile function to write out simple shapefiles
# from basic R data.frames
require(shapefiles)
require(maptools)
dd <- data.frame(Id=c(1,2),X=c(3,5),Y=c(9,6))
ddTable <- data.frame(Id=c(1,2),Name=c("Item1","Item2"))
ddShapefile <- convert.to.shapefile(dd, ddTable, "Id", 1)
write.shapefile(ddShapefile, "C:/test", arcgis=T)
shape <- readShapePoints("C:/test")
plot(shape)
Simple answer: do not write to the root-level directory of the system volume.
There are a few good reasons to create files/directories at the root of C:, but this isn't one of them. Use C:/Temp/test instead.
