I came across a strange problem with non-root users in Linux (CentOS).
I can compile and run a Java program with the following commands without any problem:
[root@cuda1 hadoop-0.20.2]# javac EnumDevices.java
[root@cuda1 hadoop-0.20.2]# java EnumDevices
Total number of devices: 1
Name: Tesla C1060
Version: 1.3
Clock rate: 1296000 MHz
Threads per block: 512
But I need to run it as another user, hadoop, in CentOS:
[hadoop@ws37-mah-lin hadoop-0.20.2]$ javac EnumDevices.java
[hadoop@ws37-mah-lin hadoop-0.20.2]$ java EnumDevices
NVIDIA: could not open the device file /dev/nvidiactl (Permission denied).
Exception in thread "main" CUDA Driver error: 100
at jcuda.CUDA.setError(CUDA.java:1874)
at jcuda.CUDA.init(CUDA.java:62)
at jcuda.CUDA.<init>(CUDA.java:42)
at EnumDevices.main(EnumDevices.java:20)
[hadoop@ws37-mah-lin hadoop-0.20.2]$
Actually I need to run a map-reduce job, but first I want this simple program to run; then I will move on to it.
Please guide me on how to solve this issue, as the CLASSPATH is the same for all users.
Looks like you're running into a problem with device file permissions; Hadoop has nothing to do with this, and neither does the Java classpath. This might be useful:
http://www.linuxquestions.org/questions/slackware-14/could-not-open-dev-nvidiactl-310026/
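A common fix for this class of error is to make the NVIDIA device nodes readable and writable by the non-root user. This is only a minimal sketch, assuming the device nodes already exist and that world read/write (0666) is acceptable for your site; the udev rule file name is just an example:

ls -l /dev/nvidia*                        # check current owner, group, and mode
chmod 0666 /dev/nvidiactl /dev/nvidia0    # quick test; does not survive a reboot

# To make it persistent, either put the chmod line in /etc/rc.local,
# or (if the nodes are created via udev) add a rule such as:
echo 'KERNEL=="nvidia*", MODE="0666"' > /etc/udev/rules.d/99-nvidia-perms.rules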
Related
I'm trying to get the demo for the AEC DMO working (found here). It works, but only on certain machines. On the machines where it fails, AllocateStreamingResources fails with error code 0x80004005. The exact line of code is here.
I ran Dependency Walker on the .exe the demo code produces, and on the machines where it fails, no dependency failures were detected. The code just doesn't do anything after reporting that AllocateStreamingResources failed.
I'm running with the following parameters: -out mic_out.pcm -mod 0 -spkdev 0 -micdev 0.
All machines have functional speakers and microphones. Sound is playing out of the speakers when I run the application. Any thoughts?
I have solved a similar problem by:
Uninstalling the audio device in the Device Manager
Rebooting
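If you need to repeat those two steps on several machines, they can also be scripted. A rough sketch using devcon from the Windows Driver Kit; the instance ID is a placeholder you take from the output of the first command:

devcon find =media
devcon remove "@<instance-id-of-the-audio-device>"
shutdown /r /t 0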
I'm doing a university project on the QNX RTOS (using an academic license). I'm following the Building a BSP guide.
So far I've managed to build an .img file for the x86_64 target (using the bios_mkusbimage script in the BSP archive). Then I converted the .img to a .vdi file (VBoxManage convertdd input.img output.vdi) and finally booted it in VirtualBox. The result, as text:
Loading IFS...decompressing...done
System page at phys:000000000010c000 user:ffff808000003000 kern 808000006000
Starting next program at vffff80000007388b MFLAGS=1 .11 ClockCycles offsets within tolerance elcome to Q. Neutrino SDP 7.0 on x8664 system
Starting slogger2 server ...
Starting PCI server ...
Set PCI device list ...
Starting EIDE block driver ...
unable to access /dev/hd0t179 'ot3Tt11:7.n:nele7i7la:TensleieCted
Starting USD host ...
Starting devb-umass o audio device has been detected
Starting input services ...
Starting serial driver ...
Starting consoles ...
Starting shells ...
#
The OS seems to boot successfully, but I'm unable to type anything.
I'm looking either for a way to fix the keyboard input, or for a way to get an SSH/telnet/... connection to the QNX shell.
I'm trying to run these Flink benchmarks:
https://github.com/dataArtisans/flink-benchmarks
I've generated the jar file using Maven with this command:
mvn clean package -Pbuild-jar
Then I'm trying to run the benchmark on a Flink cluster with this command:
./bin/flink run -c org.apache.flink.benchmark.WindowBenchmarks ~/flinkBenchmarks/target/flink-hackathon-benchmarks-0.1.jar
I've used the -c option to specify the main class of the benchmark (WindowBenchmarks) that I want to run.
Finally, I get this error:
# JMH version: 1.19
# VM version: JDK 1.8.0_151, VM 25.151-b12
# VM invoker: /usr/lib/jvm/java-8-oracle/jre/bin/java
# VM options: -Dlog.file=/home/user/flink-1.3.2/flink-dist/target/flink-1.3.2-bin/flink-1.3.2/log/flink-user-client-mypc.log -Dlog4j.configuration=file:/home/user/flink-1.3.2/flink-dist/target/flink-1.3.2-bin/flink-1.3.2/conf/log4j-cli.properties -Dlogback.configurationFile=file:/home/user/flink-1.3.2/flink-dist/target/flink-1.3.2-bin/flink-1.3.2/conf/logback.xml -Djava.rmi.server.hostname=127.0.0.1 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false
# Warmup: 10 iterations, 1 s each
# Measurement: 10 iterations, 1 s each
# Timeout: 10 min per iteration
# Threads: 1 thread, will synchronize iterations
# Benchmark mode: Throughput, ops/time
# Benchmark: org.apache.flink.benchmark.WindowBenchmarks.sessionWindow
# Run progress: 0.00% complete, ETA 00:04:00
# Fork: 1 of 3
Error: Could not find or load main class org.openjdk.jmh.runner.ForkedMain
<forked VM failed with exit code 1>
<stdout last='20 lines'>
</stdout>
<stderr last='20 lines'>
Error: Could not find or load main class org.openjdk.jmh.runner.ForkedMain
</stderr>
# Run complete. Total time: 00:00:00
Benchmark Mode Cnt Score Error Units
The program didn't contain a Flink job. Perhaps you forgot to call execute() on the execution environment.
I don't have any previous experience with Flink or Maven, so I can't figure out what is missing. My first thought was that it was a missing-dependency error, but the dependencies look fine. Any suggestions?
Thank you in advance!
flink-benchmarks is a repository that contains sets of microbenchmarks designed to run on a single machine, not on a cluster. The main functions defined in the various classes (test cases) are JMH runners, not Flink programs. As such, you can either execute the whole benchmark suite (which takes about an hour):
mvn -Dflink.version=1.5.0 clean install exec:exec
or, if you want to execute just one benchmark, the best approach is to run the selected main function manually, for example from your IDE (don't forget to select the flink.version; the default value for the property is defined in pom.xml).
It is also possible to execute a single benchmark from the console, but I haven't tried that in a very long time.
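For what it's worth, the console route would look roughly like this. It's only a sketch, and it assumes the build-jar profile shades the JMH runner into the jar (which is exactly the class the ForkedMain error above says was missing from the Flink client's classpath); if it doesn't, add the JMH jars to the -cp explicitly:

mvn clean package -Pbuild-jar
# run one benchmark with plain java rather than `flink run`;
# the last argument is a JMH regexp selecting which benchmarks to run
java -cp target/flink-hackathon-benchmarks-0.1.jar org.openjdk.jmh.Main "org.apache.flink.benchmark.WindowBenchmarks.*"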
I can't use SparkR in RStudio because I'm getting this error:
Error in sparkR.sparkContext(master, appName, sparkHome, sparkConfigMap, :
JVM is not ready after 10 seconds
I have tried searching for a solution but can't find one. Here is how I have tried to set up SparkR:
Sys.setenv(SPARK_HOME="C/Users/alibaba555/Downloads/spark") # The path to your spark installation
.libPaths(c(file.path(Sys.getenv("SPARK_HOME"), "R", "lib"), .libPaths()))
library("SparkR", lib.loc="C/Users/alibaba555/Downloads/spark/R") # The path to the lib folder in the spark location
library(SparkR)
sparkR.session(master="local[*]", sparkConfig=list(spark.driver.memory="2g"))
Now execution starts with a message:
Launching java with spark-submit command
C/Users/alibaba555/Downloads/spark/bin/spark-submit2.cmd
sparkr-shell
C:\Users\ALIBAB~1\AppData\Local\Temp\Rtmp00FFkx\backend_port1b90491e4622
And finally after a few minutes it returns an error message:
Error in sparkR.sparkContext(master, appName, sparkHome,
sparkConfigMap, : JVM is not ready after 10 seconds
Thanks!
It looks like the path to your spark library is wrong. It should be something like: library("SparkR", lib.loc="C/Users/alibaba555/Downloads/spark/R/lib")
I'm not sure if that will fix your problem, but it could help. Also, what versions of Spark/SparkR and Scala are you using? Did you build from source?
What was causing my issues boiled down to our users' working directory being a mapped network drive.
Changing the working directory fixed the issue.
If by chance you are also using databricks-connect, make sure the .databricks-connect file is copied into the %HOME% of each user who will be running RStudio, or set up databricks-connect for each of them.
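On Windows that home directory is normally %USERPROFILE%, so for each additional user the copy amounts to something like this (the target user name is a placeholder):

copy "%USERPROFILE%\.databricks-connect" "C:\Users\<other-user>\.databricks-connect"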
I have followed the tutorial to install Veins step by step, but when I tried running the example scenario (the final step) I ended up with the error below.
The whole error was:
Error in module (cModule) RSUExampleScenario (id=1) during network
setup: Class "Veins::ObstacleControl" not found -- perhaps its code
was not linked in, or the class wasn't registered with
Register_Class(), or in the case of modules and channels, with
Define_Module()/Define_Channel().
TRAPPING on the exception above, due to a debug-on-errors=true
configuration option. Is your debugger ready?
Simulation terminated with exit code: -2147483645 Working directory:
C:/Users/user/src/veins-4.3/examples/veins Command line:
../../../omnetpp-4.6/bin/opp_run.exe -r 0 -n .;../../src/veins
--tkenv-image-path=../../images -l ../../src/veins omnetpp.ini
I don't think I have missed a step in the tutorial, as I have tried it twice. I did not change anything; I just strictly followed the tutorial like a robot, so I cannot provide an MCVE with more details than the tutorial.
Here is what I'm using:
- Windows 7 Pro, 64-bit
- SUMO 0.25.0, 64-bit
All other steps of the tutorial worked successfully; only this final one fails.
I assume this error occurs when running Veins via the OMNeT++ IDE, or if you have compiled it with GCC (the error does not happen if you use Clang).
There are two ways to bypass this error:
Use the ./run script as the executable from your examples directory, which calls veins/run and includes all the required libraries.
Use opp_run as the executable and set its dynamic libraries to the directory where libveins.so is located (usually src/veins); see the sketch below.
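If you launch from a console rather than from the IDE, option 2 would look roughly like this from an example directory. The paths mirror the command line in the error output above, so adjust them to your tree (on Linux the NED path separator is : instead of ;):

cd examples/veins
opp_run -n ".;../../src/veins" -l ../../src/veins omnetpp.ini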
PS: to answer @ChristophSommer's questions: Veins::ObstacleControl appears in the output of opp_run -l src/veins -h classes
This could be a solution too, but I never tested it: Compiler flags in Eclipse