bitbake clean fails with a read-only NFS sstate cache

I am trying to set up a read-only sstate cache, where several jobs will be reading from this cache to accelerate their builds.
However, bitbake -c clean <recipe-name> fails. I would like to clean the work directory for the current recipe without touching the sstate cache (a master job populates that cache). Has anyone come across this issue? Of course, we could remove SSTATE_DIR and issue the command, but I was wondering whether there are better solutions. Thanks
LOGS:
ERROR: Build of do_clean failed
ERROR: Traceback (most recent call last):
...
File "sstate_eventhandler(e)", line 13, in sstate_eventhandler
...
OSError: [Errno 30] Read-only file system: '/mnt/nfs/yocto_build/sstate-cache/d6'
ERROR: Task 0 ( recipe-name.bb, do_clean) failed with exit code '1'

If it's a read-only sstate archive, use SSTATE_MIRRORS instead.
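For example, a minimal sketch of conf/local.conf for this setup (the NFS path is taken from the log above; the local SSTATE_DIR location is an assumption, it is simply the default):
# Keep a local, writable sstate cache for this build (the default location).
SSTATE_DIR ?= "${TOPDIR}/sstate-cache"
# Treat the read-only NFS cache as a mirror; bitbake substitutes PATH with the
# hashed subdirectories (d6/ and so on) when it looks up objects.
SSTATE_MIRRORS ?= "file://.* file:///mnt/nfs/yocto_build/sstate-cache/PATH"
With this split, do_clean only touches the local work and sstate directories, and the NFS share is only ever read.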

Related

Go application built with Bazel can't link when running inside a container

I am trying to containerize my application build; however, when running the build, which uses Bazel with bazel-gazelle, inside a container, I get this error:
$ bazel run --spawn_strategy=local //:gazelle --verbose_failures
INFO: Analyzed target //:gazelle (0 packages loaded, 0 targets configured).
INFO: Found 1 target...
ERROR: /home/workstation/.cache/bazel/_bazel_workstation/fb227af4c7b6aa39cc5b15d7fd9b737a/external/go_sdk/BUILD.bazel:43:15: GoToolchainBinary external/go_sdk/builder [for host] failed: (Exit 1): go failed: error executing command
(cd /home/workstation/.cache/bazel/_bazel_workstation/fb227af4c7b6aa39cc5b15d7fd9b737a/execroot/__main__ && \
exec env - \
GOROOT_FINAL=GOROOT \
external/go_sdk/bin/go tool link -o bazel-out/host/bin/external/go_sdk/builder bazel-out/host/bin/external/go_sdk/builder.a)
# Configuration: e0f1106e28100863b4221c55fca6feb935acec078da5376e291cf644e275dae5
# Execution platform: @local_config_platform//:host
/opt/go/pkg/tool/linux_amd64/link: mapping output file failed: invalid argument
Target //:gazelle failed to build
INFO: Elapsed time: 2.302s, Critical Path: 0.35s
INFO: 2 processes: 2 internal.
FAILED: Build did NOT complete successfully
FAILED: Build did NOT complete successfully
I tried to run it standalone:
$ /home/workstation/.cache/bazel/_bazel_workstation/fb227af4c7b6aa39cc5b15d7fd9b737a/external/go_sdk/bin/go tool link -o bazel-out/host/bin/external/go_sdk/builder bazel-out/host/bin/external/go_sdk/builder.a
/opt/go/pkg/tool/linux_amd64/link: mapping output file failed: invalid argument
and still had no success.
I have never had this kind of link problem before, and the linker doesn't provide much more information. I tried installing every package I could think of, with no luck.
For context:
Running Ubuntu 20.04 LTS
Docker 20.10.9
Bazel 4.2.2
Rules GO v0.31.0
Bazel Gazelle v0.25.0
I also tried running it under strace, though I don't think I am skilled enough to find meaningful information in the tool's output.
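In case it helps others reproduce this, one way to narrow the strace output down to the failing call is to trace only the file-mapping syscalls (a sketch; the command is the same standalone link invocation as above):
$ strace -f -e trace=mmap,openat -o link.trace \
    /home/workstation/.cache/bazel/_bazel_workstation/fb227af4c7b6aa39cc5b15d7fd9b737a/external/go_sdk/bin/go tool link -o bazel-out/host/bin/external/go_sdk/builder bazel-out/host/bin/external/go_sdk/builder.a
$ tail link.trace   # the mmap on the output file should show which argument the kernel rejects with EINVAL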
Edit
For more context:
$ /home/workstation/.cache/bazel/_bazel_workstation/fb227af4c7b6aa39cc5b15d7fd9b737a/external/go_sdk/bin/go tool link -v -o bazel-out/host/bin/external/go_sdk/builder bazel-out/host/bin/external/go_sdk/builder.a
HEADER = -H5 -T0x401000 -R0x1000
searching for runtime.a in /opt/go/pkg/linux_amd64/runtime.a
/opt/go/pkg/tool/linux_amd64/link: mapping output file failed: invalid argument

"/anaconda/lib/python3.6/site-packages/nbstripout.py": /anaconda/bin/python: No such file or directory

Any advice on how to fix the following error message(s)?
Every time I use Terminal on macOS Catalina 10.15.1 I get the following output:
"/anaconda/bin/python" "/anaconda/lib/python3.6/site-packages/nbstripout.py": /anaconda/bin/python: No such file or directory
error: external filter '"/anaconda/bin/python" "/anaconda/lib/python3.6/site-packages/nbstripout.py"' failed 127
error: external filter '"/anaconda/bin/python" "/anaconda/lib/python3.6/site-packages/nbstripout.py"' failed
fatal: Google Drive/Drive on my xxxx.ipynb: clean filter 'nbstripout' failed
This happened to me when I moved my Git repo. I had to go into .git/config and edit the [filter "nbstripout"] and [diff "ipynb"] sections with the updated directory.
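If nbstripout is installed in the environment you are currently using, reinstalling the filter will rewrite those paths for you; a minimal sketch (the repo path is a placeholder):
$ cd /path/to/repo
$ pip install --upgrade nbstripout
$ nbstripout --install    # rewrites the filter/diff entries in .git/config to point at the current interpreter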

Running PySpark in PyCharm

On a Mac (v. 10.14.5), I am trying to run PySpark programs in PyCharm (Professional Edition, v. 19.2).
I know my simple PySpark program is fine, because when I run it with spark-submit outside PyCharm from the terminal, using the Spark I installed via brew, it works as expected. I have tried linking PyCharm to this version of Spark, but I am getting other issues.
I followed multiple sets of instructions online (for example, this Stack Overflow thread) to install pyspark within PyCharm (Preferences -> Project Interpreter) and set the SPARK_HOME environment variable to the appropriate venv directory (Run -> Edit Configurations -> Environment Variables).
But I get an error message when I run the program:
Failed to find Spark jars directory (/Users/rahul/PycharmProjects/spark-demoII/venv/assembly/target/scala-2.12/jars).
You need to build Spark with the target "package" before running this program.
Traceback (most recent call last):
File "/Users/rahul/PycharmProjects/spark-demoII/run.py", line 6, in <module>
sc = SparkContext("local", "SimpleApp")
File "/Users/rahul/virtualenvs/pyspark/lib/python3.7/site-packages/pyspark/context.py", line 133, in __init__
SparkContext._ensure_initialized(self, gateway=gateway, conf=conf)
File "/Users/rahul/virtualenvs/pyspark/lib/python3.7/site-packages/pyspark/context.py", line 316, in _ensure_initialized
SparkContext._gateway = gateway or launch_gateway(conf)
File "/Users/rahul/virtualenvs/pyspark/lib/python3.7/site-packages/pyspark/java_gateway.py", line 46, in launch_gateway
return _launch_gateway(conf)
File "/Users/rahul/virtualenvs/pyspark/lib/python3.7/site-packages/pyspark/java_gateway.py", line 108, in _launch_gateway
raise Exception("Java gateway process exited before sending its port number")
Exception: Java gateway process exited before sending its port number
Process finished with exit code 1
Anyone know how to get PyCharm to run Pyspark programs on a similar machine?
In response to @pissal's suggestion:
I tried that previously, but that version of Spark does not work either. I tried it again anyway: after switching to a virtual environment, I did a pip install pyspark. To check whether this version of Spark works, I ran spark-submit run.py (outside of PyCharm), and here is the error message.
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.apache.spark.unsafe.Platform (file:/Users/rahul/.virtualenvs/test1/lib/python3.7/site-packages/pyspark/jars/spark-unsafe_2.11-2.4.4.jar) to method java.nio.Bits.unaligned()
WARNING: Please consider reporting this to the maintainers of org.apache.spark.unsafe.Platform
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
Exception in thread "main" java.lang.ExceptionInInitializerError
at org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:80)
at org.apache.hadoop.security.SecurityUtil.getAuthenticationMethod(SecurityUtil.java:611)
at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:273)
at org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:261)
at org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:791)
at org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:761)
at org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:634)
at org.apache.spark.util.Utils$$anonfun$getCurrentUserName$1.apply(Utils.scala:2422)
at org.apache.spark.util.Utils$$anonfun$getCurrentUserName$1.apply(Utils.scala:2422)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.util.Utils$.getCurrentUserName(Utils.scala:2422)
at org.apache.spark.SecurityManager.<init>(SecurityManager.scala:79)
at org.apache.spark.deploy.SparkSubmit.secMgr$lzycompute$1(SparkSubmit.scala:348)
at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$secMgr$1(SparkSubmit.scala:348)
at org.apache.spark.deploy.SparkSubmit$$anonfun$prepareSubmitEnvironment$7.apply(SparkSubmit.scala:356)
at org.apache.spark.deploy.SparkSubmit$$anonfun$prepareSubmitEnvironment$7.apply(SparkSubmit.scala:356)
at scala.Option.map(Option.scala:146)
at org.apache.spark.deploy.SparkSubmit.prepareSubmitEnvironment(SparkSubmit.scala:355)
at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:774)
at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:161)
at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:184)
at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86)
at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:920)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:929)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.StringIndexOutOfBoundsException: begin 0, end 3, length 2
at java.base/java.lang.String.checkBoundsBeginEnd(String.java:3720)
at java.base/java.lang.String.substring(String.java:1909)
at org.apache.hadoop.util.Shell.<clinit>(Shell.java:52)
... 25 more
So the reason this was happening is that pyspark had not been updated to use the latest version of Java. After removing Java 13, I made sure my Homebrew installation of Spark uses Java 1.8. Then I added the following to the Environment Variables in Run -> Edit Configurations in PyCharm:
SPARK_HOME=/usr/local/Cellar/apache-spark/2.4.4/libexec
With these settings I can run PySpark jobs in PyCharm.
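For reference, the same environment can be checked from a terminal first; a sketch assuming a Homebrew-installed Spark 2.4.4 and an installed 1.8 JDK:
$ export JAVA_HOME=$(/usr/libexec/java_home -v 1.8)    # select the 1.8 JDK on macOS
$ export SPARK_HOME=/usr/local/Cellar/apache-spark/2.4.4/libexec
$ spark-submit run.py    # should now start without the Java gateway error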

Haskell on Windows - Stack fails to fetch package index

I'm trying to install Haskell on Windows. I downloaded the installer and just clicked through everything, then tried to use Stack to install a package, running it from a temporary folder to which everything has write access:
C:\t>stack install hfmt
Using latest snapshot resolver: lts-8.3
Writing implicit global project config file to: C:\sr\global-project\stack.yaml
Note: You can change the snapshot via the resolver field there.
Downloaded lts-8.3 build plan.
Fetching package index ...=.git""=="gui" was unexpected at this time.
C:\sr\indices\Hackage\git-update\all-cabal-hashes>#if ""--git-dir=.git""=="gui" #goto gui
Process exited with ExitFailure 255: C:\Program Files (x86)\Git\cmd\git.CMD --git-dir=.git fetch --tags
Failed to fetch package index, retrying.
removeDirectoryRecursive: permission denied (Access is denied.)
What's going wrong, and how can I fix it? Or should I forget about Stack and just use Cabal instead?
I tried rerunning the command as administrator. This time the response was instant:
C:\t>stack install hfmt
Fetching package index ...=.git""=="gui" was unexpected at this time.
C:\sr\indices\Hackage\git-update\all-cabal-hashes>#if ""--git-dir=.git""=="gui" #goto gui
Process exited with ExitFailure 255: C:\Program Files (x86)\Git\cmd\git.CMD --git-dir=.git fetch --tags
Failed to fetch package index, retrying.
removeDirectoryRecursive: permission denied (Access is denied.)

Can't start Deluge

I'm using Windows XP SP3 with deluge-1.2.0_rc3-win32-setup.exe, but when I try \deluge\deluge-python\deluge.exe, an error happens:
[error] init:1982 Dll load failed: The specified module could not be found.
...
...
..
ImportError: Dll load failed: The specified module could not be found.
[error] xxxxxx ui:147 There was an error whilst launching the request UI: gtk
[error] xxxxxx ui:148 Look at the traceback above for more information
So Deluge doesn't start :(
I had no problem installing. I chose the complete install rather than the recommended one, and made sure to select the proper directory for installing the DLL files (in my case it was the recommended /bin option). Try reinstalling.
