Weird Haskell 'stack' error: "can't load .so/.DLL" "not a writable segment" - macOS

I am trying to install ghc-mod so that I can use ide-haskell in Atom.
The instructions say to use stack build ghc-mod. It seems that GHC 8.2+ is not supported by ghc-mod, so I set my resolver to lts-9.21.
When running stack build ghc-mod, I keep getting this error (emphasis mine; not using code formatting because line wrap helps readability):
aeson > : can't load .so/.DLL for: /Users/timoffex/.stack/snapshots/x86_64-osx/db354248ca37308313a93487c93190e1d5b819629b60b38b68871c9a691e52b9/8.0.2/lib/x86_64-osx-ghc-8.0.2/libHStime-locale-compat-0.1.1.3-KZ1jqNx8uhlHjmuPPj6V1Y-ghc8.0.2.dylib (dlopen(/Users/timoffex/.stack/snapshots/x86_64-osx/db354248ca37308313a93487c93190e1d5b819629b60b38b68871c9a691e52b9/8.0.2/lib/x86_64-osx-ghc-8.0.2/libHStime-locale-compat-0.1.1.3-KZ1jqNx8uhlHjmuPPj6V1Y-ghc8.0.2.dylib, 5): REBASE_OPCODE_SET_SEGMENT_AND_OFFSET_ULEB has segment 2 which is not a writable segment (__LINKEDIT) in /Users/timoffex/.stack/snapshots/x86_64-osx/db354248ca37308313a93487c93190e1d5b819629b60b38b68871c9a691e52b9/8.0.2/lib/x86_64-osx-ghc-8.0.2/libHStime-locale-compat-0.1.1.3-KZ1jqNx8uhlHjmuPPj6V1Y-ghc8.0.2.dylib)
... (later)
-- While building package aeson-1.1.2.0 using:
/Users/timoffex/.stack/setup-exe-cache/x86_64-osx/Cabal-simple_mPHDZzAJ_1.24.2.0_ghc-8.0.2 --builddir=.stack-work/dist/x86_64-osx/Cabal-1.24.2.0 build --ghc-options ""
Process exited with code: ExitFailure 1
Progress 1/4
Here's a snippet from the above that looks weird to me:
REBASE_OPCODE_SET_SEGMENT_AND_OFFSET_ULEB has segment 2 which is not a writable segment
I am running macOS Catalina 10.15.3.
I can't find any mention of this online except for this open GitHub issue: https://github.com/facebook/duckling/issues/446
I also tried lts-7.24. I get the exact same error, except it happens while building profunctors.
What could be the problem? Where can I file a bug?

I had the same error in a project using resolver: lts-9.17.
What fixed it for me was updating stack.yaml to use this line:
resolver: lts-10.9
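For reference, here is roughly what the top of stack.yaml looks like after the change (the packages entry below is just the usual default; keep whatever your project already lists):
resolver: lts-10.9
packages:
- .
Then rerun stack build ghc-mod so everything is rebuilt against the newer snapshot.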

Related

Access denied when building esp-idf-sys for Rust project for ESP32 on Windows 10

I'm attempting to set up this Rust template project to get started doing Rust development for ESP32: https://github.com/esp-rs/esp-idf-template
I've installed the Rustup esp toolchain, as described here: https://github.com/esp-rs/rust-build
At the Generate the Project step, I chose these parameters:
Configure project to use Dev Containers = false
ESP-IDF native build version = v4.4
Rust toolchain = esp
STD Support = true
MCU = esp32
At the Build step, I get this output (second run displayed, first run compiles a long list of dependencies before this point):
C:\Users\Me\boop\doop>cargo build
Compiling esp-idf-sys v0.31.6
error: failed to run custom build command for `esp-idf-sys v0.31.6`
Caused by:
process didn't exit successfully: `C:\Users\Me\boop\doop\target\debug\build\esp-idf-sys-cafd80a349bfdbb2\build-script-build` (exit code: 1)
--- stdout
cargo:rerun-if-env-changed=IDF_PATH
cargo:rerun-if-env-changed=ESP_IDF_TOOLS_INSTALL_DIR
cargo:rerun-if-env-changed=ESP_IDF_VERSION
cargo:rerun-if-env-changed=ESP_IDF_REPOSITORY
cargo:rerun-if-env-changed=ESP_IDF_SDKCONFIG_DEFAULTS
cargo:rerun-if-env-changed=ESP_IDF_SDKCONFIG
cargo:rerun-if-env-changed=MCU
IDF_PYTHON_ENV_PATH=C:\Users\Me\boop\doop\.embuild\espressif\python_env\idf4.4_py3.10_env
PATH=C:\Users\Me\boop\doop\.embuild\espressif\tools\esp32ulp-elf\2.28.51-esp-20191205\esp32ulp-elf-binutils\bin;C:\Users\Me\boop\doop\.embuild\espressif\tools\cmake\3.23.1\bin;C:\Users\Me\boop\doop\.embuild\espressif\tools\ninja\1.10.2\;C:\Users\Me\boop\doop\.embuild\espressif\python_env\idf4.4_py3.10_env\Scripts;C:\Users\Me\boop\doop\.embuild\espressif\esp-idf\release-v4.4\tools;%PATH%
Current system platform: win64
Skipping xtensa-esp32-elf#esp-2021r2-patch3-8.4.0 (already installed)
Skipping cmake#3.23.1 (already installed)
Skipping ninja#1.10.2 (already installed)
Skipping esp32ulp-elf#2.28.51-esp-20191205 (already installed)
IDF_PYTHON_ENV_PATH=C:\Users\Me\boop\doop\.embuild\espressif\python_env\idf4.4_py3.10_env
PATH=C:\Users\Me\boop\doop\.embuild\espressif\tools\esp32ulp-elf\2.28.51-esp-20191205\esp32ulp-elf-binutils\bin;C:\Users\Me\boop\doop\.embuild\espressif\tools\cmake\3.23.1\bin;C:\Users\Me\boop\doop\.embuild\espressif\tools\ninja\1.10.2\;C:\Users\Me\boop\doop\.embuild\espressif\python_env\idf4.4_py3.10_env\Scripts;C:\Users\Me\boop\doop\.embuild\espressif\esp-idf\release-v4.4\tools;%PATH%
--- stderr
Using managed esp-idf repository: EspIdfRemote { repo_url: None, git_ref: Branch("release/v4.4") }
fatal: No names found, cannot describe anything.
Using esp-idf v4.4.1 at 'C:\Users\Me\boop\doop\.embuild\espressif\esp-idf\release-v4.4'
fatal: No names found, cannot describe anything.
Error: Access is denied. (os error 5)
I get the same error when I choose ESP-IDF native build version = v4.3.2, although without the fatal: No names found, cannot describe anything. messages.
I get an identical error when attempting to build this Rust ESP32 demo project: https://github.com/ivmarkov/rust-esp32-std-demo
This was run as Administrator.
In my search for a solution, I found this: Why os.rename sometimes returns error access is denied python. Per the top answer, I disabled "Show frequently used folders in Quick access" in File Explorer, but unfortunately the build error has not changed.
What access is being denied, and what could be causing the denial, even when run as Administrator?
Secondarily, what is the cause and meaning of the fatal: No names found, cannot describe anything. messages?
I've been working through the same issue today.
It is caused by a change in the embuild dependency: https://github.com/esp-rs/embuild/commit/d8f8da228f1e1e6c105074d96617a8601092f633
Trying to write permission data to an already-open file is what triggers the 'os error 5' (Access is denied).
I've submitted a PR to the project: https://github.com/esp-rs/embuild/pull/56
It is now merged, so run cargo update and you should be good!
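If you prefer to touch only that dependency, updating the single package should also work (package name taken from the commit and PR above):
cargo update -p embuild
cargo build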

SpringToolSuite 4 and proper OSGi clean

I have SpringToolSuite 4 installed on macOS 11 "Big Sur", running on the aarch64 build of Zulu OpenJDK 11.0.11+9-LTS. I get the error below when starting up, even though I have added the "-clean" parameter to both of the ways I start it:
From an AppleScript: open /Applications/SpringToolSuite4.app" & " --args -clean
Or from the command line: ./SpringToolSuite4 -clean
The -clean parameter does seem to be passed successfully, since '-Dosgi.clean="true"' appears on the last line below. However, it doesn't seem to help, as the error reappears at every restart.
Are there other ways to do an OSGi clean?
I can't edit the .ini file because macOS will clamp down on that and render the whole installation invalid. Since the -clean option does seem to get passed, editing it is probably unnecessary anyway.
Error message:
java.lang.UnsatisfiedLinkError: ~/Library/Caches/JNA/temp/jna13221667715595625418.tmp: dlopen(~/Library/Caches/JNA/temp/jna13221667715595625418.tmp, 1): no suitable image found. Did find:
~/Library/Caches/JNA/temp/jna13221667715595625418.tmp: no matching architecture in universal wrapper
~/Library/Caches/JNA/temp/jna13221667715595625418.tmp: no matching architecture in universal wrapper
~/Library/Caches/JNA/temp/jna13221667715595625418.tmp: dlopen(~/Library/Caches/JNA/temp/jna13221667715595625418.tmp, 1): no suitable image found. Did find:
~/Library/Caches/JNA/temp/jna13221667715595625418.tmp: no matching architecture in universal wrapper
~/Library/Caches/JNA/temp/jna13221667715595625418.tmp: no matching architecture in universal wrapper
java.base/java.lang.ClassLoader$NativeLibrary.load0(Native Method)
java.base/java.lang.ClassLoader$NativeLibrary.load(ClassLoader.java:2442)
java.base/java.lang.ClassLoader$NativeLibrary.loadLibrary(ClassLoader.java:2498)
java.base/java.lang.ClassLoader.loadLibrary0(ClassLoader.java:2694)
java.base/java.lang.ClassLoader.loadLibrary(ClassLoader.java:2627)
java.base/java.lang.Runtime.load0(Runtime.java:768)
java.base/java.lang.System.load(System.java:1837)
com.sun.jna.Native.loadNativeDispatchLibraryFromClasspath(Native.java:1018)
com.sun.jna.Native.loadNativeDispatchLibrary(Native.java:988)
com.sun.jna.Native.(Native.java:195)
com.github.dockerjava.zerodep.UnixDomainSocket.(UnixDomainSocket.java:80)
com.github.dockerjava.zerodep.ApacheDockerHttpClientImpl$2.createSocket(ApacheDockerHttpClientImpl.java:124)
com.github.dockerjava.zerodep.shaded.org.apache.hc.client5.http.impl.io.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:125)
com.github.dockerjava.zerodep.shaded.org.apache.hc.client5.http.impl.io.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:409)
com.github.dockerjava.zerodep.shaded.org.apache.hc.client5.http.impl.classic.InternalExecRuntime.connectEndpoint(InternalExecRuntime.java:164)
com.github.dockerjava.zerodep.shaded.org.apache.hc.client5.http.impl.classic.InternalExecRuntime.connectEndpoint(InternalExecRuntime.java:174)
com.github.dockerjava.zerodep.shaded.org.apache.hc.client5.http.impl.classic.ConnectExec.execute(ConnectExec.java:135)
com.github.dockerjava.zerodep.shaded.org.apache.hc.client5.http.impl.classic.ExecChainElement.execute(ExecChainElement.java:51)
com.github.dockerjava.zerodep.shaded.org.apache.hc.client5.http.impl.classic.ExecChainElement$1.proceed(ExecChainElement.java:57)
com.github.dockerjava.zerodep.shaded.org.apache.hc.client5.http.impl.classic.ProtocolExec.execute(ProtocolExec.java:165)
com.github.dockerjava.zerodep.shaded.org.apache.hc.client5.http.impl.classic.ExecChainElement.execute(ExecChainElement.java:51)
com.github.dockerjava.zerodep.shaded.org.apache.hc.client5.http.impl.classic.ExecChainElement$1.proceed(ExecChainElement.java:57)
com.github.dockerjava.zerodep.shaded.org.apache.hc.client5.http.impl.classic.HttpRequestRetryExec.execute(HttpRequestRetryExec.java:93)
com.github.dockerjava.zerodep.shaded.org.apache.hc.client5.http.impl.classic.ExecChainElement.execute(ExecChainElement.java:51)
com.github.dockerjava.zerodep.shaded.org.apache.hc.client5.http.impl.classic.ExecChainElement$1.proceed(ExecChainElement.java:57)
com.github.dockerjava.zerodep.shaded.org.apache.hc.client5.http.impl.classic.RedirectExec.execute(RedirectExec.java:116)
com.github.dockerjava.zerodep.shaded.org.apache.hc.client5.http.impl.classic.ExecChainElement.execute(ExecChainElement.java:51)
com.github.dockerjava.zerodep.shaded.org.apache.hc.client5.http.impl.classic.ExecChainElement$1.proceed(ExecChainElement.java:57)
com.github.dockerjava.zerodep.shaded.org.apache.hc.client5.http.impl.classic.ContentCompressionExec.execute(ContentCompressionExec.java:128)
com.github.dockerjava.zerodep.shaded.org.apache.hc.client5.http.impl.classic.ExecChainElement.execute(ExecChainElement.java:51)
com.github.dockerjava.zerodep.shaded.org.apache.hc.client5.http.impl.classic.InternalHttpClient.doExecute(InternalHttpClient.java:178)
com.github.dockerjava.zerodep.shaded.org.apache.hc.client5.http.impl.classic.CloseableHttpClient.execute(CloseableHttpClient.java:67)
com.github.dockerjava.zerodep.ApacheDockerHttpClientImpl.execute(ApacheDockerHttpClientImpl.java:157)
com.github.dockerjava.zerodep.ZerodepDockerHttpClient.execute(ZerodepDockerHttpClient.java:8)
com.github.dockerjava.core.DefaultInvocationBuilder.execute(DefaultInvocationBuilder.java:228)
com.github.dockerjava.core.DefaultInvocationBuilder.get(DefaultInvocationBuilder.java:202)
com.github.dockerjava.core.DefaultInvocationBuilder.get(DefaultInvocationBuilder.java:75)
com.github.dockerjava.core.exec.InfoCmdExec.exec(InfoCmdExec.java:24)
com.github.dockerjava.core.exec.InfoCmdExec.exec(InfoCmdExec.java:14)
com.github.dockerjava.core.command.AbstrDockerCmd.exec(AbstrDockerCmd.java:35)
org.springframework.ide.eclipse.boot.dash.docker.runtarget.DockerRunTarget.connect(DockerRunTarget.java:130)
org.springframework.ide.eclipse.boot.dash.cloudfoundry.RemoteBootDashModel.lambda$2(RemoteBootDashModel.java:77)
org.springframework.ide.eclipse.boot.dash.model.remote.RefreshStateTracker.call(RefreshStateTracker.java:80)
org.springframework.ide.eclipse.boot.dash.model.remote.RefreshStateTracker.lambda$3(RefreshStateTracker.java:123)
org.springsource.ide.eclipse.commons.frameworks.core.util.JobUtil$6.run(JobUtil.java:193)
org.eclipse.core.internal.jobs.Worker.run(Worker.java:63)
-Dosgi.clean="true"
The issue behind the error above is this one: https://github.com/spring-projects/sts4/issues/684
It is not related to OSGi, so -clean doesn't help here. The fix will be part of the STS 4.12.1 release, but you can grab a nightly CI build to get it right away from here: http://dist.springsource.com/snapshot/STS4/nightly-distributions.html
On a related note: once you have started STS at least once, you should be able to modify the .ini file without issues. If macOS still complains about it, you can remove the quarantine flag from the app:
sudo xattr -r -d com.apple.quarantine SpringToolSuite4.app
This avoids the error message and lets you start your STS 4 install even after editing its contents.
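If you want to double-check afterwards, listing the extended attributes on the app bundle should no longer show the quarantine flag (path assuming the usual /Applications install, as in the question):
xattr /Applications/SpringToolSuite4.app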

How do I fix a crashpad compilation error (for Windows)?

I am following the instructions listed here: https://github.com/chromium/crashpad/blob/master/doc/developing.md
I cloned crashpad from here: https://chromium.googlesource.com/crashpad/crashpad
But I get the following results:
D:/Work/GitRepos/ThirdParty/crashpad> gn gen out/default
ERROR at //build/crashpad_buildconfig.gni:56:5: Unable to load "D:/Work/GitRepos/ThirdParty/crashpad/third_party/mini_chromium/mini_chromium/build/compiler.gni".
import("../third_party/mini_chromium/mini_chromium/build/compiler.gni")
^---------------------------------------------------------------------
See //BUILD.gn:15:1: whence it was imported.
import("build/crashpad_buildconfig.gni")
^--------------------------------------
The error is accurate: there are no sub-folders under D:\Work\GitRepos\ThirdParty\crashpad\third_party\mini_chromium.
Something must be missing from the configuration instructions. How does one build crashpad for Windows?

Class "Veins::ObstacleControl" not found

I followed the tutorial for installing Veins step by step, but when I tried running the example scenario (the final step) I ended up with the error above.
The whole error was:
Error in module (cModule) RSUExampleScenario (id=1) during network
setup: Class "Veins::ObstacleControl" not found -- perhaps its code
was not linked in, or the class wasn't registered with
Register_Class(), or in the case of modules and channels, with
Define_Module()/Define_Channel().
TRAPPING on the exception above, due to a debug-on-errors=true
configuration option. Is your debugger ready?
Simulation terminated with exit code: -2147483645 Working directory:
C:/Users/user/src/veins-4.3/examples/veins Command line:
../../../omnetpp-4.6/bin/opp_run.exe -r 0 -n .;../../src/veins
--tkenv-image-path=../../images -l ../../src/veins omnetpp.ini
I don't think I missed a step, as I have gone through the tutorial twice. I didn't change anything; I followed the tutorial to the letter, so I cannot provide an MCVE with more detail than the tutorial itself.
Here is what I'm using:
- Windows 7 Pro 64 bits
- SUMO 0.25.0 64 bits
All other steps of the tutorial successfully worked until the final step.
I assume this error occurs when running Veins via the OMNeT++ IDE, or if you have compiled it with GCC (the error does not happen with Clang).
There are two ways to bypass this error (example commands below):
- Use the ./run script from your example's directory as the executable; it calls veins/run and includes all the required libraries.
- Use opp_run as the executable and point its dynamic libraries setting at the directory where libveins.so is located (usually src/veins).
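For example, sticking with the relative paths from the command line in the question (so these are illustrative rather than exact):
cd examples/veins
./run
Or, invoking opp_run directly and pointing -l at the directory that contains libveins.so:
opp_run -n .;../../src/veins -l ../../src/veins omnetpp.ini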
PS: to answer @ChristophSommer's questions: Veins::ObstacleControl does appear in the output of opp_run -l src/veins -h classes
This could be a solution too, but I never tested it: Compiler flags in Eclipse

F# Microsoft.ParallelArrays not defined

So I downloaded and installed Microsoft Accelerator v2 to use ParallelArrays. I have referenced it in my project, but when I try to execute the code from the module in a script file I get:
"The namespace 'ParallelArrays' is not defined"
I have followed the instructions on this post:
Microsoft Accelerator library with Visual Studio F#
I've added a reference to the managed version "Microsoft.Accelerator.dll" to my F# project, then added the native "Accelerator.dll" as an item in my solution and set its 'Copy To Output Directory' to Copy Always.
I'm still getting the FSI error and an inline error in my script file on the '#load ...' line; however, the solution builds fine and there is no error in the module file.
Any ideas on what I'm missing? I'm sure it's something stupid.
Thanks,
Justin
UPDATE
I tried mydogisbox's advice, which got rid of the error above, but now when I run the code in the .fsx file I get this error instead:
--> Referenced 'F:\Work\GitHub\qf-sharp\qf-sharp\bin\Debug\Microsoft.Accelerator.dll' (file may be locked by F# Interactive process)
[Loading F:\Work\GitHub\qf-sharp\qf-sharp\MonteCarloGPU.fs]
error FS0192: internal error: F:\Work\GitHub\qf-sharp\qf-sharp\Accelerator.dll: bad cli header, rva 0
UPDATE 2
So the bad header error has disappeared, but now I get this instead:
Microsoft.ParallelArrays.AcceleratorException: Failure to create a DirectX 9 device.
at Microsoft.ParallelArrays.ParallelArrays.ThrowNativeAcceleratorException()
at Microsoft.ParallelArrays.DX9Target..ctor()
at <StartupCode$FSI_0002>.$FSI_0002_MonteCarloGPU.main#() in F:\Work\GitHub\qf-sharp\qf-sharp\MonteCarloGPU.fs:line 14
Stopped due to error
I found this thread on MSDN; however, the answers proposed as fixes in that thread barely relate to the question.
http://social.msdn.microsoft.com/Forums/vstudio/en-US/98600646-0345-4f62-a6c5-f03ac9c77179/ms-accelerator?forum=csharpgeneral
My DirectX version is 11, which I imagine should suffice. I tried installing DX9 anyway, but it tells me a newer version is detected and therefore can't install.
There are special directives for referencing DLLs from FSI. The #load directive loads the .fs file only; you need the #r directive to reference the assembly. You can either give #r the full path of the file, or use #I first to add its directory to the search path. More details here. Keep in mind that FSI is completely independent of your project, so every reference in your project must be duplicated in FSI for it to access the same types.
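For example, a minimal .fsx header sketch using the paths from your error output (directory and file names are taken from your log, so adjust as needed):
#I @"F:\Work\GitHub\qf-sharp\qf-sharp\bin\Debug"
#r "Microsoft.Accelerator.dll"
#load @"F:\Work\GitHub\qf-sharp\qf-sharp\MonteCarloGPU.fs"
Here #r makes the managed assembly visible to FSI, while #load compiles and loads your own source file into the session.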
