I am currently trying to write some code for using a remote control with a Raspberry Pi 3.
I installed LIRC according to a tutorial and it does work, but:
In the /etc/lirc/hardware.conf I have a line:
DRIVER="default"
but when I run mode2 -d /dev/lirc0 it says: Using driver devinput on device /dev/lirc0, which gives me the wrong output.
I suspect this is also the reason why irw shows nothing when I run it and press buttons on my remote.
When I run mode2 -d /dev/lirc0 -H default it works just fine, but I can't specify that when running irw (or anything else that depends on LIRC).
Why is LIRC ignoring the DRIVER-line?
It depends on the lirc version; the debian packaging is part of the issue.
lirc 0.9.0, a really old version, has been part of debian for a (too) long time while the upstream project has advanced. The hardware.conf file is part of the debian packaging of 0.9.0; it has never been part of the upstream project.
Some years ago (two?) debian finally took the step to modernize lirc, and as part of this hardware.conf is no longer used; it was replaced by several files, lirc_options.conf being one of them. This makes lirc on debian work in the same way as on other distributions.
The official guide to lirc configuration is http://lirc.org/html/configuration-guide.html. Please disregard anything involving hardware.conf if your lirc is newer than 0.9.0 - such documentation is, by definition, broken beyond repair.
I found out why the standard driver was not default but devinput:
It turns out the driver LIRC should use isn't specified in hardware.conf at all, but in /etc/lirc/lirc_options.conf.
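For reference, a minimal sketch of the relevant part of /etc/lirc/lirc_options.conf (the device path is the one from my setup; the rest of the file keeps its defaults):

[lircd]
driver = default
device = /dev/lirc0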
If I now run mode2 -d /dev/lirc0 it uses default as driver.
However, this didn't, as I hoped, resolve my issues with irw.
I am trying to use the latest version of appImage-builder because AppImages of my application built with the old version of appImage-builder no longer run on Ubuntu 22.04. So I was asked to try and see if it works with the new appImage-builder.
Currently (June 2022), only versions below 1.0, which are based on Ubuntu 18.04, are available on Docker (which we previously used to build our AppImage).
The newer versions are available via github (https://github.com/AppImageCrafters/appimage-builder/releases).
However, I seem to be unable to execute:
appimage-builder --generate
or
appimage-builder --recipe AppImageBuilder.yml
Is there any documentation available on how to correctly use the .AppImage version of appImage-builder? All I could find at https://appimage-builder.readthedocs.io/en/latest/ seems to refer to the Docker version or a manually built version of appImage-builder.
Depending on the error message you get, there could be a couple of issues at play here.
If you got an error related to FUSE, you need to install the libfuse2 package with apt install libfuse2. AppImages rely on libfuse2, but Ubuntu stopped including it by default as of 22.04 in favor of libfuse3.
If you get an error related to "file not found", it could be that you do not have AppImageLauncher installed. Sadly, with type 2 AppImages the design decision was taken to modify the ELF header of the executable with 3 magic bytes at offset 8. This means that Linux linkers will not run the file. AppImageLauncher works around this by copying the file to a temporary directory and zeroing out the magic bytes so the file can be executed.
A good starting point for debugging issues like this is to run the strace command, which lets you see which system call likely caused the error. Keep in mind that if you try to execute a file and you get "file not found", it might mean that the linker specified by the file cannot be found on the system, or that the ELF header is not valid. You can also run the executable by invoking the linker directly, which might give you more clues, for example: /lib64/ld-linux-x86-64.so.2 <NAME-OF-YOUR-EXECUTABLE>.
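For example, a minimal strace invocation (the AppImage file name here is a placeholder for whichever executable fails for you):

strace -f ./your-app.AppImage 2>&1 | tail -n 30

The -f flag follows child processes as well, and the last lines of output usually show the failing system call.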
Since installing OpenFOAM according to the guide here on WSL2 Ubuntu 20.04, I've been having errors related to libstdc++.so.6, specifically GLIBCXX_3.4.26 not found (required by ...) where ... refers to different files within /opt/OpenFOAM/ThirdParty-v2006/platforms/linux64/gcc-6.3.0/lib64/ ending in .so, .so.1, .so.6 and so on (for instance, when running paraFoam the error appears for about 20 such files). I am able to successfully visualize the cavity tutorial (in the ParaView installation on Windows).
I could make the errors go away by doing what the user laborg suggested on Jan 4 for a similar problem with Julia (see here), specifically copying libstdc++.so.6 from /usr/lib/x86_64-linux-gnu to /opt/OpenFOAM/ThirdParty-v2006/platforms/linux64/gcc-6.3.0/lib64/.
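Concretely, the copy amounts to something like this (sudo may be needed depending on how /opt is owned):

sudo cp /usr/lib/x86_64-linux-gnu/libstdc++.so.6 /opt/OpenFOAM/ThirdParty-v2006/platforms/linux64/gcc-6.3.0/lib64/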
The question is whether this copy-paste solution is recommended; will it come back to haunt me later? Is the libstdc++.so.6 from the system installation going to be an issue if used in the lib64 folder of OpenFOAM?
Some additional info concerning the OpenFOAM installation: foamInstallationTest shows *not installed* errors for flex, wmake, gcc, g++, icoFoam and *critical error* for gcc, g++, icoFoam; but, as given here, foamInstallationTest is not meant for installations from the tar file. The OpenFOAM installation seems to be alright, given that the cavity tutorial runs.
OK, please don't do a copy-paste operation to solve this problem. The error means that you haven't installed the prerequisite libraries in your Ubuntu. It seems that you have missed the first step in the tutorial.
It is not recommended but it will not hurt as long as the GLIBC versions returned from this command
strings /opt/OpenFOAM/ThirdParty-v2006/platforms/linux64/gcc-6.3.0/lib64/libstdc++.so.6 | grep GLIBC
are a subset of the GLIBC versions from this command:
strings /usr/lib/x86_64-linux-gnu/libstdc++.so.6 | grep GLIBC
which was no doubt the case for your Ubuntu setup.
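A quick way to do that check in one step (assuming a bash shell for the <(...) process substitution; paths as in the two commands above):

comm -23 <(strings /opt/OpenFOAM/ThirdParty-v2006/platforms/linux64/gcc-6.3.0/lib64/libstdc++.so.6 | grep GLIBC | sort -u) <(strings /usr/lib/x86_64-linux-gnu/libstdc++.so.6 | grep GLIBC | sort -u)

If this prints nothing, every GLIBC/GLIBCXX version in the ThirdParty library is also present in the system library.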
A less risky route would be to redirect the soft link /opt/OpenFOAM/ThirdParty-v2006/platforms/linux64/gcc-6.3.0/lib64/libstdc++.so.6 to point to your other libstdc++.so.6 (that way you retain both versions):
ln -sf /usr/lib/x86_64-linux-gnu/libstdc++.so.6 /opt/OpenFOAM/ThirdParty-v2006/platforms/linux64/gcc-6.3.0/lib64/libstdc++.so.6
Then, if you hit an issue, you can always reset the link back to its original target. Of course /usr/lib/x86_64-linux-gnu/libstdc++.so.6 is itself a soft link, but you can point to it all the same or you can point to its target.
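It can help to note the link's original target first so you can restore it exactly later (readlink just prints where the symlink currently points):

readlink /opt/OpenFOAM/ThirdParty-v2006/platforms/linux64/gcc-6.3.0/lib64/libstdc++.so.6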
I believe the issue you are hitting is a variant of the one mentioned here: https://www.cfd-online.com/Forums/main/229027-persistence-glibcxx_3-4-26-not-found.html, which suggests that it is not an installation error on your part but an issue related to the packaging of the OpenFOAM binaries. I agree it can mess up the WSL2 setup owing to the way OpenFOAM prepends everything to paths. Of course the safest route is to compile from source using the Ubuntu system's gcc and thereby bypass the ThirdParty compiler.
Seeing as you are using Ubuntu in the WSL instance, you could also just install the Ubuntu package directly:
https://develop.openfoam.com/Development/openfoam/-/wikis/precompiled/debian
This problem comes from this line in the tutorial:
echo "source /opt/OpenFOAM/OpenFOAM-v2012/etc/bashrc" >> ~/.bashrc
This points your environment at OpenFOAM's libstdc++ every time you open a terminal (or start a WSL2 session). If your workflow is not related to OpenFOAM, that can be an issue. If you remove or comment out that line in your ~/.bashrc, things should get back to normal. You can use nano in WSL2:
nano ~/.bashrc
Then comment out the line:
#source /opt/OpenFOAM/OpenFOAM-v2012/etc/bashrc
However, since OpenFOAM relies on that environment, you will need to source the OpenFOAM bashrc in each terminal before using OpenFOAM:
source /opt/OpenFOAM/OpenFOAM-v2012/etc/bashrc
My personal choice is to keep that line commented out and, for a long work session with OpenFOAM, use nano to uncomment it so that every shell I open works without sourcing again.
There are more elegant or complex approaches, but I prefer this one.
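One slightly more convenient variant (just a sketch; the alias name is arbitrary) is to keep the line commented out and add an alias to ~/.bashrc instead, so you can enable OpenFOAM only in the shells where you need it:

alias of2012='source /opt/OpenFOAM/OpenFOAM-v2012/etc/bashrc'

Typing of2012 in a terminal then activates the OpenFOAM environment for that shell only.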
This answer should also be valid for the v2006 version; the link you shared points to v2012, so I guess they just updated the tutorial. If you installed v2006, just make sure to use the correct name when you source or comment/uncomment the line.
In the same manner, if you followed another tutorial for another tool and sourced another environment, you may experience similar issues.
Just start by taking a look at your bashrc and cleaning it up.
First thing I do after unpacking the SnowSQL Linux client is try to upgrade it. This has worked very well through at least v1.1.84. Today I downloaded v1.2.2, installed it, and got an error:
$ ~/bin/snowsql -Uv
No snowsql is available for download: url=https://sfc-repo.snowflakecomputing.com/snowsql, version=1.2
The error comes from this download. Has something changed? I get the same error even when I just try to use it with no options at all, or when trying to connect by passing my account code and username.
The curl command above was missing https and hence gave the misleading impression of a 403 Forbidden.
Sometimes, due to a caching issue with the downloads, it will not auto-upgrade. There are two main components: the bootstrap and the main snowsql component. The one you see the issue with is the main component (it is auto-downloaded when you run snowsql).
You can force a new version download using snowsql -v 1.2.2, as an example.
You can delete/move the .snowsql directory (~/.snowsql or ~/bin/.snowsql) to ensure a fresh main component is downloaded by the new bootstrap.
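For instance, something along these lines (moving rather than deleting keeps a backup; the version number is just an example):

mv ~/.snowsql ~/.snowsql.bak
snowsql -v 1.2.2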
You may also try one of the newer versions, for which the RPMs are available at
https://sfc-repo.snowflakecomputing.com/snowsql/bootstrap/1.2/linux_x86_64/index.html
I noticed Snowflake has some odd firewall configurations, and similar errors can happen either consistently or intermittently.
The only option I'm aware of that can help if it happens consistently is to use the --noup flag with your commands. This will skip the check for snowsql updates of course, but you can always manually download a newer version via your browser (with VPN if needed).
I am exploring a production-stable proxy for Redis Cluster called codis. It is mentioned as a great alternative to twemproxy, especially as one of my needs is pipelining and twemproxy does not offer that.
However, the English documentation is still a WIP and the replies to GitHub issues are in Mandarin.
I am trying to install this on
Linux version 3.13.0-74-generic (buildd@lcy01-07) (gcc version 4.8.2 (Ubuntu 4.8.2-19ubuntu1) ) #118-Ubuntu SMP
I have installed go version 1.8 and I can see the folder /usr/local/go/bin. I have added this to the PATH variable as well.
However, when executing the command go get -u -d github.com/CodisLabs/codis, I get the following:
package github.com/CodisLabs/codis:
no buildable Go source files in /home/ubuntu/go/src/github.com/CodisLabs/codis
You probably want to use the updated install outlined at https://github.com/CodisLabs/codis/issues/1180#issuecomment-286660086
It seems the English docs might be out of date: https://github.com/CodisLabs/codis/issues/1179#issuecomment-286662505
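As a side note, the "no buildable Go source files" message is expected at the repository root: the buildable packages live in subdirectories and the project is built with make. A rough sketch of the steps discussed in those issues (assuming GOPATH is the default ~/go; the updated instructions linked above may differ):

go get -u -d github.com/CodisLabs/codis
cd ~/go/src/github.com/CodisLabs/codis
make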
It's late and I should go to bed, and maybe that's why I can't figure this out. I'm on a Fedora 13 machine and I just ran
yum install gambit-c
I installed it because I want to follow along with a Scheme textbook,
but now that it's installed, how do I start the Scheme interpreter?
It looks from the RPM listing like the binaries are named gsi, gsix, and gsc, all in /usr/bin. I suspect that gsi is the interpreter.
For more details, there's also the manual entry for gsi.
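A quick way to check (a sketch; your version banner will differ): run gsi and type an expression at the REPL prompt, then exit with Ctrl-D:

$ gsi
> (+ 1 2)
3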
BTW: I don't know about the Fedora RPM, but I found that the Ubuntu repository's Gambit-C was quite outdated (4.0-ish), with missing features like simple compilation of stand-alone executables. The most recent version is 4.6. If your RPM's version is a few decimal places behind, I'd suggest just installing from source; it's a pretty standard configure -> make -> make install sequence. Just remember the following option when running configure:
./configure --enable-single-host
This speeds things up quite a bit.
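Putting the whole sequence together (whether you need sudo for the install step depends on your prefix):

./configure --enable-single-host
make
sudo make install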