How can I prevent stack build from using nix from the command line? - haskell-stack

I'm trying to build a docker image of a Haskell application.
In my stack.yaml I have nix enabled with:
nix:
  enable: true
When running 'stack build' in the docker container (that does not have nix) it errors with:
Downloading lts-13.5 build plan ...
Downloaded lts-13.5 build plan.
Executable named nix-shell not found on path:
Can I disable nix (some command line flag?) without having to modify the stack.yaml file?

You can disable nix on the command line using:
stack --no-nix build
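For context, a minimal sketch of how that might look in the image build itself (the base image and project layout here are assumptions, not taken from the question):
# Illustrative Dockerfile; the base image and paths are assumptions
FROM fpco/stack-build:lts-13.5
COPY . /src
WORKDIR /src
# --no-nix overrides "nix: enable: true" from stack.yaml for this invocation
RUN stack --no-nix build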

Related

Install Kubectl Plugin on Windows

Question: What are the steps to install a kubectl plugin on Windows?
I have written a standalone plugin binary that I would like to invoke from within kubectl (following the instructions at https://kubernetes.io/docs/tasks/extend-kubectl/kubectl-plugins/)
The documentation for installation states to perform the following steps:
"A plugin is nothing more than a standalone executable file, whose name begins with kubectl-. To install a plugin, simply move this executable file to anywhere on your PATH."
This works fine on Mac and Linux, but performing those instructions on Windows does not seem to work. Running "kubectl plugin list" does not list my plugin and I cannot invoke it from within kubectl. I even tried adding my binary to the .kube directory autogenerated by kubectl, and it does not detect the plugin.
Several discussions on GitHub reference this issue without providing an answer on how to install a kubectl plugin on Windows (e.g. https://github.com/kubernetes/kubernetes/issues/73289). And after a lengthy Google/Stack Overflow search, there don't seem to be any tutorials or solutions that I (or my teammates) could locate. Any help would be much appreciated! Thank you.
In my case I don't have an issue installing a plugin on a Windows 10 machine (by simply including it on my PATH). Here is the output of 'kubectl plugin list':
c:\opt\bin>kubectl plugin list
The following kubectl-compatible plugins are available:
c:\opt\bin\kubectl-getbuildver.bat
- warning: c:\opt\bin\kubectl-getbuildver.bat identified as a kubectl plugin, but it is not executable
c:\opt\bin\kubectl-hello.exe
c:\opt\bin\kubectl-helloworld.p6
- warning: c:\opt\bin\kubectl-helloworld.p6 identified as a kubectl plugin, but it is not executable
error: 2 plugin warnings were found
Instead I'm encountering a known GitHub issue: a 'not supported by windows' error while invoking my plugin with kubectl (v1.13.4).
c:\opt\bin>kubectl hello
not supported by windows
c:\opt\bin>kubectl-hello.exe
Tuesday
*kubectl-hello.exe is a console application written in C#. I also tried using a Windows batch file and a Perl 6 program as plugins, but neither of those worked on Windows.
I think kubectl only considers files with an .exe extension as executables when it searches the PATH for plugins in a Windows environment.
I tested this by creating a simple HelloWorld app as a single-file executable, adding it to my system's PATH, and it got picked up and executed correctly.
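For illustration, assuming you already have some console binary hello.exe and that c:\opt\bin is on your PATH (both of these are hypothetical), renaming it with the kubectl- prefix is enough for it to be discovered:
c:\opt\bin>copy hello.exe kubectl-hello.exe
c:\opt\bin>kubectl plugin list
c:\opt\bin>kubectl hello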
kubectl krew is like brew for managing kubectl plugins. You can try it; it supports Windows.
https://github.com/kubernetes-sigs/krew
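Once krew itself is set up (its README covers the Windows install steps), usage looks roughly like this; <plugin-name> below is just a placeholder:
kubectl krew search
kubectl krew install <plugin-name>
kubectl <plugin-name>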

Is the finish subcommand removed or replaced in the devtool command line tool?

The only relevant and easy-to-understand example I found of using devtool in a Yocto workflow was the video by Tim Orling at the Embedded Linux Conference.
In his workflow:
He uses devtool add to add nano
devtool build to build it
devtool deploy-target to deploy it on qemu
devtool undeploy-target to remove nano
devtool finish nano ../meta-foo
I tried doing the same thing, but there is no finish subcommand in devtool.
When I try devtool finish --help:
devtool finish --help
ERROR: argument <subcommand>: invalid choice: 'finish' (choose from 'create-workspace', 'add', 'modify', 'extract', 'sync', 'update-recipe', 'status', 'reset', 'build-image', 'deploy-target', 'undeploy-target', 'build', 'search', 'upgrade', 'edit-recipe', 'configure-help')
What is the equivalent subcommand for devtool finish? Is it devtool reset?
Build Host Environment
Ubuntu 16.04.4 LTS virtual machine
Bitbake version: 1.30.0
devtool information
usage: devtool [--basepath BASEPATH] [--bbpath BBPATH] [-d] [-q]
               [--color COLOR] [-h]
               <subcommand> ...

OpenEmbedded development tool

optional arguments:
  --basepath BASEPATH  Base directory of SDK / build directory
  --bbpath BBPATH      Explicitly specify the BBPATH, rather than getting it
                       from the metadata
  -d, --debug          Enable debug output
  -q, --quiet          Print only errors
  --color COLOR        Colorize output (where COLOR is auto, always, never)
  -h, --help           show this help message and exit

subcommands:
  Beginning work on a recipe:
    add               Add a new recipe
    modify            Modify the source for an existing recipe
    upgrade           Upgrade an existing recipe
  Getting information:
    status            Show workspace status
    search            Search available recipes
  Working on a recipe in the workspace:
    build             Build a recipe
    edit-recipe       Edit a recipe file in your workspace
    configure-help    Get help on configure script options
    update-recipe     Apply changes from external source tree to recipe
    reset             Remove a recipe from your workspace
  Testing changes on target:
    deploy-target     Deploy recipe output files to live target machine
    undeploy-target   Undeploy recipe output files in live target machine
    build-image       Build image including workspace recipe packages
  Advanced:
    create-workspace  Set up workspace in an alternative location
    extract           Extract the source for an existing recipe
    sync              Synchronize the source tree for an existing recipe
Use devtool <subcommand> --help to get help on a specific command
Krogoth
For krogoth, according to the devtool section of the Yocto 2.1.2 Development Manual, the created recipes need to be manually placed into the custom meta-foo layer.
For other versions, devtool finish recipe-name ../meta-foo should do it for you.
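For illustration, the closing step on releases that ship finish, and a rough manual equivalent for krogoth (the workspace paths below are assumptions and may differ in your build directory):
# on releases with the finish subcommand
devtool finish nano ../meta-foo
# rough manual equivalent on krogoth
cp -r workspace/recipes/nano ../meta-foo/recipes-support/
devtool reset nano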

Using pkg-config with Haskell Stack's Docker integration

I'm trying to build a Haskell Stack project whose extra-deps includes opencv, which itself depends on OpenCV 3.0 (presently only buildable from source).
I'm following the Docker integration guidelines, and using my own image which builds upon fpco/stack-build:lts-9.20 and installs OpenCV 3.0 (Dockerfile.stack.opencv).
If I build my image I can confirm that opencv is installed and visible to pkg-config:
$ docker build -t stack-opencv -f Dockerfile.stack.opencv .
$ docker run stack-opencv pkg-config --modversion opencv
3.3.1
However, if I specify this image in my stack.yaml:
docker:
  image: stack-opencv
Attempting to stack build yields:
Configuring opencv-0.0.2.0...
setup: The pkg-config package 'opencv' version >=3.0.0 is required but it
could not be found.
I've run the build without the Docker integration, and it completes successfully.
The Dockerfile is passing CMAKE_INSTALL_PREFIX=$HOME/usr.
When running docker build, the root user is used, and thus $HOME is set to /root.
However, when doing stack build, the stack user is used; it does not have permission to see /root, and thus pkg-config cannot find opencv.
By removing the -D CMAKE_INSTALL_PREFIX=$HOME/usr flag from cmake, the default prefix (/usr/local) is used instead. This is also accessible to the stack user, and thus pkg-config can find it during a stack build.
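For reference, the change in Dockerfile.stack.opencv amounts to something like the following (the exact cmake invocation used there is an assumption on my part):
# before: installs under $HOME/usr (i.e. /root/usr), invisible to the stack user
# RUN cmake -D CMAKE_INSTALL_PREFIX=$HOME/usr .. && make && make install
# after: the default /usr/local prefix is readable by the stack user
RUN cmake .. && make -j"$(nproc)" && make install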

How to make the output of a snap "parts:" available to "apps:"?

apps:
  library-sample:
    command: library_sample
parts:
  library:
    source: https://github.com/the/sample.git
    plugin: cmake
When snapcraft runs the cmake install, "library" will be installed on the system (as I would expect). cmake will also produce a test application in a samples folder under the build directory.
I would like to promote the sample (generated by the "part") to be an installed app within the snap package.
How do I use the snap YAML to move it from a nested directory under the build folder into the snap's /bin folder?
You can do this by utilizing Snapcraft's scriptlets, specifically the install scriptlet. They essentially allow you to modify the behavior of the build process by customizing sections of it. In the build lifecycle step, snapcraft essentially runs cmake && make && make install, but make install doesn't do everything you want it to do. The install scriptlet runs right after make install, so you can do something like this:
parts:
  library:
    source: https://github.com/the/sample.git
    plugin: cmake
    install: |
      cp -r path/to/samples $SNAPCRAFT_PART_INSTALL/
Now clean the build step with snapcraft clean -s build and run snapcraft again. Then the samples directory will end up in the final snap.
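If the goal is specifically to expose the sample binary through the library-sample app, one possible variant (the samples path and binary name below are assumptions) is to copy it into a bin directory inside the part's install dir and point the app's command at it:
parts:
  library:
    source: https://github.com/the/sample.git
    plugin: cmake
    install: |
      mkdir -p $SNAPCRAFT_PART_INSTALL/bin
      cp path/to/samples/library_sample $SNAPCRAFT_PART_INSTALL/bin/
apps:
  library-sample:
    command: bin/library_sample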

Building small container for running compiled go code

From
https://docs.docker.com/articles/baseimages/
I am trying to build a base image to run compiled go code, from:
https://github.com/tianon/dockerfiles/tree/master/true
I have tried to copy true.go into the image. Then: exec: "/true": permission denied
Also tried to bash into it with "bash". Then: executable file not found in $PATH
Also tried to use debootstrap raring raring > /dev/null. Then: "bash": executable file not found in $PATH
How do you do this?
Thanks
I'm not sure I entirely follow.
The Dockerfile from the linked project builds an image with nothing in it except an executable - there will be no shell or compiler, so running bash will be impossible. It does this by using the special scratch base image, which is simply a completely empty filesystem.
If you clone the repository and build the image using the Dockerfile (docker build -t go-image .), it will simply copy the executable directly into the image (note the Dockerfile copies the executable true-asm, not the source code true.go). If you then use docker run to start the image, it will run it (docker run go-image).
Does that make sense? The code is compiled locally (or by another container) and the compiled, stand-alone executable is placed by itself into the image.
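For reference, the pattern looks roughly like this (a sketch, not the exact Dockerfile from the linked repository):
# the binary must be statically linked, since scratch contains no shell or libraries
FROM scratch
COPY true-asm /true
CMD ["/true"]
Build it with docker build -t go-image . and run it with docker run go-image, as above.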
Generally, you don't want to do this and definitely not when you're beginning - it will be easier for you to use a golang or debian image which will include basic tools such as a shell.
