Can I relocate `.stack-work`? - haskell-stack

I want to develop under Windows, but have a guest Ubuntu system set up that ideally should perform the build and testing.
I still want to use all the build tools, and I think sharing .stack-work will probably be a mess. So is there some command-line switch which makes stack use another directory?

Well, should have gone through stack --help first. There's --work-dir:
--work-dir WORK-DIR Override work directory (default: .stack-work)
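For example, you could point the Windows checkout and the Ubuntu guest at separate work directories so they never clobber each other's build artifacts (the directory names below are just an illustration):
stack --work-dir .stack-work-windows build
stack --work-dir .stack-work-linux build
stack --work-dir .stack-work-linux test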

Related

How to use git in system with no fork(2)?

Is it possible to properly use git when a working fork(2) is not available? If so, how?
The MinGW environment does not provide a fork(2) implementation. Some git sub-processes (chiefly fetch-pack) rely on it, and will fail without it.
Is it possible to work around that without patching out its usage? Maybe by telling the server to disable its side-band capabilities?

Chef-InSpec: Resources for windows OS

I'm just starting to learn InSpec. I'm wondering whether there are any resources to check that a driver (e.g. virtio-win) or kernel module is installed on Windows?
Or is it possible to look in a directory and assert that test.sys exists?
Check the docs. What you see is what you get. I have no idea how to query a running Windows kernel for which modules are loaded, but if you know how to do that you can probably use a command resource. Otherwise just check for files like normal.
I found that there are no resources for the Windows kernel. So I just used driverquery to check the drivers.
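A rough sketch of that approach (vioscsi is just an example driver name): list the installed drivers from a Windows shell and filter for the one you care about, then wrap the same command in an InSpec command resource as suggested above.
driverquery /v | findstr /i "vioscsi"
A matching line (and exit code 0) means the driver is present; a plain file check against something like C:\Windows\System32\drivers\test.sys covers the second part of the question.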

docker on OSX slow volumes

I'm trying to use Docker beta on OSX, mainly for Symfony development, but the mounted volumes are incredibly slow. Even for a vanilla Symfony project I get 6s page load time. That's unbearable! Has anyone found a solution to this issue? I'm trying to move away from Vagrant but I just can't find any reasonable way to work with Docker instead.
Okay, the user Spiil gave a solution, but I wanted to elaborate on the exact steps to take since I spent 12 hours trying to figure it out; once you know how, it's super easy and fixes all the slow-down issues!
The key here is to understand that this solution creates NFS (Network File System) mounts as the means of communication from the Docker containers to your Mac, instead of the standard OSX file system, which is currently very slow either due to bugs or the way it works.
Follow these steps exactly.
1.) Clone this repo (https://github.com/IFSight/d4m-nfs) into your home directory. To do this, open up a terminal and type cd ~
Then type git clone https://github.com/IFSight/d4m-nfs
Alternatively, you can do this as a one-liner: git clone https://github.com/IFSight/d4m-nfs ~/d4m-nfs
2.) Next, go into the d4m-nfs folder and create a new file in its etc folder titled d4m-nfs-mounts.txt (i.e. ~/d4m-nfs/etc/d4m-nfs-mounts.txt).
3.) Add the following line to that file:
/Users/yourusername:/Users/yourusername:0:0
What the above does is export your home directory over NFS, which lets you keep using relative paths with docker-compose; the trailing 0:0 is the uid:gid mapping for the export.
EDIT
Do not put /Volumes here!!
4.) Go to your Docker preferences and do the following:
Under File Sharing, make sure only /tmp is showing and NOTHING ELSE. I mean nothing else; it won't work if there is anything else, since it will create conflicts with the NFS exports the script will make for you later. Restart Docker and docker-compose down any running containers as well.
5.) Finally, navigate to the d4m-nfs directory from step 1 and type the following command: /bin/bash d4m-nfs.sh
Edit: The correct way to run the command above, as another user from GitHub (if-kenn) pointed out, is ./d4m-nfs.sh, which uses the shebang to pick the shell that should run it.
If done correctly there should be no errors and this should work. Please note: DO NOT run it as sh d4m-nfs.sh; this will create errors and you will have to delete your exports file to start over. In fact, any time you make any changes you will have to clear your exports file.
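Put together, the run step is something like this (assuming the repo was cloned into your home directory as in step 1):
cd ~/d4m-nfs
./d4m-nfs.sh
The script reads etc/d4m-nfs-mounts.txt, writes matching entries into /etc/exports, and restarts the Mac's NFS service, so it will likely ask for your password via sudo.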
This is what mine looks like.
EDIT: IMPORTANT: Remove the /private and /Volumes lines! The file should only contain /Users/yourusername now!
If you see anything other than this, you were not running with bash. If you make any errors, you can quickly get to the exports file on the Mac and just clear it out to start over.
Just select "Go to Folder..." in Finder
and then type /etc/exports
This is a nice shortcut to quickly get to it and clear it out in your favorite text editor.
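If you prefer the terminal, the same clean-out can be done in one line (this empties /etc/exports; sudo is needed because the file is root-owned):
sudo sh -c '> /etc/exports'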
Also make sure no containers are running or you will get the "........" loop of death. If this loop of death continues, make sure you upgrade Docker and then restart your computer. Yes, restart... it seemed to be the only way to get it to work on my friend's computer. Refer to this (https://github.com/IFSight/d4m-nfs/issues/3)
Note on the "...." loop: I recently found another solution. Make sure you are NOT logged in as root, and make sure you pulled the git repo into your user's ~ folder, not root's ~ folder. In other words, it should be in /Users/yourusername.
Also, make sure the /tmp folder has full write permissions, since the script needs to write there or this won't work either: chmod -R 777 /tmp
6.) If you did it right, when running the script it will look like this.
Then simply run your docker-compose up -d as usual in your Symfony project folder (or whatever project you are using with Docker) and everything should work... except NO MORE slowdowns!
You will need to run this any time you restart your computer or Docker.
Also note: if you get mounting errors showing up, you probably don't have your project stored in your /Users/yourusername directory. Remember, that is where we mounted it. If your project is somewhere other than that, you will need to modify the d4m-nfs-mounts.txt file accordingly.
Other Info:
For people reading this now, maybe it's better to wait for Docker to fix this issue. A pull request has already been accepted to improve performance (https://github.com/docker/docker/pull/31047).
This will be released sometime in April 2017 and should be a big improvement.
I've tried some workarounds for Docker for Mac, but all of them had some pretty big disadvantages, mostly in usability. A good source for alternatives to OSXFS can be found at: https://github.com/EugenMayer/docker-sync/wiki/Alternatives-to-docker-sync. Credit to Eugen Mayer for setting this up.
EDIT:
First improvement is implemented in the edge release. https://github.com/docker/for-mac/issues/77 has more info on this.
There's a long thread with explanation from Docker Team and various workarounds.
Currently, the issue is being tracked on GitHub.
While some workarounds may be better than others, I'm afraid the ideal option for now is to switch to Linux.
I spent a lot of my time searching for a viable solution. And I found one:
d4m-nfs
It allows you to use docker volumes via NFS.
In my case it gave a 16x performance increase (1.8 sec vs ~30 sec)!
Also, d4m-nfs has quite an intricate manual, so here is another link with a detailed example: https://github.com/laradock/laradock/issues/353#issuecomment-262897619
I'll just leave this here for other googlers.
Normally volumes should be fast.
But you cannot change anything to make them faster if you don't want to change the format of your disk.
But maybe the bottleneck is the CPU or RAM.
You can check that with the command docker stats. The limits are by default set to 2 cores and 2 GB RAM. You can change this in the Docker for Mac GUI.
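For a one-off snapshot of per-container CPU and memory usage, the --no-stream flag prints once and exits instead of refreshing continuously:
docker stats --no-stream
If the CPU or memory columns sit at the limit while your pages load, raising the VM resources in the GUI may help more than changing the volume setup.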
I had exactly the same thing. For me, using docker-bg-sync (see on GitHub) made a dramatic improvement in speed and CPU usage.
It's not as nice as just mounting the volume, since you have to start a new container for every sync, but it does the job.
In the latest Docker 17.06.0-ce-mac18, volumes mounted with :cached seem to run quite decently.
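As a minimal sketch (the image name and paths are just placeholders), the :cached consistency option is appended to the volume flag when starting a container:
docker run --rm -v "$(pwd)":/var/www/app:cached my-symfony-image
In a docker-compose.yml volumes entry, the same suffix goes after the container path, e.g. .:/var/www/app:cached.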
I've found that creating a CoreOS VM under Parallels, then using the Docker that is inside CoreOS, is far faster than Docker for Mac (currently running Version 17.12.0-ce-mac49 (21995)).
I'm doing Linux code builds using CMake/Ninja/GCC and it's almost twice as fast as the exact same build from Docker for Mac.
In my case, I have a ton of library sources that are part of the container (e.g. Boost, OpenSSL), and a decent amount of C++ code that I keep local to my Mac.
This seems to be a recent development. Docker for Mac has become much slower than I remember it being a month or two ago. Maybe it's just me...
We overcame this issue by synchronizing the local and the docker for mac filesystem using syncthing. We built an open source tool that follows this approach, in case it helps:
https://github.com/okteto/cnd

Not able to install Gentoo Linux

Here is my situation: I downloaded Gentoo, started to run it, downloaded the stage III tarball from links, and then tried to extract it. A stream of white sentences flows down my screen really fast for about a minute, just like in the YouTube tutorial I was viewing. However, after that, instead of going to the correct stage, it says "cannot write: not enough space on device". I tried repartitioning, but I'm not sure what device it is talking about. Please help.
Sorry you're having this issue, though in general I truly believe the Gentoo Handbook is quite well written and even a newbie can follow it... Here is some advice that I hope helps (most important: digest the handbook and follow it carefully. Not that I'm saying "RTFM"; it's just that for Gentoo the handbook is essential, and without it we can get lost, especially when just starting).
From my experience, the "stream of white sentences" would be from verbosely un-tar'ing your stage3. Usually I only want to see the errors, so my suggestion is to remove the "v" (i.e. change "tar xjvpf" to "tar xjpf") so that only errors appear when un-tar'ing. The caveat is that you'll be wondering whether it hung or is busy un-tar'ing. Use Alt-F1 and Alt-F2 (if in console/tty mode) to switch back and forth, log in on another TTY, and run 'ps auxf' to see if it's still tar'ing. If you're using a GUI terminal, just open another tab and run 'ps auxf'; you get the picture...
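A rough sketch of that in practice (the stage3 file name is a placeholder; use whichever archive you actually downloaded):
cd /mnt/gentoo
tar xjpf stage3-amd64-*.tar.bz2
# on a second TTY (Alt-F2), check whether tar is still running:
ps auxf | grep [t]ar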
Also, learn the command "df"; it'll come in handy. If you're running out of disk space, perhaps you're trying to install/untar stage3 to your ramdisk (grin) rather than to your mounted target (i.e. "/mnt/gentoo"). Mount your root '/' device to '/mnt/gentoo' and cd to that mounted path, then try it (don't forget to mount your '/boot' as well as proc, dev, sys, etc. before you chroot; again, follow the handbook as carefully as you can. Also, distros such as Debian hybrids, including Ubuntu, use a symlink for shm, so read the part about 'rm /dev/shm' and follow it carefully; if you're using the Gentoo LiveCD, you can ignore that part).
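For example, a minimal check-and-mount sequence might look like this (/dev/sda3 is a hypothetical root partition; substitute your own):
df -h                        # see how much free space each mounted filesystem has
mount /dev/sda3 /mnt/gentoo  # mount the target root partition
cd /mnt/gentoo               # extract the stage3 here, not into the ramdisk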
Other useful commands, if you're confused by (or new to) mounting devices, are 'lsblk' and 'mount' (by itself) to inspect the sizes of your partitions (again, 'df' comes in handy as well) and to see which device is which (i.e. /dev/sda1 versus /dev/sdb1). Hint: when you run 'mkfs', use "-L" (or for some filesystems "-N") to label/name your devices, so that when you use commands such as 'mount' or 'lsblk' you can spot them more easily. If you're using a GUI/desktop version of some distro, hopefully there are tools such as "gparted" which give you a visual overview of your devices. One thing I'd advise you to stay away from, if you're just starting, is RAID (i.e. mdadm), until you're comfortable with how grub/lilo works. Get your kernel (gentoo-sources) compiled and the MBR written (i.e. grub-install), try booting, and have fun first. (Also, if you can avoid installing a GUI like Gnome/KDE from the get-go, avoid it as well; you'll run into questions such as "should I use systemd or OpenRC" and then hit the obstacle that some Gnome parts need systemd while you've chosen OpenRC, and so on.)
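As a concrete illustration (the device name is hypothetical; double-check yours with lsblk before formatting anything):
lsblk                                # list block devices, their sizes and labels
mkfs.ext4 -L gentoo-root /dev/sda3   # create a labelled ext4 filesystem
mount                                # with no arguments, shows what is mounted where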
If I may add my opinion: Gentoo (also Arch and FreeBSD) is an excellent place to start if you want to learn the inner workings of Linux applications (library dependencies, why package managers matter rather than downloading each lib manually and compiling them one by one, etc.). I hope this won't discourage you, but if installation frustrates you and all you want to do is test-drive Linux, there are much easier distros where you don't have to understand USE flags and other compilation mechanisms (if you have an old i586, it makes sense to build with hand-picked libraries so that a leaner system runs faster, but if you have a fast machine, why compile binaries when somebody who is an expert at it has already done it for you?). SUSE and Fedora/RedHat/CentOS used to be the least frustrating because they were able to detect hardware (legacy and new), but these days I usually tell people "if you know how to install Windows, you can install Ubuntu", so that too may be a good way to get your feet wet. Good luck!
0_o wow, well.. how about some 411, like the size of your hdd and exactly how you partitioned it? Linux will look for specific directories and, if they are missing, will instead start to install into the root dir. How you partition is an important first step. Once you've got a generally good partition setup, most Linux installs will go fine. Most basic tables include /, /home, /var and a swap.

simple gui based gdb debugging over ssh

I ssh into a Linux VM which is set up remotely. I use Vim to write my code. For debugging, however, I use NetBeans through X11, which can sometimes be painfully slow. I tried using gdb but it's an efficiency killer. I love to hover over my variables and get to know their values rather than doing p variable_name, plus I like to see and navigate through the code. Is there some lightweight, simple GUI-based debugging tool I can use? I have tried clewn (http://clewn.sourceforge.net/), but that doesn't work because it is missing the netbeans_intg feature. Is there any other similar Vim GUI-based debugging tool?
You can try ddd,
which is a GUI for gdb; I think it's lighter than NetBeans.
cgdb is an interface to gdb but it is not a graphical one. It does not offer the possibility of hovering over a variable, but it shows you a window with the source code.
Well, I was in sort of your situation sometime ago, and you can have a look at my question about using gdb with remote sources.
First of all, your problem with the netbeans_intg feature is that your vim has been compiled without support for it. If you can rebuild vim yourself, you can enable it. Otherwise, as you can see in the answer I gave to my own question, you can leverage clewn's remote-vim capabilities.
In a nutshell, you can have a "local" vim (i.e. on a desktop/laptop machine presumably), which must still be built with netbeans_intg support, but now it is a vim under your complete control (i.e. it's on "your" machine), while clewn will run on the linux host where gdb and your debuggee will run.
You can then keep the source files on your desktop/laptop and have the remote clewn sort of "drive" your local vim to the proper source files while debugging.
IOW: clewn will get information out of gdb to know exactly which file/line you're into and connect to remote vim and tell it: "hey, go grab this file and show it around this line", highlighting current line, breakpoints etc.
This is a great solution for when you have far-away deployed systems and you need to debug them with minimum impact on the host where they are running, and presumably no option to transfer there all of your source files.
I don't know if this fits in any way with what you're trying to do, but it did really change things for me.
Hth,
Andrea.
Check out gdbserver. Theoretically, you should be able to start gdb on your Linux machine in server mode and connect via a GUI of your choice, as long as that GUI supports remote gdb connections, which NetBeans does.
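A minimal sketch of that setup (program name, host, and port are placeholders):
# on the remote Linux VM:
gdbserver :2345 ./myprogram
# on the local machine, from gdb or your GUI front end:
gdb ./myprogram
(gdb) target remote linux-vm-host:2345
The front end then drives the remote gdbserver session over the network instead of running the whole debugger inside the slow X11 session.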
