I'm sure this will become an increasingly relevant question as Docker's reign comes to fruition (Hail the whale!). I've isolated my Postgres instance in a Docker container for development purposes. I hate actually creating users on my system, and having to go kill startup commands that I didn't really want.
Now my problem is, I'm trying to work with Ecto and Ecto needs psql to do what it does. How do I install psql and not the whole elephant? I imagine the best answer would tackle Linux, Windows, and Mac, but I'm most interested in Mac OS X as there doesn't seem to be a brew formula for installing just psql.
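One hedged option for macOS and Linux (assuming a reasonably recent Homebrew, whose libpq formula ships the client tools without the server):

```shell
# macOS: the libpq formula includes psql (and pg_dump etc.) but not the
# server; it's keg-only, so force-link it or add its bin dir to PATH.
brew install libpq
brew link --force libpq

# Debian/Ubuntu: the client tools are packaged separately from the server.
sudo apt-get install postgresql-client
```

On Windows, the graphical PostgreSQL installer lets you deselect the server component and install only the command-line tools.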
I'm writing a Windows application that interacts with a WSL distribution/instance which has Docker running. In some cases, my application will also run commands/processes directly in the WSL/Linux distribution.
I would like for that WSL distribution to be "managed" by my application. By that, I mean that:
The distribution should be installed by my application so that users don't have to have knowledge of WSL itself. If the users have to install the WSL distribution themselves, it's entirely possible that they could misconfigure it. It's also possible that some users might not be able to get it up and running in the first place.
The user should have no control over my application's managed WSL distribution. They would not be able to:
Shut down the instance while it is running under my application.
Uninstall the instance without uninstalling my application.
Preferably the user would not even see the distribution at all.
Can I create/install a distribution that is managed by my application in this way?
Obviously I could simply run a batch script, import the instance, and then just run it like that. But this seems verbose and, as mentioned, the distribution would still be visible to the end user.
Well, I definitely can't offer you a direct answer that meets those requirements, and I don't think it's possible, at least not currently.
WSL distributions are always visible to the end user, and they can always be --terminate'd or even --unregister'd. Even Docker Desktop's distributions are subject to these limitations.
You can see the Docker Desktop distros with wsl.exe -l -v, returning something like:
  NAME                   STATE           VERSION
* Tumbleweed             Stopped         2
  ...
  Ubuntu-22.04           Running         2
  Artix-dinit            Stopped         2
  docker-desktop         Stopped         2
  docker-desktop-data    Stopped         2
  ...
side-note: I have entirely too many distributions ;-)
But ... Docker Desktop does overwrite the distro with a new version when you upgrade, so any changes to it will (at least eventually) be overwritten.
So that does provide a possibility:
Ship your managed distro in tar form in/with your application.
When starting the application for the first time, create (--import) the distribution from the tarball.
Make sure that nothing in the distribution writes to or modifies the filesystem. You might even be able to set it read-only in some way, but I haven't tried that.
Compute a checksum of the distribution's vhdx each time the application starts, and confirm that it hasn't been modified.
If it has been modified, delete it and re-import.
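A rough sketch of the verify-and-reimport step. The distro name, paths, and the wsl.exe invocations (shown as comments, since they only exist on Windows) are all illustrative assumptions; the checksum logic is demonstrated on a stand-in file:

```shell
# First run: create the distro from the shipped tarball (Windows only):
#   wsl.exe --import MyAppDistro "%LocalAppData%\MyApp\wsl" distro.tar --version 2
#
# On every launch, verify the vhdx hasn't been modified; if it has,
# delete and re-import. Demonstrated here on a stand-in file:
VHDX=/tmp/ext4.vhdx
printf 'pristine distro image' > "$VHDX"        # stand-in for the real vhdx
EXPECTED=$(sha256sum "$VHDX" | cut -d' ' -f1)   # checksum recorded at import time

ACTUAL=$(sha256sum "$VHDX" | cut -d' ' -f1)
if [ "$ACTUAL" = "$EXPECTED" ]; then
  echo "vhdx intact"
else
  # wsl.exe --unregister MyAppDistro, then --import again
  echo "vhdx modified -- re-import"
fi
```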
Alternatively, as mentioned in the comments, there may be a way to do this with a Hyper-V VM (but only, of course, on Windows Professional or higher). The WSL2 VM itself is hidden and managed in some way, and the same appears to be the case with BlueStacks.
I'm still not sure how they do this. From watching the Event Viewer, it appears that a new Hyper-V "Partition" is created when starting WSL2. This does not appear to be related to any "disk partitioning", so I believe that it is some type of Hyper-V partition that is hidden from the user.
I'm trying to run cb_share_config from an xterm to import some color themes using:
"sudo cb_share_config"
which results in:
"Unable to initialize gtk, is DISPLAY set properly?"
This doesn't make sense to me since I'm running it locally, not through ssh or anything. I didn't think I needed to set the display. Everything I've searched for is related to connecting to a server, which I'm not doing.
Code::Blocks version 16.01
OpenSUSE Leap version 42.3
Thanks in advance.
OK, so I've solved the problem, or rather avoided it altogether, by just running the tool directly from a file manager (Thunar). I'm still not entirely satisfied with this Windows-like solution, but it works. If anyone has any insights as to why I couldn't run it through a terminal, I'd like to hear them, but I suspect it might be a better question for a Linux board.
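One likely explanation (an assumption about this setup, not verified): sudo strips DISPLAY and XAUTHORITY from the environment by default, so a GTK application launched through sudo in a terminal can't find the X server, while the file manager presumably elevates differently. Passing the variables through explicitly often works:

```shell
# Preserve the X display variables across sudo (sketch; the exact
# variables needed can vary by distro and display manager)
sudo env DISPLAY="$DISPLAY" XAUTHORITY="$XAUTHORITY" cb_share_config
```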
I'm using nix as a package manager on OS X. I've installed postgres. Now I'd like to start and stop the postgres server (and other related utilities). I can write a script to do this manually and edit my config. But is there a "best practice" way to do this on OS X? E.g. I found postgres configs under ~/.nix-profile/share; are there startup scripts for OS X somewhere?
I've not come across anything in the nix project or the nixpkgs repo for running services directly on OS X.
If you just have a few services you want to run, I believe you'd have to put the scripts together yourself, as you suggest.
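For a single Postgres instance, such a script could be little more than a pg_ctl wrapper; the data directory path here is an assumption, so point it at the cluster you actually initialized:

```shell
#!/bin/sh
# Minimal start/stop wrapper around pg_ctl (sketch; set PGDATA to the
# cluster you created with initdb)
PGDATA="$HOME/.local/var/postgres"

pgserver() {
  case "$1" in
    start)  pg_ctl -D "$PGDATA" -l "$PGDATA/server.log" start ;;
    stop)   pg_ctl -D "$PGDATA" stop ;;
    status) pg_ctl -D "$PGDATA" status ;;
    *)      echo "usage: pgserver {start|stop|status}" ;;
  esac
}

pgserver help   # unknown command -> prints the usage line
```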
Alternatively, disnix should do what you're after, but it might be a bit of an overkill for just one machine.
Another option would be to deploy a NixOS configuration into an (optionally headless) VirtualBox instance with nixops.
I'm using this setup myself for a different use case, but it should certainly also support yours.
disnix, nixos and nixops are documented together with nix/nixpkgs on the nixos.org page.
Over a long period of time I have installed various apps/scripts from the terminal; many were tests to understand some tools, and now I use just a few of them (very few).
What I would like to accomplish is to wipe them all in order to "restore" the OS X install, and later on I will reinstall the ones I need.
I'm not sure I'm using the right terminology, I installed things via terminal like:
nodejs
meteor
ionic
And other stuff, some with the use of curl ..., others with the OS X Installer (NodeJS for example), and others with cordova, I guess.
How to remove them all?
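Before deleting anything, it may help to inventory what the common package managers know about. Which of these commands applies depends on how each tool was actually installed, so treat them as hedged starting points, not a removal procedure:

```shell
# Inventory what's installed (none of this removes anything yet)
brew list 2>/dev/null             # Homebrew packages, if Homebrew is present
npm ls -g --depth=0 2>/dev/null   # global npm packages (ionic, cordova, ...)
pkgutil --pkgs 2>/dev/null        # receipts from OS X .pkg installers (e.g. NodeJS)
ls /usr/local/bin                 # binaries dropped in by curl-pipe installers
```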
I am using boot2docker, which creates a Linux VM on OS X and allows you to use regular docker.
I am following this tutorial to try and use Google's recent "deep dreams" image software. There is a Python implementation on GitHub that was released and became very popular. Not knowing Python or the frameworks it comes with, I decided to look for a simpler way to run it. So this guy made a Docker container to run it and explains how to set it up at that link. Unfortunately, I am running into a bug where it denies the IP certificate. As you can see..
As recommended by Sabin, this is what I get when I run the curl command.