I need to set up Docker on my Windows 10 machine. I've previously been a Linux guy where everything just works, so this is a pain for me. Docker itself works as expected, but the Dockerfile contains calls that use /bin/bash, which makes the build fail.
I've tried to set up a VM with Hyper-V, but then I stopped because I figured there must be an easier way. I found bash.exe in Windows. I can't run as sudo, but I guess that doesn't matter here as long as I run bash as administrator. wget works there, but the docker program can't be found when I run docker --version.
Is the easiest way to run bash scripts on Windows 10 with bash.exe? And why can't Docker run in that bash terminal (there is no .bashrc)?
I would use WSL (the Windows Subsystem for Linux) on the Windows box. The two systems can access each other's file systems (\\wsl$ from Windows, /mnt/c from Linux). I use Ubuntu, and so far it's as if I were using a "normal" Ubuntu box in all respects. I'd be surprised if your process didn't work there.
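For illustration, a minimal sketch (this assumes a recent Windows 10 build; older builds need the manual "enable the WSL feature" route instead of the one-liner):

PS > wsl --install -d Ubuntu
$ ls /mnt/c/Users          # inside Ubuntu, the Windows drive is mounted under /mnt/c

From the Windows side, the Ubuntu filesystem shows up under \\wsl$\Ubuntu in Explorer.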
Alternatively, there's Cygwin. When running Cygwin you're kind of in a VM and kind of not; it's a bit blurry. As such, it's not as robust as WSL, but it might do the trick, and it's a lot less "heavy" than installing a full Linux distro.
I'm so sorry because I know this is a dumb question, but I've been trying to figure this out for about 2 hours and I can't work it out. I've created a bash script that uses some other programs (tcpdump, tshark). The script runs as it should, but on every line that uses tshark, tcpdump, etc. it says "command not found".
I'm using Cygwin on my Windows 7 VM. All of the files are in the same folder, and I've tried adding the locations of the other programs to the PATH variable. I tried commands such as export PATH=$PATH:filelocation, but when I echo $PATH those additions aren't showing. How can I get these commands to be recognized?
Thank you.
[screenshot of the current errors]
Cygwin is not a Linux distro, so you don't get all the functionality you would have with a real Linux installation.
You could try one of the following.
1) Use VirtualBox to make a VM of some Linux distro and use bash there. You could use Ubuntu Server, which has no GUI.
2) Use this site to find packages that will add functionality to Cygwin.
3) Upgrade to Windows 10 and have a native (sort of) bash to use.
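And if you stick with Cygwin for the original PATH problem, the usual fix is to append the tool directories to PATH in ~/.bashrc so the change survives new shells. A rough sketch, assuming tshark.exe lives in the default Wireshark location (adjust the path to wherever your tools actually are):

$ echo 'export PATH="$PATH:/cygdrive/c/Program Files/Wireshark"' >> ~/.bashrc
$ source ~/.bashrc
$ which tshark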
I've been working with Docker for Windows, attempting to create a Windows Container that can run cygwin as the shell within the container itself. I haven't had any luck getting this going yet. Here's the Dockerfile that I've been messing with.
# escape=`
FROM microsoft/windowsservercore
SHELL ["powershell", "-command"]
RUN Invoke-WebRequest https://chocolatey.org/install.ps1 -UseBasicParsing | Invoke-Expression
RUN choco install cygwin -y
RUN refreshenv
RUN [Environment]::SetEnvironmentVariable('Path', $env:Path + ';C:\tools\cygwin\bin', [EnvironmentVariableTarget]::Machine)
I've tried setting the ENTRYPOINT and CMD to try and get into cygwin, but neither seems to do anything. I've also attached to the container with docker run -it and fired off the cygwin command to get into the shell, but it doesn't appear to do anything. I don't get an error, it just returns to the command prompt as if nothing happened.
Is it possible to run another shell in the Windows Container, or am I just doing something incorrectly?
Thanks!
You don't "attach" to a container with docker run: you start a container with it.
In your case, as seen here, docker run -it is the right approach.
You can try using c:\cygwin\bin\bash as the entry point, as seen in this issue.
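A hedged sketch of that, assuming the Chocolatey install from the Dockerfile put Cygwin under C:\tools\cygwin (the image name is a placeholder):

PS > docker run -it --entrypoint C:\tools\cygwin\bin\bash.exe my-cygwin-image --login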
As commented in issue 32330:
Don't get me wrong, cygwin should work in Docker Windows containers.
But, it's also a little paradoxical that containers were painstakingly wrought into Windows, modeled on containers on Linux, only for people to then want to run Linux-utils in these newly minted Docker Windows containers...
That same issue is still unresolved, with new cases seen in May and June 2018:
We have an environment that compiles with Visual Studio, but we still want to use git and some very useful commands taken from Linux.
We also use off-the-shelf utilities (e.g. git-repo) that use Linux commands (e.g. curl, grep, ...).
Some builds, like ICU (a cross-platform Unicode-based globalization library), require Cygwin, and worse: our builds require building it from source.
You can see an example of a crash in MSYS2-packages issue 1239:
Step 5/5 : RUN "C:\\msys64\\usr\\bin\\ls.exe"
---> Running in 5d7867a1f8da
The command 'cmd /S /C "C:\\msys64\\usr\\bin\\ls.exe"' returned a non-zero code: 3221225794
This command gets more information on the crash:
PS C:\msys64\usr\bin> Get-EventLog -Index 28,29,30 -LogName "Application" | Format-List -Property *
The workaround was:
PS > xcopy /S C:\Git C:\Git_Copy
PS > C:\Git_Copy\usr\bin\sh.exe --version > v.txt
PS > type v.txt
As mentioned in that thread, the output gets lost somewhere in the container, hence redirecting it to a text file.
After playing with it for a long time, my findings were the following:
If your Cygwin utilities are crashing your container, you need to use process isolation. See https://learn.microsoft.com/en-us/virtualization/windowscontainers/deploy-containers/version-compatibility for the requirements (essentially you need Windows Server 2016 and a build-matching Docker image). I spent some time trying to understand why Hyper-V isolation doesn't work, and so far I haven't come to any conclusion.
If your Cygwin utilities apparently do nothing, but they don't crash the container, you need to remove the -t flag (the -i flag is still OK) or alternatively play with stdout redirection. There seems to be an issue with MSYS2 when it deals with some pseudo-ttys. You can verify that programs still run if you redirect stdout to a file (e.g. whoami won't output anything when you run it without any stdout redirection, but whoami > out.txt will write the expected result to a file). It might be possible to fix this by replacing the pseudo-tty, but I didn't try it. I suspect the problem is an invalid handle somewhere inside the MSYS2 libs, as other console apps can print things to the terminal, but I didn't verify this.
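A rough sketch of what that looks like in practice (image name and paths are illustrative): with -i only, the output appears; with -it, it is swallowed; redirecting to a file works either way.

PS > docker run -i my-msys2-image C:\msys64\usr\bin\whoami.exe
PS > docker run -it my-msys2-image C:\msys64\usr\bin\whoami.exe
PS > docker run -it my-msys2-image cmd /c "C:\msys64\usr\bin\whoami.exe > C:\out.txt"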
Hope this helps all of you having the same problem.
I was able to get a preinstalled (copied from the host) copy of Cygwin to work in a nanoserver-based container with these two steps:
Using Żubrówka's recommendation for no -t in the docker run cmd-line (when running docker interactively)
Copying the host's (Windows Server 2016) kernel32.dll to the container's c:\windows\system32
I found several versions of kernel32.dll on my system and used the one from c:\windows\system32 with MD5 hash d8948a7af764f7153b3e396ad44992ff.
This also made a large variety of other executables work. Note that without a tty, using the container is even more cumbersome, and the bash shell doesn't render the prompt. However, scripts (via Jenkins, in my case) that rely on cygwin components work fine.
If that doesn't help, try this guide; it helped me a lot. If your Windows application (other than Cygwin) is legitimately missing DLLs, the instructions in that guide can help. It never occurred to me that Sysinternals' procmon.exe can be run on the host and still report events from the container!
I am in the process of installing JupyterHub. I successfully installed it using:
python3 -m pip install jupyterhub
npm install -g configurable-http-proxy
However, when I run jupyterhub -h in the Windows command prompt it gives:
"jupyterhub" is not recognized as an internal or external command, operable
program or batch file.
I added C:\Users\User\AppData\Local\Continuum\Anaconda3\Lib\site-packages\jupyterhub\ to my user PATH environment variable, but I still receive the message. What path should I be using?
Please note that, according to this, JupyterHub is not officially supported on Windows yet.
That aside, you could dockerize it to make your life easier. For this error, please check whether you can see the executable in C:\Program Files\Continuum Analytics\Anaconda3\Scripts. The Lib directory you're specifying contains Python source files, not the executables.
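If you do go the Docker route, a minimal sketch using the official image (the port mapping and container name are just examples):

docker run -d -p 8000:8000 --name jupyterhub jupyterhub/jupyterhub jupyterhub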
I had this same issue, and I saw this occurred because jupyterhub is a Python script rather than an executable. So to run it on Windows I needed to execute it like python "C:\Program Files\Continuum Analytics\Anaconda3\Scripts\jupyterhub".
However, I was still unable to run JupyterHub on Windows because it depends on the pwd module, which is a Unix/Linux-only module.
As others have said, Windows is not a supported platform. JupyterHub is best used on Linux-like platforms where you have Docker or something similar to containerize each user's session.
A good alternative is to install Oracle VirtualBox and run a local VM. I run 64-bit Ubuntu and performance is quite good; it makes running JupyterHub much easier. Aside from depending on pwd, there are also assumptions around user creation and other activities that Windows isn't going to handle well.
In short, if you want to run on native Windows, you're going to become the first JupyterHub Windows contributor. I looked into doing it, but it seemed like too much effort.
The upside of running a VM is that behaviour in the VM is going to more closely resemble what you have running on the server anyway. If you don't plan on running on a server, then just use "jupyter notebook", as that is all JupyterHub ends up running anyway...
I'm interested in web development on the Node.js platform. My host OS is Windows 7. What would be the preferred way to set up a development environment: run it directly on the host, or in a Linux-based virtual machine? What are the pros and cons of these two methods?
If I go with a VM, can I still run the text editor and web browser in Windows (for performance reasons)?
Eh, from experience: use Linux Docker.
Edit (2018/04/10): Use Docker. Bake in your dependencies, mount your project at run time, and pin to a particular LTS version of Node only. I'd take a 2 GB Docker image over an un-runnable project that costs you days when you're forced to upgrade to new packages.
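A rough sketch of that workflow (node:8 is just an example of a pinned LTS tag):

$ docker run --rm -it -v "$PWD":/app -w /app node:8 npm install
$ docker run --rm -it -p 3000:3000 -v "$PWD":/app -w /app node:8 npm start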
But from someone who's spent the last 8 years developing in a Linux-based environment, and the last 6 months developing software using Node.js in a Windows .NET environment, here are my discoveries, shocking or otherwise...
Problems on Windows:
can't effectively utilise Docker (update: the latest version of the Docker toolkit solves this as far as I'm concerned; YMMV).
most node modules require node-gyp, which on the surface doesn't seem problematic (since gyp is supposed to be a cross-platform compiler), except when you delve into what it takes to get this working on Windows: nothing short of installing Visual Studio will work. This sucks for me for several reasons:
I'm normally on Linux, so I never want to have to use Visual Studio.
It's an entirely ridiculous idea that compiling something on Windows requires at minimum a 3 GB installation of an IDE... not libs, but an entirely monolithic piece of GUI software I'll never ever launch.
the Windows equivalent of Debian's build-essential is actually a disparate, sprawling, ill-named collection of GUI-only installers scattered across the internet, all requiring a specific installation sequence. Compared to sudo apt-get install build-essential, this is overly time-consuming and fraught with hidden gotchas.
developing on Windows allows the bad habit of mixed-case path names; unless your team has a strict policy that is followed and enforced, this is a slippery slope to problems later on.
while Windows supports more than 256 characters in paths, important tooling throughout does not. Enter stage left: rimraf and robocopy... ugh.
the Windows terminal sucks... so does the default shell: cmd.exe...
PowerShell is far too verbose in its syntax and not to my taste... Installing Cmder alleviates this somewhat; however, the only way for Cmder to interface with cmd.exe is basically to copy keystrokes to a hidden Windows terminal running cmd.exe (lolwut). Cmder works a lot better with shells that are more modular (zsh, bash, etc.). Update: I now use PowerShell with pshazz and scoop, which is actually pleasant to use.
Even having improved the shell and terminal situation, Node.js for Windows will still assume your environment variables are %OF% %THE% %WINDOWS% %VARIETY%... not the $UNIX $STYLE. So you'll basically be using bower and npm mostly from cmd.exe... more ugh. I don't seem to be having this issue anymore since I've incorporated a mix of cross-env and commander or yargs.
You'll also need to install Python for Windows; not a problem, because choco exists and has your back there. Update: have a look at Boxstarter, which will help automate your new machine setup with recipes (or you could actually graduate to using Ansible or Salt).
experienced Python and Ruby developers will tell you that old projects need the version of their engine siloed away for when you have to revisit them (upgrading to newer versions is mostly not expedient or practical, read: rabbit holes), so you'll want something like rvm and virtualenv, i.e. one of:
nvm, which only works on Unix systems (Linux and macOS) because it's a collection of bash scripts. I recommend using zsh as your shell along with Zgen and the Tarrasch/zsh-autoenv plugin.
nodeenv, which is more likely... a Python program that integrates with virtualenv. Some people like this. I have no problem with it, but our team uses nvm.
however, on Windows you're better off with nvm-windows because "reasons". Scratch that: use nodist on Windows... by far the better choice, and you won't need to worry about some kind of autoenv since nodist handles this by design (a quick sketch follows below).
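A hedged sketch of nodist usage (sub-commands may differ slightly between nodist versions; 8.11.1 is just an example):

PS > nodist + 8.11.1
PS > nodist 8.11.1
PS > node -v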
Installing on Windows:
install chocolatey
choco install python2 (originally: choco install cmder nodejs python2)
install http://scoop.sh, then use it to install pshazz.
remove any versions of node manually installed globally.
install nodist (originally: install nvm-windows).
install Visual Studio 2012 Express, then never launch it if you treasure your CPU cycles. This may be overkill, as Microsoft has since released an equivalent to build-essential.
install the Windows 7/10 64-bit SDK.
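Condensed, and assuming Chocolatey and Scoop are already installed, that list boils down to something like this (package names are assumptions; nodist also ships its own installer):

PS > choco install python2 -y
PS > scoop install pshazz
PS > choco install nodist -y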
Problems on Linux:
tl;dr: use nvm, for more reasons than just those below.
you'll have to set the global npm node_modules prefix to a user-owned directory (I've started using ~/.local/share/npm). Pleasantly, this is something I found the Windows installation of Node.js got right (probably not intentionally). A non-issue when using nvm.
Ubuntu already has a binary called node (from a different package), so #!/usr/bin/env node will by default not run Node.js. Luckily, Debian systems have a neat management tool for controlling what the env binary resolves to: update-alternatives. Ignore suggestions to use symlinks here, which will only cause problems later on in subtle ways. Also a non-issue when using nvm.
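A hedged example of the update-alternatives route on Debian/Ubuntu (the priority value 10 is arbitrary; skip this entirely if you use nvm):

$ sudo update-alternatives --install /usr/bin/node node /usr/bin/nodejs 10
$ node -v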
Installing on Linux:
$ sudo apt-get install git-core git-flow build-essential python-dev python-pip
$ curl https://raw.githubusercontent.com/creationix/nvm/v0.20.0/install.sh | bash
$ npm config set prefix ~/.local/share/npm
$ nvm install stable
$ nvm alias default stable
references:
https://groups.google.com/forum/?fromgroups#!msg/msysgit/9YIR6jlNB0Q/zHhPN3tejFkJ
https://github.com/creationix/nvm
http://bliker.github.io/cmder/
https://github.com/coreybutler/nvm-windows
https://github.com/Tarrasch/zsh-autoenv
https://github.com/lukesampson/pshazz
http://scoop.sh
https://github.com/marcelklehr/nodist
We have a system where we just use a config file, which handles all our problems like path differences ("c:\blarg" vs "~user/blarg") and, as a bonus, lets us control differences between debug and production environments.
Node.js is cross platform, so we totally have developers working on all sorts of computers, and it's no problem at all.
This is an example config file I use on a file storage project:
/**
* All of these are mandatory except for log_level (which defaults to "info", 1)
* and log_echo_to_console (which defaults to false)
*/
exports.config = {
    log_level: 0,
    log_file: "/path/to/send.log",
    request_log_file: "/path/to/send_requests.log",
    log_echo_to_console: true,
    port_number: 8088,
    no_notification_emails: true,
    image_url_base: "http://s3.amazonaws.com/", // MAKE SURE THIS ENDS IN "/"
    tmp_file_folder: "/tmp/",
    s3_info: {
        key: 'xxxxxx',
        secret: 'yyyyy',
        file_bucket: 'sendtransfer/',
    },
    backend_info: {
        db_info: {
            server: "localhost",
            user: "db_user",
            password: "secret",
            database: "SendRemote",
            pooled_connections: 125,
            idle_timeout_millis: 30000
        },
        memcache_info: {
            host: "127.0.0.1",
            port: "31111",
            pooled_connections: 200,
            timeout: 20000
        }
    },
    debug_server: true
};
For Windows machines, just change the paths. It's all good!
Then in code, you can just type:
var fs = require('fs');
var local = require('./local.config.js'); // note the ./ so Node loads the local file, not a module
fs.writeFile(local.config.log_file, 'log entry\n', function (err) { if (err) throw err; }); // example payload and callback
// etc
Embrace multiculturalism!!!
I am also on Windows 7 and use VirtualBox with a Linux (Debian) guest. I would recommend it because I myself am faster doing some things on the command line than clicking around in Windows.
Another nice feature is that if you put your VM on a USB stick, you can take it with you and use it anywhere a VirtualBox host is installed, so you can take your whole development environment with you.
It's no problem at all to use your favourite text editor or browser in Windows: just install Samba in the guest and mount your home directory into Windows.
The same goes for your browser: since the VM is just another machine on your LAN, instead of pointing your browser to localhost, point it to the VM's IP and you are fine.
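A rough sketch of the Samba part, assuming you add a share for your home directory to /etc/samba/smb.conf (the user name and the VM's IP below are placeholders):

$ sudo apt-get install samba
$ sudo smbpasswd -a youruser
$ sudo service smbd restart

Then map it from Windows with, for example, net use Z: \\192.168.56.101\youruser.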
The obvious con here is that if you don't have any experience with Linux yet, you should probably stick with Windows, because it will take you some time to get into it.
Just my two cents, maybe even less:
I'll suggest a third option: a dual-boot Windows/Ubuntu setup (preferably Ubuntu, which is the most GUI-friendly distribution), and it's worth researching this option as well. This way you would become more familiar with Linux/Unix (and even macOS), which will make you understand Windows better and make you a better programmer. Sometimes a virtual machine is too slow, while Linux is very efficient with resources.
If you have the ability to install a virtual machine, you can also give installing a Linux distribution a go and familiarize yourself with this family of operating systems, which a lot of the web is built upon.
I really enjoy coding Node.js on Windows using Git Bash:
http://blog.nodester.com/post/19902515151/tips-for-windows-users
It seems faster and easier than running VirtualBox. That said, I still use VirtualBox for testing before going to production.
Is it possible to run a Windows XP .bat script remotely from an Ubuntu machine via the command line?
This is possible if you have an ssh server running on the WinXP machine. It is trivial to set up such a server if you have installed Cygwin. This is well described here.
Then from an Ubuntu command line (or cron job) you run
ssh user@winxp command
Make sure your .bat has executable permissions.
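A minimal sketch, assuming the Cygwin sshd is running on the XP box and the script lives at C:\scripts\nightly.bat (both are placeholders):

$ ssh user@winxp 'cmd /c C:\\scripts\\nightly.bat'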
I think the technology you are after is WMI. I see that there is an Ubuntu package called wmi-client, which you can sudo apt-get install and attempt to use. From some quick searches I'm unable to find details, but maybe that will get you somewhere...
You could probably use something like Remote Desktop, and if you did it that way it would work, but your question isn't very specific.