I have Windows with the Linux subsystem (WSL) and I am trying to run Druid. I am getting the message CANNOT CREATE FIFO. What should I do to avoid it?
I faced the same issue myself while trying to run Druid through WSL Ubuntu. It seems a FIFO file can't be created on a mounted drive, i.e. under /mnt/c/.
As a workaround, you'd have to copy the entire Druid installation folder to an internal folder, e.g. /usr/share/, and launch it from there.
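A minimal sketch of the workaround, assuming Druid was unpacked under /mnt/c/druid (the paths and the start script name are examples and vary by installation and Druid version):

cp -r /mnt/c/druid /usr/share/druid    # copy off the mounted Windows drive
cd /usr/share/druid
./bin/start-micro-quickstart           # FIFOs can now be created on the Linux filesystem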
When I try to run any kubectl command, including kubectl version, I get a pop-up saying "This app can't run on your PC. To find a version for your PC, check with the software publisher." When the pop-up is closed, the terminal shows "access denied".
The weird thing is, when I run the "kubectl version" command in the directory where I have downloaded kubectl.exe, it works fine.
I have even added this path to my PATH variables.
Thank you for the answer, @rally.
Apparently, on my machine, it was an issue of administrative rights during installation. My workplace's IT granted the permission and it worked for me.
Adding this answer here so that if anyone else comes across this problem, they can try this solution as well.
Not knowing what exactly you downloaded, I would suggest you delete everything in the folder and follow the instructions for installing kubectl for Windows from here:
https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/
Note: downloading the .exe is not enough. You also need a kubeconfig file named "config", which contains the configuration to access your cluster.
kubectl looks for this file in a hidden folder under your user profile directory: C:\Users\<me>\.kube.
Just so you can try it out, I would suggest activating Kubernetes in your Docker Desktop installation. I assume you have it installed; if not, install it from the Docker site: https://www.docker.com/products/docker-desktop/
Activating Kubernetes inside Docker Desktop will also install kubectl and save the config in the .kube folder.
After the installation finishes, run this in a new terminal:
kubectl get node
You should see the single node of the Kubernetes cluster in Docker Desktop.
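The output should look roughly like this (the node name, age, and version shown here are illustrative):

NAME             STATUS   ROLES           AGE   VERSION
docker-desktop   Ready    control-plane   5m    v1.27.2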
Now if you want to access another cluster, you need the kubeconfig file for that cluster. If you have it, just rename the config in the .kube folder (so you don't lose it) and put the other config in its place.
If the new config file is correct you should be able to access that cluster.
The config file can be structured to hold more than one cluster configuration, and you can switch between them using a so-called context.
Here you can find information on how to do that, according to your needs:
https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/
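For illustration, once multiple clusters live in one config, switching between them looks like this (the context name docker-desktop is just an example):

kubectl config get-contexts                  # list all contexts in the merged config
kubectl config use-context docker-desktop    # switch the active context
kubectl config current-context               # verify which context is active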
Hope this helps you get started with Kubernetes.
So I have an interesting issue that I just can't figure out: why am I getting this, and what do I do about it?
Basically, I store all my development projects on my Synology NAS for local access between my various devices. There has never been a problem with this until I started playing around with Elixir and, more importantly, Phoenix. The issue occurs when running mix phx.server; I get the following:
[warn] Phoenix is unable to create symlinks. Phoenix' code reloader will run considerably faster if symlinks are allowed. On Windows, the lack of symlinks may even cause empty assets to be served. Luckily, you can address this issue by starting your Windows terminal at least once with "Run as Administrator" and then running your Phoenix application.
[info] Running DiscussWeb.Endpoint with cowboy 2.7.0 at 0.0.0.0:4000 (http)
[error] Could not start node watcher because script "z:/elHP/assets/node_modules/webpack/bin/webpack.js" does not exist. Your Phoenix application is still running, however assets won't be compiled. You may fix this by running "cd assets && npm install".
[info] Access DiscussWeb.Endpoint at http://localhost:4000
So I tried as it stated and ran it in CMD as admin, but to no avail. After some further inspection I tried to create the symlinks manually, but every time I tried I would get an Access is denied. error (yes, this is an elevated CMD).
c:\> mklink "z:\elHP\deps\phoenix" "z:\elHP\assets\node_modules\phoenix"
Access is denied.
So I believe it has something to do with the fact that the symlinks are being created on the NAS, because if I move the project and host it locally it works. Now I know what you're thinking: yes, I could just store the projects locally on my PC, but I like to have them available between PCs without having to transfer files or rely on git etc. (i.e. offline access), not to mention that the NAS has a full backup routine.
What I have tried:
Setting guest read write access on the SMB share
Adding to /etc/samba/smb.conf on my Synology NAS:
[global]
unix extensions = no
[share]
follow symlinks = yes
wide links = yes
Extra logging on SMB to see what is happening when I try it (nothing extra logged)
Creating a symbolic link from my MAC (works)
Setting all of the fsutil behavior query SymlinkEvaluation options to enabled (the exact invocation is shown below)
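For reference, this is the invocation I mean (run from an elevated CMD; it enables all four local/remote symlink evaluation modes):

fsutil behavior set SymlinkEvaluation L2L:1 R2R:1 L2R:1 R2L:1
fsutil behavior query SymlinkEvaluation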
At the moment I am stuck and unsure of what to try next, or even whether it is possible. I'm considering just using NFS instead, but will I face the same issues as with SMB?
P.S. I faced a similar issue with Python venvs a while ago, just a straight-up Access is denied. error, and I gave up and moved just the venv locally while keeping the bulk of the code on the NAS. (This actually ended up being the best solution in that case, because the environments of each device on my network clashed etc.)
Any ideas are greatly appreciated.
I am working in Zalenium and the execution is running. I have a requirement where I have to upload a file into a container from the Windows machine where my Docker containers are running. Can anyone help? I will explain the scenario in detail if I am not clear.
You can check "Mounting volumes/folders across containers" under https://opensource.zalando.com/zalenium/#docker. After that, you can upload the file by referencing the directory inside the container.
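A sketch of what that looks like, based on my reading of the Zalenium docs (the host path C:/files/to/upload is an example; adjust it to your machine):

docker run --rm -ti --name zalenium -p 4444:4444 -v /var/run/docker.sock:/var/run/docker.sock -v C:/files/to/upload:/tmp/mounted dosel/zalenium start

Anything placed in C:/files/to/upload on the host should then be visible as /tmp/mounted inside the spawned browser containers, so your test can reference that path when interacting with a file input.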
I'm trying to mount a network folder into a Docker container on Windows 10 with the following syntax. Using UNC paths does not work. I'm running under Hyper-V and the stable version of Docker.
docker run -v \\some\windows\network\path:/some/local/container
Previously I was using Docker Toolbox, and I could map a network share to an internal folder with VirtualBox. I've tried adding the network share as a drive, but it doesn't show up as an available drive under the settings panel.
Currently I'm using mklink to mirror a local folder to the network folder, but I'd like to not depend on this as a solution.
Do this with Windows-based containers
Go to the Microsoft documentation: https://learn.microsoft.com/en-us/virtualization/windowscontainers/manage-containers/persistent-storage#smb-mounts.
There you'll find information about how to mount a network drive as a volume in a Windows container.
Do this with Linux-based containers
This is currently (as of 2019-11-13) not possible. BUT you can use a plugin: https://github.com/ContainX/docker-volume-netshare
I didn't use it myself, so I have no experience with it. I just found it during my research and wanted to add it as a potential solution.
Recommended solution
While researching this topic, I came to the conclusion that you should probably mount the drive from within the container. You can pass the required credentials either via a file or as parameters.
Example for credentials as file
You would need to install the package cifs-utils in the container, add
COPY ./.smbcredentials /.smbcredentials
to the Dockerfile, and then run the following command after the container is started:
sudo mount -t cifs -o file_mode=0600,dir_mode=0755,credentials=/.smbcredentials //192.168.1.XXX/share /mnt
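For completeness, the referenced .smbcredentials file usually looks like this (the values are placeholders):

username=myuser
password=mypassword
domain=WORKGROUP

Note that mounting a CIFS share from inside a container typically requires extra privileges, e.g. starting the container with --cap-add SYS_ADMIN (or --privileged).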
Potential duplicate
There is another Stack Overflow thread on this topic here:
Docker add network drive as volume on windows
The answer provided there (https://stackoverflow.com/a/57510166/12338776) didn't work for me though.
I'm looking for a solution for monitoring a folder for new file creation and then executing a shell command on the created file. The scenario: I have a host machine that runs a virtual machine, and they share a folder. What I want is that when I create or copy a new file into that shared folder on the host machine, the VM should be able to detect those changes. I have tried incron and inotify, but they only work when I do the copy or create as a user inside the VM. Thanks.
Method 1 in this answer may help: Bash script, watch folder, execute command
Just run that script in your VM, and you should be able to detect changes made by the host.
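If inotify events don't fire for files created by the host (which can happen with some shared-folder filesystems), a simple polling loop is a fallback. A minimal sketch, assuming the share is mounted at /mnt/shared (the path, interval, and command to run are all placeholders):

#!/bin/bash
# Poll the shared folder every 5 seconds and run a command on each new file.
WATCH_DIR=/mnt/shared
SEEN=/tmp/seen_files.txt
touch "$SEEN"
while true; do
  for f in "$WATCH_DIR"/*; do
    [ -f "$f" ] || continue
    if ! grep -qxF "$f" "$SEEN"; then
      echo "$f" >> "$SEEN"
      # Replace the line below with your actual shell command:
      echo "New file detected: $f"
    fi
  done
  sleep 5
done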