Even though this question might look like a duplicate, I seem to be having a peculiar problem here.
Scenario 1: The project folder is in the /users directory
I get the error below when I try to start my Docker image:
docker: Error response from daemon: Mounts denied:
The path /users/myUserName/myApp/backend/build/pacts is not shared from OS X and is not known to Docker.
The exact same command worked a few days back and has suddenly stopped working.
Scenario 2: The project folder is in the /Documents folder
The docker run command that threw the error in Scenario 1 now somehow works fine.
The Docker preferences have /Users in the list of shared directories, and it still doesn't work.
(Screenshot of the Docker preferences attached.)
macOS version: Mojave (10.14.6)
Note: Whenever the docker run command throws the error in Scenario 1, simply moving the project to a new location (like /Downloads) makes it work fine. Even though this fixes the issue temporarily, I am curious why this error occurs even though the default preferences are as expected.
The path is case sensitive. The paths /users/myUserName/myApp/backend/build/pacts and /Users/myUserName/myApp/backend/build/pacts are different inside of Docker, while macOS treats them as the same.
To fix this, you likely need to cd /Users/myUserName/myApp/ before running your command.
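For example, something along these lines, using the correctly cased /Users path (the image name and container mount point here are placeholders):
cd /Users/myUserName/myApp
docker run -v "$(pwd)/backend/build/pacts":/pacts my-image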
I'm trying to map a Windows folder into a Docker container. The path is C:/myfolder.
Using the -v command line option works fine, but I'd like to do it in the Dockerfile.
RUN --mount=source="myfolder",target="/workspace/myfolder"
This works OK for a local folder, but not if I specify an absolute path:
RUN --mount=source="C:\myfolder",target="/workspace/myfolder"
This returns the error "failed to compute cache key: "/c:myfolder" not found: not found".
I tried different path syntaxes, such as //c/myfolder and others, but I always get an error during docker build. Relative paths such as "../myfolder" give the same result. What is the correct way of doing this?
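For reference, the relative form that does work keeps the folder inside the build context; with BuildKit it looks roughly like this (the ls is just a placeholder command, and the mount only exists for that single RUN step):
RUN --mount=type=bind,source=myfolder,target=/workspace/myfolder ls /workspace/myfolder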
I set up an app using Laravel Sail and hosted it in my C:/Users/User/my_app folder; however, the API endpoints were terribly slow (around 7 s to respond).
I decided to move my application to the WSL filesystem. I copied the my_app folder to the \\wsl$\Ubuntu-20.04\home folder. However, when I run the ./vendor/bin/sail up command, nothing happens. No error message, no "command not found" message, nothing.
I tried changing the home/my_app permissions as well as the vendor/bin/sail permissions, but it has not helped. I have no idea how to solve this problem, as I am not getting any output from the console.
I think I solved the issue by copying the files from Windows to WSL using the cp command run from the WSL console (cp /mnt/c/users/..... ).
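Roughly something like this, with the paths adjusted to your own setup:
cp -r /mnt/c/Users/User/my_app ~/my_app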
However, I stumbled upon the error Laravel & Docker: The stream or file "/var/www/html/storage/logs/laravel.log" could not be opened: failed to open stream: Permission denied, which I solved using the answers from this GitHub thread: https://github.com/aschmelyun/docker-compose-laravel/issues/49.
Now my endpoint response times are usually under 100ms.
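For reference, the usual fix for that laravel.log permission error is to make Laravel's writable directories writable from inside the container; a rough sketch, run from the project root (exact flags and ownership depend on your image):
chmod -R ug+rw storage bootstrap/cache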
Previously, prior to a Docker update I think, I would build a Docker image and then mount a local directory with the command below. It has always worked, but now it doesn't.
This command worked (it mounts a local directory on my computer so the container can access it):
docker run -it -v [directory]:/inside-container [image id] bash
Now it throws this error:
docker: invalid reference format.
See 'docker run --help'.
I cannot understand what changed.
Any suggestions?
A "reference" is a pointer to an image.
"Invalid reference format" error frequently happens when an invalid arg gets parsed as the image name or invalid characters from copy/pasting from a source that changes dashes and quotes.
https://sudo-bmitch.github.io/presentations/dc2018/faq-stackoverflow-lightning.html#29
Does your directory contain spaces?
Double-check the syntax: quotes, hidden characters, etc.
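For example, if the directory path contains spaces, quoting it keeps the rest of the line from being parsed as the image name (the path and image here are placeholders):
docker run -it -v "/Users/me/my project":/inside-container ubuntu:20.04 bash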
There were issues with my Docker version, and when I got the latest update, everything worked again. It is very strange that exactly the same command didn't run on a prior version. I'm not sure how I feel about all that, but it's fixed now on Docker version 19.03.13.
I've been trying to use Cisco's libacvp on my Windows 10 (64-bit) computer for some time now. I have installed OpenSSL 1.1.1g and Docker (19.03.12).
While trying to run docker build -t libacvp_w_openssl111 . in cmd I've been getting the error unable to prepare context: unable to evaluate symlinks in context path: EvalSymlinks: too many links.
I looked through this post (which seemed to get a lot of attention), but the only solution found there was to check that the Dockerfile was in the correct directory (also relative to the current directory in cmd), had no file extension, and was capitalized correctly, which it was. Beyond that, no help.
Any thoughts?
Thanks
It looks like you're executing docker build in a folder that has a lot of symlinks, some of which point out of the current directory (hence the unable to prepare context error message).
Try creating a new empty folder, copying the Dockerfile there, and running docker build -t libacvp_w_openssl111 . again.
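For example, something like this from cmd (the folder name is arbitrary):
mkdir C:\libacvp-build
copy Dockerfile C:\libacvp-build
cd C:\libacvp-build
docker build -t libacvp_w_openssl111 .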
I had exactly the same problem, but in my case it was a matter of the long path where the Dockerfile was sitting. When I moved it to the root folder of another drive, it worked like a charm.
I am new here and I will try to explain my question; kindly ignore any mistakes.
I am using Git version 2.8.2.
It worked fine for one day, then this problem occurred.
I am using a gcloud repository.
First I tried the gcloud clone command, and this error occurred.
Then, to make sure Git was there, I tried the git command, and the same error occurred.
Then I double-checked by opening Git Bash, but the same error was there too.
I tried reinstalling and changing the directory, but nothing works.
I faced the same problem after I tried to avoid a memory leak in Windows 10. If you happened to change the registry like me, just type regedit in the search box, then go to
HKEY_LOCAL_MACHINE -> SYSTEM -> ControlSet001 -> Services -> Null
and change the value of Start to 1.
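If you prefer the command line, the equivalent from an elevated prompt should be roughly this (assuming the Null key itself still exists):
reg add "HKLM\SYSTEM\ControlSet001\Services\Null" /v Start /t REG_DWORD /d 1 /f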
I accidentally bumped into the same problem when I was sorting out the services running on my computer with Windows 10.
fatal: open /dev/null or dup failed: No such file or directory
The reason was that I had deleted the service named 'Null', which had no description, because I thought it was a virus service.
Thus, when I found Git unable to operate, I suspected the deleted service.
Following a solution provided on some site, I tried to start the service again using cmd.exe:
sc config Null start= system
sc start Null
but it said the service didn't exist in the list.
Thankfully, some kind folks have shared the list of default services running on Windows 10 and the details necessary to bring the service back successfully.
To get the service back in the list:
press Win + R
type regedit
go to HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Services section
Create the Null key and all the values it needs (see the sketch after these steps).
Restart your computer.
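For reference, a rough command-line sketch of those values from an elevated prompt; the data here is from memory of the stock Windows 10 Null driver entry, so verify it against a trusted list of default services before relying on it:
reg add "HKLM\SYSTEM\ControlSet001\Services\Null" /v Type /t REG_DWORD /d 1 /f
reg add "HKLM\SYSTEM\ControlSet001\Services\Null" /v Start /t REG_DWORD /d 1 /f
reg add "HKLM\SYSTEM\ControlSet001\Services\Null" /v ErrorControl /t REG_DWORD /d 3 /f
reg add "HKLM\SYSTEM\ControlSet001\Services\Null" /v ImagePath /t REG_EXPAND_SZ /d "\SystemRoot\System32\drivers\null.sys" /f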
Now you have your Null service back, and your Git back as well.
Hope this helps.
I solved my problem accidentally. I would like to share it with everyone.
It was not a problem with Git, gcloud, or SourceTree.
Actually, I had forcefully stopped Windows Update from installing, which caused this problem.
When I installed the Windows updates again, the problem was fixed.
Maybe this helps someone.
A similar situation in a chrooted Linux tree is fixable the following way:
cd into the folder where you are preparing the chroot directory, then run
mount -o bind /dev dev/
and only then chroot inside.
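In other words, roughly (the chroot path and shell are placeholders):
cd /path/to/chroot
mount -o bind /dev dev/
chroot . /bin/bash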
I had this weird bug just now. I went up a directory and tried git init; it worked there.
I re-ran zsh, tried again in the directory where it originally errored, and it worked. Shrugs.