When I want to start my DDEV project, a container gets stuck during creation:
Container ddev-oszimt-lf12a-v2-db Started
Error Message:
Failed waiting for web/db containers to become ready: db container failed: log=, err=health check timed out after 2m0s: labels map[com.ddev.site-name:oszimt-lf12a-v2 com.docker.compose.service:db] timed out without becoming healthy, status=
It's an error I have also had with some other projects.
There is no information about this in the error log.
What could the problem be, and how do I fix it?
This isn't a very good place to study problems with specific projects; our Discord channel or the DDEV issue queue is much better.
But I'll try to give you some ideas about how to study and debug this.
Go to the Troubleshooting section of the docs. Work through it step-by-step.
As it says there, try the simplest possible project and see what the results are.
If the problem is specific to one particular project, see if you can remove customizations like .ddev/docker-compose.*.yaml files, config.*.yaml files, and non-standard settings in config.yaml.
To find out what causes the healthcheck timeout, see the docs on this exact problem; in your case the db container is timing out. So first, run ddev logs -s db to see if something happened, and second, run docker inspect --format "{{json .State.Health }}" ddev-<projectname>-db.
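For this particular project, a minimal sketch of those two commands, using the project name from the error output above:

# show recent logs from the db service
ddev logs -s db
# dump the db container's health-check history as JSON
docker inspect --format "{{json .State.Health }}" ddev-oszimt-lf12a-v2-db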
For more help, you'll need to provide more information, such as your OS, Docker provider, etc. The easiest way to do that is to run ddev debug test, capture the output, put it in a gist on gist.github.com, and then come over to Discord with a link to that.
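For example, something like this should capture the output into a file you can paste into a gist (the file name is just a placeholder):

# run the diagnostic and keep a copy of everything it prints
ddev debug test 2>&1 | tee ddev-debug-test.txt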
I tried to play with the LowCardinality setting and got a message saying that this is an experimental feature and that I have to SET allow_experimental_low_cardinality_type = 1 in order to use it.
I executed this command inside clickhouse-client and then restarted the server. But I got:
clickhouse-server.service: Unit entered failed state
Now I am trying to find out how to disable this setting and make my clickhouse-server start again.
Can you help with this, please?
PS: The version I use is 18.12.17, and I installed it on Ubuntu 16.04.
ClickHouse has different layers for settings. If you used SET <setting> = <value>, then you set it for the current session only; you don't need to restart ClickHouse for that. Please take a look here.
I suppose you ran into a different problem when starting your server. There are a bunch of possible reasons, so first try to recall what was changed in the configs since the last restart (because by restarting the server you have just applied those changes).
Digging into the logs is also a good idea. Don't hesitate to check similar issues on github.com, for example this one.
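As a starting point for the log digging, assuming the default locations on Ubuntu 16.04, something like this should show why the server failed to start:

# systemd journal for the clickhouse-server unit
sudo journalctl -u clickhouse-server --no-pager -n 100
# ClickHouse's own error log (default path)
sudo tail -n 100 /var/log/clickhouse-server/clickhouse-server.err.log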
I am trying to use h2o steam (running on localhost) to deploy a model. After importing the model from h2o flow, clicking the "deploy model" option in the "models" section of the project, filling out the resulting dialog box, and clicking the "deploy" button, the following messages are displayed:
At first I thought that it was because maybe I needed to start up the service builder on my own, so I started it up following the docs here, but still got the same error. Any suggestions would be appreciated. Thanks :)
Just make sure the Jetty HTTP server is running locally by executing the following in your shell:
java -jar var/master/assets/jetty-runner.jar var/master/assets/ROOT.war
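A small sketch of how that might look in practice; since the paths are relative, it assumes you run it from wherever Steam was unpacked (the cd path below is a placeholder):

# change into the Steam installation directory (placeholder path)
cd /path/to/steam
# start the service builder with the bundled jetty-runner
java -jar var/master/assets/jetty-runner.jar var/master/assets/ROOT.war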
Looking here, it seems like I would need to "override" some kind of default restriction on accessing localhost:8080, which is what I assume Steam is trying to do to launch the service builder (I don't know much about networking-related stuff). I got around this by launching Steam with the command:
$ ./steam serve master --prediction-service-host=localhost --prediction-service-port-range=12345:22345
where the ports are an arbitrary range within (1025, 65535), which I got by word-searching a page of the Steam source code (line 182 as of the date of this posting).
Doing this lets me deploy the models through the Steam dialog without any error messages. Again, I don't know much about networking-related stuff, so if anyone has a better way to solve this problem (i.e. allowing access to localhost:8080), please post or comment. Thanks.
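If it helps anyone debugging the same thing, a quick check (not part of my original workaround) to see whether anything is actually answering on localhost:8080 is:

# prints an HTTP response if a service builder is listening there, fails with "connection refused" otherwise
curl -i http://localhost:8080/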
I'm currently trying to deploy Eremetic (version 0.28.0) on top of Marathon using the configuration provided as an example. I actually have been able to deploy it once, but suddenly, after trying to redeploy it, the framework stays inactive.
By inspecting the logs I noticed a constant attempt to connect to some service that apparently never succeeds because of some authentication problem.
2017/08/14 12:30:45 Connected to [REDACTED_MESOS_MASTER_ADDRESS]
2017/08/14 12:30:45 Authentication failed: EOF
It looks like the service returning the error is ZooKeeper, and more precisely the error can be traced back to this line in the Go ZooKeeper library. ZooKeeper itself, however, seems to work: I've tried querying it directly with zkCli and running a small Spark job (where the Mesos master is given as a zk:// URL), and everything works.
Unfortunately I'm not able to diagnose the problem further. What could it be?
It turned out to be a configuration problem. The master URL was simply wrong and this is how the error was reported.
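A quick way to sanity-check that kind of misconfiguration is to point zkCli at the exact host and path used in the master URL and confirm a master is actually registered there; the host, port, and /mesos path below are placeholders:

# list the znode the zk:// master URL points at; a live master typically shows up as a json.info_... child
zkCli.sh -server zk-host:2181 ls /mesos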
I know there are about a thousand answers to various permutations of this question but none of the fifteen or so that I've tried have worked.
I'm running on Mac OS Sierra and using Minikube 0.17.1 as well as kubectl 1.5.3.
We run our own private Docker registry that is insecure as it is not open to the internet. (This is not my choice or in my control so it's not open for discussion). This is my first foray into Kubernetes and actually container orchestration altogether. I also have a very intermediate level of knowledge about Docker in general so I'm drowning in terminology/platform soup here.
When I execute
kubectl run perf-ui --image=X.X.X.X/performance/perf-ui:master
I see
image pull failed for X.X.X.X/performance/perf-ui:master, this may be because there are no credentials on this request. details: (Error response from daemon: Get https://X.X.X.X/v1/_ping: dial tcp X.X.X.X:443: getsockopt: connection refused)
We have an Ubuntu box that accesses the same registry (not using Kubernetes, just Docker), and it works just fine. This is likely due to
DOCKER_OPTS="--insecure-registry X.X.X.X"
being in /etc/default/docker.
I made a similar change using the UI of Docker for Mac, though I don't know where this change is persisted in a config file. After this change, a docker pull worked on my laptop!!! Again, this is just using Docker, not Kubernetes. The interesting part is that I got the same "connection refused" error (as it tries to access via HTTPS) on my Mac as I get in the Minikube VM, and after the change the pull worked. I feel like I'm on to something there.
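In case it helps anyone checking the same thing, the registry should show up in docker info after the daemon restart (this is just a generic Docker check, not specific to Docker for Mac):

# the registry address should appear under "Insecure Registries" once the setting took effect
docker info | grep -A 3 "Insecure Registries"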
After sshing into Minikube (the VM created by minikube start) using
minikube ssh
I added the following content to /var/lib/boot2docker/profile
export EXTRA_ARGS="$EXTRA_ARGS --insecure-registry 10.129.100.3"
export DOCKER_OPTS="$DOCKER_OPTS --insecure-registry 10.129.100.3"
As you can infer, nothing has worked. I know I've tried other things but they too have failed.
I know this isn't the most comprehensive explanation but I've been digging into this for the past 4 hours.
The bottom line is docker pulls work from our Ubuntu box with the config file setup correctly and from my Mac with the setting configured properly.
How can I enable the same setting in my "Linux 2.6" VM that was created by Minikube?
If someone knows the answer I would be forever grateful.
Thank you in advance!
Thank you to Janos for your alternative solution. I'm confident that is the right choice for some use cases.
It turns out that what I needed was a good night sleep and the following command to start Minikube in the first place:
minikube start --insecure-registry="X.X.X.X"
@intelfx says that adding a port won't be necessary. I'm inclined to believe them, but if your registry is on a non-standard port, just keep it in mind in case things still aren't working.
In the end it was, in fact, a matter of telling Docker to use an insecure registry, but it was not clear how to tell Docker this when I was not controlling it directly.
I know that seems simple but after you've tried a hundred things you're almost hallucinating so you're not in a great state to make rational decisions. I'm sorry for the dumb post but I'm willing to bet this will help at least one person one day, which makes it worth it.
Thanks SO!
The flag --insecure-registry doesn't work on an existing cluster on macOS. You need to do minikube delete; it's not enough just to stop the cluster with minikube stop.
I spent plenty of time figuring this out, and then I found this comment at https://github.com/kubernetes/minikube/issues/604:
the --insecure-registry flag is ignored if the
machine already existed (even if it is stopped). You must first
minikube delete if you want new flags to be respected.
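So, putting the two answers together, the full sequence is roughly (X.X.X.X stands for your registry address, as above):

# throw away the existing VM so the new flag is respected
minikube delete
# recreate it with the insecure registry configured
minikube start --insecure-registry="X.X.X.X"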
You can use kube-registry-proxy (it needs some configuration):
https://github.com/kubernetes/kubernetes/blob/master/cluster/saltbase/salt/kube-registry-proxy/kube-registry-proxy.yaml
Then you can refer to localhost:5050 as your registry. The trick is that localhost is allowed as an insecure registry by default.
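With the proxy in place, a pull would then reference the local port instead of the registry address directly; for example, reusing the image from the question (the name is just illustrative):

# the node-local proxy on port 5050 forwards to the real registry
kubectl run perf-ui --image=localhost:5050/performance/perf-ui:master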
So I'm following this tutorial:
https://www.playframework.com/documentation/2.3.x/Installing
It all seems to be installed, i.e. all the commands work, but when I try to call:
activator new my-first-app play-scala
I get the following:
Fetching the latest list of templates...
Could not fetch the updated list of templates. Using the local cache.
Check your proxy settings or increase the timeout. For more details see:
http://typesafe.com/activator/docs
OK, application "another-app" is being created using the "play-scala" template.
akka.pattern.AskTimeoutException: Ask timed out on [Actor[akka://default/user/template-cache#1575831997]] after [10000 ms]
at akka.pattern.PromiseActorRef$$anonfun$1.apply$mcV$sp(AskSupport.scala:333)
at akka.actor.Scheduler$$anon$7.run(Scheduler.scala:117)
at scala.concurrent.Future$InternalCallbackExecutor$.unbatchedExecute(Future.scala:599)
at scala.concurrent.BatchingExecutor$class.execute(BatchingExecutor.scala:109)
at scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:597)
at akka.actor.LightArrayRevolverScheduler$TaskHolder.executeTask(Scheduler.scala:467)
at akka.actor.LightArrayRevolverScheduler$$anon$8.executeBucket$1(Scheduler.scala:419)
at akka.actor.LightArrayRevolverScheduler$$anon$8.nextTick(Scheduler.scala:423)
at akka.actor.LightArrayRevolverScheduler$$anon$8.run(Scheduler.scala:375)
at java.lang.Thread.run(Thread.java:744)
And nothing happens.
I just installed it on a PC in my house on the same network, so I don't think my connection is the issue. I'm not using a proxy either.
Got any ideas? I've been trying to get this working for over a day now.
I'm on OSX Yosemite by the way.
I sometimes have timeouts too, especially while working in the university on some sloppy WLAN.
There are two types of Activator: the usual lightweight one and the offline version. In the latter, all repositories are bundled, so Activator does not need to fetch anything from the internet.
When you go to https://www.playframework.com/download, look for the offline distribution (around 400 MB) and install it like the normal Activator.
If this solves your problem, there was something wrong with Activator trying to fetch something from a repository (you said that the commands work but you get timeouts).
[EDIT]: You can also set the timeout to 30 seconds and see if this helps:
activator -Dactivator.timeout=30s new "project name"
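For example, with the project name from the question that would be something like:

# same command as above, with an increased timeout
activator -Dactivator.timeout=30s new my-first-app play-scala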