I am working through the process of activating Kerberos on the Cloudera quickstart VM. The VM begins life with the hostname "quickstart.cloudera", but I had to change it to get it into our local DNS consistently. After changing the name I was able to get everything except Impala to come up. The manager is passing it --hostname=quickstart.cloudera even though everything else in the whole system knows the new name. I don't strictly need Impala running for the tests I have to run, but it's driving me nuts. Any clues?
I'm looking at a relatively fresh install of CM 5.3 with Impala 2.1 and I don't see the hostname being passed to the catalog server via cmdline flags.
Unless you're explicitly setting the hostname parameter in CM via a safety-valve configuration (I assume you're not), Impala gets the hostname to use by calling the gethostname() stdlib function (see the gethostname() man page). The log output snippet you showed is somewhat misleading: when the flag isn't set, Impala sets that value itself, and it shows up as if it had been passed by the user.
Anyway, you should check that the hostname is properly changed on the box, which may depend on your OS. A few things to try: check that the hostname command returns the name you expect, and that you've restarted the OS since changing it.
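A minimal sanity check from a shell on the VM, assuming a CentOS-style guest like the quickstart VM (file locations may differ on your OS):
hostname         # should print the new name; Impala picks this up via gethostname()
hostname -f      # fully qualified name, should match your DNS entry
# persistent hostname on CentOS 6-era systems:
grep HOSTNAME /etc/sysconfig/network
# make sure /etc/hosts maps to the new name rather than quickstart.cloudera:
grep -v '^#' /etc/hosts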
I have an Ansible script to update and maintain my WildFly installation. One of the tasks in this setup is managing the MySQL driver, and in order to update that driver I first have to disable the application that uses it before I can replace it and set up all my datasources anew.
My CLI script starts with the following lines:
if (outcome == success) of /deployment=my-app-1.1.ear:read-resource
deployment disable my-app-1.1.ear
end-if
My problem is that this makes me very dependent on the actual name of the application, and that name can change over time since I keep the version information in it.
I tried the following:
set foo=`ls /deployment`
deployment disable $foo
It did not work: when I look at foo I see that it is not my-app-1.1.ear but ["my-app-1.1.ear"] -- so I feel I might be going in the right direction, even though I haven't got it right yet.
I tried to play with the LowCardinality type and got a message saying that this is an experimental feature and that I have to SET allow_experimental_low_cardinality_type = 1 in order to use it.
I executed this command inside clickhouse-client and then restarted the server, but I got
clickhouse-server.service: Unit entered failed state
Now I am trying to find out how to disable this setting and make my clickhouse-server start again.
Can you help with this, please?
PS: The version I use is 18.12.17, installed on Ubuntu 16.04.
ClickHouse has different layers for settings. If you used SET <setting> = <value>, then you set it for the current session only; you don't need to restart ClickHouse for it to take effect. Please take a look here.
I suppose you ran into a different problem when starting your server; there are a bunch of possible reasons. So, first try to recall what was changed in the configs since the last restart (because the restart has just applied those changes).
Digging into the logs is also an excellent idea. Don't hesitate to check other similar issues on github.com, for example like this one
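A couple of quick checks, assuming the stock Ubuntu package layout (log paths may differ in your setup):
# once the server is back up, confirm the setting is session-scoped:
clickhouse-client --query "SELECT name, value, changed FROM system.settings WHERE name LIKE '%low_cardinality%'"
# inspect why the service failed to start:
sudo journalctl -u clickhouse-server -n 50 --no-pager
sudo tail -n 100 /var/log/clickhouse-server/clickhouse-server.err.log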
So, I am attempting to create an install script for my application (targeting Ubuntu 16). It has to create a postgresql user, grant permission to that user to authenticate via password, and grant permission to that user to authenticate locally. I only want to grant permission to do that on one database, the application database. So I need to insert the line local databasename username md5 above the lines that reject unknown connections, e.g., in the "Put your actual configuration here" section of pg_hba.conf. (pg_hba.conf uses position in the file to determine priority: first rule encountered that matches the connection gives the final result.)
To add this line, my script runs:
sudo awk '
    # insert our rule just above the catch-all entries that follow this comment
    /# Put your actual configuration here/ {
        print "local databasename username md5"
    }
    { print }
' /etc/postgresql/9.5/main/pg_hba.conf > /tmp/pg_hba.conf.new
# write the result back with the original owner and permissions
sudo install -o postgres -g postgres -m 640 /tmp/pg_hba.conf.new /etc/postgresql/9.5/main/pg_hba.conf
# other setup
sudo service postgresql restart
But that's less than optimal. First, the version number will change in the future, so hardcoding the directory is poor. Second, it makes a comment in someone else's project an actual structural part of the config file, which is a horrible idea from all possible points of view in all possible universes.
So my question is two-part. First, is there a good, correct, and accepted method to edit pg_hba.conf that I can use in an installation script, instead of kitbashing about with text editors?
Second, if there is no good answer to the first part: is there a programmatic way to ask postgresql where it's pulling pg_hba from?
Is there a programmatic way to ask postgresql where it's pulling pg_hba from?
show hba_file;
-- or
select current_setting('hba_file');
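In a shell script you'd typically capture that, for example like this (assuming the script can sudo to the postgres user):
HBA_FILE=$(sudo -u postgres psql -tAc 'show hba_file')
echo "$HBA_FILE"    # e.g. /etc/postgresql/9.5/main/pg_hba.conf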
Debian tool chain
So my question is two-part. First, is there a good, correct, and accepted method to edit pg_hba.conf that I can use in an installation script instead of kitbashing about with text editors?
Yes, however, you'll probably find it unsatisfactory.
Upstream, PostgreSQL's build tools don't support multiple versions installed side by side. Debian does, so Debian invented the concept of a cluster, which is essentially a name plus a version number.
If you're building a tool on Ubuntu or Debian, you should probably work in terms of a name and version number too.
Second, if there is no good answer to the first part: is there a programmatic way to ask postgresql where it's pulling pg_hba from?
Yes, there is a tool called pg_conftool. The default cluster's name is main, so if you want the 9.5/main cluster you can do this:
pg_conftool -s 9.5 main show hba_file
/etc/postgresql/9.5/main/pg_hba.conf
As you can see, pg_conftool can make use of a version and cluster name, but strictly speaking it may not require them:
/usr/bin/pg_conftool [options] [<version> <cluster name>] [<configfile>] <command>
If you want to know more about a cluster in this context, check out all the binaries starting with pg_*, but first and foremost pg_ctl and pg_ctlcluster (the Debian wrapper).
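For instance (both commands ship with Debian/Ubuntu's postgresql-common package):
pg_lsclusters                       # list version, name, port and status of every cluster
sudo pg_ctlcluster 9.5 main reload  # reload 9.5/main so pg_hba.conf changes take effect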
I know there are about a thousand answers to various permutations of this question but none of the fifteen or so that I've tried have worked.
I'm running on Mac OS Sierra and using Minikube 0.17.1 as well as kubectl 1.5.3.
We run our own private Docker registry that is insecure as it is not open to the internet. (This is not my choice or in my control so it's not open for discussion). This is my first foray into Kubernetes and actually container orchestration altogether. I also have a very intermediate level of knowledge about Docker in general so I'm drowning in terminology/platform soup here.
When I execute
kubectl run perf-ui --image=X.X.X.X/performance/perf-ui:master
I see
image pull failed for X.X.X.X/performance/perf-ui:master, this may be because there are no credentials on this request. details: (Error response from daemon: Get https://X.X.X.X/v1/_ping: dial tcp X.X.X.X:443: getsockopt: connection refused)
We have an Ubuntu box that accesses the same registry (not using Kubernetes, just Docker) that works just fine. This is likely due to
DOCKER_OPTS="--insecure-registry X.X.X.X"
being in /etc/default/docker.
I made a similar change using the UI of Docker for Mac, though I don't know which config file that change persisted to. After the change, a docker pull worked on my laptop!!! Again, this is just using Docker, not Kubernetes. The interesting part is that I got the same "connection refused" error (as it tries to access via HTTPS) on my Mac as I get in the Minikube VM, and after the change the pull worked. I feel like I'm on to something there.
After sshing into minikube (the VM created by minikube start) using
minikube ssh
I added the following content to /var/lib/boot2docker/profile
export EXTRA_ARGS="$EXTRA_ARGS --insecure-registry 10.129.100.3"
export DOCKER_OPTS="$DOCKER_OPTS --insecure-registry 10.129.100.3"
As you can infer, nothing has worked. I know I've tried other things but they too have failed.
I know this isn't the most comprehensive explanation but I've been digging into this for the past 4 hours.
The bottom line is that docker pulls work from our Ubuntu box with the config file set up correctly, and from my Mac with the setting configured properly.
How can I enable the same setting in my "Linux 2.6" VM that was created by Minikube?
If someone knows the answer I would be forever grateful.
Thank you in advance!
Thank you to Janos for your alternative solution. I'm confident that is the right choice for some use cases.
It turns out that what I needed was a good night sleep and the following command to start Minikube in the first place:
minikube start --insecure-registry="X.X.X.X"
@intelfx says that adding a port won't be necessary. I'm inclined to believe them, but if your registry is on a non-standard port just keep it in mind in case things still aren't working.
In the end it was, in fact, a matter of telling Docker to use an insecure registry but it was not clear how to tell this to Docker when I was not controlling it directly.
I know that seems simple but after you've tried a hundred things you're almost hallucinating so you're not in a great state to make rational decisions. I'm sorry for the dumb post but I'm willing to bet this will help at least one person one day, which makes it worth it.
Thanks SO!
The flag --insecure-registry doesn't work on an existing cluster on macOS. You need to do minikube delete; it's not enough to just stop the cluster with minikube stop.
I spent plenty of time to figure this out and then I found this comment at https://github.com/kubernetes/minikube/issues/604:
the --insecure-registry flag is ignored if the
machine already existed (even if it is stopped). You must first
minikube delete if you want new flags to be respected.
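Putting that together with the accepted answer above, the sequence that finally respects the flag looks like this (same placeholder address as in the question):
minikube delete
minikube start --insecure-registry="X.X.X.X"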
Alternatively, you can use kube-registry-proxy (it needs some configuration), from:
https://github.com/kubernetes/kubernetes/blob/master/cluster/saltbase/salt/kube-registry-proxy/kube-registry-proxy.yaml
Then you can refer to localhost:5050 as your registry. The trick is that localhost is allowed as an insecure registry by default.
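With the proxy in place, the run command from the question would become something like this (image name reused from the question; the port depends on how you configure the proxy):
kubectl run perf-ui --image=localhost:5050/performance/perf-ui:master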
When starting the Odoo 8 server, the command provides an "--auto-reload" option, but I don't know how it works or when it applies.
Please give me some guidance on this.
Normally, if you change your Python code, you need to restart the server in order to apply the changes.
When the --auto-reload parameter is enabled, you don't need to restart the server: it enables auto-reloading of Python and XML files. It requires pyinotify, a Python module for monitoring filesystem changes.
Just add --auto-reload in your configuration file. By default the value is "false", and you don't need to pass any extra arguments; --auto-reload is enough. If everything is set up and working properly you will see
openerp.service.server: Watching addons folder /opt/odoo/v8.0/addons
openerp.service.server: AutoReload watcher running
in the server log. Don't forget to install the pyinotify package.
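For example, with pip and a typical Odoo 8 layout (the config path is illustrative; use your own):
sudo pip install pyinotify
./openerp-server -c /etc/odoo/openerp-server.conf --auto-reload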
I found this while looking for the same thing, but for Odoo 10. Someone will follow the same route, so:
This has been changed in Odoo 10 to --dev=reload. BUT you can't specify that in /etc/init.d/odoo itself, only on the command line.
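So, for Odoo 10, something like this from the command line (config path illustrative):
./odoo-bin -c /etc/odoo/odoo.conf --dev=reload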