How to specify a local directory in the INTO OUTFILE clause? - clickhouse

SELECT *
FROM tabname
INTO OUTFILE '~/results.csv'
FORMAT CSV
How can I make the outfile path point to a directory on the local workstation?

You need to run the command from the client side (the local workstation) and give the path to a local file:
clickhouse-client --host ch_server --user test_user --password 12345 \
--query="select * from db_name.table_name INTO OUTFILE '/tmp/result.csv' FORMAT CSV"
To install clickhouse-client, use:
sudo apt-get install clickhouse-client
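Alternatively, since clickhouse-client writes query results to stdout, you can skip INTO OUTFILE and redirect on the shell side; a minimal sketch using the same hypothetical host and table as above:
# query output goes to stdout, so a plain shell redirect lands it in a local file
clickhouse-client --host ch_server --user test_user --password 12345 \
--query="SELECT * FROM db_name.table_name FORMAT CSV" > ~/results.csv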

Related

How to properly run an entrypoint bash script on Docker?

I would like to build a Docker image for dumping large SQL Server tables into S3 using the bcp tool, by combining this Docker image and this script. Ideally I could pass the table, database, user, password and S3 path as arguments to the docker run command.
The script looks like
#!/bin/bash
TABLE_NAME=$1
DATABASE=$2
USER=$3
PASSWORD=$4
S3_PATH=$5
# read sqlserver...
# write to s3...
# .....
And the Dockerfile is:
# SQL Server Command Line Tools
FROM ubuntu:16.04
LABEL maintainer="SQL Server Engineering Team"
# apt-get and system utilities
RUN apt-get update && apt-get install -y \
curl apt-transport-https debconf-utils \
&& rm -rf /var/lib/apt/lists/*
# adding custom MS repository
RUN curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add -
RUN curl https://packages.microsoft.com/config/ubuntu/16.04/prod.list > /etc/apt/sources.list.d/mssql-release.list
# install SQL Server drivers and tools
RUN apt-get update && ACCEPT_EULA=Y apt-get install -y msodbcsql mssql-tools awscli
RUN echo 'export PATH="$PATH:/opt/mssql-tools/bin"' >> ~/.bashrc
RUN /bin/bash -c "source ~/.bashrc"
ADD ./sql2sss.sh /opt/mssql-tools/bin/sql2sss.sh
RUN chmod +x /opt/mssql-tools/bin/sql2sss.sh
RUN apt-get -y install locales
RUN locale-gen en_US.UTF-8
RUN update-locale LANG=en_US.UTF-8
ENTRYPOINT ["/opt/mssql-tools/bin/sql2sss.sh", "DB.dbo.TABLE", "SQLSERVERDB", "USER", "PASSWORD", "S3PATH"]
If I replace the ENTRYPOINT with CMD /bin/bash and run the image with -it, I can manually run sql2sss.sh and it works properly, reading from SQL Server and writing to S3. However, if I use the ENTRYPOINT as shown, it yields bcp: command not found.
I also noticed that if I use CMD /bin/sh in interactive mode, it produces the same error. Am I missing some configuration for the ENTRYPOINT to run the script properly?
Have you tried
ENV PATH="/opt/mssql-tools/bin:${PATH}"
instead of exporting it in the bashrc?
As David Maze pointed out, Docker doesn't read dot files such as .bashrc when running your entrypoint.
Basically, add your env definitions with the ENV instruction.
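Applied to the Dockerfile above, a minimal sketch of the relevant lines (the RUN "source ~/.bashrc" line can simply be dropped, since sourcing a file in one layer has no effect on later ones; leaving the ENTRYPOINT without baked-in arguments lets docker run supply them):
# make the mssql tools visible to every later layer and to the entrypoint
ENV PATH="/opt/mssql-tools/bin:${PATH}"
ADD ./sql2sss.sh /opt/mssql-tools/bin/sql2sss.sh
RUN chmod +x /opt/mssql-tools/bin/sql2sss.sh
ENTRYPOINT ["/opt/mssql-tools/bin/sql2sss.sh"]
Arguments supplied at run time are appended after the ENTRYPOINT, e.g. (image name hypothetical):
docker run my-sql2s3-image DB.dbo.TABLE SQLSERVERDB USER PASSWORD S3PATH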

How to run a db migration script in a Kubernetes pod from bash?

I would like to run database migration scripts in an Ubuntu pod automatically.
How I am doing this manually:
$ kubectl run -i --tty ubuntu --image=ubuntu:focal -- bash
$ apt install -y postgresql-client
$ psql "hostaddr=addr port=5432 user=username password=pass dbname=dbname"
COPY persons(first_name, last_name, dob, email)
FROM './persons.csv'
DELIMITER ','
CSV HEADER;
$ exit
I would like to create a bash script for this purpose and run it locally. Could you please advise how to script it? The first command connects to a remote bash session, after which I am not able to execute the other commands. I'm definitely doing something wrong.
Thank you.
Use here documents.
#!/bin/bash
# note: no --tty here, since stdin is the heredoc rather than a terminal,
# and apt-get update is needed before installing on a fresh image
kubectl run -i ubuntu --image=ubuntu:focal -- bash <<EOF
apt-get update && apt-get install -y postgresql-client
psql "hostaddr=addr port=5432 user=username password=pass dbname=dbname" <<EOF2
COPY persons(first_name, last_name, dob, email)
FROM './persons.csv'
DELIMITER ','
CSV HEADER;
EOF2
EOF
Let's assume we have a command that is supposed to execute some SQL query on a PostgreSQL server in a Kubernetes cluster (it embeds quoting and an environment assignment, so it must be run through a shell, as shown below):
export pgcmd="PGPASSWORD=pass1234 psql -U username -d mydatabase -h addr -p port -c \"COPY persons(first_name, last_name, dob, email) FROM './persons.csv' DELIMITER ',' CSV HEADER;\" "
or by using URL syntax
export pgcmd="psql postgresql://username:pass#addr:5432/mydatabase -c \"COPY persons(first_name, last_name, dob, email) FROM './persons.csv' DELIMITER ',' CSV HEADER;\" "
Actually, it's more convenient to use the official postgres Docker image instead of installing the postgresql client on an Ubuntu image:
(if I use the same image as the one used to spin up the postgresql server, I can save some time on pulling the image from the repository)
kubectl run -it --rm pgclient --image=postgres -- sh -c "$pgcmd"
Alternatively, you can run the command in the postgresql server pod itself:
kubectl exec -it postgresql-server-pod-name -- sh -c "$pgcmd"
or forward a local port to the postgresql server and execute the command through the tunnel:
kubectl port-forward postgresql-server-pod-name 8888:5432 &
# or we can use the parent object to connect:
# kubectl port-forward deployment/postgresql-server-deploy-name 8888:5432 &
# save ID of the background process
proxyid=$!
# run the postgres command locally, through a shell so the embedded quoting and the PGPASSWORD assignment take effect
eval "$pgcmd"
# switch off port forwarding and cleanup environment variables
unset PGPASSWORD
kill $proxyid && unset proxyid
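One caveat with the port-forward variant: $pgcmd as defined above still targets addr:5432, while the tunnel listens on 127.0.0.1:8888, so the locally executed command has to point at the local end of the tunnel; a sketch reusing the hypothetical credentials from above:
# connect to the forwarded port, not the in-cluster address
PGPASSWORD=pass1234 psql -U username -d mydatabase -h 127.0.0.1 -p 8888 \
-c "COPY persons(first_name, last_name, dob, email) FROM './persons.csv' DELIMITER ',' CSV HEADER;"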

PostgreSQL: execute query from script

I'm installing PostgreSQL + PostGIS on a CentOS 7 virtual machine using Vagrant and VirtualBox.
My Vagrantfile is the following ...
Vagrant.configure("2") do |config|
  config.vm.box = "centos/7"
  config.vm.network "private_network", ip: "192.168.56.2"
  config.vm.provider "virtualbox" do |vb|
    vb.memory = "4096"
    vb.name = "Test"
  end
  config.vm.provision "shell", path: "./scripts/InstallPostgresqlPostgis.sh"
end
In ./scripts/InstallPostgresqlPostgis.sh there are all the commands to install PostgreSQL and, when run, PostgreSQL is installed and works.
To add PostGIS to my PostgreSQL installation interactively, I use this procedure:
su postgres
----->>>>>>> HERE I'VE TO PUT THE USER PASSWORD <<<<<<<-------
psql
-- Enable PostGIS (includes raster)
CREATE EXTENSION postgis;
-- Enable Topology
CREATE EXTENSION postgis_topology;
-- Enable PostGIS Advanced 3D
-- and other geoprocessing algorithms
-- sfcgal not available with all distributions
CREATE EXTENSION postgis_sfcgal;
-- fuzzy matching needed for Tiger
CREATE EXTENSION fuzzystrmatch;
-- rule based standardizer
CREATE EXTENSION address_standardizer;
-- example rule data set
CREATE EXTENSION address_standardizer_data_us;
-- Enable US Tiger Geocoder
CREATE EXTENSION postgis_tiger_geocoder;
\q
and all works.
I have to "translate" this procedure into the InstallPostgresqlPostgis.sh script referenced in my Vagrantfile, and I've tried this:
sudo -u postgres -H -- psql -d postgres -c "CREATE EXTENSION postgis"
sudo -u postgres -H -- psql -d postgres -c "CREATE EXTENSION postgis_topology"
sudo -u postgres -H -- psql -d postgres -c "CREATE EXTENSION postgis_sfcgal"
sudo -u postgres -H -- psql -d postgres -c "CREATE EXTENSION fuzzystrmatch"
sudo -u postgres -H -- psql -d postgres -c "CREATE EXTENSION address_standardizer"
sudo -u postgres -H -- psql -d postgres -c "CREATE EXTENSION address_standardizer_data_us"
sudo -u postgres -H -- psql -d postgres -c "CREATE EXTENSION postgis_tiger_geocoder"
but the result is ...
default: could not change directory to "/home/vagrant": Permission denied
default: CREATE EXTENSION
default: could not change directory to "/home/vagrant": Permission denied
default: CREATE EXTENSION
default: could not change directory to "/home/vagrant": Permission denied
default: CREATE EXTENSION
default: could not change directory to "/home/vagrant": Permission denied
default: CREATE EXTENSION
default: could not change directory to "/home/vagrant": Permission denied
default: CREATE EXTENSION
default: could not change directory to "/home/vagrant": Permission denied
default: CREATE EXTENSION
default: could not change directory to "/home/vagrant": Permission denied
default: CREATE EXTENSION
What am I doing wrong?
Your problem is that you are executing the commands with a working directory that is not accessible to the postgres user. In fact, it is the home directory of the user executing the commands (vagrant).
There are three approaches to fixing this issue:
1. Use the --login (or -i for short) option of sudo. This causes sudo to execute the commands with settings similar to a login shell; in particular, it will (try) changing to the target user's home directory as the working directory.
2. Change the working directory within your script using cd ~postgres. This will result in all subsequent sudo commands being executed there.
3. Allow user postgres access to the home directory of user vagrant. THIS IS DANGEROUS AND ABSOLUTELY NOT RECOMMENDED!!! I just mention it for completeness. It might be an option iff you need such access regularly and you have some fine-grained access control at hand (e.g. ACLs) that allows ensuring postgres really is the only user being granted access. Even then you should think thrice!
In most cases alternatives 1. or 2. are to be preferred.
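For reference, approach 1 applied to the provisioning script might look like the following sketch (same extensions as in the question; sudo -i switches the working directory to postgres's home, so the warnings disappear):
# login shell via sudo -i, all extensions created in one psql session
sudo -i -u postgres psql -d postgres <<'SQL'
CREATE EXTENSION postgis;
CREATE EXTENSION postgis_topology;
CREATE EXTENSION postgis_sfcgal;
CREATE EXTENSION fuzzystrmatch;
CREATE EXTENSION address_standardizer;
CREATE EXTENSION address_standardizer_data_us;
CREATE EXTENSION postgis_tiger_geocoder;
SQL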
I've solved it in this way ...
sudo su postgres
sudo -u postgres -H -- psql -d postgres -c "CREATE EXTENSION postgis"
sudo -u postgres -H -- psql -d postgres -c "CREATE EXTENSION postgis_topology"
sudo -u postgres -H -- psql -d postgres -c "CREATE EXTENSION postgis_sfcgal"
sudo -u postgres -H -- psql -d postgres -c "CREATE EXTENSION fuzzystrmatch"
sudo -u postgres -H -- psql -d postgres -c "CREATE EXTENSION address_standardizer"
sudo -u postgres -H -- psql -d postgres -c "CREATE EXTENSION address_standardizer_data_us"
sudo -u postgres -H -- psql -d postgres -c "CREATE EXTENSION postgis_tiger_geocoder"

Is it possible to skip the license agreement page when Splunk starts for the first time in a Docker container?

I have created a Dockerfile, and while the container builds I need to create multiple login users on the Splunk side.
I am getting a Splunk license-agreement issue: I am unable to skip/accept the agreement in the built container, as shown below.
Dockerfile
FROM splunk/splunk:latest
ENV SPLUNK_HOME /opt/splunk
RUN apt-get update && apt-get install -y wget
COPY ./splunk-launch.conf /opt/splunk/etc/splunk-launch.conf
COPY ./splunk.license /opt/splunk/etc/licenses/enterprise/splunk.license
COPY ./My-app1 /opt/splunk/etc/apps/My-app1
COPY ./My-app2 /opt/splunk/etc/apps/My-app2
COPY ./My-app3 /opt/splunk/etc/apps/My-app3
COPY ./splunk_user.sh /opt/splunk/bin/splunk_user.sh
RUN chmod +x /opt/splunk/bin/splunk_user.sh
RUN chown -R splunk:splunk /opt/splunk/bin/splunk_user.sh
EXPOSE 8000/tcp 8089/tcp 8191/tcp 9997/tcp 1514 8088/tcp
VOLUME [ "/opt/splunk/etc", "/opt/splunk/var" ]
WORKDIR /opt/splunk/bin
CMD ["./splunk_user.sh"]
splunk_user.sh
./splunk add user pradeep -password passwd123 -role admin -email pradeep@gmail.com -full-name Pradeep -auth admin:changeme
./splunk add user sankar -password passwd123 -role admin -email sankar@gmail.com -full-name Sankar -auth admin:changeme
Error
From the image readme, you need to run the image with:
-e "SPLUNK_START_ARGS=--accept-license"
In your Dockerfile, that would be the equivalent of:
ENV SPLUNK_START_ARGS=--accept-license
This flag gets passed to the splunk command in their entrypoint.sh:
sudo -HEu ${SPLUNK_USER} ${SPLUNK_HOME}/bin/splunk start ${SPLUNK_START_ARGS}
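For reference, a docker run invocation along the lines of the image readme (the port mapping for the web UI is illustrative):
# accept the license via the env var; 8000 is the Splunk web port
docker run -d -p 8000:8000 -e "SPLUNK_START_ARGS=--accept-license" splunk/splunk:latest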
Not sure, but try piping a confirmation into the script. Note that the exec form of CMD does not go through a shell, so the pipe needs an explicit one:
CMD ["/bin/sh", "-c", "echo y | ./splunk_user.sh"]

Bash / Docker exec: file redirection from inside a container

I can't figure out how to read the content of a file from inside a Docker container. I want to execute the content of a SQL file against my PGSQL container. I tried:
docker exec -it app_pgsql psql --host=127.0.0.1 --username=foo foo < /usr/src/app/migrations/*.sql
My application is mounted in /usr/src/app. But I got an error:
bash: /usr/src/app/migrations/*.sql: No such file or directory
It seems that Bash interprets this path as a host path, not a guest one. Indeed, executing the command in two steps works perfectly:
docker exec -it app_pgsql
psql --host=127.0.0.1 --username=foo foo < /usr/src/app/migrations/*.sql
I think that's more a Bash issue than a Docker one, but I'm still stuck! :)
Try using a shell to execute that command:
sh -c 'psql --host=127.0.0.1 --username=foo foo < /usr/src/app/migrations/*.sql'
The full command would be:
docker exec -it app_pgsql sh -c 'psql --host=127.0.0.1 --username=foo foo < /usr/src/app/migrations/*.sql'
try with sh -c "your long command"
This also works when piping a backup into the mysql command:
cat backup.sql | docker exec -i CONTAINER /usr/bin/mysql -u root --password=root DATABASE
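The same piping approach fits the question's PostgreSQL container: if the migration files are also available on the host, let the host shell expand the glob and feed the concatenated SQL through docker exec's stdin (the host-side path here is illustrative):
# glob expands on the host; -i keeps the container's stdin open for the pipe
cat ./migrations/*.sql | docker exec -i app_pgsql psql --host=127.0.0.1 --username=foo foo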
You can use the database client to connect to your container and redirect the database file to it, then perform the restore.
Here is an example with MySQL: a container running MySQL, using the host network stack. Since the container uses the host network stack (assuming no restrictions on your MySQL or whatever database), you can connect via localhost and perform the commands transparently:
mysql -h 127.0.0.1 -u user -pyour_passwd database_name < db_backup.sql
You can do the same with PostgreSQL (Restore a postgres backup file using the command line?):
pg_restore --host 127.0.0.1 --port 5432 --username "postgres" --dbname "mydatabase" --no-password --clean "/home/dinesh/db/mydb.backup"
It seems that "docker exec" does not perform the input redirection itself; the host shell interprets it before docker exec ever runs. I will verify this and maybe open an issue with the Docker community on GitHub, if applicable.
