I am trying to import an SQL file in a GitHub Actions workflow with the command below. The .sql file contains test data that will later be used to run the Laravel unit tests.
I cannot tell whether the file path is wrong or the import command is wrong. I have saved the SQL file in the root directory of the Laravel project.
I am facing the following error:
Error: Process completed with exit code 1.
Below is the command used in the .yml file of the Laravel workflow.
- name: Importing MYSQL file
  env:
    DB_HOST: 127.0.0.1
    DB_CONNECTION: mysql
    DB_DATABASE: test
    DB_PORT: ${{ job.services.mysql.ports[3306] }}
    DB_USER: root
    DB_PASSWORD: password
  run: mysql -u root -p password -h localhost --port=3306 test < request_data.sql
Implementing database seeders would be a bit time-consuming, which is why I am importing the .sql data this way. I am also fairly new to workflows, so let me know if there is an issue with the command or if importing an existing .sql file is simply not possible.
Please note that the migrations run by the workflow file complete successfully.
The following code is used to create the MySQL service:
jobs:
  phpunit:
    runs-on: ubuntu-latest
    services:
      mysql:
        image: mysql:5.7
        env:
          MYSQL_ROOT_PASSWORD: password
          MYSQL_DATABASE: test
        ports:
          - 33306:3306
        options: --health-cmd="mysqladmin ping" --health-interval=10s --health-timeout=5s --health-retries=3
Thanks in advance!
I tried to reproduce your scenario with a small .sql file, and it works fine with the mysql:5.7 Docker image.
Here's the complete workflow:
name: MySQL Import Test

on:
  workflow_dispatch:

jobs:
  import:
    runs-on: ubuntu-latest
    services:
      mysql:
        # https://hub.docker.com/_/mysql
        image: mysql:5.7
        env:
          MYSQL_ROOT_PASSWORD: password
          MYSQL_DATABASE: test
        ports:
          - 33306:3306
        options: --health-cmd="mysqladmin ping" --health-interval=10s --health-timeout=5s --health-retries=3
    steps:
      - name: Import MySQL file
        env:
          SQL: |
            SET SQL_MODE = "NO_AUTO_VALUE_ON_ZERO";
            START TRANSACTION;
            SET time_zone = "+00:00";
            CREATE TABLE `person` (
              `id` int(11) NOT NULL,
              `name` varchar(255) NOT NULL,
              `email` varchar(255) NOT NULL
            );
            INSERT INTO `person` (`id`, `name`, `email`)
            VALUES
              (111, 'abc', 'abc@email.com'),
              (222, 'def', 'def@email.com'),
              (333, 'ghi', 'ghi@email.com'),
              (444, 'jkl', 'jkl@email.com');
            ALTER TABLE `person` ADD PRIMARY KEY (`id`);
            COMMIT;
        run: |
          mysql --host 127.0.0.1 --port 33306 -uroot -ppassword -e "SHOW DATABASES LIKE 'test';" 2>/dev/null
          echo "$SQL" > person.sql
          echo "--- SQL ---"
          cat person.sql
          echo "--- --- ---"
          echo "Importing from person.sql file"
          mysql --host 127.0.0.1 --port 33306 -uroot -ppassword test < person.sql 2>/dev/null
          echo "Checking the imported data"
          mysql --host 127.0.0.1 --port 33306 -uroot -ppassword test <<< 'SELECT id,name,email FROM person;' 2>/dev/null
Output
Database (test)
test
--- SQL ---
SET SQL_MODE = "NO_AUTO_VALUE_ON_ZERO";
START TRANSACTION;
SET time_zone = "+00:00";
CREATE TABLE `person` (
  `id` int(11) NOT NULL,
  `name` varchar(255) NOT NULL,
  `email` varchar(255) NOT NULL
);
INSERT INTO `person` (`id`, `name`, `email`)
VALUES
  (111, 'abc', 'abc@email.com'),
  (222, 'def', 'def@email.com'),
  (333, 'ghi', 'ghi@email.com'),
  (444, 'jkl', 'jkl@email.com');
ALTER TABLE `person` ADD PRIMARY KEY (`id`);
COMMIT;
--- --- ---
Importing from person.sql file
Checking the imported data
id name email
111 abc abc@email.com
222 def def@email.com
333 ghi ghi@email.com
444 jkl jkl@email.com
Apart from that, commands like sudo /etc/init.d/mysql start (and their variants using service or systemctl) are for the MySQL preinstalled on the runner image. Here's the relevant issue: https://github.com/actions/runner-images/issues/576
In your scenario, since the requirement is to test against the Docker container, issuing commands to the locally installed MySQL (which is disabled by default) is not needed at all.
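For completeness, your original command itself has two likely problems: the space after -p makes the mysql client prompt for a password and treat the word password as the database name, and with -h localhost the client tries the local Unix socket instead of TCP; the container is reachable on 127.0.0.1 at the mapped host port (33306 in your case). A corrected step might look like this (a sketch reusing the values from your own workflow):

- name: Importing MYSQL file
  env:
    DB_PORT: ${{ job.services.mysql.ports[3306] }}
  run: mysql --host 127.0.0.1 --port "$DB_PORT" -uroot -ppassword test < request_data.sql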
I created a container for an Oracle Express database following these instructions, with the following command:
docker run -d -e ORACLE_PWD="root" --name testdb -p 5500:5500 -p 8080:8080 -p 1521:1521 container-registry.oracle.com/database/express:21.3.0-xe
What does work
I managed to access the database from within the container with this command:
docker exec -it testdb sqlplus system/root@//localhost:1521/XE
I also managed to access the Oracle Enterprise Manager on localhost:5500/em using these credentials:
Username: system
Password: root
Container Name: <blank>
What doesn't work
I fail to connect using IntelliJ, and therefore via the underlying JDBC library. I use the following options:
For the password, I used root again; the JDBC URL is as follows:
jdbc:oracle:thin:@localhost:1521:XE
When I click on Test Connection, IntelliJ tries to connect for about a minute before failing with an error.
I did a test on my macOS:
# Fire up the database. Hint: use gvenzl images instead, they are much faster!
docker run -d -e ORACLE_PWD="root" --name testdb -p 5500:5500 -p 8081:8080 -p 1521:1521 container-registry.oracle.com/database/express:21.3.0-xe

# I have sqlplus installed locally on my macOS
echo 'select dummy from dual;' | sqlplus -S system/"root"@localhost/XE

D
-
X

echo 'select dummy from dual;' | sqlplus -S system/"root"@localhost:XE

ERROR:
ORA-12545: Connect failed because target host or object does not exist
SP2-0306: Invalid option.

# So, how does JDBC behave when taking the connect string as an argument?
java -cp .:./ojdbc8-19.6.0.0.jar OracleJDBC "jdbc:oracle:thin:@localhost:1521:XE"
X

java -cp .:./ojdbc8-19.6.0.0.jar OracleJDBC "jdbc:oracle:thin:@localhost:XE"
java.sql.SQLRecoverableException: IO Error: Invalid number format for port number

java -cp .:./ojdbc8-19.6.0.0.jar OracleJDBC "jdbc:oracle:thin:@localhost/XE"
X

Note: the port is not needed; it defaults to 1521.
cat OracleJDBC.java

import java.sql.DriverManager;
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;
import java.sql.ResultSet;

public class OracleJDBC {

    public static void main(String[] argv) {
        // Make sure the Oracle JDBC driver is on the classpath
        try {
            Class.forName("oracle.jdbc.driver.OracleDriver");
        } catch (ClassNotFoundException e) {
            System.out.println("Where is your Oracle JDBC Driver?");
            e.printStackTrace();
            return;
        }

        Connection connection = null;
        String query = "select dummy from dual";
        try {
            // The connect string is passed as the first program argument
            connection = DriverManager.getConnection(argv[0], "system", "root");
            Statement stmt = connection.createStatement();
            ResultSet rows = stmt.executeQuery(query);
            while (rows.next()) {
                System.out.println(rows.getString("dummy"));
            }
        } catch (SQLException e) {
            e.printStackTrace();
        }
    }
}
I cannot explain why you get the error. Could it be the JDBC version you are using? I have just shown that your connection should work. That said, you are not supposed to connect using the SID construct (:SID) anymore. You will hit the root container rather than where you are supposed to store your data: a pluggable database. The Express edition comes with the default pluggable database "XEPDB1".
echo 'select name from v$pdbs;' | sqlplus -S system/"root"@localhost/XE
NAME
------------------------------
PDB$SEED
XEPDB1
This should be your connect string:
echo 'select dummy from dual;' | sqlplus -S system/"root"@localhost/XEPDB1
D
-
X
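For JDBC, the equivalent service-name URL should then be (a sketch, using the thin driver's service syntax with your host and port):

jdbc:oracle:thin:@//localhost:1521/XEPDB1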
From here you create your app schema and user so you no longer will use the power-user 'system'.
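A minimal sketch of that step (the appuser name and password are assumptions):

sqlplus system/"root"@localhost/XEPDB1

-- sketch: create a dedicated application user in the pluggable database
CREATE USER appuser IDENTIFIED BY "appuser_pwd";
GRANT CONNECT, RESOURCE TO appuser;
ALTER USER appuser QUOTA UNLIMITED ON users;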
Best of luck!
It worked perfectly well when I used the same configuration, but with this image instead of the official one.
Thanks to Bjarte Brandt for pointing me to this image.
I am trying to run Vault on a CRC OpenShift 4.7 cluster with Helm 3, but I have some problems when I try to enable the UI over HTTPS.
Add the hashicorp repo:
helm repo add hashicorp https://helm.releases.hashicorp.com
Install the latest version of Vault:
[tim@localhost config]$ helm install vault hashicorp/vault \
> --namespace vault-project \
> --set "global.openshift=true" \
> --set "server.dev.enabled=true"
Then I run oc get pods:
[tim@localhost config]$ oc get pods
NAME READY STATUS RESTARTS AGE
vault-project-0 0/1 Running 0 48m
vault-project-agent-injector-8568dbf75d-4gjnw 1/1 Running 0 6h9m
I run an interactive shell session in the vault-project-0 pod:
oc rsh vault-project-0
Then I initialize Vault:
/ $ vault operator init --tls-skip-verify -key-shares=1 -key-threshold=1
Unseal Key 1: iE1iU5bnEsRPSkx0Jd5LWx2NMy2YH6C8bG9+Zo6/VOs=
Initial Root Token: s.xVb0DvIMQRYam7oS2C0ZsHBC
Vault initialized with 1 key shares and a key threshold of 1. Please securely
distribute the key shares printed above. When the Vault is re-sealed,
restarted, or stopped, you must supply at least 1 of these keys to unseal it
before it can start servicing requests.
Vault does not store the generated master key. Without at least 1 key to
reconstruct the master key, Vault will remain permanently sealed!
It is possible to generate new unseal keys, provided you have a quorum of
existing unseal keys shares. See "vault operator rekey" for more information.
Export the token:
export VAULT_TOKEN=s.xVb0DvIMQRYam7oS2C0ZsHBC
Unseal Vault:
/ $ vault operator unseal --tls-skip-verify iE1iU5bnEsRPSkx0Jd5LWx2NMy2YH6C8bG9+Zo6/VOs=
Key Value
--- -----
Seal Type shamir
Initialized true
Sealed false
Total Shares 1
Threshold 1
Version 1.6.2
Storage Type file
Cluster Name vault-cluster-21448fb0
Cluster ID e4d4649f-2187-4682-fbcb-4fc175d20a6b
HA Enabled false
I check the pods:
[tim@localhost config]$ oc get pods
NAME READY STATUS RESTARTS AGE
vault-project-0 1/1 Running 0 35m
vault-project-agent-injector-8568dbf75d-4gjnw 1/1 Running 0 35m
I'm able to get the UI without HTTPS.
In the OpenShift console, I switch to the Administrator mode and this is what I've done:
Networking part
- Routes > Create routes
  Name: vault-route
  Hostname: 192.168.130.11
  Path:
  Service: vault
  Target Port: 8200 -> 8200 (TCP)
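The same route can also be created from the CLI; a sketch with oc expose, assuming the service is named vault as above:

oc expose service vault --name=vault-route --hostname=192.168.130.11 --port=8200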
Now, if I check the URL http://192.168.130.11/ui:
The UI is available.
In order to enable HTTPS, I've followed the steps described here:
https://www.vaultproject.io/docs/platform/k8s/helm/examples/standalone-tls
but I've changed the k8s commands to their OpenShift equivalents.
# SERVICE is the name of the Vault service in Kubernetes.
# It does not have to match the actual running service, though it may help for consistency.
SERVICE=vault-server-tls

# NAMESPACE where the Vault service is running.
NAMESPACE=vault-project

# SECRET_NAME to create in the Kubernetes secrets store.
SECRET_NAME=vault-server-tls

# TMPDIR is a temporary working directory.
TMPDIR=/tmp
Then:
openssl genrsa -out ${TMPDIR}/vault.key 2048
Then create the csr.conf file:
[tim@localhost tmp]$ cat csr.conf
[req]
default_bits = 4096
default_md = sha256
distinguished_name = req_distinguished_name
x509_extensions = v3_req
prompt = no

[req_distinguished_name]

[v3_req]
keyUsage = keyEncipherment, dataEncipherment
extendedKeyUsage = serverAuth
subjectAltName = @alt_names

[alt_names]
DNS.1 = vault-project
DNS.2 = vault-project.vault-project
DNS.3 = *.apps-crc.testing
DNS.4 = *.api.crc.testing
IP.1 = 127.0.0.1
Create the CSR:
openssl req -new -key ${TMPDIR}/vault.key -subj "/CN=${SERVICE}.${NAMESPACE}.apps-crc.testing" -out ${TMPDIR}/server.csr -config ${TMPDIR}/csr.conf
Create the csr.yaml file:
$ export CSR_NAME=vault-csr
$ cat <<EOF >${TMPDIR}/csr.yaml
apiVersion: certificates.k8s.io/v1beta1
kind: CertificateSigningRequest
metadata:
  name: ${CSR_NAME}
spec:
  groups:
  - system:authenticated
  request: $(cat ${TMPDIR}/server.csr | base64 | tr -d '\n')
  usages:
  - digital signature
  - key encipherment
  - server auth
EOF
Send the CSR to OpenShift:
oc create -f ${TMPDIR}/csr.yaml
Approve the CSR:
oc adm certificate approve ${CSR_NAME}
Retrieve the certificate:
serverCert=$(oc get csr ${CSR_NAME} -o jsonpath='{.status.certificate}')
Write the certificate out to a file:
echo "${serverCert}" | openssl base64 -d -A -out ${TMPDIR}/vault.crt
Retrieve the OpenShift CA:
oc config view --raw --minify --flatten -o jsonpath='{.clusters[].cluster.certificate-authority-data}' | base64 -d > ${TMPDIR}/vault.ca
Store the key, cert, and OpenShift CA in a Kubernetes secret:
oc create secret generic ${SECRET_NAME} \
    --namespace ${NAMESPACE} \
    --from-file=vault.key=/home/vault/certs/vault.key \
    --from-file=vault.crt=/home/vault/certs/vault.crt \
    --from-file=vault.ca=/home/vault/certs/vault.ca
The output of oc get secret | grep vault:
NAME TYPE DATA AGE
vault-server-tls Opaque 3 4h15m
Edit my vault-config with the oc edit cm vault-config command:
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
data:
  extraconfig-from-values.hcl: |-
    disable_mlock = true
    ui = true

    listener "tcp" {
      tls_cert_file = "/vault/certs/vault.crt"
      tls_key_file = "/vault/certs/vault.key"
      tls_client_ca_file = "/vault/certs/vault.ca"
      address = "[::]:8200"
      cluster_address = "[::]:8201"
    }

    storage "file" {
      path = "/vault/data"
    }
kind: ConfigMap
metadata:
  creationTimestamp: "2021-03-15T13:47:24Z"
  name: vault-config
  namespace: vault-project
  resourceVersion: "396958"
  selfLink: /api/v1/namespaces/vault-project/configmaps/vault-config
  uid: 844603a1-b529-4e33-9d58-20525ea7bff
Edit the volumeMounts, volumes, and VAULT_ADDR parts of my StatefulSet:

volumeMounts:
  - mountPath: /home/vault
    name: home
  - mountPath: /vault/certs
    name: certs

volumes:
  - configMap:
      defaultMode: 420
      name: vault-config
    name: config
  - emptyDir: {}
    name: home
  - name: certs
    secret:
      defaultMode: 420
      secretName: vault-server-tls

and, in the env section:

- name: VAULT_ADDR
  value: https://127.0.0.1:8200
I delete the pod so that all my changes are taken into account:
oc delete pods vault-project-0
And...
[tim@localhost config]$ oc get pods
NAME READY STATUS RESTARTS AGE
vault-project-0 0/1 Running 0 48m
vault-project-agent-injector-8568dbf75d-4gjnw 1/1 Running 0 6h9m
vault-project-0 shows 0/1 but Running. If I describe the pod:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning Unhealthy 1s (x6 over 26s) kubelet Readiness probe failed: Error checking seal status: Get "https://127.0.0.1:8200/v1/sys/seal-status": http: server gave HTTP response to HTTPS client
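To see which protocol the listener actually serves, it can be probed from inside the pod (a diagnostic sketch; /v1/sys/seal-status is the endpoint the readiness probe calls, and -k skips certificate verification):

oc rsh vault-project-0
curl -k https://127.0.0.1:8200/v1/sys/seal-status
curl http://127.0.0.1:8200/v1/sys/seal-status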
I think that I've missed something, but I don't know what...
Can someone tell me how to enable HTTPS for the Vault UI with OpenShift?
I'm using GitLab CI to deploy my Laravel applications.
I'm wondering how should I manage the .env file. As far as I've understood I just need to put the .env.example under version control and not the one with the real values.
I've set all the keys my app needs in GitLab Settings -> CI/CD -> Environment Variables, and I can use them on the runner, for example to retrieve the SSH private key to connect to the remote host. But how should I deploy these variables to the remote host as well? Should I write them with bash into a runtime-generated .env file and then copy it? Should I export them via SSH on the remote host? What is the correct way to manage this?
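Your first idea works: generate the .env on the runner from the CI/CD variables and copy it to the remote host over SSH. A minimal sketch ($DEPLOY_USER, $DEPLOY_HOST, and the variable names shown are assumptions):

# on the runner, after the SSH key has been loaded
printf 'APP_ENV=production\nDB_PASSWORD=%s\n' "$DB_PASSWORD" > .env
scp .env "$DEPLOY_USER@$DEPLOY_HOST:/path_to_project/.env"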
If you are open to another solution, I propose using Fabric (a fabfile). Here is an example:
Create a .env.default file with variables like:
DB_CONNECTION=mysql
DB_HOST=%(HOST)s
DB_PORT=3306
DB_DATABASE=laravel
DB_USERNAME=%(USER)s
DB_PASSWORD=%(PASSWORD)s
After installing Fabric, add a fabfile to your project directory:
from fabric.api import env, run, put

prod_env = {
    'name': 'prod',
    'user': 'user_ssh',
    'deploy_to': '/path_to_project',
    'hosts': ['ip_server'],
}

def set_config(env_config):
    for key in env_config:
        env[key] = env_config[key]

def prod():
    set_config(prod_env)

def deploy(password, host, user):
    # Pull the latest code on the server
    run("cd %s && git pull -r" % env.deploy_to)
    # Fill the template with the values passed on the command line
    process_template(".env.default", ".env", {'PASSWORD': password, 'HOST': host, 'USER': user})
    # Upload the generated .env to the server
    put(".env", "/path_to_project/.env")

def process_template(template, output, context):
    with open(template) as inputfile:
        text = inputfile.read()
    if context:
        # %(KEY)s placeholders in the template are replaced from the context dict
        text = text % context
    with open(output, "w+b") as outputfile:
        outputfile.write(text)
Now you can run it from your local machine to test the script:
fab prod deploy:password="pass",user="user",host="host"
It will deploy the project to your server; check that the .env file is processed correctly.
If that works, it's time for GitLab CI. This is an example file:
image: python:2.7

before_script:
  - pip install 'fabric<2.0'
  # Setup SSH deploy keys
  - 'which ssh-agent || ( apt-get install -qq openssh-client )'
  - eval $(ssh-agent -s)
  - ssh-add <(echo "$SSH_PRIVATE_KEY")
  - mkdir -p ~/.ssh
  - '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'

deploy_staging:
  type: deploy
  script:
    - fab prod deploy:password="$PASSWORD",user="$USER",host="$HOST"
  only:
    - master
$SSH_PRIVATE_KEY, $PASSWORD, $USER, and $HOST are GitLab environment variables; $SSH_PRIVATE_KEY should be a private key that has access to the server.
I hope I didn't miss a step.
I have two nodes and am attempting to create a remote table. To set up, I do the following on each host:
$ monetdbd create /opt/mdbdata/dbfarm
$ monetdbd set listenaddr=0.0.0.0 /opt/mdbdata/dbfarm
$ monetdbd start /opt/mdbdata/dbfarm
On the first host:
$ monetdb create w0
$ monetdb release w0
On the second:
$ monetdb create mst
$ monetdb release mst
$ mclient -u monetdb -d mst
password:
Welcome to mclient, the MonetDB/SQL interactive terminal (Dec2016-SP4)
Database: MonetDB v11.25.21 (Dec2016-SP4), 'mapi:monetdb://nkcdev11:50000/mst'
Type \q to quit, \? for a list of available commands
auto commit mode: on
sql>create table usr ( id integer not null, name text not null );
operation successful (0.895ms)
sql>insert into usr values(1,'abc'),(2,'def');
2 affected rows (0.845ms)
sql>select * from usr;
+------+------+
| id | name |
+======+======+
| 1 | abc |
| 2 | def |
+------+------+
2 tuples (0.652ms)
sql>
On the first:
$ mclient -u monetdb -d w0
password:
Welcome to mclient, the MonetDB/SQL interactive terminal (Dec2016-SP4)
Database: MonetDB v11.25.21 (Dec2016-SP4), 'mapi:monetdb://nkcdev10:50000/w0'
Type \q to quit, \? for a list of available commands
auto commit mode: on
sql>create remote table usr_rmt ( id integer not null, name text not null ) on 'mapi:monetdb://nkcdev11:50000/mst';
operation successful (1.222ms)
sql>select * from usr_rmt;
(mapi:monetdb://monetdb#nkcdev11/mst) Cannot register
project (
table(sys.usr_rmt) [ usr_rmt.id NOT NULL, usr_rmt.name NOT NULL ] COUNT
) [ usr_rmt.id NOT NULL, usr_rmt.name NOT NULL ] REMOTE mapi:monetdb://nkcdev11:50000/mst
sql>
$
$ monetdb discover
location
mapi:monetdb://nkcdev10:50000/w0
mapi:monetdb://nkcdev11:50000/mst
Can anyone nudge me in the right direction?
[EDIT - Solved]
The problem was self-inflicted: the remote table name must be exactly the same as the local table name, and I had used usr_rmt as the remote table name.
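In other words, the working definition looks like this (a sketch matching the local table name usr from above):

sql>create remote table usr ( id integer not null, name text not null ) on 'mapi:monetdb://nkcdev11:50000/mst';
sql>select * from usr;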
At first sight, what you are trying to do ought to work.
Recently, I had similar problems with remote table access, though that was with the non-released version; see bug 6289. (The MonetDB version number mentioned in that bug report is incorrect.) What you are experiencing may or may not be the same underlying issue.
After the weekend I will check whether I can reproduce your example on -SP4 and on the development version.
Joeri
I cannot import the dump file below, which was created by mysqldump.exe on the Windows command line:
/*!40101 SET @saved_cs_client = @@character_set_client */;
/*!40101 SET character_set_client = utf8 */;
CREATE TABLE `attachment_types` (
  `ID` int(11) NOT NULL AUTO_INCREMENT,
  `DESCRIPTION` varchar(50) DEFAULT NULL,
  `COMMENTS` varchar(256) DEFAULT NULL,
  PRIMARY KEY (`ID`),
  UNIQUE KEY `UK_ATTACHMENT_TYPES___DESCRIPTION` (`DESCRIPTION`)
) ENGINE=InnoDB AUTO_INCREMENT=4 DEFAULT CHARSET=latin1;
While importing the file on the command line:
mysql --user=root --password=root < mysqldumpfile.sql
it throws the error:
ERROR 1064 (42000) near ' ■/ ' at line 1
Somebody please help me.
Finally I found a solution.
We need two options:
--default-character-set=utf8: this ensures UTF-8 is used for each field.
--result-file=file.sql: this option prevents the dump data from passing through the operating system, which likely does not use UTF-8; instead, it writes the dump data directly to the specified file.
Using these new options your dump command would look something like this:
mysqldump -u root -p --default-character-set=utf8 --result-file=database1.backup.sql database1
While importing you can optionally use:
mysql --user=root --password=root --default-character-set=utf8 < database1.backup.sql
Source: http://nathan.rambeck.org/blog/1-preventing-encoding-issues-mysqldump
It seems that the input file (mysqldumpfile.sql) was created in UTF-8 encoding, so the first 3 bytes at line 1, invisible to you in the .sql file, are the byte order mark (BOM) sequence.
So try changing the default character set to UTF-8:
mysql --user=root --password=root --default-character-set=utf8 < mysqldumpfile.sql
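Alternatively, you can strip the BOM from the existing dump before importing; a sketch with GNU sed, assuming a Unix-like shell (for example Git Bash on Windows):

# remove the 3-byte UTF-8 BOM from the start of the file, then import as usual
sed -i '1s/^\xEF\xBB\xBF//' mysqldumpfile.sql
mysql --user=root --password=root < mysqldumpfile.sql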
If you need to import into a specific database, this is the import command required on Windows:
mysql --user=root --password=root --default-character-set=utf8 database2 < database1.backup.sql