How to allow creation of database triggers - ddev

When trying to install CiviCRM in Drupal 7 I get "Could not create a database trigger".
Using a standard ddev config.yaml for Drupal 7.
A couple of solutions are suggested at https://civicrm.stackexchange.com/questions/2770/database-trigger-error-message but I am struggling to implement them within ddev.
Re GRANT: when I try ddev exec mysql GRANT ... I get "failed to execute command". When I ssh into a mysql shell to try to grant privileges I get "access denied for user".
Re log_bin_trust_function_creators = 1: where would I insert that?

Update 2019-01-25: I went to check this out after you created the issue and what I suggested was inadequate. As explained there, you need to do a little custom config. Create a .ddev/mysql/trigger.cnf with these contents:
[mysqld]
log_bin_trust_function_creators=on
And the next release of ddev (mid-February) will make this the default (PR). So please report your results there. I was able to install CiviCRM with this mysql config.
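After a ddev restart you can confirm the setting took effect with something like this (a sketch; root/root are the standard ddev db container credentials, and -s db targets the db service):
ddev restart
ddev exec -s db mysql -uroot -proot -e "SHOW VARIABLES LIKE 'log_bin_trust_function_creators';"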
Original response (true, but not adequate for triggers):
The db container's root user has full privileges, so you can use mysql -uroot -proot ... to do what you need. You can do that inside the db container (ddev ssh -s db), inside the web container (ddev ssh), or from the host using the connection info from ddev describe (but using root/root). (You can also use the root user to grant additional privileges to the db user, of course.)
If you know what privileges are required, we should add them to the db user, so please make an issue requesting what you need, because we'd like this to be easier for you.
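For example, if it turns out CiviCRM needs an extra privilege such as TRIGGER on the project database, a sketch of granting it as root (db/db and the database name db are the ddev defaults; the exact privilege required is an assumption):
ddev exec -s db mysql -uroot -proot -e "GRANT TRIGGER ON db.* TO 'db'@'%'; FLUSH PRIVILEGES;"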

Related

sqitch deploy command fails when deploying changes to Azure

Hi guys, I am trying to run the deploy command against a database hosted on Azure, but I get the following error:
sqitch deploy db:pg://cmurcia%40dataplatform:*****@dataplatform.postgres.database.azure.com:5432/dataplatform_metadata_service
Adding registry tables to db:pg://cmurcia%40dataplatform:@dataplatform.postgres.database.azure.com:5432/dataplatform_metadata_service
psql:/usr/share/perl5/App/Sqitch/Engine/pg.sql:4: ERROR: permission denied for database dataplatform_metadata_service
"/usr/bin/psql" unexpectedly returned exit value 3
I tested with psql and I can both log in and modify tables in the database that is accessed with the mentioned URI (db:pg://cmurcia%40dataplatform:*****@dataplatform.postgres.database.azure.com:5432/dataplatform_metadata_service).
I also tried
sqitch deploy -t postgresql://cmurcia%40dataplatform:Welcome0518%21@dataplatform.postgres.database.azure.com:5432/dataplatform_metadata_service
Adding registry tables to db:postgresql://cmurcia%40dataplatform:@dataplatform.postgres.database.azure.com:5432/dataplatform_metadata_service
psql:/usr/share/perl5/App/Sqitch/Engine/pg.sql:4: ERROR: permission denied for database dataplatform_metadata_service
"/usr/bin/psql" unexpectedly returned exit value 3
I would like to ask if you have any hints about how to solve this. Thank you!
FYI, I am using an Ubuntu Linux VM hosted on Azure to run the command, where I installed Sqitch; Sqitch is working locally.
The first thing Sqitch does when it connects to a database is create the registry if it does not yet exist. Usually this is a schema named sqitch. Have a look at the Postgres registry script. Be sure you have permission to create the schema. If you don't, have someone else create it and give you permission to create objects in it, as well as in your project schema.
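For example, a sketch of what the server admin could run to unblock the registry creation (the admin login and the role name cmurcia are assumptions for illustration):
# run as the Azure server admin; adjust login and role names to your setup
psql "host=dataplatform.postgres.database.azure.com port=5432 dbname=dataplatform_metadata_service user=adminuser@dataplatform sslmode=require" <<'SQL'
GRANT CREATE ON DATABASE dataplatform_metadata_service TO cmurcia;   -- allow schema creation
CREATE SCHEMA IF NOT EXISTS sqitch AUTHORIZATION cmurcia;            -- or pre-create the registry schema
SQL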

Docker: Oracle database 18.4.0 XE wants to configure a new database on startup

I'm trying to configure an Oracle Database container. My problem is that whenever I try to restart the container, the startup script wants to configure a new database and fails to do so, because there is already a database configured on the specified volume.
What can I do to let the container know that I'd like to use my existing database?
The start script is the stock one that I downloaded from the Oracle GitHub:
Link
UPDATE: So apparently, the problem arises when /etc/init.d/oracle-xe-18c start returns that no database has been configured, which triggers the startup script to try and configure one.
UPDATE 2: I tried creating the DB without passing any environment variables, and after restarting the container the database is up and running. This is an annoying workaround, but it is the one that seems to work. If you have other ideas, please let me know.
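For reference, a sketch of the no-overrides run that worked for me (the image tag and the named volume are assumptions based on the stock Oracle docker-images build):
# start XE without ORACLE_SID/ORACLE_PWD overrides so the script keeps the defaults
docker run -d --name oracle-xe \
  -p 1521:1521 \
  -v oradata:/opt/oracle/oradata \
  oracle/database:18.4.0-xe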
I think that you should connect to the Linux container with:
docker exec -ti containerid bash
Once there you should check manually for the following, as the script does: whether $ORACLE_BASE/oradata/$ORACLE_SID exists and whether $ORACLE_BASE/admin/$ORACLE_SID/adump does not.
Another thing that you should execute manually is
/etc/init.d/oracle-xe-18c start | grep -qc "Oracle Database is not configured"
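Put together, the manual checks look roughly like this (a sketch, assuming the image's standard ORACLE_BASE/ORACLE_SID variables are set inside the container):
# run inside the container after docker exec
[ -d "$ORACLE_BASE/oradata/$ORACLE_SID" ] && echo "data files present"
[ -d "$ORACLE_BASE/admin/$ORACLE_SID/adump" ] || echo "adump directory missing"
/etc/init.d/oracle-xe-18c start | grep -qc "Oracle Database is not configured" \
  && echo "script thinks no database is configured"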
UPDATE AFTER COMMENT:
I don't have the script, but you should run it with bash -x to see what the script is looking for, in order to debug what's going on.
What makes no sense is that you are saying that $ORACLE_BASE/admin/$ORACLE_SID/adump does not exist, but if the container deployed and you have a database running, the first time the script ran it should have created this.
I think I understand the source of the problem from start to finish.
The thing I overlooked in the documentation is that the Express Edition of Oracle Database does not support a SID/PDB other than the default. However, the configuration script (seemingly /etc/init.d/oracle-xe-18c, though I'm not sure) was only partially made with this fact in mind. That means that if I set the ORACLE_SID and/or ORACLE_PWD environment variables when installing, the database will be up and running, but with two suspicious errors when it tries to copy two files:
mv: cannot stat '/opt/oracle/product/18c/dbhomeXE/dbs/spfileROPIDB.ora': No such file or directory
mv: cannot stat '/opt/oracle/product/18c/dbhomeXE/dbs/orapwROPIDB': No such file or directory
When stopping and restarting the Docker container, I get an error message because the configuration script created folder/file names according to those variables; however, the Docker image is built in a way that only supports the default names, so it tries to reconfigure a new database, only to find that one already exists.
I hope it makes sense.

How can ddev automatically create additional databases?

This is a followup question to How can I create and load a second database in ddev?. It is about doing that task automatically.
One use case for this is developing a migration to Drupal from another MySQL database, and collaborating with others on the migration. If the database name can be set by ddev, additional developers can get the database created automatically, and additional databases can be added to their settings.local.php, using known values.
Try this in your project's config.yaml:
hooks:
  post-start:
    - exec: mysql -uroot -proot -hdb -e "CREATE DATABASE IF NOT EXISTS another_db; GRANT ALL ON another_db.* TO 'db'@'%';"
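After ddev start you can verify the hook ran, and optionally load a dump into the new database (a sketch; the --target-db flag and the dump path are assumptions for your setup):
ddev exec -s db mysql -uroot -proot -e "SHOW DATABASES LIKE 'another_db';"
ddev import-db --target-db=another_db --src=dumps/another_db.sql.gz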

Using Laravel Artisan and file permissions

I'm new to Laravel and I find this framework awesome.
Artisan is also great, but I have a little problem using it.
Let's say that I create a new Controller with Artisan like this
php artisan make:controller Test
There will be a new file in app/Http/Controllers named Test.php, and the ownership of this file will be root:root.
When I want to edit this file with my editor over FTP I can't, because I'm not logged in as root.
Is there any way to tell Artisan to create files with the www-data group, for example (without running a chown command)?
Since you have root shell access, the following command will execute another command as the www-data user:
sudo -u www-data php artisan make:controller Test
Replace www-data with whatever username your web server operates under, or the username you log in to the FTP service with.
When you do this, the controller will be owned by www-data, which is what you want.
Note: do not ever run commands copy-pasted from the internet without knowing exactly what they do, especially in a root shell.
In this case, the -u parameter tells sudo to execute the command as a specific user, not as the root user.
From the manpage:
-u user, --user=user
Run the command as a user other than the default target user (usually root). The user may be either a user name or a numeric user ID (UID) prefixed with the ‘#’ character (e.g. #0 for UID 0). When running commands as a UID, many shells require that the ‘#’ be escaped with a backslash (‘\’). Some security policies may restrict UIDs to those listed in the password database. The sudoers policy allows UIDs that are not in the password database as long as the targetpw option is not set. Other security policies may not support this.
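If you run Artisan this way often, a small convenience sketch (the alias name is just an illustration):
# add to your shell profile so future artisan calls run as the web user
alias artisan='sudo -u www-data php artisan'
artisan make:controller Test   # now created owned by www-data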
I know this is a really old post, but I'd also really advise anyone against editing your Laravel files over FTP. I used to do this in my pre-Laravel days and it NEVER ended well.
Editing over FTP can have all kinds of problems: dropping the connection mid-edit is the least of them; security and live development errors are a much larger concern.
Develop in your local or dev environment, commit/push to Git, then either pipeline to your server or handle your FTP uploads and cleanup after the fact. Pipelines are your best bet if your host allows them. We use Atlassian Bitbucket for ours, but the setup and deployment should be relatively similar for most hosts. Check with your host for documentation on their pipeline setup:
https://www.atlassian.com/continuous-delivery/tutorials/bitbucket-pipelines
There's also some tutorials online for pipelining straight to FTP (if on a shared host, say):
https://www.savjee.be/2016/06/Deploying-website-to-ftp-or-amazon-s3-with-BitBucket-Pipelines/
That is because you ran the command as the root user; try running the command as the user you use to edit the project via FTP.

How can I stop Homebrew installing Postgres as root?

I have a Postgres permissions problem: every time I brew install postgres it does so as the root user, resulting in permission denials on initdb, createdb, and anything else I try.
I sudo chown /usr/local/var/postgres and the ownership seems to change, allowing me into the directory from the command line; it then contains only a server.log file listing the error:
postgres cannot access the server configuration file "/usr/local/var/postgres/postgresql.conf": No such file or directory
I then run initdb and it returns:
The files belonging to this database system will be owned by user "jamesbkemp".
This user must also own the server process.
The database cluster will be initialized with locale "en_GB.UTF-8".
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".
Data page checksums are disabled.
initdb: directory "/usr/local/var/postgres" exists but is not empty
If you want to create a new database system, either remove or empty
the directory "/usr/local/var/postgres" or run initdb
with an argument other than "/usr/local/var/postgres"
I then go back to look at /usr/local/var/postgres and the owner has changed back to root. After many hours on this I really am at a loss as to what's going on. Any ideas, folks?
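For reference, the repair sequence I keep attempting looks roughly like this (a sketch, assuming the default Homebrew prefix on Intel macOS; it wipes the existing cluster, so back up first):
sudo chown -R "$(whoami)" /usr/local/var/postgres
rm -rf /usr/local/var/postgres/*           # initdb requires an empty directory
initdb -D /usr/local/var/postgres -E utf8  # re-initialize as the current user
brew services restart postgresql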
Installing PostgreSQL as non-root is a pain, if not impossible, because it was not designed that way: it is a multi-user service.
The same thing applies to running apache2 as non-root: you would have to build the server yourself, changing the configuration a lot.
Let me add that for an experienced datacenter operator this is a strange idea, like driving a race car in your apartment.
