How can I use drush 8 with Drupal 8 or Drupal 9? - ddev

I am upgrading a site and don't want to site-install (composer-install) drush at all, but I need it. I know that drush8 is installed on my Drupal 7 project, and I'd like to use it in my Drupal 8+ project as well, without changing the project.

drush8 is installed in the web container as /usr/local/bin/drush8, but on Drupal 8+ it isn't linked to drush, because the recommended technique is to site-install drush (ddev composer require drush/drush). However, you can just symlink /usr/local/bin/drush8 to /usr/local/bin/drush, and ddev drush will immediately work with drush 8.
There are two ways to do this:
Use a custom .ddev/web-build/Dockerfile:
ARG BASE_IMAGE
FROM $BASE_IMAGE
RUN ln -s /usr/local/bin/drush8 /usr/local/bin/drush
Use a post-start hook to do the linking. Add this to your .ddev/config.yaml:
hooks:
  post-start:
    # -f lets the hook succeed on every start, even when the link already exists
    - exec: ln -sf /usr/local/bin/drush8 /usr/local/bin/drush
The first way (the Dockerfile) is probably better, because the link is created once at image build time, whereas the second way (the config.yaml post-start hook) runs every time you do a ddev start.
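Either way, after a ddev restart (which rebuilds the image if you used the Dockerfile approach), you can verify the link with something like:

ddev drush --version

If the symlink is in place, this prints the drush 8 version instead of failing with "command not found".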
Note that if you just want to use a site-installed drush that is in a nonstandard location, you can do that using the similar recipe at https://stackoverflow.com/a/69399975/215713

Related

How to programmatically create symlinks in ddev?

A site (no composer, else I'd do it there) has a few symlinks inside the container that are required for it to work.
How do I tell ddev to create those symlinks on ddev start?
I'm sure it's right before my eyes, but I can't find it. Google gives me nothing; maybe the answer is too obvious? Do an ln -s on first run?
First, I would probably create the symlink in my repo and check it into git. This would have problems on Windows (but symlinks in general are risky on Windows).
You'll want to use relative symlinks so that the relative path can be followed either inside the container or on the host.
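For example (the behat path is just an illustration), a link created at the project root:

# relative target: resolves both on the host and inside the web container,
# where ddev mounts the project at /var/www/html
ln -s vendor/bin/behat behat
# an absolute target like /home/me/project/vendor/bin/behat would resolve
# on the host but dangle inside the container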
So, use a post-start hook with exec (to do it inside the web container) if you have to:
hooks:
  post-start:
    - exec: ln -sf ../vendor/bin/behat behat
Or (especially if you're not on Windows) you could also do it on the host with either a pre-start or post-start hook:
hooks:
  pre-start:
    - exec-host: ln -sf ../vendor/bin/behat behat
Beware that the default directory for exec in the web container is not necessarily the project root; it may be the docroot (as it is with Drupal). You can cd explicitly first, as in the sketch below.
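For instance (the behat link is again just an example), cd to the project root, which ddev mounts at /var/www/html inside the web container:

hooks:
  post-start:
    - exec: cd /var/www/html && ln -sf vendor/bin/behat behat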

Install wkhtmltopdf on cPanel server without root access

I'm trying to use wkhtmltopdf on my cPanel server, but I don't have root access, so I can't put it in /usr/local/bin or /usr/bin.
So I just put the binary at /home/perso/wkhtmltopdf and ran chmod +x wkhtmltopdf.
But if I try to execute it, for example like this: ./wkhtmltopdf http://www.google.com test.pdf I get a
bash: ./wkhtmltopdf: cannot execute binary file
Any idea how I can place my script so that I'm able to execute it?
To run wkhtmltopdf, you need it to be installed on the server, so try asking your hosting provider.
Usually, for security, providers prevent ordinary users from executing their own binaries.
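That said, "cannot execute binary file" often simply means the binary was built for a different architecture than the server's. You can check that yourself as a quick diagnostic:

file ./wkhtmltopdf (shows which architecture the binary was built for)
uname -m (shows the server's architecture)

If the two don't match, download the wkhtmltopdf build that matches the server's architecture and try again.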

Laravel 5 + Homestead + HHVM + PGSQL = Driver not found

My PHP project uses PGSQL. It runs successfully from Homestead on my dev machine. As soon as I add hhvm: true to my project in homestead.yaml and provision, my web app throws a PDOException saying the driver is not found. The exception goes away when I remove hhvm: true and re-provision Homestead.
Obviously HHVM's config does not include the PGSQL driver.
How do I correct that?
You don't give a lot of details about your setup, so it's not clear if you have the Postgres driver installed. Postgres isn't supported right out of the box. You have to build and/or install it yourself.
Facebook has an "official" list of HHVM extensions. PGSQL is not (yet) integrated into HHVM proper, but Facebook's page points to the external GitHub project, which is here:
Postgres Extension for HHVM
Below is a summary of the project instructions; you can read them yourself in the README.md files.
Build from source
If you want to build it from source, you will need the hhvm-dev and libpq-dev packages to be installed. Once they have been installed, the following commands will build the extension:
$ cd /path/to/source
$ hphpize
$ cmake .
$ make
This will produce a pgsql.so file, the dynamically-loadable extension. Copy this file to /etc/hhvm/pgsql.so.
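For completeness, on a Debian-based box such as Homestead, the dependency install and the final copy step would look something like this (the apt-get invocation is an assumption; the package names come from the project README):

$ sudo apt-get install hhvm-dev libpq-dev
$ sudo cp pgsql.so /etc/hhvm/pgsql.so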
Pre-built binaries
If you don't want to build it, there are pre-built binary versions for some of the popular distros in the separate "releases" branch here: Releases.
Again, copy the downloaded pgsql.so file to /etc/hhvm/pgsql.so.
Configuration
Whether you build from source or install binaries, you need to tell HHVM where to find it. Edit your config file (generally /etc/hhvm/php.ini) and add these if they're not present:
extension_dir = /etc/hhvm
hhvm.extensions[pgsql] = pgsql.so
You can check that everything is working by running
hhvm --php -r 'var_dump(function_exists("pg_connect"));'
If everything is working fine, this will output bool(true).
You may need to restart HHVM to have the server pick up the extension.
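On Ubuntu, that typically means something like this (the exact service name may vary by distro and HHVM version):

$ sudo service hhvm restart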

Does reinstalling Macports remove/destroy contents of /opt/local/ directory structure?

I'm running a MacPro G5 w/ 10.5.8. I ran:
sudo port selfupdate and then ran:
sudo port upgrade outdated
When it was all finished, I rebooted, and apache2 was broken and would no longer serve PHP files. If I replace the httpd.conf file, it will serve HTML files.
I finally gave up, restored the backup, and the machine is running again. The problem is that I have a second machine I did the same thing on, and I don't want to go through the same process. I've read several posts about uninstalling/reinstalling MacPorts, like this one from Kirk Roybal: How to do a clean reinstall with macports?, but it doesn't say whether the process will destroy or reset the contents of the /opt/local/ directory, especially the MySQL DBs and htdocs contents. Does anyone know whether this process is destructive? I'll make backups of everything, of course, before trying anything.
SOLUTION:
I got it to work by making the php53 install work instead of going backwards. Here's what I did:
sudo port select --set php php53 (set MacPorts to use php53 instead of php5)
sudo port installed (Check to make sure php53-apache2handler is installed)
sudo port install php53-apache2handler (It wasn't and yours probably isn't either)
Once that's done installing:
php -v (check the version of PHP that's running)
cd /opt/local/apache2/modules
sudo /opt/local/apache2/bin/apxs -a -e -n php mod_php53.so (activates php within apache)
this should append
LoadModule php5_module modules/mod_php53.so
to your /opt/local/apache2/conf/httpd.conf file (check it now)
There will also be a line like this
LoadModule php5_module modules/mod_php5.so
Comment it out or remove it so it doesn't interfere with the new install.
It should also move a copy of mod_php53.so to
/opt/local/apache2/modules
If it's not there, see php53-apache2handler install above.
Check your httpd.conf file for errors
/opt/local/apache2/bin/httpd -S
Finally, create/edit php.ini file to tell apache2 how to connect to MySQL database
cd /opt/local/etc/php53
sudo cp php.ini-production php.ini (for a production machine; use php.ini-development for a dev machine)
sudo cp php.ini php.ini.bak
Add the default socket paths to php.ini
pdo_mysql.default_socket=/opt/local/var/run/mysql5/mysqld.sock (may vary based on MySQL version. Check the /opt/local/var/run directory if not sure)
mysql.default_socket=/opt/local/var/run/mysql5/mysqld.sock
mysqli.default_socket=/opt/local/var/run/mysql5/mysqld.sock
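You can confirm the actual socket path before editing (the mysql5 directory name varies with your MySQL version):

ls /opt/local/var/run/*/mysqld.sock (prints the path to use in the three lines above)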
If you are having problems connecting to MySQL, check for typos in the above paths FIRST. Trust me it will save you tons of time!
If all went according to plan, you should be able to restart the machine, check that all services started automatically, and things should be working.
The files that you add (e.g., MySQL DBs) are not destroyed by an update. If you modify files that are managed by MacPorts (i.e., they are listed in port contents <portname>), then those modifications will be clobbered by an update.
Some projects install config files as examples and have the user make the real config file so as to not clobber it with an update. It looks like the apache2 port follows this pattern. It installs /opt/local/apache2/conf/original/httpd.conf and then copies it to the real location of /opt/local/apache2/conf/httpd.conf at activation time only if the file does not exist.
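This pattern also gives you an easy way to see what you have changed locally before attempting an upgrade:

diff /opt/local/apache2/conf/original/httpd.conf /opt/local/apache2/conf/httpd.conf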

Is there a way to install Drupal from bash script without using Drush?

Looking for relatively easy way to install Drupal instance from command line without using Drush.
I want just to use installing script and archive of custom modules to install Drupal on customer's server (it may not have Drush installed).
You can install Drupal and additional modules and themes without Drush. It's just a sequence of UNIX commands, like wget, tar xzf, and cat (see the sketch below). You can even write a Makefile and use GNU make to deploy a website. But you don't want to do that (unless you have huge amounts of spare time at your disposal). Drush provides a lot of best practices and convenience layers to bootstrap and deploy a Drupal installation.
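For illustration, a bare-bones fetch-and-unpack might look like this (the version number is a placeholder and /var/www is an assumed target directory; the rest of the installation happens in Drupal's web installer):

wget https://ftp.drupal.org/files/projects/drupal-7.59.tar.gz
tar xzf drupal-7.59.tar.gz -C /var/www
(then visit http://yoursite.example/install.php in a browser to finish the install)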
The only reason not to use Drush is when your server is only reachable via FTP. But even then, I would run drush make or all the drush dl commands on my local box and upload the resulting folder structure to the server.
Use Drush. ;)
