I need to update ddev on all my projects, how can I do that?

My ddev installation has gotten quite old... I've just been going along quite happily with v1.0, and now they're at v1.5.0. How do I catch up? Is it hard? What are the risks?

There are two major things that I can think of in recent releases:
We switched from bind-mounted databases to docker-volume mounted databases.
We upgraded to MariaDB 10.2. TYPO3 v8 has trouble with MariaDB 10.2, but not much else does.
Here's what I recommend:
Get a db dump of each project. I save dumps like that in a directory named .tarballs in the project. (Use the original techniques from How can I export a database from ddev?, or do it however you like. If your version already has ddev export-db, use that.) Having reasonable db dumps around is always a good idea.
Make a good backup of ~/.ddev, where the databases were stored until about v1.2 (they now live on docker volumes).
Make a good backup of your projects.
Make sure all your projects have been removed (ddev list should show nothing; preferably docker ps -a should show nothing either). If you have a version with the feature, just use ddev rm -a.
Move your ~/.ddev out of the way (mv ~/.ddev ~/.ddev.bak) so you don't even have those bind-mounted databases any more.
Upgrade ddev to the latest version.
In each project as you come to it, run ddev config, then ddev start, then ddev import-db from your saved db dump.
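Condensed into commands, one pass over a project might look like this; the paths are illustrative, and the exact export/import flags vary by ddev version:
cd ~/projects/myproject
mkdir -p .tarballs
ddev export-db > .tarballs/db.sql.gz      # only if your ddev version has export-db
ddev rm
# upgrade ddev itself, then:
ddev config
ddev start
ddev import-db --src=.tarballs/db.sql.gz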

Related

macOS: How to update Docker Desktop that's running on an external drive, without removing existing containers?

I have installed Docker Desktop 3.0 on my mac - on an external drive.
I'm trying to upgrade it to the latest version. I don't get prompted for updates, and I can't see any option to check for updates in the GUI.
I'd rather not uninstall it in case that removes the containers and the data I have set up. But maybe I won't lose them?
Can someone point me in the right direction? Thanks.
The location of the files doesn't play a role in the upgrade; what matters is the source version and the target version. There are two ways to upgrade Docker for Mac, and both are described in detail below:
Incremental upgrade
Upgrade with rebuild (Export / Import)
In the specific case of an upgrade from 3.0.3 to 3.3, the upgrades to 3.1 and 3.2 can be done incrementally, without needing to export/import data; however, the upgrade to 3.2.1 will require a rebuild of the VM, so the export/import method will help to move the data to the new VM.
Location of data
The application itself doesn't contain the data for the containers. Docker for macOS runs a virtual machine, and the data is stored on the virtual disk of that virtual machine.
The raw data (the virtual disk) of the virtual machine is located in
~/Library/Containers/com.docker.docker/Data/vms/0/data/Docker.raw
where 0 is the ID of the first VM. By default there is only one VM, but it's better to check; you can find this information in the GUI under Preferences > Resources > Disk Image Location. A simple copy of the contents of the data folder will preserve the virtual machine.
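For example, stopping Docker Desktop first and then copying the folder from a terminal preserves it (the destination path here is illustrative):
cp -a ~/Library/Containers/com.docker.docker/Data ~/docker-data-backup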
Incremental Upgrade
It's always recommended to upgrade incrementally, following the official release versions between your current version and the target version. The releases can be found in the release notes.
The steps to upgrade are:
Stop the Docker application (from the menu bar, find the Docker icon and select Quit Docker Desktop).
Make a full backup of ~/Library/Containers/com.docker.docker/Data.
Rename the application (you might need it to go back, in case the upgrade is not successful).
Copy the new application into the destination and start it.
These steps are repeated for each minor version between the source version and the target version. Some versions can be skipped, but it's not recommended.
In case of problems, you can use the troubleshooting guide.
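Condensed into shell commands, one incremental pass might look like this; the renamed app path is illustrative, and each step can equally be done in Finder:
osascript -e 'quit app "Docker"'                                          # stop Docker Desktop
cp -a ~/Library/Containers/com.docker.docker/Data ~/docker-data-backup   # full backup of the data
mv /Applications/Docker.app /Applications/Docker-previous.app            # keep the old app to roll back
# copy the next version's Docker.app into /Applications, then:
open -a Docker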
Upgrades with rebuild of the VM
Be aware that upgrading Docker from certain older versions to a new version might require a rebuild of the virtual machine.
In addition, sometimes the upgrade doesn't work properly and you will need to uninstall the application as described in the troubleshooting page.
So before you proceed with the uninstall and rebuild of the VM, if you have data that is important to you, it is highly recommended to export the containers, volumes, and images before the upgrade.
You will need to make a list of all of the containers, images, and volumes you want to move across to the new VM. The following docker commands will help:
~# docker ps -a
~# docker volume ls
~# docker image ls
To export the information, you can use docker export and docker image save. The following examples write a container's filesystem or an image to a file called file.raw (note that docker export works on containers; named volumes are covered below):
~# docker export <<container name>> > file.raw
~# docker image save <<image name>> > file.raw
Once the latest version of Docker is installed and the VM is rebuilt, to import the data back from the files you will need to use:
~# docker import file.raw
~# docker image load -i file.raw
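Since docker export only captures a container's filesystem, a common pattern for carrying a named volume across the rebuild is to tar its contents through a throwaway container (the volume and archive names here are illustrative):
~# docker run --rm -v myvolume:/data -v "$PWD":/backup alpine tar czf /backup/myvolume.tar.gz -C /data .   # save
~# docker run --rm -v myvolume:/data -v "$PWD":/backup alpine tar xzf /backup/myvolume.tar.gz -C /data     # restore into the new VM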

Looking for cloud-based IDEs where I can set up apache-superset for development

I am looking to play with apache-superset on a cloud-based IDE. I have it running on my local machine. I tried unsuccessfully to set it up on Gitpod. I wanted suggestions on where I can set it up, preferably open source but not necessarily. I believe Cloud9 is one such place, but I am looking for other options before I settle. If you've ever set it up on any such platform, even if it is on Gitpod, and can help me, kindly do so.
[Disclaimer: Gitpod staff]
You can indeed use Gitpod to work on apache-superset, and for that you'll just need a working configuration.
From what I can see in apache-superset's requirements, you'll need to get:
PostgreSQL (e.g. by using Gitpod's official gitpod/workspace-full-postgres Docker base image)
Redis (e.g. by installing it in a Dockerfile via sudo apt-get install)
Various Python dependencies (e.g. by running pip install . after cloning)
Various Node.js dependencies for the front-end (e.g. by running npm install)
Here is a basic configuration I wrote to achieve this:
https://github.com/jankeromnes/incubator-superset/commit/0d345a76ec8126fd1f8b9bc7b6ce4961bf3b593d
What it does is:
Create a Docker image with PostgreSQL and Redis
Once the repository is cloned, open 4 separate Terminals ("tasks"):
Redis server
Superset backend
Superset worker
Superset front-end
All dependencies will be installed automatically, and once the front-end is ready, it will automatically open in a web preview IDE side panel.
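If you want to adapt this to your own fork, the underlying shell steps are roughly the following; the exact commands are my assumptions here, and the linked commit is the authoritative version:
sudo apt-get update && sudo apt-get install -y redis-server   # in the Dockerfile, on top of gitpod/workspace-full-postgres
redis-server                                                  # task: Redis server
pip install .                                                 # task: Python dependencies, after the clone
npm install                                                   # task: front-end dependencies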
You can try it out by opening my personal fork of the apache-superset repository in Gitpod, e.g. by following this link:
https://gitpod.io/#https://github.com/jankeromnes/incubator-superset

ElasticSearch - uninstall version 6.4.3, install version 6.4.2 - Linux Ubuntu

We have a 3-node cluster with ElasticSearch 6.4.3 on Ubuntu 16.04. There is nothing existing outside of the fresh install of ES - no indexes, no Kibana, no Beats, no Logstash, etc.
I have been asked to downgrade to version 6.4.2. I have limited Linux experience, but enough to be able to run command-line commands and understand the output. Google has led me to bits and pieces around accomplishing this, but I'd feel a lot less anxiety about it if someone with ES experience could point me to something that's a bit more step-by-step.
I do have this link to download 6.4.2, but one of the things I need to know is which file to download: https://www.elastic.co/downloads/past-releases/elasticsearch-6-4-2
Sure, here you go with a step-by-step guide, as I did this for you using your version.
Using the link you mentioned, https://www.elastic.co/downloads/past-releases/elasticsearch-6-4-2, download the tar file to your local system.
Use scp to transfer the .tar.gz file to your Ubuntu instance; I used my AWS Ubuntu instance:
scp -i ~/your-identity-file ~/Desktop/elasticsearch-6.4.2.tar.gz ubuntu@aws-ec2-instance-ip:/home/ubuntu
Untar the file using the tar -xvf elasticsearch-6.4.2.tar.gz command.
Go to the config folder (cd elasticsearch-6.4.2/config/) and set the proper values in elasticsearch.yml.
Start Elasticsearch from the bin folder with the ./elasticsearch command.
Update: based on the chat with the OP, adding the official ES links https://www.elastic.co/guide/en/elasticsearch/reference/current/targz.html and https://www.elastic.co/guide/en/elasticsearch/reference/current/install-elasticsearch.html for detailed instructions.
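Condensed, the whole sequence looks like this; the identity file and instance address are the same placeholders as above:
scp -i ~/your-identity-file ~/Desktop/elasticsearch-6.4.2.tar.gz ubuntu@aws-ec2-instance-ip:/home/ubuntu
ssh -i ~/your-identity-file ubuntu@aws-ec2-instance-ip
tar -xvf elasticsearch-6.4.2.tar.gz
cd elasticsearch-6.4.2
vi config/elasticsearch.yml        # set cluster.name, network.host, etc. as needed
./bin/elasticsearch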

Syncing terminal setups across machines (VM, server, etc)

I have recently set up Vagrant on my machine, and the first thing I noticed was that my terminal config was not synced when I sshed into my server.
For instance, I have changed my shell from bash to zsh, which does a lot of beautiful things for me (like removing case-sensitive auto-completion). But on my Vagrant virtual machine, or on my server, all this cool stuff is gone. Stuff like my important aliases is not synced either.
Now, what is a proper way to sync stuff like this?
EDIT:
So currently, when I create/remove/edit an alias on my local machine, I have to copy the exact same changes into my VM and all other servers I frequently use. I see this as a very time-consuming and unnecessary task.
What I do is version control my dotfiles and keep them on GitHub. Dotfiles are just the files in your home directory that start with a dot in the name, such as .bashrc or .zshrc. They are "invisible" files, so you have to use ls -a instead of just ls to see them.
Here are my dotfiles: https://github.com/aharris88/dotfiles
When I get on a new machine, I just clone the repository to ~/dotfiles
Then I have a bash script in there called setup.sh that backs up any old dotfiles that might already be in your home directory into ~/dotfiles_old. Then it creates symlinks to the files that are in ~/dotfiles.
It also installs zsh and oh-my-zsh if they aren't already installed. It should work on Linux or Mac OS X.
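A minimal sketch of what a setup.sh along these lines can look like; the file list is illustrative, and the real script in the repository above does more:
#!/usr/bin/env bash
# Back up any existing dotfiles, then symlink the versioned copies.
dotfiles_dir=~/dotfiles
backup_dir=~/dotfiles_old
files="bashrc zshrc gitconfig vimrc"              # illustrative list
mkdir -p "$backup_dir"
for f in $files; do
  [ -e ~/."$f" ] && mv ~/."$f" "$backup_dir/"     # preserve the old file
  ln -sf "$dotfiles_dir/$f" ~/."$f"               # link the versioned one
done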
Here is an article describing how to version control your dotfiles: http://blog.smalleycreative.com/tutorials/using-git-and-github-to-manage-your-dotfiles/
Another thing I do to get a new Mac ready is use Kitchenplan (https://github.com/kitchenplan/kitchenplan), which can sync a lot more settings, but this probably isn't what you're asking about. Here is my kitchenplan config: https://github.com/aharris88/kitchenplan-config

Codeigniter Deployment Process

I have a CI 2.0 project under VCS, with the repo hosted on my server. Currently I have a bash script, posted below, that checks out the source code, moves some files around, and restarts the server to reflect the updated web site.
Is there anything wrong with my current method? Does anyone else have any recommendations on other tools I could use or ways to do it better? Thanks!
# Stop apache while we update the server, and export our svn repo to a tmp dir
sudo /etc/init.d/apache2 stop
svn export file:///home/steve/repository/example/trunk /home/steve/example_dev/
# Prepare the public_html folder for the update, and remove the tmp directory
rm -rf /home/steve/public_html/example.com/public/
mv /home/steve/example_dev/ /home/steve/public_html/example.com/public/
rm -rf /home/steve/public_html/example.com/public/license.txt
rm -rf /home/steve/public_html/example.com/public/user_guide
rm -rf /home/steve/example_dev
# Restart apache
sudo /etc/init.d/apache2 start
I work using a local WAMP directory, which stores the directories of my projects (something DreamWeaver does automatically). I then use DreamWeaver to work directly with the live server, so every time I edit a file it overwrites the copy in my local directory. Changes are made instantly on the live server; when I'm ready to commit to my SVN trunk, I simply run SmartSVN (or whatever you use) and commit my local WAMP directory to the SVN.
I don't know if it's the best option really, but it's most likely better than restarting your web server for every change.
Well, I have found a much simpler way to update my site now. No longer am I restarting Apache either :) Check out my script below.
svn export --force file:///home/steve/repo/example/trunk \
/home/steve/public_html/example.com/public/
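If you still want license.txt and user_guide kept out of the deployed tree, the removals from the earlier script can simply run after the export (the paths are the ones from that script):
rm -rf /home/steve/public_html/example.com/public/license.txt \
/home/steve/public_html/example.com/public/user_guide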
We use Capistrano for CI and other PHP deployments. It works pretty well. https://github.com/namics/capistrano-php
