Greenplum installation error

While installing Greenplum we are getting the below error after running the gpcheck command:
GPCHECK_ERROR : uname -r output is different among hosts.
On two machines we have installed CentOS 6 and on one machine we have installed CentOS 7.
For a Greenplum installation, is it necessary that all hosts have the same OS version?
Should we ignore this error and go ahead?

You must have the same OS version on all the cluster machines. The Greenplum home directory is used for installing gppkgs (add-ons), which are in fact packed RPMs. Greenplum initializes an RPM database inside the GPDB home directory for managing add-ons. Whenever you run "gpseginstall" (installation, expansion), GPDB copies the contents of the GPDB home directory to the other hosts. However, an RPM database created on one version of the OS is not valid on another, so you would get errors trying to install/list/remove packages there.
In general, if you don't plan to use any gppkgs and use the cluster merely for PoC purposes, this should work, but I would strongly recommend using the same OS version on all the cluster hosts.
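If you want to see exactly which host differs, a quick check with Greenplum's gpssh utility prints the kernel release of every host side by side (hostfile_exkeys here stands for whatever host file you used during setup):
gpssh -f hostfile_exkeys -e 'uname -r'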

It is recommended to have the same OS (kernel). If it is not a production environment, you can try ignoring it. I have never tested it.


macOS: How to update Docker Desktop that's running on an external drive, without removing existing containers?

I have installed Docker Desktop 3.0 on my Mac, on an external drive.
I'm trying to upgrade it to the latest version. I don't get prompted for updates and I can't see any option to check for updates in the GUI.
I'd rather not uninstall in case that removes the containers and the data I have set up. But maybe I won't lose them?
Can someone point me in the right direction?
Thanks.
The location of the files doesn't play a role in the upgrade. What plays the major role is the source version and the target version. There are two ways to upgrade Docker for Mac, and both are described in detail below:
Incremental upgrade
Upgrade with rebuild (Export / Import)
In the specific case of the upgrade from 3.0.3 to 3.3, the upgrades to 3.1 and 3.2 can be done incrementally, without the need to export / import data; however the upgrade to 3.2.1 will require a rebuild of the VM, so the export / import method will help to move the data to the new VM.
Location of data
The application doesn't contain the data for the containers. Docker for macOS runs a virtual machine, and the data is stored on the virtual disk of that virtual machine.
The raw data (the virtual disk) of the virtual machine is located in:
~/Library/Containers/com.docker.docker/Data/vms/0/data/Docker.raw
where the 0 is for the 0th VM. By default there is only one VM, but it's better to check. You can find this information in the GUI under Preferences > Resources > Disk Image Location. A simple copy of the contents of the data folder will preserve the virtual machine.
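To double-check the location and size before touching anything (the path below is the default shown above; if your Disk Image Location differs, use that path instead):
~# ls -lh ~/Library/Containers/com.docker.docker/Data/vms/0/data/Docker.raw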
Incremental Upgrade
It's always recommended to do an incremental upgrade by following the official release versions between your current version and the target version. The releases can be found in the release notes.
The steps to upgrade are:
Stop the Docker application (from the menu bar, find the Docker icon and select Quit Docker Desktop).
Make a full backup of ~/Library/Containers/com.docker.docker/Data.
Rename the application (you might need it to go back, in case the upgrade is not successful).
Copy the new application to the destination and start it.
These steps are repeated for each minor version between the source version and the target version. Some versions can be skipped, however it's not recommended. A rough command sketch of one iteration follows.
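A minimal sketch (the backup folder and the renamed app are example names, not official conventions):
~# osascript -e 'quit app "Docker"'   # or quit from the Docker icon in the menu bar
~# cp -a ~/Library/Containers/com.docker.docker/Data ~/docker-data-backup
~# mv /Applications/Docker.app /Applications/Docker-old.app
Then copy the new Docker.app into /Applications and start it.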
In case of problems you can use the troubleshooting guide.
Upgrades with rebuild of the VM
Be aware that upgrading Docker from certain older versions to a new version might require a rebuild of the virtual machine.
In addition, sometimes the upgrade does not work properly and you will need to uninstall the application as described in the troubleshooting page.
So before you proceed with the uninstall and rebuild of the VM, if you have data that is important to you, it is highly recommended to export the containers, volumes and images before the upgrade.
You will need to make a list of all of the containers, images and volumes you want to move across to the new VM. The following docker commands will help:
~# docker ps -a
~# docker volume ls
~# docker image ls
To export the information, you can use docker export (for containers) and docker image save (for images). The following examples save a container or an image to a file called file.raw (named volumes are not covered by docker export; see the note after the import commands):
~# docker export <<container name>> > file.raw
~# docker image save <<image name>> > file.raw
Once the latest version of Docker is installed and the VM is rebuilt, to import the data back from the files you will need to use:
~# docker import file.raw
~# docker image load -i file.raw
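As for named volumes, docker export does not capture them. A common workaround (the volume and archive names below are just examples) is to tar the volume's contents through a throwaway container before the rebuild and unpack them into a fresh volume afterwards:
~# docker run --rm -v myvolume:/data -v "$(pwd)":/backup alpine tar czf /backup/myvolume.tgz -C /data .
~# docker volume create myvolume
~# docker run --rm -v myvolume:/data -v "$(pwd)":/backup alpine tar xzf /backup/myvolume.tgz -C /data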

How to run RediSearch module with Redis on WSL (Windows Subsystem for Linux)

I followed the steps on https://redislabs.com/blog/redis-on-windows-10/ and have installed Ubuntu 18.04 and am successfully running Redis v4.09 on Windows. But, when following the steps on https://oss.redislabs.com/redisearch/Quick_Start/, I have some issues.
In the download and running binaries section, I don't understand what I'm supposed to replace /path/to/module/src/redisearch.so with. I've downloaded RediSearch for Ubuntu 18.04 and moved the files to a folder named RediSearch within my Downloads folder. Could someone help me with the pathing, considering I'm using Ubuntu on Windows? I've also tried the Building and Running with Source section, but that just runs into an error every time I run make:
*** No rule to make target 'build'. Stop.
How can I run the module with Redis?
With WSL, you have access to C: through /mnt/c/ from Linux.
So, if you really want to keep redisearch in a folder under Downloads, you need to use something like:
/mnt/c/Users/<yourUser>/Downloads/yourFolder/src/redisearch.so
However, you should probably use a folder within Linux instead. You can use wget to download the module from within Linux.
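For example, assuming the module really is under that Downloads folder, starting Redis with it from the WSL shell would look roughly like this (the user name and folder are placeholders):
redis-server --loadmodule /mnt/c/Users/<yourUser>/Downloads/RediSearch/src/redisearch.so
Alternatively, add an equivalent loadmodule line to your redis.conf.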

Installing Gemstash onto Jenkins Slave

I am trying to cache Ruby gems onto a Jenkins slave. I have installed gemstash onto my Linux VirtualBox VM which runs the slave; however, I am not sure if I am installing it in the right location.
Should I be installing it by logging in as the Jenkins user in the terminal and installing it there? Because when I created the slave node, I didn't need to install Jenkins onto the box. The source I use for the Gemfile is localhost:9292.
EDIT:
And how can I check what packages gemstash has cached?
Checking if gemstash has cached packages can be done by following https://github.com/bundler/gemstash#bundling
Any help would be appreciated.
As the README says, have a look in ~/.gemstash:
You might wonder where the gems are stored. After running the commands above, you will find a new directory at ~/.gemstash. This directory holds all the cached and private gems. It also has a server log, the database, and configuration for Gemstash.
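Following the bundling section of the README linked in the question, a minimal end-to-end check looks something like this (run it as the same user that starts the Jenkins slave, so the cache lands in that user's ~/.gemstash):
gemstash start
bundle config mirror.https://rubygems.org http://localhost:9292
bundle install
ls ~/.gemstash
After a bundle install that goes through the mirror, the cached gems, server log, database and configuration all appear under that directory.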

Trying to Vagrant up using Vmware Fusion - Getting file not found error

Getting this error:
The executable '/Users/nick/vs/mmvagrant/Contents/Library/vmware-vmx' Vagrant is trying to run was not
found in the PATH variable. This is an error. Please verify
this software is installed and on the path.
(I'm running vagrant up in my ~/vs/mmvagrant folder)
I purchased, installed and properly licensed VMware Fusion.
Selecting this in PuPHPet as a Local option, I get the above failure message. When I run with VirtualBox, it's fine.
Where can I find this contents/library reference, or what can I add to my path?
(Apple OS/X)
In PuPHPet, one must choose a "provider" which exists on your system.
PuPHPet will create a Vagrantfile that makes certain assumptions based on your choice of provider. In your case, you have chosen VMware Fusion, which the question assumes is installed but in fact is not. This causes this error:
The executable vmware-vmx Vagrant is trying to run was not found in the PATH variable.
Reinstall VMware Fusion, then make sure the VMware Fusion plugin is installed (check with the vagrant plugin list command). Finally, with all those pieces in place, try vagrant up again.
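A rough sequence, assuming an older Vagrant/Fusion pairing where the plugin was still named vagrant-vmware-fusion and licensed separately (newer setups use vagrant-vmware-desktop plus a utility service):
vagrant plugin list
vagrant plugin install vagrant-vmware-fusion
vagrant plugin license vagrant-vmware-fusion ~/license.lic
vagrant up --provider=vmware_fusion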

How can I use/install "make" on the Amazon Linux AMI for EC2?

I'm a new user of Amazon EC2.
I want to compile the pptpd package on EC2, but receive the following error:
[root@ip-10-112-xxx-xxx /]# /var/tmp/rpm-tmp.2eILT0: line 58: /usr/bin/make: No such file or directory
I searched the entire root directory tree, but make isn't available:
[root@ip-10-112-59-187 /]# find . -name "make"
./etc/mail/make
I'm wondering whether make is actually installed on the Amazon Linux AMI initially? If not, how do I install it?
Preface
The Amazon Linux AMI is (loosely) based on CentOS and is a perfectly decent OS for EC2; in fact, it has been tailored by Amazon specifically for EC2:
The Amazon Linux AMI is a supported and maintained Linux image
provided by Amazon Web Services for use on Amazon Elastic Compute
Cloud (Amazon EC2). It is designed to provide a stable, secure, and
high performance execution environment for applications running on
Amazon EC2. It also includes packages that enable easy integration
with AWS, [...]. Amazon Web Services provides ongoing security and
maintenance updates to all instances running the Amazon Linux AMI. [...] [emphasis mine]
However, it is indeed not yet as widely used as some other distributions, the most popular likely being Ubuntu due to its popularity in general and its long-standing, tailored support of EC2 in particular (see e.g. the EC2StartersGuide, the Ubuntu Cloud Images or the convenient listing of the Ubuntu AMIs for Amazon EC2 on Alestic). This yields two drawbacks:
You'll find many more examples/tutorials/etc. for EC2 based on Ubuntu, which eventually makes things easier.
You'll find slightly fewer precompiled packages available for CentOS, eventually requiring you to compile your own (but see below).
Solution
That said, CentOS (and the Amazon Linux AMI in turn) uses the Yum package manager to install and update packages from CentOS (and 3rd party) Repositories (Debian/Ubuntu use the APT package manager instead - the inherent concepts are very similar though), see e.g. section Adding Packages in Amazon Linux AMI Basics:
In addition to the packages included in the Amazon Linux AMI, Amazon
provides a yum repository consisting of common Linux applications for
use inside of Amazon EC2. The Amazon Linux AMI is configured to point
to this repository by default for all yum actions. The packages can be
installed by issuing yum commands. For example:
# sudo yum install httpd
Accordingly, you can install make via yum install make (you can get a listing of all readily available packages via yum list all).
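For example (a minimal sketch; the -y flag merely skips the confirmation prompt):
sudo yum install -y make
make --version
The second command just confirms that make is now on the PATH.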
Be advised though, that you might actually not need to do that, insofar as the Amazon Linux AMI has been built to be binary-compatible with the CentOS series of releases, and therefore packages built to run on CentOS should also run on the Amazon Linux AMI. [emphasis mine]
The desired package pptpd is not part of the standard repositories on CentOS either though, but it is available in the 3rd party Extra Packages for Enterprise Linux (EPEL) repository (see Letter P) - I can't comment on the viability of using this one vs. compiling your own though.
Good luck!
Make is not installed by default on Amazon Linux AMIs. However, you can install it quite easily with yum. If you choose to install only make, you might later get errors for other packages needed in the compilation process. If you are going to compile software, you might want to just install all of the development tools at once:
sudo yum groupinstall "Development Tools"
According to the documentation: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/compile-software.html
