MariaDB clashing with MySQL on Travis-CI - continuous-integration

I have a test suite that runs on Travis-CI and requires MariaDB (it breaks on MySQL). The pre-test scripts invoke the mysql client command, but the statements actually run against MariaDB, since the client command is the same for both servers.
echo "CREATE DATABASE test1" | mysql -u travis
The tests on worker v2.5.0 were passing just fine (https://travis-ci.org/stems/join-monster/jobs/256751422). However, the tests started running on a later version of the worker, v2.9.3, and failing without any changes to the code (https://travis-ci.org/stems/join-monster/jobs/260001701). According to the system build information, this new version seems to install MySQL in addition to MariaDB. Now when I run my mysql command, it runs against MySQL instead of MariaDB and breaks the build.
I need one of the following:
to go back to a previous version of the worker (I can't find any info on how to do this in the Travis docs);
to specify that I want to run commands against, and connect to, MariaDB, NOT MySQL;
to tell Travis not to install MySQL at all, to avoid the clash.
Any of these solutions would be appreciated.

Fixed it by switching the Ubuntu version back to 12.04 (Precise) rather than 14.04 (Trusty), which had become the new default.
In .travis.yml:
dist: precise
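
A minimal .travis.yml sketch along these lines could look as follows. The dist: precise line is the actual fix from above; the MariaDB addon and its version ('10.1') are assumptions and should match whatever the suite really needs.

# Sketch only: dist comes from the fix above; the MariaDB addon/version is an assumption
dist: precise
addons:
  mariadb: '10.1'   # request MariaDB explicitly rather than relying on the image default
before_script:
  - echo "CREATE DATABASE test1" | mysql -u travis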

Related

Composer script to start a background process, whose lifetime is bound to that of the Composer script

I've started experimenting with Composer Scripts.
I have a project where there are "Functional tests" of the API endpoints. Running the whole test suite requires running the following commands in order:
composer install to install all required dependencies of the backend APIs
php yii server --test to start a lite server that is connected to the "test" MySQL database. The test server starts running on localhost:9000.
sh vendor/bin/phpunit --configuration tests/functional/phpunit.xml to run the actual tests. This last command triggers the execution of all test cases, most of which execute HTTP calls against the lite server launched at step 2.
I would like to automate and "atomize" this 3-step process into a single Composer script that can be started, killed, and restarted effortlessly.
Here's my current progress:
"scripts": {
"test-functional": [
"#composer install",
"php yii server --test",
"sh vendor/bin/phpunit --configuration tests/functional/phpunit.xml"
]
}
The problem is that the second command (php yii server --test) "steals" the terminal, because PHP's built-in lite server requires the terminal to stay open while the command is running. Killing the command kills the lite server as well. I tried suffixing the second step of the script with &, which generally makes a process go to the background and not steal the terminal, but it seems this trick doesn't work for Composer scripts. Is there any other workaround or possibility that I'm missing?
My final goal is to make the 3 steps execute in an atomic way, output the results of the tests and end the command, cleaning up everything (including killing the lite server, launched in step 2).
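
One possible workaround (not from the original question, just a sketch) is to move the orchestration into a small shell script that Composer calls, so the backgrounding and cleanup happen inside a real shell. The file name run-functional-tests.sh and the sleep duration are assumptions:

#!/usr/bin/env bash
# run-functional-tests.sh -- hypothetical wrapper script invoked by Composer
set -e
composer install
php yii server --test > /dev/null 2>&1 &      # start the lite server in the background
SERVER_PID=$!
trap 'kill "$SERVER_PID" 2>/dev/null' EXIT    # always stop the server, even if tests fail
sleep 3                                       # crude wait for localhost:9000 to come up
sh vendor/bin/phpunit --configuration tests/functional/phpunit.xml

The Composer side then reduces to a single entry, e.g. "test-functional": "bash run-functional-tests.sh", which can be started, killed and restarted as a unit.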

"SolrCore is loading" error when running as a Windows Service

Logged into Windows Server 2016 as Administrator, I can run Solr from the command line: bin\solr.cmd start -p 8983 -f
I have configured Solr to run as a Windows Service - running as the same user, with the same command, same startup directory, etc. - however, under load, the following error comes back from the upstream application (Sitecore xConnect, though this shouldn't make a difference):
{metadata={error-class=org.apache.solr.common.SolrException,root-error-class=org.apache.solr.common.SolrException},msg=SolrCore is loading,code=503}
To reiterate, everything works fine when Solr is started from the command line, only when it's run as a Windows Service does it error.
Solr version: 6.6.3
Windows version: Server 2016
Environment: AWS (m5.large EC2 instance)
The Sitecore compatibility table says to use Solr 6.6.1 with Sitecore, but you should still use 6.6.2, as it fixes a bug in Solr 6.6.1 that can affect the installation of SIF.
I recommend you try again with Solr 6.6.2.
It turned out that the service was configured to run without the -f (foreground) flag, so the process would continually stop and re-spawn.
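
For illustration, if the service is wrapped with NSSM (the original post does not say which service wrapper was used, and the install path below is an assumption), the fix amounts to making sure -f is part of the service's arguments:

REM Hypothetical NSSM setup -- service name and install path are assumptions
nssm install Solr "C:\solr-6.6.3\bin\solr.cmd" "start -p 8983 -f"
nssm set Solr AppDirectory "C:\solr-6.6.3"
nssm start Solr

The -f keeps Solr in the foreground so the service wrapper supervises the actual Solr process rather than a launcher that exits immediately.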

Running SonarQube 6.7.1 on Ubuntu Server 16.04 using the provided deb files fails

I'm unable to get the latest Sonarqube up and running on Ubuntu 16.04.
Installation was done using the deb provided by http://sonar-pkg.sourceforge.net
After a first cursory glance it seems that the installation routine sets up SonarQube to run as user "root", whereas Elasticsearch - which is mandatory - refuses to be started as "root".
Has anybody been able to set this up properly (i.e. running as a non-root user) and can point me to the respective documentation?
Thanks.
First, navigate to the install folder, e.g. /opt/sonarqube-6.7.1/bin/linux-x86-64.
Then open the sonar.sh file in that folder using an editor such as vim or nano.
Uncomment the line that reads "RUN_AS_USER=", which is around line 48.
Change the line to "RUN_AS_USER=sonar".
Restart your system, then start the SonarQube server. You should now be able to run the server.
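
As a minimal command-line sketch of those steps (the user name "sonar" comes from the answer; the install path and the use of sed instead of an editor are assumptions):

# Create a dedicated user and give it ownership of the install (path is an assumption)
sudo useradd -r -s /bin/false sonar
sudo chown -R sonar:sonar /opt/sonarqube-6.7.1
# Uncomment and set RUN_AS_USER in sonar.sh, then restart the service
sudo sed -i 's/^#RUN_AS_USER=.*/RUN_AS_USER=sonar/' /opt/sonarqube-6.7.1/bin/linux-x86-64/sonar.sh
sudo /opt/sonarqube-6.7.1/bin/linux-x86-64/sonar.sh restart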

Cassandra: Detected unreadable sstables (data, not caches)

ERROR [main] 2017-08-04 13:24:21,949 CassandraDaemon.java:638 - Detected unreadable sstables /opt/cassandra/data/some_key_space/ep_lc_events-adc44160dbe611e6953689bcd3ed73aa/mc-547-big-Summary.db, and many others...
This happened after I upgraded Cassandra to version 3 and, after a while, downgraded it back to version 2.
When I run this command: sudo service cassandra status
I get the following message:
could not access pidfile for Cassandra
In /var/log/cassandra/system.log I see the errors quoted at the beginning.
PS: note that everything is happening on an Amazon EC2 instance.
Well, I just upgraded back to version 3, used cassandra-unloader to export all data, then downgraded back to version 2 and used cassandra-loader to import all the data. If you were lucky and had backups or snapshots, this would not be an obstacle for you.
PS: Afterwards, I had to run nodetool resetlocalschema to reset the local schema and resynchronize.
PPS: Here you can find how to do that:
https://github.com/brianmhess/cassandra-loader
I also got the same error, but in my case it was due to switching between Cassandra 4.0.0 and 3.11 and back again while using Docker.
Update the version to the one matching the sstables, or delete the data volume:
docker-compose logs cassandra
docker volume ls
docker ps
docker-compose down
docker volume rm testapp_cassandra
docker volume ls
docker-compose up
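
If you take the "update the version" route instead of deleting the volume, a minimal docker-compose.yml sketch would pin the image to the version that wrote the sstables; the service name, volume name and version tag here are assumptions:

# Sketch only -- service/volume names and the version tag are assumptions
services:
  cassandra:
    image: cassandra:3.11               # pin the version that matches the on-disk sstables
    volumes:
      - cassandra_data:/var/lib/cassandra
volumes:
  cassandra_data: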

Running Chef cookbooks on ExaData

I am trying to run a Chef cookbook on an ExaData server and I'm running into issues. I was able to bootstrap my ExaData servers; however, when I run chef-client on the target nodes, I get an error like this. I then went back and captured verbose output of the error, and still have no idea what the issue is. I am able to ping, traceroute, and nc to and from the ExaData server to the Chef server. None of the files from the cookbook are transferred, and none of the files are downloaded from the remote Zabbix repository. The Chef run completes the role and recipes, but nothing is installed. Is there something different about ExaData compared to regular RHEL distributions that would cause these issues?
--EDIT - 2013-07-15--
From looking at a "successful" chef-client run on a regular RHEL 6.2 OS (whereas ExaData runs RHEL 5.8), I saw fewer errors. There do seem to be a lot of libraries missing from ExaData that are needed to run chef-client. From what I have heard and read in other posts, ExaData is a stripped-down version of RHEL 5.8, containing only what is needed to run databases.
According to a comment in the Chef IRC logs, the 404 message means the client is attempting to use a feature that your server version doesn't support.
If you add the setting enable_reporting false to your client.rb file, it should disable the request to the /reports URL.
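
Concretely, that is a one-line change in the node's client configuration (the /etc/chef/client.rb path is the usual default and an assumption here; enable_reporting is the setting named above):

# /etc/chef/client.rb -- only this line is needed for the workaround
enable_reporting false   # stop chef-client from posting run data to the /reports endpoint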
