Test fails only in Gitlab CI, locally successful - laravel

I'm just starting with Gitlab CI (using a docker executor). After facing and solving some beginner's issues, I'm now facing a pretty strange problem.
All my unit tests succeed locally but when I run them with CI some of them fail.
One example:
[2018-12-09 18:05:57] testing.ERROR: Trying to get property 'email' of non-object {"userId":834,"email":"hugh.will@example.org","exception":"[object] (ErrorException(code: 0): Trying to get property 'email' of non-object at /builds/.../laravel/framework/src/Illuminate/Mail/Mailable.php:492)
Well.. I know what this error means, but as the test succeeds locally it's extremely hard to debug and I don't know where to start.
Here is the test_job part of my .yml file:
test_job:
  stage: test
  services:
    - mysql:5.7
    - redis:latest
  artifacts:
    when: always
    paths:
      - storage/logs/
  dependencies:
    - build_job
  script:
    - sh .gitlab-test.sh
  tags:
    - docker
Just to be sure, here is my .gitlab-test.sh:
#!/bin/sh
set -xe
php -v
ping -c 3 mysql
php artisan migrate
php artisan db:seed --class=TestingSeeder
php vendor/bin/phpunit --coverage-text --colors
My questions:
It seems that somewhere the parsing of the JSON fails, but Mailable is a Laravel component which should (and locally indeed does) work.
Is there any known issue with Laravel 5.5 and Gitlab CI?
How can I debug a test which fails only on Gitlab CI?
Are there maybe some things to keep in mind when testing a Laravel app using CI?

The reason was a faulty configuration.
It turned out that most of the failing tests were testing mail-related functionality. I had forgotten to add my SMTP credentials to the CI environment.
Instead of complaining about missing or wrong credentials, Laravel returned this strange, completely unrelated error message.
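For anyone hitting the same wall: one way to avoid depending on real SMTP credentials in CI at all is to switch the mail driver for the test job. This is a minimal sketch, assuming the stock Laravel 5.5 config/mail.php, which reads the MAIL_DRIVER environment variable; the values shown are illustrative:

test_job:
  stage: test
  variables:
    # Use the log driver so no real SMTP server or credentials are needed;
    # "sent" mails are written to storage/logs instead.
    MAIL_DRIVER: "log"
    # Alternatively, provide real credentials as protected CI/CD variables:
    # MAIL_HOST, MAIL_PORT, MAIL_USERNAME, MAIL_PASSWORD
  script:
    - sh .gitlab-test.sh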

Related

Cloud Build does not trigger new pipeline on webhook request

I have set up a Trigger in Google Cloud Build to start a new pipeline when receiving an HTTP POST request.
The last pipeline in Build History failed because there were problems with volumes in the YAML.
Now, I cannot start new pipelines using this Trigger. The webhook request does receive HTTP 200 from Google, but no new pipeline is initiated.
How can I start a new pipeline from a webhook request, even when the last build failed?
I use the inline cloudbuild YAML to describe the pipeline.
This issue seems to be related to the YAML description of the pipeline, but the big problem is that it does not show any error message; it just silently fails without initiating a new run.
Here is a simple inline-pipeline that works:
steps:
  - name: 'ubuntu'
    entrypoint: 'bash'
    args:
      - '-c'
      - |
        echo "Hello, world!"
and here is one that does not work. It is taken from the Cloud Build documentation for integrating with GitLab, but shortened to only two steps:
steps:
  - name: gcr.io/cloud-builders/git
    args:
      - '-c'
      - |
        echo "$$SSHKEY" > /root/.ssh/id_rsa
        chmod 400 /root/.ssh/id_rsa
        ssh-keyscan gitlab.com > /root/.ssh/known_hosts
    entrypoint: bash
    secretEnv:
      - SSHKEY
    volumes:
      - name: ssh
        path: /root/.ssh
  - name: gcr.io/cloud-builders/git
    args:
      - clone
      - 'git@gitlab.com:<my-gitlab-repo>'
      - .
    volumes:
      - name: ssh
        path: /root/.ssh
availableSecrets:
  secretManager:
    - versionName: <my-path-to-secret-version>
      env: SSHKEY
And the big problem is that no build is initiated, so no error message is shown.
In both cases, the Webhook request receives HTTP 200.
I tried to replicate the issue on my end using curl, but I was able to trigger the build. Note that build invocations are independent, meaning the build history makes no difference to future builds. Try using the verbose -v flag with the curl command to display detailed processing information, as in the sketch below.
As it is working for me, it seems to be working as intended. To resolve your issue, I suggest you contact Google support here, as it seems an inspection of your project is required.
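For reference, invoking a webhook trigger with curl looks roughly like this. This is a minimal sketch; the endpoint format follows the Cloud Build webhook trigger documentation, and PROJECT_ID, TRIGGER_NAME, API_KEY, and SECRET are placeholders you must substitute with your own values:

# Call the webhook trigger endpoint directly; -v prints the full
# request/response exchange so you can inspect what Google returns.
curl -v -X POST \
  -H "Content-Type: application/json" \
  -d '{"message": "manual test"}' \
  "https://cloudbuild.googleapis.com/v1/projects/PROJECT_ID/triggers/TRIGGER_NAME:webhook?key=API_KEY&secret=SECRET"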
Update
I tried with the YAML that you shared in the question. Still, I was able to trigger the build.
(Ignore the build error; it was due to a permission error.)
If you think many people are facing the same issue, please report it on the public issue tracker, which is the best forum for this kind of issue.

Deploying a Laravel vapor project with vapor-ui causes every request to the service to respond with a 502

The problem:
Whenever I try to make a request to a Laravel API hosted with Laravel Vapor, I get a 502.
When I view the error logs in CloudWatch, I see an error referring to the bootstrap/cache folder (screenshot not reproduced here).
The 502 is being returned because there seems to be either a permissions problem with the bootstrap/cache folder, or the folder is not present for some reason after deployment.
What I did to try to fix this:
I ensured the folder permissions were correct (the cache folder has 755, and the .gitignore 644).
I ensured there was a .gitignore ignoring the entire contents of the cache folder.
I ran composer dump-autoload and composer update.
I tried deploying from the command line and from GitHub Actions.
All of the above did not yield any results and the support team tried their best but also could not identify the issue.
The solution??
So, one of my teammates noticed that we only started experiencing this issue after adding the vapor-ui package. I removed it completely from our projects, redeployed, and no more error. The API responds as expected now.
My question is, why would installing vapor-ui cause this issue?
I know that the vapor-ui project itself is a Laravel project so could it be that there is no bootstrap/cache folder in the vapor-ui project?
Here are snippets of my vapor.yml file before removing the vapor-ui:
id: 1234
name: notreal
environments:
  production:
    domain: notreal.notreal.com
    memory: 1024
    cli-memory: 512
    runtime: docker
    network: vapor-network-123
    build:
      - 'composer install --no-dev'
      - 'php artisan vapor-ui:install'
      - 'npm ci && npm run prod && rm -rf node_modules'
  staging:
    domain: sta-notreal.notreal.com
    memory: 1024
    cli-memory: 512
    runtime: docker
    network: vapor-network-1647335449
    database: notreal-sta
    build:
      - 'composer install'
      - 'php artisan vapor-ui:install'
      - 'npm ci && npm run dev && rm -rf node_modules'
  development:
    domain: dev-notreal.notreal.com
    memory: 1024
    cli-memory: 512
    runtime: docker
    network: vapor-network-123
    database: notreal-dev
    build:
      - 'composer install'
      - 'php artisan vapor-ui:install'
      - 'npm ci && npm run prod && rm -rf node_modules'
I don't think you need this build step:
php artisan vapor-ui:install
You run it locally, and then everything is already in place when you deploy.
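In other words, the build lists would simply drop that line. A minimal sketch of the production environment from the vapor.yml above with the step removed:

  production:
    domain: notreal.notreal.com
    memory: 1024
    cli-memory: 512
    runtime: docker
    network: vapor-network-123
    build:
      - 'composer install --no-dev'
      - 'npm ci && npm run prod && rm -rf node_modules'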

Gitlab CICD for laravel app not deploying?

I am using the Gitlab tutorial https://docs.gitlab.com/ee/ci/examples/laravel_with_gitlab_and_envoy/ for deploying a Laravel application to my DigitalOcean server.
But when it runs the second task I am getting the following errors:
$ ~/.composer/vendor/bin/envoy run deploy --commit="$CI_COMMIT_SHA"
/bin/bash: line 103: /root/.composer/vendor/bin/envoy: No such file or directory
ERROR: Job failed: exit code 1
Try installing Envoy globally in your Composer home directory, in your before_script:
before_script:
  - export COMPOSER_HOME=`pwd`/composer && mkdir -pv $COMPOSER_HOME
  - composer global require --prefer-dist laravel/envoy=~1.0 --no-interaction --quiet
After this you can call envoy in your deploy script like this:
- ${COMPOSER_HOME}/vendor/laravel/envoy/envoy run deploy --commit="$CI_COMMIT_SHA"
Thank you for the answer. I had to use .../envoy/bin/envoy in the script, which worked for me.
Complete command: - ${COMPOSER_HOME}/vendor/laravel/envoy/bin/envoy run deploy --commit="$CI_COMMIT_SHA"
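Putting the pieces together, a minimal sketch of the deploy job with the corrected binary path (the job and stage names here are assumptions, not taken from the tutorial):

deploy:
  stage: deploy
  before_script:
    - export COMPOSER_HOME=`pwd`/composer && mkdir -pv $COMPOSER_HOME
    - composer global require --prefer-dist laravel/envoy=~1.0 --no-interaction --quiet
  script:
    # Note the /bin/ segment: the Envoy executable lives in the package's bin directory.
    - ${COMPOSER_HOME}/vendor/laravel/envoy/bin/envoy run deploy --commit="$CI_COMMIT_SHA"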

Laravel setting up circle CI

I want to set up CircleCI. Here's my full config file:
version: 2
jobs:
  build:
    steps:
      - run: sudo apt-get install -y libsqlite3-dev
      - run: composer install -n --ignore-platform-reqs
      - run: npm install
      - run: npm run production
      - run: vendor/bin/phpunit
      - run:
          name: Start Chrome Driver
          command: ./vendor/laravel/dusk/bin/chromedriver-linux
          background: true
      - run:
          name: Run Laravel Server
          command: php artisan serve
          background: true
      - run:
          name: Run Laravel Dusk Tests
          command: php artisan dusk
however I get this error message:
build-agent version 0.0.4142-1bd195a (2017-09-11T09:57:00+0000)
Configuration errors: 2 errors occurred:
Error parsing config file: yaml: line 1: mapping values are not allowed in this context
Cannot find a job named build to run in the jobs: section of your configuration file. This can happen if you have no workflows
defined, or a typo in your 'workflows:' config.
I use an AWS RDS database for production. Is it possible that during testing it uses a different database, i.e. a local one, fakes some information, and then purges the database after the test? This must not affect my production database. The issue above also needs to be fixed, but I don't know what's wrong here.
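On the separate-database question: Laravel's stock phpunit.xml can override the database connection per test run, so the phpunit step uses a throwaway database and never touches the production RDS instance. A minimal sketch, assuming the default Laravel phpunit.xml layout (the in-memory SQLite choice is also why the config above installs libsqlite3-dev):

<php>
    <!-- Point the phpunit run at an in-memory SQLite database; it is created
         fresh for the run and discarded afterwards, so production RDS is
         never touched. -->
    <env name="DB_CONNECTION" value="sqlite"/>
    <env name="DB_DATABASE" value=":memory:"/>
</php>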
This one might be helpful: config.yml
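For orientation (this is not the linked file, which is not reproduced here): a CircleCI 2.0 job needs an executor and a checkout step before the commands above can run, and the parse error at line 1 usually points to broken indentation. A minimal sketch that parses and defines a build job; the image tag is an assumption:

version: 2
jobs:
  build:
    docker:
      # Any PHP image with Node and browsers would do for Dusk; this tag is an assumption.
      - image: circleci/php:7.2-node-browsers
    steps:
      - checkout
      - run: sudo apt-get update && sudo apt-get install -y libsqlite3-dev
      - run: composer install -n --ignore-platform-reqs
      - run: npm install
      - run: npm run production
      - run: vendor/bin/phpunit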

Behat + CircleCI Configuration for my drupal8 site

I am very new to Drupal. I am using Drupal 8 with Pantheon, and I have created a site "ucfictious". I created a local copy using Composer and Drush. Everything went well, and I configured Behat tests, which also went well. Now I am trying to configure CircleCI through GitHub. I ran into many errors that I couldn't solve. Can anyone help me with the configuration of CircleCI? I am using Craychee's work to build CircleCI, and when I run it I get the following error:
build/install.sh
Command config-import needs a higher bootstrap level to run - you [error]
will need to invoke drush from a more functional Drupal environment
to run this command.
The drush command 'config-import' could not be executed. [error]
Drush was not able to start (bootstrap) the Drupal database. [error]
Hint: This may occur when Drush is trying to:
* bootstrap a site that has not been installed or does not have a
configured database. In this case you can select another site with a
working database setup by specifying the URI to use with the --uri
parameter on the command line. See drush topic docs-aliases for
details.
Drupal Version: 8.2.5
Drush Version: 8.1.8
PHP Version: 5.6
For Behat Configuration I followed Craychee's work: http://craychee.io/blog/2015/08/04/no-excuses-part4-testing/
Thanks.
Got it working... My database was not populated properly. I fixed it by doing a sql-sync from the DEV environment (so it was a known-clean copy, i.e. not filled with testing junk) and then used mysqldump.
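For reference, the sync step looks roughly like this; @dev and @self are example Drush site aliases, and the mysqldump connection details are placeholders, so adjust both to your setup:

# Pull the known-clean database from the DEV environment into the local site.
drush sql-sync @dev @self -y
# Then snapshot it with mysqldump for reuse in CI.
mysqldump -u dbuser -p dbname > clean-snapshot.sql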
